I have read numerous posts on the semantic web this past year or so. The latest was by Marshall Kirkpatrick at ReadWriteWeb, in which he writes about an academic who warns us to pay attention to the question of whether the semantic web should have a gender.
The semantic web is greatly inspired and advocated by Sir Tim Berners-Lee, who suggests it will be the next step in the evolution of the web. As early as 2001, Berners-Lee wrote an article in Scientific American describing this semantic web:
Most of the Web’s content today is designed for humans to read, not for computer programs to manipulate meaningfully. Computers can adeptly parse Web pages for layout and routine processing (here a header, there a link to another page) but in general, computers have no reliable way to process the semantics.
The Semantic Web will bring structure to the meaningful content of Web pages, creating an environment where software agents roaming from page to page can readily carry out sophisticated tasks for users. Such an agent coming to the clinic’s Web page will know not just that the page has keywords such as “treatment, medicine, physical, therapy” (as might be encoded today) but also that Dr. Hartman works at this clinic on Mondays, Wednesdays and Fridays and that the script takes a date range in yyyy-mm-dd format and returns appointment times. And it will “know” all this without needing artificial intelligence on the scale of 2001’s Hal or Star Wars’s C-3PO. Instead these semantics were encoded into the Web page when the clinic’s office manager (who never took Comp Sci 101) massaged it into shape using off-the-shelf software for writing Semantic Web pages along with resources listed on the Physical Therapy Association’s site.
The semantic web describes a structure that allows machines not only to process data but also to extract meaning (semantics) from it. The idea, of course, is that if software has access to this knowledge and meaning, it could serve its user better.
Personally, I would love to have a C-3PO friend walking alongside me (though then I want the lightsaber as well). But honestly, right now it is hard for me to come up with viable scenarios in which this would really help me as a user. In the past I have worked in the field of Artificial Intelligence and have seen many promising technologies that would ultimately change the world we live in: neural networks, intelligent agents, natural language processing, speech recognition. Each of these technologies helped us dream of a world in which machines could understand humans, thus serving them better. If anything, I have learned that this isn’t a simple problem to crack. Not just because the technology may provide fewer capabilities than expected, but even more so because humans are unpredictable in their behavior and their usage of the technology.
A simple example from the field of speech recognition: the company I worked for built a speech recognition tool that allowed users to call a phone number and ask for information about train departure times. The main driver behind this was cost reduction. Having an operator answer such questions is expensive; if the operator can be replaced by a machine, costs go down. While this sounds perfectly obvious, there were always two problems that needed to be tackled. One was obviously training the speech recognition software to recognize speech. That was a daunting task that brought many difficulties (just think about users talking in noisy surroundings), and there were many other technological hurdles. But the hardest problem to resolve was the user who did unexpected things.
“Where are you traveling to?” -> I want to go to my uncle in San Francisco
“I’m sorry, I didn’t understand. Where are you traveling to?” -> To my uncle
“I’m sorry, I do not understand. Where are you traveling to?” -> Are you deaf? I said my uncle, three times already
See the problem in this conversation? The computer/speech recognition software has very limited knowledge and is unable to process the user’s answer. It really wanted to know a destination (in this case a train station or city). The computer is of course asking the wrong question here, as it leaves the user with too many ways to answer. But once the answer isn’t recognized, it becomes increasingly difficult to get the user to answer correctly. The example is a bit exaggerated to show you what I mean, but believe me, it is nearly impossible to formulate a question in such a way that users will answer it the way you expect or want them to.
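The failure mode above can be sketched with a toy keyword matcher. This is purely illustrative: the station list and the substring-matching rule are hypothetical, and real systems of that era used trained acoustic and language models rather than anything this crude. But it shows why the open-ended prompt breaks down: any answer that doesn’t contain a known destination resolves to nothing, and the re-prompt loop begins.

```python
# Hypothetical list of destinations the system knows about.
KNOWN_DESTINATIONS = {"amsterdam", "utrecht", "san francisco"}

def understand(answer: str):
    """Return a known destination mentioned in the answer, or None."""
    text = answer.lower()
    for station in KNOWN_DESTINATIONS:
        if station in text:
            return station
    return None

# The first caller happens to mention a city the system knows:
print(understand("I want to go to my uncle in San Francisco"))  # → san francisco

# The follow-up answer contains no known destination at all,
# so the system can only apologize and ask again:
print(understand("To my uncle"))  # → None
```

The system never gets smarter on the second or third try; it just re-runs the same matching against an answer that was never going to contain a station name.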
Back to the semantic web. It sounds like a lot of power is unleashed if it becomes possible for machines to “understand” what data means. And I’m sure there will be cases and situations where this might come in handy for me as a user. But for now I remain skeptical about the power of the semantic web. There is so much more involved in understanding data. There are complex factors that can’t easily be modeled or handled by machines or algorithms. Just think about something as simple as mood. Marshall Kirkpatrick (who is much more of an expert on this than I am, BTW) gives an example of how knowledge can be added to data:
The semantic web today is based largely on what are called “triples” – sets of subject, predicate and object. For example Marshall Kirkpatrick [subject], loves [predicate] Punkin’ the Tabby Kitten [object]. (Hypothetical, I don’t have any kittens and please don’t send me any.)
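To make the triple idea concrete, here is a minimal sketch in Python. It represents triples as plain tuples and answers simple pattern queries, where None acts as a wildcard. The data is just Kirkpatrick’s hypothetical example plus one made-up fact; real semantic web systems would use RDF with URIs and a proper triple store, not in-memory tuples.

```python
# Triples as plain (subject, predicate, object) tuples.
# The facts themselves are hypothetical examples.
triples = [
    ("Marshall Kirkpatrick", "loves", "Punkin' the Tabby Kitten"),
    ("Dr. Hartman", "works at", "the clinic"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern; None matches anything."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Who loves what?" — leave subject and object as wildcards:
print(query(predicate="loves"))
# → [("Marshall Kirkpatrick", "loves", "Punkin' the Tabby Kitten")]
```

The point of the structure is exactly this queryability: once facts are broken into subject–predicate–object, software can ask questions about them without parsing prose.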
Using these triples we can enrich data and add semantics to it. Now bring in the very human factor of mood. I love ice cream. Does that mean I love it all the time? No, it doesn’t. Actually, I rarely eat ice cream, but I do when I feel like it. How can this be modeled in data? Depending on whether or not I had a great cup of coffee in the morning, I might feel differently about ice cream in the afternoon, and so on.
What makes the semantic web such a difficult thing to implement in a useful way is, again, a combination of the limitations of the technology and, most of all, the human factor. It just isn’t possible to model human behavior. There is mood, taste, circumstance, irrational behavior, and all the other kinds of complexity that we humans can (barely) deal with but machines can’t. Machines might infer semantics from data in the semantic web, but I feel that (unless the task or circumstance is extremely basic) it will add to the confusion the user already has when interacting with people or machines on the web.
I welcome the research and development being done in the field of the semantic web. But until it provides practical solutions that actually help the user, I remain with many questions about its value. I sure hope that in the meantime people will start developing solutions to current problems on the web. Why not focus on a User-Centric Web first? It is easier to do, and it provides the user with great value 😉