How Do Semantics Get into the Semantic Web?
No one would ever say that
the Semantic Web is the world’s largest artificial intelligence project—that
would be political and marketing suicide. Nor would anyone ever point out that
when the AI bubble burst, a lot of its practitioners moved into the field of
bibliography, since bibliography involves building models of worlds, a project
familiar to them, because a world view is one thing an AI would need in its
“mind” to be or at least seem, well, “intelligent.” Nor that both RDF and topic
maps have their roots in building bibliographic worlds. Not even that many
technologies in widespread use today (such as full-text search and semantic
networks) started out as pieces in the great AI puzzle. So I will refrain from
saying any of those things.
Nevertheless, I will say that
the Semantic Web certainly partakes of the AI nature. Let’s start with the
notion that the fundamental value proposition of the Semantic Web is a
conversation. Why? Because that is what a collection of statements (the RDF
subject/predicate/object structure, or the topic map association) can be. Let’s
take our book order example once more:
Buy me a hard-cover copy of Jane’s book, in Chinese, if available for less than $39.95.
Remember that we translated
this sentence into FOL, and you saw how a machine that understands FOL (as
represented in RDF, for example) could act upon that logic to purchase Jane’s
book. (Because accurate machine translation of idiomatic English is a long way
off, we’ll assume that the human user communicated to the machine through an
interface of some kind; for example, a GUI that assembles the FOL sentences, or
an Englishized version of FOL syntax that users can just type in.)
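As a rough sketch of what such a machine might evaluate (the predicate names, the catalog facts, and the price lookup here are all invented for illustration, not part of RDF or any standard vocabulary), the order’s preconditions can be modeled as a conjunction of ground atoms checked against a small set of known facts:

```python
# A minimal sketch: the order's FOL preconditions as a conjunction of
# ground atoms, checked against a tiny invented catalog of facts.
facts = {
    ("author", "book1", "Jane"),
    ("binding", "book1", "hard-cover"),
    ("language", "book1", "Chinese"),
}

def holds(pred, subj, obj):
    """True if the atom pred(subj, obj) is among the known facts."""
    return (pred, subj, obj) in facts

def price(book):
    # Invented price lookup for the sketch.
    return {"book1": 29.95}[book]

def preconditions(book):
    # "Buy me a hard-cover copy of Jane's book, in Chinese,
    #  if available for less than $39.95."
    return (holds("author", book, "Jane")
            and holds("binding", book, "hard-cover")
            and holds("language", book, "Chinese")
            and price(book) < 39.95)

print(preconditions("book1"))  # every conjunct holds, so True
```

If any single atom in the conjunction fails, the whole precondition fails, which is exactly the situation the next paragraphs explore.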
Now suppose one of the FOL
relations, such as
Book (Chinese)
is not true. Then the whole
sentence is not true. What should the publisher do? One option, of course,
would be to roll back the transaction and tell the customer to come back
another day. A better option is for the publisher to have a smart Semantic
Web–based system with additional relations in it. For example,
script (Chinese, Traditional Characters)
script (Chinese, Simplified Characters)
where traditional characters are the full-form Chinese ideographs and simplified
characters are the reduced-stroke forms used in mainland China, are both true.
Therefore, when the original FOL constant,
Chinese, is replaced by the constant Traditional Characters or Simplified
Characters, the FOL statement representing the preconditions for the
transaction becomes true and the transaction can proceed—or rather, it could,
if the user authorizes it. The Semantic Web–based system tells the user the
following information:
Your book is not available in Chinese.
Your book is available in Chinese traditional characters.
Your book is available in Chinese simplified characters.
The system displays the
books, and the user can add the books to her shopping cart in the usual way—or
engage in further conversation with the system. (Topic maps have the potential
for exactly the same sort of human/machine interaction; because humans are the
ultimate arbiters of subject identity, any topic map system worth its salt must
engage in conversation with the user whenever issues of subject identity come
into question.)
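A sketch of that fallback, continuing the invented fact base from before (the script relation and the fact tuples are illustrative assumptions, not a standard vocabulary): when the original constant fails, the system looks up the narrower script constants related to “Chinese” and reports each available alternative to the user.

```python
# Sketch: when language(book1, "Chinese") is not among the facts,
# try the narrower script constants related to "Chinese" and report
# each alternative to the user for authorization.
facts = {
    ("language", "book1", "Traditional Characters"),
    ("language", "book1", "Simplified Characters"),
    ("script", "Chinese", "Traditional Characters"),
    ("script", "Chinese", "Simplified Characters"),
}

def holds(pred, subj, obj):
    return (pred, subj, obj) in facts

def alternatives(book, language):
    """Yield (script constant, available?) for each script of the language."""
    for pred, lang, script in sorted(facts):
        if pred == "script" and lang == language:
            yield script, holds("language", book, script)

if not holds("language", "book1", "Chinese"):
    print("Your book is not available in Chinese.")
    for script, available in alternatives("book1", "Chinese"):
        if available:
            print(f"Your book is available in {script}.")
```

The substitution of a constant, the report back to the user, and the wait for authorization are one turn in the conversation the next paragraph describes.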
The key concept is conversation. The user has stated
something to the machine, the machine has responded to the user with more
statements, and the cycle may persist indefinitely (like many conversations
do). Naturally, I won’t remind you here of a classic AI project, Eliza, where
the machine imitated a not-very-insightful psychiatrist by parsing the user’s
input, transforming it, and writing it back out to the user.
User: I feel curiously oppressed, Doctor.
Eliza: Can you tell me what you mean by “oppressed”?
User: You’re the doctor, why don’t you tell me?
Eliza: Can you tell me what you mean by “why”?
And so on and so forth.
Interestingly, some patients reported gaining relief from these
“conversations.” Most Eliza sites would be “sticky”—although a Freudian Eliza
site, where the analyst is expected to be entirely silent, probably would not
be.
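The parse-transform-echo loop is simple enough to sketch in a few lines. The rules below are invented for illustration, in the spirit of the original program rather than a faithful copy of Weizenbaum’s script:

```python
import re

# A toy Eliza-style transform: find a salient word in the user's
# input and reflect it back as a question. Rules are invented
# illustrations, not Weizenbaum's original script.
RULES = [
    (r"I feel \w+ (\w+)", 'Can you tell me what you mean by "{0}"?'),
    (r"\bwhy\b",          'Can you tell me what you mean by "why"?'),
]

def eliza(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza("I feel curiously oppressed, Doctor."))
# Can you tell me what you mean by "oppressed"?
```

The machine never models what “oppressed” means; it only rewrites the surface string, which is precisely why the conversation breaks down as soon as the input strays from the rules.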
Now
let’s put the notion of conversation on hold for just a moment and ask the
seemingly unrelated question, “What do we mean by semantic?” Some place semantics
under the curse of the S-words, where words such as syntax, semiotics, signify, sign (as well as standard and
specification) generate endless religious controversies about their true meaning. I prefer to use the markup
community’s definition of semantics:
Semantics is that upon which people do not agree.
Alternatively,
one could use the programming community’s definition:
Semantics is that which enables my program.
Then
again, one could use the layperson’s definition:
Semantics has to do with the meaning of words.
Markup/people;
programs/machines. The cultural (if not necessarily technical) dichotomies
persist. The markup community’s definition can at least be operational. But are
there any factors that all the definitions have in common? Yes. It takes two to
make semantics. This is equivalent to Wittgenstein’s apothegm that there are no
private languages. A word that I make up in my head, and use only in my head,
can’t be said to have meaning. It is also equivalent to saying that meaning, whatever
it may be, can be found in conversations. (If the patient’s conversations with
Eliza had been without meaning, doubtless no placebo effect would have
occurred.)
This concept locates both
meaning and semantics in the conversations occurring on the Semantic Web;
conversations over a network are what make the Web “semantic.” Now, no one
would ever say that this means that the Semantic Web, if successful, would be a
large-scale implementation of AI. That’s science fiction stuff, probably wrong,
and certainly suicidal from a professional standpoint. So, I won’t say that
the famous Turing Test boils down to having a conversation, and if we can have
conversations with Semantic Web–enabled machines, then they (and we, for that
matter) have passed the Turing Test.
Granted, Turing’s imitation game is still pretty easy for a human to win; one simply poses a question whose answer would be “obvious” to a human but not to a machine: The conversation breaks down because the bounds of the machine’s microworld overflow, and the machine fails to imitate a human successfully. The way to avoid any therapeutic benefit from a conversation with Eliza is to feed Eliza gibberish. Garbage in, garbage out applies to machines, although not necessarily to humans. And humans have all kinds of garbage readily available for conversation, including lies, jokes, irony, paradox, rhetoric, and everything represented by the S-words.
Nevertheless,
to say that the machines of the Semantic Web won’t be able to have every kind
of conversation with humans is not to say that their conversations with us (and
with each other) will not be meaningful, or that the Semantic Web fails the
Turing Test definitively. (In fact, I never said it took the test, because that
would have involved mentioning AI.) After all, there are people who
cannot—cannot because their brain structure does not permit them to do
so—appreciate jokes, irony, paradox, or rhetoric. They might fail the Turing
Test. Are their conversations therefore not meaningful? Are they then not fully
human?