Jeopardy! gets a computer champion: Does it put our humanity in the form of a question?
A really interesting techno-cultural milestone is about to be passed: an IBM supercomputer named Watson will soon be crowned champion of our favorite TV trivia game, the one where your response has to be in the form of a question. That’s right, a cleverly programmed and very powerful machine is on the cusp of becoming Jeopardy! champion.
What makes this worth the hype in the following videos (hype also found in today’s NY Times Magazine article by Clive Thompson, “Smarter Than You Think: What Is I.B.M.’s Watson?,” and on the Singularity Hub) is the truly deep psychological challenge presented by the often clever, even punny, clues Alex Trebek reads to contestants.
Understanding “natural language” is no small thing. IBM is well within its rights to crow about the achievement.
One step in determining a correct response is the very “computer-y” process of accessing and searching a huge database of cultural knowledge. And we all have experience with, even take for granted, this step in a machine answering a trivia question; you have used that Google thing, haven’t you? But our experience with computers trying to simulate an actual conversation, what the IBM spokesperson calls “a question-answering system,” is very different. In fact, we’ve all had experience with what terrible conversationalists computers are. Consider what frequently happens when you call an insurance company or utility and get one of those infernal voice-only telephone response systems; they are terrible. That is what makes IBM’s achievement so amazing. It really is a triumph.
But, as one of the researchers said in the second video, is it really fair to say it is “capable of understanding your question”? Isn’t it more accurate to say the computer successfully simulated understanding your question? This is neither a semantic trick nor an exercise in academic wordiness. It changes the meaning of Watson’s championship from one more piece of lost human uniqueness into a celebration of a fascinating human-made technology that just might be used for human purposes. In other words, feeling the awe this technology deserves is possible only if we resist its capacity for simulation entrapment, i.e., getting so caught up in the technologically mediated simulation that you forget you’re interacting with a machine. The best way to enjoy Watson’s win is to be both in it and out of it at the same time.
Part of the problem is that our psychology leads us to see human qualities whenever possible; we’re tuned to experience empathy. Fritz Heider and Marianne Simmel were mid-century psychologists who asked subjects to watch the following film:
Like I’m pretty sure you just did, their subjects saw a story, complete with intentions, attraction, and maybe even feelings of love. We see that little triangle have feelings for the little circle even though we know it is impossible; it is just a geometric shape made to move in a particular pattern by the film-maker. But we see humanity nonetheless. Same thing with Watson. Just like the Heider-Simmel triangles, Watson looks like it is “understanding,” even “playing,” when all it is doing is quickly (really, really quickly) obeying a set of mathematical instructions put there by a team of really smart people.
When we watch Watson respond correctly in the form of a question we attribute human qualities not because the machine actually is human-like but because we are.
Let me tell a quick story illustrating how not everything that looks like understanding is understanding. Twenty-five years ago I worked on an inpatient unit with college-age schizophrenics. I was giving a series of psychological tests to a young man tragically going through his first psychotic break with reality. He was chaotic and confused. During a test of intellectual capacities I asked him a question he should have failed, because he had gotten the previous, easier items wrong: “A man drives 275 miles in 5 hours; how fast was he going in miles per hour?” But he quickly and correctly replied “55.” I was kind of shocked. He should not have been able to understand this question or the division involved. So I asked him how he figured out the answer, and he got angry. He said, “My father, my father, my father is a good man, a good man, he always drives the speed limit.”
Can we say he understood the problem and the arithmetic required for its solution? I don’t think so. The process is just too different. And the same goes for Watson responding to a Jeopardy! clue. It works by statistically associating the co-occurrence of terms across a vast database. It’s a really, really clever way to simulate natural-language understanding. Bravo to the programmers! But Watson does not “understand” the clues any more than that suffering young man understood the arithmetic problem with which he was presented.
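To make the point concrete, here is a toy sketch of the co-occurrence idea in miniature. This is emphatically not Watson’s actual architecture (the corpus, clue, and candidate answers below are invented for illustration); it just shows how ranking answers by statistical association with clue terms can look like understanding without involving any:

```python
# Toy sketch of co-occurrence scoring, not IBM's real pipeline.
# Corpus, clue terms, and candidates are all invented for this example.

corpus = [
    "the danube flows through vienna the capital of austria",
    "vienna is famous for classical music and mozart",
    "budapest the capital of hungary also lies on the danube",
    "paris the capital of france sits on the seine",
]

def score(candidate, clue_terms, docs):
    """Sum, over documents mentioning the candidate, how many
    clue terms appear in the same document."""
    total = 0
    for doc in docs:
        words = set(doc.split())
        if candidate in words:
            total += len(words & clue_terms)
    return total

# Imagined clue: "This capital on the Danube hosted Mozart."
clue_terms = {"capital", "danube", "mozart"}
candidates = ["vienna", "budapest", "paris"]

ranked = sorted(candidates, key=lambda c: score(c, clue_terms, corpus), reverse=True)
print(ranked[0])  # prints "vienna"
```

No rule in the code knows what a capital, a river, or a composer is; “vienna” simply co-occurs with more clue terms than its rivals. Scale the corpus and the statistics up enormously and you get something that can win a trivia game, which is exactly the gap between simulating understanding and having it.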
We can become entrapped by the simulation and ignore what we know about the processes involved in something like Watson winning at Jeopardy!. But that diminishes our humanity by incorrectly attributing to a machine a rich inner life like ours, complete with longing and understanding. The other possibility is to embrace the differences, thereby letting the human achievement that is technology like Watson enhance our humanity.
If we’re going to live in a world in which we’re forced to talk with cost-saving customer service computers instead of other people, they should at least work as well as Watson. But we shouldn’t let ourselves become so enthralled by the experience that we lose sight of where machines stop and people begin.