Jun. 20 2010 - 5:12 pm | 490 views | 1 recommendation | 9 comments

Jeopardy! gets a computer champion: Does it put our humanity in the form of a question?

A really interesting techno-cultural milestone is about to be passed: an IBM supercomputer named Watson will soon be crowned champion of our favorite TV trivia game, the one where your response has to be in the form of a question. That’s right, a cleverly programmed and very powerful machine is on the cusp of becoming Jeopardy! champion.

What makes this worth the hype in the following videos, hype also found in today’s NY Times Magazine article by Clive Thompson (“Smarter Than You Think: What Is I.B.M.’s Watson?”) and on the Singularity Hub, is the truly deep psychological challenge presented by the often clever, even punny, clues Alex Trebek reads to contestants.

Understanding “natural language” is no small thing. IBM is well within its rights to crow about the achievement.

One step in determining a correct response is the very “computer-y” process of accessing and searching a huge database of cultural knowledge. We all have experience with, and even take for granted, this step in a machine answering a trivia question—you have used that Google thing, haven’t you? But our experience with computers trying to simulate an actual conversation—what the IBM spokesperson calls a “question-answering system”—is very different. In fact, we’ve all learned what terrible conversationalists computers are. Consider what frequently happens when you call an insurance company or a utility and get one of those infernal voice-only telephone response systems; they are terrible. That is what makes IBM’s achievement so amazing. It really is a triumph.
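
For the technically curious, that retrieval step really is as mundane as it sounds. Here is a toy sketch in Python (the three-fact “database” and the function name are my own inventions, nothing like IBM’s actual system) of the kind of keyword lookup a machine can do without understanding a single word:

    # A toy keyword lookup over a tiny, made-up "cultural knowledge" database.
    # This is the easy, "computer-y" retrieval step: matching words, not
    # understanding them.
    TRIVIA_FACTS = [
        "Jane Austen wrote Pride and Prejudice in 1813.",
        "The Hoover Dam sits on the Colorado River.",
        "Alex Trebek has hosted Jeopardy! since 1984.",
    ]

    def keyword_search(clue):
        """Return every stored fact that shares at least one word with the clue."""
        clue_words = {word.strip(".,!?").lower() for word in clue.split()}
        return [
            fact for fact in TRIVIA_FACTS
            if clue_words & {word.strip(".,!?").lower() for word in fact.split()}
        ]

    print(keyword_search("Who wrote Pride and Prejudice?"))
    # ['Jane Austen wrote Pride and Prejudice in 1813.']

A punny clue breaks a lookup like this immediately, which is exactly why the natural-language part of Watson is the hard part.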

But, as one of the researchers said in the second video, is it really fair to say it is “capable of understanding your question”? Isn’t it better to use the more accurate description and say the computer successfully simulated understanding your question? This is not just a semantic trick or an exercise in academic wordiness. It changes the meaning of Watson’s championship from one more piece of lost human uniqueness into a celebration of a fascinating human-made technology that just might be put to human purposes. In other words, feeling the awe this technology deserves is possible only if we resist its capacity for simulation entrapment, i.e., getting so caught up in the technologically mediated simulation that you forget you’re interacting with a machine. The best way to enjoy Watson’s win is to be both in it and out of it at the same time.

Part of the problem is that our psychology leads us to see human qualities whenever possible; we’re tuned to experience empathy. Fritz Heider and Marianne Simmel were mid-century psychologists who asked subjects to watch the following film:

Like I’m pretty sure you just did, their subjects saw a story, complete with intentions, attraction, and maybe even feelings of love. We saw that little triangle have feelings for the little circle even though we know that is impossible; it’s just a geometric shape made to move in a particular pattern by the film-maker. But we see humanity nonetheless. The same thing happens with Watson. Just like the Heider-Simmel triangles, Watson looks like it’s “understanding”—even “playing”—even when all it is doing is quickly (really, really quickly) obeying a set of mathematical instructions put there by a team of really smart people.

When we watch Watson respond correctly in the form of a question we attribute human qualities not because the machine actually is human-like but because we are.

Let me tell a quick story illustrating how not everything that looks like understanding is understanding. Twenty-five years ago I worked on an inpatient unit with college-age schizophrenics. I was giving a series of psychological tests to a young man tragically going through his first psychotic break with reality. He was chaotic and confused. During a test of intellectual capacities I asked him a question he should have failed, because he had gotten the previous, and easier, items wrong: “A man drives 275 miles in 5 hours; how fast was he going in miles per hour?” But he quickly and correctly replied “55.” I was kind of shocked. He should not have been able to understand this question, nor the division involved. So I asked him how he figured out the answer, and he got angry. He said, “My father, my father, my father is a good man, a good man, he always drives the speed limit.”

Can we say he understood the problem and the arithmetic required for its solution? I don’t think so. The process is just too different. And the same goes for Watson responding to a Jeopardy! clue. It works by statistically associating the co-occurrence of terms across a vast database. It’s a really, really clever way to simulate natural language understanding. Bravo to the programmers! But Watson does not “understand” the clues any more than that suffering young man understood the arithmetic problem with which he was presented.
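
If you want a feel for what “statistically associating the co-occurrence of terms” means without the supercomputer, here is a deliberately crude sketch of my own (a toy illustration, not IBM’s actual DeepQA pipeline) that scores candidate answers by how often they turn up alongside the clue’s words in a small pile of text:

    from collections import Counter

    # A crude co-occurrence scorer: count how often each candidate answer
    # appears in the same sentence as words from the clue. No grammar, no
    # meaning, just counting, which is the point.
    CORPUS = [
        "Jane Austen wrote Pride and Prejudice.",
        "Pride and Prejudice is Jane Austen's best known novel.",
        "Charles Dickens wrote Great Expectations.",
    ]

    def score_candidates(clue, candidates):
        clue_words = {word.strip(".,'?").lower() for word in clue.split()}
        scores = Counter()
        for sentence in CORPUS:
            sentence_words = {word.strip(".,'?").lower() for word in sentence.split()}
            overlap = len(clue_words & sentence_words)
            for candidate in candidates:
                if candidate.lower() in sentence.lower():
                    scores[candidate] += overlap
        return scores

    clue = "This author's novel Pride and Prejudice skewers Regency manners"
    print(score_candidates(clue, ["Jane Austen", "Charles Dickens"]))
    # Counter({'Jane Austen': 7, 'Charles Dickens': 0})

The higher-scoring candidate “wins,” and at no point was anything understood. That, scaled up enormously and engineered far more cleverly, is the flavor of the trick.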

We can become entrapped by the simulation and ignore what we know about the processes involved in something like Watson winning at Jeopardy!. But that diminishes our humanity by incorrectly attributing to a machine a rich inner life like ours, complete with longing and understanding. The other possibility is to embrace the differences, thereby letting the human achievement that is technology like Watson enhance our humanity.

If we’re going to live in a world in which we’re forced to talk with cost-saving customer service computers instead of other people, they should at least work as well as Watson. But we shouldn’t let ourselves become so enthralled by the experience that we lose sight of where machines stop and people begin.


Comments

  1.

    Isn’t it better to use the more accurate description and say the computer successfully simulated understanding your question?

    Isn’t it better to say that you just simulate answering questions? I mean, what’s the difference to me between you thinking something and you simulating, to every possible extent, that you thought something?

    Saying you’re a human, and the computer is not, just begs the question. Anyway on the internet nobody knows if you’re a human or not. Maybe you’re just a very sophisticated machine simulating the act of writing these posts. What would be the difference, precisely?

    •

      You ask a very good question: “what’s the difference to me between you thinking something and you simulating, to every possible extent, that you thought something?”

      I can tell you what the difference would be to me, and hopefully to you; the traditional actuality of a fellow flesh-and-blood person thinking something has different limits than a simulation. Simulations always at some point break down and the limit of the simulation is reached. Remember that the Turing Test is NOT creating a moment when you can’t tell the difference (even though it is frequently misunderstood to mean that) but that after a good faith effort it is impossible to tell the difference.

      I know on the Internet no one knows you’re a dog (as the cartoon goes) but I don’t live on the Internet and neither do you. And in the real world it makes a huge difference. Should you happen to see my mug on the cross town bus or in an airport you might have a vague don’t I know you feeling. We’d talk. And who know, if our flights were delayed we just might find ourselves in an airport bar singing Weird Al Jankovic songs.

      The limits of humans thinking and sharing thoughts are different than when engaged with a computer simulation, no matter how sophisticated.

      •

        I can tell you what the difference would be to me, and hopefully to you; the traditional actuality of a fellow flesh-and-blood person thinking something has different limits than a simulation.

        So let’s say I created a Simul-Justin that simulated my limits, too. It seems like what you’re saying is that the difference is simply one of engineering – nobody’s yet made a simulation that’s good enough to be indistinguishable. But suppose that was no longer the case, and someone succeeded in creating a simulation that was so good, it broke in precisely the same ways we’d expect a human to break. It was bad at math. It was good at Trivial Pursuit, except for the sports questions. If it posted on the internet, it wrote posts you couldn’t in any way distinguish from a regular forum troll.

        Are you still saying there’s a difference? That what it’s doing is “fake” and what I’m doing is “real”? I don’t believe that.

        The limits of humans thinking and sharing thoughts are different than when engaged with a computer simulation, no matter how sophisticated.

        I guess I can’t quite pin down what you’re saying. First you indicate that simulations will never replicate true consciousness, because they’ll simply never be good enough. Now it seems like you’re saying no matter how good they get, no matter how perfectly they can simulate human cognition, they’re still not truly conscious.

        Which is it? Do you think you could be a little more specific about it? Is it fundamentally an engineering problem – we’re just not good enough at writing the simulations yet – or is it fundamentally an impossibility for anything but a human brain to be conscious? If so, why?

        That’s what I want to get at, I guess. Fundamentally it seems to me that the phenomenon of human consciousness is a simulation; our brains are the platform on which a simulation of consciousness is running. Sure, we feel conscious, but that’s just part of the simulation.

        There’s a line in the Animatrix – the series of animated shorts released a few years after The Matrix – where one scientist mentions “to a machine, all reality is virtual.” That’s true of us, as well. What we experience as “real” is a virtual, reflected reality generated for our brains by our sense organs. The flash of recognition we might have in an airport is just a function of our sense organs, and that’s a sense input that we could simulate, as well.

        •

          Man oh man, do you ask good questions!!! You may want to take a look at (and I’m not putting you off, just recommending something you might find infuriatingly at odds with your assumptions) Alva Noë’s recent book Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness. In a nutshell, he argues that consciousness is not something that goes on inside your head, a program running on our wet, messy brains, but something we do in our engagement with the world. Interesting stuff.

          Commercial over. My point being that a simulation’s limit is not an engineering problem but a human inevitability. The limits are found in our experience of a simulation, not in its specific functions.

          However much I’m truly enjoying this Sunday evening public conversation (albeit a simulated one), I must profoundly disagree with your comment that “the flash of recognition we might have in an airport is just a function of our sense organs.” Starting in the 50s with Jerome Bruner and the “new look” in perception, we have known that desire, meaning, and how one is engaging the world determine what we see as much as the functioning of our sense organs does (despite what those 18th-century Brits may have said!). Philosophers like Merleau-Ponty have done a great deal to explore how perception is a human action, something that results from our engaging the world, and not a linear process starting with sense organs and ending with consciousness.

          I also want to say I have nothing against the idea that our techno-genius may one day build a self-aware conscious machine. My point is that machine consciousness will have little or nothing in common with human consciousness.

          •

            In a nutshell, he argues that consciousness is not something that goes on inside your head, a program running on our wet, messy brains, but something we do in our engagement with the world.

            I don’t see those as mutually inconsistent alternatives, I guess, much as our computers operate by means of software that did not originate inside them. Consciousness may well be both something that occurs inside human brains and something that those brains do in response to their environment – an environment that includes other brains.

            I must profoundly disagree with your comment that “the flash of recognition we might have in an airport is just a function of our sense organs.”

            But I must insist that it is nonetheless the case. Were I suddenly struck blind, deaf, numb, and noseless, I would be completely unable to recognize you, or my dearest friends, or even my own wife. Nor anything else! I would literally be without any sense whatsoever of who or what was going on around me.

            While it’s true that interpreting the input from those sense organs is an active process on my part, not simply the reception of passive data, that’s no more surprising than realizing that telescopes are pointed by the astronomers who wish to look through them. Astronomers don’t just sit there receiving images; they direct their telescopes toward whatever they wish to observe. Astronomical observation is an active, interpretive process. But while it’s obvious that the astronomer directs the observation, it’s false to suggest that the astronomer is creating the stars. The light from those stars originates from without, not within.

  2.

    Fascinating. Every woman I lived with wanted to get me on one of those shows so she could spend that money too :) . I have it from good sources that within 5 years we will have interactive holograms on top-of-the-line PCs; they already have it, they’re just polishing it up. Fascinating.

  3.

    Is politics outdated? Spotlight the world monetary crisis: by consciously delegating decisions to algorithmic constructs, the banking community has a real evolutionary advantage. In purposing ideas to a database, the banksters can outmaneuver the governmental homo sapiens at each and every decision-making interval. They create simple-minded complexities just for the hell of it.

    Contrast this to senators and a Federal Judiciary who cannot, or do not, use email and ATMs. Leaving it up to bureaucratic, ponderous government programs is not enough.

    Evidence clearly shows that those who take advantage of a conscious delegation of decision making to computers will, and perhaps already do, rule the world monetary system. Moreover, the essence of money outstrips attempts at control or regulation.

    How do we peacefully benefit from these technologies without increasing the chances of an accident, or of unstable individuals and fanatical groups using them maliciously? A resource-based economy?

    •

      What an interesting comment! Nature, red in tooth and claw... but culture? When the pessimism of utilitarian competition starts to get the best of me, I think about Bill McKibben’s excellent book Deep Economy.

      Also, FYI, I’ve got a post I hope to finish soon about a comment by Garry Kasparov (the former world chess champion who lost a match to IBM’s Deep Blue computer in 1997): that people playing chess assisted by the best chess-playing computers might elevate the sophistication and beauty of the game to a new level.


    About Me

    I'm a psychologist and psychoanalyst with a full-time therapy practice. Over the last 20 years I've noticed how the NEXT BIG THING, or the one after that, sometimes leaves people feeling more miserable than before; life in the "future" doesn't always feel very good by the time it gets here. But sometimes it does. We just don't know how the future will feel.

    I have been writing and lecturing to professional audiences about how our emerging technologies can change how we feel about and relate to each other, ourselves, and our bodies. Now it's time to go public.

    In case you're wondering, my clinical office is like Vegas; what's said there, stays there. How could it be otherwise? So rather than writing about individual patients, I'll be writing in general about the perils and promise we all confront as we try to build a good life in our increasingly over-simulated world. While no one knows what's coming next nor how it will make life feel, one thing I do know is that for us to thrive as individuals and a society, for us to hold on to our humanity as we become post-human, we're going to have to do it together.

    Followers: 82
    Contributor Since: April 2009
    Location: New York City

    What I'm Up To

    Ever been in online therapy or e-counseling?

    Even just therapy by phone or Skype?

    Would you be willing to talk with me about your experience? I want stories from the “consumer” point of view for a professional workshop about the ethics of providing care at a distance. No information will be used without your permission.

    Click the <EMAIL ME TIPS> link above to contact me if interested.