University of Connecticut

A Step Closer to Science Fiction

Jeremy Teitelbaum, dean of the College of Liberal Arts and Sciences


By: Jeremy Teitelbaum

As I watched IBM’s Watson supercomputer beat champion players Ken Jennings and Brad Rutter at Jeopardy back in February, shivers ran down my spine. I have been something of a computer geek since I was introduced to the Monroe 1880 programmable calculator in eighth grade, and having grown up with computers, played computer games through the decades, and used them in my research, I thought I knew what to expect from them. To me, Watson is a very small but very real step toward the intelligent machines of science fiction.

My own excitement at Watson’s (and IBM’s) achievement isn’t universally shared. Many people focused on the narrower fact that Watson played Jeopardy, and a common complaint was that Watson cheated. For example, here’s an anonymous quote from the IBM research blog:

I can’t believe that IBM went through all the trouble of creating and meeting this challenge and will NOT compete on a level playing field. How could they say its competition when Watson gets the input as TCP/IP text. Just like the human players it should only get the audio and visual streams that the other two players are getting.

And here’s a quote from Ken Jennings in a NY Daily News op-ed:

Like any human player, Watson does buzz with a “thumb” of sorts (actually a magnetic coil mounted over a buzzer), but it can also rely on the millisecond-precision timing of a computer. The reflexes of even a very good human player will vary slightly, but not Watson’s. If it knows the answer, it makes the perfect buzz. Every single time. And it’s hard to win if you can’t buzz.

These comments strike me not only as irrelevant, but also as sour grapes. Watson’s achievement isn’t about thumbs or buzzers or eyes or speech recognition or anything like that. What’s amazing about Watson is that it can take the following Jeopardy clue (for example):

“This 1959 Daniel Keyes Novella about Charlie Gordon & a smarter-than-average Lab Mouse won a Hugo Award.”

and conclude that the correct answer (or question) is “What is Flowers for Algernon?”

To get a little taste of the problem, google that clue. You’ll get a link to the amazon.com page for Flowers for Algernon, but you get a huge amount of other junk, too (including a link to the Jeopardy show itself). Using powerful statistical methods that I don’t begin to understand, Watson is able to pull out from the clue what is being asked – namely, find the name of a novella by Daniel Keyes that won a Hugo Award in 1959 and is about a lab mouse – and then to determine the answer.
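To give a flavor of what “pulling out the constraints and determining the answer” might look like, here is a toy sketch of my own (it bears no resemblance to IBM’s actual DeepQA pipeline, and the tiny hand-built knowledge base and constraint names are entirely made up for illustration): score each candidate work by how many of the clue’s extracted constraints it satisfies, and return the best match.

```python
# Toy illustration only -- not IBM's method. A hand-built knowledge base
# of a few candidate works, each with facts a clue might constrain.
CANDIDATES = {
    "Flowers for Algernon": {"author": "Daniel Keyes", "year": 1959,
                             "awards": {"Hugo Award"}, "type": "novella"},
    "Starship Troopers":    {"author": "Robert Heinlein", "year": 1959,
                             "awards": {"Hugo Award"}, "type": "novel"},
    "The Minority Report":  {"author": "Philip K. Dick", "year": 1956,
                             "awards": set(), "type": "novella"},
}

def score(title, constraints):
    """Count how many of the clue's constraints this candidate satisfies."""
    facts = CANDIDATES[title]
    return (int(facts["author"] == constraints["author"])
            + int(facts["year"] == constraints["year"])
            + int(constraints["award"] in facts["awards"])
            + int(facts["type"] == constraints["type"]))

# Constraints a parser might extract from the clue quoted above.
clue = {"author": "Daniel Keyes", "year": 1959,
        "award": "Hugo Award", "type": "novella"}

best = max(CANDIDATES, key=lambda title: score(title, clue))
print(f"What is {best}?")  # -> What is Flowers for Algernon?
```

The hard part, of course, is everything this sketch assumes away: parsing natural-language clues into constraints, and ranking over millions of candidates rather than three.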

Stanley Fish, in his NY Times blog, offered a more pointed critique of Watson’s achievement:

[Watson’s] achievement is impressive but it is a wholly formal achievement that involves no knowledge (the computer doesn’t know anything in the relevant sense of “know”); and it does not come within a million miles of replicating the achievements of everyday human thought.

While I certainly agree with Fish that Watson does not “come within a million miles” of human thought, I do believe it is a significant step toward the kind of artificial intelligence described by Alan Turing in his influential paper “Computing Machinery and Intelligence,” published in the journal Mind in 1950. In that paper, Turing proposed his famous test for machine intelligence, in which a computer is pronounced intelligent if a person is unable to distinguish between it and another human in a conversation carried out over a terminal. More broadly, Turing argued that computers are perfectly capable of acting, in every way, as if they were intelligent – and that once that happens, they are intelligent.

Turing’s definition of intelligence is controversial and, in an email exchange, Fish made it clear to me that he rejected Turing’s notion of intelligence. He referred me to Searle’s Chinese Room argument which makes the case, roughly, that purely automatic processes cannot capture what we mean by intelligence.

Well, I’m with Turing. We already interact frequently and naturally with machines that, in a limited way, act like humans – think about the machine that handles plane reservations for United Airlines as one example. Having seen Watson, I believe that over the next decades we will begin to interact more and more frequently and naturally with machines until at some point the interaction will be transparent.

The science fiction writer Vernor Vinge has proposed the notion of a “technological singularity” – a point at which machines become sufficiently capable that the further development of technology and even culture happens at machine speeds rather than at human speeds. In this scenario, civilization would change, comprehensively, and very, very suddenly. Even though I see Vinge’s singularity as an idea from science fiction rather than a real possibility, seeing what Watson was capable of made me wonder if such a transformation might really be in our future.

This post is a greatly condensed version of my talk “Open the pod bay doors, Watson: Artificial Intelligence in Science Fact and Fiction,” given at the Library Forum on April 14, 2011. Skip about 12 minutes into the video to get past the introductions and on to the real content.

Read more posts by Jeremy Teitelbaum, dean of the College of Liberal Arts and Sciences, on his blog.

