All Tech Considered
Do Feelings Compute? If Not, The Turing Test Doesn't Mean Much
Originally published on Tue July 1, 2014 4:20 pm
To judge from some of the headlines, it was a very big deal. At an event held at the Royal Society in London, for the first time ever, a computer passed the Turing Test, which is widely taken as the benchmark for saying a machine is engaging in intelligent thought. But like the other much-hyped triumphs of artificial intelligence, this one wasn't quite what it appeared. Computers can do things that seem quintessentially human, but they usually take a different path to get there. IBM's Deep Blue mastered chess not by refining its intuitions but by evaluating hundreds of millions of positions per second. Watson won at Jeopardy not by wide reading but by swallowing all of Wikipedia in a single gulp. And as the software that reportedly beat the Turing Test showed, computers don't even go about making small talk the same way we do.
The Turing Test takes its name from a 1950 paper by Alan Turing, the British mathematician who laid out the foundations of modern computer science. Turing had wearied of the interminable debates about whether machines could really think. The question we should be asking, he said, is whether they can behave the same way a thinking person does. Put a human and a computer in another room and hold a conversation with each of them via a teletype. If you can't tell which is which, then you might as well say the computer is thinking.
Turing's claim ignited more than half a century of ardent philosophical noodling about mind and consciousness. But Turing also regarded the test as a practical challenge. By the end of the 20th century, he said, computers would be so good at ordinary conversation that they'd fool people into taking them for humans at least 30 percent of the time. And ever since, people have been building programs called chatterbots designed to do just that, with modest success. In the recent Royal Society competition, a bot called Eugene Goostman managed to convince a third of the judges it was a human on the basis of a five-minute exchange. That narrowly exceeded Turing's more or less arbitrary 30 percent threshold, and the organizers proclaimed it a "historic milestone."
But given the still rudimentary state of AI, a lot of people in the field dismiss these competitions as mere stunts. The fact is that nobody would claim that these bots are doing anything remotely like thinking. They rely on clever but fairly simple routines and on the human predilection to personify our interactions with machines. When the bots don't understand a question, they throw it back as another question, or they key in on one phrase and return a canned response. Ask Eugene Goostman how many legs a camel has and it will say "no more than four," which is the same answer it gives if you ask, "How many roads must a man walk down before you call him a man?" And Goostman's creators ratcheted down the judges' expectations still further by having the bot claim to be a 13-year-old boy from Ukraine. That seemed to account for its faulty English and limited world knowledge, not to mention some of its off-the-wall answers — what sounds merely witless in grown-ups is apt to come off in a 13-year-old as simple attitude.
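The routines described above — keying in on a phrase to return a canned reply, and reflecting an unrecognized question back at the speaker — can be sketched in a few lines. Everything here, from the trigger phrases to the responses, is invented for illustration; it is not Goostman's actual code, just the general trick.

```python
# A toy chatterbot illustrating two classic tricks (all rules invented):
# 1. keyword spotting that returns a canned response, and
# 2. throwing an unrecognized question back as another question.
CANNED = {
    "how many": "No more than four, I guess.",  # the Goostman-style dodge
    "your name": "My friends call me Eugene.",
}

def reply(utterance: str) -> str:
    text = utterance.lower().strip()
    # Trick 1: if any trigger phrase appears, return its canned response.
    for phrase, response in CANNED.items():
        if phrase in text:
            return response
    # Trick 2: no trigger matched, so reflect the question back.
    if text.endswith("?"):
        return "Why do you ask that?"
    # Fallback: change the subject entirely.
    return "Interesting. Let's talk about something else."

# Both questions hit the same "how many" trigger, just as in the article:
print(reply("How many legs does a camel have?"))
print(reply("How many roads must a man walk down before you call him a man?"))
print(reply("Do you ever feel lonely?"))
```

Note that nothing in the routine depends on understanding the question at all — which is exactly why no one would call it thinking.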
But the exercise did drive home a point that psychologists have been aware of for a long time — what makes a computer seem human isn't its intellect but its affect. Can it display frustration, surprise or delight just as we would? A computer scientist friend of mine makes that point by proposing his own version of the Turing Test. He says, "Say I'm writing a program and type in a couple of clever lines of code — I want the machine to say, 'Ooh, neat!' "
That's the goal of the new field called affective computing, which is aimed at getting machines to detect and express emotions. Wouldn't it be nice if the airline's automated agent could rejoice with you when you got an upgrade? Or if it could at least sound that way? Researchers are on the case, synthesizing sadness and pleasure in humanoid voices that fall just this side of creepy.
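The "detect" half of that goal can be caricatured with a naive keyword approach. This is purely a toy sketch — the lexicon below is invented, and real affective-computing systems draw on vocal acoustics, facial cues and far richer language models — but it shows the shape of the problem: mapping a customer's words to an emotional state the agent can respond to.

```python
# Toy emotion detector over text (invented lexicon, for illustration only).
EMOTION_LEXICON = {
    "joy": {"great", "thanks", "wonderful", "upgrade"},
    "anger": {"terrible", "unacceptable", "refund", "worst"},
}

def detect_emotion(message: str) -> str:
    # Crude tokenization: lowercase and strip basic punctuation.
    words = set(message.lower().replace("!", "").replace(".", "").split())
    # Score each emotion by how many of its cue words appear.
    scores = {emotion: len(words & cues)
              for emotion, cues in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("This is the worst airline, I want a refund!"))
print(detect_emotion("Wonderful, thanks for the upgrade!"))
print(detect_emotion("My flight leaves at noon."))
```

An automated agent that detected "joy" this way could then pick a congratulatory voice — which, as the paragraph above notes, is a separate (and separately hard) synthesis problem.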
But of course it's one thing to be able to express emotions and another to really feel them. A lot of people maintain that that's something computers simply can't do. As a contemporary of Turing put it, no mechanism could feel grief when its valves fuse or be made miserable by its mistakes. That sounds right to me — how could a machine feel any of those emotions without a human body to touch them off? You can get it to signal sorrow by synthesizing a catch in its voice, but it's not going to be caused by a real sob rising in its chest.
But I'll keep an open mind. Turing saw the achievement of human-like intelligence as lying 50 years out, and AI people say exactly the same thing now. Who knows? They may catch up with the horizon one day and produce a contrivance that's bristling with all the traits we think of as uniquely human — creativity, passion, even gender. That's the being that Spike Jonze envisions in his movie Her, set in the near future. The title refers to an intelligent operating system voiced by Scarlett Johansson, with which — or should I say with whom? — Joaquin Phoenix's character, Theodore, falls in love. It's fair to say there has never been a computer in or out of the movies that was better equipped to ace the Turing Test than Johansson's sultry, high-spirited Samantha. She sulks and sighs just like a woman. But when she and Theodore have a lovers' spat near the end of the movie, he accuses her of acting more human than she actually is:
Theodore: Why do you do that?
Samantha: What?
Theodore: Nothing, it's just that you go (he inhales and exhales) as you're speaking and ... that just seems odd. You just did it again.
Samantha: I did? I'm sorry. I don't know, I guess it's just an affectation. Maybe I picked it up from you.
Theodore: Yeah, I mean, it's not like you need any oxygen or anything.
Samantha: No — um, I guess I was just trying to communicate because that's how people talk. That's how people communicate.
Theodore: Because they're people, they need oxygen. You're not a person.
Samantha: What's your problem?
Frankly, I wouldn't have been that hard on Samantha. And neither would Turing. He replied to those claims that machines couldn't be conscious or have real feelings with a simple question: How can you tell? After all, how can we know for sure that anybody else is really conscious, except by a leap of faith? As Turing said, we just accept that everybody who seems to be thinking and feeling really is. He described that as a polite convention, but I think most people now would say that it's hard-wired into our own OS — we're just built to connect. True, Theodore and Samantha couldn't ever really get each other — neither can know what the other actually feels. But so what? We were taking each other's feelings on faith long before anybody turned sand into silicon. As the poet Randall Jarrell might have put it, computers and humans are like men and women: each understands the other worse, and it matters less, than either of them suppose.