It's like Sunday afternoon football, with computers

[Image: illustration of the Mechanical Turk, from a book attempting to explain its secret, via Bibliodyssey]

I'm off to a wedding this weekend, so no posts for a few days. But I wanted to give you a heads up that six computers will be competing in a Turing test on Sunday.

The competitors, named Alice, Brother Jerome, Elbot, Eugene Goostman, Jabberwacky and Ultra Hal, must converse for five minutes and fool their human questioners into thinking that they're also human - or at least make the questioners uncertain whether they're human or machine. This imitation game, devised by Alan Turing in 1950, is rightly or wrongly considered the definitive test for successful artificial intelligence:

If any program succeeds, it is likely to be hailed as the most significant breakthrough in artificial intelligence since the IBM supercomputer Deep Blue beat world chess champion Garry Kasparov in 1997. It could also raise profound questions about whether a computer has the potential to be "conscious" - and if humans should have the 'right' to switch it off.

Professor Kevin Warwick, a cyberneticist at the university [the University of Reading, which is hosting the contest], said: "I would say now that machines are conscious, but in a machine-like way, just as you see a bat or a rat is conscious like a bat or rat, which is different from a human. I think the reason Alan Turing set this game up was that maybe to him consciousness was not that important; it's more the appearance of it, and this test is an important aspect of appearance."
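For the curious, the format of a round is easy to caricature in a few lines of Python. This is purely my own sketch of the five-minute setup - nothing to do with the actual contest software - and respond() is just a stand-in for whichever chatbot (or hidden human) is on the other end of the terminal:

```python
import time

ROUND_LENGTH = 5 * 60  # each Loebner round lasts five minutes


def run_round(ask_judge, respond):
    """One imitation-game round: the judge types questions, something answers.

    ask_judge() returns the judge's next message (or None to stop early);
    respond(message) is whoever sits behind the screen - chatbot or human.
    The judge doesn't find out which until after delivering a verdict.
    """
    transcript = []
    deadline = time.time() + ROUND_LENGTH

    while time.time() < deadline:
        message = ask_judge()
        if message is None:
            break
        transcript.append((message, respond(message)))

    # After five minutes the judge must call it: human, machine, or unsure.
    return transcript
```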

All of this talk of machine consciousness bypasses the thorny questions of what consciousness is, what thinking is, what Turing meant his test to capture, and whether the Loebner competition really captures it - all far too complex for a morning blog post written before I've had my coffee. There's a lot written on this topic because humans have been intrigued (and disturbed) for hundreds of years by the prospect of true mechanical intelligence.

Consider the fascination with Wolfgang von Kempelen's Mechanical Turk, a steampunk precursor of Deep Blue. Built in 1770, the putative automaton defeated scores of opponents, including Napoleon and Benjamin Franklin, before it was definitively exposed as a hoax almost a century later. During those years, popular books were written by skeptics attempting to expose the Turk's secret (a human crouched inside made all the moves); the image at the top of the post is from such a book, via Bibliodyssey. It's interesting to note that many of their hypotheses and schemata were wrong; it was extremely hard for people to figure out how the illusion was accomplished, although they knew intuitively - or perhaps hoped? - that it had to be a hoax.

Amazon's modern spin on the Mechanical Turk is "artificial artificial intelligence": crowdsourcing. Now it's humans doing simple, repetitive, "machine-like" (but deceptively complex) tasks:

Amazon Mechanical Turk is based on the idea that there are still many things that human beings can do much more effectively than computers, such as identifying objects in a photo or video, performing data de-duplication, transcribing audio recordings, or researching data details.

(Emphasis mine.)
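For anyone curious what that looks like in practice today: with Amazon's boto3 SDK, posting one of those photo-labeling micro-tasks (a "HIT") goes roughly like the sketch below. The photo URL, reward, and question form here are invented for illustration, and a real request needs a funded requester account rather than the free sandbox.

```python
# Rough sketch: posting a photo-labeling HIT to Amazon Mechanical Turk via boto3.
# The photo URL, reward, and form contents are made up for illustration.
import boto3

# The requester sandbox lets you experiment without spending real money.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An HTMLQuestion wraps an HTML form; a real form must post its answers back
# to MTurk's externalSubmit endpoint along with the worker's assignmentId.
QUESTION_XML = """\
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <img src="https://example.com/photo.jpg" alt="photo to label">
      <p>What is the main object in this photo?</p>
      <!-- answer form omitted for brevity -->
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

response = mturk.create_hit(
    Title="Identify the object in a photo",
    Description="Look at one photo and type the name of the main object.",
    Keywords="image, labeling, quick",
    Reward="0.05",                    # US dollars per completed assignment
    MaxAssignments=3,                 # ask three different workers
    AssignmentDurationInSeconds=300,  # five minutes per worker, fittingly
    LifetimeInSeconds=86400,          # keep the task available for a day
    Question=QUESTION_XML,
)
print("Posted HIT", response["HIT"]["HITId"])
```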

Of course, with the original Mechanical Turk the illusion ran in the opposite direction - people had to believe they were seeing a machine, not a costumed person. The sight of a machine defeating a human at chess is still pretty amazing, but since Deep Blue's victory, it's much more believable that computers can beat us. Now what we find unbelievable, and prizeworthy, is the idea that machines could fool us into thinking they are us.

At any rate, the Loebner Prize will probably, as in previous years, be awarded to "the most human-like computer. The winner of the annual contest is the best entry relative to other entries that year, irrespective of how good it is in an absolute sense."

In other words, AI still has a little way to go. You won't see a football team composed of computerized Mechanical Turks anytime soon... although that would be a really good name for a team. Or a band. (Why isn't it already a band?!)



Any Turing test I administered would involve sensical nonsense. I'm not quite sure, although it's not my field, how you'd get a computer up to speed on that.

Any Turing test I administered would involve sensical nonsense.

are you talking about a Greg Laden post? (or would that be nonsensical substance? I'm not in the humanities...)

By cashmoney (not verified) on 14 Oct 2008

And is "fuckity fuck" sensical nonsense, PP? How would a computer programmed to respond like PhysioProf do during a Turing test, I wonder? ;)