Wednesday, 11 July 2012

The Most Human Human

Last night I went on a whim to a lecture at the Royal Institution given by Brian Christian, author of The Most Human Human, a book about his experiences of taking part in the Loebner Prize, an annual competition built around Alan Turing's famous 'Turing Test' thought experiment. Christian was talking about artificial intelligence, and the way it is changing what it means to be human. The title of the talk comes from one of the sub-prizes awarded at the Loebner Prize - alongside the award for 'most human program' there is one for the human confederate whom the judges most often correctly identified as human: the 'most human human'.

It was an interesting experience on several levels. Firstly, I had never actually been inside the RI before, although as a teenager and budding scientist I used to love the televised Royal Institution Christmas Lectures, and watched them avidly, even though (in an era before video recorders) it sometimes meant getting up at 6:30am or the like. This lecture was in that same lecture theatre, which is a lot smaller and more intimate than it looks on TV, and where people like Michael Faraday and HG Wells have lectured. I also discovered the RI has a very nice (if understaffed, at least last night) bar and restaurant which is open to the public and which I'll definitely be making more use of.

Anyway, the lecture. He started off with a potted history of the philosophy of what it means to be human, from Aristotle to Descartes, and the theory that what differentiates us most from the animal and plant kingdoms is our capacity for rational and abstract thought. Then he ran through a history of artificial intelligence, reminding us that 'computer' used to be a job description for mathematicians, and that Turing only used the word by way of analogy - 'this machine... well, it's a bit like a computer'. Now, 60 years on, the definition has flipped: the computer is the machine, and it is the mathematically skilled human whom we describe by analogy. He described the way that computers have staked out territory we once thought of as belonging purely to humans, but argued that the easiest things to duplicate in a machine are exactly those we once most valued in ourselves and considered distinctively human (playing chess 25 moves ahead, knowing the answers to Jeopardy! questions), while the hardest to automate are the things we take for granted (recognising people, understanding language, walking around without bumping into things). In AI circles this is known as Moravec's Paradox, but Christian suggested that now that we measure ourselves against machines rather than animals, it is precisely these biological skills that we may come to value more.

For the second part he moved into a discussion of the Turing Test, and how we judge whether someone else is human through a low-bandwidth medium like text messaging. He pointed out that we now all do this every day, every time we read an email and decide whether it came from a spambot or a real human, and he asked us to pay attention to the next batch of emails we scan: at exactly which point did we decide human or machine, and what was our decision-making process? This part of the talk roamed through speed dating, CAPTCHA codes (where a machine is, ironically, deciding whether we are human) and the dreaded autocorrect, where the AI interposes itself between us and our audience, trying to second-guess us, and in doing so smoothing out precisely those human foibles that make us distinct. With reference to the hacker who accessed Sarah Palin's Yahoo account, he discussed computer security and how we are moving away from content-led security (passwords, ID codes), which computers find easy but we don't, back towards form-led security like signatures and biometric recognition - the ways we recognise each other (voices, faces). He argued that human and machine intelligence are already in a symbiotic relationship, so changes in machine intelligence will continue to change how we view ourselves and how we relate to each other.
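As an aside of my own - nothing Christian actually showed - that spam-or-human snap judgement can be caricatured in a few lines of code. The word lists and the 0.5 'no evidence' default below are invented for illustration (real spam filters use statistical models trained on millions of messages), but the shape of the decision is the same: tally the machine-like cues against the human-like ones.

```python
# Toy spam-vs-human score (my invention, not from the talk).
# The word lists and default value are arbitrary choices for illustration.

SPAMMY = {"winner", "prize", "click", "free", "urgent", "unsubscribe"}
HUMANY = {"sorry", "anyway", "yesterday", "thanks", "meeting", "btw"}

def machine_score(text):
    """Return a score in [0, 1]; higher = more likely machine-written."""
    words = set(text.lower().split())
    spam_hits = len(words & SPAMMY)
    human_hits = len(words & HUMANY)
    if spam_hits + human_hits == 0:
        return 0.5  # no cues either way
    return spam_hits / (spam_hits + human_hits)

print(machine_score("urgent! click to claim your free gift, winner"))  # 1.0
print(machine_score("sorry i missed you yesterday - thanks anyway"))   # 0.0
```

What struck me in the talk is that our brains do something like this instantly and unconsciously - which is exactly why he asked us to slow down and catch ourselves in the act.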

The talk had some interesting ideas, and was a great way to spend 90 minutes, but it somehow left a lot of loose ends. I suppose it was aiming to make you think a bit - and buy his book, of course! As to the future - when I asked him about Searle and the Chinese Room, he clearly came down on the Strong AI side of the argument: that human intelligence is in effect a physical process which will ultimately be simulated or duplicated to the point where we can no longer tell the difference. However, he did admit that we're nowhere near there yet. Even now the best Loebner Prize programs only fool the judges about 25% of the time, under conditions that favour the machine (the judge gets just five minutes of interaction, purely via text). Still, Turing predicted a 30% success rate by 2000. He wasn't that far off, was he?
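Out of curiosity I did the back-of-envelope comparison afterwards. The 25% and 30% figures are from the talk; the panel size and the assumption that judges decide independently are mine, purely for illustration.

```python
# Back-of-envelope check (mine, not Christian's): a program that fools
# each judge 25% of the time, versus Turing's predicted 30% for 2000.
# Panel size and the independence assumption are illustrative only.

import random

random.seed(0)
FOOL_RATE = 0.25   # best Loebner programs, per the talk
TURING = 0.30      # Turing's prediction for the year 2000
JUDGES, TRIALS = 12, 10_000

fooled = [sum(random.random() < FOOL_RATE for _ in range(JUDGES))
          for _ in range(TRIALS)]
print(f"average judges fooled: {sum(fooled) / TRIALS:.2f} of {JUDGES}")
print(f"Turing's prediction:   {TURING * JUDGES:.2f} of {JUDGES}")
```

On a dozen judges that's the difference between fooling three of them and fooling three or four - which is why I say he wasn't that far off.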

