
Artificial Intelligence: What happened to the hunt for thinking machines?

Mankind has long been fascinated by the idea of intelligent machines, but in the information age the sci-fi dream of creating a human-like AI appears increasingly anachronistic.

The idea of creating a sentient machine has fascinated mankind for centuries. And while sci-fi offers artificial intelligences that rival our own, the fiction bears little resemblance to real world AI.

AI is all around us, not as a synthetic overlord, but as specialised software that helps fly planes and run factory production lines. For many, the idea of creating thinking machines has become a distant dream.

However, not everyone has given up on the idea of creating a machine that can think like a man. Inventor Hugh Loebner is at the forefront of the hunt: each year for more than two decades Loebner has run a competition based on the Turing Test, the game devised by British mathematician and father of computing Alan Turing in 1950 to identify a thinking machine.

In the Loebner Prize competition, software known as chatbots conduct instant messenger or verbal conversations with human judges, attempting to fool them into believing they are a real person.

Any bot that fools half the judges can win up to $100,000 for its creator, and each year there is a $2,000 prize for the bot deemed to be the most human-like.
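Those two thresholds are simple to state. The Python sketch below is a hypothetical illustration of how such verdicts might be tallied; the function names and the scoring shortcut (treating "most human-like" as the largest share of "human" verdicts) are assumptions for illustration, not the Loebner Prize's actual rules.

# Hypothetical tally of judge verdicts for a Loebner-style contest.
# Each judge records whether a given bot convinced them it was human.

def tally(verdicts: dict[str, list[bool]]) -> None:
    """verdicts maps a bot's name to one True/False entry per judge."""
    for bot, judged_human in verdicts.items():
        share = sum(judged_human) / len(judged_human)
        if share >= 0.5:
            print(f"{bot}: fooled {share:.0%} of judges -- eligible for the top prize")
        else:
            print(f"{bot}: fooled {share:.0%} of judges")

    # The annual $2,000 goes to the bot judged most human-like; here we
    # approximate that by the largest share of 'human' verdicts.
    most_human = max(verdicts, key=lambda b: sum(verdicts[b]) / len(verdicts[b]))
    print(f"Most human-like entrant: {most_human}")

tally({"botA": [True, False, False, False], "botB": [True, True, False, False]})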

Turing predicted that a machine able to fool people into thinking it was human in one third of conversations would exist by 2000.

And yet so far the performance of the chatbots has been underwhelming: after 22 years of contests no bot has come close to fooling half of the judges into thinking it is human. Bots make convincing humans during short chats, but their credibility breaks down in a prolonged conversation.

Even Loebner, for all of the time and effort he has invested, says he has little passion for the event, and continues to run it largely out of a sense of obligation.

"I continue to run it because I said it would continue to run, it's a matter of personal integrity. It doesn't so much excite me now, not so much as raises anxiety," he told TechRepublic.

"For the first one it was 'Wow, wow', after 22 years now it's 'What can go wrong?'."

Loebner's excitement has been eroded by the burden of running the contest, but he also admits that the rate at which the contest's chatbots have improved has been "glacial".

"I always thought it was going to be a long haul," he said, adding that at age 70, he doesn't expect to live to see a chatbot take the contest's silver prize for tricking half of the judges into thinking it's human.

"There's been some change, now if you ask some of these chatbots how much is two plus three they'll say five, which they didn't say before, and they know which is larger, a grape or a grapefruit," he said.

The slow rate of progress possibly reflects the calibre of entrants to the Loebner Prize. Competitors are generally small groups of enthusiasts or individual hobbyists, who, understandably, have limited time and money to build a thinking machine outside of holding down a full-time job.

Loebner agrees that a thinking machine is more likely to originate in the labs of the world's tech giants - whether it's IBM, whose supercomputer Watson recently beat the reigning champion of the US quiz show Jeopardy, or Google with its vast store of natural language queries that could be mined by an AI.

Unfortunately for Loebner those firms have no interest in taking part in his contest.

"Someone like Google, of course I'd love to see them do it but they don't want to, they've got their own thing. They don't want to touch me with a 10-foot pole are you kidding? They run screaming from the room," he said.

The problem Loebner has is that computer scientists in universities and large tech firms, the people with the skills and resources best-suited to building a machine capable of acting like a human, are generally not focused on passing the Turing Test.

This lack of interest is partly a question of priorities. Dr Sam Joseph is associate professor of computer science at Hawaii Pacific University, and teaches courses on AI. Joseph said there are many complex problems within the field of AI - in areas ranging from machine learning to high level planning - that need tackling before you can create a machine capable of passing the Turing Test.

"There's more than enough work to be done solving the component problems," he said.

Computer scientists can build careers solving these granular problems, and their research is advancing areas as diverse as facial recognition and algorithmic trading, so it is perhaps unsurprising that AI researchers focus on parts of the field with immediate application.

And while passing the Turing Test would be a landmark achievement in the field of AI, the test's focus on having the computer fool a human strikes many researchers as a distraction. Prominent AI researchers, such as Google's head of R&D Peter Norvig, have likened the Turing Test's requirement that a machine fool a judge into thinking they are talking to a human to demanding that an aircraft maker construct a plane indistinguishable from a bird.

"That kind of misses the point about the real goal, which is making something that flies," said Joseph.

Despite these caveats, not all computer scientists think pursuit of a thinking machine is a quixotic endeavour.

"There's no logical reason why one shouldn't be able to reproduce the complexity of human thought in a digital form, it's just a question of time," Joseph said, adding that IBM has made some impressive steps towards that goal with its Watson supercomputer.

Watson takes a sophisticated approach to understanding questions put to it, parsing them through thousands of different algorithms to understand their grammar, syntax and meaning, and analysing the output from each to come up with the most likely answer. It is an approach that Joseph said is "more akin to what we think of as human".
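That pipeline, many independent analyses each producing candidate answers with a confidence score, followed by an aggregation step that picks the best-supported one, can be sketched in a few lines. The Python below is a loose conceptual sketch, not IBM's actual architecture; the analyser functions and their scores are invented for illustration.

from collections import defaultdict
from typing import Callable

# Each "analyser" stands in for one of the many algorithms a Watson-style
# system runs over a question: it returns candidate answers with confidences.
Analyser = Callable[[str], dict[str, float]]

def keyword_analyser(question: str) -> dict[str, float]:
    # Placeholder: a real analyser would parse grammar, syntax and meaning.
    return {"Alan Turing": 0.6, "Charles Babbage": 0.3}

def knowledge_base_analyser(question: str) -> dict[str, float]:
    return {"Alan Turing": 0.8, "John von Neumann": 0.4}

def answer(question: str, analysers: list[Analyser]) -> str:
    """Run every analyser, merge their confidence scores, return the best answer."""
    scores: dict[str, float] = defaultdict(float)
    for analyse in analysers:
        for candidate, confidence in analyse(question).items():
            scores[candidate] += confidence
    return max(scores, key=scores.get)

print(answer("Who proposed the imitation game in 1950?",
             [keyword_analyser, knowledge_base_analyser]))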

However, the details of how AI is developing matter little to Loebner, who said "I don't pay much attention to AI research". His interest in the field - and the primary reason for him starting the contest - is his vision of a future where AI takes away the need to work.

"I hope AI will take over all labour from human beings. My goal is to see 100 per cent unemployment," he said.

A jobless society is a rather unusual utopia, and there is a question mark over how far Loebner's contest has advanced progress toward that goal.

Loebner himself admits that he is becoming weary of the annual venture, and has even toyed with the idea of cancelling it: "I've been tempted to but I said I wouldn't."

However, Loebner is hopeful of some good news for the contest. After years of roaming it may find a permanent home in Bletchley Park, the Buckinghamshire manor house where Alan Turing played a crucial role in cracking Nazi codes during World War II.

The idea is still being discussed, but Bletchley, the site of some of Turing's greatest accomplishments, would perhaps be the natural home for the test he invented at the dawn of the information age.

About

Nick Heath is chief reporter for TechRepublic UK. He writes about the technology that IT decision-makers need to know about, and the latest happenings in the European tech scene.

6 comments
protothinker

I suspect that modeling minds instead of conversations is more useful for purposes of trying to reach the goal of reproducing "the complexity of human thought in a digital form." To this end, I've constructed a simple model of a cognitive agent that engages in conversations but, unlike chatbots, is specifically designed NOT to fool anyone. (See ProtoThinker.com)

MJB007

It may be that we are the robots someone else invented! Wouldn't the best robot be a biologic ... didn't the Sumerians describe this whole story 5000 years ago? Now if we did it in the last 100 years and it was that good, how would we know?

araybo

"While sci-fi offers artificial intelligences that rival our own, the fiction bears little resemblance to real world AI" It would be more appropriate to say that so-called real-world 'AI' bears little resemblance to intelligence. Calling "specialized software that helps fly planes and run factory production lines" AI is a tacit admission of failure, at least for now. I do not mean to imply that the Turing test is the only or best test of AI, but any such test means little unless it requires that actual intelligence is displayed, just as any test of an airplane should require actual flight.

C-3PO

Just wondering why the author did not mention SIRI... I know she doesn't hold up a great conversation, but definitely has some interesting replies. And why was Apple not mentioned as one of those groups that might be interested in competing? They do seem to be the Tech company most interested in making the human interface more palatable - a conversation based interface would seem to be such, so why not?

Brian.Buydens

I have heard that while no computer has passed the Turing test there have been people who failed it. ;-) But more seriously, the test is based on the assumption that if an entity is intelligent it will behave just like a human. This reminds me of an XKCD cartoon where the ants decide there is no intelligent life other than them because the humans have not picked up on their scent trails. Perhaps Marshall McLuhan's observation that the message depends on the medium also applies to intelligence. Humans have one form, based on our evolution. Dolphins have another form. Computers will eventually have yet another form. I think we are seeing increasingly smart machines, but they are designed to solve problems and not just act like humans.

dominele

The Turing Test is based on the concept of loading a device with a lot of data and then developing algorithms for it to use this data. This is the way technology has been perceived to be used. During the past fifty years, we have witnessed a transformation from data processing to information processing. The difference is that the technology is not just simply processing data, but collecting data and then providing information. This is all fine and dandy for answering questions, but it does not provide for a conversation. To have a conversation requires more than data or information processing, it requires intent. Where is this conversation going? What do I want to say? Why am I having this conversation? What and why did the other person say this? The "hunt" for a "thinking machine" does not rest with "thinking" but with machine logic that focuses on what the machine can do and why it should do it, if at all. This, in my opinion, will come more from robotics than data-driven analytics.