AI, I’ve heard of this. It’s when machines try to take over the world and eliminate all life on Earth, isn’t it?
Sounds like you’ve been watching too many Hollywood blockbusters.

The Matrix, Terminator, Blade Runner – awesome!
While those are indeed awesome films, they are more sci-fi than AI – at least the kind of artificial intelligence that’s around today.

What kind of artificial intelligence is around today?
Let’s go back to the beginning. Back in the 1950s, when the research field was being established, one of its early pioneers, the scientist Marvin Minsky, defined it as the science of making machines do things that would require intelligence if they were done by humans.

Nowadays, however, scientists sometimes refer to this as the ‘old’ or ‘classical’ definition because the focus of research has shifted since those formative years. Instead of the early emphasis on human intelligence, the AI umbrella today covers all sorts of human-made processing mechanisms.

What sort of mechanisms?
Both technological and biological.

In the biological realm, some scientists are already conducting experiments with hybrid AI – connecting biological neurons taken from rat brains to electronic robot bodies via a multi-electrode array, for example, and using the biological component of the entity as the controlling mechanism for the body.

Other, more straightforward technological mechanisms can also be considered AI: a software brain driving and manoeuvring a robot body, for instance, or a program running on the internet.

But how do you actually go about determining if those mechanisms are intelligent or not?
A good question. Intelligence as a concept is subjective and hard to definitively pin down.

What about the Turing Test? Doesn’t that prove when machines are artificially intelligent?
The Turing Test was the brainchild of the English mathematician Alan Turing, who, all the way back in the 1940s, was interested in the notion of intelligent machines and whether machines can think.

In a paper published in 1950, Turing set out an idea for testing not whether a machine is actually thinking – he decided that was simply too subjective and difficult a concept to pin down – but whether a machine can at least appear to be thinking, as judged by a human.

The test he proposed was for a human judge to converse separately with a machine and a human via a text terminal, then decide which was which.

If the judge did not consistently identify the machine as a machine, it would have succeeded in appearing human and thereby passed the test.
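
The protocol is simple enough to sketch in code. Below is a toy Python version of the imitation game, purely for illustration – the `machine_reply` and `human_reply` callables are hypothetical stand-ins for the two hidden participants, with you playing the judge at the keyboard:

```python
import random

def imitation_game(machine_reply, human_reply, rounds=3):
    """A toy, text-only version of Turing's imitation game.

    The judge (you, at the keyboard) questions two hidden participants,
    labelled A and B in a random order, then guesses which is the machine.
    """
    hidden = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # shuffle so the labels give nothing away
        hidden = {"A": machine_reply, "B": human_reply}

    for _ in range(rounds):
        for label, reply in hidden.items():
            question = input(f"Your question for {label}: ")
            print(f"{label} answers: {reply(question)}")

    guess = input("Which one was the machine, A or B? ").strip().upper()
    # The machine passes this trial if the judge fails to pick it out
    return hidden.get(guess) is not machine_reply
```

A single trial proves little, of course; Turing’s criterion was about judges failing to identify the machine consistently, across many such conversations.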

Since 1991, the Loebner Prize for artificial intelligence, sponsored by US inventor Hugh Loebner, has run versions of the Turing Test in which AI chat programs pit their digital wits against human interrogators.

While machines regularly fool judges into thinking they’re human, these days the Turing Test is considered by some in the AI field as an interesting experiment or a tick in a clever box – but not much more of a sign of intelligence than when Deep Blue beat Garry Kasparov.

What’s this about Deep Blue?
Back in 1957 another early AI pioneer, Herbert Simon, predicted a machine would beat a human chess champion within a decade.

His prediction came true – albeit 30 years late – when IBM’s Deep Blue supercomputer beat world chess champion Garry Kasparov in 1997.


IBM’s Deep Blue
(Photo credit: Pedro Villavicencio via Flickr.com under a Creative Commons licence)

So did the match herald the arrival of the era of intelligent machines? Not exactly. Let’s just say the prevailing sentiment became that being able to play chess was not a sign of genuine artificial intelligence but rather a matter of brute processing force and good engineering.
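
Purely as an illustration – this is nothing like Deep Blue’s actual code, which ran highly tuned alpha-beta search on custom chess hardware evaluating some 200 million positions a second – here is a toy Python sketch of the brute-force game-tree search (minimax) that chess programs are built around. The `moves`, `apply_move` and `evaluate` callbacks are hypothetical, game-specific functions a caller would supply:

```python
def minimax(state, depth, maximising, moves, apply_move, evaluate):
    """Toy minimax search: exhaustively try every legal move to a fixed
    depth and score the resulting positions. No understanding of the
    game is involved - just search plus a scoring function."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None

    best_score = float("-inf") if maximising else float("inf")
    best_move = None
    for move in legal:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximising, moves, apply_move, evaluate)
        # The maximising player wants high scores; the opponent, low ones
        if (maximising and score > best_score) or \
           (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Deeper search means stronger play, which is why raw processing power – rather than anything resembling understanding – took Deep Blue past Kasparov.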

In essence, Deep Blue is an example of narrow AI: a system designed to carry out a specific application rather than to exhibit the kind of general intelligence we humans have.

Narrow AI is where a lot of real-world examples of AI can be found.

What sort of examples?
AI applications have found their way into many systems underpinning Western society.

If you take a broad view of what AI is, it’s possible to count hundreds of applications in use throughout our infrastructure that draw on more than half a century of AI research – from intelligent algorithms routing comms data, to computer-assisted design software that produces sophisticated gadgetry, to autopilot programs that fly and land aeroplanes.

Many very modern applications such as web services are essentially made possible by aspects of AI research too.

Google’s apps, for example – search, translation, voice recognition, advertising and spam filtering, to name but a few – are all dependent on AI techniques.

Google’s spam filter, for instance, uses machine learning – a scientific discipline that feeds into AI, in which systems typically improve over time as they handle more data. In the case of the spam filter, Google asks real users to label what’s spam and what’s not, then improves the filtering by studying those labels.
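
Google’s actual filter is proprietary, but the learn-from-labels loop is simple enough to sketch. Here is a toy naive Bayes spam filter in Python – an illustration of the general technique, not Google’s implementation – in which every user report becomes another training example:

```python
from collections import Counter

class ToySpamFilter:
    """A naive Bayes spam filter trained on user-supplied labels."""

    def __init__(self):
        self.words = {"spam": Counter(), "ham": Counter()}
        self.messages = {"spam": 0, "ham": 0}

    def learn(self, message, label):
        """Called whenever a user marks a message as 'spam' or 'ham'."""
        self.messages[label] += 1
        self.words[label].update(message.lower().split())

    def spam_probability(self, message):
        """Score a new message against everything users have labelled."""
        vocab = len(set(self.words["spam"]) | set(self.words["ham"])) or 1
        total = sum(self.messages.values()) or 1
        scores = {}
        for label in ("spam", "ham"):
            score = self.messages[label] / total  # class prior
            n = sum(self.words[label].values())
            for word in message.lower().split():
                # Laplace smoothing stops unseen words zeroing the score
                score *= (self.words[label][word] + 1) / (n + vocab)
            scores[label] = score
        return scores["spam"] / ((scores["spam"] + scores["ham"]) or 1)

# The more messages users label, the better the scores get:
f = ToySpamFilter()
f.learn("win a free prize now", "spam")
f.learn("minutes from today's meeting", "ham")
print(f.spam_probability("free prize inside"))  # comfortably above 0.5
```

Real systems use far richer features and models, but the feedback loop is the same: every ‘report spam’ click is another labelled example for the machine to learn from.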

That’s all pretty cool, but what about robots? When will all these smart AI advances mean I can have a robot in my house to do the washing up, vacuum my bedroom, pick up my dirty socks – that sort of thing?
Robotics is another sub-field of AI where there have been some exciting developments lately: there are already robot vacuum cleaners for instance, although they can’t do your washing up at the moment.

A bit more exciting than robot vacuum cleaners are driverless cars – the 2005 Darpa Grand Challenge, for example, saw five automated driving systems complete a 132-mile race across the Mojave Desert with nary a human at the wheel.

A real-world application of such a system will be operational in the UK next year, when business travellers using Heathrow’s Terminal 5 will be able to use one of these driverless pod cars to shuttle them between a car park and the terminal building.

Noel Sharkey, a professor of AI and robotics at the University of Sheffield, told silicon.com that he foresees even more applications for driverless cars in future: “I think that one of the big areas in AI will be the independent car – which’ll be wonderful for say disabled people or blind people; you don’t have to worry about a taxi you just get out and flag down a car,” he said. “Autonomous cars that will fit into a road system with sensors so that we can all hammer up the motorway at hundreds of miles an hour without worry of bumping into each other.”


One of the driverless pod cars that will be used at Heathrow Terminal 5
(Photo credit: BAA)

So robot cars will be big. Where else will AI take off in a big way?
Care of the elderly is another industry where robots could be put to work – so one day, when you’re old and grey, you might be offered the option of a live-in robot carer in your home as an alternative to residential care.

As for the robot butler to clean up after you, Kevin Warwick, professor of cybernetics at Reading University, believes it will happen – and will prove popular when it does.


Kevin Warwick in his lab at Reading University
(Photo credit: Chris Beaumont/CBS Interactive)

So there are all these clever applications of AI. Will any of them ever end up with a human-like level of intelligence?
It’s a good question and the answer varies depending on who you ask – although it’s actually hard to find a scientist who will say outright that humans will never be able to create an AI with an intellect as rich and nuanced as our own. But then in science, proving that something is impossible is, well, near impossible.

Here’s Sheffield’s Sharkey again: “My view about strong artificial intelligence – which is the idea of a machine actually having a mind – I find very difficult. I think we probably can’t do it because I think you probably need to be biological to have a mind – I mean what we talk about really is sentience or consciousness and there’s been no machine capable of doing that.”

Yet there are others – such as futurist Ray Kurzweil and also Reading’s Warwick – who believe intelligent machines will be out-thinking humans much sooner than might be comfortable for most people.

“I feel that by 2050 that we will have gone through the singularity and it will either be intelligent machines actually dominant – The Terminator scenario – or it will be cyborgs – upgraded humans,” says Warwick. “I really, by 2050, can’t see humans still being the dominant species. I just cannot believe that the development of machine intelligence would have been so slow as to not bring that about.”

So there you have it: two very different views on the future of AI – only one of which might well result in the end of the human race as we know it.

Er, thanks, I think.
You’re welcome.