With Google DeepMind’s recent success in mastering the game of Go, Tesla’s advances in autonomous driving capabilities, and voice recognition systems like Amazon’s Alexa taking off, interest in AI and machine learning has reached an all-time high.
But, can it last?
Those living in the AI world in the 1980s remember what has been referred to as an “AI winter”–a time when inflated expectations resulted in a “crash,” and funding began to dry up. While it’s unlikely that the current enthusiasm for AI will wane, some worry that the outsized attention paid to AI, and the expectations placed on it, could have negative side effects. Some also worry about how AI is equated with machine learning–or, even more specifically, with deep learning, which is a narrow subset of AI.
So, what happened in the ’80s? According to Manuela Veloso, robotics professor at Carnegie Mellon, the expectations were too high.
“The approach was that a computer program could help a doctor make a diagnosis, and things like that,” she said. “But the way they assumed it would happen was by knowledge engineering–asking humans how they make decisions.”
The problem, Veloso said, is that the knowledge acquisition became too hard, and the expectations couldn’t be met.
Raj Reddy, computer science professor at Carnegie Mellon, sees a lot of differences between today’s AI and what was available in the ’80s.
“Lots of companies got born in the ’80s and they went public in the ’90s. But, they didn’t solve everything.” Still, Reddy said, “what we have now is a million times more computational power than in the 1980s.”
SEE: Machine Learning: The Smart Person’s Guide (TechRepublic)
“It changes the whole equation when you have so much computing power,” Reddy said. “What I call unlimited memory and unlimited bandwidth. You can have as much memory as you want, as much bandwidth, and as much computation. All of a sudden, it’s a game changer.”
Still, other AI experts worry about our expectations around AI.
“I definitely have concerns about overhyping AI,” said Marie desJardins, AI professor at the University of Maryland in Baltimore County. “Deep learning, in particular, is being touted as the breakthrough that’s going to change the world, which I think is not at all true, and pushing that message just undermines the broader research agenda.”
On the subject of funding, desJardins thinks the emphasis on deep learning could result in other research becoming more difficult to fund and publish. “That will be detrimental to innovation–just as we’ve seen with other ‘bandwagon’ research methods in the past,” she said.
SEE: The 7 biggest myths about artificial intelligence (TechRepublic)
Roman Yampolskiy, director of the cybersecurity lab at the University of Louisville, takes a different view.
“This is not hype; it’s real and sustained progress, which accelerates research,” he said. He’s also not convinced that funding will be impacted. “Funding is currently diversified between government agencies and industry, so it is unlikely to dry up,” he said. “The market has a huge appetite for new AI capabilities and government is very interested in military and security applications. If anything, the funding will become even greater.”
“Plus,” Yampolskiy added, “now it is possible to crowdfund good ideas.”
Aside from potential concerns about funding, some believe that conflating machine learning with AI is a problem.
“There’s likely a lot more involved in building intelligent systems than deep learning,” said Toby Walsh, professor of AI at the University of New South Wales. “Unlike humans, deep learning needs lots of data.”
Walsh gave the example of Google DeepMind’s AlphaGo. To train the system, it “needed millions, if not billions, of games of Go, more than any human could play in a lifetime,” he said. But Lee Sedol, the human challenger, “only needed three games to learn enough about the novel way that AlphaGo played to win a match.”
SEE: AI experts weigh in on Microsoft CEO’s 10 new rules for artificial intelligence (TechRepublic)
Collecting the data needed for deep learning can often be impossible, or expensive. “It’s impressive to win at Go, but there are only, at most, a few hundred moves to make at each turn. In life, there can be an infinite number of options, and there are lots of other challenges like uncertainty about the world and nature throwing curve balls at us.”
There are, Walsh believes, big hurdles to achieving intelligent systems that “we can trust, systems that can explain themselves, systems that can be guaranteed to behave in certain ways.”
Still, he said, “AI is making real progress this time, with better algorithms, more data, and faster CPUs. For this reason, I don’t see too much concern about another AI winter amongst my colleagues.”
Veloso said that, while people may overestimate what AI can do and deep learning is the “fad of the day,” she doesn’t think there will be reason for disappointment.
“This is a different situation,” she said. “We have an enormous amount of data being digitized. Your money, your credit, all the transactions, pictures, videos–the concept of having a lot of information available will not go away.”
SEE: 10 artificial intelligence insiders to follow on Twitter (TechRepublic)
According to Veloso, the next big thing in AI is transparency via explanations–which can lead to trust. Deep learning, she said, “doesn’t give any hint at why it arrived at the answer. No ability to introspect or analyze or offer explanations.”
“We need to be able to question why the programs are doing this,” Veloso said. “If we don’t worry about the explanation, we won’t be able to trust the systems.”
Either way, most experts agree that another AI winter is unlikely.
“We are too close to the target,” Yampolskiy said. “We have hit the point of no return.”
- Educate yourself on AI: Seven books to get you started (TechRepublic)
- How Google’s DeepMind beat the game of Go, which is even more complex than chess (TechRepublic)
- Why AI could destroy more jobs than it creates, and how to save them (TechRepublic)
- 10 things you need to know about artificial intelligence (TechRepublic)
- Artificial Intelligence and life beyond the algorithm: Alan Turing and the future of computing (TechRepublic)
- Smart machines are about to run the world: Here’s how to prepare (TechRepublic)