We’ve seen startling moves in artificial intelligence in 2015. Robots are doing the grunt work in factories. Driverless cars have become a reality. WiFi-enabled Barbie uses speech recognition to talk (and listen) to children. Companies are using AI to improve their products and increase sales. And the field saw significant advances in machine learning.
To get a handle on what to look for in the AI world, TechRepublic caught up with Andrew Moore, dean of Carnegie Mellon’s School of Computer Science, Kathleen Richardson, Senior Research Fellow in the Ethics of Robotics at De Montfort University, and Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, to ask what they see as the most important areas of AI research in the year ahead–what Yampolskiy says will be “like 2015 on steroids.”
1. Deep learning
“We will see an exponential improvement in performance of Convolutional Neural Networks (deep learning),” said Yampolskiy, “particularly as it will be paired with significant computation resources of ever-growing supercomputers.” Richardson agreed. She called deep learning one of the top areas of focus for 2016.
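The core operation in the convolutional networks Yampolskiy mentions is simple to sketch. Below is a minimal, illustrative 2-D convolution in plain NumPy: a small kernel slides over an image, multiplying and summing at each position to detect a local pattern (here, a vertical edge). Real deep-learning systems stack many such layers and learn the kernel values from data; this hand-written kernel is just a stand-in for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution: slide the kernel over the image,
    multiplying elementwise and summing at each position.
    (As in most deep-learning libraries, this is technically
    cross-correlation: the kernel is not flipped.)"""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A hand-written vertical-edge detector (Sobel-like), for illustration.
kernel = np.array([[1, 0, -1],
                   [2, 0, -2],
                   [1, 0, -1]], dtype=float)

# A tiny 5x5 "image": dark on the left, bright on the right.
image = np.array([[0, 0, 1, 1, 1]] * 5, dtype=float)

response = conv2d(image, kernel)
print(response)  # strongest responses where the dark/bright edge sits
```

A trained CNN differs from this sketch mainly in scale: it learns thousands of kernels across many layers, which is why the “significant computation resources” Yampolskiy points to matter so much.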
2. AI and the future of work

Moore sees a lot more high-level interest in this issue–“whether this industrial revolution is different from the others.” A study from the National Academy of Sciences brought together technologists, economists, and social scientists to figure out what’s going to happen. “Serious groups of people are trying to figure out what will happen when white collar jobs, which are primarily about pure information processing–something computers do well–migrate to white collar jobs which are safe, people interacting with other people.”
3. The Internet of Things

Yampolskiy sees more and more devices becoming connected, “resulting in smarter homes, smarter cars, smarter everything.” Richardson sees IoT leading to a point where “no object will just be an object–it will all be wirelessly connected to something else.” Both Yampolskiy (whose focus is cybersecurity) and Richardson (robot ethics) worry about how the mined data could be exploited.
4. Breakthroughs in emotional understanding
According to Andrew Moore, AI that can detect human emotion is perhaps one of the most important new areas of research. And Yampolskiy believes that our computers’ ability to understand speech will lead to an “almost seamless” interaction between human and computer. With increasingly accurate cameras and voice and facial recognition, computers are better able to detect our emotional state. Researchers are exploring how this new knowledge can be used in education, to treat depression, to predict medical diagnoses more accurately, and to improve customer service and online shopping.
5. AI in shopping and customer service
And, speaking of customer service and shopping, businesses are starting to use AI to figure out what makes customers happy or unhappy, said Moore. The North Face and other companies are using AI to help customers figure out the perfect item. “It’s like when somebody is browsing and shows they want to dress like this, but a little warmer, and having the computer understand what that means and coming up with the right results for them,” said Moore.
TechRepublic has reported on how customer service is where some of the greatest breakthroughs in AI can be seen. Moore agrees that it’s changing business in a big way. “This is where IBM is placing its biggest bet,” he said. “In the late ’90s, there was a rush to see who would be the big providers of databases which run the planet. Now there is a platform race for who’s providing the platform for the sophisticated decision-making process which you can plug in to do anything in your business which involves explaining, answering questions, presenting data.”
6. Ethical questions
All three AI experts agreed that ethical considerations must be at the forefront of research. “One thing I’m seeing among my own faculty is the realization that we, technologists, computer scientists, engineers who are building AI, have to appeal to someone else to create these programs,” said Moore. When designing a driverless car, for example, how does the car decide what to do when an animal comes into the road? When you write that code, he said, there’s the question: how much is an animal’s life worth next to a human’s? “Is one human life worth the lives of a billion domestic cats? A million? A thousand? I would hate to be the person writing that code.”
We need a broader discussion to come up with these answers. “I think we’d agree that many people have completely different personal thoughts as to what’s valuable.” And the problems could become even more complex. “None of us are even touching this at the moment, but what if that car is going to hit a pedestrian, and the pedestrian might be pregnant? How much does that affect the car’s decision?” asked Moore. “These are not problems that we computer scientists and engineers are going to solve. Someone has to come up with an answer.”
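To make Moore’s point concrete, here is a hypothetical sketch of what such a decision looks like once it is forced into code. Every name and number below is an invented placeholder, which is exactly the problem he describes: some engineer would have to pick real weights, and no technical argument can supply them.

```python
# Hypothetical collision-decision sketch. The relative "worth" weights
# are invented placeholders, not a real or recommended policy.
HARM_WEIGHTS = {
    "human": 1_000_000.0,  # is one human worth a million cats? A billion? A thousand?
    "cat": 1.0,
}

def expected_harm(outcome):
    """Sum weighted harm over everything a maneuver would endanger.
    `outcome` is a list of (kind, probability-of-harm) pairs."""
    return sum(HARM_WEIGHTS[kind] * prob for kind, prob in outcome)

def choose_maneuver(options):
    """Pick the maneuver whose predicted outcome minimizes weighted harm."""
    return min(options, key=lambda o: expected_harm(o["outcome"]))

# Two hypothetical options: swerving endangers a cat,
# braking carries a tiny residual risk to a human.
options = [
    {"name": "swerve", "outcome": [("cat", 0.9)]},
    {"name": "brake",  "outcome": [("human", 0.001)]},
]
print(choose_maneuver(options)["name"])  # "swerve", under these invented weights
```

Change the weights and the car changes its behavior, which is why Moore insists the values have to come from a wider discussion than engineering alone.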
Richardson, head of the Campaign Against Sex Robots, worries about the “ongoing erosion of the distinction between human and machines.” Her work shows how detrimental sex robots can be to humans–by creating an asymmetrical relationship of power. While she doesn’t see that becoming widespread very soon, Richardson thinks that in 2016 we will “start to see artificial avatars acting in cyberspace like persons,” albeit modified.
7. A problem with representation
While many schools are pushing to recruit a more diverse student base, “we still have a terrible gender imbalance,” said Moore. “We cannot have the AI systems of the future all being built by one demographic group. These systems need to be built by a representation of the country’s population.”