
Carnegie Mellon has long been one of the best places in the world to get an education in computer science, and within that field there’s now significant demand for jobs in artificial intelligence. Still, Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University, sees an “incredible shortage” of people trained in the mathematics, statistics, and computer science that underlie making systems autonomous, the AI jobs of the future.
Originally from the UK, Moore has a background in statistical machine learning, artificial intelligence, robotics, and statistical computation. He cares deeply about the impact of technology on our society. TechRepublic spoke to Moore about his thoughts on the direction of AI research, ethical issues in AI research, and the biggest problem he sees in preparing for future jobs in AI.
You’ve said that the Pittsburgh region is at the center of innovation in AI. Why?
Pittsburgh is one of a few cities working in robotics and AI machine learning, getting computers to improve themselves while operating. For us, this is a renaissance time. In the last 12 months, the median salary for a roboticist here has gone up 40% because there’s so much internal competition. Across many different industries, people are producing consumer products for launch in those areas. At the same time, Google announced they’re looking to hire 400 faculty-level researchers to help win the race, bringing AI assistance, helping people have intelligent conversations with Google. Fifty years ago, two founders of AI, Allen Newell and Herb Simon, decided to base their operations in Pittsburgh. The school of computer science at CMU has grown from two people back in 1965 to about 280 faculty now. A hundred are working on AI, 100 on machine learning. And plenty are working on understanding speech and understanding human emotions, exotic things that will become commonplace.
What’s the most pressing area of research in robotics?
We have a dirty secret. One of the reasons we’re having this renaissance in AI in the last few years is that we’ve become very good at computer vision. We’ve become very good at learning, so that robots no longer need to be programmed for every possible eventuality; they just adapt to their environment. That’s why you’re seeing this big burst in robotics, in the car industry and the logistics industry and retail and medicine and so forth. But we have not had the same success in grasping and manipulation: the claw or hand of the robot being dexterous, quickly moving around and picking things up without breaking them. That’s where we’re devoting a huge amount of effort. Roboticists around the world are focusing on that. Until then, robots will be deployed in areas where they’re not doing manipulation, but controlling machines and detecting problems, moving large, bulky objects around. We’ve given ourselves a five-year moonshot project. We want to put a robot arm on 100,000 powered wheelchairs in the US. The goal is that people in those wheelchairs who have high spinal cord injuries or degenerative diseases and can’t use their own arms can look at an object and hold their focus on it, and the robot arm will reach out to pick it up and place it where the user looks or indicates. If we can get this problem solved (we think there’s a 50/50 chance to do it in five years), it will be an extremely good thing for all the people who need this help. It’s a big test to see if we’ve broken the barriers of manipulation. This is exactly what we did about 15 years ago with self-driving car technology. That one panned out.
With sociable robots comes a new set of ethical considerations in AI research. How do you address these at CMU?
About 10 of our faculty are working on various aspects of this problem right now. Aaron Steinfeld is currently looking at questions like: what should an autonomous vehicle do when it’s driving down a road and sees an animal jump out in front of the car? It’s an interesting question, not just because of all the accidents involving animals out there, but because it’s a kind of starting point for what the planet is going to decide to do about the three laws of robotics. We know that those three laws simply won’t work, but we do need to look at new laws. The autonomous vehicle has to decide whether to save the animal’s life while making sure it protects the human occupant’s life, and it only has a fraction of a second to make these really important decisions. This is no longer a technological problem, it’s a policy problem, and everyone should be involved in how autonomous vehicles will make these kinds of trade-offs. It’s the beginning of a whole new discipline.
Is there anything that worries you about AI?
I have a huge sense of urgency on this. There are so many places where there are unnecessary deaths and suffering. We could have the target of bringing down the number of unnecessary car deaths by a factor of ten. Or using AI to detect emerging disease outbreaks which could kill millions very quickly. Or using AI to help run the logistics supply chain, making sure that after an emergency, instead of everyone running around in a fog, there’s a very clear plan on who needs what. Or people who are sick getting real answers about when to seek medical treatment. Ten or 20 million people go to search engines every day asking for medical advice, and the advice they’re getting back is not very good. All of these are places where AI will save lives.
Uber recently poached 40 faculty and staff from CMU to work on self-driving cars. Do you worry about top researchers leaving academia and heading into for-profit careers?
Since last year, we’ve doubled the size of all our robotics programs because there is so much demand in the world for these people. In general, if you look at the number of folks we’re training in machine learning, it’s skyrocketing here, and it’s skyrocketing at other places. In terms of education, these are boom years because there’s so much demand for people with these skills. We grew our faculty by 17 new hires last year; we lost faculty to Uber, but if you look overall, we’re in a huge expansion mode. We’re encouraging faculty and students to go out and work at places like Amazon and Microsoft and NASA, to do that for a few years and come back. While the AI revolution is happening, you’re going to see a world where top researchers keep recirculating between these different worlds. There’s a lot of basic theory that’s still being worked out, and a lot of actual implementation where great ideas coming out of the university need to get put into real-world products.
The thing that keeps me up at night is whether we will find enough middle schoolers and high schoolers who want to come into this area. That’s vastly more important to me than questions about whether one company behaves badly.
Are schools meeting the demands?
There’s been a lot of progress, and I’m excited by the new inclusion of CS in the New York curriculum. In Queensland, Australia, robotics is becoming an actual part of the required curriculum for kids. The countries that really push the math and statistics behind AI are the ones that will prosper in the long run.
How is CMU dealing with recruiting more women into tech?
We’re really passionate about this. We’re the first university to have broken through the 40% barrier: 40% of our incoming class is women. The national average is about 20%. That didn’t happen by chance. We want to be extremely welcoming and ready for diversity in the classroom. If you look further along the career trajectory, the numbers for women in leadership roles in high-tech companies are far more depressing. Universities and industry and government need to team up to make sure the leadership of AI represents the entire population.