When Marie desJardins was growing up, most people didn’t have computers at home. She grew up with an Atari, which was “really, really cool,” and remembers playing tic-tac-toe against a computer at a science museum.
“I just thought that was the most amazing thing that you could type in your moves, and then the computer would play against you,” desJardins said. “I had never imagined such a thing.”
Now an AI professor at the University of Maryland, Baltimore County, desJardins is an important name in the AI world. For the last dozen years or so, she has been involved with the AAAI Conference on Artificial Intelligence, the main North American conference on AI, run by the Association for the Advancement of Artificial Intelligence (AAAI). It started in grad school, when desJardins, who earned her PhD in computer science from Berkeley, began reviewing for the conference; she later served as an AAAI councilor. In 2013, desJardins was invited to be the co-chair of the conference.
TechRepublic caught up with desJardins to talk about her journey into the world of AI, where she sees the future of AI research heading, and what’s coming up at AAAI-16 in Phoenix in February.
What first interested you in AI?
While I was an undergraduate at Harvard and studying for my computer science degree, I took a psychology class and got really interested in questions of cognitive psychology, particularly understanding how people learn and remember and forget things. How do you know when to pay attention and how do you generalize from experience and do better on tasks over time? That pushed me towards focusing on artificial intelligence and thinking about how we could model, computationally, those processes that people do naturally.
The thing about computer science in general and AI in particular is that it’s really a very interdisciplinary field inherently, because there’s a very small number of people who are computer scientists who only work on theoretical computer science. Most people who are in computer science are trying to solve problems outside of computer science. They’re trying to build software or hardware systems that solve some problem in some other field. Whatever fields you’re interested in, you could do that in computer science. It gives you an opportunity to explore applications of computing in many different areas.
What’s your main area of research in AI?
Interactive AI, where you’re not just handing some problem off to a machine to solve and then give you the answer. You’re really trying to have human beings work in partnership with the computer to solve some problems. Decision support is one of the areas that I’ve done a lot of work in, where you have some person who’s trying to plan an operation. I’ve done some work focused on military planning, but also planning and decision making for intelligent robots to support people in performing various tasks. How do you make that interactive where the human being is the decision maker?
The system is helping to analyze the problem and point out possible paths of action, identify contingencies in case of failure, have backup plans in case something goes wrong, draw attention to things that maybe are going off track as you carry out the plan, and find opportunities for either using resources more efficiently or combining tasks to make things faster. I’ve done interactive tutoring systems and work on interactive machine learning. Big data and machine learning are big buzzwords now. You’ve got all this data and you’re trying to model some phenomenon or make predictions or analyze data. A lot of the algorithms for doing that are very black-boxy. I take a bunch of marketing data and I dump it into an algorithm. The algorithm says, “You should stock your shelves with more crunchy peanut butter because people are buying more crunchy peanut butter these days.”
It doesn’t necessarily give you any insight as to why that’s happening. How did it come up with that suggestion? Why peanut butter and why not jelly? I’ve worked some, particularly with a colleague who does visualization, on trying to provide people with insight into the models that are being developed. It’s not just that the model is making a prediction; it’s explaining why that prediction was made. Then, you might go back and say, “Oh. You know, I should constrain this to look at these particular factors, because it’s making predictions based on stock market prices and I don’t think that’s actually relevant for what I’m trying to do. I’m going to focus on the demographics of my purchasers.” Then I can rerun the model, focusing on the things that I think are important, and I may get different results at the other end.
What do you think of AI thinkers like Nick Bostrom and others who are looking at the future of humanity?
There’s a lot of pop stuff right now. Right? Like Ray Kurzweil. I’m more practical. I’m thinking about how we can solve problems. The stuff that makes it into the popular media is more like, “Oh, killer robots are going to take over the world and destroy humanity,” or “We’re going to reach this singularity and all upload ourselves into the cloud and live forever.” I don’t believe in that stuff. Some of these things may happen, but I think they’re like Nostradamus. You predict the future. If you’re right, everybody says, “Wow. What an amazing prediction.” If you’re wrong, everybody forgets.
I think it’s a really interesting question whether we will ever build computers that are self-aware, but from a practical perspective it doesn’t matter. If we have a computer program that behaves like a really great administrative assistant, who’s pleasant, whom we can talk to and get things done with, and who we feel cares about us, what difference does it make whether they’re self-aware or not? If we have a self-driving car that can get us from our home to the airport and drop us off so we don’t have to find parking, and that gets into fewer accidents than human beings do, I’d use that.
I’m sure you saw Ex Machina. What did you think of that?
I thought it was terrible. I really hated that movie. I also really hated Steven Spielberg’s AI movie. Ex Machina is our nightmare case. I think that’s what people tend to focus on. My favorite is Blade Runner. Harrison Ford is a cop, a Blade Runner, who tracks down robots that have gone rogue and destroys them. They look just like people and talk just like people. They appear to have emotions just like people, but do they really, because they’re just robots? What does it mean to be human, and what does it mean to be sentient? The ethics of intelligent robots is explored really, really well in that movie, in a way that Ex Machina and AI just don’t even come close to.
Do you have any worries about robots?
Robots will take over jobs, just as every wave of automation has made certain jobs irrelevant. Whatever technology we develop is going to perform some task that right now people do. Those people are not going to be able to do that job anymore because we have a faster, more efficient way to do it. To me, the ethical question is how we as a society make sure that those technological advances benefit everybody, not just a few really rich people. That’s a political question. You should be asking Donald Trump this question. Are we going to let Sam Walton and the Koch brothers get richer and richer and richer because they have more automation and don’t have to pay people?
Taxpayer dollars pay for research. Consumer dollars pay for research to develop this new technology. The funding to create this new technology is coming from everybody. We should try to make sure that it goes to everybody.
We need to think about what we want people to be doing that still continues to make the world a better place. Part of that is about education and making sure that people can be educated into the jobs that robots and automation can’t do. Part of it is that I think we will become, as we already have been becoming, more of a service economy. Instead of people farming and building mechanical things, which used to take a lot of labor, we don’t need as much labor to do those things. We can use more labor to create beauty and take care of each other and provide services and create and invent and do the things that we don’t expect robots to be able to do. If eventually all the jobs go away, then wouldn’t that be good? We could all just relax.
What trends do you see in AI?
People are starting to talk a lot about personalized education and personalized medicine. 2016 is a short horizon for some of what’s coming out. Right now, some of the self-driving car stuff is going to start to be deployed. One of these big zoos might start to have self-driving cars taking people around parks or estates or things like that. I don’t think they’ll be out on the roads for another 5 to 10 years, but then they will.
Another area is in personalized medicine and personalized education. Systems will be able to take data associated with your personal health condition, your personal educational needs and help to tailor your learning experience or your healthcare experience and get us away from some of the problems that happen when information falls through gaps.
What are the main issues in accomplishing all this?
Honestly, I think the biggest problems are people being resistant to adopting certain technology, competition (which can be healthy) keeping people from adopting standards, and badly designed systems. A lot of doctors are moving towards electronic health records, which sounds great. The problem is that a lot of the systems used to create those electronic medical records are very poorly designed. My sister is a pediatric cardiologist. She spends at least twice as long entering notes on each patient visit as she did 5 or 10 years ago, because she has to do it all electronically. It should take her less time, but it actually takes her more time because the interfaces are so poorly designed.
We’re moving in the right direction, but some things get worse before they get better. I think, right now, a lot of things are getting worse as we pay the price of figuring out how to do all these things well. You need to have people who really understand, say, medicine, and also really understand computers. The same goes for really good online delivery of education and for integrating that into the K-12 system. I think we should have people in classrooms and we should also have really great online instruction. To do that really well requires people who understand education and policy and computer science.
I work a lot with education folks, and they do not know the first thing about computer science, and the computer scientists don’t know the first thing about education. We need more people who can bridge those gaps. I think that’s partly a generational issue; we’re seeing more and more students come through who are majoring in all different things but who are taking classes in computer science and technology. We’re seeing more and more states that are trying to introduce K-12 computer science standards, so that all students will know something about computers and how they work, what can be done and what can’t be done, and designing good systems. We need more of that literacy. I don’t just mean knowing how to type, but really understanding how computers work. Maybe you’re not going to be a programmer, but if you know how to program a little bit, then you can work with people who are software engineers a lot more efficiently.
What is it like being a woman in AI?
It gets discouraging. The problems that computer scientists, particularly AI practitioners are working on, are the problems that women should really care about and would be really good at, and yet only 12% of undergraduate computer science degrees go to women. Maybe it’s climbed a tiny bit in the last few years, but we’re down in the 12-15% range. If you look in industry in most technical companies, maybe they have 20% female employees, but a lot of those people are administrators. If you just look at technical staff, it’s going to be more like 10-15%.
There’s this just implicit bias thing where computing has been male-dominated for so long that everybody just thinks it’s natural and they think it’s inevitable, just like 50 years ago everybody thought only men could be doctors. Now, half of the people in med school are female, maybe more than half. It hasn’t changed in computer science. It is discouraging. The reasons are complicated and entirely cultural. I think it will change when we get to the point where there’s more universal exposure in computer science, because right now it’s very self-driven. Computer science in most high schools is purely an elective. The people who take those classes are just going to be people who already have an interest. In our society, that’s almost all men.
What looks good to you on the AAAI-16 agenda next month?
There’s a robot exhibition. There’s a tutorial on organ exchanges, which is actually really interesting. Kidney donation has become an application of artificial intelligence techniques. What they’ve done is use AI-based algorithms to find chains of people who are willing to donate kidneys. Let’s say I would be willing to give you my kidney because you’re my sister, but we’re not a match. Maybe there’s another person who could give you a kidney, and I could give a kidney to that person’s sister. Now, we have a little loop. Those loops can be created, but it’s really hard to find them if they get longer than two pairs. Algorithms that have been used for other AI problems have been applied in the last few years to analyze organ exchanges and to find these long chains of exchanges.
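The loop-finding she describes can be sketched as a brute-force search over a compatibility graph. The donor-patient pairs and compatibilities below are invented for illustration, and real kidney exchanges use far more scalable integer-programming formulations; this sketch only shows the core idea of a cycle where each pair's donor gives to the next pair's patient.

```python
from itertools import permutations

# Hypothetical donor-patient pairs: pair X's donor is compatible with the
# patients listed. All data here is made up for illustration.
compatible = {
    "A": ["B"],        # A's donor matches patient B
    "B": ["C"],        # B's donor matches patient C
    "C": ["A", "D"],   # C's donor matches patients A and D
    "D": [],
}

def find_cycles(compat, max_len=3):
    """Return every exchange cycle up to max_len pairs long.

    A cycle like A -> B -> C -> A means each pair's donor gives a kidney
    to the next pair's patient, so every patient in the loop is matched.
    """
    cycles = []
    for length in range(2, max_len + 1):
        for perm in permutations(compat, length):
            # Keep only one canonical rotation of each cycle.
            if perm[0] != min(perm):
                continue
            if all(perm[(i + 1) % length] in compat[perm[i]]
                   for i in range(length)):
                cycles.append(perm)
    return cycles

print(find_cycles(compatible))  # [('A', 'B', 'C')]
```

Note how the only exchange here needs three pairs: no two pairs match each other directly, which is exactly why finding loops longer than two is both valuable and computationally hard at scale.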
There’s an AI for disasters session with Robin Murphy, who sent some of her robots into the rubble after 9/11 to try to find survivors, among other things. Really, really cool stuff.