
'Socially-cooperative' cars are part of the future of driverless vehicles, says CMU professor

At Carnegie Mellon, one of the leaders in robotics, Professor John Dolan is finding ways humans and machines can communicate safely on the road. He spoke with TechRepublic about his research.

John Dolan with his research vehicle at CMU
Image: Tim Kaulen/Carnegie Mellon University

Driverless cars are our future, with nearly every automaker racing to create its own version of the autonomous vehicle. But autonomous systems still have a long way to go, and the cues and signals that human drivers know instinctively are not second nature for our machines.

To work out how vehicles with autonomous features can drive safely on the road, John Dolan, a principal systems scientist in the Robotics Institute at Carnegie Mellon University, studies how humans communicate and coordinate with these machines, helping them learn to complete complex tasks on the road.

TechRepublic spoke with Dolan about his research, why he doesn't see adoption of driverless vehicles in the next two years, and why GPS is a fundamental tool for ensuring safety on the road. Here is the conversation, lightly edited for clarity.

What is "socially-cooperative driving"?

The basic idea is that if you program a robot to do some task, it may not behave the way human beings normally behave when other human beings are around. You can imagine situations where robots are in a lab and they really don't care; they just get the job done. But things are different outside the lab, when you're in a driving situation.

SEE: Photos: A list of the world's self-driving cars racing toward 2020

Think about when our car enters a highway from an entrance ramp. We negotiate with nearby cars: if we're close to another car and it's ahead, we let it go; if we're ahead, it lets us go. If we're close to it, we negotiate with visual cues, and also with speed cues. We speed up to indicate that we don't want to yield to the other car, or, vice versa, they speed up to get in front of us. So, our team is looking at how to use probability to judge the intentions of that driver, and to be able to convey to that driver what our intentions are, in order to have safer and more natural interactions with other cars.
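
To make the idea concrete, here is a minimal sketch of how such probabilistic intent estimation might look: a Bayesian update over two hypotheses (the other driver yields or asserts), driven by the acceleration we observe. The likelihood values and threshold are illustrative assumptions, not parameters from Dolan's system.

```python
# Minimal sketch: Bayesian inference of a neighboring driver's intent
# (yield vs. assert) during a highway merge, from observed acceleration.
# The hypotheses, likelihood model, and numbers are illustrative only.

def update_intent(prior_yield, accel, accel_threshold=0.3):
    """Update P(other driver yields) after observing their acceleration (m/s^2).

    Assumed likelihood model: a driver who intends to yield is more
    likely to hold speed or decelerate; one who intends to assert is
    more likely to speed up.
    """
    if accel > accel_threshold:        # they are speeding up
        p_obs_given_yield, p_obs_given_assert = 0.2, 0.8
    elif accel < -accel_threshold:     # they are slowing down
        p_obs_given_yield, p_obs_given_assert = 0.8, 0.2
    else:                              # holding speed: weakly informative
        p_obs_given_yield, p_obs_given_assert = 0.55, 0.45

    # Bayes' rule over the two hypotheses.
    num = p_obs_given_yield * prior_yield
    den = num + p_obs_given_assert * (1.0 - prior_yield)
    return num / den

# Start undecided, then watch the other car over several timesteps.
belief = 0.5
for accel in [0.5, 0.7, 0.1]:          # the other car keeps accelerating
    belief = update_intent(belief, accel)
    print(f"P(they yield) = {belief:.2f}")
# A falling belief suggests we should yield and merge behind them.
```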

John Dolan, principal systems scientist in the Robotics Institute at Carnegie Mellon University
Image: Carnegie Mellon

Do you think humans will, for a while, have some involvement in driving, even when cars have autonomous capabilities?

On the one hand, there's the question of the individual car: is it going to be fully autonomous, or is the human still going to be involved? And on the other hand, your car might be fully autonomous, but there are still going to be human-driven cars on the road at the same time.

I do think having humans involved is desirable, because the technology hasn't matured enough to do without them. It's also unavoidable, from the standpoint of the insurance companies and probably the automakers as well, that there be human oversight of the autonomy functions in the short term.

Elon Musk talks about having Level 4 (full autonomy) in two years. Of course, he's prone to making various strong statements, which I think can be beneficial in terms of prodding people along and accelerating the technology. But I'd still be very surprised if we had widespread Level 4 in two years, because that literally means a driver can read a book and go to sleep in the car, and I just don't think we're going to be ready for that in two years.

Tesla's approach seems to be very different from, say, Google's: releasing incremental updates all the time, whereas Google is waiting for the technology to reach a certain level. What do you think about these different approaches?

To be honest, a lot of us in this area who aren't at Google have been wondering what Google's endgame is. They seem to be more serious about working with an automaker, and possibly getting to the point of fielding production vehicles. Tesla is running some risks, but the value is the kind of strong, overall capability that Musk is claiming: at least another 70 or 80 million-plus miles of data, and building confidence in consumers.

SEE: Tesla speaks: How we will overcome the obstacles to driverless vehicles (TechRepublic)

I do think that Tesla's approach is very much anathema to the mentality the automakers have established over the course of 100 years. They want stuff to be as reliable as it can possibly be, and tremendously well-tested, before they release it to market.

Do you think Tesla's is a smart approach?

I think there's one big advantage of doing it: You hook your customers, and the customers are fairly cautious in using it. I mean, some people have done foolish things. It's kind of like people trying to trim their hedges with their lawnmower by lifting it up above their waist. If you're crazy, you'll do that; if you're a reasonable person, you won't. Clearly, if Tesla says you need to pay attention when this thing's operating on the road because it's not perfect yet, and you're a reasonable person, then you'll do that. So, I think it can build a customer base, and even from a non-selfish perspective, build enthusiasm and trust in the technology. The disadvantage is that if you have an accident attributed to the autonomy, that's going to be a tremendous PR disaster, and it'll be difficult to recover.

Isn't that bound to happen at some point?

Well, I agree with you. It's a fear we all have, working in this area, that it could happen, and it could be too soon and generate a lot of bad press that could derail the technology. I think one way to help head that off is what Elon Musk proposed: make data available to evaluators, the Department of Transportation or whomever. There have to be some caveats there, because I don't know what the exact nature of the data is. It needs to be convincing, and it needs to take into account the fact that humans can intervene to prevent accidents before anyone can say the car "drove x number of miles without any problems."

But one question is, to what extent were those problems prevented by human intervention? Still, if you can make the case to the public based on data like that, with some assurance that this is really safe, the way air travel is regarded as very safe even though we hear about crashes from time to time, that would help.

How exactly are you testing these things out at CMU?

We just don't have the manpower to test with the same level of rigor and repetition that the automakers do. What we do is always have two people in the car. We have a 2011 Cadillac SRX, which we retrofitted ourselves, and that's the only vehicle we're currently using in our lab. We've got a safety driver in the driver's seat who is ready at any time to switch the car from autonomous mode back into manual mode, and we typically have a developer, somebody with a laptop who's testing out software and making adjustments as needed, sitting in the passenger seat.

Image: Carnegie Mellon

We've driven it in highway environments, some semi-urban environments, and also in urban environments around the university, and if we see we're getting too close to a car, or there's a situation we can't handle because it's too complex, we intervene in order not to cause any problems.

What are the most common issues that come up when you're doing this? What are the cars not getting right?

We've had occasional problems with using GPS accurately, which are more severe when you're in an urban environment with taller buildings around. The problem there is that if you're off by several meters, you won't know exactly which lane you're in, even though you know which road you're on. So that can give you some problems with your position jumping in a lane. We address that, for the most part, with the ability to read lane markings and use those as a substitute for, or in combination with, the GPS. But occasionally you have a situation where the GPS is bad and you also have faded lane markings, or some part of the road that doesn't have any lane markings. So, that would be one issue.
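
As an illustration of combining lane markings with GPS, here is a minimal sketch of a one-dimensional Kalman-style measurement update that fuses a noisy GPS lateral estimate with a camera's lane-marking measurement. The noise figures and the `fuse` helper are assumptions for the sketch, not CMU's implementation.

```python
# Minimal sketch: fusing a noisy GPS lateral-position estimate with a
# camera-based lane-marking measurement via a 1-D Kalman-style update.
# Noise magnitudes are illustrative; real systems tune these carefully.

def fuse(estimate, variance, measurement, meas_variance):
    """Standard scalar Kalman measurement update."""
    gain = variance / (variance + meas_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1.0 - gain) * variance
    return new_estimate, new_variance

# Lateral offset from lane center, in meters.
gps_offset, gps_var = 2.4, 4.0      # urban GPS: can be meters off
x, var = gps_offset, gps_var

cam_offset, cam_var = 0.3, 0.04     # lane-marking camera: ~20 cm sigma
lane_markings_visible = True        # faded/absent markings break this

if lane_markings_visible:
    x, var = fuse(x, var, cam_offset, cam_var)

print(f"fused lateral offset: {x:.2f} m (variance {var:.3f})")
# With good markings the camera dominates; when markings fade, the
# filter falls back toward the raw (and possibly jumping) GPS estimate.
```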

Another one has to do with bad weather, and that's a problem all the autonomous driving efforts are facing. If you have really heavy rain or snow, that tends to fool the LiDAR sensors, the laser sensors. Sometimes we drive on the borderline, where we just have some rain, and that can occasionally cause the car to brake because it thinks that heavy, or almost heavy, rain is actually an obstacle.

Another thing that is generally difficult, and that we tend to avoid as a driving environment because we're not ready for it yet, is the kind of thing that goes on in downtown LA or downtown Manhattan, where you have a bunch of cars on the road. Maybe they're swerving out of their lane to avoid, let's say, trucks that are offloading things to shops along the sidewalk, or kids or people darting out into the road. We've got a basic ability to recognize these obstacles, but the sorts of judgments that we quickly make in a very dense, uncertain environment like that are very difficult for autonomous vehicles to make right now.

How do you handle stop lights and obstacles?

We're using a couple of different ways to handle stop lights. On the one hand, if the stop light is equipped with a DSRC radio, we can communicate with it and figure out its state that way. And we also have a vision system, a camera system, which is able to see the stop light and make a judgment about what color it is.
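
A minimal sketch of how those two sources might be combined follows, assuming a hypothetical `light_state` function that prefers the DSRC-reported state, falls back to the camera, and defaults to red when the evidence conflicts. None of this reflects CMU's actual logic.

```python
# Minimal sketch: deciding a traffic light's state from two sources,
# a DSRC radio broadcast (when the intersection has one) and a camera
# classifier. The interfaces and confidence threshold are hypothetical.

from typing import Optional

def light_state(dsrc_state: Optional[str],
                camera_state: str,
                camera_confidence: float) -> str:
    """Prefer the DSRC-reported state; fall back to vision.

    If both are available and they disagree, be conservative and
    treat the light as red.
    """
    if dsrc_state is None:
        return camera_state if camera_confidence > 0.8 else "red"
    if dsrc_state != camera_state and camera_confidence > 0.8:
        return "red"   # conflicting evidence: assume the worst
    return dsrc_state

print(light_state(None, "green", 0.95))      # vision only -> green
print(light_state("red", "green", 0.95))     # conflict -> red
print(light_state("green", "green", 0.90))   # agreement -> green
```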

We also have some ability to use cameras to do pedestrian and bicyclist detection and avoidance. We've got pretty detailed motion planning. Our typical, somewhat complex, scenario would be a bidirectional road: we've got a car coming in the other direction, and let's say a person steps out into the street in front of you. You want to move into the other lane to avoid the person, but get back into your own lane in time to avoid the car that's coming your way. That requires, if not a super-complex, at least a more complex trajectory than continuing in your own lane. So we've got a motion planner that's able to reason about those things.
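
Here is a minimal sketch of the timing check such a maneuver requires: can we occupy the oncoming lane around the pedestrian and be back before the oncoming car arrives? The geometry, speeds, and safety buffer are illustrative assumptions, not the lab's planner.

```python
# Minimal sketch of the bidirectional-road scenario Dolan describes:
# swerve around a pedestrian, but return to our lane before an
# oncoming car arrives. All numbers are illustrative.

OUR_SPEED = 10.0          # m/s
ONCOMING_SPEED = 10.0     # m/s
PED_POSITION = 30.0       # m ahead, in our lane
ONCOMING_START = 120.0    # m ahead, in the opposite lane

def can_swerve(enter_m, exit_m, buffer_m=10.0):
    """True if we can occupy the oncoming lane between longitudinal
    positions enter_m and exit_m (meters ahead) and be back in our
    own lane before the oncoming car reaches our exit point."""
    t_exit = exit_m / OUR_SPEED                      # when we re-enter our lane
    oncoming_pos = ONCOMING_START - ONCOMING_SPEED * t_exit
    return oncoming_pos > exit_m + buffer_m

# Candidate plan: move over 10 m before the pedestrian, return 10 m after.
plan = (PED_POSITION - 10.0, PED_POSITION + 10.0)
if can_swerve(*plan):
    print("swerve around the pedestrian, then return to lane")
else:
    print("brake and wait for the oncoming car to pass")
```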

What are the most interesting takeaways or most surprising things you've learned after doing this?

Well, I think of them in terms of the gaps. One of them is localization, because it costs a lot of money to get the localization accuracy you need to drive the way it's commonly done now, which is with a very accurate map and a very accurate GPS system. A GPS that accurate typically costs over $50,000, so a localization system that is less expensive than that, but still of similar accuracy, would be a big step forward.

Now, Google does something that gives them good accuracy, but they have to drive the environment ahead of time and build a very detailed and fairly data-intensive map of the whole area, and then they match against that map as they go along. So, that's one thing: cheaper localization would be a great thing.

Another one is the reliability of the entire perception process: the perception hardware itself, and the algorithms used to interpret those data and reason about them. That's an area where there's still lots of work to do. A lot of work has been done, but there's nothing close to what we as humans are capable of apprehending and reasoning about, and that brings in another level of difficulty.

You say the GPS system that you really need costs in the range of $50,000. So, Tesla, for example, will release a Model 3 next year for $35,000. How can they do that?

My guess is that they're depending primarily on the lane markings, because they're thinking of highway driving. In the West, because the weather's better, the lane markings maybe don't fade as quickly as the ones that get beat up with salt and everything in the Northeast. In any event, depending on lane markings, you can buy a Mobileye, which provides a camera-based way of reading lane markings; if you buy it in bulk, or if you are an automaker and can get it re-priced, I think it costs about $500 per vehicle. You can combine that with relatively inexpensive GPS and deal with a lot of the cases. But as soon as you get into an environment where there are no lane markings it can detect, then you're in trouble.

SEE: 10 autonomous driving insiders to follow on Twitter (TechRepublic)

A different pathway would be what I assume Google's working on; they announced in January 2015 that they were working internally on a new LiDAR sensor. You could use the method where, if you've traveled an area already, you gather a whole bunch of laser range data and build a three-dimensional representation of the environment, and then you match your laser readings against that representation when you're driving through it later, and you localize yourself that way. That's typically been expensive, because the laser sensor to do it is quite expensive. But Velodyne, the company that built the laser sensor a lot of us have used, has been steadily bringing its price down, so that now it has a version that costs only $8,000. It doesn't have the high density, but my guess is that Google internally is working on something that will be more down around the $1,000 level it needs to be at for it to be affordable on a car.
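
To illustrate the idea, here is a minimal two-dimensional sketch of map matching: score candidate vehicle positions by how well the current scan lines up with a previously built map of obstacle points, and keep the best. Real systems work with dense 3-D data and much smarter search; the points and search grid here are made up.

```python
# Minimal sketch of map-matching localization: score candidate poses
# by how well the current laser scan lines up with a previously built
# map, and pick the best. Real systems use dense 3-D data and far
# better search; this 2-D grid version only illustrates the idea.

import numpy as np

# Prior map: 2-D obstacle points gathered on an earlier drive.
map_points = np.array([[5.0, 0.0], [5.0, 1.0], [5.0, 2.0], [8.0, -1.0]])

# Current scan, in the vehicle frame. Ground truth here: the vehicle
# sits at (1, 0), so the obstacles appear about 1 m closer in x.
scan = np.array([[4.0, 0.0], [4.0, 1.0], [4.0, 2.0], [7.0, -1.0]])

def score(pose_xy):
    """Sum of nearest-neighbor distances after placing the scan at pose."""
    world = scan + pose_xy          # translate scan into the map frame
    d = np.linalg.norm(world[:, None, :] - map_points[None, :, :], axis=2)
    return d.min(axis=1).sum()

# Brute-force search over candidate positions on a coarse grid.
candidates = [np.array([x, y]) for x in np.arange(0.0, 2.1, 0.5)
                               for y in np.arange(-1.0, 1.1, 0.5)]
best = min(candidates, key=score)
print(f"estimated pose: x={best[0]:.1f}, y={best[1]:.1f}")  # ~ (1.0, 0.0)
```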

How are most automakers approaching full autonomy?

In the recent past, I've interacted with a lot of research labs from the various automakers in Silicon Valley and elsewhere, and it does seem like all of them have some kind of driving effort going on, and certainly all of them have what they call ADAS, advanced driver assistance systems. The philosophy has been that if you introduce those systems incrementally and then string them together, so to speak, you're going to get something close to autonomous driving. I think that's been a little bit naïve, because there are architectural issues with combining them all effectively, but that's the pathway a lot of the automakers have foreseen.

What's the most interesting thing that most people don't know about autonomous driving research?

Machine learning is a fundamental technology for a lot of what's going on in high tech now, including, for example, Google's rise to prominence with its search engine. And in recent years, so-called deep learning has become popular and very effective, so a lot of people are looking to use it in autonomous driving. I think it has strong potential, particularly for accurate classification: labeling different objects in the environment, possibly inferring what those people or animals are doing, and then using that information to make decisions about what to do. I think deep learning can have an increasing impact on driving and improve some components of it.
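
To give a flavor of the classification role Dolan describes, here is a minimal, untrained convolutional classifier sketch in PyTorch that would label camera crops as pedestrian, cyclist, vehicle, or background. The architecture and class list are illustrative assumptions, not anyone's production network.

```python
# Minimal sketch: a small convolutional classifier of the kind deep
# learning brings to driving, labeling image crops as pedestrian,
# cyclist, vehicle, or background. Untrained and illustrative only;
# production systems use far larger networks and labeled datasets.

import torch
import torch.nn as nn

CLASSES = ["pedestrian", "cyclist", "vehicle", "background"]

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(CLASSES)),
)

crop = torch.randn(1, 3, 64, 64)     # stand-in for a camera image crop
logits = model(crop)
print(CLASSES[logits.argmax(dim=1).item()])  # arbitrary until trained
```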


About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
