A team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a system to determine whether self-driving cars can be programmed to predict the driving personalities of drivers in other vehicles.
The system classifies drivers’ behavior to help self-driving cars better anticipate what other cars will do so they can drive more safely among them.
The researchers employed an existing framework for classifying personalities known as “social value orientation” (SVO), which represents the degree to which someone is selfish (“egoistic”) as opposed to altruistic or cooperative (“prosocial”). The system then maps out real-time driving trajectories for driverless vehicles based on that measurement.
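In the research literature, social value orientation is commonly expressed as an angle that trades off an agent’s own reward against another agent’s reward: 0° is purely egoistic, while around 45° is prosocial. A minimal sketch of that weighting, with illustrative reward values that are not drawn from the study:

```python
import math

def svo_utility(own_reward: float, other_reward: float, svo_angle_deg: float) -> float:
    """Combine an agent's own reward with another agent's reward,
    weighted by a social value orientation (SVO) angle.
    0 deg = purely egoistic, ~45 deg = prosocial (roughly equal weighting),
    90 deg = purely altruistic."""
    phi = math.radians(svo_angle_deg)
    return math.cos(phi) * own_reward + math.sin(phi) * other_reward

# The same candidate maneuver scored by drivers of different orientations:
# an egoistic driver ignores the other car's payoff entirely,
# while a prosocial driver weighs both payoffs.
egoistic = svo_utility(own_reward=1.0, other_reward=0.5, svo_angle_deg=0.0)
prosocial = svo_utility(own_reward=1.0, other_reward=0.5, svo_angle_deg=45.0)
```

Under this formulation, estimating another driver’s SVO angle from observed behavior lets a planner predict which maneuvers that driver is likely to prefer.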
Presently, self-driving cars are generally programmed to assume that all humans act the same way, said Wilko Schwarting, an MIT graduate student and the study’s lead author.
For example, there might be a challenging merging scenario on a short highway on-ramp where an autonomous vehicle needs to negotiate with another driver on whether and how it can conduct the merge.
Often, Schwarting said, human drivers either slow down to widen the gap so the merging car can complete the maneuver, or speed up to signal that it is not OK to merge.
“An autonomous vehicle must recognize these subtle social cues of selfishness or cooperation—and failure to do so not only decreases the overall flow of the traffic network but also impacts the safety” of the cars in that traffic, he explained. “We wanted to create a system that enables more human-like driving for [autonomous vehicles], by better understanding the social behavior of the drivers around them.”
The researchers designed and tested an algorithm in this type of merging scenario, as well as in unprotected left turns. They demonstrated that the system improved predictions of other cars’ behavior by 25%.
One of the challenges the researchers discovered in the first phase of testing was that modeling human drivers is difficult, Schwarting said. “We need to take into account how our own actions will influence the actions of the drivers around us.”
SVO is a good metric for estimating the behavior of human drivers during these merging and left-turn interactions, he said.
“It also allows us to decide how selfless (or selfish) an AV should be depending on the scenario. Acting overly conservative is not always the safest option, because it can cause confusion among human drivers.”
There is no timeline yet for implementing the SVO system, Schwarting said. “As a next step, we hope to try to apply the model to pedestrians, bicycles and other types of agents that would be part of these environments,” he said.
“We’d also like to look at other robotic systems that need to interact with us, such as household robots that can benefit from such a system,” as well as care-taking robots and robot tour guides.
“The ultimate goal is to develop AVs that can more easily interact with human drivers in real-world environments,” he said. “Creating more human-like behavior for them is fundamental for the safety of passengers and surrounding vehicles, because behaving in a predictable manner enables humans to understand and appropriately respond to the actions of the AV.”
Right now, all of the elements involved in driving are too complex for a robotics system to handle alone, according to a separate MIT study from August.
Yet, the authors said, the current challenge is that humans still need to play an integral role in the self-driving process, due to “the underlying uncertainty of human behavior as represented by every type of social interaction and conflict resolution between vehicles, pedestrians, and cyclists.”