
MIT's 'Moral Machine' crowdsources decisions about autonomous driving, but experts call it misguided

Conversations around driverless cars often drift into the sphere of ethics. MIT's new platform allows the public to weigh in on how autonomous 'decisions' should be made. But its premise has flaws.

Image: MIT


When presented with the choice, should a self-driving car kill a group of children or a single old man?

This is the kind of ethical dilemma that has begun to float into public discourse as automakers and technologists come closer to releasing fully autonomous vehicles on the road—and as human drivers slowly relinquish control. To grapple with these issues, MIT recently developed what it dubs a "Moral Machine"—a platform through which the public can weigh in on the kinds of "decisions" they believe autonomous vehicles should be programmed to make.

How does it work? Through the platform, developed by Scalable Cooperation at the MIT Media Lab, users view a "moral dilemma"—such as killing two passengers or five pedestrians—and judge which outcome they prefer. Participants can then compare their viewpoints with those of other users to see how they line up, and can discuss the issue online. Those who are so inclined can also create their own moral dilemmas.

The goal of the machine, said Iyad Rahwan, MIT Media Lab associate professor and co-creator of the Moral Machine, is to "further our scientific understanding of how people think about machine morality. The platform generates random scenarios in which we vary different factors. For example, do people have a tendency to favor passengers over pedestrians, all else being equal? Do people penalize pedestrians who are crossing illegally?"
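
To make the factor-varying approach concrete, here is a minimal, purely hypothetical Python sketch of how such a scenario generator might work. The factor names, character list, and data layout are illustrative assumptions, not the Moral Machine's actual code.

import random
from dataclasses import dataclass

# Hypothetical sketch only: the factors and structure below are assumptions
# made for illustration, not the Moral Machine's real implementation.

CHARACTERS = ["one passenger", "two passengers", "five pedestrians",
              "a child", "an elderly man"]

@dataclass
class Dilemma:
    stay_harms: str         # who is harmed if the car stays its course
    swerve_harms: str       # who is harmed if the car swerves
    crossing_legally: bool  # were the pedestrians crossing with the light?

def random_dilemma() -> Dilemma:
    """Draw a scenario at random, varying factors so preferences can be compared."""
    stay, swerve = random.sample(CHARACTERS, 2)
    return Dilemma(stay, swerve, random.choice([True, False]))

def record_choice(dilemma: Dilemma, chose_swerve: bool, log: list) -> None:
    """Store a participant's judgment alongside the factors that framed it."""
    log.append({"dilemma": dilemma, "swerved": chose_swerve})

log = []
d = random_dilemma()
print(f"Stay: harm {d.stay_harms} | Swerve: harm {d.swerve_harms} | "
      f"Legal crossing: {d.crossing_legally}")
record_choice(d, chose_swerve=True, log=log)

Because each factor is randomized independently, aggregating many such responses would, in principle, let researchers ask whether people favor passengers over pedestrians "all else being equal," as Rahwan describes.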

But while this kind of tool fits perfectly into the public conversation around autonomous driving, touching on a central fear of the technology, the premise of the 'moral dilemma' contains serious flaws. Here's why.

Human drivers don't make these decisions

"Humans in a moment of panic are rarely equipped to make moralistic decisions to choose between killing one or two people," said Michael Ramsey, autonomous vehicle analyst at Gartner. "They simply try to avoid killing anyone or anything. The most likely scenario is that the car will be programmed to avoid a collision, without regard to 'whom to save.'"

Other autonomous car experts agreed.

John Dolan, principal systems scientist in the Robotics Institute at Carnegie Mellon University, echoed this point: normally, we consider which action will cause harm, "rather than, should I kill the baby or the little old lady?"

These dilemmas, Dolan said, have often stemmed from fears among religious communities.

"This problem," said Bryant Walker Smith, one of the leading experts in the legal aspects of autonomous driving, "has been dangerously hyped."

Limited scenarios

Jeffrey Miller, IEEE member and associate professor of engineering at the University of Southern California, doesn't think the platform covers enough of the scenarios driverless vehicles will actually face.

"The reality is that life-or-death decisions for the car are not the main moral issue that car makers have to code into vehicles," said Miller. "There are more mundane decisions about breaking the law in order to be safe, like keeping up with traffic on a busy highway, or running a red light in an emergency. These kinds of moral issues happen with frequency with human drivers."

In response to this point, Rahwan pointed out that the tool cannot capture all real-life scenarios, "which involve a much wider range of possible actions, and they incorporate a great degree of uncertainty about tradeoffs."

For example, Rahwan said, "in real-life, a car may recognize danger ahead, and must decide whether to move quickly to a second lane. This may slightly decrease the chance of harming its own passengers, but in doing so, it can also surprise other cars. This is not a black-and-white situation, but involves similar tradeoffs to those we're exploring in the Moral Machine."

Driverless cars cannot yet make these decisions

The dilemmas laid out in the Moral Machine "will, or should be, extremely rare," said Dolan. "The car should not be driving so fast that such dilemmas arise."

These scenarios, said Ramsey, "assume that a car driving at high speed could actually recognize the difference between a pregnant woman and a small child or an old man and a bank robber. That kind of technology is not available and won't be for some time after the dawn of autonomous vehicles."

SEE: Autonomous driving levels 0 to 5: Understanding the differences (TechRepublic)

Dolan agreed. "We are currently far from a situation in which a self-driving car will be omniscient and be able to determine unequivocally that action A will kill this group of people and action B will kill that group," he said.

Rahwan agreed that these decisions are complex. Still, he argued, through "the integration of these components, complex behavior will emerge, and this behavior will embody tradeoffs, whether we admit it or not."

People must consider, "if car maker X provides better passenger safety, but kills more pedestrians on average than car maker Y, should regulators intervene?" Rahwan asked. "This is an aggregate pattern, not a single accident."

How will the data be used?

Ultimately, this kind of platform could collect very interesting data, but its value may lie less in programming cars than in what it reveals about the people making the decisions.

At this point, Rahwan said, the Moral Machine has collected 14 million decisions from 2 million people worldwide, which will "help form a global picture of people's perception of machine ethics, and investigate cross-cultural differences."

Ramsey said he thinks that this kind of data could be interesting. "This will create a fairly good picture of a select society's moral code," he said. "I would be more interested in seeing the societal differences collected in the data from different countries than whether it would be used as a framework for making decisions."
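
As a rough illustration of the kind of cross-country comparison Ramsey describes, a few lines of Python could aggregate the share of respondents in each country who chose to spare pedestrians. The record format here is an assumption made for the sketch, not the project's real dataset or analysis pipeline.

from collections import defaultdict

# Assumed record format for illustration only; the real Moral Machine data
# and analysis are far more elaborate than this sketch.
responses = [
    {"country": "US", "spared_pedestrians": True},
    {"country": "US", "spared_pedestrians": False},
    {"country": "JP", "spared_pedestrians": True},
]

def spare_rate_by_country(records):
    """Return, per country, the fraction of respondents who spared pedestrians."""
    counts = defaultdict(lambda: [0, 0])  # country -> [spared, total]
    for r in records:
        counts[r["country"]][1] += 1
        if r["spared_pedestrians"]:
            counts[r["country"]][0] += 1
    return {c: spared / total for c, (spared, total) in counts.items()}

print(spare_rate_by_country(responses))  # e.g. {'US': 0.5, 'JP': 1.0}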

SEE: When your driverless car crashes, who will be responsible? The answer remains unclear (TechRepublic)

But as Miller pointed out, "we don't know if there is going to be a large enough sampling to represent all of the different groups."

Even if there is a representative sampling, "polling people about moral decisions, while the results may be intriguing, is not my idea of how to give engineers basic material for programming a self-driving car's moral decisions," said Dolan.

To be fair, the Moral Machine is not intended to replicate real-world scenarios.

Rahwan explained that the dilemmas are "significantly simpler than real-life accidents." Still, he said, "they help people appreciate the difficulties of making moral choices by algorithms."

Miller agreed. "It is definitely a step in the right direction," he said. "At least it's starting the conversation about the ethics with driverless vehicles."


About Hope Reese

Hope Reese is a Staff Writer for TechRepublic. She covers the intersection of technology and society, examining the people and ideas that transform how we live today.
