Digital technology is advancing so fast it is impossible for judicial systems to keep up. For example, Christopher Markou, a legal expert and faculty member at the University of Cambridge, in the TechRepublic article Robot crime raises thorny legal issues that need addressing now, warns that current laws are woefully inadequate to handle the ethical dilemmas being presented by artificial intelligence (AI)-controlled technology.

Markou’s concern is not new, as attested by the trolley problem, a famous thought experiment devised by philosopher Philippa Foot. Put simply, a runaway trolley is heading towards five people stuck on one set of tracks. A worker standing in the train yard has the ability to switch the trolley to a different set of tracks; however, there’s a person stuck on that track as well. The question becomes: Does the worker do nothing and let the trolley hit the five people, or divert the trolley so it hits the single person?

SEE: IT leader’s guide to the future of autonomous vehicles (Tech Pro Research)

Fast forward to driverless vehicles

Ethical problems that are not all that different from the trolley example surround autonomous vehicles. Daniel Saraga, head of science communication at the Swiss National Science Foundation, in his column Should algorithms be regulated? offers a scenario where a driverless car’s AI command and control system would have to make a similar decision.

A driverless car senses an oncoming car is headed straight for it, and the only viable escape route is blocked by pedestrians. Saraga notes the driverless car’s control algorithm must decide whose lives to put at risk: its passengers, the passengers in the other vehicle, or the pedestrians.

SEE: Our autonomous future: How driverless cars will be the first robots we learn to trust (PDF download) (TechRepublic)

Making tough decisions

For several years, pundits in the private sector have been offering their thoughts on what should and should not be considered ethical behavior; governments, however, have been slower to act, and Germany is one of only a handful of nation states enacting guidelines. The introduction to the German Federal Ministry of Transport and Digital Infrastructure’s report Ethics Commission: Automated and Connected Driving states:

“At the fundamental level, it all comes down to the following question. How much dependence on technologically complex systems–which in the future will be based on artificial intelligence, possibly with machine learning capabilities–are we willing to accept in order to achieve, in return, more safety, mobility, and convenience?”

This press release from the Federal Ministry of Transport and Digital Infrastructure highlights the key elements of the report:

  • Automated and connected driving is an ethical imperative if the systems cause fewer accidents than human drivers (positive balance of risk).
  • In hazardous situations, the protection of human life must always have top priority over damage to property.
  • In the event of unavoidable accident situations, any distinction between individuals based on personal features (age, gender, physical or mental constitution) is impermissible.
  • In every driving situation, it must be clearly regulated and apparent who is responsible for the driving task: the human or the computer.
  • It must be documented and stored who is driving (to resolve possible issues of liability, among other things).
  • Drivers must always be able to decide themselves whether their vehicle data is to be forwarded and used (data sovereignty).
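The commission's first three points can be read as a strict ordering: minimize harm to people first, and consider property damage only when human harm is equal, while excluding personal characteristics from the decision entirely. A minimal sketch of that ordering follows; all names, fields, and numbers are hypothetical illustrations, not anything prescribed by the report, which states principles rather than an implementation.

```python
# Hypothetical sketch of the commission's priority ordering.
# The report prescribes principles, not code; this only illustrates
# how a strict "life before property" ranking could be expressed.

from dataclasses import dataclass

@dataclass(frozen=True)
class Option:
    name: str
    humans_harmed: int      # expected number of people injured
    property_damage: float  # estimated monetary damage

def choose(options):
    """Pick the option harming the fewest people; property damage is
    only a tie-breaker. Personal features such as age or gender are
    deliberately not inputs, per the commission's guidelines."""
    return min(options, key=lambda o: (o.humans_harmed, o.property_damage))

swerve = Option("swerve into barrier", humans_harmed=0, property_damage=20000.0)
brake = Option("brake in lane", humans_harmed=1, property_damage=500.0)

print(choose([swerve, brake]).name)  # swerve into barrier
```

Note that the lexicographic tuple comparison guarantees property damage can never outweigh human harm, no matter how large the monetary figure, which is the substance of the commission's second point.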

Also in the press release, Federal Minister Alexander Dobrindt, who set up the commission, says, “In the era of the digital revolution and self-learning systems, human-machine interaction raises new ethical questions. Automated and connected driving is the most recent innovation where this interaction is to be found across the board.”

SEE: Self-driving cars vs hackers: Can these eight rules stop security breaches? (ZDNet)

The Moral Machine

Iyad Rahwan, Jean-Francois Bonnefon, and Azim Shariff of MIT felt it important for the public to have input on this discussion. The three, along with MIT developers Edmond Awad, Sohan Dsouza, Paiju Chang, and Danny Tang, set out to discover public opinion using what is being called experimental ethics. What they came up with is the Moral Machine.

In describing what the Moral Machine encompasses, the team from MIT writes, “From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever-more complex human activities at an ever-increasing pace.

“The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb,” the researchers continue. “This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices.”

With that in mind, Rahwan, Bonnefon, and Shariff want the Moral Machine website to take the discussion further by providing a platform for:

  • Building a crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas; and
  • Assembling a crowd-sourced discussion of potential scenarios of moral consequence.

Some of the questions being asked are:

  • Is it acceptable for an autonomous vehicle to avoid a motorcycle by swerving into a wall, considering that the probability of survival is greater for the passenger of the car than for the rider of the motorcycle?
  • Should different decisions be made when children are on board, since children both have a longer time ahead of them than adults and had less agency in being in the car in the first place?
  • If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?

One thing is clear: All those involved with researching the ethical consequences of autonomous vehicles agree with the MIT researchers. “These problems cannot be ignored,” the researchers are quoted as saying in this press release from MIT. “We are about to endow millions of vehicles with autonomy, so taking algorithmic morality seriously has never been more urgent.”