What happens when criminals figure out how to use robots to commit crimes? Christopher Markou, a Ph.D. candidate at the Faculty of Law at the University of Cambridge, takes a look at the disturbing possibility in We could soon face a robot crimewave … the law needs to be ready, a commentary he wrote for The Conversation.

“How do we make sense of all this?” asks Markou. “Should we be terrified? Generally unproductive. Should we shrug our shoulders as a society and get back to Netflix? Tempting, but no. Should we start making plans for how we deal with all of this? Absolutely.”

SEE: Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure (TechRepublic)

Who’s guilty: Robots or the owners?

For starters, Markou is curious how fault is determined when a robot does something considered illegal. For example, was it right that the US government absolved Tesla Motors of any responsibility after a driver was killed when his Tesla crashed while on Autopilot? How about the robot that was arrested and then released for buying drugs in Switzerland?

Successfully wading through the can of worms of determining culpability seems impossible. However, Markou, in his commentary, mentions something mildly reassuring. He writes that little if any thought was given to who owned the sky before the Wright brothers achieved their first sustained flight of a powered, heavier-than-air aircraft. “Time and time again, the law is presented with novel ideas,” he explains. “And despite initial overreaction, it got there in the end. Simply put: the law evolves.”

SEE: Stanford expert says liability issues puts future of robotics in peril (ZDNet)

The role of law

Before getting into the thorny issues of robot crime, Markou offers his thoughts as to why a system of laws is needed. “Ultimately, it [the law] is required within society for stabilizing people’s expectations,” writes Markou. “If you get mugged, you expect the mugger to be charged with a crime and punished.”

Markou then points out that the law, by definition, holds people accountable. They must comply with the law to the fullest extent their consciences allow. This compliance applies to organizations as well. “To varying degrees, companies are endowed with legal personhood, too,” he explains. “It grants companies certain economic and legal rights, but more importantly, it also confers responsibilities on them.”

Robots making decisions on their own

Now to the thorny stuff: Markou suggests the law is on the cusp of needing to evolve. Robotic platforms using artificial intelligence (AI) are close to making what might be considered independent decisions, and the law is unable to answer questions like the following:

  • If an advanced autonomous machine commits a crime of its own accord, how should it be treated by the law?
  • How would a lawyer go about demonstrating the “guilty mind” of a nonhuman?
  • Does evolving entail adapting to existing legal principles or writing new ones?

The “guilty mind” Markou refers to is an interesting concept. “Criminal law requires that an accused is culpable for their actions,” writes Markou. “The idea behind the guilty-mind concept is that the accused both completed the action of assaulting someone and had the intention of harming them, or knew harm was a likely consequence of their action.”

According to Markou, the first requirement for a crime based on a “guilty mind” is that AI technology reaches a level of sophistication allowing a device to bypass human control; at that point, questions about harm, risk, fault, and punishment become important. So, Markou believes that robots can commit crimes, but there is a caveat: “If a robot kills someone then it has committed a crime, but technically only half a crime, as it would be far harder to determine ‘guilty mind,'” explains Markou. “How do we know the robot intended to do what it did?”

SEE: Robot Law, book review: People will be the problem (ZDNet)

Why robot crime depends on emergence

Markou feels that whether a robot can commit a crime depends on “emergence.” Emergence occurs when a system does something novel, and likely beneficial, but also unforeseeable, which is why it presents a problem for the law. The hope is for emergent behavior that is safe and beneficial, rather than behavior that is illegal, unethical, and/or dangerous.

SEE: Robot kills worker on assembly line, raising concerns about human-robot collaboration (TechRepublic)

How would robots be punished?

It does not take much thought to envision the complexity of deciding whether a robot is guilty of committing a crime. To complicate matters further, Markou (maybe with a hint of sarcasm) talks about punishment, writing, “What’s a 30-year jail stretch to an autonomous machine that does not age, grow infirm, or miss its loved ones?” Markou concludes his commentary on a cautionary note:

“At present, we are systematically incapable of guaranteeing human rights on a global scale. So I cannot help but wonder how ready we are for the prospect of robot crime given that we already struggle to contain that done by humans.”