Police use robot to kill for first time; AI experts say it's no big deal but worry about future

To prevent a gunman from killing more people in Dallas, the police chief deployed a robot to stop him. Police and AI experts weigh in on what this decision means for the use of tech in law enforcement.

Image: Northrop Grumman

On July 8, after 1:00 a.m., on the second floor of El Centro College in Dallas, Texas, Chief of Police David Brown ordered his SWAT team to do something no police department had done before: use a robot to kill a shooter.

The decision didn't happen quickly. It was made after hours of negotiations and 45 minutes of active gunfire exchanged with Micah Johnson, who had shot and killed five Dallas police officers and wounded nine others. It was a decision that, according to the Dallas Morning News, had Mayor Mike Rawlings' full support.

In this case, the robot used was the Remotec Andros Mark 5A-1, produced by Northrop Grumman. This particular type of robot had been used by the military to defuse bombs, but had never been used by police to deliver lethal force.

While the use of the robot was novel, the rationale behind it seemed typical of the way police chiefs must make decisions about the use of force.

The choice was one that Michael D. Reitan, chief of police for the West Fargo Police Department, said he would have made if he'd been in similar circumstances. (TechRepublic spoke to Reitan last year about drones in law enforcement, since North Dakota was the first state to legally allow police use of drones.)

SEE: Dallas Police's killer robot sparks debate (ZDNet)

"Placing a robot instead of a human being into an environment where there is a high probability of harm to an officer or the public is the right decision," Reitan told TechRepublic.

When there is an encounter with a dangerous subject, Reitan said, officers must use the least amount of force necessary to take a subject into custody. However, "depending on the immediacy of the threat, lethal force may be the first and only option," Reitan said.

"Leadership," said Reitan, "must employ the tools and tactics available to them to reach the best outcome."

The tools now include robots.

The robot was operated by a human

Robots have several functions in law enforcement. They can have cameras and microphones that can help surveil an area. The smaller ones can be thrown into an area; larger ones can physically climb obstacles. Some can grasp and manipulate objects.

However—and this is an important point—"none of the robots operate autonomously," Reitan said. "They require the input of the human operator to move or perform an action."

AI experts agree, and stress that the robot was not acting autonomously.

"There's a real risk that because of the headlines, the public will misunderstand what happened. They start thinking that this is actually a robot that's making its own decisions," said Toby Walsh, professor of AI at The University of New South Wales. "But, although it was a robot, it was not autonomous in any way. It was just a remote controlled device."

Roman Yampolskiy, head of the Cybersecurity Lab at the University of Louisville, agrees.

"The headlines on this story are a bit deceiving. Saying that a robot was used to blow up the suspect implies that the robot was the one to make the lethal decision, which is not true in this case," Yampolskiy said.

SEE: Creating malevolent AI: A manual (TechRepublic)

"A police officer was making every single decision with regards to this robot," said Yampolskiy. "This is no different than any other type of killing at a distance currently used, such as firing a gun or using a rocket."

Walsh said we shouldn't worry about the robot being used in this case. "At the end of the day, there was still a human in the loop very much making that decision as to whether to explode the device."

Walsh was one of many technologists and experts who signed an open letter warning against autonomous weapons. But this situation, he said, doesn't cross that line. "It's not crossing the boundary that myself and the thousands of others who signed that open letter last year were warning about: autonomous weapons being given the ability to choose life or death," Walsh said.

Experts worry about implications for future use of the technology

Yampolskiy expects "a significant amount of independent behavior from military robots," and believes it is urgent to address how their lethal behavior should be governed. "Should it be banned? It is my personal opinion that a robot should never be in a position to take a human life on its own volition, without human supervision. Many NGOs agree with me, such as Human Rights Watch, but the debate is far from settled," he said.

SEE: AI gone wrong: Cybersecurity director warns of 'malevolent AI' (TechRepublic)

Marie desJardins, AI professor at the University of Maryland, Baltimore County, agrees with Yampolskiy. "The real challenge will come when we start to put more autonomy into drones and assault robots. I don't think that we should be building weapons that can, of their own accord, decide who to kill, and how to kill," said desJardins.

"I think those decisions always need to be made by people—not just by individual people, but by processes in military organizations that have safeguards and accountability measures built into the process," she said.

Beyond the prospect of robots being authorized to kill on their own, Walsh worries about how quickly technology meant for one purpose can be repurposed for another.

"This wasn't intended to be a robot bomb," said Walsh. "It was actually designed to do quite the opposite—it was designed to defuse bombs, to make the world a safer place. It just goes to show once you have these technologies, how easy it is for people to get them to do things that they're not intended," he said.

What will happen, Walsh wonders, "if we start building autonomous weapons that will be quickly made to do things that we weren't expecting? The Dallas police force was in a very unfortunate situation, and arguably doing perfectly the right thing," he said, "but in the future it could be terrorist organizations like ISIS doing this."

Societal questions remain

The situation brings to light larger social concerns. Both among the public and within the AI community, Walsh said, people are "rightly concerned that the technology's moving perhaps faster than our legal, ethical and moral frameworks. There's many important questions that we, not just as technologists, but we as a society have to think about where this is going to take us, what sort of world we want to end up with."

It's a moment that gives us a chance "to reflect on where we're going to go," said Walsh. "It's not really a particularly novel use of technology. It's more of actually helping us to frame the questions that we should be thinking about as a society, as to how the technology should be used.

"This was a remote controlled robot—but at some point, it will be an autonomous robot. We do have autonomous vacuum cleaners running around, we do have autonomous lawn mowers running around," said Walsh. "You could reposition those to do similar things, right?"
