Artificial Intelligence

Google employees demand end to company's AI work with Defense Department

More than 3,000 Google employees signed a letter criticizing the company for assisting with Project Maven, a Pentagon initiative involving AI and drone footage.

Building a slide deck, pitch, or presentation? Here are the big takeaways:
  • Employees addressed a letter to CEO Sundar Pichai, saying Google should "not be in the business of war," urging the firm to stop its work on AI analysis of drone footage.
  • Google claims the technology, a part of Project Maven, will "save lives and save people from having to do highly tedious work."

Google is facing heavy criticism from its own employees following revelations that the tech company is working with the Department of Defense on Project Maven, an effort to use artificial intelligence (AI) image recognition software to sort through drone and security footage.

"We cannot outsource the moral responsibility of our technologies to third parties," the employees wrote in a letter signed by 3,100 of them. "Building this technology to assist the US Government in military surveillance - and potentially lethal outcomes - is not acceptable."

Outrage has been growing within Google since the pact with the Pentagon was announced last year. The deal involves Google's TensorFlow software, which the letter says is being adapted into "a customized AI surveillance engine that uses 'Wide Area Motion Imagery' data captured by US Government drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense."

SEE: Employee political activity policy (Tech Pro Research)

In a statement, Google said the project is for "non-offensive purposes" and was only intended "to save lives and save people from having to do highly tedious work."

"Any military use of machine learning naturally raises valid concerns," Google said in the statement. "We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies."

Both Google and the Pentagon have stressed that the technology is not ready to be used in combat situations, with Marine Corps Col. Drew Cukor telling the audience at the 2017 Defense One Tech Summit that "AI will not be selecting a target [in combat] ... any time soon. What AI will do is [complement] the human operator."

But Col. Cukor also said that he believes the Defense Department is "in an AI arms race," and acknowledged that "the big five Internet companies are pursuing this heavily."

Cukor later added: "Key elements have to be put together...and the only way to do that is with commercial partners alongside us."

According to the Wall Street Journal, the Defense Department spent $7.4 billion on technology involving AI last year, and Google, Microsoft, and Amazon are openly battling for a variety of defense contracts involving cloud computing and other software.

But the employee letter argues that Google is damaging its brand by working on Project Maven and contributing to "growing fears of biased and weaponized AI."

"The argument that other firms, like Microsoft and Amazon, are also participating doesn't make this any less risky for Google," the letter said. "Google's unique history, its motto Don't Be Evil, and its direct reach into the lives of billions of users set it apart."

Project Maven began in April last year, with the stated goal of utilizing machines to capitalize on the Defense Department's massive troves of data collected through drone footage and surveillance operations. AI is already used by other parts of the military, and since 2014 has been used widely in law enforcement.

The Justice Department now promotes the use of AI software to perform "risk assessments" of how likely a person on trial is to commit a future crime. The scores are often handed to judges and affect sentencing decisions in states across the country, sometimes with disastrous effects: according to ProPublica, Black defendants were 77% more likely to be pegged as "at higher risk of committing a future violent crime" and 45% more likely "to be predicted to commit a future crime of any kind."

Google has tried to tamp down concerns about handing over vital AI recognition software to the Defense Department, with former Alphabet Executive Chairman Eric Schmidt admitting last year in an interview that "there's a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly, if you will."

But Schmidt went on to say in that interview that it was vital that he and other tech industry leaders stay in communication with the military "to keep the country safe."

Yet many of Google's employees disagreed, starting the letter off with: "We believe that Google should not be in the business of war."


About Jonathan Greig

Jonathan Greig is a freelance journalist based in New York City. He recently returned to the United States after reporting from South Africa, Jordan, and Cambodia since 2013.
