The Rekognition software has faced criticism for being particularly bad at identifying people with darker skin.
Amazon made headlines on Wednesday evening when it announced that it would be putting a one-year moratorium on the use of its facial recognition software by police departments.
In a short blog post, the company said it "will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families."
"We've advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested," the statement said.
This week, IBM announced it was exiting the facial recognition business. Hundreds of companies are taking steps to address areas of bias in light of the recent global protests centered on George Floyd, who died after a Minneapolis police officer knelt on his neck for nearly nine minutes.
Rekognition has long been criticized by civil rights groups and by Black technology leaders, who have spent years taking issue with the software's repeatedly demonstrated bias in identifying faces with darker skin.
Two studies done by the ACLU, in 2018 and in 2019, showed that the software falsely matched headshots of members of Congress and famous athletes with mugshots of people in prison. Two weeks ago, Comparitech conducted a follow-up study showing that nothing had changed in terms of the software's accuracy.
The software actually performed worse this time, incorrectly matching "an average of 32 US Congresspersons to mugshots in the arrest database" out of the 530 tested, and misidentifying an average of 73 of the 1,429 UK politicians tested.
"Amazon's and IBM's announcements about moratoriums on police use of face recognition is welcome news. At this critical moment in our history, now is not the time to empower police with the ability to identify protesters or restrict freedoms of movement and assembly. We need more regulation that stipulates how, when, where, and in what context police are allowed to use face recognition, and with whom police can share face recognition data," said Paul Bischoff, privacy advocate at Comparitech, who conducted the May 2020 study of Rekognition.
"Allowing police to purchase face recognition services without oversight could have serious consequences, both predictable and unforeseen. As our study found, the technology has an unintentional but clear racial bias, and improper use can result in high rates of misidentification."
The company courted controversy in 2018 when Amazon Web Services general manager of artificial intelligence Matthew Wood criticized the ACLU study and the work of researchers Joy Buolamwini and Deb Raji in multiple blog posts, claiming they were misrepresenting the program's accuracy by intentionally using it incorrectly.
Much of the dispute over police use of the software boils down to the confidence threshold that users set for Rekognition. After the study from Buolamwini and Raji made headlines, Amazon repeatedly said in documents that all police departments should use the software at a 95% threshold. Police departments have already said they do not do this, with most using the software at its default 80% threshold. The studies done by researchers likewise use the 80% threshold as the benchmark.
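The effect of that threshold setting can be sketched with a toy example. The similarity scores below are hypothetical values invented for illustration (in the real AWS API, the setting corresponds to parameters such as the `SimilarityThreshold` argument of Rekognition's `CompareFaces` operation); the point is only that lowering the cutoff from 95 to 80 turns marginal candidates into reported "matches":

```python
# Illustrative sketch, not Amazon's implementation: how the confidence
# threshold changes how many candidates a face-matching system reports
# as matches. Scores are hypothetical similarity values on a 0-100 scale.

def matches_above_threshold(scores, threshold):
    """Return the candidate scores that meet or exceed the threshold."""
    return [s for s in scores if s >= threshold]

hypothetical_scores = [97.2, 91.5, 86.0, 83.4, 80.1, 76.9]

# At the 80% default, five of the six candidates count as matches...
default_hits = matches_above_threshold(hypothetical_scores, 80)

# ...but at the 95% level Amazon recommends for law enforcement, only one does.
recommended_hits = matches_above_threshold(hypothetical_scores, 95)

print(len(default_hits), len(recommended_hits))  # prints: 5 1
```

This is why the threshold matters so much in the studies above: every borderline score admitted by the lower default is another chance at a false match against someone's mugshot.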
Despite the issues with Rekognition, Amazon has openly sold it to police departments and security forces across the world. The company tried to sell the program to the Immigration and Customs Enforcement agency but will not officially say how many police departments are using the software.
When pressed on the issue in February, Amazon Web Services CEO Andy Jassy told PBS that company officials would stop any police department from using Rekognition if they found it was being misused. The company has released no further information about how this would work, or how it would even know how a police department was using the software.
When news of the Rekognition moratorium was released, many thanked Buolamwini and Raji for their work in publicizing the problems with the software.
"This is a collective effort by not only researchers, but also civil liberties organizations, activists, employees, and shareholders applying pressure coupled with the tragic death of #GeorgeFloyd and tardy corporate acknowledgment that #BlackLivesMatter," Buolamwini said on Twitter.
Despite excitement over the announcement, many groups and researchers criticized the move for how vague it was and for setting a time limit on it. Many also noted that there is no way for the public to know whether police departments have actually stopped using Rekognition.
Evan Greer, deputy director of Fight for the Future, called the move "a public relations stunt" and said it was a sign "that facial recognition is increasingly politically toxic."
"Amazon knows that facial recognition software is dangerous. They know it's the perfect tool for tyranny. They know it's racist—and that in the hands of police it will simply exacerbate systemic discrimination in our criminal justice system," Greer said in a statement.
"Amazon's Ring surveillance doorbell company still maintains more than 1,000 surveillance partnerships with police departments across the country, enabling automated racial profiling and surveillance of entire neighborhoods. The reality is that facial recognition technology is too dangerous to be used at all. Like nuclear or biological weapons, it poses such a profound threat to the future of humanity that it should be banned outright."
Greer went on to imply that Amazon was only putting the moratorium in place as a show of good faith to lawmakers currently writing legislation on the use of facial recognition software.
Nearly 80 scientists, including Amazon's former principal scientist for artificial intelligence, Anima Anandkumar, signed a letter last year condemning Rekognition and calling on Amazon to stop selling it to law enforcement until legislation was in place to regulate how it was used.
The ACLU also released a statement about Amazon's decision, writing that Rekognition might need to be banned for longer than one year.
"It took two years to get to this point, but it is about time Amazon started recognizing the dangers face recognition surveillance poses to Black and Brown communities and civil rights more broadly. Face recognition technology gives governments the unprecedented power to spy on us wherever we go. It fuels police abuse. These dangers will not just go away in a year," the ACLU wrote on Twitter Wednesday night.
"Amazon must commit to a full stop of its face recognition sales until the dangers can be fully addressed. It must also urge Congress and legislatures across the country to pause law enforcement use of the technology. We will continue the fight to defend our privacy rights from face surveillance technology. That means you're next, Microsoft."