Police in Detroit have admitted that they mistakenly arrested a Black man because of facial recognition software they were using.
On Wednesday morning, the ACLU announced that it was filing a complaint against the Detroit Police Department on behalf of Robert Williams, a Black Michigan resident who the group said is one of the first people falsely arrested due to facial recognition software.
Williams and lawyers from the ACLU said Detroit police were looking for someone who had broken into a Shinola watch store. They took security camera footage from the store's owner and ran it through the city's facial recognition software, which returned the 42-year-old Williams as a match.
SEE: Cheat sheet: Artificial intelligence (free PDF) (TechRepublic)
Police arrested him in January in front of his wife, children, and neighbors, Williams said in a first-person recounting of the incident published by The Washington Post. After holding him for 16 hours in a crowded Detroit Detention Center cell, an officer brought him into an interrogation room and showed him the security photos.
According to Williams, he held the photo next to his face to prove it wasn't him, and one of the officers turned to another and said, "the computer must have gotten it wrong." His attorney later discovered that the security camera footage was sent to the Michigan State Police, and its facial recognition software pulled up Williams' driver's license photo.
The Detroit Police Department did not respond to requests for comment but in a statement to NPR said, "After the Williams case, the department enacted new rules. Now, only still photos, not security footage, can be used for facial recognition and only in the case of violent crimes."
SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)
For years, researchers with the ACLU, MIT and other institutions have consistently shown that facial recognition software remains highly inaccurate, particularly for people with darker skin. Despite these inaccuracies and widespread concerns, hundreds of police departments and the FBI use the software daily to identify people during investigations.
But in a statement, the ACLU said law enforcement never tells people if they have been identified using facial recognition software, and this is the first documented instance where police admitted that their use of the software is what caused the mistake. According to the ACLU, the mistake was only revealed because Williams heard what the officers said during his interrogation and his lawyers were able to push for more information about how he was identified.
"We have long warned that one false match can lead to an interrogation, arrest, and, especially for Black men like Robert, even a deadly police encounter. Given the technology's flaws, and how widely it is being used by law enforcement today, Robert likely isn't the first person to be wrongfully arrested because of this technology," the ACLU said in a statement.
"He's just the first person we're learning about."
SEE: Equitable tech: Companies pause facial recognition, but major questions remain (TechRepublic)
Calls for bans increase
Several US cities like San Francisco have outright banned police from using facial recognition software, but Williams, the ACLU, and other experts are now calling for a nationwide ban on use of the software, at least until it can be perfected.
Josh Bohls, CEO and founder of Inkscreen, a content capture company, said facial recognition technology cannot be solely relied on to make arrest determinations and called it "too new and unproven to be determinative of a suspect's identification."
He added that police still have not fully worked through the legal and privacy implications of using it and said that, at most, police in Detroit should have used the match to interview Williams, not arrest him.
According to James McQuiggan, security awareness advocate at KnowBe4, facial recognition uses artificial intelligence (AI) to map a face as vectors and matrices, but its accuracy can be limited by the quality of light on the subject.
"People with darker skin tones and an image with low light quality presents a complication that does not appear adequately addressed by facial recognition software. More concerning is there is no proper auditing system in place for these systems when it comes to false positives or misuse," McQuiggan said, adding that whatever these systems produce should be used as reference points and not sole reasons to arrest people.
Facial recognition software should only be used with high-definition cameras and depth-matching sensors at close range, according to Chris Clements, vice president of solutions architecture at Cerberus Sentinel.
He said use of the technology was still in its infancy and should be treated with low confidence until it can be corroborated by other evidence. Attempting to match a still frame from a low-quality video camera several feet away is likely to produce very low confidence matches, he added.
Just three weeks ago, Comparitech.com's Paul Bischoff released a study of Amazon's facial recognition software that found it struggled even to identify politician headshots, which are far clearer than any security footage police use. Even with crystal-clear photos, the software incorrectly matched an average of 32 US Congresspersons to mugshots in an arrest database.
Bischoff said instances like this are prime examples of why a moratorium on police use of face recognition software is needed until regulations are put in place to restrict how it can be used.
SEE: OpenAI's commercial release of API raises serious questions about AI misuse (TechRepublic)
"The Detroit Police fed grainy video footage to a face recognition tool--tools that can misidentify people even when clear headshots are used. The suspect was Black, which reinforces the fact that face recognition misidentifies people of color at a higher rate than white people, and thus disproportionately impacts people of color," Bischoff said.
"Worst of all, the mismatch led police to jump to conclusions and make an arrest without proper due diligence. This is just one case that went public, but police use face recognition behind closed doors all the time, and we'll keep seeing the same mistakes and abuse of face recognition until proper regulation is in place."
Multiple advocacy groups are now pushing lawmakers to at least put some laws in place to regulate how the software is used. Like the changes the Detroit Police Department described in its statement, civil rights organizations want limits to be set on what crimes facial recognition can be used for and what accuracy thresholds should be in place.
"What happened to Robert Williams and his family should be a wake up call for lawmakers. Facial recognition is doing harm right now. This is only the first case that has come to light. There are almost unquestionably people sitting in jail right now who were put there because they were falsely accused by a racist computer algorithm. Enough is enough. It's time for Congress to do their job and ban facial recognition surveillance in the United States," said Evan Greer, deputy director of rights group Fight for the Future.
Even the companies selling facial recognition software are asking for legislation to govern the technology. In light of recent protests around the globe, Amazon, IBM and other major tech companies agreed to at least a one-year moratorium on allowing their facial recognition programs to be used by police. An Amazon statement said it has "advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology," and said the US Congress "appears ready to take on this challenge."
SEE: Robotic process automation: A cheat sheet (free PDF) (TechRepublic)
"We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested," the statement said.
But for Williams, the damage has already been done. In his retelling of what happened, he spoke about the horror of his children watching him being arrested and his realization that, despite the trauma, he was one of the lucky ones.
"I never thought I'd have to explain to my daughters why Daddy got arrested. How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway? Why is law enforcement even allowed to use such technology when it obviously doesn't work? I get angry when I hear companies, politicians and police talk about how this technology isn't dangerous or flawed," Williams wrote.
"I wouldn't be surprised if others like me became suspects but didn't know that a flawed technology made them guilty in the eyes of the law. I wouldn't have known that facial recognition was used to arrest me had it not been for the cops who let it slip while interrogating me. I keep thinking about how lucky I was to have spent only one night in jail—as traumatizing as it was. Many Black people won't be so lucky. My family and I don't want to live with that fear. I don't want anyone to live with that fear."
- IT leader's guide to deep learning (TechRepublic Premium)
- Magic Leap 1 augmented reality headset: A cheat sheet (free PDF) (TechRepublic download)
- Artificial intelligence ethics policy (TechRepublic Premium)
- What is AI? Everything you need to know about Artificial Intelligence (ZDNet)
- 6 ways to delete yourself from the internet (CNET)
- Artificial Intelligence: More must-read coverage (TechRepublic on Flipboard)