The Business Risks of Facial Recognition Technology

Long the domain of sci-fi and James Bond movies, facial recognition is finally entering the mainstream. Tools like Apple’s FaceID and Microsoft’s Windows Hello have brought facial recognition to consumer devices and helped replace passwords for everything from unlocking our devices to paying for goods and services.

It’s easy to let your mind run wild with business use cases for facial recognition, from using employees’ faces as access tokens to myriad possibilities in industries from marketing to travel. The potential applications range from the relatively mundane task of replacing old-fashioned timecards to individually tailored marketing that appears on interactive signage as someone walks by.


As with any new technology, some risks extend far beyond technical feasibility or implementation challenges and stray into areas as diverse as ethics and legality. As you evaluate facial recognition applications, consider the following challenges.

Is the face even a good identifier?

An unstated assumption behind facial recognition is that the human face is unique. Perhaps this assumption is based on the very human ability to pick out a friend or loved one in a crowd with a glance and our routine use of our faces as an emotive tool.

However, we’re all aware of identical twins and perhaps have experienced the doppelgänger phenomenon, where we’re mistaken for someone we’ve never met. Interestingly, very little academic research has been done into how unique our faces are, both from a biological perspective and in terms of the ability of common facial recognition algorithms to reliably discern individuals.

A rather dense recent study from Cornell University attempts to test the impact of factors ranging from age and gender to the presence of twins on various facial recognition algorithms’ ability to identify unique faces. The study concludes that we don’t fully understand how unique a particular face truly is.

This uncertainty might be acceptable if you’re tracking traffic flows through your warehouse or presenting tailored advertisements. Still, it could be problematic if you’re using facial recognition to provide access to a secure or dangerous facility, for example.

It might seem odd to question the fundamental premise of facial recognition technology: that the face is a truly unique identifier. However, this most human of assumptions may not only be biologically incorrect, as identical twins and doppelgängers demonstrate, but may also be technically incorrect.

Just because you can, doesn’t mean you should

In an era of convoluted terms and conditions and other legalese, it’s relatively straightforward to legally protect yourself when deploying technologies like facial recognition. You can generally gain consent from consumers or employees without them fully understanding what they’re consenting to, and likely appease your legal department. However, it’s worth pausing and applying the headline news test: How would you feel if your activities were the lead story in a major news outlet, detailed in plain language?

Using facial recognition to clock hourly employees in and out of work might be fine, but if you’re installing hidden cameras near the restroom and docking employees for “bio breaks,” what message does that send to employees? Similarly, using facial recognition to gather demographic information is technically possible, but how would you answer a not-unfounded accusation that you’re racially profiling your employees or customers? Several facial recognition algorithms promise an ability to read emotional states, but what happens if your algorithm misreads the emotions of people of a particular gender, race or age, causing your company to treat them differently?

This phenomenon is already playing out. Organizations ranging from the Internal Revenue Service to tech giants like Amazon and Microsoft have been taken to task in the press for using facial recognition technology.

Test the non-tech risks just like you’d test the technology

Tech leaders are used to building proofs of concept or doing a small pilot test of unfamiliar technology. You can apply a similar approach to testing the ethical and reputational risks of the technology.

For example, if you’re considering capturing demographic information on your employees, create a policy document for a hypothetical company detailing how it would capture and store that data, and see how some current employees react. You could also draft a hypothetical news story and test the reaction of some key customers, or run dozens of similar experiments.

Just as you wouldn’t invest significantly in an unfamiliar technology without initial, limited testing, so too should you test the consumer perception of risky technologies like facial recognition.

There are undoubtedly exciting applications of facial recognition technology, and the popularity of tools like Apple’s Face ID proves that consumers are willing to embrace them when used appropriately. However, there are also cases of significant pushback and reputation-damaging press resulting from these technologies’ actual or perceived misapplication. It’s well worth investing the time to understand the risks before applying them at your company.
