Fairness-verification tool helps avoid illegal bias in algorithms

Researchers suggest human bias influences algorithms more than we realize, and offer a solution to weed the bias out. They received a $1 million National Science Foundation grant for their project.


If science-fiction writers are correct, at some point in the future, smart robots will take over the world--whether that's good or bad depends on the writer. That said, non-humanoid robots (i.e., computers) incorporating artificial intelligence (AI) are already taking over parts of our world by making decisions that affect our lives.

For instance, my TechRepublic article Biases in algorithms: The case for and against government regulation discusses how the algorithm behind a driverless car using AI will likely be required to make choices on whether to put its passengers, passengers in other vehicles, or people on the side of the road in harm's way.

Something else to consider: the controlling algorithm carries the biases of the person or persons who designed the algorithm-based control system. "They [algorithms] do not have prejudices and are unemotional," writes Alan Reid, senior lecturer in law at Sheffield Hallam University, in a post for The Conversation. "But algorithms can be programmed to be biased or unintentional bias can creep into the system."

SEE: The Machine Learning and Artificial Intelligence Bundle (TechRepublic Academy)

Remove bias from algorithms

Regarding bias in algorithms, this University of Wisconsin-Madison press release by Jennifer Smith states that computer-science professors Aws Albarghouthi, Shuchi Chawla, Loris D'Antoni, and Jerry Zhu, along with graduate students Samuel Drews and David Merrell, are well aware of the problem. From the press release:

"Yet while some may assume that computers remove human bias from decision-making, research has shown that is not true. Biases on the part of those designing algorithms, as well as biases in the data used by an algorithm, can introduce human prejudices into a situation. A seemingly neutral process becomes fraught with complications."

If you are wondering why this is important, consider whether you have recently applied for a loan, checked your credit score, or booked a flight. If the answer is yes, algorithms or mathematical models--designed to predict a likely outcome--were probably a factor in the decision-making process.

In the ZDNet article Inside the black box: Understanding AI decision-making, Charles McLellan writes, "Artificial intelligence algorithms are increasingly influential in people's lives." He goes on to look at several areas where AI is already in place, making decisions.

SEE: Special report: How to implement AI and machine learning (free PDF) (TechRepublic)


How a fairness-verification tool could help

In their research paper, Fairness as a Program Property (PDF), three members of the UW-Madison research team (Albarghouthi, D'Antoni, and Drews), along with Aditya Nori of Microsoft Research, write:

"We have built a fairness-verification tool, called FairSquare, that takes a decision-making program, a population model, and verifies fairness of the program with respect to the model."

The researchers feel a tool like FairSquare is vital because those working with decision-making algorithms may not fully understand the methodology being used by the algorithm. "Many companies using these algorithms don't understand what the algorithms are doing," explains Albarghouthi. "An algorithm seems to work for them, so they use it, but usually there is no feedback or explainability on how it is exactly working. That makes these algorithms difficult to regulate in terms of avoiding illegal bias."

The researchers offer an example of a bank using a third-party AI tool to evaluate who qualifies for a mortgage or small-business loan. The bank may not know:

  • how the software is classifying potential customers;
  • the accuracy of the predictions; or
  • whether the tool introduces bias or not.
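One thing such a bank could do, even without access to the vendor's code, is audit the tool's decisions for disparate impact across groups. The sketch below is purely illustrative--the classifier output, group labels, and the demographic-parity metric are assumptions for the example, not part of the researchers' tool:

```python
# Hypothetical audit of a black-box loan classifier for demographic parity.
# The decisions and group labels are invented for illustration; a real audit
# would use the bank's actual applicant records and outcomes.

def demographic_parity_ratio(decisions, groups, protected, favorable="approve"):
    """Ratio of favorable-outcome rates: protected group vs. everyone else."""
    prot = [d for d, g in zip(decisions, groups) if g == protected]
    rest = [d for d, g in zip(decisions, groups) if g != protected]
    rate = lambda ds: sum(d == favorable for d in ds) / len(ds)
    return rate(prot) / rate(rest)

# Toy data: group A is approved 3 of 3 times, group B only 1 of 3 times.
decisions = ["approve", "approve", "approve", "approve", "deny", "deny"]
groups    = ["A", "A", "A", "B", "B", "B"]
ratio = demographic_parity_ratio(decisions, groups, protected="B")
print(round(ratio, 2))  # ratios far below 1.0 suggest possible disparate impact
```

A check like this only surfaces a statistical symptom; it says nothing about why the classifier behaves that way, which is exactly the opacity problem the researchers describe.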

Another example was brought to light in the ProPublica story Machine Bias, where a team of investigative reporters uncovered racial bias in a tool used to predict an offender's likelihood of committing another crime. The reporters write, "The formula was likely to falsely flag black defendants as future criminals, wrongly labeling them at almost twice the rate as white defendants."
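The disparity ProPublica reported is a gap in false positive rates--the share of defendants who did not reoffend but were nonetheless flagged as high risk. A minimal sketch of that comparison, using invented records rather than ProPublica's data:

```python
# Sketch of the false-positive-rate comparison behind the ProPublica finding.
# Each record is (group, flagged_high_risk, reoffended) -- invented data.
records = [
    ("black", True,  False), ("black", True,  False), ("black", False, False),
    ("black", True,  True),  ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", True,  True),
]

def false_positive_rate(records, group):
    """Among non-reoffenders in `group`, the share wrongly flagged high risk."""
    flags = [flagged for g, flagged, reoffended in records
             if g == group and not reoffended]
    return sum(flags) / len(flags)

for group in ("black", "white"):
    print(group, false_positive_rate(records, group))
```

In this toy data the rate for one group is twice the other's, mirroring the roughly two-to-one disparity the reporters found.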

And those are just two examples.

SEE: Ethics Policy (Tech Pro Research)

Why not reverse engineer the code?

So why not have a third party reverse engineer the algorithm and ancillary code? The simple answer: Algorithms, for the most part, are proprietary. "Algorithms are usually commercially sensitive and highly lucrative," notes Reid in his article for The Conversation. "Corporations and government organizations will want to keep the exact terms of how their algorithms work a secret."

Reid adds that algorithms are usually protected by patents and confidentiality agreements, making it nearly impossible to obtain an algorithm's inner workings.

Who decides what's fair when it comes to bias?

Remember the driverless car? Who decides what is fair when it comes to bias? The UW-Madison researchers suggest that "fairness" must be formally defined, proven, and considered a property of the software in order to control human bias--and they feel FairSquare can be used to achieve that:

"We have used FairSquare to prove or disprove fairness of a suite of population models and programs representing machine-learning classifiers that were automatically generated from real-world datasets used in other work on algorithmic fairness."
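FairSquare itself performs exact symbolic verification, but the flavor of the property it checks--a group-fairness condition over a population model--can be approximated by sampling. The population model, classifier, and threshold below are all invented for illustration:

```python
import random

random.seed(0)

# Invented population model: each applicant has a group and a credit score.
def sample_applicant():
    group = random.choice(["minority", "majority"])
    score = random.gauss(650, 50)
    return group, score

# Invented decision program under audit (note it ignores group entirely).
def approve(group, score):
    return score > 640

# Monte Carlo estimate of a group-fairness ratio of the kind FairSquare
# proves exactly: P(approve | minority) / P(approve | majority), which a
# fairness condition might require to exceed 1 - epsilon.
def fairness_ratio(n=100_000):
    approved = {"minority": 0, "majority": 0}
    total = {"minority": 0, "majority": 0}
    for _ in range(n):
        group, score = sample_applicant()
        total[group] += 1
        approved[group] += approve(group, score)
    return (approved["minority"] / total["minority"]) / \
           (approved["majority"] / total["majority"])

print(fairness_ratio())  # close to 1.0, since this classifier ignores group
```

Sampling only estimates the ratio; the point of a verifier is to prove the property holds for the whole population model, not just for the samples drawn.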

The UW-Madison press release ends on a cautionary but hopeful note. "Machine-learning algorithms have become very commonplace, but they aren't always used in responsible ways. I hope our research will help engineers build safe, reliable and ethical systems," says UW-Madison researcher David Merrell.

Support from the National Science Foundation

The National Science Foundation has shown enough interest in the UW-Madison researchers' work to issue a million-dollar grant to the researchers' project Formal Methods for Program Fairness.
