Artificial Intelligence

How tech leaders can combat bias in AI systems

At the 2018 Grace Hopper Celebration, Rebecca Parsons of ThoughtWorks explained why AI bias is so dangerous for companies.

At the 2018 Grace Hopper Celebration, Rebecca Parsons of ThoughtWorks spoke with TechRepublic's Alison DeNisco Rayome about why AI bias is so dangerous for companies. The following is an edited transcript of the interview.

Rebecca Parsons: With many of these AI systems, it's hard to know on what basis they're making the decision. And there are legal protections in many countries in the world against discrimination on the basis of race, gender, or sexual orientation. And if you don't know on what basis decisions are being made, how can you, in fact, know that you are compliant with the law?

When you think about how an AI system learns a model, it looks at the data that exists from the past, and it develops a model based on that data. If there is systemic bias in the creation of that data (and there is a lot of evidence that there is, for example, racial bias in our criminal justice system), how can these models, which are trained on historical data, not be biased?
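This point can be made concrete with a toy simulation. The sketch below is purely illustrative (the scenario, rates, and function names are invented for this example): it generates "historical" decisions in which group B was approved less often than an equally qualified group A, then fits a trivial frequency-based model to those decisions. The model faithfully reproduces the disparity, because the disparity is all the data contains.

```python
import random

random.seed(0)

# Hypothetical historical decisions: applicants are equally qualified,
# but past decision-makers approved group "B" less often (built-in bias).
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    approve_rate = 0.7 if group == "A" else 0.4  # the bias in the data
    history.append((group, random.random() < approve_rate))

def train(records):
    """A naive 'model': predict the majority historical outcome per group."""
    rates = {}
    for g in ("A", "B"):
        outcomes = [approved for grp, approved in records if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return lambda g: rates[g] >= 0.5

model = train(history)
# The learned model approves group A and rejects group B: it has not
# discovered anything about qualifications, only about past bias.
print(model("A"), model("B"))
```

Nothing in the training step is malicious; the bias enters entirely through the labels the past produced.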


There are ways that you can think about testing these models to see if they exhibit bias. There are ways of analyzing the data set to try to see if there are surrogates for things like race and gender coming into the data, and analyzing the data in such a way that perhaps you can start to mitigate against it. But a lot of it is just being aware of what kind of data you're drawing from. And, are there issues in the creation of that data that we should be aware of?
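One simple way to screen for such surrogates, sketched below under invented assumptions (the feature name, the toy data, and the 0.5 threshold are all illustrative), is to measure the correlation between each candidate feature and the protected attribute. A feature that correlates strongly with, say, race can stand in for it even when race itself is excluded from the model.

```python
import statistics

# Hypothetical dataset: a ZIP-code indicator may act as a surrogate for a
# protected attribute even though the attribute is not itself a feature.
rows = [
    # (zip_is_district_9, protected_attribute)
    (1, 1), (1, 1), (1, 0), (0, 0), (0, 0),
    (1, 1), (0, 0), (0, 1), (1, 1), (0, 0),
]

def pearson(xs, ys):
    """Plain Pearson correlation; a high absolute value flags a possible proxy."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

zips = [z for z, _ in rows]
prot = [p for _, p in rows]
r = pearson(zips, prot)
if abs(r) > 0.5:  # the cutoff is a judgment call, not a standard
    print(f"zip_is_district_9 may be a surrogate (r = {r:.2f})")
```

A flagged feature is not automatically disqualifying, but it tells you where to look before the model is trained rather than after it has made biased decisions.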

There are also techniques where it's a bit easier to understand the basis on which a recommendation is being made. And so, maybe you can train using different methods on the same data, and use the more interpretable one to see what kinds of patterns it's picking up in the data, and that might give you insight into the bias that might exist in the data.
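The idea can be sketched with a deliberately transparent stand-in model (all feature names and data below are invented for illustration): rank each feature by how strongly it separates the historical outcomes. If a proxy feature like a ZIP-code indicator separates outcomes far more sharply than a merit feature does, that is a signal the historical data, and any model trained on it, leans on the proxy.

```python
# Hypothetical hiring records; both features and outcomes are illustrative.
data = [
    # (years_experience_high, zip_is_district_9, hired)
    (1, 0, 1), (1, 1, 0), (0, 0, 1), (1, 0, 1),
    (0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 1, 0),
]

def outcome_gap(rows, idx):
    """Difference in positive-outcome rate when feature idx is 1 vs 0.
    A large absolute gap means the feature strongly separates outcomes."""
    pos = [r[-1] for r in rows if r[idx] == 1]
    neg = [r[-1] for r in rows if r[idx] == 0]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

for name, idx in [("years_experience_high", 0), ("zip_is_district_9", 1)]:
    print(f"{name}: gap = {outcome_gap(data, idx):+.2f}")
```

In this toy data the ZIP-code indicator separates outcomes perfectly (a gap of -1.00) while experience barely does, which is exactly the kind of pattern an interpretable model makes visible and an opaque one hides.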

Humans, all humans have biases. And we have to recognize how those biases can creep in, both through our unconscious reactions to people, but also in the society around us. And we have to be aware of what those biases are, because if we're not aware, we can't do anything about it.

About Alison DeNisco Rayome

Alison DeNisco Rayome is a Staff Writer for TechRepublic. She covers CXO, cybersecurity, and the convergence of tech and the workplace.
