Why AI bias could be a good thing

Artificial intelligence and machine learning algorithms are filled with human biases, which is bad. But the fact that they force us to confront them may be good.

As much as we like to talk about artificial intelligence (AI) and machine learning as something superhuman—something that machines can do better than people—the reality is that AI and machine learning simply speed up the rate at which our human bias normally operates.

As former Googler Yonatan Zunger wrote in an exceptionally thoughtful post on AI bias, the minute we start building an ML model we run into an inconvenient truth: The "biggest challenges of AI often start when writing it makes us have to be very explicit about our goals, in a way that almost nothing else does."

In other words, machines reflect and amplify our biases rather than eradicating them. As we turn to AI and machine learning for everything from marketing to judicial sentencing, we need to be hyper-aware of this.

Do as I say...

For better and worse, machines do exactly what we tell them to do. As Zunger has highlighted, the best thing about machines cranking through data isn't speed, but rather a built-in lack of creativity:

Their real advantage is that they don't get bored or distracted: an ML model can keep making decisions over different pieces of data, millions or billions of times in a row, and not get any worse (or better) at it. That means you can apply them to problems that humans are very bad at — like ranking billions of web pages for a single search, or driving a car.

This "don't get bored" advantage is real, but it also points to the problem.

...Not as I do

As much as marketers like to sell their AI wares as somehow "beyond human," they're not. People program computers, not the other way around, and in the process people imbue their computers with all their biases. As Zunger wrote:

Machine-learned models have a very nasty habit: they will learn what the data shows them, and then tell you what they've learned. They obstinately refuse to learn "the world as we wish it were," or "the world as we like to claim it is," unless we explicitly explain to them what that is — even if we like to pretend that we're doing no such thing.

Or, as he summarized: "AI models hold a mirror up to us; they don't understand when we really don't want honesty. They will only tell us polite fictions if we tell them how to lie to us ahead of time." An AI model isn't some neutral arbiter of truth, in other words: We tell it our truths, and it spits them back at us.
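To make that mirror concrete, here is a minimal sketch (the hiring scenario, groups, and numbers are all hypothetical) of a toy frequency-based model trained on biased historical decisions. It learns nothing but the historical pattern, so it hands that pattern right back:

```python
# Minimal sketch: a toy "model" trained on hypothetical, biased hiring history.
from collections import Counter, defaultdict

# Hypothetical past decisions: (group, was_hired). Group "A" was favored.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 30 + [("B", False)] * 70)

# "Training" here is just counting outcomes per group.
counts = defaultdict(Counter)
for group, hired in history:
    counts[group][hired] += 1

def predicted_hire_rate(group):
    """Return the model's learned probability of hiring for a group."""
    c = counts[group]
    return c[True] / (c[True] + c[False])

print(predicted_hire_rate("A"))  # 0.8 -- the historical preference, echoed back
print(predicted_hire_rate("B"))  # 0.3 -- the historical disadvantage, echoed back
```

Nothing in the code "wants" the imbalance; it simply reports what the data taught it.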

Zunger walks through a range of famous (and less famous) examples of this bias in action, which are worth reading in full. What emerges is less a concern that we'll never be able to teach cars to drive than a worry that we're already expecting too much of AI and machine learning when we use computers to speak to, or for, human agency.

When we program AI and machine learning algorithms, we must make explicit decisions about what matters, and that can make us extremely uncomfortable. (For example, if you're programming a car, do you tell it to kill the child who ran out into the road or the driver? Choose one.) Perhaps that discomfort is a learning opportunity for us all. Maybe, just maybe, in being forced to overtly face our biases in programming these models, we may learn to overcome them, even if our machines cannot.
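To see just how explicit those decisions have to be, consider a minimal sketch (the weights, scores, and model names are invented for illustration) of an objective function that must literally write down how much accuracy we are willing to trade for parity between groups:

```python
# Minimal sketch: picking a model forces us to encode a value judgment as a number.
def objective(accuracy, disparity, fairness_weight=0.5):
    """Score a candidate model: reward accuracy, penalize the gap between groups.

    fairness_weight is the uncomfortable part -- a trade-off we must state
    explicitly rather than leave buried in the data.
    """
    return accuracy - fairness_weight * disparity

# Two hypothetical candidate models:
candidates = {
    "model_a": {"accuracy": 0.92, "disparity": 0.30},  # more accurate, less even
    "model_b": {"accuracy": 0.88, "disparity": 0.05},  # slightly less accurate, fairer
}

for name, m in candidates.items():
    print(name, round(objective(m["accuracy"], m["disparity"]), 3))
# model_a scores 0.77, model_b scores 0.855 -- with fairness_weight=0.5 we pick B;
# set the weight to 0.1 and we would pick A. The choice is ours, and it is explicit.
```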


About Matt Asay

Matt Asay is a veteran technology columnist who has written for CNET, ReadWrite, and other tech media. Asay has also held a variety of executive roles with leading mobile and big data software companies.
