CES 2019: How IBM Watson uses AI to uncover hidden bias

General Manager of IBM Watson business applications Inhi Cho Suh discusses how insufficient data, IoT devices, incorrect values, and other factors can influence machine-generated bias within AI.

At CES 2019, TechRepublic Senior Writer Teena Maddox spoke with Inhi Cho Suh, General Manager of IBM Watson business applications, about the ways machine-generated biases can influence AI. The following is an edited transcript of the interview.

Inhi Cho Suh: One of the top topics in the industry that I think is incredibly important is trust in AI and, more importantly, how we can actually apply AI to uncover biases. When thinking about bias, consider all the different types: both social bias and machine-generated bias.

What I mean by that is, as you think about how AI works, it starts with data. You may have a bias that comes from a lack of data. In loan processing, for example, where you're approving loans, there may be fewer women-owned businesses in the collective US data set than men-owned businesses, so that might be an area where you have a shortage of data. That shortage, in turn, can produce an inherent bias.
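To make the representation gap Suh describes concrete, here is a minimal Python sketch that flags under-represented groups in a training set. The data, attribute name, and 20% threshold are all hypothetical illustrations, not anything from IBM's tooling:

```python
from collections import Counter

def representation_gaps(records, attribute, min_share=0.2):
    """Return attribute values whose share of the data set falls below
    min_share. A model trained on such data may underperform for those
    groups simply because it has seen too few examples of them."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total
            for value, count in counts.items()
            if count / total < min_share}

# Hypothetical loan-application data: women-owned businesses make up
# a small slice of the training set, as in the example above.
applications = (
    [{"owner_gender": "male"}] * 85 +
    [{"owner_gender": "female"}] * 15
)
print(representation_gaps(applications, "owner_gender"))
# → {'female': 0.15}
```

A check like this only surfaces a skewed sample; deciding whether the skew is a problem, and how to correct it, still requires domain judgment.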

SEE: CES 2019 news, photos, videos, and more (TechRepublic on Flipboard)

There may be other biases, such as groupthink or time constraints around projects, but also machine-generated biases in data and AI. For example, with IoT devices, think about temperature gauges and controls: if those devices aren't capturing the data for that environment correctly, that can generate a whole host of challenges.

One of the core projects that we're working on is called AI Trust and Transparency. It's the ability to observe AI models in operation, during production, and detect, 'Hey, I may have a bias because of sensitivity around certain decision criteria.'
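One way to probe the "sensitivity around certain decision criteria" Suh mentions is a counterfactual flip test: change only a protected attribute and count how often the model's decision changes. This is a generic illustration in plain Python, not IBM's implementation; the model, records, and attribute names are invented for the example:

```python
def sensitivity_to_attribute(model, records, attribute, swap):
    """Fraction of records whose decision flips when only `attribute`
    is changed (`swap` maps each value to its counterfactual).
    A high flip rate suggests the model leans on that criterion."""
    flips = 0
    for record in records:
        counterfactual = dict(record, **{attribute: swap[record[attribute]]})
        if model(record) != model(counterfactual):
            flips += 1
    return flips / len(records)

# Hypothetical scoring rule that (improperly) keys on owner gender.
def toy_model(applicant):
    return applicant["revenue"] > 50 or applicant["owner_gender"] == "male"

records = [
    {"owner_gender": "female", "revenue": 40},
    {"owner_gender": "male", "revenue": 40},
    {"owner_gender": "female", "revenue": 80},
]
swap = {"male": "female", "female": "male"}
print(sensitivity_to_attribute(toy_model, records, "owner_gender", swap))
# → 0.666... (two of three decisions flip on gender alone)
```

In a production monitoring setting, the same idea is applied to live scoring traffic rather than a fixed test set.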

SEE: CES 2019: The Big Trends for Business (ZDNet Special Feature)

There may be bias because the data set is insufficient, or there may be bias because of values, meaning how you've established the decision criteria within the governance structure for making a decision. One of the things we're working on, in terms of an open ecosystem, is enabling developers and businesses to adopt open machine-learning frameworks from a variety of vendors that provide AI, beyond just IBM, and to see how those applications actually run in real environments.
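A common metric for the kind of outcome bias discussed here is the disparate impact ratio, which IBM's open-source AI Fairness 360 toolkit, among others, implements. As a minimal self-contained sketch (the decisions and group labels below are hypothetical):

```python
def disparate_impact(outcomes, privileged):
    """Ratio of favorable-outcome rates: unprivileged over privileged.
    `outcomes` is a list of (group, decision) pairs with decision 1 or 0.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    def rate(is_privileged):
        decisions = [d for g, d in outcomes if (g == privileged) == is_privileged]
        return sum(decisions) / len(decisions)
    return rate(False) / rate(True)

# Hypothetical loan decisions: (group, approved?)
decisions = (
    [("m", 1)] * 8 + [("m", 0)] * 2 +   # 80% approval
    [("f", 1)] * 5 + [("f", 0)] * 5     # 50% approval
)
print(disparate_impact(decisions, privileged="m"))
# → 0.625, below the 0.8 threshold
```

Metrics like this are exactly what an open monitoring layer can compute across models from different vendors, since they depend only on inputs and decisions, not on the model internals.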

Image: TechRepublic

By Teena Maddox

Teena Maddox is a Senior Editor at TechRepublic, covering hardware devices, IoT, smart cities and wearables. She ties together the style and substance of tech. Teena has spent 20-plus years writing business and features for publications including Peo...