As artificial intelligence (AI) continues its march into enterprises, many IT pros are beginning to express concern about potential AI bias in the systems they use. A new report from DataRobot finds that 42% of AI professionals in the US and UK are “very” to “extremely” concerned about AI bias.

The report, based on a survey conducted last June of more than 350 US- and UK-based CIOs, CTOs, VPs, and IT managers involved in AI and machine learning (ML) purchasing decisions, also found that “compromised brand reputation” and “loss of customer trust” are the repercussions of AI bias respondents worry about most. Accordingly, 93% of respondents say they plan to invest more in AI bias prevention initiatives in the next 12 months.

SEE: The ethical challenges of AI: A leader’s guide (free PDF) (TechRepublic)

Although many organizations see AI as a game changer, many are still using untrustworthy AI systems, said Ted Kwartler, vice president of trusted AI at DataRobot.

He said the survey’s finding that 42% of executives are very concerned about AI bias comes as no surprise “given the high-profile missteps organizations have had employing AI.” Organizations have to ensure AI methods align with their organizational values, Kwartler said. “Among the many steps needed in an AI deployment, ensuring your training data doesn’t have hidden bias helps keep organizations from being reactionary later in the workflow.”

DataRobot’s research found that while most organizations (71%) currently rely on AI to execute up to 19 business functions, 19% use AI to manage 20 to 49 functions, and 10% leverage the technology to tackle more than 50 functions.

While managing AI-driven functions within an enterprise can be valuable, it can also present challenges, the DataRobot report said. “Not all AI is treated equal, and without the proper knowledge or resources, companies could select or deploy AI in ways that could be more detrimental than beneficial.”

The survey found that more than a third (38%) of AI professionals still use black-box AI systems, meaning they have little to no visibility into how the data inputs to their AI solutions are being used. This lack of visibility could contribute to respondents’ concerns about AI bias occurring within their organization, DataRobot said.
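Regaining some visibility into a black-box system does not always require access to its internals. As an illustrative sketch (not a method the report prescribes), one can probe a model from the outside with permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The `black_box` model, the data, and the `permutation_importance` helper below are all hypothetical:

```python
import random

def permutation_importance(predict, rows, labels, n_features, repeats=20, seed=0):
    """Estimate each feature's influence on a black-box model by shuffling
    that feature's column and measuring the average drop in accuracy."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(predict(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    drops = []
    for f in range(n_features):
        total = 0.0
        for _ in range(repeats):
            column = [r[f] for r in rows]
            rng.shuffle(column)  # break the link between feature f and the rest
            shuffled = [r[:f] + (v,) + r[f + 1:] for r, v in zip(rows, column)]
            total += baseline - accuracy(shuffled)
        drops.append(total / repeats)
    return drops

# Hypothetical black-box model that secretly uses only feature 0
black_box = lambda row: 1 if row[0] > 0.5 else 0
rows = [(0.9, 0.2), (0.8, 0.9), (0.7, 0.4), (0.6, 0.8),
        (0.4, 0.1), (0.3, 0.9), (0.2, 0.5), (0.1, 0.7)]
labels = [black_box(r) for r in rows]  # score against the model's own outputs

drops = permutation_importance(black_box, rows, labels, n_features=2)
# Feature 1's drop is exactly 0.0: the model never reads it.
print([round(d, 2) for d in drops])
```

The idea is that a feature whose shuffling leaves accuracy untouched has no influence on the model's decisions, which makes hidden dependencies visible even when the model itself cannot be inspected.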

AI bias is occurring because “we are making decisions on incomplete data in familiar retrieval systems,” said Sue Feldman, president of the cognitive computing and content analytics consultancy Synthexis. “Algorithms all make assumptions about the world and the priorities of the user. That means that unless you understand these assumptions, you will still be flying blind.”

This is why it is important to use systems that keep humans in the loop, rather than making decisions in a vacuum, added Feldman, who is also co-founder and managing director of the Cognitive Computing Consortium. Such systems are “an improvement over completely automatic systems,” she said.

SEE: Managing AI and ML in the enterprise 2019: Tech leaders expect more difficulty than previous IT projects (TechRepublic Premium)

How to reduce AI bias

Bias based on race, gender, age or location, and bias based on a specific structure of data, have been long-standing risks in training AI models, according to Gartner.

In addition, opaque algorithms such as deep learning can incorporate many implicit, highly variable interactions into their predictions that can be difficult to interpret, the firm said.

By 2023, 75% of large organizations will hire AI behavior forensic, privacy and customer trust specialists to reduce brand and reputation risk, Gartner predicts.

“New tools and skills are needed to help organizations identify these and other potential sources of bias, build more trust in using AI models, and reduce corporate brand and reputation risk,” said Jim Hare, a research vice president at Gartner, in a statement.

“More and more data and analytics leaders and chief data officers (CDOs) are hiring ML forensic and ethics investigators,” Hare added.

Organizations such as Facebook, Google, Bank of America, MassMutual, and NASA are hiring or have already appointed AI behavior forensic specialists to focus on uncovering undesired bias in AI models before they are deployed, Gartner said.

If AI is to reach its potential and increase human trust in the systems, steps must be taken to minimize bias, according to McKinsey. They include:

  • Be aware of the contexts in which AI can help correct bias and those in which there is high risk for AI to exacerbate bias
  • Establish processes and practices to test for and mitigate bias in AI systems
  • Engage in fact-based conversations about biases in human decisions
  • Explore how humans and machines can best work together
  • Invest more in bias research and make more data available, while respecting privacy
  • Invest more in diversifying the AI field
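The second recommendation, establishing processes to test for bias, can be made concrete with a small example. The sketch below, with entirely hypothetical data and a made-up `demographic_parity_gap` helper, checks whether a model's positive-prediction rate differs across groups (demographic parity, one of several competing fairness metrics):

```python
from collections import defaultdict

def demographic_parity_gap(groups, predictions):
    """Compute the positive-prediction rate per group and the gap between
    the highest and lowest rates (0.0 means perfect parity)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += 1 if pred else 0
    rates = {g: positives[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions for two applicant groups
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
predictions = [1,   1,   1,   0,   1,   0,   0,   0]

rates, gap = demographic_parity_gap(groups, predictions)
print(rates)  # {'A': 0.75, 'B': 0.25}
print(gap)    # 0.5
```

A gap this large would warrant investigation before deployment, though which fairness metric is appropriate, and what threshold counts as acceptable, depends on the business context.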

The DataRobot study found that to combat instances of AI bias, 83% of all AI professionals say they have established AI guidelines to ensure AI systems are properly maintained and yielding accurate, trusted outputs. In addition:

  • 60% have created alerts that fire when incoming data and outcomes diverge from the training data
  • 59% measure AI decision-making factors
  • 56% are deploying algorithms to detect and mitigate hidden biases in the training data

The last of those figures surprised Kwartler. “I am concerned that only about half of the executives have algorithms in place to detect hidden bias in training data.”
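The drift alerts that 60% of respondents describe can be sketched simply: compare the distribution of a feature in live traffic against its distribution in the training data, and alert when they diverge. This illustrative example uses the population stability index (PSI), one common choice; the report does not specify any particular method:

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index between a training (expected) and a live
    (actual) sample of one numeric feature. Rough rule of thumb:
    PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the training max

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if x < edges[i + 1]:
                    counts[i] += 1
                    break
        # floor each bucket at a tiny value so the log below is defined
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

training = [0.1 * i for i in range(100)]        # stand-in training feature
shifted  = [0.1 * i + 3.0 for i in range(100)]  # live data drifted upward

if psi(training, shifted) > 0.25:
    print("ALERT: feature distribution has drifted from training data")
```

In practice such a check would run per feature on a schedule, and a sustained alert would trigger investigation or retraining; the thresholds here are conventional rules of thumb, not values from the report.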

There were also cultural differences discovered between US and UK respondents to the DataRobot study.

While US respondents are most concerned with emergent bias (bias resulting from a misalignment between the user and the system design), UK respondents are more concerned with technical bias (bias arising from technical limitations), the study found.

To enhance AI bias prevention efforts, 59% of respondents say they plan to invest in more sophisticated white-box systems, 54% say they will hire internal personnel to manage AI trust, and 48% say they intend to enlist third-party vendors to oversee AI trust, according to the study.

The 48% figure should be higher, Kwartler believes. “Organizations need to own and internalize their AI strategy because that helps them ensure the AI models align with their values. For each business context and industry, models need to be evaluated before and after deployment to mitigate risks,” he said.

Besides those AI bias prevention measures, 85% of all global respondents believe AI regulation would be helpful for defining what constitutes AI bias and how it should be prevented, according to the report.
