Tech companies in the U.S. and the U.K. haven’t done enough to prevent bias in artificial intelligence algorithms, according to a new survey from DataRobot. These same organizations are already feeling the impact of the problem in the form of lost customers and lost revenue.
DataRobot surveyed more than 350 U.S. and U.K.-based technology leaders to understand how organizations are identifying and mitigating instances of AI bias. Survey respondents included CIOs, IT directors, IT managers, data scientists and development leads who use or plan to use AI. The research was conducted in collaboration with the World Economic Forum and global academic leaders.
In the survey, 36% of respondents said their organizations have suffered due to an occurrence of AI bias in one or several algorithms. Among those companies, the damage was significant:
- 62% lost revenue
- 61% lost customers
- 43% lost employees
- 35% incurred legal fees due to a lawsuit or legal action
Respondents report that their organizations’ algorithms have inadvertently contributed to a wide range of bias against several groups of people:
- Gender: 34%
- Age: 32%
- Race: 29%
- Sexual orientation: 19%
- Religion: 18%
In addition to measuring the state of AI bias, the survey probed attitudes about regulation. Surprisingly, 81% of respondents think government regulations would be helpful in addressing two particular components of this challenge: defining and preventing bias. At the same time, 45% of tech leaders worry that those regulations would increase costs and create barriers to adoption. The survey also identified another complexity: 32% of respondents said they are concerned that a lack of regulation will hurt certain groups of people.
Emanuel de Bellis, a professor at the Institute of Behavioral Science and Technology, University of St. Gallen, said in a press release that the European Commission’s proposal for AI regulation could address both of these concerns.
“AI provides countless opportunities for businesses and offers means to battle some of the most pressing issues of our time,” de Bellis said. “At the same time, AI poses risks and legal issues including opaque decision-making (the black-box effect), discrimination (based on biased data or algorithms), privacy and liability issues.”
AI bias tests are failing
Companies are aware of the risk of bias in algorithms and have attempted to put some protections in place. Seventy-seven percent of respondents said they had an AI bias or algorithm test in place before discovering that bias had occurred anyway. More organizations in the U.S. (80%) than in the U.K. (63%) had AI bias monitoring or algorithm tests in place prior to discovering bias.
At the same time, U.S. tech leaders are more confident in their ability to detect bias: 75% of American respondents said they could spot bias, compared with 56% of U.K. respondents.
Here are the steps companies are taking now to detect bias:
- Checking data quality: 69%
- Training employees on what AI bias is and how to prevent it: 51%
- Hiring an AI bias or ethics expert: 51%
- Measuring AI decision-making factors: 50%
- Monitoring when the data changes over time: 47%
- Deploying algorithms that detect and mitigate hidden biases in training data: 45%
- Introducing explainable AI tools: 35%
- Not taking any steps: 1%
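To make the bias-measurement steps above concrete, here is a minimal sketch of one common fairness check: demographic parity difference, which compares the rate of positive model outcomes across groups. The function, data, and loan-approval scenario below are illustrative assumptions, not part of the survey or any specific vendor's tooling.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Largest gap in positive-outcome rate between any two groups.

    A value near 0 suggests groups receive positive outcomes at
    similar rates; larger values flag potential bias to investigate.
    """
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    return max(rates.values()) - min(rates.values())


# Hypothetical loan-approval decisions (1 = approved) for two groups
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved 75% of the time, group "b" only 25%
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

In practice, teams typically run checks like this continuously against production predictions rather than once at training time, which is consistent with the monitoring and data-drift steps listed above.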
Eighty-four percent of respondents said their organizations are planning to invest more in AI bias prevention initiatives in the next 12 months. According to the survey, these actions will include spending more money to support model governance, hiring more people to manage AI trust, creating more sophisticated AI systems and producing more explainable AI systems.