Cognitive bias leads to AI bias, and the garbage-in/garbage-out axiom applies. Experts offer advice on how to limit the fallout from AI bias.
Artificial intelligence (AI) is the ability of computer systems to simulate human intelligence. It has not taken long for AI to become indispensable in most facets of human life, with the realm of cybersecurity being one of the beneficiaries.
AI can predict cyberattacks, help create improved security processes to reduce the likelihood of cyberattacks, and mitigate their impact on IT infrastructure. AI can also free up cybersecurity professionals to focus on more critical tasks in the organization.
However, along with the advantages, AI-powered solutions—for cybersecurity and other technologies—also present drawbacks and challenges. One such concern is AI bias.
Cognitive bias and AI bias
AI bias directly results from human cognitive bias. So, let's look at that first.
Cognitive bias stems from an evolutionary decision-making system in the mind that is intuitive, fast and automatic. "The problem comes when we allow our fast, intuitive system to make decisions that we really should pass over to our slow, logical system," writes Toby Macdonald in the BBC article How do we really make decisions? "This is where the mistakes creep in."
Human cognitive bias can color decision-making. And, equally problematic, machine-learning models trained on human-created data can inherit the cognitive biases embedded in that data. That's where AI bias enters the picture.
Cem Dilmegani, in his AIMultiple article Bias in AI: What it is, Types & Examples of Bias & Tools to fix it, defines AI bias as the following: "AI bias is an anomaly in the output of machine learning algorithms. These could be due to the discriminatory assumptions made during the algorithm development process or prejudices in the training data."
Where AI bias comes into play most often is in the historical data being used. "If the historical data is based on prejudiced past human decisions, this can have a negative influence on the resulting models," suggested Dr. Shay Hershkovitz, GM & VP at SparkBeyond, an AI-powered problem-solving company, during an email conversation with TechRepublic. "A classic example of this is using machine-learning models to predict which job candidates will succeed in a role. If the data used for past hiring and promotion decisions is biased—or the algorithm is designed in a way that reflects bias—then the future hiring decision will be biased."
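Hershkovitz's hiring example can be illustrated with a minimal sketch. All data, group names and the "model" here are hypothetical: a naive model that simply memorizes historical hire rates per group will faithfully reproduce whatever prejudice those past decisions contain.

```python
# Minimal sketch of bias inheritance; all records and names are hypothetical.
# Historical hiring records: (candidate_group, was_hired)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train_rate_model(records):
    """'Train' by memorizing the historical hire rate per group."""
    totals, hires = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + int(hired)
    return {g: hires[g] / totals[g] for g in totals}

model = train_rate_model(history)
# The model now scores group_a candidates higher purely because of past
# decisions, regardless of individual merit: the prejudice is inherited.
print(model)  # {'group_a': 0.75, 'group_b': 0.25}
```

Real systems use far more complex models, but the failure mode is the same: the model's only source of truth is the historical record, prejudiced or not.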
Unfortunately, Dilmegani also said that AI is not expected to become unbiased anytime soon. "After all, humans are creating the biased data while humans and human-made algorithms are checking the data to identify and remove biases."
How to mitigate AI bias
To reduce the impact of AI bias, Hershkovitz suggests:
- Building AI solutions that provide explainable predictions/decisions—so-called "glass boxes" rather than "black boxes."
- Integrating these solutions into human processes that provide a suitable level of oversight.
- Ensuring that AI solutions are appropriately benchmarked and frequently updated.
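The first suggestion, explainable "glass box" predictions, can be sketched as a scorer that returns not just a decision but the per-feature contributions behind it. This is a hypothetical linear model; the weights, threshold and feature names are illustrative assumptions, not from any real system.

```python
# Sketch of a "glass box" scorer: a linear model that exposes the
# per-feature contribution behind every decision. Weights, threshold and
# feature names are illustrative assumptions.
WEIGHTS = {"years_experience": 0.5, "skill_test_score": 0.3, "certifications": 0.2}
THRESHOLD = 2.0

def explain_and_score(candidate):
    """Return (decision, contributions) so a human can audit the decision."""
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0) for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

decision, why = explain_and_score(
    {"years_experience": 3, "skill_test_score": 2, "certifications": 1}
)
# `why` shows exactly which features drove the decision, so an overseer
# can flag any feature acting as a proxy for a protected attribute.
```

The design choice is the point: because every decision ships with its own breakdown, the human-oversight and benchmarking steps have something concrete to inspect, which a black-box model does not provide.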
Each of these solutions points to the same conclusion: humans must play a significant role in reducing AI bias. As to how that is accomplished, Hershkovitz suggests the following:
- Companies and organizations need to be fully transparent and accountable for the AI systems they develop.
- AI systems must allow human monitoring of decisions.
- Standards creation, for explainability of decisions made by AI systems, should be a priority.
- Companies and organizations should educate and train their developers to include ethics in their considerations of algorithm development. A good starting point is the OECD's 2019 Recommendation of the Council on Artificial Intelligence (PDF), which addresses the ethical aspects of artificial intelligence.
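The monitoring point above can be made concrete with one common fairness check, the demographic parity gap: the difference in positive-decision rates between groups. This is a sketch; the 0.1 alert threshold is an assumption for illustration, not a standard value.

```python
# Sketch of ongoing bias monitoring via the demographic parity gap:
# the spread between groups' positive-decision rates. The 0.1 alert
# threshold is an illustrative assumption, not a standard value.
def parity_gap(decisions):
    """decisions: list of (group, approved) pairs -> max rate gap across groups."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    rates = [approved[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical audit log of a deployed system's decisions.
audit = ([("group_a", True)] * 8 + [("group_a", False)] * 2
         + [("group_b", True)] * 5 + [("group_b", False)] * 5)
gap = parity_gap(audit)  # 0.80 vs 0.50 approval -> gap of 0.30
if gap > 0.1:  # alert threshold: assumed for illustration
    print(f"Parity gap {gap:.2f} exceeds threshold; review the model")
```

A recurring check like this is one way an organization can operationalize the transparency and monitoring standards the list calls for.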
Hershkovitz's concern about AI bias does not mean he is anti-AI. In fact, he cautions that we need to acknowledge that cognitive bias is often helpful: it represents relevant knowledge and experience, but only when it is grounded in facts, reason and widely accepted values such as equality and parity.
He concluded, "In this day and age, where smart machines, powered by powerful algorithms, determine so many aspects of human existence, our role is to make sure AI systems do not lose their pragmatic and moral values."