In 2021, Spiceworks reported survey results that revealed, “Almost one-third (31%) of the professionals surveyed said their organizations are now using artificial intelligence (AI), and 43% are exploring the technology. About 34% reported their companies had not deployed any AI projects.”
This and other surveys show that most companies are in the early stages of AI adoption, and most have likely not yet thought about change management for their AI systems or what it will take to keep those systems up, running and relevant.
SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)
How important is an AI maintenance and tuneup strategy?
In 2016, Microsoft deployed a chatbot called Tay that was designed to learn from human interactions on social media. Soon after it was deployed, Tay began to spew racist and hateful comments it had picked up from those interactions. Tay was one of the first forays into AI learning models and social media, and it ended in disaster.
The Tay incident is not an isolated one.
AI systems can easily become corrupted and lose effectiveness by processing “poisonous” data (such as deepfakes) that has been maliciously injected into the data the AI is learning from. AI systems can also lose effectiveness over time if they keep drawing on a limited set of sources while newer, more relevant data sources come online.
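One simple defense against poisoned data is to screen incoming records against the distribution of trusted historical data before they are added to the training set. The sketch below is purely illustrative; the `screen_batch` helper and its z-score threshold are assumptions for this example, not a technique named in the article, and real pipelines would use richer anomaly detection.

```python
from statistics import mean, stdev

def screen_batch(trusted, incoming, z_threshold=3.0):
    """Flag incoming numeric values that deviate sharply from the
    trusted historical distribution (a crude poisoning/outlier screen)."""
    mu, sigma = mean(trusted), stdev(trusted)
    accepted, flagged = [], []
    for x in incoming:
        if sigma and abs(x - mu) / sigma > z_threshold:
            flagged.append(x)   # hold for human review
        else:
            accepted.append(x)  # safe to add to the training set
    return accepted, flagged

# Example: one wildly out-of-range value gets held back for review.
accepted, flagged = screen_batch([10, 11, 9, 10, 12, 11], [10, 500, 11])
print(flagged)  # [500]
```

The point is not the specific statistic but the gate itself: new data earns its way into the training set rather than flowing in unreviewed.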
“Post-AI deployment risks arise from an incomplete understanding of the behavioral boundaries, untested failure modes and susceptibility to manipulation by adversarial elements in the deployment environment,” said Francois Candelon, Global Director of the BCG Henderson Institute. As data and environments change, AI systems must grow and adapt with them.
The question is, how many organizations have thought through how they are going to verify, maintain and tune AI systems so they stay relevant?
Ways to ensure AI systems maintain their function
Here are five strategies that companies can use to ensure that their AI stays relevant and that they don’t lose the value of their AI investment over time:
1. Engage a diverse team of AI evaluators
“Creating effective AI+Human systems will require leaders to engage developers, managers, users, consumers and others to understand AI’s application context,” said Candelon.
Without a diverse set of AI collaborators, you risk missing important perspectives and elements for your AI. This diverse team should be kept together as your AI system evolves so your system can maintain its effectiveness and relevance.
If assembling a team of AI architects has been a challenge at your organization, the experts at TechRepublic Premium have a hiring kit that should help. It includes tools to help find and hire the right people for the team.
2. Check AI accuracy against outside benchmarks
If AI predictions start to significantly trend away from what the organization has experienced in the past, questions need to be asked:
- Have conditions really changed?
- Is there some source of data or human input that hasn’t been included in the AI data repository?
- Are the right algorithms and questions being formulated?
- If the physical reality of what is actually occurring (e.g., critical path equipment failing on production lines) isn’t in line with what the AI is predicting will fail, does the AI need to be re-calibrated?
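One common way to quantify how far predictions have trended away from past experience is the Population Stability Index, which compares the distribution of current model outputs against a historical benchmark. The sketch below is a minimal, self-contained version; the function name, bin count and the conventional “PSI above 0.2 suggests drift” rule of thumb are industry conventions assumed here, not figures from this article.

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a historical benchmark
    distribution ('expected') and current model outputs ('actual').
    Larger values indicate the two distributions have diverged."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            i = min(int((v - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # small floor avoids log(0) when a bucket is empty
        return [max(c / len(values), 1e-4) for c in counts]
    e, a = bucket_shares(expected), bucket_shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

historical = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
shifted = [0.7, 0.75, 0.8, 0.85, 0.9, 0.9, 0.95, 0.95]
print(psi(historical, shifted))  # large value: scores have drifted upward
```

A check like this can run on a schedule, so the questions above get asked when the numbers first move, not after the AI’s predictions have visibly failed.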
3. Review and update data sources
Are new data sources that are more comprehensive and accurate now available that weren’t available when the AI system was first implemented? If so, it makes sense for IT and other AI contributors to incorporate this data so the AI is working with as complete a set of data as it can.
4. Evaluate whether business use cases have drifted
Suppose the business use case your AI was originally built for focused on customer acquisition, but times (and the business case) have changed, and now the focus is on customer retention.
When a business use case for AI changes, the AI must change with it, or the AI becomes obsolete and you may lose the benefit of your time, effort and monetary investment.
5. Evaluate for risk
If your AI is analyzing human interactions on social media with the goal of responding to them or extracting information, it’s incumbent on the organization to evaluate the AI’s processing for potential risks.
“Deep stakeholder involvement across the AI lifecycle will help uncover blind spots and ensure effective monitoring of risks in data, modeling, tradeoffs and concept drift,” said Candelon.
So, too, will effective governance and best practices for change management as AI systems continue to evolve and learn from the data that they process.