Humans have long wondered if artificial intelligence (AI) would replace them in their work. But right now, business transactions still need a human touch.
Several years ago, I was working with a European financial company that was overhauling the disaster recovery and failover technologies in its data center. The goal was to automate many of the system alerts so that IT would have early warnings before any system failed.
The technology worked so well that the CIO faced a choice. Should he fully automate failover, letting the software initiate it if and when it determines a failover is needed? Or should he reserve the “last mile” of failover, the moment when the alerts say a mission-critical system is about to fail and someone must personally decide to begin failover and recovery, for himself, so that he is the one who pushes the button?
“I wanted to be the one to push the button,” he said. “I wasn’t comfortable at having an AI engine make a failover decision that might have been avoided, and then having to explain this to customers and to my board.”
Why risk management with AI is a double-edged sword
The CIO’s decision is not uncommon. Most CIOs would not feel comfortable leaving a major disaster recovery and failover decision to automation software, although they do understand the benefits of automated data replication and recovery software.
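The “last mile” pattern the CIO chose, in which automation detects and recommends but only a human authorizes, can be sketched as a simple confirmation gate. This is a minimal illustration, not a real disaster recovery product API; the threshold and all names are invented:

```python
# Sketch of "last mile" failover: automation raises the alert and makes a
# recommendation, but a human operator makes the final failover decision.
# The 0.8 threshold, system names, and functions are illustrative only.

from dataclasses import dataclass

@dataclass
class HealthAlert:
    system: str
    failure_probability: float  # estimated by monitoring, 0.0 to 1.0

def recommend_failover(alert: HealthAlert, threshold: float = 0.8) -> bool:
    """Automation recommends failover but never executes it."""
    return alert.failure_probability >= threshold

def handle_alert(alert: HealthAlert, human_approves) -> str:
    if not recommend_failover(alert):
        return "monitor"
    # The human "pushes the button": nothing fails over without approval.
    return "failover" if human_approves(alert) else "hold"

alert = HealthAlert("payments-db", failure_probability=0.92)
print(handle_alert(alert, human_approves=lambda a: True))   # failover
print(handle_alert(alert, human_approves=lambda a: False))  # hold
```

The design choice is that the automation's output is a recommendation, never an action: the code path that executes failover is reachable only through the human approval callback.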
At the same time, CIOs and business leaders know that AI will be an instrumental and transformative technology that will facilitate decision-making through the help of Internet of Things (IoT) and other forms of big data.
Consider these real-world examples:
- A logistics carrier receives an alert that the environmentals in the container being carried by one of its trucks are failing. This means that all produce carried within that container is at risk of spoilage unless a nearby market can be found. AI kicks in and recommends several nearby markets for an expeditor to reroute to. The expeditor makes the ultimate decision, but the AI decisioning engine saves valuable time.
- A factory that has implemented Manufacturing 4.0 principles has a piece of equipment on an assembly line with a sensor that is indicating a risk of imminent failure that could shut down the line. To avoid the risk of downtime, the manufacturer sends a technician out to the line, and the technician uses AI to assist in diagnosis and repair.
- A construction company must source a special crane for a project and uses AI to assess the risk of future inclement weather and the risk of pricing increases to determine the best time to place the order for the crane.
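All three examples follow the same recommend-then-decide pattern. Taking the logistics case, a toy sketch might rank nearby markets for the at-risk produce and leave the final call to the expeditor. The market names, distances, prices, and spoilage model below are invented for illustration:

```python
# Sketch: AI ranks nearby markets for a refrigerated load at risk of spoilage;
# the human expeditor still makes the final routing decision.
# All data and the spoilage model are invented for illustration.

markets = [
    {"name": "Market A", "distance_km": 40, "offer_per_kg": 1.10},
    {"name": "Market B", "distance_km": 15, "offer_per_kg": 0.95},
    {"name": "Market C", "distance_km": 80, "offer_per_kg": 1.30},
]

def score(market, spoilage_rate_per_km=0.004):
    """Expected value per kg after spoilage loss en route (toy linear model)."""
    surviving_fraction = max(0.0, 1.0 - spoilage_rate_per_km * market["distance_km"])
    return market["offer_per_kg"] * surviving_fraction

# AI output: a ranked list of options, not an automatic reroute.
ranked = sorted(markets, key=score, reverse=True)
for m in ranked:
    print(f'{m["name"]}: expected {score(m):.2f}/kg')

# The expeditor reviews the ranking and commits to a destination.
chosen = ranked[0]
```

The time saved is in the ranking, not the choosing: the expeditor starts from a short, scored list instead of raw alerts and maps.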
These examples show that AI is a useful tool that saves critical time, but that in the end, humans want to be the ones to make the final decisions.
Can AI replace human decision-making?
In discussing manufacturing and risk, Wolters Kluwer, a risk management company, said “For now, it is still unthinkable to ever leave humans out of the [manufacturing] Deming cycle (plan-do-check-act) altogether. Even if cognitive computing would be fully developed, the scenario of having machines taking over entirely is a fearsome picture to paint for most.”
Will we reach the point where machine automation and AI can fully supplant human decision-making and expertise?
Man, machine, and AI
“AI’s abilities will complement us, rather than replicate us,” said Dr. Iain Brown, head of data science at SAS UK and Ireland.
From a risk management perspective, AI helps human collaborators manage risk in new ways. AI can ingest and digest vast quantities of information quickly. It can inform a logistics dispatcher about a road closure or a problematic weather condition that is a thousand miles away. This helps the dispatcher manage the risk of a shipment being late.
At the same time, humans act as checkpoints to manage the risk of faulty AI decisions. “Banks have long worried about bias among individual employees when providing consumer advice,” wrote Juan Aristi Baquero, Roger Burkhardt, Arvind Govindarajan, and Thomas Wallace in a recent McKinsey risk management paper. “But when employees are delivering advice based on AI recommendations, the risk is not that one piece of individual advice is biased but that, if the AI recommendations are biased, the institution is actually systematizing bias into the decision-making process.”
The takeaway for companies implementing AI is that they must find the right interface point between machine-driven AI and humans, one that enables both sound risk management and excellent decision-making.
This should be the goal of every AI project that companies undertake.