The most difficult thing about AI may not be getting it to actually work, but getting society to accept it. Granted, functioning AI would be a huge advance, as most companies still struggle to get AI or machine learning workloads into production. But even if every AI project magically morphed into a success, we’d still be a long way from the finish line.
The reason, it turns out, is people.
I fought the law…
Well, people and the laws they make. One big challenge with AI is explaining it. As IIA’s chief analytics officer Bill Franks has pointed out, you can have perfectly functional AI (i.e., AI that makes good predictions) that you can’t explain, but that may not be acceptable: “If you only care about predicting who will get a disease, or which image is a cat, or who will respond to a coupon, then the opacity of AI is irrelevant. It is important, therefore, to determine up front if your situation can accept an opaque prediction or not.”
For example, insurance companies might love to use AI to screen out risky candidates, but courts might frown on an AI that is biased against certain classes of the would-be insured. As Franks wrote, “In many cases, such as credit scoring and clinical trials, amazing predictions mean nothing in absence of clear explanation of how the predictions are achieved.” Our sense of human fairness simply won’t allow it, even if the machines are “right.”
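Franks’ distinction between acceptable and unacceptable opacity is easy to make concrete in code. Below is a minimal sketch, assuming a synthetic dataset and off-the-shelf scikit-learn models (nothing here comes from Franks, an insurer, or a credit scorer): a logistic regression whose prediction can be traced back to explicit per-feature coefficients, next to a gradient-boosted ensemble that may score as well or better but offers no comparably simple account of how it decides.

```python
# A minimal sketch of the "accurate but opaque" trade-off.
# The synthetic data and the choice of models are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Glass box": each prediction is a weighted sum of inputs, so the weights
# themselves serve as the explanation a regulator or court could inspect.
glass_box = LogisticRegression().fit(X_train, y_train)
print("Logistic regression accuracy:", glass_box.score(X_test, y_test))
print("Per-feature coefficients (the explanation):", glass_box.coef_)

# "Black box": the prediction emerges from an ensemble of boosted trees,
# with no single human-readable rationale to hand to a skeptical reviewer.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Gradient boosting accuracy:", black_box.score(X_test, y_test))
```

Whether the opaque model’s extra accuracy (if any) is worth it is exactly the up-front question Franks says teams need to answer.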
SEE: Cut the marketing nonsense: Will the real data scientist please stand up? (ZDNet)
Complicating matters, even if an algorithm correctly predicts the future 99% of the time, that 1% is fatal where people’s lives are concerned. Speaking of the financial services industry, Charles Ellis of Mediolanum Asset Management has said: “All things go wrong eventually, every algorithm has a bad day. The difference between those that survive and don’t is those that can explain what they do.” If AI remains a black box, that box will get dismantled the minute it wrongly denies someone health coverage, recommends the wrong drug, and so on.
As such, Franks is correct when he argues: “We are certain to have AI that will be capable of solving very valuable problems sooner than we’ll be allowed to actually put those models to use.”
But first it’s got to work
Explaining AI is a secondary issue, of course: first, the AI or machine learning tool needs to work. Sadly, as James Mackintosh has called out, sometimes these tools do their job too well, as it were: “Machine-learning systems are now really good at spotting patterns. Unfortunately, computers are just too good, and frequently find patterns that aren’t really there.” This is because machines aren’t always good at separating signal from noise.
SEE: How machine learning’s hype is hurting its promise (TechRepublic)
On this issue, Mackintosh turns to Anthony Ledford, chief scientist at quant fund Man AHL, who said: “The more complicated your model the better it is at explaining the data you use for the training and the less good it is about explaining the data in the future.” In other words, the more intricately we tune a model to predict its training data, the worse it becomes at making sense of new data from outside that dataset. This is the familiar problem of overfitting.
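Ledford’s trade-off is easy to reproduce. The sketch below is a toy illustration using scikit-learn and synthetic data (an assumption for illustration, not anything from Man AHL): as the polynomial degree grows, the model fits the training data ever more closely while its error on held-out data stops improving.

```python
# Toy illustration of overfitting: higher model complexity improves fit to the
# training data but tends to degrade fit to unseen data.
# The synthetic sine-wave data and polynomial degrees are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures


def signal(x):
    """The underlying pattern the model should learn."""
    return np.sin(2 * np.pi * x).ravel()


rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
x_test = np.sort(rng.uniform(0, 1, 30)).reshape(-1, 1)
y_train = signal(x_train) + rng.normal(0, 0.3, 30)  # noisy observations
y_test = signal(x_test) + rng.normal(0, 0.3, 30)

for degree in (1, 4, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(x_train))
    test_mse = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")
```

In a typical run, the degree-15 fit posts the lowest training error while its test error no longer improves, and often worsens: Ledford’s point in miniature, with the model memorizing the noise.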
We have, in short, a long way to go. Along the way, we’d do well to hype AI less and cultivate more measured enthusiasm for its potential. Just ask IBM, which is now having to defend its Watson business against criticism of “overhyped marketing.” Far better to under-promise and over-deliver.
