As much as we want to rely on machines to improve our thinking, it turns out the interplay between human and machine is complicated.
Artificial intelligence (AI) may not save the world, but it's just as unlikely to ruin it. Indeed, if we've learned anything about AI over the past few years of its boom-and-bust hype cycle, it's that AI is more potent than we imagine, yet far more dependent on people to achieve that potency.
In other words, if AI isn't working, we might just be the problem.
More and less than we expected
Years ago our robot overlords seemed poised to push "go" on Skynet and doom us forever. Well, we're still waiting. Much of the backlash against AI was simply a misunderstanding as to what, in fact, it is (stoked by vendors overpromising on its near-term potential). But it's also a function of the inherent difficulty in making machines "think" like humans.
As detailed in a Guardian editorial:
[T]he most difficult human skills to replicate are the unconscious ones, the product of millennia of evolution. In AI this is known as Moravec's paradox. "We are all prodigious Olympians in perceptual and motor areas, so good that we make the difficult look easy," wrote the futurist Hans Moravec. It is these that our brains excel in, hidden but complex processes that machine learning attempts to replicate.
Things that seem simple to us are remarkably complex to program. On the flip side, machines can comb through copious quantities of data quickly, performing feats of pattern matching that no human could replicate. T-Mobile, for example, uses machine learning to give customer service agents real-time access to customer data, improving the support experience. Kia Motors, for its part, uses AI to analyze computer vision data inside its cars to optimize the driving experience (personalizing seating position, etc.).
Indeed, it's this give-and-take that makes AI useful, even when it may not yet be as intelligent as we'd like. But it's also where the risk of AI becomes most apparent. Also from the Guardian's editorial: "The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster?"
People in the way
The possibility that computers will make bad decisions is compounded by the biased data people feed them, as Rishidot founder Krishnan Subramanian has highlighted: "[T]here is very little diversity among people building these AI algorithms." This can be mitigated through conscious efforts to hire diverse data engineers and scientists, but it remains a difficult problem.
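To see how skewed inputs become skewed outputs, consider a minimal sketch (with entirely hypothetical data): a naive model "trained" on biased historical decisions simply learns the majority decision per group and reproduces the disparity in its predictions.

```python
from collections import Counter, defaultdict

# Hypothetical historical loan decisions: (applicant_group, approved).
# The data itself is skewed: group A was usually approved, group B usually denied.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

# "Train": learn the majority decision for each group.
by_group = defaultdict(list)
for group, approved in history:
    by_group[group].append(approved)

model = {group: Counter(labels).most_common(1)[0][0]
         for group, labels in by_group.items()}

# "Predict" for new applicants: the skew in the data becomes the rule.
print(model["A"])  # True  -> group A applicants get approved
print(model["B"])  # False -> group B applicants get denied
```

Nothing in the algorithm is malicious; it faithfully generalizes from what it was given, which is exactly the problem when what it was given reflects biased human decisions.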
It's made all the trickier because people (whether they build AI models or not) are influenced by the data coming from the machines. In this way, we can become ever more distant from raw data, and ever less capable of giving good data to our models, as Manjunath Bhat has written: "People consume facts in the form of data. However, data can be mutated, transformed and altered, all in the name of making it easy to consume. We have no option but to live within the confines of a highly contextualized view of the world." Catch that nuance? We rely on ever-increasing quantities of data to make decisions, but that data is increasingly mediated by machines that spoon-feed it to us in easier-to-consume forms.
In short, we're not "getting the facts." As Bhat goes on, "Digital technologies not only augment human reasoning, but [tend] to influence that reasoning." AI, in other words, is influenced by human inputs, but it in turn influences those inputs.
None of which is to say AI is impossible. As mentioned, there are real-world examples of companies putting AI/ML to good use today. It is, however, to suggest that we should approach AI with an increased measure of care and humility, recognizing that AI will too often be influenced by us, for better and for worse.
For more on AI, check out "Pharma researchers test AI for predicting vision loss" and "Why artificial intelligence leads to job growth" at TechRepublic.
Disclaimer: I work for AWS, but in my work I have no involvement (whether direct or indirect) with any teams building AI-related products.