Machines can learn a lot of things, probably more than you can imagine. But can they learn common sense? David Ferrucci, the builder of IBM’s original Jeopardy!-playing version of Watson, recently discussed his latest AI effort: an attempt to instill commonsense knowledge from the everyday world into the automated reasoning performed by artificial intelligence (AI).
At his company, Elemental Cognition, Ferrucci described how his AI team gave an advanced language program the sentence, “Zoey moves her plant to a sunny window. Soon …” and tasked the program with completing the second sentence.
A human would likely complete the sentence by saying, “the sun will help the plant to grow and stay healthy.” In the real world, it’s common knowledge that plants need light.
Unfortunately, the AI program couldn’t deliver this common observation. Instead, the AI completed the sentence by analyzing statistical patterns. It came up with these possible answers: “she finds something, not pleasant,” “fertilizer is visible in the window,” and “another plant is missing from the bedroom.”
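The “statistical patterns” at work here can be illustrated with a toy sketch. The following is a minimal bigram model (an illustration, not Elemental Cognition’s actual system): it extends a prompt by always choosing the word that most frequently followed the previous word in its training text, with no notion of what plants actually need.

```python
from collections import defaultdict, Counter

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def complete(follows, prompt, length=4):
    """Greedily extend the prompt with the most common next word."""
    words = prompt.lower().split()
    for _ in range(length):
        counts = follows.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# A tiny hypothetical "training set": pure word statistics,
# containing nothing about sunlight or plant biology.
corpus = (
    "the plant is missing from the bedroom "
    "the plant is missing from the window "
    "fertilizer is visible in the window"
)
model = train_bigrams(corpus)
print(complete(model, "the plant", length=4))
```

With no grounding in the physical world, the model simply continues with whatever pattern is most frequent in its data, which is why completions like the ones Ferrucci’s team observed can look fluent yet miss the obvious.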
This story is an entry point to the myriad “common sense” issues that face today’s AI. It begins to explain why a self-driving vehicle may not be able to distinguish the varying degrees of danger in striking a traffic cone versus striking a pedestrian.
“The great irony of common sense—and indeed AI itself—is that it is stuff that pretty much everybody knows, yet nobody seems to know what exactly it is or how to build machines that possess it,” said Gary Marcus, CEO and founder of Robust.AI. “Solving this problem is, we would argue, the single most important step towards taking AI to the next level. Common sense is a critical component to building AIs that can understand what they read; that can control robots that can operate usefully and safely in the human environment; that can interact with human users in reasonable ways. Common sense is not just the hardest problem for AI; in the long run, it’s also the most important problem.”
If common sense is basic knowledge that ordinary people possess, infusing it into AI processing by necessity means engaging regular people and what they know, not just data scientists. Many AI companies are attempting to do this, but it isn’t easy. Additionally, in the global markets that most major AI vendors sell to, what counts as common sense in England won’t necessarily match what constitutes common sense in China or in Saudi Arabia.
This is a tribute to the diversity of human minds and experiences, and it’s hard to imagine that those developing AI programs will ever bring AI fully up to speed with what humans experience and learn, especially since what constitutes common sense continues to evolve, just as human thinking and the world around us do.
This brings us full circle to the enterprises using AI today. How do you employ AI so it assists in planning, decision-making, and running your company, while minimizing the risk of “false positives” or nonsensical results?
1. Use AI, even if it isn’t perfect
Just because AI isn’t 100% accurate in every situation doesn’t mean it isn’t an effective tool. Companies in virtually every industry sector are using AI, and those that fail to do so will find themselves at a competitive disadvantage.
2. Retain your human experts
One semiconductor company replaced its “old” material scientists with automated AI programs and a younger cast of engineers who lacked the experiential knowledge of working with alternate materials when preferred materials weren’t available. Now the company is having trouble keeping pace.
The moral of the story: Don’t rush to put your in-house experts out to pasture. Whether it’s a brewmaster who alone seems to know your recipe or a material scientist who knows which materials can be substituted, you should always have human “backup” for your AI.
3. AI should have human overrides
In recent years, self-driving vehicle makers have made headway in introducing autonomous vehicles without human controls. I’m going to take a contrarian position here and argue that, for “backup” reasons alone (e.g., a software or internet outage), there should be human overrides for autonomous processes in which AI plays a major role. In companies, a human being should at some point oversee AI operations to ensure they stay on course and that the results they yield make common sense as well as logical sense.
4. There is a place for no common sense
When Pat Riley was coaching the Los Angeles Lakers, he was once asked how he evaluated team game performance. The anticipated response was that Riley would cite statistics like points scored, rebounds, and assists. Instead, Riley talked about an “effort index”: he had someone track every time each player went up for the ball on a rebound. He reasoned that the more times players went up for rebounds, the better the chance of getting the ball, and thus of winning. No one else thought that way at the time because the approach fell outside the common sense that teams then relied on.
AI is like this, too. You might be disappointed when it acts nonsensically, but it might also tell you that the wives of soccer fans in England did most of their online shopping on days when their husbands were at games. No one with any “common sense” would have predicted this, but the finding was revenue-enhancing for one UK retailer.
5. Find a balance
Companies must find the right “mix” of common sense and uncommon results that they want from their AI. They will achieve this balance through experimentation with AI and continuous oversight by human subject matter experts and data scientists.