
Transparent machine learning: How to create 'clear-box' AI

AI and robots can be trained to perform many tasks, but these systems often operate as black boxes, leaving us unable to see how decisions are made. Here's how one company created a transparent alternative.


The next big thing in AI may not be getting a machine to perform a task—it might be requiring the machine to communicate why it took that action. For instance, if a robot decides to take a certain route across a warehouse, or a driverless car turns left instead of right, how do we know why it made that decision?

According to Manuela Veloso, professor of computer science at Carnegie Mellon University, explainable AI is essential to building trust in our systems. Veloso, who works with co-bots (collaborative robots), programs the machines to verbalize their decision process. "We need to be able to question why programs are doing what they do," Veloso said. "If we don't worry about the explanation, we won't be able to trust the systems."

To tackle this problem, a startup called OptimizingMind has created tech to gain insight into machine decision-making.

The algorithm aims to enable "clear-box access" that shows how machine learning makes predictions. "This model is based on the brain's real neural networks. Moreover, we convert any deep network to our form, seeing not only the underlying expectations, but also which aspects of the pattern being classified were most important for the decision," said Tsvi Achler, head of OptimizingMind.

Achler, who has a background in neuroscience, medicine, and computer science, thinks that there is a lot we can learn from how the human brain makes, and explains, its decisions.

"What I'm interested in is: What's in the brain that's like a computer? Why can the brain learn any pattern and describe it, so if I said, 'octopus,' you could start telling me what to expect?" asked Achler. "And if I ask, 'What do the tentacles look like?' you can tell me?"

Humans, he said, can see a new pattern and immediately learn it—but for AI, that's not quite possible yet. "You have what's called 'batch learning.' If you want to add one new pattern or one new node, you'd have to retrain all the old patterns with the new pattern from the beginning," Achler said.
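
To make that constraint concrete, here is a minimal sketch in Python using scikit-learn; the data and classifier are illustrative, not anything OptimizingMind uses. Adding even a single example of a new class means refitting on all of the old data plus the new example:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: 100 samples from two known classes.
    rng = np.random.default_rng(0)
    X_old = rng.normal(size=(100, 4))
    y_old = rng.integers(0, 2, size=100)
    clf = LogisticRegression(max_iter=1000).fit(X_old, y_old)

    # A single example of a brand-new class (label 2) arrives.
    X_new = rng.normal(loc=3.0, size=(1, 4))
    y_new = np.array([2])

    # Batch learning offers no way to simply "append" class 2: the only
    # supported path is retraining from scratch on old plus new data.
    X_all = np.vstack([X_old, X_new])
    y_all = np.concatenate([y_old, y_new])
    clf = LogisticRegression(max_iter=1000).fit(X_all, y_all)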

The algorithm Achler developed "displays the neuroscience phenomenon of 'bursting.' When there's a new pattern, we see an activation of multiple neurons, and then they settle down," he said. "When you present a pattern to be recognized, in the next moment, you see this jump, and then gradual coming down. You see the same thing with the algorithm."
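
The jump-and-settle dynamic he describes can be sketched with a toy settling loop. The update rule below is loosely modeled on the regulatory feedback networks in Achler's published research, not OptimizingMind's actual code, and the weights, input, and iteration count are invented for illustration. All units activate at first (the "burst"), then top-down feedback lets the activity come down:

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.random((5, 10))           # 5 stored patterns, 10 input features
    x = W[2] + 0.05 * rng.random(10)  # input resembling stored pattern 2

    y = np.ones(5)                    # broad initial activation: the "burst"
    for step in range(20):
        print(step, round(float(y.sum()), 3))  # total activity jumps, then settles
        x_hat = W.T @ y                        # top-down prediction of the input
        y = y * (W @ (x / (x_hat + 1e-9))) / W.sum(axis=1)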

What his company has done differently, Achler claims, is shift where the optimization happens: "we are actually using the available context and doing the procedure while the context is available," he said.

SEE: Machine learning: The smart person's guide (TechRepublic)

It's a way of rethinking traditional machine-learning approaches, such as deep learning, perceptrons, support vector machines (SVMs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and backpropagation, which, Achler said, were never meant to address the problem of learning in real time. "The whole technology has been evolving with one main objective, which is to get the problem solved," he said. "Nobody was thinking about approaching the problem of making it flexible or trustable. The whole goal is about being more accessible."

OptimizingMind's brain-based algorithm is meant to let developers "peer inside of their networks, understand what they are doing, and easily edit them without retraining from the beginning," said Achler. It allows for "one-shot" learning, through which a neural network can be taught on the spot. For example, Siri could be told the definition of a word, which would then be stored. Today's neural networks can't do that; they have to be retrained to incorporate anything new, typically on thousands of examples.
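
One way to picture one-shot learning is a prototype store, where teaching a new pattern is just writing it into memory. This is a hypothetical sketch with invented names (PrototypeMemory, learn_once), not OptimizingMind's implementation:

    import numpy as np

    class PrototypeMemory:
        def __init__(self):
            self.labels, self.protos = [], []

        def learn_once(self, label, vector):
            # Storing one example is enough; no retraining of old patterns.
            self.labels.append(label)
            self.protos.append(np.asarray(vector, dtype=float))

        def recognize(self, vector):
            # Return the label of the nearest stored prototype.
            dists = [np.linalg.norm(p - np.asarray(vector)) for p in self.protos]
            return self.labels[int(np.argmin(dists))]

    memory = PrototypeMemory()
    memory.learn_once("octopus", [8, 1, 0])   # learned from a single example
    memory.learn_once("squid",   [10, 1, 1])
    print(memory.recognize([8, 1, 0.2]))      # -> "octopus"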

So what does "clear-box" mean? According to Achler, it provides a way to view decision-making in real time. "It can access weights, features, and nodes, providing flexibility to read them as well as change them. Ultimately this enables understanding of how the neural network is arriving at a decision," he said.
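
As a rough illustration of what clear-box access could look like, assuming a model that exposes its weights directly (the matrix, labels, and input below are made up), reading and editing become ordinary array operations:

    import numpy as np

    W = np.array([[0.9, 0.1, 0.0],    # row 0: weights for class "cat"
                  [0.1, 0.8, 0.3]])   # row 1: weights for class "dog"
    x = np.array([1.0, 0.5, 0.0])     # one input pattern

    print("node activations:", W @ x)        # read how each class scores
    print("feature contributions:", W * x)   # which inputs drove the decision

    W[1, 2] = 0.0   # edit a single weight in place; no retraining pass needed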

The tool, Achler said, can significantly reduce machine-development time, which translates into savings for businesses.

In addition to providing transparency, said Achler, the algorithm can also be modified. Not only can "the expectations be expressed, but the individual expectation can also be changed at the instance when new information is available," he said.

Today, most methods of machine learning use a "feedforward" technique. According to Ed Fernandez, co-founder of the VC firm Naiss.io, "Feedforward methods use optimized weights to perform recognition. In feedforward networks, 'uniqueness information' is encoded into weights based on the frequency of occurrence found in the training set." In other words, the weights must be optimized over the entire training set before any recognition happens. OptimizingMind, by contrast, can "perform optimization on the current pattern that is being recognized," Fernandez said, which is "not optimization to learn weights—instead, it's optimization to perform recognition."
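
The contrast can be sketched in a few lines. The feedforward pass applies fixed weights once; the alternative iterates on the activations for the current input until they best explain it. The gradient loop below is a generic stand-in for "optimization to perform recognition," not Fernandez's or OptimizingMind's actual procedure, and the weights and input are invented:

    import numpy as np

    rng = np.random.default_rng(2)
    W = rng.normal(size=(4, 8))       # 4 stored patterns, 8 input features
    x = W[1] + 0.05 * rng.normal(size=8)  # input close to stored pattern 1

    # Feedforward: one pass through weights pre-optimized over the training set.
    y_ff = W @ x

    # Optimization during recognition: adjust the activations y (not the
    # weights) so that W.T @ y reconstructs this particular input.
    y = np.zeros(4)
    for _ in range(500):
        err = x - W.T @ y             # reconstruction error for this input
        y += 0.02 * (W @ err)         # gradient step on activations only

    print("feedforward winner:", int(np.argmax(y_ff)))
    print("settled winner:   ", int(np.argmax(y)))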

As machine learning becomes increasingly interwoven into business, and underpins driverless cars and other high-stakes technology, understanding what happens inside these systems is crucial. In fact, DARPA recently launched a funding initiative for explainable artificial intelligence (XAI).

As Veloso pointed out, "we can't assume that AI systems will be flawless."

We must, however, learn from their mistakes. "If there is an accident tomorrow, it can never be the same accident again," Veloso said.
