Carnegie Mellon invests $12M into AI to 'reverse-engineer the brain'

A research project at Carnegie Mellon will try to emulate the brain and apply the insights to machine learning. Here's what you should know.


The primary goal of artificial intelligence is to create machines with the ability to reason. One of the best ways to achieve this? By studying the best working model we have: the human brain.

AI researchers have long been interested in using artificial neural networks, loosely modeled on the brain's own biological systems, to help machines learn. Now, Carnegie Mellon's Computer Science Department is investing $12 million in brain research, attempting to dissect the "rules" of the brain and apply them to the technology behind AI.

Professor Tai Sing Lee is leading the project, funded through the Machine Intelligence from Cortical Networks (MICrONS) research program—part of President Barack Obama's BRAIN Initiative.

While AI researchers have already been using neural nets, this research is an effort to update them, improving face recognition, speech processing, and decision-making capabilities in machines.

SEE: Artificial Intelligence and IT: The good, the bad and the scary (Tech Pro Research)

"The implications to technology could be enormous," said Lee. "Right now, the most powerful neural network is based on ideas from the '50s."

According to Professor Lee, the advanced computing power and labeled data provided by companies like Google and Facebook have been central to the success of neural network algorithms. Google has DeepMind, Facebook has an AI lab, Baidu has a deep learning institute. "They are all trying to capitalize on the advances in neural networks," said Lee.

Image: Professor Tai Sing Lee. Credit: Tim Kaulen

The problem? "The algorithms are based on a feed-forward architecture: one layer of neurons on top of another, mapping input to output."
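
To make that layered picture concrete, here is a minimal Python sketch of a feed-forward pass of the kind Lee describes; the layer sizes, weights, and function names are illustrative assumptions, not anything from the MICrONS project.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied after each layer
    return np.maximum(0.0, x)

def feed_forward(x, weights):
    """One-way pass: each layer feeds the next, mapping input to output."""
    activation = x
    for W in weights:
        activation = relu(W @ activation)
    return activation

# Illustrative two-layer stack: 4 inputs -> 8 hidden units -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((2, 8))]
print(feed_forward(rng.standard_normal(4), weights))
```

Note that information only ever moves one way here; nothing flows back down the stack, which is exactly the limitation Lee points to next.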

Lee said that what's missing is recurrent connections. "When information feeds forward," said Lee, "there's also a lot of information coming back." Sometimes, he said, "the feedback carries 10 times more connections than the feed-forward connections. The brain must be doing something, or else it wouldn't be wasting so many cables."

Lee is interested in looking at what the feedback is doing. A mere 5-10% of neuron communication comes from the bottom up, he said. "Most information is coming from other brain areas."

The feedback, he believes, is synthesizing information—letting us know what we should expect to see.
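
The article doesn't specify a model, but the idea of feedback carrying expectations can be sketched with a toy predictive-coding-style loop, in which a higher layer sends its prediction back down and the feed-forward path carries only the surprise. Everything here (shapes, tied weights, the update rule) is an illustrative assumption, not the project's architecture.

```python
import numpy as np

def feedback_loop(x, W_ff, W_fb, steps=10, rate=0.1):
    """Toy recurrent loop: the higher layer predicts the input,
    and the feed-forward path carries only what the prediction
    failed to explain (a predictive-coding-style sketch)."""
    top = np.zeros(W_ff.shape[0])
    for _ in range(steps):
        prediction = W_fb @ top            # feedback: what we expect to see
        error = x - prediction             # mismatch between input and expectation
        top = top + rate * (W_ff @ error)  # update the higher layer on the error
    return top, error

rng = np.random.default_rng(1)
W_ff = 0.1 * rng.standard_normal((3, 6))
W_fb = W_ff.T                              # tied feedback weights, for simplicity
top, residual = feedback_loop(rng.standard_normal(6), W_ff, W_fb)
print(top, residual)
```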

"Now is an opportunity to look deeply into the detail for the machinery," said Lee. "This is a high risk, high impact project. It has not been done before."

SEE: Why robots still need us: David A. Mindell debunks theory of complete autonomy

Right now, no one has been able to reconstruct the brain's circuitry. Although there have been many models, Lee said there have been no definitive answers. To do this, his team will look at very small tissue samples from animal brains (1 mm x 1 mm x 1 mm), containing roughly 50,000 neurons (a human brain has billions, he said). Lee wants to recreate a circuit of neural activity and "incorporate the computational rules into a computer system."

Lee hopes the research will lend insight into how current brain-inspired neural networks work, and when they break down.

Current neural networks, said Lee, require a huge amount of data. "Basically, you need a teacher to present an image, say a car, and then there's a detector. That's supervised learning."
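
As a concrete, deliberately tiny stand-in for that teacher-and-detector setup, here is a supervised-learning sketch in Python. The synthetic data and the logistic-regression detector are assumptions for illustration only.

```python
import numpy as np

# A "teacher" presents examples with labels (car = 1, not car = 0),
# and a simple detector adjusts its weights until its guesses match.
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))              # toy image features
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])  # hidden rule the teacher knows
y = (X @ true_w > 0).astype(float)             # teacher-provided labels

w = np.zeros(5)
for _ in range(100):                           # logistic-regression gradient steps
    p = 1.0 / (1.0 + np.exp(-(X @ w)))         # detector's current guesses
    w -= 0.1 * X.T @ (p - y) / len(y)          # nudge weights toward the labels

print("training accuracy:", ((X @ w > 0) == (y > 0)).mean())
```

The point of the sketch is the dependency Lee objects to: every single training example needs a label supplied from outside.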

What Lee wants to do is understand how we can learn from interactions with the world, what he calls "unsupervised learning from examples."

For example: "If you see Spock from Star Trek, then based on that example, you see how this person is similar to us and yet has distinct features. Then you generalize—when you meet another person from that planet, you know they're the same species."

If we can improve unsupervised learning, he said, it would mean that we could learn from just a few examples instead of all of the examples required for supervised learning.
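
For flavor, the "generalize from one Spock" idea can be sketched with the simplest few-shot mechanism there is: nearest-neighbor matching on feature vectors, with one stored example per class. This is an illustration of learning from few examples, not the mechanism Lee's team is studying.

```python
import numpy as np

def classify_from_one_example(query, exemplars):
    """Assign the query to the nearest stored example; one example
    per 'species' is enough, in the spirit of learning from few
    examples (plain nearest-neighbor, not Lee's actual approach)."""
    names = list(exemplars)
    dists = [np.linalg.norm(query - exemplars[n]) for n in names]
    return names[int(np.argmin(dists))]

rng = np.random.default_rng(3)
# One stored feature vector per species, e.g. from a single encounter
exemplars = {"human": rng.standard_normal(8), "vulcan": rng.standard_normal(8)}
# A new individual who resembles the single "vulcan" we have met
query = exemplars["vulcan"] + 0.1 * rng.standard_normal(8)
print(classify_from_one_example(query, exemplars))  # -> "vulcan"
```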

That's what humans are good at: being creative, imaginative, having foresight. "We want to create feedback that uses our internal model of the world so we are not reactive, but predictive," he said.

SEE: 10 terrifying uses of artificial intelligence

When driving, for example, we need to consider many factors: Where are the pedestrians? Are we aligned with the road? What do the signs say? These are things we pay close attention to when learning, but soon they become automatic. Lee says this is because we have internal models to help us predict. "It allows for fast understanding," he said. "When something is different from expectations, we can react very fast."

"It's like the Apollo project of the brain," said Lee. "You have to trust in the mathematics, computation, and engineering that we can do it."

"Whether we will land on the moon is uncertain."
