
IBM's brain-like chip and the quest for a 'cognitive planet'

IBM recently announced its TrueNorth chip, which operates more like a brain than a conventional processor. Here are the details of the chip and how IBM sees it changing the future of computing.

Image: IBM

Despite the numerous advances in computing over the past few decades, some have begun to question whether the traditional computer architecture model can keep up with today's data-heavy era of cognitive computing.

Researchers at IBM have been pondering that question for more than a decade. IBM's Dharmendra Modha recalls the exact date his team began thinking about how to pursue computing in a different way: July 16, 2004.

"It was becoming evident that the computer architecture that human civilization has been pursuing, that has been with us since 1946—the days of ENIAC and Von Neumann architecture, is beginning to reach its limits because of its sequential nature," Modha said.

Thus began IBM's 10-year journey to develop a new kind of technology to power a new kind of computing, which eventually led to its new chip, TrueNorth. In 2006, Modha chaired the Almaden Institute on Cognitive Computing. Then, in 2008, he submitted an IBM research proposal and won the DARPA SyNAPSE grant.

After securing nearly $60 million in total government funding, the IBM team held a meeting in 2010 and decided how to focus the project and move forward toward a brain-like chip. The concept is formally known as neuromorphic computing.

Traditional computing architecture relies on separate memory and processors: the processor runs computation steps that access the memory. It's a dichotomy. In the brain, however, memory and computation are interwoven, said Nir Shavit, a professor at MIT CSAIL.

In the brain, a neuron is both a memory device and a computation device, Shavit said. The neuron is a nerve cell, and the synapse is the structure that permits neurons to communicate with one another. The assumption is that synapses strengthen in response to inputs, Shavit said.

"Therefore the next time this computing unit fires, then it fires at a stronger strength or a weaker strength, and that is the way that we learn to compute," Shavit said. "The premise of neuromorphic computing is to capture that paradigm."

The first chips were unveiled in August 2011. They were at the scale of a worm brain, with 256 neurons apiece; one version had roughly 64,000 synapses and the other roughly 256,000. Three years later, a later version of the chip graced the cover of Science. This one had one million neurons and 256 million synapses. Modha described it as "literally a supercomputer the size of a postage stamp, consuming the power of a hearing aid battery."

From there, IBM created single-chip boards the size of a human palm, which it eventually brought together with the software ecosystem it had previously put in place.

The TrueNorth chip core array.
Image: IBM

One of the chip's most distinctive features is that chips can be tiled together: put two chips next to each other, east-west or north-south, and they can communicate without the need for additional communication architecture. One of the main selling points, however, is how little power it consumes. The TrueNorth chip draws just 70 milliwatts while delivering 46 billion synaptic operations per second per watt.
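For a sense of scale, a quick back-of-the-envelope calculation from the figures quoted above (assuming the per-watt reading of IBM's throughput number) works out to roughly 22 picojoules per synaptic operation and a few billion synaptic operations per second within the 70-milliwatt budget. The constants below are just those quoted figures.

# Rough arithmetic from the quoted TrueNorth figures (assumed, not measured here).
POWER_W = 0.070                   # quoted power draw: 70 milliwatts
OPS_PER_SEC_PER_WATT = 46e9       # quoted efficiency: 46 billion synaptic ops/s per watt

energy_per_op = 1 / OPS_PER_SEC_PER_WATT        # joules per synaptic operation (~2.2e-11 J)
ops_at_70mw = POWER_W * OPS_PER_SEC_PER_WATT    # throughput at 70 mW (~3.2e9 ops/s)

print(f"~{energy_per_op * 1e12:.0f} pJ per synaptic operation")
print(f"~{ops_at_70mw / 1e9:.1f} billion synaptic operations per second at 70 mW")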

As far as applications go, Modha said IBM is increasingly focused on cognitive computing, and he is particularly interested in IoT at the edge of the network. By the end of the decade, in 2020, IBM expects there will be nearly 30 times more sensor-enabled devices than humans. With this in mind, Modha said he wants to bring computation to those sensors, embedding intelligence at the edge.

"Truly, our vision is a cognitive planet," he said.

The other obvious application for a brain-like chip is artificial intelligence. However, it's still too new to know whether it will help or hurt the field, said University of Maryland professor Jennifer Golbeck.

"Previous brain-inspired techniques work well, but they don't work like the human brain," Golbeck said. "Techniques that aren't brain-inspired often work better."

Still, TrueNorth is different from previous attempts at this kind of architecture. For example, she said, the capability to quickly find patterns in data could be an asset to certain types of AI, like self-driving cars. Ultimately, though, she believes AI researchers just need to try it and see what happens.

"Because it's so different, it will certainly lead us to try new things," Golbeck said. "Whether those things transform AI or if they are just different ways of implementing stuff we already know how to do is something we just have to wait and see."

The ultimate neuromorphic computer would be one in which computation and memory are completely intertwined, mimicking an organic neural network. Shavit said it's an interesting model and a noble idea, but he believes it has a lot of problems.

For starters, Shavit said, there are fundamental questions about how neurons compute that we still cannot answer, questions scientists are still debating.

"We don't know that much about how neurons compute right now, to be able to design a real neuromorphic computing device where the algorithms are actually embedded in hardware," Shavit said.

What's essentially being built in this field, Shavit said, is a platform for running machine learning processes faster and more efficiently. Those processes are achieving "beautiful results" and are "extremely helpful," he said, but they're not necessarily how brains compute.

Shavit contends that by specializing the hardware, compared with a general-purpose computer, you could be limiting your flexibility and your ability to implement new paradigms should they emerge. The interesting point of the TrueNorth hardware is its low power consumption, Shavit said, but the main question is what is being traded away for that lower consumption.

The hope is that, with neuromorphic computing, we will one day be able to mimic the human brain. Shavit said we have to understand the brain's connectivity before building the hardware.

"History tells us that almost everything you can do with specialized hardware, you can do also with general purpose hardware if you just work the software well enough," Shavit said.

Still, Shavit said neuromorphic computing is a "beautiful idea" and something we should pursue in the future. But he believes there are too many open parameters: before diving in, we need more research comparing it with traditional models, a better understanding of neural networks, and to know how to properly code machine learning algorithms.

"I would not get too excited," Shavit said.

What do you think?

We want to know. Are projects like TrueNorth the next step in neuromorphic computing? Or should we focus on software first?
