If anyone is qualified to talk about the machine-learning revolution currently underway it’s Terry Sejnowski.
Long before the virtual assistant Alexa was a glint in Amazon’s eye or self-driving cars were considered remotely feasible, Professor Sejnowski was laying the foundations for the field of deep learning.
Sejnowski was one of a small group of researchers in the 1980s who challenged the prevailing approach to building artificial intelligence and proposed using mathematical models that could learn skills from data.
SEE: IT leader’s guide to deep learning (Tech Pro Research)
Today those brain-inspired, deep-learning neural networks have led to major breakthroughs in machine learning: giving rise to virtual assistants that increasingly predict what we want, on-demand translation, and computer vision systems that allow self-driving cars to “see” the world around them.

But Sejnowski says machine learning is very much in its infancy, comparing it to the rudimentary aircraft that the Wright brothers flew in the US town of Kitty Hawk at the turn of the 20th century. While a landmark achievement, that early machine appears impossibly crude next to the commercial jets that would follow in its wake.
“What we’ve done, I think, is solve the difficult problems that are precursors to intelligence. Being able to talk on the telephone, and respond to queries and so forth, is just the first layer of intelligence. I think we’re taking our first steps,” he says.
Sejnowski compares the neural networks of today to the early steam engines developed by the engineer James Watt at the dawn of the Industrial Age – remarkable tools that we know work, even though we don’t fully understand how.
“This is exactly what happened with the steam engines. ‘My god, we’ve got this artifact that works. There must be some explanation for why it works and some way to understand it’.
“There’s a tremendous amount of theoretical mathematical exploration occurring to really try to understand and build a theory for deep learning.”
If research into deep learning follows the same trajectory as that spurred by the steam engine, Sejnowski predicts society is at the start of a journey of discovery that will prove transformative, citing how the first steam engines “attracted the attention of the physicists and mathematicians who developed a theory called thermodynamics, which then allowed them to improve the performance of the steam engine, and led to many innovative improvements that continued over the next hundred years, that led to these massive steam engines that pulled trains across the continent.”
How AI research is evolving
While early AI research focused on hard coding the rules for intelligence in a computer program, in the intervening years it became apparent that such an approach was too inflexible to accurately interpret the messy and unpredictable real world.
SEE: Research: Companies lack skills to implement and support AI and machine learning (Tech Pro Research)
If we want to develop machines with the same cognitive abilities as humans — to think, reason and understand — then Sejnowski says we need to look to how intelligence emerged in the brain.
“It turns out that the only proof you could solve any problem in intelligence at all is the fact that nature has already solved it.
“So wouldn’t it make sense to actually ask the question: ‘Well, how did nature solve it? What was the architecture? What are the general principles?’.
“The Wright brothers used general principles about how birds glide in order to design their airfoils. Shouldn’t we also be able to take away things like that from nature?
“The only progress that’s been made in AI over the past 50 years, that is really having an impact on the economy and on science, is really inspired by nature, by the brain. That’s where we are.”
The end result has been the development of neural networks: mathematical models, loosely inspired by the brain, that can pick out patterns in data, giving them the ability to learn how to carry out specific tasks, be that speech recognition or computer vision.
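To make the idea concrete, here is a minimal sketch, an illustration of the general technique rather than code from Sejnowski or any production system, of a tiny two-layer network learning a simple pattern, the XOR function, from data by gradient descent:

```python
# Minimal sketch: a tiny two-layer neural network that learns the XOR
# pattern from four data points via backpropagation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    h = sigmoid(X @ W1 + b1)      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)    # the network's prediction
    # Backpropagate the squared error and nudge every weight downhill.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # approaches [0, 1, 1, 0]: learned, not hand-coded
```

Nothing in the code states the XOR rule explicitly; the network extracts it from the examples, which is the shift Sejnowski and his colleagues argued for in the 1980s.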
SEE: What is deep learning? Everything you need to know (ZDNet)
As further proof of that overlap between machine learning and nature, Sejnowski points to the close relationship between temporal differences, a key mathematical model used in reinforcement learning — where a system learns by trying to maximize a reward — and the workings of the basal ganglia found in vertebrate brains, which helps animals and humans solve real-world problems.
“The basal ganglia helps the animal to learn how to make a sequence of decisions to reach a goal.
“For example, the goal might be ‘I want to catch a fish’. Okay, where do you go? You have to figure out the most likely place to find a fish. You’ve got to come up with some way to catch it. You’ve got to be there when the fish is there. You’ve got to have a spear or some other tool.
“There’s an incredible number of uncertainties in that process, and it’s learned through experience. It’s learned through actually looking around and observing and then making hypotheses. ‘Oh, that’s a stream there’. ‘You know, I saw a fish there yesterday’. ‘Oh, maybe I’ll look again today’.
“So the basal ganglia is doing exactly what you need to solve those problems, and it turns out that, interestingly, that algorithm that is embedded in the brain was also worked out back in the 1980s and called temporal differences.”
This temporal-difference algorithm was used together with deep learning by Google’s game-playing AI AlphaGo, says Sejnowski, and played a role in helping the system beat the world’s leading champion at Go, a game so complex that the total number of positions that can be played is more than the number of atoms in the universe.
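The core of temporal-difference learning fits in a few lines. Below is a hedged sketch of a TD(0) value update, a generic illustration of the technique rather than AlphaGo’s actual code; the states and rewards are invented to match the fishing example above:

```python
# Sketch of a TD(0) update: a value estimate is corrected by the gap
# between what was predicted and what actually happened (the TD error).
def td_update(V, state, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference step; V maps states to value estimates."""
    td_error = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * td_error                          # learn from surprise
    return td_error

# Toy version of the fishing example: value flows back from the reward.
V = {"camp": 0.0, "stream": 0.0, "fish_caught": 0.0}
for _ in range(200):
    td_update(V, "camp", 0.0, "stream")          # walking out: no reward yet
    td_update(V, "stream", 1.0, "fish_caught")   # the catch pays off
print(V)  # "stream" approaches 1.0, "camp" approaches 0.9 (discounted)
```

The TD error in this sketch plays the role that reward-prediction signals in the basal ganglia are thought to play in animals, which is the parallel Sejnowski is drawing.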
However, looking to nature for inspiration also exposes the gulf in complexity between natural systems and even the largest deep-learning neural networks today.
“Look into the brain, and what do you see? Well, deep learning turns out to be a tiny part of what goes on in the brain, tiny. The biggest deep learning networks have on the order of a billion connections, a billion parameters. Well, if you look into your brain and look at one cubic millimeter of the brain, it has about a billion synapses,” he says.
“What we have now is kind of like an almost minuscule little bit of the brain, that we are beginning to master in terms of how to use it to represent things and solve problems.”
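A rough back-of-the-envelope calculation shows the gap. It uses the two figures Sejnowski gives, plus a commonly cited estimate of cortical volume, which is an assumption and not a number from the interview:

```python
# Back-of-the-envelope scale comparison (the cortex volume is a rough,
# commonly cited estimate and an assumption, not a figure from Sejnowski).
network_parameters = 1e9   # "biggest deep learning networks": ~1 billion
synapses_per_mm3 = 1e9     # ~1 billion synapses per cubic millimeter
cortex_volume_mm3 = 5e5    # human cortex: very roughly 500,000 mm^3

brain_synapses = synapses_per_mm3 * cortex_volume_mm3
print(f"ratio: ~{brain_synapses / network_parameters:,.0f}x")
# -> on these numbers, the brain has roughly 500,000 times more synapses
#    than the largest networks of the time had parameters.
```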
Even if society did build a neural network with a comparable number of connections to a human brain, we’d still be missing information about how this network should be structured to give rise to the general intelligence found in humans.
“There’s the rest of the brain, right? The brain doesn’t consist just of the cerebral cortex. Say there are a million deep learning networks in our brain: how do you connect them up? How do they integrate?”
His belief that machine learning researchers should look to nature is echoed by Demis Hassabis, the co-founder of Google DeepMind.
“Studying animal cognition and its neural implementation also has a vital role to play, as it can provide a window into various important aspects of higher-level general intelligence,” he wrote in a paper last year.
An accelerating revolution
While Sejnowski expects a steady stream of incremental improvements in machine-learning capabilities, he says it is impossible to predict when the next nature-inspired breakthrough will occur. For guidance, however, we can look at how long it took to construct the convolutional neural networks (CNNs) that underpin today’s computer vision systems.
“The most successful deep learning network, the go-to network, is the convolutional neural network.
“That’s Yann LeCun’s baby, right? He spent 20 years building that thing to the point where it’s now practical.”
The structure of CNNs builds on human knowledge about the visual system, dating back to research carried out by Hubel and Wiesel in the 1960s.
“So really it took about 40 years to go from the knowledge we had about the brain to being able to build something based on those principles and see how it works.
“It was a process of slow incremental advances and increases in computing power and data.”
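The convolutional principle itself is compact. Here is a minimal sketch, an illustration of the operation rather than LeCun’s implementation, of sliding one small filter with shared weights across an image, the idea that echoes the local receptive fields Hubel and Wiesel described:

```python
# Minimal sketch of the convolution at the heart of a CNN: one small
# filter is reused at every image position, so the network only has to
# learn the filter, not a separate detector for each location.
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((6, 6))
image[:, 3] = 1.0                      # a vertical bar of "light"
edge_filter = np.array([[-1.0, 1.0]])  # fires where brightness jumps
print(convolve2d(image, edge_filter))  # peaks along the bar's left edge
```

A real CNN stacks many such learned filters and pools their outputs, but the weight sharing shown here is the part that took the field from Hubel and Wiesel’s findings to practical vision systems.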
SEE: Deep learning: An insider’s guide (free PDF) (TechRepublic)
Even with the immense processing power available to the largest tech firms and the deep wells of training data that have been amassed, he says significant progress in machine learning is going to require a deeper understanding of the human brain.
To further that goal, Sejnowski was one of the leading academics who helped the Obama White House launch the BRAIN Initiative in 2013. Under the initiative, neuroscientists are working alongside engineers, mathematicians, and physicists to improve the tools available for probing the brain, with a focus on furthering understanding of learning and memory.
Sejnowski expects the initiative will play a role in “developing innovative tools, rapidly accelerating our knowledge of what’s happening in different parts of the brain” and that in turn “new learning algorithms are going to be discovered”, adding that greater understanding of the hippocampus may help illuminate how machine-learning systems could learn from fewer examples, rather than the huge datasets needed to train systems today.
“The other good news is that between 1980 and today there have been tremendous advances in understanding other parts of the brain that have not been exploited by machine learning.”
Despite there being so much we don’t understand about the brain, Sejnowski is optimistic about the rate of future advances, citing the feedback loop between machine learning and neuroscience, with advances in one field accelerating research in the other.
“Neuroscientists are generating terabytes of information per experiment, and how do you analyze it? Machine learning,” says Sejnowski.
“Tools we developed based on the brain are now being used to analyze data from the brain, and this is revolutionizing neuroscience.
“You go to the brain for inspiration, you build a machine, and now you use the machine to understand the brain, and that loop is turning faster and faster.”
- Sejnowski’s new book The Deep Learning Revolution, covering how deep learning is changing our lives and transforming the economy, is available from MIT Press from October 30.
Read more:
- Facebook’s machine learning director shares tips for building a successful AI platform (TechRepublic)
- AI helpers aren’t just for Facebook’s Zuckerberg: Here’s how to build your own (TechRepublic)
- IBM Watson: What are companies using it for? (ZDNet)
- How developers can take advantage of machine learning on Google Cloud Platform (TechRepublic)
- How to prepare your business to benefit from AI (TechRepublic)
- Executive’s guide to AI in business (free ebook)