Hardware

Moore's law: About to fail or ticking along nicely?

For over 50 years, progress has been driven by the exponential growth of integrated circuits and the industries they power. Will that trend continue or reach an abrupt end?

Written in a Pittsburgh hotel and despatched to TechRepublic at 6Mbps via open wi-fi from Philadelphia Airport the next day.

Every so often we see a clutch of reports pronouncing the imminent end to Gordon Moore's law, which says the number of transistors on integrated circuits doubles roughly every two years.

Recently, we've been told it is the limits of lithography, power density, feature definition, and the approaching quantum limit that are sounding its death knell.

Yet every year problems are overcome and semiconductor companies realise yet another ground-breaking generation of chips. And like Intel, the manufacturers always seem to have a road map stretching 10 to 15 years into the future.

Roughly speaking, the history of transistor size and packing density per square millimetre across the industry looks like this:

Image: Peter Cochrane/TechRepublic
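
For anyone who wants to play with the numbers behind that curve, here is a minimal sketch of the underlying doubling model, assuming a two-year doubling period and an illustrative 1971 baseline of roughly 2,300 transistors; the baseline is an assumption chosen for illustration, not a figure taken from the chart.

```python
# A minimal sketch of the doubling model behind Moore's law: transistor
# counts doubling roughly every two years. The 1971 baseline of ~2,300
# transistors is an illustrative assumption, not a figure from the chart.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Project transistors per chip for a given year under pure doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1991, 2011):
    print(y, f"{transistors(y):,.0f}")
```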

So, are we actually getting close to some barrier, some limit that cannot be transgressed?

In theory, we should hit serious quantum effects when we get down to five atoms or about 1nm, but that would be for the size of the smallest transistor element, while the dielectric layers tend to be much thinner.
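
As a back-of-the-envelope check on that figure, taking roughly 0.22nm as the diameter of a silicon atom (an assumed round number, twice the covalent radius), five atoms do indeed span about 1nm:

```python
# Back-of-the-envelope check on the "five atoms, about 1nm" figure.
# The ~0.22nm silicon atom diameter is an assumed round number.

si_atom_diameter_nm = 0.22
atoms = 5
print(f"{atoms} atoms span roughly {atoms * si_atom_diameter_nm:.1f} nm")  # ~1.1 nm
```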

However, new device geometries and styles, including 3D structures, may bypass this most critical of limits for a while at least. That being the case, we might reasonably expect Moore's law to survive until about 2020.

Then what? To answer this one, we have to do a bit of crystal-ball gazing and look at some of the promising results coming out of the research labs world-wide including the single atom transistor.

If that became a manufacturing reality, we might see Moore's law validated or extended until 2030. However, today this scenario is almost pure guessology and we might do better by addressing alternative materials, devices, topologies and circuit configurations, not to mention new methods of production that move away from lithography.

Where would I put my money? I see an eventual departure from silicon with a move to graphites and carbons, followed by a bifurcation into organic and inorganic technologies developed to address applications demanding very different speeds and storage densities. And in both cases I don't see lithography playing a part, but rather programmable assembly and self-organisation.

Common basic limitation

But no matter whether I am right or wrong, there will be one basic limitation common to silicon and all technologies that is also evident in the human brain.

Ultimately, the size of any computing technology, no matter what the materials, growth, assembly and operational mechanisms, is limited by the need to interconnect elements, distribute the energy to power the system, and the need to remove the heat generated.

For example, humankind will not be getting any smarter because we have already peaked at a point where the space occupied by the vascular system defines the maximum number of interconnects - dendritic and neural pathways - and the maximum number of neurons.

These factors also seem to be the primary limiting mechanism for any networked computing elements no matter what the technology - wired or wireless.
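
To put a rough number on the heat side of that constraint, here is a hedged back-of-envelope sketch: dynamic switching power per device is of the order of C x V^2 x f, and multiplying by packing density and switching activity gives the power density the cooling system has to remove. Every figure below is an assumption chosen only to show the shape of the problem, not a measurement.

```python
# Rough, assumed numbers only: per-device switching power ~ C * V^2 * f,
# times packing density and activity, gives a power density that air or
# liquid cooling must carry away.

switch_cap_F = 0.1e-15      # assumed effective capacitance per device (0.1 fF)
supply_V = 0.8              # assumed supply voltage
freq_Hz = 3e9               # assumed clock frequency (3 GHz)
activity = 0.1              # assumed fraction of devices switching each cycle
density_per_mm2 = 100e6     # assumed transistors per square millimetre

power_per_device_W = switch_cap_F * supply_V ** 2 * freq_Hz
power_density_W_mm2 = activity * density_per_mm2 * power_per_device_W
print(f"~{power_density_W_mm2:.1f} W/mm^2 (~{power_density_W_mm2 * 100:.0f} W/cm^2)")
```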

So what of Moore's law? I vote that we keep watching the density of the elements, their size and their cost while ignoring the materials and base technology. My guess is that it all has a long way to go.

Why am I optimistic? Because our biology evolved over some four billion years and is distinctly suboptimal, yet our engineering has already managed to achieve quite a lot - and it still has a lot of catching up to do.

About

Peter Cochrane is an engineer, scientist, entrepreneur, futurist and consultant. He is the former CTO and head of research at BT, with a career in telecoms and IT spanning more than 40 years.

12 comments
Bazzie

@peter You are absolutely correct, and my resulting conclusion is that perhaps we are limiting ourselves too much by staying in 2 dimensions. Perhaps the only way to now move forward is to go into the 3rd, possibly using holography, don't know, I'm not a semiconductor expert - that goes back to 1987 when I finished my engineering degree...

Interesting though, as one starts to compare the brain architecture to computer architecture. The benefit of the brain is that it's like building the rules engine plus an ability to self-learn and change the rules, in real time, in hardware as opposed to our normal software implementation where the memory is separate from the processing unit. It's like comparing graphics processors to X86 CPUs trying to do graphics via software - there's just no performance comparison.

I still however can't stop marveling at the brain. If you start to look at supercomputers, compare the size and capability to the size and capability of the brain. Consider energy consumption and the heat problem referred to above. The only way to get a similar result in a computer is to also go massively parallel. The problem with heat is that if you serialise the processing, you have to increase the speed with which you can do 1 thing at a time in order to improve performance - and that's when you have to start increasing power, thus generating heat. With massively parallel you can slow down the individual processors in order to cut power, and still get a huge amount of processing done in a short space of time.

YetAnotherBob

We are coming to the end of Moore's Law. It will be larger than the 5 atoms listed in the article. Experiments from the mid 1990s on showed what the limits of miniaturization really are. For a transistor, it is 7 atoms. For a capacitor, it's around 100 times that. For a wire, metal wires have to be at least 60 atoms wide to reliably carry current. The proposals to use nanotubes are about that size. So, the limit on lithography will be established by the minimum size that can connect trace wires that are 60 atoms wide, or around 5 to 10 nanometers. That's about eight times denser than anything we can achieve today.

We have already reached one limit on Moore's Law. The increases in clock speed that were expected and commonplace 20 years ago are no longer happening. The reason is that the heat dissipated in the circuit is a function of the frequency. By 2002, the fastest chips were literally melting. That's the whole reason for water-based cooling, and other attempts to increase cooling. To be able to increase the clock speeds again, we will need to replace silicon with carbon. That lets us raise the temperature of the chips by over an order of magnitude.

Moore's Law will not stop, it will instead just slow down. For example, between 1980 and 1990, microprocessor speed went from 3 MHz on the original PC to 100 MHz on the Pentium. From 1990 to 2000, microprocessor speed went from 100 MHz to 400 MHz. From 2000 to 2010, microprocessor speed went from 400 MHz to around 2 to 3 GHz. There it has stalled since around 2004. Intel doesn't even advertise its products any more based on processor speed. There have been increases in processor speed, but they are not dramatic any more.

atuldeshmukh

Absolutely, the next big thing to work on would be chemical and biological addressing buses, like the human brain as a processor.

Slayer_

Hard drive secondary storage especially is really slow. SSDs are a step in the right direction but are still slow compared to RAM.

Bazzie

Good article in general, however I hope I misread what you meant about the human brain, because if you did mean it, I believe you are dead wrong. While computers may be faster in what they can do, there's a lot that we can't make them do yet that our brains can do.

Simple example - place a piece of paper half-way over a line of text (cover the bottom half of the line). Our brains easily filter out the 'noise' and we can read the line. Try to get a standard OCR system to do that; I've not seen one that can. It could quite possibly be done if expressly programmed to do it. The amazing thing is, you've probably never done that before, yet your brain was able to cope with the noise. No-one had to expressly program the brain to do that, it learnt it all by itself.

Man has managed, in isolation, to mimic certain features of our brains, but no system exists that can fully reach the brain's potential. We have an incredible ability for massive multi-tasking, the brain checks/confirms/discards millions of links all the time, building up our world-view. While the outcome may be flawed due to garbage-in, the ability of the brain to do all of this is unique, and really beautiful and amazing.

peter

Remember that electronics was born around 1912 - 15, so let's say we have been at it for a mere 100 years. Mother nature has been on the go for 530M years since the Cambrian Explosion and a total of around 3.6Bn years since cells first formed. So let's be a little patient - we are hardly at first base :-)

peter

We won't be using lithography! It is worth reading up on the latest thinking and experiments. Single atom devices have been made, and self organisation is looking like a potential winner. The biggest limiter today is the $10Bn required to build a fab plant! This is the real end of the road - time to move on from the old ways and adopt the radically new!

peter

Biological systems/brains vary so widely - from the semi-mechanical (ants) through to the semi-wireless (human) with EM and chemical transmission. Buses are one convenient structure - but only when they are used in the right place. But then again in biological systems you find serial and parallel transmission, pre-processing, post-processing and combined memory and processing. We are gradually getting a handle on all this! BUT REMEMBER mother nature never optimises anything - she always goes for 'good enough' and much of evolution can be honed and improved. OUR CHALLENGE is to get down to the right microscopic scale in order to achieve similar or better densities whilst not losing too much speed and flexibility.

peter

This is true of silicon and biological systems! The next big advance is to put processing into the memory function and to get away from current architectures.

peter

That is like comparing a man and a mouse brain! Supercomputers can do even better than that by a looong way! BUT these kinds of comparisons are generally worthless - far better to look at what biology + engineered systems can achieve together. This is not some kind of contest - quite the reverse!

Slayer_

The same trick works with misspelled words. As long as the first few and last few letters are correct, we can mostly figure out what the word is.

Bazzie

Exactly what I was saying earlier. The first step would be to make purpose-built processors that can do the common (read: frequently performed) complicated things very fast and cheap. The second step would be to build self-learning systems, give them basic abilities like understanding language, and then let them loose on specific subject matter.
