Written in a Pittsburgh hotel and despatched to TechRepublic at 6Mbps via open wi-fi from Philadelphia Airport the next day.

Every so often we see a clutch of reports pronouncing the imminent end of Gordon Moore’s law, which says the number of transistors on integrated circuits doubles roughly every two years.
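By way of a back-of-the-envelope illustration, the short Python sketch below projects transistor counts under that two-year doubling rule. The 2012 baseline of roughly three billion transistors is an assumed ballpark for a high-end chip, not a quoted figure.

```python
# Back-of-envelope illustration of Moore's law: transistor counts doubling
# roughly every two years. The baseline (about 3 billion transistors on a
# high-end chip around 2012) is an assumed ballpark, not a measurement.

def transistors(year, base_year=2012, base_count=3e9, doubling_period=2.0):
    """Projected transistor count, assuming a clean doubling every two years."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for year in (2012, 2016, 2020, 2030):
    print(f"{year}: ~{transistors(year):.1e} transistors")
```

Nothing in the sketch is specific to silicon; it simply shows how quickly an unbroken doubling compounds, which is why any hard physical limit eventually bites.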

Recently, we’ve been told it is the limits of lithography, power density, feature definition, and the approaching quantum limit that are sounding its death knell.

Yet every year problems are overcome and semiconductor companies realise yet another ground-breaking generation of chips. And like Intel, the manufacturers always seem to have a road map stretching 10 to 15 years into the future.

Roughly speaking, the history of transistor size and packing density per square millimetre across the industry looks like this:

So, are we actually getting close to some barrier, some limit that cannot be transgressed?

In theory, we should hit serious quantum effects when we get down to five atoms or about 1nm, but that would be for the size of the smallest transistor element, while the dielectric layers tend to be much thinner.
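For a sense of scale, the silicon–silicon bond length is roughly 0.235nm, so a straight row of five atoms spans about a nanometre. The snippet below is simply that arithmetic, with the bond length taken as an approximate textbook value.

```python
# Rough check on the "five atoms is about 1nm" figure. The Si-Si bond
# length (~0.235nm) is an approximate textbook value for crystalline silicon.

SI_SI_BOND_NM = 0.235  # nearest-neighbour spacing in crystalline silicon

def chain_length_nm(n_atoms, bond_nm=SI_SI_BOND_NM):
    """Length spanned by a straight chain of n atoms: (n - 1) bonds."""
    return (n_atoms - 1) * bond_nm

print(f"5 silicon atoms span roughly {chain_length_nm(5):.2f}nm")  # ~0.94nm
```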

However, new device geometries and styles, including 3D structures, may bypass this most critical of limits for a while at least. That being the case, we might reasonably expect Moore’s law to survive until about 2020.

Then what? To answer this one, we have to do a bit of crystal-ball gazing and look at some of the promising results coming out of research labs worldwide, including the single-atom transistor.

If that became a manufacturing reality, we might see Moore’s law validated or extended until 2030. However, today this scenario is almost pure guessology and we might do better by addressing alternative materials, devices, topologies and circuit configurations, not to mention new methods of production that move away from lithography.

Where would I put my money? I see an eventual departure from silicon, with a move to graphites and carbons, followed by a bifurcation into organic and inorganic technologies developed to address applications demanding very different speeds and storage densities. And in both cases I don’t see lithography playing a part, but rather programmable assembly and self-organisation.

Common basic limitation

But no matter whether I am right or wrong, there will be one basic limitation common to silicon and all technologies that is also evident in the human brain.

Ultimately, the size of any computing technology, no matter what the materials, growth, assembly and operational mechanisms, is limited by the need to interconnect the elements, to distribute the energy that powers the system, and to remove the heat generated.

For example, humankind will not be getting any smarter because we have already peaked at a point where the space occupied by the vascular system defines the maximum number of interconnects – dendritic and neural pathways – and the maximum number of neurons.

These factors also seem to be the primary limiting mechanism for any networked computing elements no matter what the technology – wired or wireless.

So what of Moore’s law? I vote that we keep watching the density of the elements, their size and their cost while ignoring the materials and base technology. My guess is that it all has a long way to go.

Why am I optimistic? Because our biology evolved over some four billion years and is distinctly suboptimal, yet we have already achieved quite a lot with our engineering and still have a lot of catching up to do.