The future of processors, part 2: Processes

Limitations of current fabrication techniques and basic physics will soon put an end to processor performance/price growth as we have known it in the silicon/Moore's Law era. What comes next is still taking shape.

New processor designs are one part of how chips are developing, but they go hand-in-hand with new technologies and advances in how chips are made and what they're made from. 'Process' is the shorthand for the overall bundle of design rules and manufacturing steps that actually produces chips, and to date advances have usually been about making things smaller, in more complex patterns, and out of an ever-expanding menu of materials.

Intel characterises chip progress as the 'tick-tock' cadence. A 'tick' is an existing design implemented on a new process, while a 'tock' is a new design on an existing process. Each gives a fillip to performance and reduces costs, and the two-stage approach minimises the risk that something will go horribly wrong.

The headline process improvement is to make things smaller. Chips are made through photolithography: a slice of silicon has a layer of chemicals deposited on its surface, followed by a light-sensitive 'ink' called a photoresist. A pattern is projected onto the surface of the wafer, setting the resist where light strikes. Acid then eats away the unprotected parts, leaving a delicate tracery of the original chemicals in that pattern. Repeating the process builds up the components and connections that form the logic circuits of the chips. The wafer is then diced, the chips are put in packages with electrical connections to the outside world, and you have your processor.
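To make the expose-and-etch cycle concrete, here's a deliberately simplified toy model (nothing like real fab software, purely illustrative) that treats the mask, resist and etch steps as boolean operations on a grid:

```python
import numpy as np

# Toy model of one photolithography expose-and-etch cycle.
# Purely illustrative -- real fabrication involves no such code.

layer = np.ones((8, 8), dtype=bool)   # deposited chemical layer (True = material present)
mask = np.zeros((8, 8), dtype=bool)   # the pattern projected onto the wafer
mask[2:6, 2:6] = True                 # light passes through this region only

resist_set = mask                     # the 'ink' sets where light strikes
layer &= resist_set                   # the etch removes everything unprotected

print(layer.astype(int))              # material survives only under the set resist
```

Real chips repeat this cycle dozens of times with different masks, each layer building components and wiring on top of the last.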

If you shrink those patterns, you get more components per square inch — and thus more powerful or more plentiful chips — for the same material cost. This simple observation has driven Moore's Law for fifty years. Smaller transistors also tend to run faster and use less power: this is how we've ended up with our insanely powerful technology at ever decreasing costs.

The first processor, the Intel 4004 from 1971, had a feature size — roughly, the most important dimension of an individual transistor on the chip — of 10 micrometres. Intel is planning to introduce 10 nanometre process chips in 2016. A nanometre is a thousand times smaller than a micrometre, and that shrink applies in both dimensions, which means that a million 2016 transistors — enough to make around four hundred 4004 processors, which used about 2,300 transistors each — will fit into the space taken by one of the 4004's original transistors.
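The arithmetic behind that claim is quick to check (a minimal sketch; the 2,300-transistor count for the 4004 is a widely cited figure, the rest follows from the numbers above):

```python
# Back-of-the-envelope scaling arithmetic from the figures in the text.

old_feature_nm = 10_000   # Intel 4004 (1971): 10 micrometres = 10,000 nm
new_feature_nm = 10       # Intel's planned 2016 process: 10 nm

linear_shrink = old_feature_nm / new_feature_nm   # 1,000x per dimension
area_shrink = linear_shrink ** 2                  # 1,000,000x by area

transistors_in_4004 = 2_300                       # widely cited count
print(f"{area_shrink:,.0f} new transistors fit in one old transistor's area")
print(f"That's roughly {area_shrink / transistors_in_4004:.0f} whole 4004s")  # ~435
```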

The limits of lithography

Intel's processor fabrication roadmap. Image: Intel

But the magic is running out. Intel only has public plans to go as far as 7nm, and then 5nm — and there are huge problems getting there. The primary one is the light that creates the pattern: beyond 10nm it's widely thought only very short wavelengths will do, which means extreme ultraviolet (EUV). But available EUV light sources aren't strong enough: wafers have to sit for a long time while the resist absorbs enough energy to react, and if you can't run a chip fabrication plant at high speed, you can't make enough money to justify it. EUV has been under development for decades and has consumed tens of billions of dollars in R&D funding. It still doesn't work well enough: ASML, the company developing EUV, said in mid-2014 that its EUV machine could manage around 200 wafers a day; meanwhile, its non-EUV machine, which immerses the wafer in water and exploits water's higher refractive index to resolve finer features, could do more than 5,000.
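That throughput gap feeds straight into fab economics. A rough sketch using the figures above (the capital cost below is an invented placeholder, purely for illustration):

```python
# Throughput comparison from the 2014 ASML figures quoted above.

euv_wafers_per_day = 200          # EUV tool
immersion_wafers_per_day = 5_000  # immersion (non-EUV) tool

ratio = immersion_wafers_per_day / euv_wafers_per_day
print(f"The immersion tool processes {ratio:.0f}x more wafers per day")  # 25x

# Hypothetical illustration: amortising a fab's capital over its output,
# a 25x throughput shortfall multiplies the capital cost per wafer by 25.
capital = 10e9                    # placeholder figure, not a real fab cost
per_wafer_euv = capital / (euv_wafers_per_day * 365 * 5)
per_wafer_imm = capital / (immersion_wafers_per_day * 365 * 5)
print(f"Capital per wafer over 5 years: ${per_wafer_euv:,.0f} vs ${per_wafer_imm:,.0f}")
```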

NXE:3100: ASML's second-generation EUV lithography tool. Image: ASML

There are possible alternatives, such as using multiple electron beams, producing finer features by multiple incremental exposures of patterns, or even physically impressing the pattern on the wafer. All are already in use to some extent, but none seems to offer the combination of throughput and low defect rates needed to be viable below 10nm.

Another, more esoteric, possibility is called Directed Self Assembly (DSA). This uses block copolymer chemistry, in which molecules follow coarse guide patterns produced by standard lithography but arrange themselves around those guides to produce much finer features. Although this has been demonstrated on an experimental production line, it needs further development — not least in the design tools that chip engineers use to plan their circuits.

In any case, basic physics will have the last word. Individual silicon atoms in a crystal have their centres just over 0.54nm apart: 14nm can contain fewer than thirty. This makes it very hard to create an effective insulator from them to block the flow of electricity; leakage current goes up, resulting in power consumption and heat problems. This has been the case for at least a decade, with chip makers introducing ever more complex physics and device geometries to keep things under control. Those measures only work so far: at 1nm, you'd have just one silicon atom. Even Intel can't make a transistor from empty space. And while a wide variety of exotic transistor designs at very small sizes have been made in the lab, the challenges of turning them into profitable products are enormous.
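The atomic arithmetic is easy to verify (a quick sketch using the spacing quoted above):

```python
# How many silicon atoms fit across a given feature size?
atom_spacing_nm = 0.543   # centre-to-centre spacing quoted in the text

for feature_nm in (14, 10, 7, 5, 1):
    atoms = feature_nm / atom_spacing_nm
    print(f"{feature_nm:>2} nm feature: about {atoms:.0f} atom spacing(s) across")
# 14 nm -> ~26, matching the 'fewer than thirty' above;
# 1 nm -> barely one or two atoms to build a transistor from.
```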

It may already be game over. Some industry analysts say that the increased costs and delays of each process shrink have effectively ended Moore's Law's cost-per-transistor advantage, and that the potential performance improvements don't justify the investment needed to squeeze the last drops out of traditional chip making. (Intel disagrees, but hasn't said much more than that.)

So what comes next?

Tunnelling out

One possible solution is to change the basic physics used to switch a transistor on and off. Today's basic transistor structure is called a MOSFET (Metal Oxide Semiconductor Field Effect Transistor). There have been many designs of these as processes have shrunk, with features optimised to work with each new set of physical parameters, but they all work in the same way. A voltage changes on an area of the transistor called the gate, which acts as a tap that turns the channel between two other areas — the source and the drain — on or off. The channel can be either an insulator or a conductor (which is, incidentally, why silicon is called a 'semiconductor').

The channel length is what the feature size defines — and it's the channel that leaks when there aren't enough atoms to insulate it. A MOSFET also needs a minimum voltage swing to switch cleanly between off and on, so you can't simply turn the supply voltage down to reduce the leakage.
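That minimum is set by thermodynamics. A standard device-physics result (not spelled out in the original article) is that a conventional MOSFET's subthreshold swing, the gate voltage needed to change the current by a factor of ten, cannot drop below about 60mV per decade at room temperature:

$$ SS = \frac{\mathrm{d}V_G}{\mathrm{d}\,(\log_{10} I_D)} \;\geq\; \ln(10)\,\frac{kT}{q} \approx 60\ \mathrm{mV/decade} \quad (T = 300\ \mathrm{K}) $$

Because this floor comes from the thermal distribution of carriers, no amount of clever geometry gets past it, which is precisely why researchers are looking at switching mechanisms that don't rely on thermal injection.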

However, electrons can flow through an insulator under other circumstances — in particular, through a quantum effect called tunnelling. A kind of quantum-mechanical short cut through barriers, tunnelling is important for nuclear fusion in stars, quantum computing and, more prosaically, the flash memory in your mobile phone.

Under the right conditions, tunnelling can occur at very low voltages indeed, provided the insulating layer it has to cross is very thin. The precise point at which it occurs can be controlled by an applied electric field — in other words, it can be gated, just as in a MOSFET. The resulting devices are known as tunnel FETs (TFETs): some varieties look quite like ordinary MOSFETs, while others use more exotic structures and materials to explore different characteristics.
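The sensitivity to barrier thickness is exponential. In the textbook rectangular-barrier approximation (a standard result, not from the original article), the probability of an electron of energy $E$ and mass $m$ tunnelling through an insulating barrier of height $U$ and width $d$ falls off as:

$$ T \;\propto\; e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m\,(U - E)}}{\hbar} $$

which is why a barrier a few atoms thick passes a useful current while a slightly thicker one blocks it almost completely, and why a gate field that effectively thins or lowers the barrier can switch the device sharply.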

This new transistor type won't get past the limit set by the size of silicon atoms, but it could remove the constraints on power and performance that have kept clock speeds down for the past ten years — or make processors with today's performance levels but much lower power consumption.

Down to the wire

Another favourite to push performance at the 5nm level is the nanowire transistor. This is a simple beast: a very thin wire with the gate electrode wrapped around it. Applying a voltage to the gate effectively pinches off the conductivity of the wire through simple electrostatic repulsion.

A nanowire-based transistor. Image: IEEE Spectrum

The major problem with nanowires is that a single wire can't carry enough current to be useful, so multiple wires have to be run in parallel, which erodes their size advantage. The industry is therefore investigating a huge range of potential materials that could circumvent this, in particular what are known as Group III-V ('three-five') compounds, after their positions in the periodic table.

Taking the tube

A similar idea replaces the nanowire with a carbon nanotube — basically, a single-atom-thick sheet of carbon rolled into a hollow pipe. These lend themselves to mass production and to self-assembly, and Stanford researchers have already produced a working computer that runs a simple operating system. The two big problems with making the system work were malformed nanotubes that had become metallic — they acted like conductors that wouldn't switch off — and others that were misaligned and shorted out nearby nanotubes.

A Stanford researcher holding a wafer of carbon nanotube computers, plus an experimental unit plugged into an I/O board. Image: Stanford University

The first problem was solved by switching off all the good nanotubes and then putting enough current through the metallic ones to vaporise them in a puff of carbon dioxide. The second was solved by running a self-healing algorithm that identified the errant whiskers and mapped those areas out of the processor, configuring the circuit around them. This idea, curiously enough, was first used in products in the 1980s by UK company Anamartic, an offshoot of Sinclair Research, which created wafer-scale-integration memory products out of complete wafers with the bad areas mapped out.
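The map-out idea is easy to sketch in software. The following toy example is hypothetical code, not Stanford's or Anamartic's actual algorithm, but it shows the general principle: test every block, record the failures, and build a remap table so that logical addresses only ever land on known-good physical blocks:

```python
# Toy illustration of 'mapping out' defective areas, as used in
# wafer-scale integration and the Stanford nanotube computer.
# Hypothetical code, not the real algorithms involved.

TOTAL_BLOCKS = 10         # physical blocks on the wafer or die
FAULTY_BLOCKS = {3, 7}    # pretend self-test found these to be defective

def test_block(block_id):
    """Stand-in for a hardware self-test; True means the block works."""
    return block_id not in FAULTY_BLOCKS

# Build a remap table: logical block number -> known-good physical block.
good_blocks = [b for b in range(TOTAL_BLOCKS) if test_block(b)]
remap = {logical: physical for logical, physical in enumerate(good_blocks)}

def physical_address(logical_block):
    """Accesses only ever touch blocks that passed the self-test."""
    return remap[logical_block]

print(remap)  # logical 3 now maps to physical 4, skipping the bad block
```

The device presents a slightly smaller but fully working address space, and the defects simply disappear from the outside world's view.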

Theoretically, carbon nanotube transistors have the potential performance and power parameters to outperform silicon. Professor HS Philip Wong of Stanford University told a 2014 symposium that the technology could replace a computational device like IBM's Watson with a unit the size of a smartphone and using a thousandth of the power (although since Watson consumes nearly 200kW, that still leaves 200W to deal with).

The End of Days

Although a number of avenues will continue to advance the basic physics and component design within processors, none of them has the potential to deliver improvements in logic performance/price at the rate, or for anything like the length of time, that silicon has delivered under Moore's Law. We have lived through that revolution: it is coming to an end.

This isn't a counsel of despair. There is still plenty of room for I/O, networking and storage technologies to improve, and developments in areas like battery technology will give current processor designs new uses. The Internet of Things, which won't put a premium on processor performance but will demand new designs running at very low power, will give the industry new places to explore, and the move to the cloud means it's once more advantageous to create very high-performance compute clusters that will ask new things of current and near-future technologies.

Further reading

The future of processors, part 1: Architectures
