Parallel programming promises to be the biggest revolution in programming since object orientation, yet it remains virtually unknown to most developers. Thanks to the development and uptake of multi-core CPUs, developers must begin to consider truly programming in parallel.
The equation is simple: a single-threaded application running on a multi-core chip performs no better than it does on a single-core system. The extra cores simply sit idle.
To take advantage of modern architectures, applications will increasingly need to be parallelised as much as possible. There is no point in creating an application with four threads simply because the latest chip has four cores; in 12 months that chip will be obsolete, replaced by one with even more cores. It has never been good maintenance practice to tie client applications tightly to rapidly changing hardware — and that remains true for multi-core.
Parallelising a piece of software is not something that can be tackled at the end of a program's development — it needs to be thought of in the planning and design stages and remain a consideration throughout development.
Turning to frameworks
When a developer attempts to modify a piece of software for parallel environments, it may be tempting to spawn and control multiple threads with hand-written, hand-maintained code. But as multi-core approaches the stage of becoming manycore, the number of threads needed to take full advantage of the hardware grows, and the knowledge needed to optimise code for a manycore environment grows with it.
Few individual programmers can match the collective knowledge of a team of developers. Due to the complexity involved in parallel programming, programmers are turning to frameworks to make life easier. Projects such as OpenMP and Intel's Threading Building Blocks provide a scalable and controllable framework for parallel applications.
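To illustrate what such a framework buys you, here is a minimal sketch in Java using the standard `java.util.concurrent` executor library as a stand-in (OpenMP and Threading Building Blocks offer analogous constructs in C and C++). The developer says what can run in parallel — here, chunks of a summation — and the framework manages the threads, sized to the number of cores actually present:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static long sum(long[] data) throws Exception {
        // One worker thread per available core -- no hard-coded thread count.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        int chunk = (data.length + cores - 1) / cores;
        List<Future<Long>> parts = new ArrayList<>();
        for (int i = 0; i < data.length; i += chunk) {
            final int lo = i, hi = Math.min(i + chunk, data.length);
            // Each chunk becomes an independent task; the pool schedules it.
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int j = lo; j < hi; j++) s += data[j];
                return s;
            }));
        }

        long total = 0;
        for (Future<Long> f : parts) total += f.get(); // combine partial sums
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data)); // prints 500000500000
    }
}
```

The same source runs unchanged on a two-core laptop or a 32-core server; only the pool size differs, which is exactly the hardware independence argued for above.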
A language that makes use of a virtual machine can take a different approach — the threading libraries and expertise can be built into the virtual machine itself, which can bring the added and somewhat fortuitous benefit of increased throughput with no code changes.
Below James Gosling, the man behind Java, speaks of the benefits of the Java VM (HotSpot) and how it can deal with multi-core environments:
What about Windows?
For the ma and pa user, when will they see the benefits of multi-core? Clearly not until the most deployed operating system in the world, Windows, begins to take full advantage of the underlying architecture — and even then, not until it provides the surrounding .NET developer ecosystem with the tools to parallelise its applications.
The way that Microsoft intends to handle this issue is to build multi-core support into .NET itself. That solution remains years away, however — until .NET is ready, Windows will lag behind other operating systems when it comes to parallelism.
Below Jason Zander, the general manager of Visual Studio, explains the state of play in the Windows world.
Where it is useful today
To this point it may appear that the dawn of parallelism is a number of years off, but nothing could be further from the truth. There are a number of environments where parallelism has already made an impact and will continue to do so.
The first of these environments is the server room. Much time has already been spent tuning services such as HTTP daemons and databases to take advantage of multiple processors. Because these services are architected so that each request is generally given its own process, adding another core to a chip provides immediate performance gains before any multi-core tuning even begins.
Any sysadmin who has seen hundreds of HTTP or database processes queued up waiting for execution can attest that life is much better with an extra channel of execution, so that one request does not slow down the entire system.
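The dispatch pattern behind this can be sketched in a few lines of Java. The class and method names below are illustrative, not taken from any real server: incoming requests are handed to a pool with one worker per core, so a slow request ties up only one worker while the remaining cores keep serving the queue:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RequestPool {
    private final ExecutorService workers;
    final AtomicInteger served = new AtomicInteger();

    RequestPool(int cores) {
        // One worker per core: each extra core is an extra channel of
        // execution, so one stalled request no longer blocks the rest.
        workers = Executors.newFixedThreadPool(cores);
    }

    void dispatch(Runnable request) {
        workers.submit(() -> {
            request.run();              // handle the request
            served.incrementAndGet();   // record completion
        });
    }

    void shutdown() throws InterruptedException {
        workers.shutdown();
        workers.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        RequestPool pool = new RequestPool(cores);
        for (int i = 0; i < 100; i++) {
            pool.dispatch(() -> { /* parse, query, respond */ });
        }
        pool.shutdown();
        System.out.println(pool.served.get()); // prints 100
    }
}
```

On a single-core machine the 100 requests above are serialised through one worker; on an eight-core machine, eight run at once with no change to the code.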
Another server-side application which has benefited greatly from multi-core is virtualisation. Running a guest operating system on a single-core processor is a recipe for discovering whether you have a good scheduler or not. With the host OS and the guest OS competing for the same resources, and with the host OS better placed to win such contests, the application within the guest OS is often left with few cycles to spare.
Back in 2001, I was in the employ of a software house that had a virtualised server — the guest OS was an early version of Red Hat and it was running inside of Windows NT (and yes, it was as bad as it sounds).
That early introduction to virtualisation tainted my thoughts on the topic for quite some time. On a single-core system there seemed to be no benefit at all; I could not see the point of running an OS within another OS — all it seemed to do was cause pain.
Enter multi-core systems, and virtualisation makes a lot more sense and is far less painful.
The guest OS is able to run on one set of cores and the host OS is able to run on another; the conflict for resources is largely over and lack of responsiveness within the guest OS becomes a thing of the past.
The elephant in the room that we have been avoiding is gaming — both on the PC and the latest generation of consoles. Inside the PlayStation 3 is an eight-core Cell chip (read our interview) and the Xbox 360 uses a tri-core custom PowerPC chip. Modern gaming has fully embraced parallelism, and it must be at the forefront of any new gaming engine design.
Below is an example of the engine behind Half Life 2 Episode 2, which is built to take full advantage of multi-core architectures.
Depending on the number of cores that the Episode 2 engine detects, it will spawn independent modules for processing elements such as lighting, artificial intelligence and particle calculations — up to 32 modules can be spawned.
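The idea of sizing the engine to the detected hardware can be sketched as follows. This is a toy illustration only, not Valve's actual code: the module names, class and method names are invented for this example, and only the detect-then-spawn shape and the cap of 32 modules come from the description above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class EngineModules {
    // Illustrative subsystem names; a real engine would have many more.
    static final List<String> MODULES =
            List.of("lighting", "ai", "particles", "physics");

    // Spawn one worker module per detected core, never exceeding the
    // number of subsystems and never exceeding the cap of 32.
    public static List<String> spawn(int detectedCores) throws Exception {
        int workers = Math.min(Math.min(detectedCores, MODULES.size()), 32);
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        List<Future<String>> frames = new ArrayList<>();
        for (String module : MODULES.subList(0, workers)) {
            // Each module computes its slice of the frame independently.
            frames.add(pool.submit(() -> module + " frame computed"));
        }

        List<String> results = new ArrayList<>();
        for (Future<String> f : frames) results.add(f.get());
        pool.shutdown();
        return results;
    }

    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(spawn(cores));
    }
}
```

On a single-core machine only one module is spawned and the subsystems are effectively serialised; on a quad-core machine all four run concurrently, which is the performance win the engine is chasing.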
Does it matter?
The adage that “what Intel giveth, Microsoft taketh away” is an apt one to bring up at this point.
Despite the advances in processing power, how much will be used up in cycles to provide the user with an “experience”?
This comparison between a 1986 Mac Plus and a 2007 AMD dual-core machine shows the problem: while the hardware is massively faster and the user experience is glitzier, the productivity of the user has remained about the same and the complexity has increased.
While developers and hardware nuts are excited by the possibilities that multi-core has to offer, a cynical user could be forgiven for thinking that nothing whatsoever is going to change for them on a day-to-day level.
At this point in time the cynical user has a point.
In the datacentre, multi-core will rule just as multi-processor did before it. Server applications have been engineered to perform in a parallel environment, and so it will remain.
For the everyday user, until vendors like Microsoft figure out how to take advantage of the new hardware and showcase it in software, multi-core will remain an untapped resource.
The unlocking of multi-core will begin with the use of properly architected frameworks that allow developers to take advantage of the hardware. As the burden of thread control is removed from developers, multi-core will blossom.