Earlier this year, I did a lot of research into the history of computing. I learned about mainframes, microcomputers (what we now call a PC), clusters, grids, and minicomputers. I also learned about the fabled class of computers known as supercomputers.
During the Labor Day weekend, my family and I went to the National Air and Space Museum in Washington DC. We saw one of the Cray computers (the Cray-1) that NASA used. While the Cray machines have changed, and supercomputers as a class have lost a bit of their mythological cachet, supercomputing's legacy is playing a greater role in the way we write software today. (On a fun side note, check out a CNET TV video that shows how Star Trek influenced the development of the Cray supercomputers.)
What separates the supercomputer class from other multiprocessor "big iron" machines is the focus on execution speed. Mainframes are largely devoted to batch processing, such as filing insurance claims or handling financial transactions. Supercomputers, on the other hand, are charged with tasks such as weather prediction and chess playing. While both may have dozens, hundreds, or thousands of CPUs, mainframes are oriented more towards reliability and fault tolerance, while supercomputers care more about raw speed. A good example of this can be seen in my picture of the Cray-1. All of that blue in the middle is wiring; there are miles upon miles of wires (none more than a few feet long), and the system has its cylindrical shape to reduce the distance that any signal needs to travel. These systems are the computer equivalent of jet-fuel funny cars -- insanely fast but useless on a normal road.
Supercomputers have changed a lot over the years. The class once relied on specialized processors built for no other purpose; today, the x86/x64 family of CPUs is the overwhelming choice. Their OSs have ranged from obscure "one-off" systems to various Linux distros, and Windows even has a "High Performance Computing" variant that is now offered on Crays and other supercomputers.
The design of these systems is often decades ahead of what 99.99% of us are using today, but at the same time, our systems tomorrow will often use the same technology or ideas. Some of the revolutions occurring in server rooms, such as clustering and grid computing, were first used in these systems. Programmers of supercomputers pioneered techniques that mainstream developers are just now starting to use, thanks to the adoption of multi-core CPUs in the server room and on the desktop. Ideas such as the N-tier architecture that is so common today have roots in the various message-passing systems used in supercomputing. Systems like NVIDIA's CUDA are designed to turn a graphics card into a massive array of processors for intense number crunching, which can cheaply and easily turn a $1,000 PC into a supercomputer.
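To make the data-parallelism idea concrete: the same "split one big calculation across many processors" pattern that CUDA applies to GPU cores can be sketched on an ordinary multi-core CPU. This is a minimal, illustrative Python sketch using the standard multiprocessing module; the `crunch` function is a made-up stand-in for real per-element work, not anything from CUDA itself.

```python
# Toy example of an "embarrassingly parallel" number-crunching job spread
# across CPU cores, in the spirit of CUDA-style data parallelism.
# `crunch` is a hypothetical placeholder for real work (e.g., a simulation step).
from multiprocessing import Pool

def crunch(n):
    # Stand-in for an expensive per-element calculation.
    return n * n

if __name__ == "__main__":
    with Pool() as pool:
        # map() splits the input range across worker processes, much as a
        # GPU kernel is launched across thousands of lightweight threads.
        results = pool.map(crunch, range(10))
    print(results)
```

The appeal is the same as in supercomputing: when each element can be computed independently, adding more processors scales the work almost linearly, with no locking or coordination between workers.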
I recently went on the record as stating that parallel processing techniques do not have many applications for the typical developer today. At some point in the future, I think we will get past the types of applications we are writing today and enter an era in which computational analysis becomes as commonplace as computational accounting is now; when that happens, these types of use cases will become "typical business applications." In other words, if you want to know what you may be doing 20 years from now, take a look at what supercomputers are doing today.
J.Ja

Disclosure of Justin's industry affiliations: Justin James has a working arrangement with Microsoft to write an article for MSDN Magazine. He also has a contract with Spiceworks to write product buying guides.
Justin James is the Lead Architect for Conigent.