Hardware

This supercomputer is rethinking the future of software

Supercomputers will soon be one thousand times more powerful than they are today, and the UK has enlisted an IBM Blue Gene/Q to help develop software for the machines.

Five years from now, supercomputers will be able to carry out more than one billion billion calculations per second - and such blistering speed will require an overhaul of how we write software.

To help with this rethink, an IBM Blue Gene/Q supercomputer is being installed at the newly formed International Centre of Excellence for Computational Science and Engineering in Daresbury in the UK. Its purpose: to help re-engineer software to run on computers packing millions of processor cores, many times more than the number of cores inside the fastest supercomputers available today.

The 98,304-core machine will have a peak performance of 1.26 petaflops - more than one million billion calculations per second - and will likely rank as one of the top 20 fastest machines in the world upon its completion in a couple of months.

The Science and Technology Facilities Council lab at Daresbury in Cheshire where the supercomputer will be based. Photo: STFC

Adrian Wander, a director at the Computational Science and Engineering department at the Daresbury lab, told TechRepublic that in five years' time there is likely to be an exaflop system, one capable of carrying out at least one billion billion calculations per second.

The majority of existing software will not run on machines with millions of cores, due to the very different hardware architecture of such computers compared to existing x86 machines.

"There's a whole bunch of technical issues around the application software that we need to address now if we are going to have applications that will run on these systems in five years' time," he said.

"We're going to have millions of cores and a number of our algorithms just won't scale up to those kind of processor counts," Wander said, adding that challenges relating to core-to-core communication, limited memory per core and memory bandwidth need to be addressed.

"The Blue Gene/Q system is our test bed for doing this application development. It's going to give us very large core counts and enable us to address these scalability issues."

The jump in the number of cores means that sticking with existing x86 computer architecture is not an option for supercomputers of the future. For instance, the memory available to each core will need to be limited in order to keep power consumption within acceptable levels. Exaflop computers will likely have millions of cores, many times more than the fastest supercomputers available today, so devoting the same amount of power-hungry memory to each core is not viable.

"Today the Blue Gene/Q has 16GB of memory per core, if we are going to 1GB or 0.5GB per core [in future machines] we're going to have to do a major redesign of the code," said Wander.

And unlike many of today's computer processors, the chips inside these future machines will likely pack in a far wider range of processing units than machines of today. Future CPUs are likely to include additional circuitry to aid processing, such as field programmable gate arrays. Software will also need to be rewritten to take advantage of the more diverse range of processing units inside the chips of the future.
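
One widely used way to prepare for that diversity is to separate the numerical kernel from the choice of processing unit and pick an implementation at run time. The hedged C sketch below illustrates the pattern; the capability check and the accelerator path are placeholders for illustration, since real code would call a GPU, FPGA or vendor runtime that this sketch does not model.

    /* Hedged sketch: writing a kernel once per processing unit and
     * selecting an implementation at run time. The accelerator probe and
     * the offload path are placeholders for illustration only. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*stencil_fn)(const double *in, double *out, int n);

    /* Portable CPU version of a simple 1D smoothing kernel. */
    static void stencil_cpu(const double *in, double *out, int n)
    {
        for (int i = 1; i < n - 1; i++)
            out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;
    }

    /* Stand-in for a GPU/FPGA/accelerator version; falls back to the CPU
     * path so the sketch stays self-contained and runnable. */
    static void stencil_accel(const double *in, double *out, int n)
    {
        stencil_cpu(in, out, n);  /* real code would launch on the device */
    }

    int main(void)
    {
        /* Hypothetical capability check; no standard API is implied. */
        int have_accel = (getenv("USE_ACCEL") != NULL);
        stencil_fn run = have_accel ? stencil_accel : stencil_cpu;

        double in[8] = {0, 1, 2, 3, 4, 5, 6, 7}, out[8] = {0};
        run(in, out, 8);

        printf("out[3] = %.2f (%s path)\n", out[3],
               have_accel ? "accelerator" : "cpu");
        return 0;
    }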

The server turns supercomputer

While enterprises are unlikely to build exaflop machines in the near future, the leap in the number of cores inside each server will give businesses access to formidable processing power.

"Each rack is going to be a petaflop or a few petaflops," said Wander. "That means that the local servers that companies put in, which are typically a few racks, are going to be the equivalent of the most powerful supercomputers in the world right now."

The jump in processing power available to business will likely prove invaluable in extracting useful insights from the deluge of data predicted to swamp enterprises in the near future, with IDC forecasting a 50-fold increase in the amount of information generated by 2020.

The downside of this shift to servers packing highly concentrated clusters of processor cores is that most existing enterprise software will also need to be re-engineered to run on hardware architectures very different to what exists today.

The lab at Daresbury is working with hardware companies - such as IBM, Intel, graphics card manufacturer Nvidia, scalable storage specialist DataDirect Networks (DDN) and others - and with as-yet-unnamed software makers to help develop code to suit these new computer architectures.

"These activities require scales of resources that are beyond the ISVs [software suppliers], so that's why we've established this centre," Wander said.

Running alongside the Blue Gene/Q machine in the research into scalable software will be an 8,192-core, 196-teraflop iDataplex system, backed by a 7.2PB DDN storage array. The three-year research project is being funded by £37.5m of government grants.

The lab is also developing and offering software platforms - portals, workbenches and workflow engines - to make it easier for industry to use high-performance computers, which today are predominantly used for running simulations in scientific and military research.

"People see it as being hard to learn the skills necessary to make their software run on high performance computer systems, and one of our aims is to start making it easy," said Wander.

About

Nick Heath is chief reporter for TechRepublic UK. He writes about the technology that IT decision-makers need to know about, and the latest happenings in the European tech scene.

Comments
JohnOfStony

The BIG problem with software is bugs and how to find them before the software is released. The more complex the software, the greater the chance of undetected bugs and the greater the difficulty in debugging. So the question arises - how do we exploit multi-processor systems yet keep the software simple? The obvious answer is SIMD - Single Instruction, Multiple Data - where the same program runs in every core but on different data. This way the processors can be simple which means that, for a given die size, there can be more in-core RAM. Such a system maps very well onto problems such as Air Traffic Control or Meteorology where vast quantities of data need the same work doing on each chunk of data. I worked on a simulation of an SIMD array processor with over 1000 cores back in 1985 where I looked into the operation of various standard algorithms (such as Fast Fourier Transform, Convolution, the Viterbi Algorithm - to name those I remember) on such an array. Unfortunately the company concerned, Anamartic, didn't survive to exploit the array processor but the potential was fascinating.
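
A minimal sketch of the SIMD idea described above - the same instruction applied to several data elements at once - using x86 SSE intrinsics purely as a readily available stand-in for the kind of array processor the commenter mentions:

    /* Hedged SIMD sketch: one add instruction processes four floats at a
     * time. SSE is used only as a stand-in for a large SIMD array
     * processor; the data values are arbitrary. */
    #include <stdio.h>
    #include <xmmintrin.h>   /* SSE intrinsics: 4 single-precision lanes */

    int main(void)
    {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
        float c[8];

        /* The same instruction (_mm_add_ps) is applied to four data
         * elements per iteration: single instruction, multiple data. */
        for (int i = 0; i < 8; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
        }

        for (int i = 0; i < 8; i++)
            printf("%.0f ", c[i]);
        printf("\n");
        return 0;
    }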

JohnOfStony

Back in 1985-6, Acorn computers had a problem - improving performance versus backward compatibility. They had to move on from their existing 6502 8-bit processor based computer (the BBC Micro) but they weren't willing to compromise on ANY of the properties of the 6502 when moving to a 16/32-bit processor. Both the Intel 8086 and the Motorola 68000 had inferior interrupt latency to the 6502 and neither was software compatible. Acorn made a brilliant decision - design their own processor and so ARM was born. It was totally non-backward-compatible with anything BUT it was so fast that it could run software emulations of processors such as the 6502 so fast that 6502 software ran at least as fast on the ARM emulation as on an original 6502. This is the approach that any computer designer should take - forget hardware backward compatibility and all the legacy overheads that it entails; go for speed and software emulation if backward compatibility is needed - it works, and you only have to look at the success of the ARM to see that such decisions can bring major success.

sboverie

It would be smarter not to make the future super computers backwardly compatible. It would be better to write the applications fresh than to attempt to upgrade an existing application built for a computer with a few cores. This super computer looks like a solution for a problem we don't have, yet. It sounds like it would be best to model complicated systems like weather to improve weather forecasting. Using such a super computer to figure your check book balance or play video games could be fun but a waste of computational power, even most business needs would use less power than what this super computer is expected to handle.

newtaorg

Use these computers not for the mundane tasks of today. Use them to provide alerts well in advance for natural disasters. I am sure the insurance companies will be able to support research and development expenses. No need to use such powers for applications. My opinion.

aiellenon

but making computers backwards compatible is the reason why we are stuck with such trash personal computer systems, and every time microsoft tries to ditch outdated tech some idiots complain about not being able to use a 15 year old reporting application, because they like the graphical interface it offers better than current offerings they looked at. Not because of what it can do or how it does it. Everyone complains about MS making bloatware, but they only do it so there is enough backwards compatibility to keep enough of the people happy. (not that I often praise Apple, but...) Apple has it right by only supporting certain models when they release a new OS version, you have to ditch the crap if you want to progress to anything better. If we are lucky the next major OS releases from Apple and Microsoft will no longer come in 32 bit versions, considering a 32bit exclusive x86 CPU has not been manufactured for personal computers in the last 10 years. Hopefully they will use such systems to help us design more efficient methods of doing the things we do today. No doubt someone will use one to "map the universe", I am sure they will be used to compute the math required to get a human to mars and back safely, in addition to predicting the survival rate of such a trip and able to recompute it with slight modifications in various conditions and time of year and planetary alignment. I also expect them to be used to combat hackers. I would not be surprised at all if they are used to develop more efficient CPUs so that we can build bigger and better computers that can take over the world and remove the infestation known as humanity... yes I watch too many movies, but the real McCoy is in the books.