This supercomputer is rethinking the future of software

Supercomputers will soon be one thousand times more powerful than they are today, and the UK has enlisted an IBM Blue Gene/Q to help develop software for the machines.

Five years from now supercomputers will be able to carry out more than one billion billion calculations per second - and such blistering speed will require an overhaul of how we write software.

To help with this rethink, an IBM Blue Gene/Q supercomputer is being installed at the newly formed International Centre of Excellence for Computational Science and Engineering in Daresbury in the UK. Its purpose: to help re-engineer software to run on computers packing millions of processor cores, many times more than the number of cores inside the fastest supercomputers available today.

The 98,304-core machine will have a peak performance of 1.26 petaflops - more than one million billion calculations per second - and will likely rank as one of the top 20 fastest machines in the world upon its completion in a couple of months.

The Science and Technology Facilities Council lab at Daresbury in Cheshire, where the supercomputer will be based. Photo: STFC

Adrian Wander, a director at the Computational Science and Engineering department at the Daresbury lab, told TechRepublic that in five years' time there is likely to be an exaflop system, one capable of carrying out at least one billion billion calculations per second.

The majority of existing software will not run on machines with millions of cores, due to the very different hardware architecture of such computers compared to existing x86 machines.

"There's a whole bunch of technical issues around the application software that we need to address now if we are going to have applications that will run on these systems in five years' time," he said.

"We're going to have millions of cores and a number of our algorithms just won't scale up to those kind of processor counts," Wander said, adding that challenges relating to core-to-core communication, limited memory per core and memory bandwidth need to be addressed.

"The Blue Gene/Q system is our test bed for doing this application development. It's going to give us very large core counts and enable us to address these scalability issues."

The jump in the number of cores means that sticking with existing x86 computer architecture is not an option for supercomputers of the future. For instance, the memory available to each core will need to be limited in order to keep power consumption within acceptable levels. Exaflop computers will likely have millions of cores, many times more than the fastest supercomputers available today, so devoting the same amount of power-hungry memory to each core is not viable.

"Today the Blue Gene/Q has 16GB of memory per core, if we are going to 1GB or 0.5GB per core [in future machines] we're going to have to do a major redesign of the code," said Wander.

And unlike many of today's computer processors, the chips inside these future machines will likely pack a far wider range of processing units. Future CPUs are likely to include additional circuitry to aid processing, such as field programmable gate arrays (FPGAs). Software will need to be rewritten to take advantage of the more diverse range of processing units inside these chips.
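
What those extra processing units will look like is still an open question, but the style of rewrite involved can be glimpsed in today's directive-based offload models. The sketch below - a generic example using standard OpenMP target directives, not code from the Daresbury centre - marks a loop so the compiler can map it onto whatever accelerator is present, falling back to the host CPU if there is none.

```c
/* offload_sketch.c - illustration of directive-based offload, not project code */
/* compile (with an offload-capable compiler): gcc -fopenmp offload_sketch.c -o offload_sketch */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* Ask the compiler to run this loop on an attached accelerator if one is
       available, copying a and b in and c back out; on a plain CPU the loop
       simply runs on the host. */
    #pragma omp target teams distribute parallel for \
            map(to: a[0:N], b[0:N]) map(from: c[0:N])
    for (int i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
    }

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```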

The server turns supercomputer

While enterprises are unlikely to build exaflop machines in the near future, the leap in the number of cores inside each server will give businesses access to formidable processing power.

"Each rack is going to be a petaflop or a few petaflops," said Wander. "That means that the local servers that companies put in, which are typically a few racks, are going to be the equivalent of the most powerful supercomputers in the world right now."

The jump in processing power available to business will likely prove invaluable in picking out useful insights from the deluge of data predicted to swamp enterprises in the near future, with IDC forecasting a 50-fold increase in the amount of information generated by 2020.

The downside of this shift to servers packing highly concentrated clusters of processor cores is that most existing enterprise software will also need to be re-engineered to run on hardware architectures very different to what exists today.

The lab at Daresbury is working with hardware companies - such as IBM, Intel, graphics card manufacturer Nvidia, scalable storage specialist DataDirect Networks (DDN) and others - and with as-yet-unnamed software makers to help develop code to suit these new computer architectures.

"These activities require scales of resources that are beyond the ISVs [software suppliers], so that's why we've established this centre," Wander said.

Running alongside the Blue Gene/Q machine in the research into scalable software will be an 8,192-core, 196-teraflop iDataPlex system, backed by a 7.2PB DDN storage array. The three-year research project is being funded by £37.5m of government grants.

The lab is also developing and offering software platforms - portals, workbenches and workflow engines - to make it easier for industry to use high performance computers, which today are predominantly used for running simulations in scientific and military research.

"People see it as being hard to learn the skills necessary to make their software run on high performance computer systems, and one of our aims is to start making it easy," said Wander.
