CERN researcher Andrzej Nowak discusses some of the computing done for the Large Hadron Collider, the largest machine ever built.
During a presentation at the recent Intel Developer Forum (IDF) 2011 in San Francisco, Intel CTO Justin Rattner interviewed CERN staff researcher Andrzej Nowak about the computing behind the Large Hadron Collider (LHC). The roughly nine-minute ZDNet video of the interview is worth watching.
The LHC currently produces 15 to 25 petabytes of data annually; the data are then processed by a combined total of 250,000 Intel Many Integrated Core (MIC) processor cores spread throughout the world. Intel MIC processors are built upon the same x86 architecture as the popular Intel Xeon server processors, making it easy to port existing Xeon-optimized code. The LHC computers also simulate what physicists expect to happen when particle beams collide; comparing those simulations against the actual data is how new discoveries are found.
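The simulate-then-compare workflow scales by dividing independent simulated events across cores. Below is a minimal, hypothetical sketch (not CERN code) of that pattern in Python: a toy Monte Carlo "event" generator whose event budget is split evenly across worker processes. The energy distribution and selection cut are arbitrary stand-ins for real physics.

```python
# Hypothetical illustration, not actual LHC software: a toy Monte Carlo
# event simulation parallelized across cores, using the same
# divide-events-across-workers pattern that lets physics simulations
# scale from one core to many.
import random
from multiprocessing import Pool


def simulate_events(args):
    """Simulate n_events toy 'collisions'; count those passing an energy cut."""
    seed, n_events = args
    rng = random.Random(seed)  # independent, reproducible stream per worker
    passed = 0
    for _ in range(n_events):
        energy = rng.expovariate(1.0)  # stand-in for a collision observable
        if energy > 2.0:               # arbitrary selection cut
            passed += 1
    return passed


def run(total_events, n_workers):
    """Split the event budget evenly across workers, one chunk per core."""
    chunk = total_events // n_workers
    tasks = [(seed, chunk) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        return sum(pool.map(simulate_events, tasks))


if __name__ == "__main__":
    print(run(100_000, 4))
```

Because the simulated events are independent, the same code scales from 1 worker to 32 with no change in logic, which is why this class of workload benefits so directly from many-core processors.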
About six minutes into the interview, you'll see the difference between single-core and 32-core execution of one of the simulations used at the LHC, a demonstration of how much these many-core processors speed up the LHC's data analysis.

Related links: CERN openlab and Intel MIC Architecture Programming

Also read: Get ready for time travel (SmartPlanet)