Big Data

Commoditizing HPC: What it means for IT

The sooner IT readies itself for HPC, the more aggressive companies can be in taking advantage of big data in ways that present-day business analytics can't facilitate.

Titan now reigns as the world's fastest supercomputer. It sits at the U.S. Department of Energy's Oak Ridge National Laboratory in Tennessee, and its computing power has been likened to all seven billion people on Earth each carrying out three million calculations per second.

Titan is also being employed by enterprises like Procter & Gamble, which is using Titan in partnership with Temple University to develop the first molecular-based model of how the skin absorbs lotions and drugs. Most enterprises, however, are not using Titan; they have been waiting for lower-cost high-performance computing (HPC) before going after big data with complex calculations and algorithms.

That wait will soon be over, as more vendors begin to deliver HPC to the enterprise market on "commodity" computing platforms like x86.

Who's doing it?

In 2013, AMD plans to deliver a heterogeneous computing platform built around x86 processing cores and GPUs (graphics processing units) to accelerate scientific, engineering and big data applications.

The HSA (Heterogeneous System Architecture) Foundation, an industry consortium whose members include AMD, ARM, Qualcomm, Samsung and Texas Instruments, is committed to rethinking CPU-GPU architecture in order to optimize it on x86 platforms.

Intel already has chips that combine the strengths of CPU and GPU.

IBM's acquisition of Platform Computing in January 2012 gave it an entry point into x86-based high-performance computing that it plans to leverage for its Smarter Computing initiative.

What this portends for corporate IT is that the HPC adoption cycle could be shortened significantly, thanks to an affordable, scalable way to implement HPC on x86 machines that will mean little or no "wait time" for IT budgets. If that happens, enterprise demand for HPC could well accelerate, since HPC can "supercharge" current business analytics by processing more big data faster and running it against more complex sets of business questions.

This means that IT should be preparing for HPC in the data center now, which it can do by taking the following steps:

Workflow management and scheduling: Business areas ranging from lines of business and marketing to engineering and finance will want to use HPC in their business analytics and modeling. In early HPC and business analytics deployments, some of these areas purchased and installed their own HPC servers, but they are quickly learning that managing an IT resource and using that resource for business results are two different skill sets. HPC is going to be centralized in IT, which means IT will be responsible for building business-unit-centric workflows for end-user departments that run in parallel. The key to making this come together effectively is robust scheduling software that can optimize jobs that are all parallel-processing big data on x86 server clusters at the same time (a simple illustration of parallel jobs appears after these steps).

Data center architecture: As IT rolls out HPC, it will need to map out whether HPC will ultimately run as a cluster of x86 boxes that brings concentrated processing power to compute-intensive jobs, as a grid shared by multiple business units across many geographies, or as a cloud deployment that uses both physical and virtual resources.

ROI: HPC requires different return-on-investment calculations when it comes time for the CIO to show the CFO how HPC is paying for itself. Traditional computing is measured by the number of transactions processed and the speed of those transactions; HPC's parallel-processing advantage lets it keep a machine 90 to 95 percent utilized, versus the 40 to 60 percent utilization typical of transaction processing on an x86 machine (a rough comparison based on these figures appears after these steps).

IT staff assignments: Data center staff traditionally trained in transaction and batch processing will need retraining for HPC, which processes in parallel rather than in sequence and is measured by different metrics (utilization and speed rather than transaction throughput and transaction speed).
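To make the parallel-workflow idea concrete, here is a minimal sketch, written in plain Python rather than in a real HPC scheduler, of several departmental analytics jobs running at the same time. The department names and the run_analytics_job function are hypothetical placeholders, not part of any particular product.

```python
# A minimal sketch (not a production HPC scheduler): running hypothetical
# analytics jobs from several business units in parallel with the standard library.
from concurrent.futures import ProcessPoolExecutor

def run_analytics_job(department: str) -> str:
    # Placeholder for a compute-intensive analytics or modeling workload.
    return f"{department}: job complete"

departments = ["marketing", "engineering", "finance"]

if __name__ == "__main__":
    # Each department's workflow runs in parallel, the way an HPC cluster
    # would process many jobs concurrently across x86 nodes.
    with ProcessPoolExecutor() as pool:
        for result in pool.map(run_analytics_job, departments):
            print(result)
```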
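And here is a rough, back-of-the-envelope way to express the utilization argument from the ROI step. The node count and hours are hypothetical; the utilization figures are simply the ranges cited above.

```python
# A rough, hypothetical comparison (a sketch, not a real ROI model) using the
# utilization figures cited above: HPC clusters at roughly 90-95 percent
# utilization versus transactional x86 servers at 40-60 percent.
def effective_compute_hours(nodes: int, hours: int, utilization: float) -> float:
    """Compute-hours that actually deliver useful work."""
    return nodes * hours * utilization

nodes, hours_per_year = 100, 24 * 365  # hypothetical 100-node cluster

hpc_hours = effective_compute_hours(nodes, hours_per_year, 0.92)
oltp_hours = effective_compute_hours(nodes, hours_per_year, 0.50)

print(f"HPC cluster:           {hpc_hours:,.0f} useful compute-hours/year")
print(f"Transactional servers: {oltp_hours:,.0f} useful compute-hours/year")
print(f"Ratio: {hpc_hours / oltp_hours:.1f}x more work from the same hardware")
```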

The final word is that working with HPC will require an IT mindset shift, and the sooner IT readies itself for this new computing approach, the more aggressive companies can be in taking advantage of big data in ways that present-day business analytics can't facilitate.

About

Mary E. Shacklett is president of Transworld Data, a technology research and market development firm. Prior to founding the company, Mary was Senior Vice President of Marketing and Technology at TCCU, Inc., a financial services firm; Vice President o...
