Supercomputing and data crunching used to be the sole domain of universities and research institutes—but with today's business analytics and new "affordable" platforms for HPC (high performance computing), more enterprises are taking a hard look at moving HPC into their data centers. First, though, there is planning to be done and decisions to be made.
The decisions begin with who is going to fund HPC and who is going to run it.
To date, it has been end user departments that have funded HPC investments, because they are the ones making the business cases for bringing HPC into the enterprise. These business cases have been very specific, ranging from modeling new drugs to simulating the collision impact of new car designs to performing advanced risk analysis on loan portfolios in order to minimize the risk of loss. For departments with engineering staff onboard, it is engineers who actually develop the algorithms and write the applications for HPC. If they are not developing these apps directly, they are making the decisions to purchase them from outside providers.
This arrangement has been fine and even preferable for IT, which continues to have its hands full with the normal lineup of daily processing and more standard computing requests. But now, many enterprises are beginning to change the way they view HPC. With this change, some businesses are insisting that IT take a more active role.
Just what kind of role do businesses want IT to assume with HPC?
They aren't asking IT to start programming HPC applications, but they are increasingly recognizing the custodial value of housing HPC assets in the data center under IT supervision. The reason is simple. IT already has resource management disciplines and expertise in place. The policies and procedures needed to maintain computing assets on an ongoing basis are mature and proven. In contrast, end user departments focus on their business and not on the underlying computing assets that support it. In short, they have little insight into how well the underlying hardware and software are running.
Because of these limitations, IT can deliver HPC value to the enterprise in two critical areas:
- It can assure that HPC hardware and software are performing the way they are supposed to, and that HPC resources are optimized. This optimization comes primarily in the form of verifying that HPC resource utilization is in the 90 to 95 percent range, a significant boost over the 60 to 80 percent utilization that enterprises have learned to expect from traditional transaction processing systems.
- IT has the expertise to optimally schedule the jobs that must run in the HPC environment, based on job priority and resource consumption. This is an important point. Many internal departments running HPC jobs will share the same HPC computing clusters in the data center, and even though their jobs may run concurrently, choices about which jobs get first crack at HPC resources can become political problems as rapidly as they become technical challenges. In this scenario, especially if IT itself is not an HPC end user, corporate management could see IT as a disinterested and neutral party that is focused on the computing alone, and ask IT to be the job scheduler.
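To make the scheduling point concrete, here is a minimal sketch of dispatching shared-cluster jobs by business priority, then by requested node count. Everything in it is hypothetical: the job names, priority values, and node counts are invented for illustration, and a real workload manager would weigh many more factors (fair share, runtime limits, backfill, and so on).

```python
import heapq

def schedule(jobs, total_nodes):
    """One scheduling pass over a shared HPC cluster.

    jobs: list of (name, priority, nodes_requested) tuples (hypothetical).
    Returns (dispatched, waiting): jobs started this pass, in order,
    and jobs that must wait for the next pass because nodes ran out.
    """
    # heapq is a min-heap, so negate priority to pop the highest-priority
    # job first; ties break toward smaller node requests.
    heap = [(-priority, nodes, name) for name, priority, nodes in jobs]
    heapq.heapify(heap)
    dispatched, waiting, free = [], [], total_nodes
    while heap:
        _, nodes, name = heapq.heappop(heap)
        if nodes <= free:
            free -= nodes
            dispatched.append(name)
        else:
            waiting.append(name)
    return dispatched, waiting

# Invented example: three departments competing for a 128-node cluster.
jobs = [
    ("risk-analysis", 3, 40),   # (name, priority, nodes requested)
    ("crash-sim",     2, 80),
    ("drug-model",    3, 60),
]
print(schedule(jobs, 128))
# → (['risk-analysis', 'drug-model'], ['crash-sim'])
```

The design choice worth noting is the neutrality baked into the ordering rule: once management agrees on the priority values, dispatch order follows mechanically, which is exactly the disinterested-arbiter role the article envisions for IT.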
I believe that 2013 will be the beginning of HPC migration into the data center for the above reasons, and that the time is now for IT to start planning for its HPC custodial role. CIOs will be ahead of the game if they take these four steps:
- Determine how HPC is going to fit with the rest of the data center infrastructure, and who in IT will be assigned to administer the HPC resources.
- Arrange to meet with upper management and HPC user departments to establish general guidelines and priorities for HPC scheduling that are based on corporate priorities.
- Build primary contact relationships with HPC vendors, even if other end user departments initiated these relationships and signed the purchase contracts.
- Incorporate HPC into the IT strategic plan. HPC at the onset is likely to be more of a "siloed" resource in the data center, but as business analytics becomes more sophisticated, it is likely that HPC will be pulled in and integrated with other data center resources.
Mary E. Shacklett is president of Transworld Data, a technology research and market development firm. Prior to founding the company, Mary was Senior Vice President of Marketing and Technology at TCCU, Inc., a financial services firm; Vice President of Product Research and Software Development for Summit Information Systems, a computer software company; and Vice President of Strategic Planning and Technology at FSI International, a multinational manufacturing company in the semiconductor industry. Mary is a keynote speaker and has more than 1,000 articles, research studies, and technology publications in print.