A couple of years back, there was much talk of utility computing—that is, access to computing power that could be turned on like a tap and paid for like electricity. The future, it was said, was “on-demand” computing.

That talk has subtly changed since then. Now the interest is in grid computing. Essentially, grid computing links up the existing resources within an organization (or those to which it has access as a service) so that they can be distributed to wherever they are needed in real time.

In other words, grid computing is something of a third way of utilizing computing resources. In traditional computing, resources are dedicated to one particular application. With on-demand computing, resources are dedicated to no particular application but can be accessed as needed. If the former is inefficient and the latter simply impractical, grid computing aims to achieve the best of both worlds.

Bank of America’s switch to grid computing

“Initially, a lot of the talk around on-demand computing seemed to be really hype,” says Andy Sturrock, Distributed Computing Architect at Bank of America. “We needed to get beyond that and find out what vendors really have on offer.”

Bank of America has now implemented a global grid computing resource from DataSynapse. Sturrock explains that the goal was to balance computing supply and demand right across the organization. The aim was largely commercial: to cut the expense of the redundancy implicit in the traditional approach, in which each business function in each geographical region had to buy enough capacity to run the applications its users required. The result was silos of computing power that lay idle for much of the time.

“We calculated that much of our hardware was idle for up to 18 hours a day,” Sturrock explains—which was quite an expense. Bank of America was not an unusual case; Sturrock says that across the financial services industry, banks typically run at between 10 and 20 percent utilization.
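
To put those figures in context, utilization here is simply the fraction of the day a machine spends doing useful work. The calculation below is only an illustration using the numbers quoted above, not measured data.

```python
# Illustrative arithmetic only, using the figures quoted above.
HOURS_PER_DAY = 24

def utilization(busy_hours: float) -> float:
    """Fraction of the day spent doing useful work."""
    return busy_hours / HOURS_PER_DAY

# 18 hours idle leaves 6 busy hours, i.e. 25% utilization at best.
print(f"{utilization(24 - 18):.0%}")                      # 25%

# The 10-20 percent industry range corresponds to roughly 2.4-4.8 busy hours a day.
print(f"{utilization(2.4):.0%}-{utilization(4.8):.0%}")   # 10%-20%
```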

The move to a global grid was taken in several steps. First, Sturrock had to be sure that grid computing would actually work for the bank—that is, that the peaks in demand in one silo coincided with the troughs in demand in another, so that the resource could be shifted around without compromising availability. Monitoring software was installed across the enterprise, and it showed that grid computing could indeed successfully share existing resources.
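
A simple way to picture what that monitoring had to establish: sum the hour-by-hour demand of the silos and check that the pooled demand never exceeds the pooled capacity, even though each silo on its own is sized for its own peak. The sketch below is only an illustration of that check, with invented silo names and demand figures rather than the bank's monitoring data.

```python
# Illustrative check that silo demand curves are complementary enough to share.
# Silo names and hourly demand figures are invented for the example.
silo_capacity = {"equities_emea": 100, "risk_americas": 100}

hourly_demand = {
    "equities_emea": [90, 80, 60, 30, 10, 10],   # busy early, quiet later
    "risk_americas": [10, 20, 30, 70, 90, 95],   # quiet early, busy later
}

combined_capacity = sum(silo_capacity.values())
combined_demand = [sum(hour) for hour in zip(*hourly_demand.values())]

# Sharing works if the pooled resource can always cover the pooled demand.
viable = all(demand <= combined_capacity for demand in combined_demand)
peak = max(combined_demand)

print(f"combined peak demand: {peak} of {combined_capacity} units -> viable: {viable}")
```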

Next came the migration to the grid. The bank had already developed some grid-like technology in-house to share resources within regions; these regional systems were migrated to the global grid.

Challenges and benefits of grid implementation

“One of the challenges that anyone will face is the internal resistance of each silo,” Sturrock explains. “Each one represents users who want to stay in control and independent. So they are reluctant to change.” To combat this, he decided to prove the value of the initiative early on. “For market analysis tools, we showed that for one application, the batch work could be finished about six hours earlier than before,” he says. “This had massive advantages in terms of generating market figures. Users could get the data ready from one market before another opened.” Needless to say, the beneficiaries of this advantage were the first to go live on the global grid.

To date, Bank of America has around 2,500 Windows and Linux services on the grid. Management of the grid itself runs on two Solaris machines, and the grid brokers—the systems that distribute the resource—run on Linux machines, with at least two brokers per application, so there is plenty of failover.
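
The redundancy in the brokering layer is the detail worth noting. The sketch below is not DataSynapse's API; it is a generic illustration, with invented broker names, of why running at least two brokers per application gives the failover described here: if the primary broker is unreachable, work is simply re-submitted to the secondary.

```python
# Generic illustration of two-brokers-per-application failover.
# Broker names and the submit() behaviour are invented for the example;
# this is not DataSynapse's actual interface.
class BrokerUnavailable(Exception):
    pass

class Broker:
    def __init__(self, name: str, healthy: bool = True):
        self.name = name
        self.healthy = healthy

    def submit(self, task: str) -> str:
        if not self.healthy:
            raise BrokerUnavailable(self.name)
        return f"{task} dispatched via {self.name}"

def submit_with_failover(task: str, brokers: list[Broker]) -> str:
    """Try each broker for the application in turn until one accepts the task."""
    for broker in brokers:
        try:
            return broker.submit(task)
        except BrokerUnavailable:
            continue
    raise RuntimeError("no broker available for this application")

# At least two brokers per application, as in the setup described above.
brokers = [Broker("risk-broker-1", healthy=False), Broker("risk-broker-2")]
print(submit_with_failover("overnight risk batch", brokers))
# -> overnight risk batch dispatched via risk-broker-2
```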

The success of this grid approach can be measured in a number of ways. In terms of pure hardware costs, Sturrock estimates that the bank will save $15 million over three years simply by avoiding extra hardware purchases. “But a better way of looking at it is the opportunities it has created for the business too,” he adds. The point is that grid computing allows the bank to deploy far more computationally intensive risk calculations. Sturrock estimates these have freed up as much as $10 million of reserves.

He adds that the reliability of IT systems has increased too, because they are better managed. And the bank is also beginning to see savings in IT development costs, because the grid allows the sharing of code between systems that were previously separate.

By the end of 2005, Bank of America expects to be running 10,000 grid engines—a large figure that reflects the increase in computing power that the bank will have at its fingertips as a result of the economies of scale inherent in grid computing.

Needless to say, if Sturrock was sceptical of on-demand computing before, he has become an advocate of grid computing!