Distributed computing, with its recent high-profile successes in the scientific community, is enjoying a revival of corporate interest as businesses and software vendors look to cash in on the aggregated processing power of all those desktop PCs and servers already running on the network.

Enterprises—particularly manufacturing and financial companies—realize what distributed computing can bring to a wide range of applications including data mining, financial forecasting, and physical and financial modeling and simulation. The new wave of distributed computing tools focuses on tapping into unused processor cycles to tackle huge data-crunching tasks. The big advantage is pretty clear and pretty simple.

“You can use the computers you already have,” says Scott Griffin, Intel Philanthropic Peer to Peer program manager. “You get to use these systems 100 percent of the time—when people are sleeping, in meetings, or at lunch.”

The unrealized power factor
With a distributed computing approach, processing power adds up quickly. For example, a company with a mix of 2,000 older 166-MHz Pentium-class computers and 100 newer 1-GHz Pentium III PCs is likely using those machines primarily for office applications. But together, they represent an aggregate processing power of about 240 GigaFLOPS. (A GigaFLOPS is equal to 1 billion floating-point operations per second.) That’s the equivalent of four Sun Enterprise 10000 servers, according to United Devices, a distributed computing vendor.
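To see how the arithmetic works, here is a back-of-the-envelope sketch in Python. The per-machine throughput figures are illustrative assumptions chosen to land near the vendor’s ballpark, not published benchmarks:

# Rough estimate of aggregate processing power for a mixed desktop fleet.
# The sustained GigaFLOPS per machine are assumed values for illustration.
fleet = {
    "166-MHz Pentium":   (2000, 0.083),  # (count, assumed GigaFLOPS each)
    "1-GHz Pentium III": (100,  0.74),
}

total_gflops = sum(count * gflops for count, gflops in fleet.values())
print(f"Aggregate: {total_gflops:.0f} GigaFLOPS")  # prints roughly 240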

If you want to see firsthand how fast your internal processing power adds up, check out the calculators from United Devices and Entropia, another distributed computing software vendor.

Big projects drawing corporate attention
Mainstream business’s new interest in distributed computing can be traced to two primary factors.

First, in the last few years, several well-publicized scientific applications have demonstrated the power of distributed computing.

Oxford University teamed with distributed computing software vendor United Devices in February of this year to screen a library of 3.57 billion molecules for possible use in developing anti-Anthrax drugs. To conduct the search, Oxford recruited online volunteers who would provide their own PCs for computing power. The volunteers downloaded a small agent program that ran in the background on their computers. The agent tested 100 molecules at a time, searching for ones that Oxford deemed worthy of a closer look in the lab.
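The shape of such an agent is simple. The sketch below is hypothetical and uses stand-in functions rather than United Devices’ actual client or protocol; it only illustrates the fetch, compute, and report loop running on a volunteer PC:

# Hypothetical volunteer-computing agent, not the actual United Devices client.
# It fetches a batch of work, screens each candidate, and reports any hits.
import random
import time

BATCH_SIZE = 100  # the Oxford agent tested 100 molecules at a time

def fetch_batch(size):
    # Stand-in for downloading work units from the project's server.
    return [f"molecule-{random.randrange(3_570_000_000)}" for _ in range(size)]

def screen_molecule(molecule):
    # Stand-in for the real docking/scoring computation.
    return random.random() > 0.999

def report_results(hits):
    # Stand-in for uploading results to the project.
    print(f"reporting {len(hits)} promising molecules")

for _ in range(3):  # a real agent loops indefinitely, and only when the PC is idle
    batch = fetch_batch(BATCH_SIZE)
    report_results([m for m in batch if screen_molecule(m)])
    time.sleep(1)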

In just 24 days, several hundred thousand volunteers contributed an aggregate 5,426 years (more than 47.6 million hours) of computer time, delivering the equivalent of more than 60,000 GigaFLOPS, or 60 trillion floating-point operations per second.
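Those figures are internally consistent, as a quick check shows:

# Quick sanity check of the reported totals.
years = 5_426
hours = years * 365.25 * 24   # about 47.6 million hours
flops = 60_000 * 1e9          # 60,000 GigaFLOPS = 6e13 FLOP/s, i.e., 60 trillion
print(f"{hours / 1e6:.1f} million hours, {flops:.0e} FLOP/s")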

To put the processing power in perspective, consider this: If you took the 20 fastest supercomputers in the world and ran them simultaneously to scan the same volume of molecules, it would take almost twice as long to get these results.

Another example of Internet-based distributed computing is SETI@Home, the Search for Extraterrestrial Intelligence at Home project, which uses volunteers’ PCs to analyze large volumes of data for recognizable patterns that might indicate signals sent by intelligent extraterrestrial sources. That project started in July 1999 and has employed more than 3.5 million volunteer PCs. These machines have provided an aggregate CPU time of about 876,000 years, and the project is ongoing.

The power behind the firewalls
A common theme among the scientific community’s successful distributed computing projects is the use of the Internet to connect the many volunteers’ machines. The second factor is closer to home: several vendors are focusing on the spare processor cycles that can be readily found on companies’ own networks. It’s a concept that’s beginning to capture CIOs’ attention as they realize how much distributed computing power lies behind their firewalls.

That realization is being prompted largely by distributed computing software vendors such as Avaki, Blackstone Computing, Entropia, Platform Computing, and United Devices, all of which are introducing new management tools targeted at the corporate IT leader.

The tools perform a wide range of functions. For instance, some give managers a single graphical view of the status of all elements of a distributed computing network. Others let tech leaders allocate resources, such as analysis program licenses, to specific groups so that the people who need an application are ensured access to it. Another set of tools lets a manager prioritize the projects submitted to a distributed computing system.
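As a rough illustration of the prioritization idea, the sketch below keeps submitted jobs in a simple priority queue; the job names and priority scheme are made up, and real vendor tools expose this through their own consoles and policies:

# Illustrative job prioritization for a distributed computing pool.
import heapq

queue = []  # min-heap: a lower number means a higher priority

def submit(priority, job_name):
    heapq.heappush(queue, (priority, job_name))

submit(2, "monthly risk simulation")
submit(1, "end-of-day portfolio pricing")  # most urgent
submit(3, "ad hoc data-mining run")

while queue:
    priority, job_name = heapq.heappop(queue)
    print(f"dispatching (priority {priority}): {job_name}")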

All these new tools let tech leaders manage this new resource—aggregate processing power—in the context of the company’s business.

The distributed computing software vendors are also addressing broader systems and network management issues. For instance, some are tying their platforms into existing software distribution applications, which makes distributing the agent software and jobs easier. Others are making status information (e.g., a downed server) available to upper-level management systems like HP OpenView, Tivoli TME 10, and CA Unicenter TNG. Many are capable of passing alarms and alerts up to these management systems.
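For a sense of what such an integration involves, here is a deliberately generic sketch that forwards a node-down alert as a UDP message; the collector address and message format are hypothetical, and real integrations use the management products’ own interfaces, such as SNMP traps:

# Hypothetical alert forwarding from a compute grid to a management console.
import json
import socket
from datetime import datetime, timezone

alert = {
    "source": "compute-grid",
    "severity": "critical",
    "event": "server unreachable",
    "node": "grid-node-42.example.com",  # hypothetical host name
    "time": datetime.now(timezone.utc).isoformat(),
}

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    # 127.0.0.1:5140 stands in for the management system's event collector.
    sock.sendto(json.dumps(alert).encode(), ("127.0.0.1", 5140))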

It’s clear that new approaches to and uses for distributed computing will only grow, given how many tools are hitting the market.