What's the best way to increase processing power?

Processing power—seems you can never have too much of it. But what's the best and most scalable way to increase the processing power available to your vital applications without breaking the bank?

Just like death and taxes, one thing we can count on is that tomorrow’s operating systems and applications will require more processing power than today’s. And the hardware makers are more than keeping up with the demand. Most IT professionals who have been in the business for any length of time are intimately familiar with the effects of "Moore’s Law," first postulated by Intel co-founder Gordon Moore in 1965, which observes that the number of transistors on a chip roughly doubles every couple of years, a trend popularly restated as computer processing power doubling about every 18 months.

You don’t even have to upgrade to new software to need greater and greater processing capabilities. As the sheer volume of data to be processed grows, you need to be able to process it faster in order to get through it all in a reasonable amount of time. So it’s almost inevitable that, within the next few years, you’ll be adding processing power, either by upgrading your current systems or by replacing them with newer, faster ones. The question is: what’s the most effective and economical way to do that?

Faster processors vs. multiprocessing

Let’s say you have an application, such as e-mail services or a SQL database, that needs more processing power. The first and most obvious way to get it is to upgrade to a faster processor, for instance, from a 2.0 GHz processor to a 3.0 GHz processor. You could do that either by replacing the processor in the server with a faster one (if the motherboard will support it), or by replacing the entire server with one that has a faster processor.


The first option will cost less, but it may not be possible if the computer’s motherboard doesn’t support a faster processor than the one it currently has. It will also require either having the manufacturer install the new processor or opening up the box and installing it yourself, possibly voiding the OEM warranty. Buying a whole new machine will obviously cost more, but you’ll likely get other benefits, such as support for more RAM, faster buses, and so forth.

Whether you’re upgrading the old machine or replacing it with a new one, buying the fastest available processor usually costs disproportionately more than settling for a speed just under the current top of the line. For example, when pricing a new Dell Precision recently, I found that going from a 3.0 GHz dual-core processor to a 3.20 GHz one cost only an additional $194, but going from 3.20 GHz to 3.40 GHz cost an additional $270.
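If you want to see just how steep that curve gets, it helps to work the deltas out as cost per additional gigahertz. The short Python sketch below uses the Dell Precision price differences quoted above; the prices are the article's, and only the arithmetic is being added.

```python
# Rough sketch: cost per additional GHz at each upgrade step,
# using the Dell Precision price deltas quoted above.
upgrade_steps = [
    {"from_ghz": 3.0, "to_ghz": 3.2, "extra_cost": 194},   # $194 to go 3.0 -> 3.2 GHz
    {"from_ghz": 3.2, "to_ghz": 3.4, "extra_cost": 270},   # $270 to go 3.2 -> 3.4 GHz
]

for step in upgrade_steps:
    delta_ghz = step["to_ghz"] - step["from_ghz"]
    cost_per_ghz = step["extra_cost"] / delta_ghz
    print(f"{step['from_ghz']} -> {step['to_ghz']} GHz: "
          f"${step['extra_cost']} extra, roughly ${cost_per_ghz:,.0f} per additional GHz")
```

Worked out this way, the 3.0-to-3.20 GHz step runs about $970 per additional gigahertz, while the 3.20-to-3.40 GHz step runs about $1,350, which is why the last increment to the top of the line is usually the worst value.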

For this reason, you might find that you get more "bang for your buck" by adding another processor, or by purchasing a computer with two lower-speed processors. Of course, this presupposes that the machine has a dual-processor motherboard as well as an operating system that supports multiprocessing. A computer with two 3.0 GHz processors may not be as fast as one with a single 6.0 GHz processor, but it’s likely to outperform one with a 4.0 GHz processor (especially for certain types of tasks and applications) and be significantly less expensive.
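One way to put numbers behind the "two 3.0 GHz versus one faster chip" intuition is Amdahl's Law, which isn't mentioned above but is the standard rule of thumb: only the parallelizable share of a workload benefits from the second processor. The parallel fractions in this sketch are hypothetical; it just shows the shape of the tradeoff.

```python
# Minimal Amdahl's Law sketch (an assumption layered onto the article's example)
# of why two 3.0 GHz processors don't behave like a single 6.0 GHz chip.

def effective_speed(ghz_per_cpu: float, cpus: int, parallel_fraction: float) -> float:
    """Very rough 'effective GHz': clock speed times the Amdahl's Law speedup."""
    speedup = 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cpus)
    return ghz_per_cpu * speedup

for p in (0.5, 0.8, 0.95):   # hypothetical share of the workload that parallelizes
    dual = effective_speed(3.0, 2, p)
    print(f"parallel fraction {p:.0%}: two 3.0 GHz CPUs act like roughly {dual:.1f} GHz, "
          f"vs. a single 4.0 GHz CPU")
```

Under these assumptions, the dual 3.0 GHz machine roughly matches a single 4.0 GHz processor once about half the workload parallelizes, and pulls well ahead for heavily parallel workloads such as serving many simultaneous e-mail or database users.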

Another consideration when you add processors is the cost of software licensing. Some software is licensed per processor; other programs are not. If you add a processor to a system running Microsoft ISA Server, you must pay an additional $1,499 to $5,999 per processor, depending on whether you’re using Standard or Enterprise edition. However, you can add as many processors as you want to a Windows Server 2003 file server without paying extra licensing fees, although you may need to buy additional client access licenses (CALs) if the extra processing power is needed because of an increased number of users and you’re using the "per device or user" licensing mode (formerly known as "per seat" mode).
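A quick back-of-the-envelope comparison shows how much the licensing model can dwarf the hardware cost. The license figures below are the ISA Server numbers quoted above; the $500 hardware price for the extra processor is a made-up placeholder.

```python
# Back-of-the-envelope check of how per-processor licensing changes
# the real cost of adding a second processor to an ISA Server box.
hardware_cost_of_extra_cpu = 500        # hypothetical price of the processor itself
license_per_cpu = {"Standard": 1_499, "Enterprise": 5_999}

for edition, license_cost in license_per_cpu.items():
    total = hardware_cost_of_extra_cpu + license_cost
    print(f"ISA Server {edition}: the extra CPU is ${hardware_cost_of_extra_cpu} in hardware, "
          f"but ${total:,} once the per-processor license is added")
```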

Multiprocessing vs. parallel processing

Multiprocessing usually refers to two or more processors installed in the same computer, whether as separate processor units, multiple chips in one package, or multiple cores on a single processor die. Parallel processing, by contrast, more often describes multiple separate computers working together to process a particular task.

For example, a group of computers can be connected through a fast Ethernet connection to make up a cluster, which appears to the rest of the network as a single computer. Clusters can be implemented for different purposes. Some are designed to provide fault tolerance, where another member of the cluster takes over if the primary member fails; this type of cluster doesn’t provide real multiprocessing. Other types of clusters perform load balancing, where the processing load is distributed across two or more computers. High-performance clusters are designed to spread processing tasks across multiple computers to improve performance, and they can also incorporate fault tolerance.
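Conceptually, a load-balancing cluster is just a dispatcher that hands each unit of incoming work to one of several nodes. The toy sketch below is not how any particular product (such as Windows NLB) is implemented; the node names and the simple round-robin policy are purely illustrative.

```python
# Conceptual sketch only: how a load-balancing cluster spreads incoming work
# across nodes. Real products do this at the network layer; node names and
# the round-robin policy here are hypothetical.
from itertools import cycle

nodes = cycle(["node1", "node2", "node3"])   # hypothetical cluster members

def dispatch(requests):
    """Assign each incoming request to the next node in round-robin order."""
    return [(request, next(nodes)) for request in requests]

for request, node in dispatch([f"req-{i}" for i in range(1, 7)]):
    print(f"{request} -> {node}")
```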

Clustering requires some administrative overhead to implement, and you need an operating system that supports clustering or third-party clustering software. This may or may not add significant software cost. Windows Server 2003 supports load-balancing clusters in all editions (Web, Standard, Enterprise, and Datacenter). Free software for running high-performance clusters is also available for several different Linux distros, and DragonFly BSD supports native clustering as well.
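On the Linux high-performance side, MPI is the typical programming model, and mpi4py is one freely available Python binding for it. The sketch below is a minimal illustration, assuming an MPI runtime is installed on the cluster; the toy workload and the suggested file name are made up.

```python
# Minimal sketch of high-performance-cluster style parallel processing with
# mpi4py (one of several free options). Each process sums its own slice of the
# data; rank 0 combines the partial results.
# Run with something like: mpiexec -n 4 python sum_cluster.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's ID within the cluster job
size = comm.Get_size()          # total number of cooperating processes

numbers = range(1, 1_000_001)                         # the full workload
my_share = [n for n in numbers if n % size == rank]   # this process's slice
partial = sum(my_share)

total = comm.reduce(partial, op=MPI.SUM, root=0)      # combine the results on rank 0
if rank == 0:
    print(f"{size} processes computed a combined total of {total}")
```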

Distributed processing

For processing huge amounts of data, distributed systems can be employed in which many widely dispersed computers work on a problem by each tackling a different part of the computation. These computers can be geographically spread out and don’t have to be running the same operating system or working on the task at the same time. This is also referred to as grid computing. One of the most famous examples of distributed computing is the SETI@home project hosted at the University of California at Berkeley, which uses computers all over the world to process data in search of extraterrestrial intelligence. Computer users connected to the Internet download and install the SETI@home software, and their systems work on the project during otherwise idle time.
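The pattern underneath projects like SETI@home is simple even though the scale is huge: carve the data into self-contained work units, hand them to whatever machines are available, and merge whatever comes back. The sketch below is a local, single-machine illustration of that pattern, not the actual SETI@home code.

```python
# Illustrative sketch of the grid-computing pattern: split a big dataset into
# independent work units that any machine, anywhere, can process on its own
# schedule, then merge the results.

def make_work_units(dataset, unit_size):
    """Chop the dataset into self-contained chunks that can be handed out."""
    return [dataset[i:i + unit_size] for i in range(0, len(dataset), unit_size)]

def process_unit(unit):
    """Stand-in for the real analysis a volunteer machine would run."""
    return sum(unit)

dataset = list(range(10_000))                      # pretend this is the raw signal data
units = make_work_units(dataset, unit_size=1_000)

# Each unit could go to a different volunteer computer; here we just loop locally.
results = [process_unit(unit) for unit in units]
print(f"{len(units)} work units processed, combined result: {sum(results)}")
```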

Distributed computing takes advantage of the power of hundreds or thousands of machines working on the same problem. Disadvantages include network reliability (or the lack thereof), security issues, and decentralized administration.

Planning a scalable strategy for increasing processing power

Increasing the processing power of a single machine is inherently limited in scalability. However, if you plan ahead, you can maximize scalability by purchasing systems with motherboards that will support an upgrade to a faster processor later, or by buying a multiprocessor-capable system with one processor and adding processors as you need them.

Server clusters can be very scalable because you can add more machines to share the processing load as your needs expand. Be sure to look at the number of cluster nodes supported by the system you’re planning to implement. For example, Windows Server 2003 supports up to 32 network load balancing (NLB) cluster nodes.

Distributed processing is appropriate for extremely large-scale projects where the systems do not need to work tightly together and where security concerns and the potential loss of one or more nodes are not major issues.


By Deb Shinder

Debra Littlejohn Shinder, MCSE, MVP is a technology consultant, trainer, and writer who has authored a number of books on computer operating systems, networking, and security. Deb is a tech editor, developmental editor, and contributor to over 20 add...