This article is from TechRepublic’s Disaster Recovery e-newsletter.

As companies move more mission-critical operations onto computers, the added workloads demand more powerful server systems. As a result, two concepts for staying on top of required resources have evolved: scale-up and scale-out.

Whichever route your organization chooses, the decision shapes the development and management of its disaster recovery (DR) plan.

Over the years, scale-up architecture has become the de facto standard for gaining more power in data centers. It works rather simply: When you need more power, you get a bigger, more powerful server.

Ever-faster processors keep coming to market, sometimes even ahead of the pace Moore’s Law predicts, and the number of processors that can be packed into a single server also continues to grow. This allows systems engineers to create immensely powerful servers, containing the entire data system within one new, more powerful box.

However, disaster recovery becomes somewhat of a challenge with these systems. As they grow larger, these single-box data systems are running more and more applications on the same machine—putting a lot of very critical eggs in one basket.

After a while, proper high-availability mechanisms that are local to the data system become mandatory, because a single hardware failure could conceivably take out a large chunk of your enterprise’s data architecture. The good news is that these systems are easier to protect between multiple data centers because there are fewer servers overall for which to build redundant systems. But remember to properly size the DR servers in the remote location to ensure they can handle the required load.
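As a back-of-the-envelope illustration of that sizing step, the check amounts to summing each application’s peak demand and adding headroom. The application names and figures below are hypothetical, and real sizing would consider CPU, memory, and I/O separately:

```python
# Hypothetical DR-sizing sketch for a consolidated scale-up server.
# Application names and load figures are illustrative only.

# Peak demand of each application, as a fraction of one
# production-class server's capacity.
app_load = {
    "erp": 0.40,
    "mail": 0.25,
    "reporting": 0.15,
}

HEADROOM = 0.20  # spare capacity to absorb spikes during a failover

total = sum(app_load.values())
required = total * (1 + HEADROOM)

print(f"Combined peak load: {total:.2f} of one production server")
print(f"DR server should supply at least {required:.2f} of that capacity")
```

The point of the headroom factor is that a DR site runs degraded by definition; sizing the remote box to exactly match the primary’s average load leaves nothing for the failover spike itself.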

Scale-out architectures also provide more processing power, but from a different direction. This approach clusters or otherwise splits the workload across many different servers, so each individual server need only be powerful enough to handle a small percentage of the overall load.
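In practice the splitting is handled by a load balancer or the cluster software, but the core idea can be sketched in a few lines. The server names and the simple hash below are illustrative, not how any particular cluster product works:

```python
# Toy sketch of spreading work across a scale-out pool.
# Server names are placeholders; production clusters use a load
# balancer or cluster service rather than a hand-rolled hash.
import zlib

servers = ["node1", "node2", "node3", "node4"]

def assign(request_id: str) -> str:
    """Map a request to one server, so each node carries roughly
    1/len(servers) of the overall load."""
    return servers[zlib.crc32(request_id.encode()) % len(servers)]

# The same request always lands on the same node.
print(assign("order-1001"))
```

A stable hash keeps assignments consistent between calls, which matters when sessions or cached state live on a particular node.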

Solaris and other UNIX variants have supported clustering for scale-out architectures for years. And with Windows Server 2003’s clustering features, the Windows world has now joined the mix.

This architecture offers the benefit of running multiple systems on multiple servers, with one or two failover (or standby) nodes available to take over in the event that one server dies. So the very architecture you’re using takes care of your local high-availability concerns.

However, remote failover becomes somewhat of a puzzle. You must figure out which systems should fail over, when, and to which hardware. Since you could fail over multiple applications to the same remote server (taking a performance hit, of course), your organization must determine what can take the hit in order to keep the budget for server hardware under control.
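One way to reason about that trade-off is as a simple bin-packing exercise: place applications onto the (deliberately smaller) DR servers until capacity runs out, and flag whatever doesn’t fit as the candidates to defer or to double up at reduced performance. The figures and greedy first-fit approach below are purely illustrative:

```python
# Illustrative failover-planning sketch. Capacities and loads are in
# percent of one production server; all names and numbers are made up.

dr_capacity = {"dr1": 70, "dr2": 70}
apps = {"erp": 50, "batch": 40, "mail": 30, "reporting": 30, "web": 20}

def plan_failover(apps, capacity):
    """Greedy first-fit, largest applications first. Anything left
    unplaced must be deferred or accept a performance hit by sharing
    an already-full node."""
    remaining = dict(capacity)
    placement, unplaced = {}, []
    for name, load in sorted(apps.items(), key=lambda kv: -kv[1]):
        target = next((s for s, free in remaining.items() if free >= load), None)
        if target is None:
            unplaced.append(name)
        else:
            placement[name] = target
            remaining[target] -= load
    return placement, unplaced

placement, unplaced = plan_failover(apps, dr_capacity)
print("placement:", placement)
print("needs a decision:", unplaced)  # reporting does not fit anywhere
```

Whatever lands in the “needs a decision” list is exactly what the budget conversation is about: buy more DR hardware, or accept that those systems run slowly (or not at all) during a disaster.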

Final word
Hardware vendors are beginning to take sides in the scale-up vs. scale-out debate. For example, Dell recently announced its plans to shy away from the largest server systems (such as eight-way Windows servers) and concentrate on scale-out architectures instead.

However, most leading hardware vendors for all operating systems have decided to support both architecture concepts, so your organization’s needs will be the primary factor in determining which option suits it best. In general, if your company runs many small applications, go with a scale-out architecture and spread them across multiple, smaller, cheaper servers. If your organization runs gigantic, powerhouse applications, then a scale-up architecture, with a smaller number of larger machines, will likely prove more cost-effective.