Exchange Server 2003 was designed with clustering in mind, and it functions very well in a clustered configuration. Given the resource-intensive nature of Exchange Server and the critical function of e-mail in most organizations, Exchange Server is a perfect candidate for clustering. Here are some of the most important aspects of configuring and operating Exchange Server in a clustered environment.
Justifying new hardware purchases is sometimes so difficult that you might be tempted to simply repurpose server hardware you already own. Doing so, however, often turns out to be a fatal mistake. For example, suppose you own a hard disk array that works flawlessly with a standalone Windows Server 2003 machine. If you configure the same array as a shared disk in a Windows Server 2003 cluster, you might receive a "STOP 0x000000B8" error message on boot-up. The error occurs because the disk array is not cluster-compatible.
When you're planning an Exchange Server cluster, the Hardware Compatibility List should be your guide. Microsoft no longer maintains the list as a standalone document, but you can find Windows Server 2003-certified hardware in the Windows Server Catalog, which will tell you exactly what hardware will work in a clustered configuration. Although you can sometimes get away with uncertified hardware on a standalone server, clustering is a big exception. If your cluster hardware isn't on the list, there's a good chance that the cluster won't work at all or that you'll have a really tough time getting it to work.
In addition to the list issue, there are some disk requirements you need to be aware of. Clustering Exchange Server requires a shared hard disk. This hard disk contains the various Exchange databases and is shared among all of the nodes in the cluster. Since there's only one copy of the Exchange databases, all nodes have access to the most recent data, and there's no need to replicate data between nodes. This is where the first disk requirement comes into play. The disk must be accessible from each node in the cluster.
As for other requirements, the disk must be physically attached to a shared bus, and only physical disks can be used as a cluster resource. Even if a physical disk contains multiple partitions, the disk will still be treated as a single resource. When you attach the disk to the various nodes, you must configure the disk as a basic disk. Dynamic disks are not acceptable for an Exchange Server cluster. Finally, any partitions existing on the shared disk must be formatted as NTFS.
Choose a cluster type
When designing an Exchange Server cluster, one of the first decisions you need to make is whether you want to create an active or a passive cluster. There are advantages and disadvantages to both types of clusters, so you'll have to determine the cluster type based on your Exchange organization and on your business needs.
An active cluster is one in which multiple Exchange Servers service requests. The nice thing about this type of clustering is its scalability. If your Exchange organization isn't performing as well as you'd like, just add an extra server to the cluster and it will help pick up the slack. For this reason, an active cluster is sometimes referred to as a load balancing cluster.
A passive cluster is one in which a server remains idle and simply waits for the primary server to fail. If the primary server fails, the other server on the cluster springs to life and takes over for the failed server, thus making sure that the Exchange organization is constantly available.
After reading these two descriptions, it might seem as though the obvious choice is to create an active cluster. After all, why have one server just sitting idle when you can combine the power of the two servers for some seriously high-performance computing? Actually, there's a compelling reason why the passive cluster model is sometimes a better choice.
Imagine for a moment that you have an Exchange Server that's pushed to its maximum capacity in terms of the system resources it's using. If that server failed and it was part of a passive cluster, the cluster's remaining node would have no trouble filling in for the failed server, since it's basically just a hot spare. On the other hand, if the server that failed was part of an active cluster, a remaining cluster node would have to pick up the slack from the failed server, plus continue to handle its existing workload.
In an active cluster with lots of nodes, this isn't a huge deal because the failed server's workload is distributed across all of the remaining nodes. Performance will suffer a bit, but everything will continue to function. If, however, the active cluster has only two nodes, a single server will be forced to carry the workload normally handled by two servers. If those servers are already almost maxed out, doubling the workload during a failure will be beyond the machine's capabilities.
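The failover arithmetic described above is easy to check for yourself. Here's a minimal Python sketch that redistributes a failed node's workload across the survivors; the utilization figures are hypothetical, chosen only to illustrate why a nearly maxed-out two-node active cluster can't absorb a failure while a larger cluster or a passive hot spare can.

```python
def post_failover_load(loads, failed_index):
    """Spread the failed node's workload evenly across the surviving
    nodes and return each survivor's new utilization fraction.

    `loads` is a list of per-node utilization fractions (0.0 = idle,
    1.0 = fully loaded). Values above 1.0 in the result mean the
    survivor is being asked to do more than it can handle.
    """
    survivors = [u for i, u in enumerate(loads) if i != failed_index]
    extra = loads[failed_index] / len(survivors)
    return [u + extra for u in survivors]

# Passive cluster: a busy node plus an idle hot spare. The spare
# simply inherits the full workload and runs at 90% -- no problem.
print(post_failover_load([0.9, 0.0], 0))

# Two-node active cluster, both nodes near capacity: the survivor
# would need 180% of its capacity, so the failover cannot succeed.
print(post_failover_load([0.9, 0.9], 0))

# Five-node active cluster at moderate load: each survivor absorbs
# only a quarter of the failed node's work and stays within capacity.
print(post_failover_load([0.6] * 5, 0))
```

The same function models both cluster types: a passive cluster is just the special case where one node's starting load is zero.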
One of the best ways that I've ever heard this concept explained is to imagine that a bus filled with passengers breaks down. The bus company can't send another bus that's already full to pick up those passengers. Instead, it can either send an empty bus to pick up the stranded passengers, or it can send several partially full buses and put a few passengers on each. That's the way a failed server within a cluster works. If a server fails, you can either have a hot spare take over the workload, or you can distribute the workload across other servers, but you generally can't offload the failed server's entire workload onto a server that is already servicing a heavy workload.
My recommendation is to pick your cluster type based on the number of nodes you have. If you're going to have only two nodes within the cluster, use a passive cluster. If you have a high-demand environment and plan on using more than two or three nodes, you'll probably be better off with an active cluster. Either way, the art of choosing a cluster type is knowing how much of a workload each server is going to have and how much capacity each server will have in order to handle additional work should a node fail.
Preparing the network
Once you've made sure that your hardware is up to par, and you've decided what type of cluster you want to create, your next task is to prepare your network for the creation of the cluster. The actual requirements for clustering Exchange Server (and for creating a cluster itself) vary considerably depending on the version of Windows and the version of Exchange you plan on using. It also makes a difference whether you're creating a new Exchange cluster or if you're upgrading from another type of cluster or Exchange environment.
Since I can't possibly cover every single configuration, the remainder of this article assumes that you'll create a brand new cluster running Windows Server 2003 Enterprise Edition and Exchange 2003 Enterprise Edition.
The first step in preparing your network is setting aside a block of static IP addresses the cluster can use. A clustered Exchange deployment requires more IP addresses than you might expect. You can use a formula to determine how many IP addresses are actually required. The formula is 2N+E+2, where N is the number of nodes in the cluster, and E is the number of virtual Exchange Servers in the cluster. The two IP addresses, designated by +2, are used by Windows to locate the quorum disk resource and the Microsoft Distributed Transaction Coordinator (MSDTC) resources.
For example, if you were planning to create a two-node cluster with one virtual Exchange Server, the formula would work out to (2 x 2 nodes) + 1 virtual Exchange Server + 2. Remembering that the order of operations demands that multiplication precede addition, the solution is 7, meaning that you'd need seven static IP addresses.
In case you're wondering, the maximum number of nodes you can have in an Exchange cluster is eight. Using the formula above, an eight-node cluster with a single Exchange virtual server would require 19 static IP addresses.
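The 2N+E+2 formula from above is easy to wrap in a quick calculation. This is a minimal sketch: the two IP addresses per node correspond to the public and private NICs each node requires, and the final +2 covers the quorum and MSDTC resources.

```python
def required_ips(nodes, virtual_servers):
    """Static IP addresses needed for an Exchange 2003 cluster,
    using the 2N + E + 2 formula: two addresses per node (public
    and private NICs), one per Exchange virtual server, plus two
    for the quorum disk and MSDTC resources."""
    return 2 * nodes + virtual_servers + 2

print(required_ips(2, 1))  # the two-node example above: 7
print(required_ips(8, 1))  # the maximum eight-node cluster: 19
```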
After you've determined how many static IP addresses your cluster will require and what those addresses should be, you must verify that your network contains a DNS server and that your cluster nodes have their TCP/IP configurations set to point to the DNS server. Ideally, your network should contain a DNS server that is capable of accepting dynamic updates. If you're running the Windows Server 2003 or the Windows 2000 Server implementation of DNS, you have nothing to worry about. If you're running some other type of DNS server, you'll have to create a DNS host record (an A record) for each network name resource in the cluster.
Once you've verified that the necessary DNS infrastructure is in place, go ahead and install Windows Server 2003 onto the cluster nodes. For now, you'll be performing a standard Windows installation. Initially, the only thing special you'll have to do is make sure that each cluster node is a member of the same domain. A lot of people create a special domain just for the cluster, but this certainly isn't a requirement.
Now that you've laid the groundwork, it's time to actually cluster the various nodes together. Remember that each server in the cluster must have two NICs. One NIC must be connected to the corporate network, and the other NIC must be connected to a private network used only by the nodes in the cluster.
One of the nice things about Windows Server 2003 is that you don't have to install any special software in order to create a cluster. The Cluster service is set to run automatically. Your only task is to configure the cluster. To do so, just enter the CLUADMIN command at the Run prompt. This will open the Cluster Administrator program. Select the Create New Cluster option from the Action drop-down list and click OK. Windows will then open the New Server Cluster Wizard.
Creating the cluster is a simple process. Run this wizard on each server (one at a time) to add it to the cluster that you're creating. The wizard will require you to enter information such as the designated IP addresses, the cluster's domain, and the cluster's service account authentication information.
The service account is used to allow the nodes in the cluster to communicate with each other. This account must be a domain account with local administrative privileges over each node in the cluster. Unlike the cluster implementation that was used in Windows 2000, the service account doesn't require any Exchange-specific permissions. This means you don't have to assign the Exchange Full Administrator role to the service account. When you create the service account, it's extremely important that you create it so that the password never expires.
Installing Exchange Server
Now that you have a Windows cluster up and running, it's time to install Exchange Server. Do not install Exchange Server until the cluster is functional. If you try to install Exchange before you establish the cluster, Exchange will have some serious problems.
As you install Exchange, remember that it must be configured so that the databases and transaction logs exist on the shared disk. Microsoft recommends that you never place the transaction logs on the same volume as the databases, so you may want to have multiple partitions or multiple physical disks. In either case, the partitions on the shared disk must be empty prior to your installing Exchange Server.
Also note that you must install the same version of Exchange on each node in the cluster. You can't install Exchange 2003 Enterprise Edition on one node and then install Exchange 2003 Standard Edition on another node.
Likewise, you can't install multiple copies of Exchange Server simultaneously. You must install Exchange onto the cluster nodes in sequence, not in parallel. This means that if you have more than a couple of cluster nodes, the installation will probably take a long time to complete, so make sure you set aside plenty of time.