Choosing the right type of hardware is critical to your success with larger implementations of Exchange. With many criteria and many solutions, you could waste your days playing a game of guess-the-right-solution.
In this Daily Drill Down, I’ll show you what I consider the best approach to setting up Exchange server hardware for your corporation.
So you’ve done a great deal of architectural planning. During this planning, you’ve gotten at least a rough idea of how many servers you’ll need to create the routing groups you’ve defined for Exchange 2000. The next step in the process is to decide what type of hardware the servers need to have. As I discuss server hardware, it’s important to remember that the suggestions I make are merely guidelines. You’ll have to plan for the specific needs of your organization. You also need to remember that Windows 2000 tends to be very picky when it comes to hardware. Therefore, make sure that any servers you buy conform to the Windows 2000 Hardware Compatibility List (HCL).
Now that I’ve gotten the standard disclaimer out of the way, let’s look at the issues you need to consider. A small routing group, such as a remote office with just a few users, requires far less planning than a large office or a large routing group. In a small office, just about any server that meets the recommended hardware requirements will get the job done. In a big office, however, you have a choice to make: you can use a small number of really powerful servers or a larger number of lower-end machines. There aren’t really any firm guidelines as to which method you should use, but I personally prefer many smaller servers over a few giant ones.
I have two main reasons for this preference. First, there’s the issue of future growth. Large, powerful servers are expensive, and as the network grows, upper management is often reluctant to loosen the purse strings enough to let the network administrator buy another high-end server. The reason is that when you buy the first few, you usually have to justify the high cost, typically by boasting about just how powerful the new server is and what it can do. When you later ask for money to buy another one, management tends to wonder why the previous server is inadequate to handle the additional workload.
If you go with low-end servers, though, you can tell management upfront that you’re using a build-as-you-go technique. The servers are cheap (comparatively speaking), and adding more servers as the network grows is no big deal, as long as management understands your approach ahead of time.
My other reason for preferring to use larger numbers of less powerful servers is fault tolerance. Suppose for a moment that your organization contains 1,000 Exchange users. Now suppose that all of those user mailboxes are contained on a single server. If the server fails, then no one can access his or her mail until the server has been brought back up (which sometimes takes a while). Now suppose that you had taken the small-scale approach and distributed those users among five smaller servers. If one of those servers were to go down, you’d have 200 users without mail access, but the other 800 users would be able to keep working as if nothing had ever happened.
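The arithmetic behind this trade-off is simple enough to sketch. Here’s a small, hypothetical helper (the function name and the even-split assumption are mine, not from any Exchange tool) that shows how the blast radius of a single server failure shrinks as you spread mailboxes across more machines:

```python
def users_affected_by_one_failure(total_users: int, server_count: int) -> int:
    """Return how many mailboxes go offline if one of `server_count`
    equally loaded servers fails. Assumes mailboxes are split evenly."""
    return total_users // server_count

# One giant server: every mailbox is offline during the outage.
print(users_affected_by_one_failure(1000, 1))  # 1000

# Five smaller servers: only a fifth of the users are affected.
print(users_affected_by_one_failure(1000, 5))  # 200
```

The even split is an idealization; in practice, you’d balance mailboxes by usage rather than by raw count, but the principle is the same.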
As I said, though, there’s really no right or wrong way to go about buying servers, as long as they are completely compatible with Exchange and Windows. However, whether you’re buying big or small servers, there are at least four main things that you need to look at when scoping out a new server. These include the processor, the memory, the hard disk, and the network connection. In the sections that follow, I’ll discuss the planning that you should put into each of these components.
A lot of people think of the processor as controlling how fast a system runs. However, this is only partially true. The processor is responsible for processing instructions. While the processor’s clock speed controls how fast the instructions are processed, how quickly the instructions get to the processor is almost as important. For example, if you have a really fast processor but really slow memory, the system will run slowly because the memory’s speed, or rather lack of speed, is acting as a bottleneck in the system.
In spite of the fact that other components can slow the system down, it is important to have a fast processor. Windows 2000 alone has a lot of overhead. When you load Exchange 2000 on top of Windows, you subject the processor to quite a workload.
Generally speaking, the higher a processor’s clock speed, the faster it will run. However, beware of cheap processors with high clock speeds. Many of the cheaper processors have a smaller onboard cache, which can really slow things down.
One option that you have when designing an Exchange 2000 server is to use multiple processors. Exchange 2000 is a multithreaded application, which means that it can easily take advantage of the power offered by machines with two or more processors. However, don’t be fooled: adding a second processor to a system doesn’t double the system’s speed. Remember that the two processors exist on the same system board, which means that they share the system’s memory, hard disk, and other components. If one processor needs to access the hard disk, it will have to wait until the other processor is done using the hard disk before it can do so.
Another thing that a lot of people don’t realize about machines with multiple processors is that Windows has to assign various tasks to the individual processors. This process actually consumes some amount of processing power. In general, if you install a second processor, you can expect to get about a 50 percent performance gain.
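This kind of diminishing return can be estimated with Amdahl’s law, which models speedup when only part of a workload can be spread across processors (the shared disk, memory, and scheduling overhead make up the serial remainder). The parallel fraction below is an illustrative assumption, not a measured Exchange figure:

```python
def effective_speedup(processors: int, parallel_fraction: float) -> float:
    """Amdahl's-law estimate of overall speedup when only
    `parallel_fraction` of the work can use additional processors."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# If roughly two-thirds of the workload parallelizes, a second
# processor yields about a 1.5x speedup -- the ~50 percent gain
# mentioned above.
print(round(effective_speedup(2, 2 / 3), 2))  # 1.5
```

The model also shows why throwing in a third or fourth processor helps less and less: the serial portion never shrinks.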
Perhaps the cheapest way to really boost a server’s performance is by adding memory. Remember that both Windows and Exchange are memory hogs. In spite of what Microsoft says are the minimum memory requirements, I recommend that you never run an Exchange 2000 Server with less than 256 MB of physical RAM. Of course, you should add even more memory if the server will be running services other than Exchange 2000. For example, if the server doubles as a domain controller or a DNS server, then you’ll need extra RAM to support the additional services.
So why does memory make such a big difference? Because Windows 2000 is memory-hungry. When Windows fills up the system’s physical RAM, it begins dumping memory pages to a page file on the hard disk. Any time the system needs to access a page of memory that resides on the hard disk, it must empty a portion of the physical RAM by dumping it to the page file. Windows then reads the desired memory page from the hard disk and copies it to the physical RAM. This process is known as swapping or paging.
As you can see, paging is much more time-consuming than reading a memory page directly from memory. For starters, there are more steps involved in paging than in simply reading a page from memory. The real slowdown, though, is the hard disk access. Hard disk access time is measured in milliseconds, while memory access is measured in nanoseconds. Physical RAM is a much faster medium than the hard disk.
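The milliseconds-versus-nanoseconds gap means that even rare page faults dominate average access time. A quick sketch, using illustrative (not measured) timings for RAM and disk:

```python
def effective_access_ns(fault_rate: float,
                        ram_ns: float = 60.0,
                        disk_ms: float = 9.0) -> float:
    """Average memory access time in nanoseconds when a fraction
    `fault_rate` of accesses must be paged in from disk. The RAM and
    disk figures are hypothetical round numbers for illustration."""
    disk_ns = disk_ms * 1_000_000  # convert milliseconds to nanoseconds
    return (1.0 - fault_rate) * ram_ns + fault_rate * disk_ns

print(effective_access_ns(0.0))   # 60.0 ns -- no paging at all
print(effective_access_ns(1e-5))  # ~150 ns -- one fault per 100,000 accesses
```

Even one page fault per 100,000 accesses more than doubles the average access time in this model, which is why adding RAM pays off so dramatically.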
Any time that you add memory to a server, you decrease the chance that the system will have to rely on paging. Unfortunately, it’s nearly impossible to completely get away from paging. However, generally speaking, the more memory your Exchange Server has, the quicker it will run.
One of the trickiest things to plan within an Exchange environment is the hard disk. The hard disk configuration has a major impact on how Exchange performs. If you’re on a tight budget and you’ve only got a handful of users, then there’s absolutely no reason you can’t use a server that contains a single partition on an IDE hard drive. Generally speaking, though, this is a configuration you’ll want to avoid.
The real trick to planning a hard disk configuration is to think of what the Exchange Server will use the hard disk for. First, you have the Exchange database files that are used for the users’ mailboxes and for the public folders. Next, you have the Exchange log files that are used as a temporary storage location for database transactions. These log files are also used in disaster recovery situations. Finally, you also have to think about the Windows 2000 page file.
For an optimal configuration, I recommend keeping all three of these file types separate. For example, you should place the page file on one physical hard disk, the Exchange log files on another physical hard disk, and the Exchange databases on yet another physical hard disk. Notice that I said physical hard disk instead of partition. As long as you’re using SCSI, Windows can access each hard disk simultaneously. This means that the server can make a database entry, update a log file, and update the page file at the same time. If all of these items existed on the same physical hard disk, then the operating system would have to complete one disk operation before it could start the next one.
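A rough model makes the benefit of separate spindles concrete. The function and the millisecond timings below are hypothetical illustrations of the serialized-versus-overlapped behavior described above, not Exchange measurements:

```python
def batch_time_ms(op_times_ms, disks: int) -> float:
    """Rough time to complete one database write, one log write, and
    one page-file write. With a disk per file type the operations
    overlap, so the slowest one dominates; with fewer disks than file
    types, we model the worst case of fully serialized I/O."""
    if disks >= len(op_times_ms):
        return max(op_times_ms)  # each operation runs on its own disk
    return sum(op_times_ms)      # operations queue up behind one another

ops = [9.0, 7.0, 8.0]  # database, log, page file writes (ms, hypothetical)
print(batch_time_ms(ops, 1))  # 24.0 -- one disk handles them back to back
print(batch_time_ms(ops, 3))  # 9.0  -- three disks work simultaneously
```

Real disk scheduling is more nuanced than this two-branch model, but it captures why three modest drives can outperform one fast one for this workload.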
I realize that on lower-end servers, three SCSI hard drives may be beyond your budget. As I mentioned, the primary reason for keeping the files separate is performance. Exchange does use the log files for disaster recovery, though, and even if you’ve only got a single hard disk, it’s a good idea to keep the log files on a different partition from the database files. The idea is that if the partition containing the Exchange databases were to fail, you would still be able to access the log files and bring the server back up. Of course, that trick will only work if the failure is at the partition level. If the entire drive were to fail, then you’d lose the databases and the log files, even if they were in separate partitions.
That gives you an idea of how to do things if you’re using a low-end server, but what if you want to use a high-end server? In such a situation, I recommend using a separate physical hard drive for the page file, a RAID array for the log files, and another RAID array for the Exchange databases. Doing so will provide the fastest possible access to the log files and databases. If you’ve chosen to use a fault-tolerant RAID array, you’ve also protected your data while boosting performance.
There isn’t really a lot to be done with the network connection beyond the obvious. However, if you have multiple Exchange servers in close proximity, you can install a second network card in each server and create a dedicated backbone between the Exchange servers. By doing so, you can take the Exchange-related replication traffic off of the main network and confine it to a dedicated network. Not only will this ease the traffic on your network, but the replication process will be faster, because replication traffic won’t be delayed by user traffic.
Although Exchange 2000 is a resource hog, it can be implemented to suit your needs. Whether you’re using low-end or high-end servers, you can create an Exchange server that will service your company’s requirements.
Just remember: adding memory is often the cheapest performance boost, SCSI beats IDE, separating the databases, logs, and page file onto different physical drives beats lumping them together, and fault-tolerant RAID is your friend.