As you probably know, Exchange Server is an extremely resource-hungry application. The resource that Exchange is probably most dependent on is the server's hard disk. It's easy to think of hard disk resources in terms of capacity, but when it comes to Exchange Server it is equally important to consider the disk's speed. If the server's hard disks do not deliver sufficient throughput, then the server will likely have trouble keeping pace with users' demands.
Before I get into a discussion of how you can find out whether your hard disks are performing optimally, though, I need to quickly address file placement. There is a lot of inconsistent information, even among Microsoft publications, regarding the placement of files within Exchange Server. However, Table A displays a breakdown of how I think it makes the most sense to arrange the files on your Exchange Server in order to achieve optimal performance and fault tolerance.
Obviously, the Windows and Exchange system files almost always go onto the C: volume of any system, but you might be curious as to why I have chosen to place files in the other locations listed above. I have dedicated entire volumes to the pagefile and to the SMTP / MTA queues because both are fairly disk-intensive resources, and it is best to place them in a location in which they can have free rein of the volume without depriving other system components of disk resources.
Likewise, I am recommending a dedicated volume for each storage group's log files. Since all of the databases within a storage group share a common set of log files, the databases within a storage group can comfortably share a volume. However, under no circumstances should the transaction logs ever be placed onto the same volume as the databases. If a disaster ever requires you to restore a database, the transaction logs are used to repopulate the database with any information that has accumulated since the last backup. If the logs and databases shared a volume and that volume failed, you could lose both at once.
You might have noticed that although I have talked about file placement, I have not discussed the underlying hardware in terms of what types of disks or disk arrays should be used. The reason for this is because my purpose in writing this article is to help you figure out if your current hardware is performing at the level that Exchange really needs it to be, rather than giving you a bunch of theoretical information on the recommended hardware requirements for a server.
Sure, I could give you some recommendations, but the problem is that the recommended hardware varies depending on the server's anticipated workload. For example, local attached storage typically works fine for smaller organizations, but large companies often find themselves having to attach Exchange Server to a Storage Area Network.
What I can tell you is that most administrators agree that the system files (volume C: in the chart above) and the pagefile (volume D:) should be placed on a mirrored array. Hardware recommendations vary widely for the other volumes, though. If you are trying to get the highest possible level of performance, then Microsoft recommends placing volumes E:, F:, and G: on RAID 0+1 arrays within a SAN.
Testing your own server
Now that I have discussed the recommended file placements for an Exchange server, I want to talk about how you can determine whether or not your server's hard disks are performing at the necessary level, or if it is time to invest in faster disk hardware.
When it comes to testing your server, I recommend testing volumes E:, F:, and G: (according to the chart above). You can test volumes C: and D: if you want, but it isn't as important to provide top-notch performance to these volumes as it is to make sure that the volumes that store your queues, databases, and transaction logs are performing well.
The best way to test a volume's performance is to use the Performance Monitor to measure the values of a few key counters. The first counter that I recommend looking at is the Current Disk Queue Length counter in the PhysicalDisk performance object. I have read several books that recommend that the current disk queue length remain below 2. However, Exchange Servers almost always use RAID arrays, and you have to take that into account. Therefore, I recommend measuring the disk queue length and then dividing the reported value by the number of disks in the array. For example, suppose that your G: drive is a RAID array consisting of three hard disks. If the average disk queue length were reported to be about 4, this would normally be a red flag, but you must remember that this disk has three spindles (because there are three drives in the array). Therefore, if you divide the reported result by three, you get a much more acceptable value that equals less than two.
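The per-spindle arithmetic is easy to sanity check in a few lines. This is just a sketch of the division described above using the article's example numbers; Performance Monitor itself is where the real readings come from:

```python
def queue_per_spindle(reported_queue_length, drives_in_array):
    """Normalize PerfMon's Current Disk Queue Length by spindle count,
    i.e., by the number of physical drives in the RAID array."""
    return reported_queue_length / drives_in_array

# The article's example: a 3-disk array reporting a queue length of 4.
per_spindle = queue_per_spindle(4, 3)
print(per_spindle)        # about 1.33
print(per_spindle < 2)    # True: within the rule-of-thumb ceiling of 2
```

A stand-alone disk is simply the single-drive case, where the division changes nothing.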
The next thing that I recommend doing is to take a look at the number of disk transfers per second and check whether your system's maximum throughput is being exceeded. That may sound odd, since it is obviously impossible to exceed a drive's physical limitations. However, the drive's physical throughput limit and the maximum throughput you should plan for with Exchange are two different things.
What you can do is look at the average number of disk transfers per second that are actually occurring on each disk. You can then calculate the system's maximum throughput. When you compare the calculated maximum throughput to the actual recorded throughput, you should find that the recorded value is lower than the calculated value. If the recorded value is equal to or higher than the calculated value, then it's time to look at some other options. I will talk about this in more detail later on.
To determine the server's actual number of disk transfers per second, you will need to use the Performance Monitor to look at the Disk Transfers/sec counter for each drive. Watch this counter for a while (preferably during peak operating hours) and pay attention to the average and the maximum values. Once you have a good idea of the average and maximum number of disk I/O operations per second for each drive, it is time to calculate the maximum throughput for the drive so that you can see how your recorded values compare.
Calculating the maximum throughput for a drive isn't a simple process because there are a lot of variables that you will have to take into account. I will make this as easy on you as I possibly can though.
The first thing that you need to know is the maximum number of I/O operations per second that the physical hard disk is capable of. If you are measuring a disk array, then treat the array as a single disk for now and use the I/O capacity of a single drive for this calculation. If you don't know your drive's I/O capacity, then you will have to guess. Most hard disks are capable of somewhere between 130 and 180 I/Os per second.
If you have older or less expensive drives, then use 130 as the I/O value for this calculation. If, on the other hand, you have brand new, high-dollar drives, then 180 might be a more accurate value. If you really aren't sure about the age or quality of your drives, then I recommend being conservative and using a value of 130 I/Os per second.
Now that you have an approximate value for the number of I/Os per second that the hard disk can physically handle, you need to figure out how many I/Os per second it can realistically accommodate. As you may already know, Microsoft recommends that hard disks be utilized at no more than 80% of their total throughput capacity (this is to reserve resources for spikes in activity). For example, a drive that's physically capable of 130 I/Os per second running at 80% capacity could be expected to deliver 104 I/Os per second (130 x .8 = 104). Likewise, a drive with a physical limitation of 180 I/Os per second could be expected to deliver 144 I/Os per second while running at 80% capacity (180 x .8 = 144). This number is the drive's maximum throughput (as defined by Microsoft).
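The 80% rule is simple enough to capture as a one-line helper. This is only the arithmetic from the paragraph above, nothing Exchange-specific:

```python
def usable_ios_per_second(physical_limit, headroom=0.8):
    # Microsoft's guidance: plan around no more than 80% of a disk's
    # physical I/O capacity, leaving the remainder for activity spikes.
    return physical_limit * headroom

print(usable_ios_per_second(130))   # 104.0
print(usable_ios_per_second(180))   # 144.0
```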
Hopefully, the number that you measured earlier is lower than the number that you have calculated. If not, then there are a couple of things that could be going on. One possibility is that the drive is working at above 80% of its total capacity. To find out how hard the drive is really working, look at the % Disk Time counter in the Performance Monitor's PhysicalDisk performance object. This counter corresponds directly to the percentage of the disk's total throughput that is being used, so you are looking for an average value of less than 80.
Another thing that can cause the formula that I gave you earlier to produce an unexpected result is the use of RAID arrays. RAID arrays do strange things to the formula. Multiplying the disk's physical throughput limitation by .8 to get the maximum throughput as defined by Microsoft only works if no RAID array is in use or if the drive is a RAID 0 array.
In case you are wondering why RAID 0 works the same way as a system that doesn't use RAID in this formula, it's because in both cases each logical read or write costs exactly one physical I/O per drive. If you do have a RAID 0 array, though, then remember that these calculations were for one drive out of the entire array, so you will need to multiply the result by the number of drives in the array to get the system's true maximum throughput. These ratios change for other types of RAID arrays because of fault-tolerance overhead.
In Exchange Server, there is usually somewhere between a 2:1 and a 3:1 ratio of reads to writes. When I am performing calculations based on I/O operations for an Exchange Server, I like to use the 3:1 ratio because in real-world operations I typically see ratios closer to 3:1 than to 2:1. In our first calculation, the ratio didn't even matter, because one I/O was used for each read operation and one I/O was used for each write operation. When you start using fault tolerance, though, the ratio makes a big difference.
Take RAID 5, for example. There is still one I/O (per disk) for each read operation, but there are four I/O operations for each write operation (two reads and two writes: the old data and parity must be read, and the new data and parity written). Therefore, if an Exchange database had a ratio of 3 reads to every 1 write, then you would have to remember that the 1 write represents 4 I/O operations.
There is a pretty complicated formula that you can use to determine the disk throughput based on the database usage ratio and the number of I/O operations per read and write. However, earlier I promised to make things as easy on you as I could. Therefore, I will do most of the math for you and just tell you that if you assume a 3:1 ratio, then in a RAID 5 array, you can calculate a drive's total throughput by multiplying the drive's physical throughput limitation by .45. For example, if a drive has a total throughput capacity of 130 I/Os per second, then multiplying 130 by .45 gives you 58.5, which we can round to 59 I/Os per second maximum throughput.
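If you are curious where the .45 comes from, the derivation can be sketched in a few lines. The function below is my own generalization of the article's arithmetic (it is not from any Microsoft documentation), assuming a 3:1 read/write ratio and four physical I/Os per RAID 5 write:

```python
def raid_multiplier(reads, writes, ios_per_write, headroom=0.8):
    """Fraction of a drive's raw I/O rate that survives as logical
    throughput, given a read:write ratio and the physical I/O cost
    of a single write at the RAID level in question."""
    logical_ops = reads + writes
    physical_ios = reads + writes * ios_per_write
    return headroom * logical_ops / physical_ios

# RAID 5 at a 3:1 ratio: 3 reads cost 3 I/Os, 1 write costs 4,
# so 7 physical I/Os accomplish 4 logical operations.
m = raid_multiplier(reads=3, writes=1, ios_per_write=4)
print(round(m, 3))       # 0.457 -- the article rounds this down to .45
print(round(130 * m))    # 59 I/Os per second for a 130 I/O drive
```

Setting `ios_per_write=1` reproduces the plain .8 multiplier for a stand-alone drive or RAID 0.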
Right now you might be a little confused. I just calculated a 59 I/O per second throughput for a drive in a RAID 5 array, but a 104 I/O per second throughput for a stand-alone drive. Aren't arrays supposed to be faster than stand-alone drives? The answer is yes. Remember that our calculations were based on an individual drive within the array. To calculate the throughput for the array as a whole, multiply your result by the number of drives in the array. For example, if the drive that we just measured was part of a 3-disk RAID 5 array, then the total maximum throughput for the array would be 177 I/O operations per second (59 x 3 = 177).
Unfortunately, this formula only works for RAID 5 arrays. The formula has to be changed again for RAID 0+1 and RAID 10 arrays. In these arrays, each logical write requires two physical write operations (one to each side of the mirror), while reads still cost one I/O apiece. As such, if you wanted to stick to a 3:1 read/write ratio, then you would use a multiplier value of .64. For example, at a ratio of 3:1, a drive with a physical capacity of 130 I/Os per second, as part of a RAID 0+1 array, would have a maximum throughput of 83 I/Os per second. Again, though, the maximum throughput for the array as a whole would be the result of the calculation multiplied by the number of drives in the array.
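The same style of arithmetic verifies the .64 figure, and it also shows the final array-wide step. This is only a sketch using the numbers from the text; the 4-drive array at the end is a hypothetical example:

```python
def array_throughput(per_drive_limit, multiplier, drives):
    # Whole-array maximum throughput: the per-drive result
    # multiplied by the number of drives in the array.
    return per_drive_limit * multiplier * drives

# RAID 0+1 at a 3:1 ratio: 3 reads cost 3 I/Os, 1 write costs 2
# (one to each side of the mirror), so the multiplier is
# 0.8 * (3 + 1) / (3 + 2) = 0.64.
multiplier = 0.8 * (3 + 1) / (3 + 2)
print(round(130 * multiplier))                      # 83 per drive
print(round(array_throughput(130, multiplier, 4)))  # 333 for a 4-drive array
```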
Hopefully you have found that the number of I/O operations per second that your server is handling is less than the disk's maximum throughput. If you find that the disk's maximum throughput is being exceeded, start out by checking for errors in your calculations. If things are still amiss, then make sure that the drive is dedicated to Exchange. Sharing a drive with other applications greatly reduces the number of I/O operations per second that are available to Exchange. If you are still coming up short, you may have to invest in faster hardware or add additional disks to your existing array.