Feeling sluggish? Low memory got you down? Low memory can cause your server to all but trickle to a halt and can make your hard drives page excessively, sounding like a DJ at a hip-hop concert. As the system swaps pages between virtual memory on disk and physical memory, or RAM, the drives are continually reading and writing from one to the other, which seriously interferes with the regular activity they’re trying to perform concurrently. You have a multitude of options for preparing for, investigating, and dealing with this issue and for preventing access bottlenecks. In this Daily Drill Down, we’ll show you how to manage virtual memory on your NT server for maximum performance.

What’s paging?
No matter how much memory you’ve put in your NT server, it never seems to be enough. When physical RAM starts running low, Windows NT turns to a paging file, Pagefile.sys. To keep the various processes and applications running, Pagefile.sys gives physical RAM some room to breathe: “pages” of data that aren’t immediately needed are swapped out of RAM to the file on disk and swapped back in on demand.
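To make the idea concrete, here’s a toy sketch in Python of pages being swapped between RAM and a paging file. This is not NT’s actual algorithm; the three-frame limit, the function name, and the least-recently-used eviction policy are all our assumptions for illustration.

```python
from collections import OrderedDict

RAM_FRAMES = 3        # pretend physical RAM holds only three pages

ram = OrderedDict()   # page -> data, ordered by recency of use
pagefile = {}         # pages swapped out to "Pagefile.sys"

def touch(page, data=None):
    """Access a page, swapping in and out as needed."""
    if page in ram:
        ram.move_to_end(page)                    # mark most recently used
    else:
        if page in pagefile:
            data = pagefile.pop(page)            # swap in from disk
        if len(ram) >= RAM_FRAMES:
            victim, vdata = ram.popitem(last=False)  # evict the LRU page
            pagefile[victim] = vdata             # swap out to disk
        ram[page] = data

for p in ["A", "B", "C", "D"]:   # the fourth access forces a swap-out
    touch(p, data=f"contents of {p}")

print(sorted(ram))        # ['B', 'C', 'D']
print(sorted(pagefile))   # ['A']
```

The more pages in flight relative to `RAM_FRAMES`, the more often `touch` hits the `pagefile` dictionary; on a real server, that dictionary is a hard drive, and every hit is a disk seek.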

Obviously, system performance is improved when the system can find data in the file system cache instead of searching for it on the drive. Too much searching can also bog down the processor as it tries to find what it’s looking for. This is one of the reasons the phrase “buy more RAM” has become a technology-era cliché: RAM is your friend. Managing memory can help make your “friend” more productive.

An overlooked tool that provides quick and vital information for assessing memory usage on your local server is the Windows Task Manager in Windows NT ([Ctrl][Alt][Delete] | Task Manager). Taking the amount of physical RAM into consideration and evaluating the MEM Usage counter and the Memory Usage History provides an immediate snapshot of memory activity. As shown in Figure A, comparing the CPU Usage counter and the CPU Usage History to the Memory Usage counters will give you a performance overview in a nutshell—very handy if you must determine whether to start Diskperf immediately to investigate excessive paging issues further.

Figure A
The CPU Usage and MEM Usage counters provide an immediate snapshot of memory activity.

The paging file for Windows NT can be managed via Control Panel | System | Performance tab | Virtual Memory. From here, you can control several settings for the paging file, including size and location. Obviously, you could allow the system to handle it; but for optimization, you can tweak the Virtual Memory Manager (VMM) in order to get the most “bang for your buck.”

Basic tenets of the Windows NT paging file
Windows NT initially sets the paging file size to the amount of physical RAM plus 12 MB. The extra 12 MB allows for a failure, so that the paging file contents can be dumped to a log in an emergency. If you’ve seen the “stop” box and subsequent Blue Screen Of Death, you’ve experienced this problem in action. With anything smaller than this amount (physical RAM + 12 MB), you’ll begin to receive Running Out Of Memory messages.

The Windows NT operating system and some of its applications use about 10 MB of RAM, so subtract this from the physical amount. This gives you plenty of leeway when determining the memory requirements of your server.

Windows NT also requires a minimum 2-MB paging file. If the paging file is small, or if it doesn’t exist at all, a warning message will appear when you boot.

Your paging file should always follow the minimum RAM + 12 rule; in no case should the paging file be smaller than the amount of RAM in your server. If the system has 32 MB of physical RAM, adding 12 MB brings the total paging file size to 44 MB. Obviously, the bigger, the better… but by this I mean investing in more physical RAM, not simply increasing the size of your paging file. If you don’t have enough RAM, you don’t want your drives spending too much time reading and writing to the paging file. That will only slow down your server and could even lead to downtime if you have to reboot the server to clear out I/O requests. And if you have to do it for that reason once, you’ll have to do it again.
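The RAM + 12 rule boils down to one line of arithmetic. A minimal sketch (the function name is ours; the 2-MB floor is the minimum paging file size mentioned above):

```python
def recommended_pagefile_mb(physical_ram_mb: int) -> int:
    """Starting paging file size under the RAM + 12 MB rule.

    The extra 12 MB is the headroom NT wants so the paging file
    contents can be dumped if the system crashes.
    """
    MINIMUM_PAGEFILE_MB = 2          # NT warns at boot below this
    return max(physical_ram_mb + 12, MINIMUM_PAGEFILE_MB)

# The article's example: 32 MB of RAM yields a 44-MB paging file.
print(recommended_pagefile_mb(32))   # 44
```

Treat the result as a floor, not a target; as noted above, the real fix for heavy paging is more physical RAM.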

The default size is sufficient for dumping the contents of memory if necessary. A small paging file limits what can be stored and can exhaust the virtual memory reserved for applications. If you’re short on RAM, more paging occurs, which in turn generates extra activity for your drives and slows response time for the system. For a memory dump, Windows NT needs a paging file on the system root with a minimum size equivalent to physical RAM plus 1 MB; it requires this file size so it can write the debugging information to a file.

As previously stated, the paging file’s minimum and maximum sizes can be specified in the Virtual Memory dialog box, and the file will shrink and expand within your specifications. The paging file cannot be compressed or maintained while the system is up; however, various third-party software packages such as Diskeeper can maintain the paging file, typically at boot time.

Optimizing the paging file
The first thing to remember about keeping your paging file a lean, mean, binary-crunching machine: it’s a good idea to keep the paging file on a different drive from the one holding the Windows NT system files, so that reads and writes to each stay independent of one another. On the other hand, a small paging file on the boot disk gives the system some breathing room for log files, system events, and alerts (to name but a few tasks). On a stripe set, the paging file can be kept completely off the system drive to keep performance at an optimal level. You can also keep multiple paging files; ideally, configure one per drive so the system can spread paging I/O across them.

The larger the paging file, the more disk space it takes up, and the more paging I/O there is to rob precious attention from your processor. This becomes a major problem when you have a RAID array in your server, especially RAID 1. As you may remember from our Daily Drill Down dealing with RAID, RAID 1 mirrors or duplexes the drives in your system, and that duplication includes every write made to the paging file. A server with RAID 1 will do twice as much work with a paging file as a non-RAID server.

Another good idea is to set the paging file size for any selected drive equal in both the Initial Size (MB) box and the Maximum Size (MB) box. Keeping these at the same value prevents the file from expanding on the fly, which fragments the file and costs performance.

Memory monitoring in Performance Monitor
If you experience problems with memory usage—for example, if applications are chewing up your memory—increasing the paging file size can improve performance.

When planning the total paging file size for a particular server, the Performance Monitor counter Process (_Total)\Page File Bytes shows how much of the paging file is being used. This gives you a starting point when deciding whether to add RAM or increase your paging file. Obviously, the server’s present memory configuration helps establish a baseline. From there, expand the baseline measurement by determining server usage during peak, average, and low periods in order to generate a complete picture of your server’s memory needs.
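Once you’ve logged the counter across peak, average, and low periods, building the baseline is simple arithmetic. A sketch with hypothetical sample values (the numbers below stand in for exported Performance Monitor readings, converted to megabytes):

```python
# Hypothetical samples of Process (_Total)\Page File Bytes, logged at
# intervals with Performance Monitor and converted to MB for analysis.
samples_mb = [38, 41, 44, 52, 61, 58, 44, 40]

baseline = {
    "low": min(samples_mb),                       # quiet-period usage
    "peak": max(samples_mb),                      # busiest observed period
    "average": sum(samples_mb) / len(samples_mb), # typical working load
}

print(baseline)   # {'low': 38, 'peak': 61, 'average': 47.25}
```

If the peak regularly crowds the configured paging file size, that’s your cue to add RAM or grow the file before users feel it.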

Excessive paging
If paging activity is excessive on your drives, there’s a good chance physical RAM is lacking. By analyzing the Performance Monitor counter Memory\Page Reads/sec, you can determine the number of disk reads needed to resolve page faults. Dividing the number of page faults by the number of reads tells you how many pages were recovered per read. A read rate that approaches the fault rate points to a lack of physical RAM, as nearly every fault is being resolved from the paging file on disk.
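The pages-recovered-per-read arithmetic looks like this (the function name and sample figures are hypothetical; the two inputs correspond to the Memory\Page Faults/sec and Memory\Page Reads/sec counters):

```python
def pages_per_read(page_faults_per_sec: float,
                   page_reads_per_sec: float) -> float:
    """Pages recovered per disk read, from the two Perfmon counters."""
    if page_reads_per_sec == 0:
        return float("inf")   # every fault satisfied without touching disk
    return page_faults_per_sec / page_reads_per_sec

# A healthy server resolves many faults per read (soft faults, cached pages):
print(pages_per_read(200.0, 10.0))   # 20.0
# A read rate close to the fault rate means almost every fault hits disk:
print(pages_per_read(55.0, 50.0))    # 1.1 -- likely short on physical RAM
```

The lower the ratio, the more often the server had to go to the drive, and the stronger the case for more RAM.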

Another counter to consider in Performance Monitor is LogicalDisk\Disk Reads/sec; again, if the number of reads is high, paging activity is high. Likewise, the counters PhysicalDisk\Avg Disk Read Bytes/sec and LogicalDisk\Avg Disk Read Bytes/sec will show the data transfer rate of reads on the drive. If this is considerably high, and it tracks Memory\Page Reads/sec in magnitude, excessive paging is the culprit.

Remember, the disk counters are inactive by default in Windows NT, since collecting them puts an extra load on the disks. To enable the disk counters, type diskperf -y at a command prompt and restart your server. To turn them back off, type diskperf -n at a command prompt and restart your server again. Other counters to keep your eye on include Memory\Pages/sec, Memory\Page Reads/sec, Memory\Pages Output/sec, and Memory\Pages Input/sec. If these counters display large numbers, you’ll know that the server is frequently going to the hard drive because the data it needed was not available in physical RAM. A large number of page faults will also indicate this.
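Once the counters are enabled and logged, checking them against a baseline is a simple comparison. A sketch with hypothetical thresholds and readings; the counter names are the real Performance Monitor counters above, but the threshold values and function name are ours, so tune them against your own baseline:

```python
# Hypothetical alert thresholds -- calibrate these to your own server.
THRESHOLDS = {
    "Memory\\Pages/sec": 100.0,
    "Memory\\Page Reads/sec": 25.0,
    "Memory\\Pages Output/sec": 50.0,
    "Memory\\Pages Input/sec": 50.0,
}

def flag_excessive_paging(readings: dict) -> list:
    """Return the counters whose sampled values exceed their thresholds."""
    return sorted(name for name, value in readings.items()
                  if value > THRESHOLDS.get(name, float("inf")))

readings = {
    "Memory\\Pages/sec": 240.0,        # well above threshold
    "Memory\\Page Reads/sec": 12.0,    # fine
    "Memory\\Pages Output/sec": 95.0,  # above threshold
    "Memory\\Pages Input/sec": 30.0,   # fine
}
print(flag_excessive_paging(readings))
```

Two counters spiking together, as here, is a stronger signal than any single reading; sustained breaches across the set mean the server is paging hard.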

When memory is in short supply, the number of processes will be cut back to maintain balance in the force. The Process (All_processes)\Working Set counter provides information about which processes are tying up memory. If there isn’t enough memory for all processes, page faults result, and Windows NT will also cut back on the number of processes it executes concurrently; this, in turn, may contribute to page faults.

To keep this balance, the VMM maintains equilibrium between physical memory and virtual memory on the drive; if there isn’t enough to go around, the processor will essentially “hang” while it waits for the hard drive to catch up on I/O reads and writes.

By maintaining baseline records over time, you can stay ahead of the balancing act and be well prepared for future memory needs. Keep a watchful eye out for excessive paging; left unchecked, it’s more trouble than it’s worth.

Ivan Mayes has been hacking around on typewriters and computers since age 15 and learned the ways of war on a Commodore 64. He holds degrees in English and Spanish, and he’s an MCSE. An equal computing opportunist, he is prone to use any computer, regardless of make, model, or operating system.

The authors and editors have taken care in preparation of the content contained herein, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for any damages. Always have a verified backup before making any changes.