At the heart of your Windows NT or Windows 2000 server’s file system is NTFS, short for the NT file system, the default file system for Microsoft Windows NT and 2000. NTFS controls where Windows NT/2000 locates files on the server’s hard drive. It also controls security and access privileges for those files.

To get the most out of your server’s hard drives, you must take NTFS into consideration. I’m going to show you how to wring the last drop of performance out of NTFS. I’ll consider the importance of defragmentation, compression, naming schemes, and folder structure.

Optimizing and improving NTFS
NTFS includes several features that can increase the performance of your system’s hard disks:

  • Cluster sizes—This is the size of units of allocation on a hard disk.
  • Standard defragmentation—This rewrites the contents of noncontiguous data into contiguous sectors on the disk to increase read/write performance.
  • MFT defragmentation—The Master File Table (MFT) contains information about all the files and data stored on disk.
  • Compression—NTFS can compress files and folders stored on disk to an optimal size, maximizing the available storage space.
  • Naming configurations—This represents the naming conventions you use when naming files and folders on your server.
  • Folder structure—This is the number of folders and subfolders and the number of files within those folders.

In the following sections, I’ll show you how to use these features to tweak the performance of your server’s hard disks and improve the overall performance of your network.

Cluster size does matter
Clusters are chunks of disk space used to organize data on a disk. The cluster size is the smallest block of disk space that can hold a file. By default, versions of Windows NT later than version 3.5, including Windows 2000, won’t use cluster sizes larger than 4 KB for volumes smaller than 16 TB. This helps minimize the amount of overhead used to contain each file.

When a file is stored on a disk, it’s written into clusters. If the file being stored exceeds the cluster size allocated to it, the file system uses additional clusters to store the file, consuming clusters until the file is stored. Naturally, files almost never fall neatly into multiples of 4 KB. So when the file system uses additional clusters, the unused portion of the last cluster is lost to overhead.

For example, suppose that you have a 35-KB file on your hard disk. The largest cluster size available is 4 KB. Therefore, the file will use a total of nine clusters. Eight of those clusters will be a full 4 KB in size. The ninth cluster will contain only 3 KB of data, leaving 1 KB completely wasted. If you had to store a thousand 35-KB files, you’d lose 1 MB of drive space.

Microsoft has developed an algorithm to estimate cluster size overhead. On the typical disk partition, the overhead algorithm looks like this:
(cluster size)/2 * (number of files)

So in the previous example, (4 KB)/2 * 1,000, the equation would estimate that you’d lose 2 MB of disk space when you’ve only actually lost 1 MB. The equation is wrong in this case because all of the last clusters are three-quarters full. The equation assumes that the average cluster will be half full.
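The arithmetic above is easy to verify. Here’s a short Python sketch (the function names are mine, purely for illustration) that computes both the actual waste and Microsoft’s estimate for the 35-KB example:

```python
import math

def actual_waste_kb(file_size_kb, cluster_kb, num_files):
    # Each file occupies whole clusters; the unused tail of the
    # last cluster is the per-file overhead.
    clusters = math.ceil(file_size_kb / cluster_kb)
    return (clusters * cluster_kb - file_size_kb) * num_files

def estimated_waste_kb(cluster_kb, num_files):
    # Microsoft's rule of thumb: on average, half of the last
    # cluster of every file is wasted.
    return (cluster_kb / 2) * num_files

print(actual_waste_kb(35, 4, 1000))     # 1000 KB: roughly the 1 MB actually lost
print(estimated_waste_kb(4, 1000))      # 2000.0 KB: the estimate of roughly 2 MB
```

Because every 35-KB file leaves exactly 1 KB unused, the estimate overshoots here; across a realistic mix of file sizes, the half-cluster assumption averages out.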

Using smaller cluster sizes reduces the amount of space needed to store files. If you have an application that doesn’t consume much disk space overall but uses very small data files, you can store its data more efficiently on a drive with smaller clusters. Unfortunately, you can’t change the cluster size of an existing partition; it’s fixed when the volume is formatted. However, you can control partition size, which determines the default cluster size. NTFS default cluster sizes break down by partition size as follows:

  • 512 bytes: up to 512 MB
  • 1 KB: 513 MB to 1 GB
  • 2 KB: over 1 GB, up to 2 GB
  • 4 KB: over 2 GB

So if your application uses small data files and the total of those files won’t exceed 1 GB, you can use your server’s drives more efficiently by placing the data on a partition of 1 GB or less, which gives you 1-KB clusters.
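If you’re not sure what cluster size an existing volume uses, a read-only pass of chkdsk (with no repair switches) reports it. The drive letter below is just an example, and only the relevant line of output is shown:

```
C:\> chkdsk D:
...
      4096 bytes in each allocation unit.
```

The “bytes in each allocation unit” figure is the volume’s cluster size.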

Defragmenting your disks
As applications on the network create, grow, and delete files, the files become scattered across the disk. The culprit is free space: When an application saves a file to the disk, the file system must fit the file into whatever free space is available. NT starts at the beginning of the hard drive and, if no single free region is large enough, breaks the file into pieces to fill the gaps.

This fragments the file across the disk(s). Fragmented files slow down your computer when it accesses them, and they put more wear and tear on your hard drives, because the disk heads must travel farther to read the scattered pieces. The file system then has to gather all the pieces before the application can present the file to the user as one neat document.

The Disk Defragmenter
Microsoft has developed a handy little tool to help you minimize fragmentation: the Disk Defragmenter. The Disk Defragmenter moves data spread across noncontiguous sectors into contiguous sectors on the disk so that less read/write time is needed to access the files. It also consolidates the disk’s free space.

The process of moving these files is called standard defragmentation. Defragmentation rewrites the pieces of each file so that they sit next to one another and consolidates the remaining free space toward the end of the disk.

The Master File Table
The Master File Table (MFT) keeps track of the files on each disk. This file logs every file stored on a given disk, including an entry for the MFT itself. It works like an index of everything on the hard disk, in much the same way that the address book in your cell phone stores the numbers of everyone you might call, and it makes each file easy to locate for defragmentation and for use by applications.

The NTFS file system keeps a section of each disk just for the MFT. This allows the MFT to grow as the contents of a disk change without becoming overly fragmented, since Windows NT doesn’t provide for the defragmentation of the MFT. Windows 2000’s Disk Defragmenter will defragment the MFT only if there’s enough space on the hard drive to locate all of the MFT segments together in one location.

It’s a good idea to schedule regular defragmentation of your systems (once a week, for example) to keep everything in the best possible condition. Running the defragmenter can be time-consuming, and it requires you to stop all disk activity other than the defragmenter itself so that it can function smoothly. Be sure to disable screen savers and similar background tasks before you run it.

Get the most out of your hardware with compression
NTFS can compress files, folders, and entire volumes. This can be quite a space-saver if your servers store a lot of data. The compression is part of the file system, so any program that attempts to open a compressed file or folder should have no problem doing so. If you compress a document you’ve created in Word to conserve hard disk space on your server, any of the programs on any of the workstations on your network will be able to open the document because the file system is handling the compression independently of the applications that use the file.

To compress the document you’re working on, save the document to the server. Open the properties sheet for the document, as shown in Figure A. Click the Advanced button at the bottom of the General page. Then, select the Compress Contents To Save Disk Space check box, as shown in Figure B. Click OK in the Advanced box and on the General properties page to save your changes. The document will be compressed, but it will remain available to the application that created it.

Figure A
Properties for Test.doc


Figure B
Advanced attributes for Test.doc
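If you’d rather work from the command line, the compact utility included with Windows NT and 2000 does the same job as the check box. The paths below are just examples:

```
REM Compress a single document in place.
compact /C "D:\Docs\Test.doc"

REM Compress a folder and all the files and subfolders beneath it.
compact /C /S:"D:\Docs"
```

Running compact with no switches lists the compression state of the files in the current directory.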

When compressing folders and files, keep the following in mind:

  • A compressed file will remain compressed if it is moved to an uncompressed folder on the same NTFS drive.
  • A compressed file will be uncompressed when it is copied to an uncompressed folder on an NTFS drive.
  • A compressed file will be uncompressed if it is placed on a non-NTFS drive.

When a file is moved within a volume, NTFS simply updates the file’s location in the file table, so the file keeps its original attributes. When a file is copied, or moved to a different volume, a new file is created, and it inherits the compression attribute of its new parent folder. Also, compression of this type isn’t compatible with non-NTFS volumes: If a compressed file is moved to a FAT volume, it will be stored uncompressed.

Naming schemes for files and folders
NTFS supports long file and directory names, up to 255 characters in length. This allows you to use detailed filenames and know exactly what each file is used for. It’s a great feature, especially after the 8.3 naming requirements of the DOS days. However, long filenames can decrease the performance of the NTFS file system. To counter this performance degradation, you can standardize the length of all file and directory names stored on your servers.

For example, suppose a hard disk holds 15,000 files, all with names 20 to 35 characters long, and the first 10 characters of each filename are the same. This can cause a bottleneck when listing the volume’s contents, because NTFS maintains a DOS-style 8.3 alias for every long filename to preserve backward compatibility, and generating unique aliases is expensive when many names begin with the same characters.

To avoid this, you could group things by directory and then allow filenames to be anything the owner prefers. This lets you create directories for all the departments—and possibly all the users within those departments—and minimize the performance hit on NTFS by keeping file and folder names within the departments as unique as possible. You also gain some commonality in how files and folders are stored on the server while still allowing users to control their file and folder names.

You can also disable DOS 8.3 name creation by using the FSUTIL Resource Kit utility. Open a command prompt, type fsutil behavior set disable8dot3 1, and press [Enter]. After you restart your server, it will no longer create 8.3 aliases for new files. You should do this only if you don’t have any older DOS or Windows 3.x workstations on your network. Also, make sure that your applications don’t require the 8.3 file format.
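If the Resource Kit isn’t available, the same switch lives in the registry. A .reg file like the following (shown in Windows 2000’s export format; on NT 4 the header would read REGEDIT4) sets the documented NtfsDisable8dot3NameCreation value. As with the utility, restart the server for the change to take effect:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"NtfsDisable8dot3NameCreation"=dword:00000001
```

Setting the value back to 0 re-enables 8.3 name creation for files created afterward.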

Folder structure
Although you’ve probably never given much thought to the way you organize folders on your server, folder structure can have an impact on your server’s speed. If you have many users who use applications that open, close, or modify several data files at the same time, Microsoft recommends against locating all of these files in the same folder. Instead, if possible, split the files into multiple folders. If you can’t do that, Microsoft recommends disabling the 8.3 file-naming scheme.

One final note
These NTFS features and customizations should help you tweak your network to run at an optimal level. Although all these features can improve the performance of systems on your network, it’s still best to use trial and error to determine which of them work best in your environment.

NTFS offers many ways to improve performance and also helps you keep the environment secure. Although no system is perfect, NTFS seems to get better with age. The features that are here today will probably be enhanced in the future.