Building Distributed File System clustering in Windows 2000

An under-the-hood look at combining the Cluster Service with the Distributed File System


Using the term “fault tolerance” to describe the Windows 2000 Cluster Service is a bit of a misnomer; it's more accurate to describe the Cluster Service's activities as “high availability.” And that high availability does not extend to the data stored on the clustered servers themselves.

The Cluster Service works on the premise that the data is always good, so it's up to you to protect the data independently. The standard method is software or hardware RAID for the local disks that contain the operating system, and hardware RAID for the shared external disks. The Cluster Service does not support software RAID on the external storage, nor does it support dynamic volumes there.

Fortunately, Microsoft has included some additional cluster-aware services with Windows 2000, and one of these, the Distributed File System (Dfs), provides a way to better handle and organize data. Plus, using the Cluster Service to cluster standalone Dfs roots can offer some degree of fault tolerance.

Prerequisites
This article assumes a basic familiarity with the Windows 2000 Cluster Service. If you are new to the Cluster Service, take a look at these articles:
"Windows 2000 Cluster Service can reduce maintenance downtime"
"Configuring the Windows 2000 Cluster Service"
"Single-system Win2K clusters can cut headaches, licensing fees"
"Configuring dynamic shares with Win2K Cluster Service"


Stand-alone Dfs roots
Microsoft’s Distributed File System provides a great way to logically reorganize your network shares so that users can easily find the resources they need. In Windows 2000, you can configure your Dfs root as stand-alone (with the configuration stored locally on the server) or domain-based (with the configuration stored in Active Directory). A domain-based Dfs root has two main advantages: fault tolerance (because the configuration is held centrally, more than one server can offer the root) and automatic file replication.

If you have an Active Directory domain, the domain-based Dfs root is clearly the better option. However, many shops that have not yet upgraded to Active Directory want to use Dfs. Combining Dfs with the Cluster Service lets you provide fault tolerance for the all-important Dfs root, so that if the server hosting this service fails, the second server can assume its responsibilities.

The Dfs option in the Cluster Administrator is one of the Advanced options of the File Share properties. Before configuring a Dfs File Share resource with Cluster Administrator, you have some preliminary work to do.

First, make sure a Dfs root is not already configured on the server and that the folder you want to use already exists on one of the external disks. Unlike other File Share resources, a Dfs root requires dependencies, namely a network name and IP address, that must be selected when the Dfs File Share is created.

You should configure your Dfs root on an external disk that is not used for anything else, so you'll use one of your existing Disk Groups and add the resources you need there. Add the IP Address and Network Name (both must be unique) before adding the File Share resource. Make sure the Network Name resource lists the IP Address as a dependency. This is typically done automatically when creating a virtual server.
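
If you prefer to script this preliminary work, the cluster.exe command-line tool installed with the Cluster Service can create the same resources. This is a minimal sketch; the group name, resource names, and address values are hypothetical, and the property names reflect the standard Windows 2000 IP Address and Network Name resource types:

REM Create the IP Address resource in the existing disk group
cluster resource "Dfs IP Address" /create /group:"Disk Group 1" /type:"IP Address"
cluster resource "Dfs IP Address" /priv Address=192.168.1.50 SubnetMask=255.255.255.0 Network="Public"

REM Create the Network Name resource and make it depend on the IP Address
cluster resource "Dfs Network Name" /create /group:"Disk Group 1" /type:"Network Name"
cluster resource "Dfs Network Name" /priv Name=DFSROOT
cluster resource "Dfs Network Name" /adddep:"Dfs IP Address"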

When the File Share creation wizard prompts you for dependencies, add the Network Name. There’s no need to also add the IP address, because this is implicit in the network name's dependency on the IP address. When you get to the dialog box with the Advanced button, simply select the Dfs root option. Then bring this whole cluster group online.
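
The whole group can also be brought online from the command line; the group name here is hypothetical:

REM Bring the group, and all resources in it, online on the current node
cluster group "Disk Group 1" /online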

When the Dfs group comes online, move it to the other server to update that server with the Dfs configuration. Then configure the group’s failover and failback policies; for example, Allow Failback is not enabled by default, so turn it on if you want the group to return to its preferred node after a failover.
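
These steps can be scripted with cluster.exe as well. The /moveto switch and the AutoFailbackType group property shown below (0 prevents failback, 1 allows it) are a hedged sketch with hypothetical group and node names:

REM Move the group to the other node so it picks up the Dfs configuration
cluster group "Disk Group 1" /moveto:NODE2

REM Allow the group to fail back to its preferred owner
cluster group "Disk Group 1" /prop AutoFailbackType=1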

Once the Cluster Service Dfs resources are online, load the Dfs Microsoft Management Console, which will display your Dfs root. You can now use the Dfs MMC to create the Dfs links and replicas. If you need file replication for dynamic data between replicas, this must still be managed independently from Dfs, typically by using a simple copy routine or a specialized synchronization routine.
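
Links can also be created from the command line with DFSCMD, and a scheduled one-way copy can serve as a simple replication routine between link replicas. The virtual server, share, and path names below are hypothetical:

REM Create a Dfs link under the clustered root (note the virtual server name)
dfscmd /map \\DFSROOT\DfsRoot\Reports \\FILESRV1\Reports

REM Simple one-way copy to keep a second replica of the link roughly in sync
xcopy \\FILESRV1\Reports \\FILESRV2\Reports /d /e /c /y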

Administering Dfs with the Cluster Service and DFSCMD
You can't delete the Dfs root with the Dfs MMC, because the Cluster Service will automatically re-create it. Similarly, you should stop the Dfs service only with Cluster Administrator. Attempting to stop it with a Net Stop Dfs command or the Services MMC will not work, because the Cluster Service will automatically restart it.
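
For example, with a hypothetical File Share resource named "Dfs Root Share", the clean way to take the root down looks like this, while net stop dfs is simply undone by the Cluster Service:

REM Has no lasting effect - the Cluster Service restarts the Dfs service
net stop dfs

REM Instead, take the clustered Dfs File Share resource offline
cluster resource "Dfs Root Share" /offline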

To delete a clustered Dfs root, take it offline and then either change the File Share properties to a normal share (via the Parameters and Advanced button) or delete the resource.
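
From the command line, using the same hypothetical resource name, deleting the resource once it is offline amounts to:

REM With the resource already offline, remove it from the cluster configuration
cluster resource "Dfs Root Share" /delete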

If you have already configured your stand-alone Dfs, you can use the DFSCMD utility to migrate the existing root and all replicas to an external disk managed by the Cluster Service. To do this, you use the /batch switch, as follows:
dfscmd /view \\<original_DfsServer>\<original_DfsRootname> /batch >A:\Dfsbatch.bat

Edit the resulting file to replace the dfscmd /map references of the original Dfs root location with the new Dfs root location on the external disk.
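
As an illustration with hypothetical server and share names, a generated line such as the first one below would be edited to reference the clustered root, as in the second line:

REM Line produced by dfscmd /view ... /batch against the original root
dfscmd /map \\OLDSERVER\OldDfsRoot\Reports \\FILESRV1\Reports

REM Edited line pointing at the virtual server that hosts the clustered root
dfscmd /map \\DFSROOT\DfsRoot\Reports \\FILESRV1\Reports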

When editing the batch file with the new server name, make sure you use the virtual server name associated with the Dfs File Share, and not the physical server name. Once you have created the Dfs File Share with Cluster Administrator and brought it online, run the edited batch file on the cluster server that currently owns this resource. Load the Dfs MMC to check that it now displays your Dfs configuration.
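
If you are unsure which node currently owns the resource, the group's status shows the owning node; the group name is again hypothetical:

REM Check which node owns the group, then run the edited batch file on that node
cluster group "Disk Group 1" /status
A:\Dfsbatch.bat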

Using the Cluster Service in conjunction with Dfs gives you a degree of fault tolerance. We have covered the basics of this approach so you can see how stand-alone Dfs roots can be made fault tolerant outside Active Directory.
