By Mark Strawmeyer
Clustering is a technology solution that combines hardware and software to provide high availability and scalability to infrastructure components such as Web, e-mail, and database servers. In this article, we'll examine the types of clusters and then focus on establishing a Microsoft-based clustering solution. We'll also cover the hardware components, examine how a cluster works, and show you how to install SQL Server in a cluster solution.
Types of clusters
There are two types of clusters: load balancing and failover. Load-balancing clusters distribute the server load across clustered resources, typically by spreading network traffic across multiple front-end servers. This keeps the load evenly spread across all available servers and promotes performance and scalability. This type of clustering is typically used for Web and COM+ applications, and it supports two or more nodes in the cluster.
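The distribution idea can be sketched as a simple round-robin scheduler. This is only an illustrative model (the server names are hypothetical); real load balancing, such as Windows Network Load Balancing, operates at the network layer rather than in application code:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy model: hand each incoming request to the next server in rotation."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_server(self):
        # Each call advances the rotation, so load spreads evenly.
        return next(self._pool)

balancer = RoundRobinBalancer(["web01", "web02", "web03"])
assignments = [balancer.next_server() for _ in range(6)]
# Six requests are split evenly: each server handles exactly two.
```

Adding a node to the pool immediately reduces the share of traffic each existing node handles, which is where the scalability benefit comes from.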
The purpose of a failover cluster is to provide high availability in the aftermath of a hardware or software failure, whether planned or otherwise. The cluster monitors resources to determine when a failover state has occurred. The cluster then takes action to recover from the failure by moving the clustered resource(s) to another server in the cluster.
A fault-tolerant failover cluster provides real-time failover for hardware and software, including the current application state. It requires more hardware and specialized software that understands the application state. A high-availability cluster also fails over hardware and software, but to a lesser degree: the current application state is typically lost and must be recovered.
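The distinction can be illustrated with a toy model (the node names and session data are hypothetical). In a high-availability cluster, the passive node takes ownership of the clustered resources, but in-memory application state does not move with them:

```python
class Node:
    """Toy cluster node: holds in-memory state that is NOT replicated."""

    def __init__(self, name):
        self.name = name
        self.sessions = {}  # lives only in this node's memory

def fail_over(active, passive):
    # High-availability failover: the passive node takes over the
    # clustered resources, but the former active node's in-memory
    # state is not carried across.
    return passive

active, passive = Node("node1"), Node("node2")
active.sessions["user42"] = {"cart": ["book"]}

survivor = fail_over(active, passive)
# survivor is node2, and survivor.sessions is empty: clients must
# re-establish their state after the failover.
```

A true fault-tolerant cluster would replicate `sessions` to the passive node in real time, which is exactly the extra hardware and state-aware software the paragraph above describes.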
Microsoft's Cluster Service (MSCS) is built as a high-availability cluster rather than a true fault-tolerant cluster. This means the cluster solution will require less hardware and will be operational for more types of applications than a specialized fault-tolerant solution. This also means the cluster will be able to recover from a hardware or software failure, but will not likely maintain the exact state at the time of the failover. The MSCS high-availability solution supports a total of two servers (nodes) per cluster.
Basic hardware and software components
Clustering involves independent server nodes working together through a common set of hardware. Each server in the cluster has its own instance of an operating system and its own connection to the network. The servers are connected through an external set of physical disks and a private network connection between the servers. The shared disks are connected to each node through a drive controller separate from the controller that hosts the OS drives. An external SCSI device or storage area network (SAN) is commonly used.
The MSCS solution operates as an active/passive model. This means that at any point in time, only one of the nodes in the cluster is active. The active server owns all of the clustered resources. The active node continually writes data to a location on the shared drives. This is known as the quorum drive, and it is used to share state information as the cluster fails over from one node to the next. A signal known as a heartbeat is periodically sent across the private network connection between the servers. The passive node waits for the heartbeat; when it stops, indicating the active node is no longer available, the passive node takes ownership of the cluster resources and reads the state information from the quorum drive.
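The passive node's heartbeat check boils down to a timeout test: if the active node has been silent for longer than some window, initiate failover. A minimal sketch of that logic (the interval values are illustrative, not MSCS defaults):

```python
import time

FAILOVER_TIMEOUT = 3.0  # seconds of heartbeat silence tolerated (illustrative)

def should_fail_over(last_heartbeat, now):
    """Passive node's check: has the active node's heartbeat gone silent
    for longer than the allowed window?"""
    return now - last_heartbeat > FAILOVER_TIMEOUT

now = time.time()
recently_alive = should_fail_over(now - 1.0, now)  # heartbeat is fresh
gone_silent = should_fail_over(now - 4.0, now)     # window exceeded
```

When `should_fail_over` returns true, the passive node would take ownership of the shared disks and read the state information from the quorum drive, as described above.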
Install SQL Server in a cluster
In order to install SQL Server in a cluster, you must first set up an operational cluster as follows:
- Install a Windows Server operating system that supports MSCS on each server.
- Establish a "public" network connection for each server. The public connection is the interface on which requests are received.
- Establish a private network connection that is only between the two nodes in the cluster. This connection is used for the heartbeat to verify the status of resource(s).
- Create a user account in the domain that will serve as the service account under which the cluster will operate. Add the account to the Administrators group on each server.
- Turn off both servers. Connect the shared disk drive and power on the disk resources. Turn on the first node of the cluster. Create and initialize logical drive space for the quorum drive and any other logical partitions desired.
- Install and configure the Microsoft Cluster Service through Control Panel | Add/Remove Programs | Windows Components. This will involve assigning a virtual name to the cluster, an IP address, and the shared disk resource.
- Turn on the other node in the cluster. After the second node is online, return to the original node and use Cluster Administrator to add the second node. Installing the second node is significantly easier because it is established from the configuration already made on the original node.
Once the cluster is established, install SQL Server into the cluster as follows:
- Insert the SQL Server Enterprise Edition CD to begin the installation. If the CD does not auto-play, launch the setup through setup.exe.
- On the Computer Name screen, select Virtual Server and enter a virtual name by which you want the clustered instance to be known.
- Establish a service account under which to run SQL Server. Grant it Administrator rights on each node in the cluster.
- Throughout the remainder of the setup, you'll assign an IP address, service account, and shared disk resources to the SQL Server instance.
The setup program will install a copy of SQL Server on each of the nodes in the cluster and will establish all of the virtual settings in the cluster. The SQL Server services are then controlled from within Cluster Administrator.