
Ultra Monkey comes to the rescue with highly available services

For enterprises that need solid clustering that costs next to nothing, Ultra Monkey is just the ticket. In this Daily Drill Down, Scott Lowe shows you how to provide inexpensive high availability with Ultra Monkey.

Enterprises of every size—small, large, and everything in between—can benefit from using Linux servers for clustering, high-availability services, and load balancing. Many open source solutions are available for these tasks, and one of the better options is Ultra Monkey. The primary purpose of Ultra Monkey is to provide highly available, scalable Web farms, although it can be used to provide other services as well.

In this Daily Drill Down, I will provide an overview of the Ultra Monkey components and show you how to install and run Ultra Monkey. Then I will focus on the high-availability component of Ultra Monkey to give you a solid introduction to its strongest asset.

This monkey’s just a component
Ultra Monkey, a component of the VAnessa suite of software, provides local load balancing and scaling. The other VAnessa component, named Super Sparrow, provides geographic load balancing services.

Ultra Monkey components
Ultra Monkey uses a number of components, each performing specific services. (See Table A.)

Table A
Component Description
High availability High availability is achieved by the use of an open source component called Heartbeat, which is set up on the clustered servers. When Heartbeat detects that a node has failed, the backup node assumes the functions of the failed node and takes over receiving network traffic.
Service level monitoring Service level monitoring is accomplished via ldirectord, which is also the component that Heartbeat uses to monitor the health of certain services.
Layer 4 switching Using the Linux Virtual Server, Ultra Monkey can provide layer 4 switching to a highly available cluster, which further allows for high availability by enabling traffic to be directed to virtual network interfaces that can be moved from server to server.
The combination of all of the above services creates the highly available environment that Ultra Monkey provides.

Slow to release
The most recent version of Ultra Monkey is 1.0.2, released on April 25, 2001.

Getting the software
As of this writing, the current version of Ultra Monkey can be downloaded as RPM packages, as source RPM packages for more customized installations, or as source files that can be built manually. While the Ultra Monkey team has done full testing of these packages only on Red Hat 6.2 and VA Enhanced 6.2.4, the packages should work on other distributions with some tweaking.

To install Ultra Monkey for this article, I am using Red Hat Linux 7.1 and downloading the RPM files (shown in Table B) from the Ultra Monkey Web site.

Table B
Package Description
heartbeat High-availability services
ldirectord Service level monitoring tool for Heartbeat
stonith STONITH stands for “shoot the other node in the head” and is generally used to prevent a failed node from writing to a shared disk, which could be a major problem if both systems try to write at the same time.
ipvsadm-1.14-1.i386.rpm Virtual server package
perl-HTML-Parser-3.20-1.i386.rpm Perl library for Ultra Monkey
perl-HTML-Tagset-3.03-1.i386.rpm Perl library for Ultra Monkey
perl-MIME-Base64-2.12-1.i386.rpm Perl library for Ultra Monkey
perl-Net-SSLeay-1.05-5.i386.rpm Perl library for Ultra Monkey (SSL)
perl-URI-1.11-1.i386.rpm Perl library for Ultra Monkey (URI)
perl-libwww-perl-5.48-1.i386.rpm Perl library for Ultra Monkey (www)
ultramonkey-doc-1.0.2-1.noarch.rpm Ultra Monkey documentation package

I saved these files in a subdirectory in my user directory named um. The full path to the files is /home/slowe/um.

Preinstallation steps
On Linux systems that are running Red Hat Piranha high-availability software, the Ultra Monkey components can cause some conflicts. Therefore, Ultra Monkey recommends removing any and all Piranha packages from the system before installation. If the Piranha packages were installed via RPM, the following commands will remove all Piranha software:
rpm -q piranha-docs >& /dev/null  && rpm -ev piranha-docs
rpm -q piranha-gui >& /dev/null  && rpm -ev piranha-gui
rpm -q piranha >& /dev/null && rpm -ev piranha

The installation
Because it is completely done via rpm, the actual installation of Ultra Monkey components is very straightforward. To install, run the following two commands (note that the backslash [\] indicates a line continuation):
cd /home/slowe/um
rpm -Uhv \
  ipvsadm-1.14-1.i386.rpm \
  perl-HTML-Parser-3.20-1.i386.rpm \
  perl-HTML-Tagset-3.03-1.i386.rpm \
  perl-MIME-Base64-2.12-1.i386.rpm \
  perl-Net-SSLeay-1.05-5.i386.rpm \
  perl-URI-1.11-1.i386.rpm \
  perl-libwww-perl-5.48-1.i386.rpm

At this point in the installation, you will be notified of any dependencies that are required and not present. On my test machine, I did not have perl-libnet installed and had to rectify that problem before I was able to install the Ultra Monkey components. Because my Linux servers are all set up identically, I will repeat this same procedure on my second Red Hat Linux server as well.

Configuration options
Before I get into the various configuration options, I need to explain my test lab setup. I have two servers, named PEAR and LIME, each running Red Hat Linux 7.1 with a single network interface named eth0 and its own static IP address. Each server has the other’s name and address in /etc/hosts, which is critical for Ultra Monkey to work.
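Assuming both servers sit on the same subnet, the /etc/hosts entries on each machine would look something like this (the addresses shown are placeholders, not the lab’s actual addresses):

```
# /etc/hosts on both PEAR and LIME (addresses are placeholders)
127.0.0.1     localhost
192.168.1.10  pear
192.168.1.20  lime
```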

For more information on the /etc/hosts file, take a look at Jack Wallen, Jr.’s article “Keeping your sanity with /etc/hosts.”

Please note that I will be testing Ultra Monkey using the Apache Web server, which I have installed in /usr/local/apache. The service is running on both servers, although Ultra Monkey can be configured to automatically start and stop specific services depending on failover conditions. On PEAR, the index.html file simply reads “PEAR,” while on LIME, index.html reads “LIME”. I intentionally made this very simple so that I could more easily test the failover functionality and then move the “real” HTML in when I have finished.

High availability
Now that Ultra Monkey is installed, it needs to be configured, depending on what high-availability options are desired. High availability, load balancing, and high capacity—or any combination thereof—can be set up using the Ultra Monkey components.

In the first example, I explain how to configure Ultra Monkey for high availability, which will provide failover services using the Heartbeat component.

Heartbeat uses three configuration files, samples of which were installed with default configurations during the Ultra Monkey installation. These files are named ha.cf, haresources, and authkeys. The sample copies of these files can be found in /usr/doc/heartbeat-0.4.9/. Ultimately, these files need to reside in /etc/ha.d.

Heartbeat uses a virtual IP address in order to operate. While each machine also has a real, physical IP address, the Heartbeat package is designed so that, in the event of a detected failure of the primary node, an interface is immediately brought up on the second node with the same virtual IP address, thus providing fault tolerance.

For my test configuration, I will only enable Heartbeat over the Ethernet link, although the Ultra Monkey documentation recommends using a second method as well in order to reduce the chances of a “false failover” in the event of an intermittent communications media problem.

My test /etc/ha.d/ha.cf file defines the heartbeat medium (the Ethernet interface), the timing intervals, and the names of the two cluster nodes.
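A minimal ha.cf for a two-node setup like this one looks something like the following; the timing values are illustrative assumptions, not tuned recommendations:

```
# /etc/ha.d/ha.cf -- minimal two-node Heartbeat configuration (sketch)
keepalive 2      # seconds between heartbeats
deadtime 10      # declare a node dead after 10 seconds of silence
udpport 694      # default Heartbeat UDP port
bcast eth0       # send heartbeats over the Ethernet interface
node pear        # node names must match the output of uname -n
node lime
```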

The next file is /etc/ha.d/haresources, which defines the virtual IP address that will be used. The contents of this file must be identical on both machines. Take note that the address in this file is neither PEAR’s nor LIME’s physical address; rather, it is the virtual IP address that the cluster will answer on. Once the Heartbeat service is started, an additional entry for this address appears in the network interfaces table.

Only one node (the active node) at a time will respond to this address.
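The haresources file itself is a single line naming the preferred node followed by the virtual IP address; the address below is a placeholder, not the lab’s actual virtual address:

```
# /etc/ha.d/haresources -- identical on both nodes (sketch)
# <preferred node> <virtual IP address>
pear 192.168.1.100
```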

The third file that is required is /etc/ha.d/authkeys, which defines how Heartbeat authenticates its messages to the other machine. Although the software also supports MD5 hashing, for this example I chose the simplest method, cyclic redundancy check (CRC). It is extremely important that this file have permissions 600—if it doesn’t, the Heartbeat service will not start successfully. My /etc/ha.d/authkeys file looks like:
auth 1
1 crc
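Since Heartbeat refuses to start when authkeys is readable by anyone but its owner, it is worth setting and verifying the permissions explicitly. A sketch along these lines works (the path is the one used in this article; adjust it if your ha.d directory lives elsewhere):

```shell
AUTHKEYS=${AUTHKEYS:-/etc/ha.d/authkeys}   # path used in this article

if [ -f "$AUTHKEYS" ]; then
  # Heartbeat requires authkeys to be mode 600 (owner read/write only).
  chmod 600 "$AUTHKEYS"
  stat -c '%a' "$AUTHKEYS"   # prints the octal mode; should be 600
else
  echo "authkeys not found at $AUTHKEYS"
fi
```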

Changes need to be made to the TCP/IP settings on the highly available machines in order for packets to be properly forwarded. This is a copy of the /etc/sysctl.conf file on both of the systems (PEAR and LIME) in my test lab.
# Enables packet forwarding
net.ipv4.ip_forward = 1
# Enables source route verification
net.ipv4.conf.all.rp_filter = 1
# Disables the magic-sysrq key
kernel.sysrq = 0
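Settings in /etc/sysctl.conf are normally applied at boot. To load them immediately and confirm the forwarding flag, something like the following works; sysctl -p needs root, so the read-back here goes through /proc instead:

```shell
# Apply the settings in /etc/sysctl.conf without rebooting (run as root):
#   sysctl -p
# The kernel's live value can be read back through /proc;
# a value of 1 means packet forwarding is enabled.
cat /proc/sys/net/ipv4/ip_forward
```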

A quick recap
To summarize what I have done so far:
  • I have installed the Ultra Monkey components on PEAR and LIME.
  • I have set up the /etc/ha.d/ha.cf, /etc/ha.d/haresources, and /etc/ha.d/authkeys files identically on PEAR and LIME.
  • I have modified /etc/sysctl.conf on both systems to enable IP forwarding.
  • I have tested my Apache configuration separately on both servers to make sure that each does indeed respond with just its server name. I did this testing beforehand so that I would not have to worry about it when I enabled Heartbeat.

Starting the service
To start the service, I used the following command on both machines:
/etc/rc.d/init.d/heartbeat start

If I am greeted with OK, it means that Ultra Monkey did not find any problems so far. Of course, I still don’t know if everything is working, since it takes a few seconds for the services to fully start and begin accepting requests.

Remember the addresses
Keep in mind that PEAR’s physical address and the cluster’s virtual address are two different things; the virtual address is the one that moves between nodes.

Running /sbin/ifconfig -a on PEAR shows the physical eth0 interface plus an additional alias interface carrying the virtual address.

With the configuration I have in place, all Heartbeat messages are written to /var/log/messages. To see if everything started properly, I can tail this file and watch the startup messages on PEAR.

Because of the earlier output in my Web browser (the index.html page reading PEAR), I know that PEAR is the currently active node in this arrangement. To simulate a node failure on PEAR, I will stop the Heartbeat service by using this command:
/etc/rc.d/init.d/heartbeat stop

Since there will be no service running to respond to heartbeat messages from LIME, LIME will assume that PEAR has died and will begin answering requests sent to the virtual address.

Now, when I browse to the virtual address, I get a response from LIME, which means that it has picked up the service. The page I see in my browser is LIME’s index.html page and reads LIME.
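A quick way to watch the failover from a third machine is to poll the virtual address in a loop. The helper below assumes curl is installed; it prints the page body, or DOWN when the request fails (192.168.1.100 is a placeholder for the virtual address, not the lab’s actual one):

```shell
# probe: fetch a URL with a 2-second timeout and print the body,
# or print DOWN if the request fails.
probe() {
  curl -s -m 2 "$1" || echo "DOWN"
}

# Polling the virtual address once a second shows the failover happen:
# the page body flips from PEAR to LIME after a few DOWN responses.
#   while true; do probe http://192.168.1.100/; sleep 1; done
```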

Viewing the message log on LIME proves that it did indeed detect PEAR’s failure and assumed service for it: the log shows Heartbeat declaring PEAR dead and acquiring its resources. Once PEAR comes back into operation, LIME will relinquish control back to PEAR.

In this Daily Drill Down, I discussed only one of the components available in Ultra Monkey: Heartbeat. Using this one component alone, I was able to provide highly available services, preventing downtime and protecting revenue—and the only cost was a few hours of my time to set it up.

In the next installment of this series on Ultra Monkey, I will explain how to provide load-balanced services with this open source package.
