Software RAID for Ubuntu LTS: Better to be safe than sorry

I recently experienced a disk failure on an HP DL320 G3 server. The cheap, high-capacity SATA disks were not in a RAID 1 mirror because the server's FakeRAID did not support the flavour of Linux in use. Luckily, important data and configuration files were safely backed up, but it was still rather annoying to have to rebuild the box from bare metal. The faulty disk was replaced under warranty and I was good to go.

I've often heard that once a disk has gone bad, it's not unusual for other disks in the same enclosure to follow. I think this applies most directly to disks in a RAID configuration, as they will have experienced the same IO load, but even outside of a RAID the disks experience the same environmental conditions. Not wanting to risk yet another rebuild, I decided to look at using software RAID.

After Googling for comparisons between software and hardware RAID, I was still unsure whether software RAID would degrade performance significantly, so I ran my own tests. I built a software RAID 1 array, installed the Linux OS, and tested IO using bonnie++ on identical servers, one without RAID and one with software RAID. I was expecting to see a slight decrease in IO performance but was surprised to see quite a large jump. The tests were by no means scientific, but they gave me enough confidence in software RAID's performance to continue.
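
For anyone wanting to run a similar comparison, a typical bonnie++ invocation looks something like the one below; the target directory, file size, and user are placeholders to adapt to your own machine. The -s value (in MB) should be roughly twice the machine's RAM so the OS cache doesn't mask the disks' real performance:

# bonnie++ -d /var/tmp -s 4096 -u nobody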

Getting set up

While the prospect of configuring software RAID can seem a little daunting to the first-timer, it's actually a very simple process. In this example, I'm going to install Ubuntu 6.06 LTS Server and mirror four partitions in RAID 1 arrays. I will also describe how to check the status of the arrays.

It seems some people prefer not to RAID swap space; I have chosen to as I notice very little swap usage on my servers and would like to have the disks fully mirrored.
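
If you do mirror swap, one sanity check worth doing after installation is to confirm that swap really lives on an md device rather than a raw partition (the device names here are just examples):

# swapon -s

If the mirroring worked, the Filename column should list something like /dev/md3 rather than /dev/sda4.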

Here, I'm using 21.5GB disks split up into the following partitions:

/       10GB
/tmp    3GB
/var    7GB
swap    1.5GB

First, boot into the Ubuntu installation program and continue until the Partition Disks screen. At this point, you need to choose to manually edit the partition table. Select the first disk and create an empty partition table on it. Create your partitions as per usual; however, rather than setting the type to the default ext3 file system, set them to Physical Volume For RAID. This applies to all partitions, including the swap area. Repeat this process on the second disk so that both have an identical partition table, and set the bootable flag to On for the two root partitions. Once you're done, both disks should show the same set of partitions, each typed as Physical Volume For RAID.
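
The installer handles all of this interactively, but it's worth knowing that if you ever need to duplicate a partition table onto a replacement disk from a shell, sfdisk can do it in one step. Assuming /dev/sda is the good disk and /dev/sdb is the new one:

# sfdisk -d /dev/sda | sfdisk /dev/sdb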

Once the partitions have been created, move to the top of the Partition Disks menu and choose Configure Software RAID. The partition manager will ask if it can make changes to the partition tables; answer 'yes'.

We now need to create a MultiDisk (MD) device for each partition pair created in the previous steps. Select Create MD Device followed by RAID 1; we want two active devices and zero spares. When asked to select the two active devices, choose a set of matching partitions. So, to create the first MultiDisk device, I selected /dev/sda1 and /dev/sdb1. Continue this process until all of your partitions have been matched into pairs and the MultiDisk devices created.
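
For the curious, the installer is doing roughly what mdadm does from the command line. Purely for illustration (the installer takes care of this for you), creating the first mirror by hand would look something like:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1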

The software RAID devices will now be listed at the top of the Partition Disks menu. These RAID devices can be used just like normal partitions; you will need to edit each one, setting the filesystem type and mount point as you would with a standard disk partition. I set mine up to match the partition scheme described above.

Once you are happy with the partitioning, select Finish Partitioning And Write Changes To Disk and continue with the Ubuntu installation as per normal.

Now that installation is complete, the final step is to install the grub bootloader on both drives; by default, the installation process will only put grub on the first disk. Boot from the Ubuntu installation disc and select Rescue A Broken System. Continue through the various option screens until prompted to select a Device To Use As A Root Filesystem, then switch to a blank terminal with Alt+F2.

Mount the bootable RAID partition and chroot to it:

# mkdir /mnt/md0
# mount /dev/md0 /mnt/md0
# chroot /mnt/md0

Now enter grub and install the bootloader onto both the sda and sdb MBRs (thanks to the gentoo-wiki for help with this):

# grub
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit

Notice that we address both disks as hd0; this is because if the first disk fails and you reboot, the second disk will become hd0.

Reboot with shutdown -r now and start up the system as per usual. Now at the command prompt, check the status of each RAID set with:

# cat /proc/mdstat
Personalities : [raid1]
md4 : active raid1 sda6[0] sdb6[1]
      4931840 blocks [2/2] [UU]
md3 : active raid1 sdb5[1] sda5[0]
      19534912 blocks [2/2] [UU]
md2 : active raid1 sda3[0] sdb3[1]
      166015616 blocks [2/2] [UU]
md1 : active raid1 sda2[0] sdb2[1]
      39061952 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      14651136 blocks [2/2] [UU]
unused devices: <none>

As you can see here, all of my RAID sets are clean and active; [UU] means both members of the mirror are up. If one of the disks were failed or being rebuilt, it would be shown here.
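
For more detail on any single array, mdadm can report per-device status. You can also deliberately fail, remove, and re-add a member to watch a rebuild happen; only try this on a test box:

# mdadm --detail /dev/md0
# mdadm --manage /dev/md0 --fail /dev/sda1
# mdadm --manage /dev/md0 --remove /dev/sda1
# mdadm --manage /dev/md0 --add /dev/sda1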

I hope this has proven helpful to anyone looking at setting up a software RAID array for the first time. As I become more familiar with administering these arrays, I will post some tips on monitoring and managing them using the standard tools provided, such as mdadm.
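
In the meantime, one feature worth a look: mdadm can run in monitor mode as a daemon and e-mail you when an array degrades. The address below is just an example:

# mdadm --monitor --scan --mail root@localhost --daemonise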

I would be very interested to hear people's opinions on Hard vs. Soft RAID, issues that may have arisen, performance comparisons, and so on. Please leave a comment and share your views.
