
Choose a RAID level that works for you

When choosing a RAID level for a new array, there are a number of important points that you need to take into consideration. Scott Lowe outlines these points.

In four recent TechRepublic blogs, we explored the good and the bad of various RAID levels. From me, you learned why I really like RAID 50 in a number of circumstances, why RAID 60 may or may not be overkill, and why RAID 10 might be a better choice than RAID 6 even though it's more expensive. My fellow TechRepublic contributor Rick Vanover shared some pointers on when to choose RAID 5 and when to choose RAID 6.

RAID level considerations

When choosing a RAID level for a new array, there are a number of important points that you need to take into consideration, including:

  • Application performance needs. Not every application is created equal. Some applications are light on I/O needs, while others thrash the storage system all day long. Make sure you choose a RAID level that matches the workload.
  • Capacity needs. Different RAID levels all result in different amounts of net usable space remaining after accounting for RAID overhead. If capacity is your primary driver, that will affect your choice of RAID.
  • Cost. Performance costs money, and capacity costs money -- achieving the necessary balance between cost and performance is your job. Choosing the right RAID level can play a big part in achieving this balance.
  • Availability needs. Every business is different. Perhaps your business is willing to pay a little more to ensure less downtime than another business would tolerate. In these cases, you need to pick a RAID level that matches the availability demanded by your organization.

The balance

Let's look at each of these considerations and see how the various RAID levels meet each objective. We're going to stick with relatively common RAID levels.

You'll notice that I placed an asterisk next to two entries in each category below; the asterisks denote the "winners" in each category. You'll also notice that RAID 0 "wins" a lot; however, I would never recommend RAID 0 for production use.

Application performance
  • * RAID 0. From a performance perspective, RAID 0 beats the rest since there is no RAID overhead, and the disk system is able to aggregate all of the disks into a single, high performance storage pool.
  • * RAID 1/10. In most cases, RAID 10 provides excellent performance since data can be read from multiple disks at the same time, suffering a little only when the workload calls for a whole lot of small sequential writes. For general raw performance for pretty much any kind of workload, RAID 1 and 10 are excellent choices. RAID 1 by itself is a two disk system that doesn't get a huge performance boost, but it wouldn't likely be used in a large array, anyway.
  • RAID 5/50. For heavy read workloads, RAID 5/50 provide very good performance. However, on heavy write workloads, RAID 5/50's need to write parity information begins to noticeably affect overall storage performance. In a rebuild situation, RAID 5/50 can suffer a heavy performance hit until the rebuild operation completes.
  • RAID 6/60. Like other RAID levels, read performance under RAID 6 is very good, but write performance takes an even bigger hit than it does with RAID 5 due to dual parity write needs. Rebuild operations can have a major performance impact.

Capacity
  • * RAID 0. Since no parity information is stored and there is no mirroring, RAID 0 provides excellent capacity. You get full use of all of the disks in the array -- 100% utilization.
  • RAID 1/10. With RAID 1/10, you take a full 50% capacity hit due to the need to retain a mirrored copy of the data. RAID 1/10 carries the largest capacity penalty, but this is often offset by its very good read/write performance.
  • * RAID 5/50. One reason RAID 5 remains so popular is its low capacity overhead: only one disk's worth of capacity is lost to parity. Under RAID 5/50, you will lose up to 33% of total raw capacity (in a three-disk RAID 5 configuration), depending on how you create your volumes.
  • RAID 6/60. RAID 6 is growing in popularity, but it carries greater capacity overhead than RAID 5. With RAID 6, two disks' worth of space is required for parity, so you take a capacity hit of up to 50% (in a four-disk RAID 6 configuration).

Cost
  • * RAID 0. From a capacity and a performance standpoint, RAID 0 carries by far the lowest price tag. With no RAID overhead and maximized performance, the $/TB or $/IOPS metrics are fantastic under RAID 0.
  • RAID 1/10. From a capacity perspective, RAID 1/10 carries a hefty cost, but from a performance perspective, it's only a little worse than RAID 0. Although you lose 50% of your usable space under RAID 1/10, you retain high performance levels, making this RAID level very popular for a variety of uses.
  • * RAID 5/50. RAID 5/50 has become an almost de facto standard when one needs to add RAID and doesn't really care about the characteristics. All RAID controllers support RAID 5, and the RAID 5 capacity overhead isn't too bad, especially as more disks are added to the array. From a performance perspective, you do lose a lot of IOPS on write workloads, making RAID 5 a bit more expensive than RAID 0, 1, and 10 when it comes to supporting write workloads.
  • RAID 6/60. Expensive in every way, RAID 6 can result in capacity overhead matching RAID 1/10 (50%), and it also carries a hefty write penalty -- it's even worse than RAID 5's.

Availability
  • RAID 0. On the availability front, RAID "zero" lives up to its name. It shouldn't even be called RAID; it's really just a bunch of disks (JBOD). If any disk in the array fails, you can kiss your data goodbye. Although RAID 0 provides great performance and maximum capacity, it includes zero data protection capability.
  • * RAID 1/10. RAID 1/10 - mirroring - is a highly available configuration. All data are written to two disks in the array, so you can lose multiple disks -- as long as you lose the "right" ones -- and remain functional on a single copy of the data.
  • RAID 5/50. RAID 5 provides reasonable availability and is often enough for many organizations. With RAID 5, your array can lose a single disk and remain functional, although in a degraded state. If you lose a second disk, your data is gone. RAID 50 provides a little more protection. Each individual RAID 5 subarray in the RAID 50 can lose a single disk and remain functional. In theory, you could lose a single disk in each and every subarray and remain functional.
  • * RAID 6/60. RAID 6/60 provide very high levels of availability since you can lose two disks in each RAID 6 array and remain functional.
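
The capacity and availability trade-offs above reduce to simple arithmetic. Here is a rough sketch, assuming equal-sized disks; the function name, parameters, and the eight-disk example are my own, not from the article:

```python
# Rough usable-capacity and fault-tolerance figures for the RAID levels
# compared above, assuming equal-sized disks. An illustrative sketch of
# the arithmetic, not vendor-exact behavior.

def raid_summary(level, disks, subarrays=1):
    """Return (usable_fraction, guaranteed_disk_failures_survived).

    For nested levels (10, 50, 60), `disks` is the total disk count and
    `subarrays` the number of striped subarrays.
    """
    per = disks // subarrays          # disks per subarray
    if level == 0:
        return 1.0, 0                 # full capacity, zero redundancy
    if level in (1, 10):
        return 0.5, 1                 # mirroring: 50% capacity hit
    if level in (5, 50):
        # one parity disk per subarray; only one failure is *guaranteed*
        # survivable, since a second failure in the same subarray is fatal
        return (per - 1) * subarrays / disks, 1
    if level in (6, 60):
        return (per - 2) * subarrays / disks, 2
    raise ValueError(f"unsupported RAID level: {level}")

# Example: eight 2 TB disks (RAID 50/60 split into two 4-disk subarrays)
for level, subs in [(0, 1), (10, 1), (5, 1), (50, 2), (6, 1), (60, 2)]:
    frac, tol = raid_summary(level, 8, subs)
    print(f"RAID {level:>2}: {frac * 8 * 2:.0f} TB usable, "
          f"survives at least {tol} disk failure(s)")
```

Note that the fault-tolerance figure is the guaranteed minimum: as the article says, RAID 50/60 can survive more failures if they land in the "right" subarrays.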

My take

If you really don't know what RAID level to choose, go with RAID 50 when capacity is more important than performance (and if your workload is mostly sequential reads, RAID 50 is awesome anyway), or RAID 10 when both random and sequential read/write performance trumps capacity. If data protection is your primary concern, RAID 6/60 should be at the top of your list.
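
That rule of thumb can be sketched as a tiny chooser; the priority names here are my own shorthand, not Scott's:

```python
def pick_raid(priority):
    """Very rough RAID chooser following the advice above.

    priority: 'capacity', 'performance', or 'protection'
    (illustrative labels, not from the article).
    """
    table = {
        "protection": "RAID 6/60",   # data protection first
        "performance": "RAID 10",    # random + sequential R/W over capacity
        "capacity": "RAID 50",       # capacity first; fine for sequential reads
    }
    try:
        return table[priority]
    except KeyError:
        raise ValueError(f"unknown priority: {priority}") from None

print(pick_raid("capacity"))     # RAID 50
print(pick_raid("protection"))   # RAID 6/60
```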



Since 1994, Scott Lowe has been providing technology solutions to a variety of organizations. After spending 10 years in multiple CIO roles, Scott is now an independent consultant, blogger, author, owner of The 1610 Group, and a Senior IT Executive w...


For any production server I would not consider a configuration without a dedicated hot spare or global hot spare. There's less concern about a second drive failing before you notice and replace the first failed drive. We have several Intel SRCSAS18 controllers; they work very well.


"RAID 6/60. Expensive in every way, RAID 6 can result in capacity overhead matching RAID 1/10 (50%), and it also carries a hefty write penalty -- it's even worse than RAID 6." So you're saying that RAID 6 write performance is worse than that achieved with RAID 6? Nice trick. ;)


Of course RAID 10 is the best choice between capacity and performance -- don't tell me your company's server hard disk is 20GB; if so, move to SSD. I don't see RAID 5's better READ helping much when you still need to WRITE to it; even for web hosting, the delay in writing will drag the overall performance down a lot. Live tested at 30,000 requests per minute on a SQL database doing reads and writes.


Having just lost an external HDD (my partner's 'Time Machine' on her iMac) to the infamous "Click of Death" made me realise zero RAID was not the way to go for this application (secure data storage). Problem is that the solution I chose didn't work. The lower-priced 'Welland' dual bay RAID 0/1 is incompatible with the way the iMac OS and/or Time Machine app wants to do things. A quick scan of Mac forums says this is a very common problem. It seems Mac SOHO users have just one solution - back up your valuable data to a single HDD. The intention of this post is not to start a PC/Mac war of words. Quite the opposite - it is to reinforce the original article. Data security needs a minimum of RAID 1. I imagine most SOHO users cannot afford the expense of other RAID set-ups, however. A useful article, thanks Scott.


What about the next gen RAID levels "RAID 1E" for example?


I'd be careful with calling RAID 0 a JBOD. JBODs are often deployed in a way that one logical disk is on a single physical disk, which makes it a little bit better than RAID 0 when it comes to availability since you will "only" lose the data on the failed disk, not everything. Not trying to be a hairsplitter :)

Scott Lowe

RAID 6 just sucks when it's compared to RAID 6. RAID 6's write performance simply can't keep up with that of RAID 6. If you get the choice between using RAID 6 and RAID 6, go with RAID 6 every time. Or, you can just use RAID 5, which does carry a somewhat smaller penalty with regard to writes ;-) Scott


As the article mentions - it is all a trade off. The golden rule of technology (or almost anything) applies here: "fast, reliable, cheap - pick any 2." Obviously the words fast, reliable or cheap can be interchanged for words you find more appropriate - the point is to use the right tool for the right job. In my situation performance degradation during the rebuild of a RAID 5/50 or 6/60 volume is the equivalent of an outage so I use RAID 10 as it offers the best performance/availability at the expense of more disks for less storage space. In our case the cost of the hardware has already paid for itself with the drive failures we have had (they WILL happen, not IF) not costing us any down time or overtime resurrecting a failed system. As always, remember that RAID is never a substitute for good backups.


As pointed out, JBOD and RAID 0, while functionally similar, are technically different. One can also make a convincing argument that RAID 0 not only provides no data protection, but that the more drives used in the stripe, the greater the potential for data loss, as multiple points of potential failure are introduced.


A JBOD fills the first drive before moving on to subsequent drives. A RAID 0 failure is catastrophic: everything is lost.
