Answer for:

Mixed Raid Drives

First of all, what's your plan for a RAID controller? According to the QuickSpecs, your MicroServer supports 4 drives total (so you'll need something else to plug the OS drive into), and the onboard RAID only supports RAID 0 and 1, neither of which is suitable (RAID 0 has no redundancy; RAID 1 would waste two drives). I know nothing about the onboard RAID in this server, but I'd guess you'll get much better results from a proper RAID card anyway. I've purchased old P400 cards in the past - you can plug your SATA drives in (using the right cable, which may come with the card or may need to be bought separately), and I've picked them up second-hand for about $100 with 512MB of battery-backed write cache. If you do go this route you'll need to check a) that there's a slot for the card, b) that it will physically fit, c) that the small power supply in that server can handle it plus 5 drives, and d) that the P400 has drivers for the OS you want to use (from memory, I think it only works under server OSes).

Anyway, to answer your question, the two options that spring to mind are RAID 5 and RAID 10. RAID 5 will take your 4 drives and use 3/4 of each of them to store data; the other quarter of each drive stores the redundant (parity) information. The redundancy is spread across all drives, so a single drive failure loses nothing. In a single RAID 5 array you'd have 1.5TB of usable space (well, closer to 1.36 tebibytes once your OS reports it in binary units). RAID 5 is good for read speed, but needs to touch every drive when you perform a full-stripe write. In my very limited experience, writes improved a lot when using 4 drives, but were pretty poor with 3 drives.
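To make the parity idea concrete, here's a toy sketch (plain Python, nothing to do with any real controller firmware) of how XOR parity lets RAID 5 rebuild a failed drive's data from the surviving drives:

```python
from functools import reduce

# Toy RAID 5: one stripe across 4 drives -> 3 data blocks + 1 parity block.
data_blocks = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]  # blocks on drives 0-2

def xor_blocks(blocks):
    """XOR equal-length blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data_blocks)  # the parity block, stored on drive 3

# Simulate losing drive 1: rebuild its block from the other drives + parity.
survivors = [data_blocks[0], data_blocks[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data_blocks[1]  # XOR of the survivors recovers the lost block
```

This is also why RAID 5 writes are slower than reads: updating one data block means the parity block has to be recomputed and rewritten too.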

RAID 10 will take two of your drives and RAID 1 them (which gives you redundancy only - performance-wise there's little difference between RAID 1 and a single HDD). It will then take your other two drives and RAID 1 them as well. Finally it takes your two RAID 1 pairs and makes a RAID 0 out of them, which is where the speed increase comes from. Any one of the 4 drives can fail without data loss, and you can even survive a second drive failure if the second failed drive happens to be in the other RAID 1 pair. I think the general consensus these days is that RAID 10 is better than RAID 5. Both read and write speeds should be around double that of a single drive, but I suggest you look up some benchmarks people have run to check for yourself.
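A quick way to see the two-drive-failure behaviour is to enumerate the combinations. This is just an illustrative Python sketch, with drives 0+1 and 2+3 as hypothetical mirror pairs:

```python
from itertools import combinations

# RAID 10 over 4 drives: mirror pair A = {0, 1}, mirror pair B = {2, 3}.
mirror_pairs = [{0, 1}, {2, 3}]

def data_lost(failed):
    """Data is lost only if *both* members of some mirror pair have failed."""
    return any(pair <= set(failed) for pair in mirror_pairs)

# Every single-drive failure is survivable:
assert not any(data_lost({d}) for d in range(4))

# Of the six possible two-drive failures, only two are fatal
# (losing both halves of the same mirror):
fatal = [set(c) for c in combinations(range(4), 2) if data_lost(c)]
print(fatal)  # [{0, 1}, {2, 3}]
```

So four of the six possible double failures are survivable, which is part of why RAID 10 is usually preferred over RAID 5 on 4 drives.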

Re mixing drives, I wouldn't have any hesitation putting these drives together into a single array. The 3Gb/s vs 6Gb/s makes no difference - that just refers to the speed of the SATA link, not the drives themselves. The cache will make a bit of difference, but not much: the speed increase you get by RAIDing four drives will far outweigh anything you gain from a bigger buffer. And in any event, if you get a proper RAID card with cache, it will use the RAID card's cache and disable the drives' physical write cache.