Questions

Mixed RAID Drives

+
0 Votes
Locked


MattCollinsUK
Hello All,

First post, long time reader here.

What is the best possible RAID configuration I can get with the following disks?

2 x 500 GB 7200 RPM 16 MB cache SATA 3 Gb/s
2 x 500 GB 7200 RPM 64 MB cache SATA 6 Gb/s

Going into an HP ProLiant MicroServer N40L (Turion II).

1 x 250 GB for the OS (going into the ODD bay)

I'm using this server to learn on, so nothing is critical (hence not doing it properly with four identical drives). I just want the best possible performance and redundancy without buying another two hard drives, despite the different speeds.

After doing a bit of research I have come up with RAID 10 as a possible solution, with the two faster drives effectively slowed down to match the others. What do others think?

Thanks in advance.
  • +
    0 Votes
    OH Smeg

    What is it you want to achieve here and then maybe we can suggest the better way to achieve it.

    Col

    +
    0 Votes
    MattCollinsUK

    Hi Col

    Data redundancy if one of the four drives fails, while also providing a performance increase.

    +
    1 Votes
    gechurch

    First of all, what's your plan for a RAID controller? According to the quickspecs (http://h18004.www1.hp.com/products/quickspecs/13716_div/13716_div.HTML) your MicroServer supports 4 drives total (so you'll need something else to plug the OS drive into), and the onboard RAID only supports RAID 0 and 1, neither of which is suitable (RAID 0 has no redundancy; RAID 1 would waste two drives). I know nothing about the onboard RAID in this server, but I'd guess you'll get much better results using a proper RAID card anyway. I've purchased old P400 cards in the past; you can plug your SATA drives in (using the right cable - it may come with the card or you may need to buy it separately), and I've picked them up second-hand for about $100 with 512 MB of battery-backed write cache. If you do go this route you'll need to check a) that there's a slot for the card, b) that it will physically fit, c) that the little power supply in that server can handle it plus 5 drives, and d) that the P400 has drivers for the OS you want to use (from memory I think it only works under server OSes).

    Anyway, to answer your question, the two options that spring to mind are RAID 5 and 10. RAID 5 will take your 4 drives and use 3/4 of each of them to store data; the other quarter of each drive holds the redundant (parity) information. The redundancy is spread across all drives, so a single drive failure loses you nothing. In a single RAID 5 array you will have 1.5 TB (well, closer to 1.36 tebibytes, which is what the OS will report). RAID 5 is good for read speed, but needs to touch every drive when you perform a write. In my very limited experience I've found writes improve a lot when using 4 drives (but were pretty poor with 3 drives).
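    To make that capacity arithmetic concrete, here is a quick illustrative sketch in Python (not from the thread, and not tied to any particular controller) of RAID 5 usable space and the TB-vs-TiB conversion:

    ```python
    def raid5_usable_bytes(num_drives, drive_bytes):
        """RAID 5 spreads one drive's worth of parity across the array,
        so usable capacity is (N - 1) * drive size."""
        assert num_drives >= 3, "RAID 5 needs at least three drives"
        return (num_drives - 1) * drive_bytes

    drive = 500 * 10**9                   # a "500 GB" drive (decimal gigabytes)
    usable = raid5_usable_bytes(4, drive)
    print(usable / 10**12)                # 1.5  -> the marketed "1.5 TB"
    print(usable / 2**40)                 # ~1.36 -> tebibytes, what the OS shows
    ```

    The gap between the two numbers is purely the decimal-vs-binary unit definition, not lost space.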

    RAID 10 will take two of your drives and RAID 1 them (which gives you redundancy only - performance-wise there's little difference between RAID 1 and a single HDD). It will then take your other two drives and RAID 1 them, and then stripe the two RAID 1 arrays into a RAID 0, which is where the speed increase comes from. Any one of the 4 drives can fail without data loss, and you can even survive a second drive failure if the second drive happens to be in the other RAID 1 pair. I think the general consensus these days is that RAID 10 is better than RAID 5. Both read and write speeds should be around double that of a single drive, but I suggest you look up some benchmarks people have run to check for yourself.
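    That two-failure behaviour can be checked with a tiny enumeration (illustrative Python; the drive labels A-D are arbitrary): a RAID 10 of two mirrored pairs survives a second failure only when it lands in the other pair.

    ```python
    from itertools import combinations

    # Four drives arranged as two RAID 1 mirrors, striped together (RAID 10).
    mirrors = [{"A", "B"}, {"C", "D"}]

    def survives(failed):
        """The array survives as long as no mirror loses both of its drives."""
        return all(not pair <= set(failed) for pair in mirrors)

    for pair in combinations("ABCD", 2):
        print(pair, "OK" if survives(pair) else "DATA LOSS")
    ```

    Of the six possible two-drive failures, four are survivable; only losing both halves of the same mirror (A+B or C+D) destroys the array.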

    Re mixing drives, I wouldn't have any hesitation putting these drives together into a single array. The 3 Gb/s vs 6 Gb/s makes no difference - that just refers to the speed of the channel, not the drives. The cache will make a bit of difference, but not much; the speed increase you get by RAIDing four drives will far outweigh any gain from a bigger buffer. And in any event, if you get a proper RAID card with cache, it will use the RAID card's cache and disable the drives' physical write cache.


    +
    0 Votes
    MattCollinsUK

    Apologies for the late reply, gechurch.
    Thank you very much for the information provided, that clears up a lot of questions I had.
    I have a total of 6 drives in my server: 4 connected in the drive bays, plus another 2 in the ODD bay (I have a 5.25" to 2 x 3.5" caddy).

    Unfortunately, due to an oversight of mine, and as you correctly stated, the RAID controller only allows RAID 0 and 1.

    So the way I have configured it now is:

    2 x 250 GB in RAID 1 for the OS (Windows Server 2012)

    4 x 500 GB in Software RAID 5 for storage

    1 x 2 TB external hard drive for backups (incremental daily and a full backup every 2 weeks)

    I have only had this setup running for a day, but in some initial testing I am getting write speeds of 110 MB/s to the RAID 5 array (from a separate PC with an SSD, over a 1 Gb/s network). To me that seems pretty good - is it?
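    For context on why 110 MB/s is about as good as it gets here: that figure is close to the ceiling of gigabit Ethernet itself, not the array. A rough sketch of the limit (the 6% protocol-overhead figure is an assumed ballpark for full-size Ethernet/IP/TCP frames):

    ```python
    link_bits_per_s = 1_000_000_000              # gigabit Ethernet
    raw_mb_per_s = link_bits_per_s / 8 / 10**6   # 125.0 MB/s with zero overhead

    # Ethernet + IP + TCP headers eat roughly 5-6% of the wire for
    # full-size frames, leaving ~117-118 MB/s as a realistic best case.
    overhead = 0.06                              # assumed, ballpark figure
    practical = raw_mb_per_s * (1 - overhead)
    print(round(raw_mb_per_s), round(practical))  # roughly 125 and 118
    ```

    So a sustained 110 MB/s means the network, not the RAID 5 array, is likely the bottleneck.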

    +
    0 Votes
    gechurch

    Thanks for posting back your results Matt. Yeah - I'd be very happy with that write speed. From memory that's about the same write speed I got on a similar setup with a dedicated P410 RAID controller. I was using 4x 250GB SATA drives in RAID 5... the drives were probably a little slower than yours, but I would have expected the software RAID to be more of a bottleneck than it actually is.

    Incidentally, I'd be very happy with your network performance too. I'm cabling my house up at the moment using Cat6. I'm looking forward to having gigabit speeds, but I'll need to replace my NAS, which is only 100 Mbps (and only 2-bay). I've been weighing up getting a 4-bay rack-mount NAS vs just throwing drives into a Windows PC and doing it myself. Your results are encouraging... I might pick up the same server next time my supplier has them on special and build the same sort of setup. I don't want anything too power-hungry, but was worried the MicroServer wouldn't have enough juice to run 5 drives (I'll go with a single drive for the OS and 4 drives in RAID 5 for data). Obviously power is not a problem though.
