General discussion

  • Creator
  • #2293670

    Redundancy and clustering


    by buschman_007 ·

    I came into this company and have pretty much overhauled their entire network. Going from an NT environment, I have upgraded all their major systems to 2K3 AD as well as Exchange 2K3, so the core network itself is running well. But now my thinking is switching over to disaster recovery mode, pondering the “what ifs”. Right now my network relies on tape backup, but I want more safety and less downtime. We have a remote office in NJ (we are located in MD) with a full T1 connecting us. I’m thinking about server redundancy, but am new to this.

    Could anyone suggest the right path to ensuring 0 downtime for servers such as File Servers, Exchange Servers, and DB Servers? What systems would I need to institute to ensure that my users and the outside world never notice a blip on the radar screen?

    Thanks for your time and advice.


All Comments

  • Author
    • #2690205

      Reply To: Redundancy and clustering

      by opiate ·

      In reply to Redundancy and clustering

      Have you looked into file replication over multiple NAS servers? I am still researching because I am in the process of planning the same, but this is one of the solutions I have seen: a few NAS servers that run hot-swap RAID arrays.

      • #2731058

        NAS Servers

        by buschman_007 ·

        In reply to Reply To: Redundancy and clustering

        No I have not. What is a NAS server and how can it help? Thanks for the heads up.


        • #3310463

          Go with Double Take Software

          by sarina ·

          In reply to NAS Servers

          If you go with NAS replication, use Double-Take replication software. It is the best. Definitely don’t go with CA or you’ll be sorry.

      • #2704658

        For realtime file replication — Double Take

        by tomsal ·

        In reply to Reply To: Redundancy and clustering

        There is a product called “Double Take” from a company called NSI Software, Inc. It’s not cheap; pricing varies with the number of servers and the OS you are using, but for an average Windows server the retail cost is about $2,000 per server.

        However, it’s built from the ground up for maximizing server uptime. It’s perfect for use with a co-location facility, for total DR security.

        What it does (and it’s fully configurable of course, according to how often/how much you want to replicate, etc.) is replicate all changed files to a replication server in REAL TIME; this means an EXACT copy of the data on the replication server as on the production server.

        When the production server “dies”, Double-Take takes over and switches the appropriate replication server to become the production server. Downtime is minimal; definitely less than 10 minutes.

        FYI, there is no such thing as true “ZERO” downtime, UNLESS you have bottomless pockets of course. Most companies do not have the luxury of “money is no object” budgets.

        Check it out.

    • #2705082

      NAS Servers – What are they?

      by ·

      In reply to Redundancy and clustering

      Benefits and Features of the NAS 200m/160GB
      Perfectly sized for your small business or workgroup environment, the Iomega NAS 200m/160GB storage server delivers 160GB of RAID-redundant network storage and outstanding performance while easily integrating into your existing network infrastructures – and it’s priced affordably too!

      Your network can have:

      160GB of storage you already know how to use with a Microsoft Windows Powered OS
      easy installation with plug-and-play connectivity
      the best data protection and data storage for your dollar

      How Can I…
      Add affordable storage to my workgroup?
      Your workgroups, remote sites and intense graphics users all need affordable, accessible storage for file sharing and storage. Iomega NAS servers allow you and your diverse workgroups (Windows, Mac, Unix, and more) to do just that. High-end graphics, CAD/CAM operators and others can store valuable images on redundant Iomega NAS servers or simply use it as a scratch disk. And, user management rights can quickly be established so data stays safe and protected.

      Use my NAS server as an interim backup for tape?
      Backing up to a tape drive can often extend beyond the window of time available each night, causing you delays, and since only one server can back up to a tape device at a time, those delays can be considerable. But there is a better way! Use your Iomega NAS server as an interim or near-line solution to save you valuable time. With NAS, multiple servers can write to a dedicated NAS device simultaneously (and many times quicker than writing to tape), so it doesn’t matter if the tape drive takes all night to back up from the NAS server. Plus, this solution adds another point of failure/protection to your network, and restoration of files from a NAS device is extremely easy and efficient.

      Technical Information

      Microsoft Windows Powered OS
      Microsoft Services for Unix
      Microsoft Services for Novell
      File Services for Macintosh
      Native support of Active Directory Services and DFS aggregation
      Computer Associates eTrust Antivirus 7.0
      Iomega NAS 200m/160GB
      Intel P4 Celeron Processor, 1.7GHz
      256MB RAM
      RAID 0, 1
      Single 10/100 Ethernet
      2 x 80GB Hard Drives, 7200rpm
      1U Form Factor
      2 USB 2.0 Connections
      Service & Support: Limited three year parts and labor, 24×7 telephone support and E-mail with 4 hour response time.

      • #2705077

        NAS the cheap way!

        by ·

        In reply to NAS Servers – What are they?

        I currently run a second server beside the critical systems, based on a cheap clone which has all the capabilities of a server but is a small portion of the cost. Hourly, the systems are synchronised (this can be more frequent if required) and all files co-exist on both systems; AD and DFS are good for this, and it’s completely automatic. I run Undelete Server on the primary system, from where I can recover most inadvertently deleted files with little effort. It holds multiple recovery versions of the same file, so if a file has changed 3 or 4 times in a day you can recover each version. “Deleted” is the important point: MDBs, for example, are never deleted; they are just overwritten or updated and therefore cannot be undeleted in general terms.
        There is some very good software around for making live copies for future recovery, and SecondCopy has proved its worth on many occasions. I am sure there are others just as good.
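        As a rough sketch of the hourly one-way sync described above (written in Python rather than any particular sync product; the `mirror` helper and the paths are hypothetical, and a real setup would rely on DFS or a tool like SecondCopy), the core copy-if-changed loop looks like this:

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> list[str]:
    """One-way mirror: copy new or changed files from src to dst.

    Returns the relative paths that were copied this run."""
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            rel = f.relative_to(src)
            target = dst / rel
            # Copy only if the target is missing or its contents differ.
            if not target.exists() or not filecmp.cmp(f, target, shallow=False):
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # copy2 preserves timestamps
                copied.append(str(rel))
    return copied
```

        Running it hourly is then just a cron or Task Scheduler entry pointing at the script.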

        Lots of luck with the thinking.

        Gary Stevens

        • #2705076

          NAS, SAN and Replication

          by ymiheisi ·

          In reply to NAS the cheap way!

          My understanding is that NAS is suitable for file servers but not databases. We implemented a SAN/NAS solution where we used the SAN for the databases and NAS for files. We are using NSI Double-Take (fantastic software) to replicate locally (for High Availability) and to a remote site (for Disaster Recovery). The software provides automatic failover on the local subnet, but we need to do a few minutes’ work (changing server names and IP addresses) to fail over to the Disaster Recovery site.

        • #2704616

          Other alternative than tape backup

          by joe90fluke ·

          In reply to NAS the cheap way!

          Try the NovaNet backup made by NovaStor.
          You can back up to removable hard drives located in another computer on the network, and they also provide an Open File Manager.
          I personally back up my Lotus Notes data at a rate of 23 gigabytes in an hour and a half.
          I suggest always working with at least 2 sets of backups.
          I let go of the tape system a long time ago.
          This software is a little bit special to configure, but it’s worth the effort.


        • #2709944

          Two sets? Ahemmm….

          by jens ·

          In reply to Other alternative than tape backup

          Two sets are definitely not enough. Most professionals have one daily, one weekly, one monthly and one set per year to fall back on, often with more sets stored off-site. Also, are they stored in different locations? What is the cost per gig compared to a very good tape unit?
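          The daily/weekly/monthly/yearly rotation described here is the classic grandfather-father-son scheme. As an illustration only (the set names and the Friday-full convention are assumptions, not anything from the thread), here is how a given date might be classified:

```python
from datetime import date

def rotation_set(d: date) -> str:
    """Classify a backup date into a grandfather-father-son set.

    Most specific set wins: yearly > monthly > weekly > daily.
    Assumes the weekly full backup runs on Fridays."""
    if d.month == 1 and d.day == 1:
        return "yearly"
    if d.day == 1:
        return "monthly"
    if d.weekday() == 4:  # Friday
        return "weekly"
    return "daily"
```

          The retention question (how many of each set to keep, and where) is then a policy decision layered on top.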

    • #2705075

      Clustering you said it, Linux will do it !

      by mikatrob ·

      In reply to Redundancy and clustering

      Cluster servers are the only real way to get the 99.9% uptime you’re asking for. NAS units are nice, and I personally don’t have problems with them apart from the limitations they create. If you can afford NAS units, then use real servers instead and run BSD or Linux on them; put your money into multiple NICs and server hardware with SCSI RAID CONTROLLERS, not an IDE RAID controller. These systems are inexpensive. Then use DFS, since you have MS Win Servers, to present the file shares. (Please note: you will have to learn Linux to utilize these systems if you do not already use it.) With multiple NICs you can have a dedicated pipe between servers and run replication 24×7 for this and any other server-to-server data, then route normal traffic through the other NIC in the server. Also make sure you use switches and not standard hubs: a hub may be 100Mbit, but you must divide 100 by the number of ports to find the true per-port data rate. Switches are inexpensive as well, and you will have the redundancy you are looking for.
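      The hub-versus-switch rule of thumb above can be put into a tiny calculation (a sketch; real shared-media throughput also loses capacity to collisions, so the hub figure is a best case):

```python
def per_port_mbit(total_mbit: float, active_ports: int, switched: bool) -> float:
    """Best-case per-port throughput in Mbit/s.

    A switch gives every port the full rate; a hub shares its total
    bandwidth across all active ports."""
    if switched:
        return total_mbit
    return total_mbit / active_ports

# A 100 Mbit hub with 8 busy ports leaves each port 12.5 Mbit at best;
# the same ports on a switch each keep the full 100 Mbit.
```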

      Linux can function BDC-style without problems in an AD setup. If the MS systems get hit by a virus, the network stays up while you take down the AD server. (In the MS world of AD there is no more PDC/BDC; all systems are PDCs.) Again, Linux can perform flawlessly, and clustering will outperform the MS systems. We already use this setup; it saves money by creating a balance, and it has saved us more than once.
      Anyone else?

    • #2704706

      Using DFS and FRS

      by ryan.steel ·

      In reply to Redundancy and clustering

      You have so many options available to you, at a cost. The quick, cheap and cheerful solution would be to utilise the tools that W2K3 has built in: create a DFS share and use DFS to automatically replicate the data from the hub server to the spoke server. You can set this up in a few minutes, set up a replication schedule, and then look at other software solutions if you find that this does not work for you. Good luck.
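      Whichever tool does the replicating, it pays to verify that hub and spoke actually match. A minimal sketch of such a check (the helper names are hypothetical; it assumes both shares are reachable as local paths) using SHA-256 digests:

```python
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> dict[str, str]:
    """Map each file's relative path to its SHA-256 hex digest."""
    digests = {}
    for f in sorted(root.rglob("*")):
        if f.is_file():
            digests[str(f.relative_to(root))] = hashlib.sha256(f.read_bytes()).hexdigest()
    return digests

def out_of_sync(hub: Path, spoke: Path) -> set[str]:
    """Relative paths that differ, or exist on only one side."""
    a, b = tree_digest(hub), tree_digest(spoke)
    return {p for p in a.keys() | b.keys() if a.get(p) != b.get(p)}
```

      An empty result means the replica is byte-for-byte current; anything else names the files the next replication pass should pick up.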

    • #2704670

      DR design

      by richardgray ·

      In reply to Redundancy and clustering

      I have put together our company’s plan for DR on our networks and have maintained it for 6 years. My recommendation for your question is to use Cisco’s CSS 11500 series cluster switches. You can redirect traffic from 1 location to another almost instantly and have it work automatically. These switches are not cheap, but they work and are reliable. I have a friend who works for a large utility company that uses them for their redundant data centers.

    • #2704668

      Tapes – no way around them

      by jens ·

      In reply to Redundancy and clustering

      (All professionals will surely know that, but…)

      Apart from legal reasons, even with a perfectly working system, redundant to the max, there is no way around tape backups and archives.

      Imagine having to restore data accidentally deleted yesterday.

      Or – a virus attack that destroyed files.

      Not to mention the feds needing old data…

      • #2704634

        Well I had a failure recently

        by hal 9000 ·

        In reply to Tapes – no way around them

        Where a power supply’s 5V rail went bad and trashed the MBRs on every HDD.

        Now, without the good old tape along with the autoloader, I’d have been up the creek without a paddle, so to speak.

        It is not only outside attack that can cause problems; there is always hardware failure, so this is one very good reason to keep the backup data well away from the machines. I, for one, would never rely on a HDD as any form of DR other than as the most rudimentary first-line procedure.


        • #2710275

          Was the power supply redundant?

          by kevj ·

          In reply to Well I had a failure recently

          All of my servers are RAID 5 hot swappable with dual power supplies.

        • #2710256

          No this was a high end workstation not a server

          by hal 9000 ·

          In reply to Was the power supply redundant?

          While it did have a dual-processor motherboard and a dual SCSI bus with 13 HDDs installed, it didn’t have the option of redundant power supplies.

          Just one of the breaks when working with this type of stuff.


    • #2704662

      Get away from tape RAID and iSCSI

      by mberkow914 ·

      In reply to Redundancy and clustering

      First thing is… ditch the tape. RAID arrays are way down in price. Depending on your data size, you could also consider iSCSI. Either way, you can purchase capacity for between $3K and $5K per TB depending on array size. iSCSI will cost more, but it handles the VPN a little more cleanly. There are packages available for snapshots and many-to-many and many-to-one replication. The switch redirect mentioned in another response is nice also.

      • #2704655

        NEVER NEVER NEVER rely on JUST drives

        by tomsal ·

        In reply to Get away from tape RAID and iSCSI

        I disagree.

        And btw, I know a guy who buries me in experience in this field (he’s even a computer science professor now), and he wouldn’t give in to disagreeing on this topic either.

        NEVER rely on just HDDs (be it iSCSI, RAID, NAS, SAN, or whatever latest buzzword you want to throw at it) for data. Those technologies are great, but UNLESS you have a co-location deal going, I say always keep a removable copy like tape in use. Tape is portable and easy to bring off-site in a jiffy. And even in a co-location environment (granted, the chance of BOTH the production location and the co-lo location failing at the same time is slim to none), STUFF CAN STILL HAPPEN.

        All those fancy systems would be worth squat if both locations fail/screw up.

        Meanwhile, if you do the iSCSI, the co-location, the NAS/SAN… AND tape… well then you have severe protection (multiple layers) for DR. Just make sure you don’t store your tapes on-site.

        • #2704653

          Tape will go bye bye (but not yet)

          by mberkow914 ·

          In reply to NEVER NEVER NEVER rely on JUST drives

          You are correct not to rely totally on disk. When I talk redundancy, I mean multi-site, and with the correct config you can (if you are comfortable) eliminate the tape. I have seen tapes become corrupt. With TBs becoming commonplace and PBs starting to show up, restores from tape can go on for days. I would sleep better with 2, 3, … n mirrored sites than with a tape. It all depends on money and comfort.

        • #2704626

          While True

          by hal 9000 ·

          In reply to Tape will go bye bye (but not yet)

          What’s a few days when you are considering all the company’s data? The systems that you talked about are only useful for a very small business that doesn’t store much data digitally and has hard copies of the files.

          I too have seen horrendous cases of tape being useless, but it has always come down to human error or lack of knowledge. Like a case a few years ago where those in charge thought that, to save a few $, they would get the security guard to run the daily tape backups. He of course knew nothing about computers and followed the instructions he had been given to the letter. The only problem was that the tape drive had failed and only ran for 5-10 seconds, by which time he had left and gone on his rounds. The next morning the office staff removed the tape, replaced it with that day’s, and took the backup into a secure location. This went on for 6 months and was only discovered when the server failed.

          To me the biggest problem with DR is the human factor, and if you can remove this as much as possible it works better.

          So I would always recommend a dedicated server with a tape autoloader whose only job is to perform backups, either full daily or incremental; that way, unless the human involved forgets to change the tapes over, it is as secure as possible.

          Personally I’ve seen far too many HDDs fail, and even a few days spent recovering the data is cheap compared to sending the failed drive(s) away to be stripped and have the data recovered, which takes a lot longer than a few days and costs a lot more. Not to mention the fact that once this happens you lose control of the data. So if security is a very real issue, your only alternative is to have a fully fitted-out forensics capability on site, and even then a few days would be considered a luxury compared to the time required to recover the data.


        • #2706062

          Human Error

          by imaginetsecurity ·

          In reply to While True

          A major human-error factor in the example you provided is common to all backup routines: the failure lies in not having a frequent process to validate the backup data and the ability to perform a restoration. This is too common a problem among many admins.

          Blind faith that a backup runs when and how scheduled will bite you in the butt if you are not testing the backup for completeness and accuracy and then testing your restoration process to ensure it works when needed.

          Whether or not a Guard is launching the backup, if the dataset is not validated the next day by an Admin it is likely that the result will be as you described. An incomplete backup with no possibility of restoration.

          The lesson to apply is to ALWAYS VALIDATE a backup whether it is to tape or HDD (SAN/NAS/Removable) and to perform regular RESTORATION TESTING. For regulatory compliance, if applicable, this validation and testing should be documented.
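          The validate-then-test-restore advice above can be automated. A minimal sketch (the helper names are hypothetical; a real job would also log and alert): record a checksum manifest at backup time, then verify a test restore against it:

```python
import hashlib
import json
from pathlib import Path

def write_manifest(backup_dir: Path, manifest: Path) -> int:
    """Record a SHA-256 checksum for every file; returns the file count."""
    entries = {str(f.relative_to(backup_dir)): hashlib.sha256(f.read_bytes()).hexdigest()
               for f in sorted(backup_dir.rglob("*")) if f.is_file()}
    manifest.write_text(json.dumps(entries))
    return len(entries)

def verify_restore(restored_dir: Path, manifest: Path) -> list[str]:
    """Return the files that are missing or corrupt in the restored copy."""
    expected = json.loads(manifest.read_text())
    bad = []
    for rel, digest in expected.items():
        f = restored_dir / rel
        if not f.is_file() or hashlib.sha256(f.read_bytes()).hexdigest() != digest:
            bad.append(rel)
    return bad
```

          A zero count from `write_manifest`, or a non-empty list from `verify_restore`, is exactly the kind of condition that should page an admin the next morning.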

      • #2704652

        Still need tape

        by strikeleader ·

        In reply to Get away from tape RAID and iSCSI

        Depending on your business you will still need to archive your data. Disk to disk to tape is the way to go especially if you have large amounts of data to backup. Federal regs require that you archive data in a way that it cannot be altered.

      • #2704628

        That is the last thing I would suggest

        by hal 9000 ·

        In reply to Get away from tape RAID and iSCSI

        As posted above, I recently encountered a power supply failure that took out the MBRs on all the drives connected to that unit. It was a backup server, and while the hardware worked once the failed power supply was replaced, none of the drives contained any data any longer.

        To me the good old tape is the only way to go, but keep the chance of “HUMAN ERROR” out and use an autoloader. At least that way, when everything goes to hell in a handbasket, you still have the data and can get the company working again fairly quickly, as opposed to having to re-enter it all over again.


      • #2704627

        more on SAN

        by puzling ·

        In reply to Get away from tape RAID and iSCSI

        People will continue to debate SAN vs NAS, but some previous posts have already pointed out that these decisions depend entirely on your need and existing network infrastructure.

        A SAN (storage area network) uses serial technology to communicate between devices, while NAS (network attached storage) typically uses Ethernet. Each requires its own type of interface to communicate (e.g. an Ethernet card or a host bus adapter (HBA)). The serial interfaces (HBAs) promise fantastic speed (2Gb/s to 10Gb/s). If you throw in a switch between these devices, you get a very fast serial network. This happens to be an extremely nice environment for backups.

        When people are held hostage by all-night backups, it makes more sense to create a smaller high-speed backup SAN than to overhaul the Ethernet network. In other words, if your backups are leaving servers unavailable or hogging the network, you might want to offload those backup activities from your Ethernet network.

      • #2704178

        Never deny your roots

        by jloan ·

        In reply to Get away from tape RAID and iSCSI

        Yes, I will agree that RAID is cheap and a great thing to have running on almost all servers. But… NEVER ditch the tape! There is a purpose for tape: it’s removable! Therefore, if anything physical should happen to the server, you still have your files. Just my $.02.

        • #2710030

          What about removeable hard disk?

          by bj walraven ·

          In reply to Never deny your roots

          My experience has been that removable (even FireWire) hard drives have been MUCH more reliable than tape.

          Anyone else care to comment along these lines?

        • #2710015

          I agree

          by cgrau ·

          In reply to What about removeable hard disk?

          I use removable hard drives too. They are very nice for my weekly rotation.

          However, I use tape for my archives.

        • #2709997

          Well I use both Removable HDD for speed

          by hal 9000 ·

          In reply to I agree

          And ease of use, and tape for those really important things that just must be stored no matter what. I’ve had a few bad experiences with removable HDDs, although to be fair there has always been a reason, mostly human error: people drop the things down a flight of stairs or something equally stupid. Hey, it does happen, and if it is something really important, tape is a lot faster in recovery than sending away a HDD for data recovery, even if that is possible from a security point of view.


        • #2710468

          Historical Archive

          by dpatillo ·

          In reply to What about removeable hard disk?

          As previously mentioned, what about historical data: i.e., last month’s database, last year’s payroll? Break out the tapes, baby, and restore the data.

    • #2704654

      File replication, DB replication and clustering

      by me ·

      In reply to Redundancy and clustering


      Here we use W2K (W2K3) DFS for file replication; it does a great job and the price is right (it’s free).

      We also use W2K Advanced Server for clustering SQL Enterprise DBs, and SQL log shipping features for database replication and availability.

      We also have a LAN extension between our sites and VPN backup links.

      But don’t forget that tape (removable media) backups are not something you (or anyone) can live without. They are mandatory for any real disaster recovery or auditing purposes (court trials, offline hacking detection after the fact), and for DB/email/file restoration.

      Don’t forget that replication is synchronous, so a big booboo at one end is also a big booboo at the other end!!!!

      So my 2 cents are:

      Back up often and on different media. Tivoli Storage Manager (5.2) is the best backup/archive solution on the market once you learn how it works!!

      P.S.: I’m in no way related to IBM, Tivoli, etc.

      Even though I sound like a big Microsoft dude, I’m actually a UNIX admin, and have been for 15 years. As much as I hate to admit it, MS is doing a good job with the features of SQL and W2K3.

      And as far as Linux goes you can do it all with Linux as long as you want to spend a bit of time configuring it.

      Linux is probably more stable than Windows, but if you’re not a UNIX guru, you’re probably better off with MS software.


    • #2704625

      Cost-Effective Recommendation

      by leonard_aj ·

      In reply to Redundancy and clustering

      The advances in file compression technology, coupled with DVD writing and re-writing capabilities, make them an ideal tool for replacing the old tape backup devices. If you have a high-powered system management PC with lots of drive capacity, consider setting up a backup strategy using PKWare or WinZip and archive the zip files to DVD or CD-R. Copy or zip across the network to your management PC during the lowest-usage time frame and zip/archive from there. Best of all, the data will be accessible from almost any workstation for disaster recovery.
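      For illustration, the zip-then-archive step could be scripted like this (a sketch using Python’s standard zipfile module rather than PKWare or WinZip; the dated naming scheme is an assumption, and burning the result to DVD/CD-R would be a separate step):

```python
import zipfile
from datetime import date
from pathlib import Path

def zip_backup(src: Path, out_dir: Path) -> Path:
    """Compress a directory tree into a dated, deflate-compressed zip."""
    archive = out_dir / f"backup-{date.today().isoformat()}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                # Store paths relative to src so the restore location is free.
                zf.write(f, f.relative_to(src))
    return archive
```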

      • #2704540

        Cost effective? No way….

        by highlore ·

        In reply to Cost-Effective Recommendation

        Have you ever considered the following?:

        1) If your server totally crashes, you need to get it back online. That isn’t possible in a fast way without the proper software. It can take a whole day when a server is down and a lot of programs are running on it (or even worse, an SQL database or an Active Directory global catalog server with several server roles on it).

        2) The time to put it all back in the right folders is going to kill me if I only have a DVD backup. I work lots of hours, and I don’t want even more overtime just because I used a slow, non-failsafe way to recover my server.

        3) Security on folders is lost the moment you use DVD media. I do NOT want to check all my folders, call all the managers to see who is authorised and who isn’t, and put those permissions back on all the folders. In my company I have about 15 servers running and we have GBs of data on all of them. There is no way I can get security back online quickly without a proper backup. BTW: I use Veritas on most of my servers and some servers have ARCserve running; I am satisfied with Veritas and less so with ARCserve, but the opinion differs for every admin, I guess.

        So, if you have a small office and you don’t need any security measures (which I find really dumb; even small businesses should take care of security, given viruses, hacks and trojans external and internal, not to mention disgruntled employees, etc.), it might be a way to back up SOME data on DVD that isn’t personal or may only be available to some people.

        Remember though that data on DVD can get lost quite easily, and you can’t really count on DVD-RW or DVD+RW to keep backing up forever.

        I personally think tape backup is still the medium to make proper backups (insurance, offsite storage, audits, hack attacks etc).

        DFS is nice to have available when a server is down (just make the link connect to the other server and voila), but it is only a failover plan, not meant to replace the backup medium.

    • #2704579

      It ain’t rocket science… so don’t make it into rocket science.

      by debunker ·

      In reply to Redundancy and clustering

      1) Frequently, systems that add system complexity for the purpose of providing real-time protection can end up *causing* the protected systems to fail more often due to the added complexity…. and then they’re harder to fix.

      2) The cost and complexity of doing real-time replication and failover is *much* higher than the cost of doing near-real-time (let’s say with an acceptable 30-minute replication lag). Reality check: people will spend 30-60 minutes going to lunch and another 30-60 minutes doing other non-work activities during the workday. So whenever some zealot tells me that we can’t tolerate even 5 minutes of downtime (ever) without causing major damage to the company, I recommend that he see his doctor for a lithium prescription.

      3) Any DR strategy that integrates the DR system with the host OS that’s being protected is just begging for trouble. It’s time to think in terms of backup appliances and replication appliances. Separate the protection function from the devices being protected. I have no doubt that the cost on these will come down rapidly in the next year or so.

      A good one to look at now is

      Their website is not particularly well organized or well written, but I’ve tried a demo system and been very impressed with how bulletproof and easy to use it is…. enough so to make me issue a P.O.

      I should mention that I’m currently using Veritas v9 with various add-ons. I liked the older Veritas version a lot better than v9. Somewhere along the line, Veritas got the peculiar idea that backups were rocket science and that they needed to be the star of the show. Wrong! No more purchase orders for you….

    • #2704434

      NAS is IT!! But what about budgets??

      by aragon.elessar ·

      In reply to Redundancy and clustering

      NAS servers are a brilliant way to separate processing from storage. It’s more like connecting a network printer: even if your server fails, data transfer between clients and the NAS will continue, thus ensuring fault tolerance. Since the NAS has its own OS (Windows Storage Server 2003, for instance), the ACLs on files and folders are also controlled there.
      But NAS is pretty expensive when it comes to implementation!! So give it a thought if you have budget issues!!

    • #2710905

      Let’s see,

      by robert ·

      In reply to Redundancy and clustering

      Hey guys, DR is what it is: the best you can do (just like with your kids 🙂 ). Tapes were used in the days of high costs for data storage and were not always reliable, but they were the best you could do within budget. Now we have many options to go with that are not as costly as in the days of yore. We will still use tape, but only as the last piece of the chain, so to speak. We are going with Intradyn RocketVaults as the backup and co-standby for the servers, for failover and backup at 3 different locations; of course the tape will back up the Intradyn, but it will not be considered the first choice in restoring when a failure occurs. So just do the best you can and get some sleep.

    • #2711682

      Redundancy and clustering

      by adriano.herberth ·

      In reply to Redundancy and clustering

      Well, you have to write a to-do list. First, secure your closest servers by making replicas of them and joining both to a cluster environment, since you are running 2K3 core servers. For your database, if it is Oracle, try RAC. Test everything: unplug cables, move resources to the other node in the cluster. Then use the T1 to reach your disaster recovery site, a small network consisting of the servers in NJ for example. If you have a router (Cisco Catalyst 6500), you can implement disaster recovery to this site, or configure a new one to substitute in case of failure, pointing to your remote site. Remember, there is no 0 downtime if a tornado comes to your place.

    • #2711650

      safety net with mixed media

      by chasster ·

      In reply to Redundancy and clustering

      What are you referring to as tape drives?
      I’d assume DLTs or 8mm.
      Are you aware that there are tape drives that sustain 20+ MB/sec transfer rates to their media?
      We have found that in large media-set environments, a mixture of backup (media) types usually provides a high level of data safety.
      A lot depends on the data and the environment.
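      A quick way to sanity-check a backup window against a drive’s sustained rate (a sketch only; it ignores compression, shoe-shining and verify passes, so treat the result as a lower bound):

```python
def backup_hours(data_gb: float, rate_mb_per_s: float) -> float:
    """Hours needed to stream data_gb gigabytes at a sustained MB/s rate."""
    return (data_gb * 1024) / rate_mb_per_s / 3600

# Example: the 23 GB Lotus Notes backup mentioned earlier in the thread
# would stream in roughly 20 minutes at a sustained 20 MB/sec.
```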

    • #2711646

      Try Stratus ftServers

      by leen2 ·

      In reply to Redundancy and clustering

      In terms of internal server redundancy, 0 downtime in reality is not obtainable; however, for Windows-based servers the Stratus ftServer line of systems is the only option that truly achieves 99.999% uptime.

      Stratus systems are also simpler to implement and maintain than Windows clusters and actively monitor themselves. In the event of a hardware component failure there is no “blip on the radar screen”, no delay for fail-over and no loss of system performance.

    • #2711634

      disk as tape

      by lunatick ·

      In reply to Redundancy and clustering

      What about disk as tape? Instead of using a tape rotation, you use a disk rotation: you get history, off-site storage, inexpensive media, fast restore…. The only downside I see is that they get angry if you drop them….

      any thoughts?
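
      For what it's worth, the rotation logic is the same whichever medium you use. Here is a sketch of a simple grandfather-father-son labelling scheme for a pool of removable disks; the pool layout and label strings are made up for illustration.

```python
from datetime import date

def rotation_label(d: date) -> str:
    """Grandfather-father-son rotation: a monthly disk on the 1st,
    a weekly disk each Friday, and daily disks (reused every week)
    for the remaining days."""
    if d.day == 1:
        return f"monthly-{d:%Y-%m}"           # kept off site long term
    if d.weekday() == 4:                       # Friday
        return f"weekly-{(d.day - 1) // 7 + 1}"
    return f"daily-{d:%a}"                     # e.g. daily-Mon, overwritten weekly
```

      Asking `rotation_label(date.today())` each evening tells the operator which disk to load, and the label itself tells you how long that copy survives before it is overwritten.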

      • #2710080

        So does tape

        by jclaypool ·

        In reply to disk as tape

        Try dropping a DLT from about three feet and you may find that it, too, will not read properly, just as a dropped drive may not survive.

        I use a combination of primary backup to disk, with a disaster recovery backup to tape. This minimizes my backup/restore time for day to day operations, while providing disaster recovery insurance.

      • #2709993

        I’ve had a few fail

        by hal 9000 ·

        In reply to disk as tape

        But it has always been human error, which is something I try to remove from any DR plan since it is the weakest link. Those removable HDDs are nice and secure while on site but are subject to damage while being transported off site. In one case recently the HDDs were in a car that was struck by a semi; all of them died an unnatural death, but the tape cartridges lived to store another day.


    • #2710244

      Veritas Cluster Server for 0 downtime

      by acolb ·

      In reply to Redundancy and clustering

      We’ve been using Veritas Cluster Server (VCS) for Windows and Exchange for 3+ years (and the Solaris version even longer). We run two Windows server clusters: a three-node cluster that supports Exchange (Enterprise Edition) failover on two nodes and file/print services on two nodes, with one node in common, and a two-node VCS cluster that protects our email gateway server apps: Outlook Web Access, MailSweeper SMTP, and BlackBerry services.

      By the way, we are a small organization: 180 staff total.

      Over the past year, Exchange has been down in prime time about 2 hours (our own goof-up, not VCS), and the gateway not at all.

      This solution requires an organization that can tolerate the very occasional 3-4 minute failover time in exchange for lots of benefits. One benefit is the ability to carry out invasive changes during work hours while user services run on the alternate node; we did this recently to swap out a server main board. Another big benefit for us is administrator flexibility, since both the Solaris and Windows admins can manage failovers. (We also use Veritas Volume Manager on both Unix and Windows, so there are additional admin unifications there.)

      We don’t cluster SQL Server, but the agent is available. Clustering of Windows printer services is the trickiest and most fragile part of our configuration.

      We are also looking at a “perfect” solution for Exchange DR at our Business Continuity (T1-connected) site using Veritas Volume Replicator and Global Cluster Manager; this solution requires Volume Manager and VCS as prerequisites.

      Hope this is useful.

      Andy Colb

    • #2710189

      Have you considered backup appliances

      by colomike ·

      In reply to Redundancy and clustering

      Replication is great for backup and recovery, but it does not address disaster recovery or archiving. There are a few products out there that provide fairly elegant solutions of varying complexity. Tivoli Storage Manager (TSM) from IBM provides a nice utility for managing backups at the file level. It allows for online and offline disk and tape pools, where backup data flows through levels of storage: you can keep the last n days of backed-up documents on disk while pushing lesser-used documents onto tape. At the same time you can define your DR and archiving tapes, which are managed and handled automatically for you. The only full backup performed is the first time the system is put into motion; from then on the system manages incremental backups and off-site storage as well as tape reclamation. TSM is a pretty sweet product. Since backups are handled at the file level, you can even allow your users to restore accidentally deleted or modified files through client software.
      Unfortunately, TSM is a bit of a bear to get a handle on. STORServer, a company out of Colorado Springs, CO, has created a backup appliance: a combination of disk and tape storage that uses TSM as its management application. They have scripted a highly usable front end that removes all of TSM's complexity, and setup typically takes less than a week. The system comes with a relevant price tag, but the savings in tape usage often pay for it within a year, to say nothing of the time freed up managing the system.
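
      The "incremental forever" idea described above is easy to illustrate. This sketch shows the general principle only (it is not TSM's actual algorithm): after the initial full pass, each run selects just the files modified since the last backup timestamp.

```python
import os

def incremental_candidates(root: str, last_backup: float):
    """Walk the tree and yield files changed since the last backup
    timestamp. Passing last_backup=0 matches everything, which is
    the one-and-only full backup; every later run is incremental."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > last_backup:
                    yield path
            except OSError:
                pass  # file vanished mid-walk; the next run will catch it
```

      A real product also has to track deletions, versions, and retention policy, but the selection step above is why the nightly window shrinks so dramatically once the first full backup is done.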

    • #2710177


      by bbautista808 ·

      In reply to Redundancy and clustering

      Six months ago I was a college graduate with no network experience looking to learn and gain the skills I needed to succeed in the field. I do not consider myself an expert but I do know a thing or two about stabilizing a network. This reply is just a brief on some things we are doing to increase stability in our org.

      When I first came into the organization (1000+ users), things were also a mess. Our alert system (Big Brother) would send us alerts on a daily basis telling us that something was going wrong. Since then the number of alerts has significantly decreased, thanks to rebuilds, repairs, and the implementation of brick-level backups and redundancy.

      We have several NAS systems handling the backups of our file servers and any brick-level backup needs. Our NAS implementation gives us the resources to reduce downtime and recover from catastrophe, though recovery still takes time. It also provides the option of recovering data from a snapshot of some point in time, in case any critical data is lost. RAID and clusters may provide redundancy, but if the data itself is lost you will just have redundancy of lost data; that is where the backups come in handy.

      We have also recently implemented an entry-level Dell/EMC SAN, our biggest investment toward stability yet. Among the systems running off it is a two-node active/passive Exchange 2000 cluster on Win2K Advanced Server. Configured correctly, Microsoft's Cluster Service actually does a decent job of doing what it's supposed to. The SAN has several RAID (1 & 5) sets with multiple LUNs, redundant power, and redundant fibre switches, and our Exchange servers connect to it via HBA cards. If you are looking for near-zero downtime, you should look into this. You can even go as far as adding RAID sets to each node in the cluster if you wish.

      Total redundancy is something everyone wishes they could have and if money were no object then it would be no problem. You should do a thorough cost-benefit analysis when pursuing this. Just remember that no matter how much redundancy you have, there is still Murphy’s Law.
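
      The point-in-time recovery mentioned above boils down to picking the newest snapshot taken before the data was damaged: anything later would just restore the damage. A sketch, with made-up snapshot names and times:

```python
from datetime import datetime

# Hypothetical snapshot catalog: name -> time it was taken.
SNAPSHOTS = {
    "snap-0800": datetime(2004, 9, 1, 8, 0),
    "snap-1200": datetime(2004, 9, 1, 12, 0),
    "snap-1600": datetime(2004, 9, 1, 16, 0),
}

def restore_point(snapshots, lost_at):
    """Newest snapshot strictly before the loss; None means every
    snapshot postdates the damage and you fall back to tape."""
    candidates = [(taken, name) for name, taken in snapshots.items()
                  if taken < lost_at]
    return max(candidates)[1] if candidates else None
```

      The gap between snapshot times is your worst-case data loss, which is the knob you trade off against the disk the snapshots consume.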

    • #2724206

      NAS and Server Imaging

      by nosreppih ·

      In reply to Redundancy and clustering

      Has anyone considered combining NAS devices with server imaging applications?

      I have used a combination of V2i and Snap Appliance NAS, and it works very well. The V2i server image is compressed (2:1) and gives you the ability to image the whole server, O/S and data, or either one separately, as you wish. Restore is simple and fast. Combine this with the ability to replicate the Snap Appliance device to another, either local or remote, and you have a good basic solution that provides fast backup and restore as well as DR.

      We only re-image the O/S as part of change control, so normally only the data is re-imaged each time.

    • #2706057

      What is the size of your critical data?

      by imaginetsecurity ·

      In reply to Redundancy and clustering

      This thread will bounce all over the place (as you can already see) unless you can provide some specifics to your situation.

      First, does your company actually have a disaster recovery plan in place? If not, you should get one going. It will include the backup and data restoration processes germane to this thread.

      Secondly, is your company subject to any regulatory compliance such as HIPAA, GLBA, or SOX? This impacts your DR plan and backup requirements; e.g. archiving requirements for data or email can vary from one regulation to another, as can the required retention periods.

      Thirdly, how many servers are involved and what is the cumulative size of critical data you need to recover in your situation?

      This will allow the rest of us to provide better responses to you rather than expound whether SANs are better than NAS or tape, etc.

      • #2706366

        Qty servers and storage size

        by bletterman ·

        In reply to What is the size of your critical data?

        I have six servers, of which two are critical. The critical data is about 60 GB. We do fall under HIPAA, and we are just beginning to develop our disaster recovery plan. I have two ISPs: one cable and one T1. I have a remote office about two miles away that could house some remote servers.

    • #3305663

      Take a look at Marathon Technologies

      by jmechling ·

      In reply to Redundancy and clustering

      Marathon’s software ties two standard Windows servers together into a fault-tolerant platform for Windows apps just like the ones you are running. No failover (it’s not a cluster), zero downtime, even online repairs, and no proprietary hardware. Your users will never notice a blip.
