Storage

Overhauling the server infrastructure: More details on virtualization and handling storage

Scott Lowe is remaking his server environment and reporting on the progress of his project. Here, he provides more details on his iSCSI-based SAN and AX4 arrays.
A few days ago, I wrote a blog posting outlining some of the general steps that I am taking in my organization to remake my server environment. Reader IT Generalist provided the following feedback:

"Do you also plan on virtualizing messaging, database or any other process intensive application and hosting it off of SAN? What about Domain Controllers and Active Directory? Is your SAN iSCSI based? I would appreciate if you can shed some light on the assessment process, desgin and configuration. Thanks!"

In this posting, I'll tackle these questions and provide a little more detail about how we're moving ahead with our project.

I'll start with the SAN question: Yes, our SAN is iSCSI-based. I've worked with iSCSI SANs in larger organizations with outstanding success. Previously, I was using a single EqualLogic PS200e array with around 4 TB of capacity. This unit had dual uplinks on dual controllers, with the controllers working in active/passive mode. The result: 2 Gb of uplink.

In that scenario, we were running close to 2,000 Exchange mailboxes under Exchange Server 2003, a number of SQL Server databases, roaming desktop profiles, files from a file server, and a few virtual machines hosted on Virtual Server 2005. The EqualLogic array held 15 disk spindles.

In my current organization, we moved ahead with two EMC AX4 arrays. The primary array has 12 400 GB SAS disks and active/active controllers, each with two 1 Gb uplinks, for a theoretical total of 4 Gb/s of uplink. The secondary array hangs off the primary and holds 12 750 GB SATA disks. We are running 1,200 mailboxes on Exchange Server 2007, which has a lower I/O footprint than Exchange Server 2003. We are also running a number of SQL Server databases, but very little in the way of high-transaction workloads. We also have file servers that will use the AX4s for storage, plus a number of VMware ESX-based virtual machines. The catch: We have not yet implemented the AX4s. The boxes arrived just last week, and some other things have taken priority over installation. Once we get these in place, I'll report back about the experience.
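
For anyone who wants to check the arithmetic, here is a quick back-of-the-envelope sketch in Python comparing the old and new uplink ceilings and the raw shelf capacities. It ignores TCP/IP, iSCSI, and RAID overhead entirely, so treat the numbers as theoretical maximums rather than what you would see in production.

# Theoretical ceilings only; protocol and RAID overhead are ignored.
GBIT_TO_MBYTE = 1000 / 8  # 1 Gb/s is roughly 125 MB/s of raw line rate

def uplink_mb_per_s(active_uplinks, gbit_per_uplink=1.0):
    """Aggregate theoretical throughput across the active 1 Gb iSCSI uplinks."""
    return active_uplinks * gbit_per_uplink * GBIT_TO_MBYTE

def raw_capacity_tb(disk_count, gb_per_disk):
    """Raw (pre-RAID) capacity of a shelf of identical disks, in TB."""
    return disk_count * gb_per_disk / 1000

print(uplink_mb_per_s(2))          # old PS200e, active/passive: ~250 MB/s ceiling
print(uplink_mb_per_s(4))          # primary AX4, active/active: ~500 MB/s ceiling
print(raw_capacity_tb(12, 400))    # SAS shelf: 4.8 TB raw, before RAID
print(raw_capacity_tb(12, 750))    # SATA shelf: 9.0 TB raw, before RAID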

Everything described above will run, in my opinion, quite well on this pair of AX4 arrays. Based on my experience with iSCSI and our I/O needs, I am fully confident that the storage solution will exceed my expectations. I've written previously about my reasoning behind choosing the AX4 over other devices on the market and have received a number of comments asking things like, "Why didn't you choose product X?" Simply put: budget. The choice came down to doing nothing vs. moving ahead with something affordable for the long term that I felt met my organization's needs. Hence, the AX4s were shipped.

The second part of the question asks what services I plan to run where and how storage will be handled. In a perfect world, from the storage side of the house, I'd eventually like to see everything running from the SAN. We added Navisphere Manager and Snapshot Manager to our AX4 order, providing us with more management capability than is provided on a bare bones AX4. Ultimately, this means that all critical services can be protected by snapshots. Because we went the iSCSI route with the AX4, we can't do full array replication, but, from a budget perspective, we won't be looking at significant disaster recovery efforts for two or three more years anyway.
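
To give a feel for the planning math behind snapshot protection, here is a rough, vendor-neutral sketch in Python for estimating how much reserve space a snapshotted LUN needs. The LUN size, change rate, and retention window below are purely illustrative assumptions; Navisphere and Snapshot Manager handle the actual reserve on the array, and you would substitute measurements from your own environment.

def snapshot_reserve_gb(lun_size_gb, daily_change_rate, retention_days):
    """Rough copy-on-write reserve estimate: changed data held for the retention window."""
    return lun_size_gb * daily_change_rate * retention_days

# Hypothetical example: a 500 GB database LUN changing about 5% per day,
# keeping three days of snapshots.
print(snapshot_reserve_gb(500, 0.05, 3))  # 75.0 GB as a starting estimate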

Now, as for our services, let me make liberal use of bullet points:

  • Domain controllers: We have two domain controllers. One runs as a virtual machine on an ESX host while the other is a physical machine. I love and trust virtualization, but will continue to host at least one physical domain controller. Once the SAN is in place and we've upgraded to Virtual Infrastructure 3.5 Enterprise (we're at 3.0 Starter now), the virtualized domain controller will be better protected since we'll have VMotion capability. Why will we keep a physical DC around? Comfort.
  • Messaging servers: We run a single Exchange Server 2007 server right now with a direct-attached SAS RAID array with 2.5 TB of capacity. It's working great, serving about 1,200 mailboxes, and also acts as our Unified Messaging server. For the future, the Exchange environment will stay physical--that is, I won't run the actual server under ESX. However, we will be evaluating a move from the direct-attached storage to the SAN, or alternatively, adding a clustered mailbox Exchange server that uses the SAN for its storage. The second option would require a fair amount of additional work, because we would need to move all non-mailbox roles off our existing Exchange server to a new server; clustered mailbox servers can have only the mailbox role installed.
  • Databases: As soon as we can, all of our databases will move to the SAN. Our I/O requirements for all of our database applications are relatively low, and the SAN can handle them with ease. The major driver here is the ability to use snapshots. A mid-day snapshot of a major database can be a boon when it comes to correcting a problem. In my previous organization, we had taken a 3 PM snapshot of a database that a user then managed to corrupt at 3:10 PM. Imagine my delight when we were able to recover the database with only 10 minutes of data loss! (There's a quick sketch of this recovery-point arithmetic just after this list.)
  • Virtualized servers: We've virtualized a number of our low-use or older servers in an effort to improve the reliability of our infrastructure. The choice for us was to keep running a 6- or 7-year-old server or to virtualize it onto new hardware. We chose the latter and continue to back these machines up as if they were physical, but we have much less risk of hardware failure now. However, we have put more eggs in one basket: if the new host fails, a number of services go dark. To combat this, we'll run all of our virtual machines from the new SAN and get VMotion in place to help us keep things in working order. We will continue to assess our environment and will virtualize as many services as make sense as we move forward.
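
Here is the quick sketch mentioned in the Databases bullet: a few lines of Python showing the recovery-point arithmetic behind snapshot protection. Nothing here is array-specific; it is just the math that made that 10-minute recovery possible.

def worst_case_loss_minutes(snapshot_interval_minutes):
    """With snapshots every N minutes, a failure just before the next one loses up to N minutes."""
    return snapshot_interval_minutes

def actual_loss_minutes(snapshot_hhmm, failure_hhmm):
    """Minutes of changes lost when rolling back to the most recent snapshot."""
    snap_h, snap_m = map(int, snapshot_hhmm.split(":"))
    fail_h, fail_m = map(int, failure_hhmm.split(":"))
    return (fail_h * 60 + fail_m) - (snap_h * 60 + snap_m)

print(actual_loss_minutes("15:00", "15:10"))  # the 10 minutes of loss described above
print(worst_case_loss_minutes(60))            # hourly snapshots risk up to an hour of changes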

I hope this sheds some additional light on our project. I would love to hear additional questions from you.

1 comment
IT Generalist

Scott, thank you so much for answering my questions and providing me with all the details. Please keep me posted on your progress with moving to VMware Infrastructure 3.5, moving the Exchange database to the SAN, and the other upgrades. The only question that I have is about array replication. In this post you mentioned that you couldn't do full array replication because you went the iSCSI route with the AX4, so I was wondering if that is a product limitation? As I understand it, iSCSI is routable because it encapsulates the SCSI protocol in TCP packets and can be routed to other networks for replication purposes.