Previously in my writing on storage topics, I shared with you the decision-making process that resulted in my purchase of an EqualLogic PS200e iSCSI storage array. For that series, the work was already done. I had already purchased and installed the unit, so it was easy to write a nice, step-by-step series of articles on the project.
Now, I need more storage. And I've also changed jobs, so I can start with a clean slate. After all, even though the EqualLogic unit was my best choice in my previous environment, it may not be as good a fit for me this time around. Further, I have not even come close to a decision or solution for my new environment. Heck, I haven't even defined all of the requirements yet. So, I'm going to share with you the whole process as it goes along. Of course, this won’t be in 24-style real-time (I’m not Jack Bauer… I do need to eat and sleep!), but it will be as close as I can get. I hope that this series of articles helps you make similar decisions in your organization.
I’ll start with a brief description of where I work. I’m the CIO for a private liberal arts college located in the Midwest. The IT department supports 950 students and about 200 faculty and staff. We run the usual variety of applications, including Exchange, SQL Server-based administrative applications, and so forth. We also support other typical IT functions, such as file serving. In short, we’re a pretty normal IT shop.
We do have some serious storage challenges, though. Our users currently have pretty low mailbox-size limits—30 MB to 40 MB. Now, bear in mind that we’re asking students to use e-mail as one of their primary communication mechanisms. With Google and everybody else under the sun providing free mailboxes in the 2-GB range, our students understandably turn away from our service. I don’t know if we can match these free services, but our current limits, by any measure, are paltry.
On the file storage front, we use a NetApp filer that has just a tad over 500 GB of available space—not a whole lot in today’s world, especially when it comes to storing rich media. While we’re not in imminent danger of running out of space on the filer, we clearly need more space for future needs.
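To put those mailbox numbers in perspective, here’s a rough back-of-envelope estimate of the raw mail storage various per-user quotas would require for our roughly 1,150 users. The user counts come from the figures above; the candidate quota levels are my own illustrative assumptions, not decided targets, and real-world needs would differ since few users fill their quota.

```python
# Rough worst-case mail storage estimate: every mailbox at its quota.
# User counts are from the article; the quota levels below are assumptions.
USERS = 950 + 200  # students plus faculty/staff

def total_storage_gb(quota_mb: int, users: int = USERS) -> float:
    """Raw capacity in GB if every user's mailbox hits the quota."""
    return users * quota_mb / 1024

# e.g., a 1-GB (1024-MB) quota implies 1,150 GB of raw capacity.
for quota_mb in (40, 250, 1024, 2048):
    print(f"{quota_mb:>5} MB quota -> {total_storage_gb(quota_mb):,.0f} GB total")
```

Even a modest 250-MB quota would already outstrip a filer with only 500 GB free, which is part of why mailbox limits and file storage have to be solved together.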
We’re also starting to talk about virtualizing some of our servers, both to decommission unsupported hardware and to provide high availability for some services. For us, high availability generally means VMware ESX’s VMotion, which in turn requires an underlying SAN to hold the virtual machine files.
Compounding these storage challenges is our backup system. We currently use a couple of older DLT-based tape drives and BackupExec 8.5. The college has opted to skip maintenance on the software, so as part of our storage strategy, we need to implement a new backup solution as well. I won’t talk as much about backup in this series, but I will share our final choice at the end.
Why not stick with direct-attached?
If all else fails, I could just stick with tried-and-true direct-attached storage on each of my servers. However, I feel that this would be pretty short-sighted. With direct-attached storage, I would not gain any of the benefits normally associated with shared storage, including:
- Centralized space allocation from a large pool of storage.
- The ability to use certain high-availability features, such as VMware VMotion.
- The ability to cluster certain applications, many of which require shared storage.
- Centralized snapshots.
- Disaster recovery features.
Personally, I feel that a shared storage solution is simply better than constantly adding disks to individual servers … and then hoping the space ends up in the right place. Having successfully migrated to a centrally managed, shared storage solution in my previous position, I’ve put it high on the "to do" list in my new job.
In order of importance, my storage solution has to meet the following criteria:
- Provide block-level shared storage easily connected to servers in my organization.
- Be fully redundant, able to withstand the failure of any single component (e.g., a power supply or switch).
- Not break the budget. (Can you say "cheap"?)
- Provide some level of snapshots.
- Provide some level of disaster recovery capability (e.g., replication).
I don’t think any of these requirements conflict with one another. In short, I’m looking for a highly available, cheap solution that provides some enterprise-grade features.
I don’t think that my environment is all that unusual when it comes to these issues in the SMB market. If I were working at a massive multinational conglomerate, I might just whip out the credit card and buy a couple of high-end EMC SANs. But, like many of you, I work in a small organization with limited IT resources. In my next article in this series, I’ll start looking at options for achieving storage nirvana.