New storage project, part one: Assessing the challenges

Scott Lowe is ready to tackle a new storage project as CIO of a private liberal arts college. Follow along as Scott documents his progress, beginning with this assessment of the current situation and looking ahead to the implementation that will meet his organization's storage needs.

Previously in my writing on storage topics, I shared with you the decision-making process that resulted in my purchase of an EqualLogic PS200e iSCSI storage array. For that series, the work was already done. I had already purchased and installed the unit, so it was easy to write a nice, step-by-step series of articles on the project.

Now, I need more storage. And I've also changed jobs, so I can start with a clean slate. After all, even though the EqualLogic unit was my best choice in my previous environment, it may not be as good a fit for me this time around. Further, I have not even come close to a decision or solution for my new environment. Heck, I haven't even defined all of the requirements yet. So, I'm going to share with you the whole process as it goes along. Of course, this won’t be in 24-style real-time (I’m not Jack Bauer… I do need to eat and sleep!), but it will be as close as I can get. I hope that this series of articles helps you make similar decisions in your organization.

The environment

I’ll start with a brief description of where I work. I’m the CIO for a private liberal arts college located in the Midwest. The IT department supports 950 students and about 200 or so faculty and staff. We run the usual variety of applications, including Exchange, SQL Server-based administrative applications, and so forth. We also support other typical IT functions, such as file serving. In short, we’re a pretty normal IT shop.



We do have some serious storage challenges, though. Our users currently have pretty low mailbox-size limits—30 MB to 40 MB. Now, bear in mind that we’re asking students to use e-mail as one of their primary communication mechanisms. With Google and everybody else under the sun providing free mailboxes in the 2-GB range, our students understandably turn away from our service. I don’t know if we can match these free services, but our current limits, by any measure, are paltry.
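The gap between our limits and the free services is easy to quantify with back-of-the-envelope math. This sketch uses the user counts from above; the quota figures passed in are illustrative, and the worst-case assumption (every user filling their quota) is mine, not a sizing formula from any vendor:

```python
# Rough mailbox capacity estimate. User counts are from the article;
# quotas and the "everyone hits quota" worst case are assumptions.
STUDENTS = 950
FACULTY_STAFF = 200

def raw_mail_storage_gb(quota_gb):
    """Raw mailbox storage needed if every user fills their quota."""
    return (STUDENTS + FACULTY_STAFF) * quota_gb

current = raw_mail_storage_gb(0.04)  # today's ~40 MB cap
target = raw_mail_storage_gb(2.0)    # matching the free 2-GB services

print(f"current worst case: {current:.0f} GB")
print(f"2-GB quota worst case: {target:.0f} GB (~{target / 1024:.1f} TB)")
```

Even before RAID overhead, snapshots, or growth, matching a 2-GB quota for roughly 1,150 users means planning for a couple of terabytes for mail alone, which helps explain why direct-attached storage starts to look cramped.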

On the file storage front, we use a NetApp filer that has just a tad over 500 GB of available space—not a whole lot in today’s world, especially when it comes to storing rich media. While we’re not in imminent danger of running out of space on the filer, we clearly need more space for future needs.

We’re also starting to talk about virtualizing some of our servers, both to decommission unsupported hardware and to provide high availability for some services. For us, high availability means the ability to use VMware ESX’s VMotion, which in turn requires an underlying SAN to hold the virtual machine files.

Compounding these storage challenges is our backup system. We currently use a couple of older DLT-based tape drives and BackupExec 8.5. The college has opted to skip maintenance on the software, meaning that as a part of our storage strategy, we need to implement a new backup solution as well. I will not be talking as much about the backup solution in this series, but will provide you with our solution at the end.

Why not stick with direct-attached?

If all else fails, I could just stick with tried-and-true direct-attached storage on each of my servers. However, I feel that this would be pretty short-sighted. With direct-attached storage, I would not gain any of the benefits normally associated with shared storage, including:

  • Centralized space allocation from a large pool of storage.
  • Ability to make use of some high-availability applications, such as VMware VMotion.
  • The ability to cluster certain applications, which often requires shared storage.
  • Centralized snapshots.
  • Disaster recovery features.

Personally, I feel that a shared storage solution is simply better than constantly throwing more space into servers … and then hoping it’s all in the right place. Having successfully migrated to a centrally managed, shared storage solution in my previous position, I’ve put it high on the "to do" list in my new job.

What’s important?

In order of importance, my storage solution has to meet the following criteria:

  1. Provide block-level shared storage easily connected to servers in my organization.
  2. Be fully redundant, able to withstand the failure of any single component (a power supply, a switch, etc.).
  3. Not break the budget. (Can you say "cheap"?)
  4. Provide some level of snapshots.
  5. Provide some level of disaster recovery capability (i.e., replication).

I don’t think any of the items on this list are mutually exclusive. In short, I’m looking for a highly available, cheap solution that will provide some enterprise-grade features.


I don’t think my environment is all that unusual when it comes to these issues in the SMB market. If I were working at a massive multinational conglomerate, I might just whip out the credit card and buy a couple of high-end EMC SANs. But, like many of you, I work in a small organization with limited IT resources. In my next article in this series, I’ll start looking at options for achieving storage nirvana.


For the environment you describe, if you intend to invest in IT resources in the future, I would suggest going with HP's latest NAS product, called HP AiO (All-in-One), which provides plenty of disk space along with iSCSI support. The applications you're using, like Exchange and SQL, can easily be migrated, and disk capacity can be increased at any time. Even the backup is fully automated and uses a snapshot feature. You can use it for both file-level and block-level storage, so it's very well suited to your kind of environment. It's basically NAS and SAN put together with the SMB market in mind. Hope this solution is useful.


Check out the Infrant ReadyNAS; I think it fits your requirements and won't break the budget. I'd be happy to e-mail you more information, or you can visit the website. Good luck, -Tom


I am in a similar boat myself. While we have implemented a fundamental change to our backup and DR process, moving from the very poor CA ARCserve to CommVault, we have been able to incorporate top-of-the-range disk-to-disk backups, currently at 17:1 compression, with a copy going to LTO tape and then archived to UDO drives. Storage is my next project; we are running short on space generally, so your findings will be interesting.


I am going through almost the exact same process and dealing with the same questions. I'm greatly interested in your progress and process. I'll be watching and waiting, but don't hurry on my account. lol.


I am in somewhat the same boat at my company. One of our sister companies is a library, and we recently created a Digital Collections department whose job is to take posters, artifacts, medals, prints, photos, etc., and scan them into high-res digital images. Of course, this is going to eat up a lot of storage, so I need to look into some sort of NAS or SAN. I will be interested in seeing what you come up with in your job.


The same is the case with NetApp: you can use FC, iSCSI, and CIFS on one appliance. The snapshot and SnapMirror features can be used as well and are actually very useful. You also have the option of reclaiming space on the storage with deduplication. NetApp's strength is its intuitive, easy administration.


In the NetApp case, you can use the same appliance for both. We're a relatively small shop too: 700 users in six locations around the country. When we weighed cost versus manageability and trained resources, we decided to stick with NetApp and use it for both SAN and NAS. Even with support, a NetApp appliance is relatively cheap compared to the alternatives. We have our Exchange and SQL servers boot from SAN, and our AIX and Solaris boxes are attached to the filer, as are our 12 ESX servers, all over Fibre Channel. For a small shop, NetApp is pretty good and cost effective. If you have a high-performance database that needs the fastest possible disk access on the planet, it's not your choice! Go look at Hitachi and pay a 30 percent markup for the name, or for a cheaper option look at the HP EVA or XP series, which use Hitachi systems with HP StorageWorks (really Compaq, before HP bought them) management software: Hitachi for less! MPIO and clustered heads with NetApp are a must. Pretty much everything maintenance-related on the filer requires you to halt the head (stopping it from serving data) and reboot it. If you have MPIO and clustered heads, you're OK and everything continues to run fine; if not, you have to power all your hosts down before you can reboot the head.


Been there. Mid-sized public university in the Midwest, roughly 9,000 students and 1,500 faculty. 50-MB e-mail boxes for students, and we were losing e-mail usage to Gmail and Hotmail. To match your situation even more closely, we had backup software and a failing SAN that were somewhere between garbage and trash. We ended up with Legato NetWorker backing up to disk and then to tape, VMware ESX virtualizing 10 servers, a new Exchange 2003 deployment (upgraded from 2000 in the process), and a Dell/EMC CX500 Fibre Channel SAN. If I had it to do over again, I'd virtualize more and buy more storage. Maybe go with Backup Exec or get more training on Legato. Did I mention virtualize more?
