By Kachina Dunn

With Joe Hernick, an IT manager at a Fortune 100 company who has 12 years of consulting and project management experience in data and telecommunications. He recently examined the business cases for storage area networks (SANs) and their alternative, network-attached storage (NAS). Both methods are being used to replace more distributed storage, which, in Hernick's mind, is more costly and less efficient.

This interview originally appeared in the IT Business Edge weekly report on Maximizing IT Investments. To see a complete listing of IT Business Edge weekly reports or sign up for this free technology intelligence agent, visit

You make a strong case for centralizing all storage resources to avoid problems and needless costs of distributed tapes, disks, arrays, and switches. Is it possible to quantify the average gain an enterprise can realize? Is greater reliability the heart of the business case for centralized storage?

Hernick: Five-nines (99.999 percent) reliability is often the strongest justification for a SAN implementation, coming in ahead of savings from staffing costs and simplified management. All businesses should be able to quantify the costs of downtime on critical systems and run them against the reliability stats for centralized vs. distributed storage. The higher the cost impact of downtime, the clearer the business case for high-reliability solutions such as NAS or SAN.
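Hernick's downtime-vs.-reliability comparison can be sketched as simple arithmetic. The sketch below is illustrative, not from the interview: the availability levels and the $1,000-per-minute impact figure are assumptions you would replace with your own tracked numbers.

```python
# Illustrative sketch: expected yearly downtime cost at a given availability level.
# The availability figures and cost-per-minute value below are assumptions.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def annual_downtime_minutes(availability: float) -> float:
    """Expected minutes of downtime per year at a given availability (e.g. 0.99999)."""
    return MINUTES_PER_YEAR * (1 - availability)

def annual_downtime_cost(availability: float, cost_per_minute: float) -> float:
    """Expected yearly cost of outages on a critical system."""
    return annual_downtime_minutes(availability) * cost_per_minute

# Compare a distributed setup at "three 9s" against a SAN at "five 9s",
# assuming (illustratively) $1,000 of business impact per minute of downtime.
for label, avail in [("distributed (99.9%)", 0.999), ("SAN (99.999%)", 0.99999)]:
    print(f"{label}: {annual_downtime_cost(avail, 1000):,.0f} USD/year")
```

Five nines works out to roughly five minutes of downtime a year versus more than eight hours at three nines, which is why the cost gap dominates the business case when outages are expensive.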

Many enterprises maintain a 20-percent reserve in storage capacity to handle short-term growth. But you tend to discourage this policy in favor of more virtualized storage, where different departments or user groups pull from a “storage pool.” Even with all the bells and whistles, isn’t IT setting itself up for overload managing, sorting and prioritizing everyone’s storage needs?

Hernick: Simple answer: Yes. Storage management discipline is required for success when using a centralized pool for cost avoidance; the upside is that centralized management tools simplify the care-and-feeding tasks. In a stable shop with steady growth curves, on-demand storage allocation works well. In a worst-case scenario, where all managed environments are highly volatile and requirements just can't be nailed down, managers can build out their centralized solution with a 20-percent buffer. Their ROI would take a hit due to the extra capital expense on surplus storage, but the risk is mitigated and there would be fewer sleepless nights.

You also once said that the reliability and performance improvements that come with more centralized, networked storage carry a higher cost per MB than older legacy methods. How might enterprise IT go about calculating whether it's worth the expenditure?

Hernick: That's taken a bit out of context. My message was that base costs per MB are higher from a nuts-and-bolts perspective. But total cost of ownership is often much lower in the long run for centralized solutions, especially in big 24×7 shops. The first step in cost justification is to take a brutally honest look at your current spend on storage. Track all staff planning and support time, monitor the financial impact of outages, and plot your costs against growth over time. Then run your numbers for distributed vs. centralized. Most SAN vendors have planning tools—take their output with a grain of salt, or play out scenarios of your own in Excel to see if SAN/NAS makes sense in your situation.
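The TCO comparison Hernick describes can be played out as a small model instead of (or alongside) a spreadsheet. Every dollar figure and growth rate below is an illustrative assumption standing in for your own tracked costs; the point is the structure, higher capex and per-TB cost on the centralized side against lower staffing and outage spend.

```python
# Hypothetical multi-year TCO sketch: distributed vs. centralized storage.
# All figures are illustrative placeholders, not numbers from the interview.

def tco(capex, cost_per_tb, initial_tb, growth_rate, staff_cost, outage_cost, years):
    """Up-front capital expense plus yearly capacity, staffing, and outage costs."""
    total = capex
    tb = initial_tb
    for _ in range(years):
        total += tb * cost_per_tb + staff_cost + outage_cost
        tb *= 1 + growth_rate  # capacity need compounds as data grows
    return total

distributed = tco(capex=50_000, cost_per_tb=300, initial_tb=100, growth_rate=0.25,
                  staff_cost=180_000, outage_cost=75_000, years=5)
centralized = tco(capex=400_000, cost_per_tb=450, initial_tb=100, growth_rate=0.25,
                  staff_cost=90_000, outage_cost=5_000, years=5)

print(f"5-year TCO  distributed: ${distributed:,.0f}  centralized: ${centralized:,.0f}")
```

Under these made-up inputs the centralized option comes out ahead over five years despite the higher per-TB price, which mirrors Hernick's point that raw cost per MB is the wrong number to optimize on its own.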