Storage

10 things you should know about transitioning to enterprise storage

Although local storage on servers is the traditional approach and well suited to smaller environments, enterprise storage is the next frontier. But what do you need to know in making the switch? Rick Vanover looks at 10 considerations to help guide you along the path toward an easier transition.

As your organization and career grow, you will inevitably face the question of enterprise storage. Although local storage on servers is the traditional approach and well suited to smaller environments, enterprise storage is the next frontier. But what do you need to know in making the switch? Here are 10 considerations to help guide you along the path toward an easier transition.


#1: Don't overstep your roles

If your organization has server, storage, and networking duties delegated to separate groups, do not attempt to be an expert in each area. The storage group is the expert in storage, and the network group is the expert in the network. As a server admin, you play a critical part in working with all of them to architect the solution to your needs. If you don't know something, ask. Your storage administrators know what you need to do on the server side and are generally happy to work with you to get the storage configuration correct on both ends of the solution.

#2: Driver version management is important

Various technologies, such as iSCSI and Fibre Channel, have drivers for their interfaces, just like traditional SCSI and RAID controllers. The difference is that you're dealing with a smart device on the other end. Because of this, there are matrices of supported configurations, and you should engage your storage administrators to ensure you are targeting a configuration that will work. This applies to the interface driver and possibly a software driver for the controller. For example, if you are connecting to an IBM SAN Volume Controller (SVC), you will need the SVC drivers in addition to the Fibre Channel interface drivers.
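
As a rough illustration of what a support matrix check amounts to, here is a minimal Python sketch. The component names and version strings are made up for the example; the authoritative source is always the vendor interoperability matrix your storage team works from.

    # Minimal sketch: compare installed driver versions against a support matrix.
    # All component names and version strings are hypothetical placeholders.
    SUPPORTED = {
        "fc_hba_driver": {"8.07.00.26", "8.07.00.34"},   # versions listed in the matrix
        "svc_multipath": {"1.8.0.3"},
    }

    installed = {
        "fc_hba_driver": "8.07.00.26",   # e.g. gathered from the OS or HBA utility
        "svc_multipath": "1.7.2.0",
    }

    for component, version in installed.items():
        ok = version in SUPPORTED.get(component, set())
        status = "OK" if ok else "NOT in the support matrix"
        print(f"{component} {version}: {status}")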

#3: Testing the configuration is critical

Having performance and functionality expectations in line with reality is a key step in the learning process. When you provision a system, you should go through a testing process that checks I/O performance on the shared storage and verifies what happens when a path or link in the connection goes down. You should also know what tools you can use to add, remove, and modify storage while a system is online. You do not want to go through the discovery process on a live system.
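
For the raw throughput part of that checklist, even a crude test gives you a baseline before you hand the system to its users. Below is a minimal Python sketch that times a sequential write to a newly provisioned volume; the mount point is a placeholder, and a purpose-built tool such as fio or Iometer, plus a deliberate path-failover test, should back up anything this rough number suggests.

    # Crude sequential-write baseline on a newly provisioned shared volume.
    # MOUNT_POINT is a hypothetical path; adjust it to the volume under test.
    import os
    import time

    MOUNT_POINT = "/mnt/new_san_volume"
    TEST_FILE = os.path.join(MOUNT_POINT, "io_test.bin")
    BLOCK = b"\0" * (1024 * 1024)   # 1 MiB per write
    BLOCKS = 512                    # 512 MiB total

    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(BLOCKS):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())        # make sure the data actually reaches the storage
    elapsed = time.time() - start

    print(f"Wrote {BLOCKS} MiB in {elapsed:.1f} s ({BLOCKS / elapsed:.0f} MiB/s sequential write)")
    os.remove(TEST_FILE)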

#4: You will use all the storage you are given

One of the beautiful aspects of working with a storage team is that if you need more storage, you can simply ask for it. However, you will also face pressure to state your needs accurately, which keeps the amount of free space assigned to shared storage configurations to a minimum. With general-purpose servers and local storage, free space is rarely a concern for most systems. With enterprise storage, it is not at all unrealistic for a small volume on a system with relatively static data and filesystem usage to be provisioned with 30% or less free space. This requires slightly more planning when the storage team sizes the shared disks, but in most configurations, the space can be extended dynamically later.
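
To make that concrete, here is a quick back-of-the-envelope sizing sketch in Python. The data size, growth rate, and free-space target are invented numbers; the point is simply that shared volumes are requested close to actual need and grown later.

    # Hypothetical sizing example: request only what the system is likely to need.
    current_data_gb = 140        # data on the system today
    annual_growth = 0.10         # roughly 10% growth for a fairly static system
    target_free = 0.30           # aim for about 30% free space at provisioning

    projected_gb = current_data_gb * (1 + annual_growth)
    volume_gb = projected_gb / (1 - target_free)

    print(f"Projected data in a year: {projected_gb:.0f} GB")
    print(f"Volume to request now:    {volume_gb:.0f} GB")
    # Projected data in a year: 154 GB
    # Volume to request now:    220 GB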

#5: Shared storage is key to clustering

Most clustering implementations utilize shared storage that two or more systems can access directly. In SAN configurations, this access is implemented through the zoning configuration that maps the drives to the nodes of the cluster. The same storage configuration can also be used across many servers hosting virtual machines.

#6: There are many factors to shared I/O

SAN admins can isolate or share allocations among systems. For example, if you have a generally idle logical unit number (LUN) in the storage environment, it may be grouped with other LUNs of similar use on a set of disks to match the performance profile. That way, the most expensive disks are not holding the most idle systems. Should one of those LUNs become very busy, the other systems may see degraded performance, depending on many factors. The converse can apply as well: If a highly intensive and critical system is in the shared storage environment, it can be isolated to a set of disks so that only that LUN uses those drives, giving it the best I/O. Be sure to establish clear expectations about your I/O patterns and requirements, as well as the placement of your LUNs, for systems attached to the enterprise storage system.
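
One way to frame that conversation with the storage team is a rough IOPS budget for the disk group behind your LUNs. The Python sketch below uses ballpark per-drive figures and invented workloads; substitute your own measured I/O patterns before drawing any conclusions.

    # Rough IOPS-budget sketch for a shared disk group. The per-drive IOPS figure
    # and the workload numbers are illustrative assumptions, not measured values.
    DRIVES_IN_GROUP = 16
    IOPS_PER_DRIVE = 180         # rough rule of thumb for a 15K RPM drive

    group_budget = DRIVES_IN_GROUP * IOPS_PER_DRIVE

    # Hypothetical LUNs sharing this disk group, with their peak IOPS demands.
    lun_peaks = {"file_server": 400, "mail_archive": 250, "reporting_db": 1800}

    peak_demand = sum(lun_peaks.values())
    print(f"Group budget: {group_budget} IOPS, peak demand: {peak_demand} IOPS")
    if peak_demand > group_budget * 0.8:
        print("Peak demand exceeds 80% of the budget -- consider isolating the busiest LUN on its own disks.")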

#7: Storage replication is the hard part

One of the key characteristics of the most solid disaster recovery plans is a storage replication solution. However, many plans fall short of achieving this key point. Replication of the shared storage requires a large amount of bandwidth dedicated to the traffic between the storage controllers. Getting the storage and the bandwidth for the replicated systems is not the problem -- getting the money for the storage and bandwidth is the challenge.
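
The bandwidth side of the argument is easy to quantify, which helps when you are making the budget case. Here is a back-of-the-envelope Python calculation using an invented daily change rate and replication window; the unit conversion is the part worth keeping.

    # Hypothetical replication bandwidth estimate: daily changed data shipped
    # within an overnight window, before protocol overhead or compression.
    daily_change_gb = 200            # data changed per day on the replicated LUNs
    replication_window_hours = 10    # window available to ship the changes

    bits = daily_change_gb * 1024**3 * 8            # GiB -> bits
    seconds = replication_window_hours * 3600
    required_mbps = bits / seconds / 1_000_000      # bits per second -> megabits per second

    print(f"Sustained link requirement: about {required_mbps:.0f} Mbps")
    # Sustained link requirement: about 48 Mbps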

#8: Virtualization is everywhere, including the storage

Storage controller systems can have a virtualized front end that manages the connectivity to all of the devices. The IBM SVC, for example, virtualizes all storage devices that are connected to it and presents them to the servers as simple LUNs. The servers connected to the SVC have no visibility into which back-end controller or type of disk actually holds their data. There can also be virtual SAN (VSAN) technology in use in enterprise storage configurations, where an interface can access multiple SANs through provisioning either at the port or at the controller.

#9: Vendor compatibility is essential

Related to the driver topic, the compatibility of different products is critically important. For example, say you want to use a certain brand of host bus adapter (HBA) with an operating system like VMware ESX. ESX provides only native or proprietary drivers for storage connectivity, so ensure that your brand of HBA is on the operating system's list of supported drivers. Also check downstream with the storage team for product compatibility. Finding compatibility surprises after purchasing equipment is not a good situation.

#10: Consider data deduplication on the storage

Deduplication is showing up in many areas of IT, with the guiding principle of using relatively expensive shared storage more efficiently. Frequently, these solutions plug into large collections of similar systems -- take, for example, virtual machines on a shared storage environment. These products revolve around the principle that many systems have similar storage footprints. They can "open up" the virtual machine files and gain visibility into their contents. A frequent deduplication point is the C:\ drive of virtual machines, which may be up to 80% similar across many systems.
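
The underlying idea is easy to demonstrate: hash fixed-size blocks and count how many are unique. The Python sketch below does this at the file level purely for illustration; real deduplication happens in the array or appliance, but the same arithmetic explains why many nearly identical C:\ drives shrink so dramatically.

    # Minimal block-level deduplication estimate: hash fixed-size blocks and
    # count unique ones. For illustration only; pass it paths to disk images
    # or other large files to see how much block-level overlap they contain.
    import hashlib
    import sys

    BLOCK_SIZE = 4096

    def unique_block_ratio(path):
        """Return unique blocks / total blocks for one file."""
        seen, total = set(), 0
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                seen.add(hashlib.sha256(block).digest())
                total += 1
        return len(seen) / total if total else 1.0

    if __name__ == "__main__":
        for image in sys.argv[1:]:
            ratio = unique_block_ratio(image)
            print(f"{image}: {ratio:.0%} unique blocks (~{1 - ratio:.0%} could be deduplicated)")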

Are you ready for enterprise storage?

The learning curve is steep, but not insurmountable. With proper planning and realistic expectations about functionality, you can make a smooth transition to enterprise storage. Have you learned things along the way? If so, share them below.

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

4 comments
gordonmcke

Nice article Jody. I have worked in storage for twenty years and have my own firm, Ohio Valley Storage Consultants. I represent some of the newest technology vendors such as Compellent StorageCenter. I would add to your list "thin provisioning". Quite often, users allocate up to 75% more space than they use. Thin provisioning uses physical media only when data is written, not allocated. Also, it is very possible and cost-effective to implement automated block level storage tiering, which promises to reduce cost and substantially improve environmentals with new storage systems.

abeeber

Having worked for an SMB of about 100 people, we grew a storage footprint from 16TB to over 250TB in a 2-year period. So my lessons learned: Storage is a commodity, and sometimes a business can be addicted to adding more storage rather than implementing storage management policies that could offset consumption. Despite variations in implementation (EMC, BlueArc, NETAPP), all NAS/SAN providers buy their disks from the same place, so poor quality in a lot of disks can play havoc with a customer due to the OEM relationships in place: a quality defect in one manufactured lot is traditionally shipped to one provider, who then passes the disks on to their customers, especially if a customer buys disks in large quantities. The larger the drive, the longer the rebuild and recovery times. Don't forget virtualization at the block level, aka Compellent and 3Par. Finally, if you are file-sharing based, solutions from Acopia/F5 Networks make traditional NAS/file sharing obsolete, enabling an organization to merge different storage solutions behind one namespace. HTH

b4real

gordonmcke: That is a good point you mention, and a difficult adjustment for administrators who have been accustomed to working with local storage only.

b4real

I think the smaller enterprises have it tough to make the transition to enterprise storage simply because of the initial investment requirements.
