Storage tiering is a practice that storage administrators have carried out in one form or another for a very long time. With static disk assignment, you can provision a number of storage tiers by manipulating these key design elements:
- Storage protocol: Ethernet-based (iSCSI/NFS), Fibre Channel, or direct-attached storage. Each connectivity medium has its own throughput, which affects the overall experience of the storage.
- Drive speed: The rotational speed, in revolutions per minute (RPM), of the drives in an array is a factor in the overall performance of the storage design.
- Drive interface: The most popular drive interfaces in use today are Ultra320 SCSI, SATA, SAS, and Fibre Channel; solid-state drives (usually attached via SATA or SAS) form a class of their own. The throughput and I/O operations per second (IOPS) for each of these drive types are factors in determining the behavior of the storage array.
- RAID level in use: RAID 1, 4, 5, 6, 0+1, 5+0, and other, proprietary levels can make a significant difference in throughput. Check the AC&NC RAID.EDU resource for information on standard RAID levels and the NetApp page on RAID-DP.
- Quantity of drives: Generally speaking, spreading an array across more drives means each I/O touches less surface area on any one drive, with more spindles working in parallel to enhance the performance of the array.
- Disk size: Very large drives (potentially with many arrays striped across them) can bog down the overall throughput of the array. While 2 TB or 4 TB drives are attractive for SATA storage, their throughput and interface rate are the same as those of 1 TB or smaller drives.
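The interplay of drive speed, drive count, and RAID level can be estimated with well-known rules of thumb. The sketch below uses the classic approximation that a spinning drive's random IOPS is the inverse of its average seek time plus average rotational latency (half a revolution), then scales by drive count and discounts the RAID write penalty. The seek times and penalty figures are illustrative ballpark values, not vendor specifications.

```python
# Rough capacity-planning math for the design elements above.
# Seek times and RAID write penalties are illustrative assumptions.

def drive_iops(rpm, avg_seek_ms):
    """Estimate random IOPS for one spinning drive.

    IOPS ~ 1 / (average seek time + average rotational latency),
    where rotational latency averages half a revolution.
    """
    rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

def array_iops(per_drive_iops, drives, write_fraction, raid_write_penalty):
    """Estimate usable array IOPS, discounting the RAID write penalty
    (e.g. roughly 4 back-end I/Os per front-end write for RAID 5)."""
    raw = per_drive_iops * drives
    return raw / ((1 - write_fraction) + write_fraction * raid_write_penalty)

# 7,200 RPM SATA vs. 15,000 RPM SAS, with illustrative seek times:
sata = drive_iops(7200, avg_seek_ms=8.5)   # roughly 75-80 IOPS per drive
sas = drive_iops(15000, avg_seek_ms=3.5)   # roughly 175-185 IOPS per drive

# A 24-drive RAID 5 SAS set with a 30% write mix:
print(round(array_iops(sas, drives=24, write_fraction=0.3,
                       raid_write_penalty=4)))
```

Running a few of these back-of-the-envelope numbers before building a design makes the benchmark results far less surprising.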
For most storage systems, aligning these design elements to craft the best-performing storage system from the resources available (or what can be purchased) is about as much storage tiering as can be done. If you've never built up a few designs and run a performance benchmark against each one, you really should; there can be an incredible variance in the results.
As you can see, this can be very tedious in the flat storage arena without advanced management. A new set of features is showing up on some of the more full-featured storage processors: automated storage tiering, which allows the storage processor to put segments of data on the tier of disk they need, when they need it. Examples of this technology are 3PAR's Adaptive Optimization, Compellent's Data Progression feature, IBM's Easy Tier, and EMC's Fully Automated Storage Tiering (FAST).
Automated storage tiering technologies allow the storage administrator to permit a volume residing on a lower-performing disk set to be moved dynamically to a higher-performing tier. One of the best use cases is to put the bulk of a SAN's storage requirements on less expensive SATA storage and let automated storage tiering move those volumes or sub-volumes to higher-performing SAS or solid-state drives as demand requires.
The ability to automate storage tiers is quite attractive, primarily because the right data gets the right disk when it is needed; however, some administrators may express concern about data blocks or volumes being moved dynamically around a SAN. While each product implements automated tiered storage differently, the mechanism is not far from the standard volume-migration technologies that exist on most storage processors, operating systems, and hypervisors. I can see a great reduction in the amount of tier-1 (SAS) or tier-0 (solid-state disk) storage that many administrators need to provision in order to keep the hot spots on premium disk.
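At its core, any of these products has to answer the same question: which extents are hot enough to earn premium disk? A minimal sketch of that idea follows; count I/Os per extent over a sampling window, then keep the hottest extents on the fast tier up to its capacity. Vendor implementations such as FAST or Easy Tier are far more sophisticated, and the class, names, and window mechanics here are hypothetical illustrations only.

```python
# Toy model of automated-tiering placement: sample I/O per extent,
# then promote the hottest extents to the fast tier at the end of
# the window. All names and thresholds are hypothetical.
from collections import Counter

class TieringSampler:
    def __init__(self, fast_tier_extents):
        self.fast_capacity = fast_tier_extents  # extents the SSD/SAS tier holds
        self.io_counts = Counter()

    def record_io(self, extent_id):
        """Called on every front-end I/O during the sampling window."""
        self.io_counts[extent_id] += 1

    def plan_migration(self):
        """End of window: hottest extents go to the fast tier; the rest
        stay on (or demote to) bulk SATA. Returns (promote, demote)."""
        ranked = [extent for extent, _ in self.io_counts.most_common()]
        promote = set(ranked[:self.fast_capacity])
        demote = set(ranked[self.fast_capacity:])
        self.io_counts.clear()  # start a fresh sampling window
        return promote, demote

sampler = TieringSampler(fast_tier_extents=2)
for extent in [1, 1, 1, 2, 2, 3, 4, 1, 2]:
    sampler.record_io(extent)
promote, demote = sampler.plan_migration()
print(promote)  # {1, 2}: the hot extents earn the premium disk
```

The administrator's concern mentioned above maps onto the `plan_migration` step: it is the same data-movement machinery as an ordinary volume migration, just triggered by access statistics instead of by hand.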
Where does automated storage tiering fit into your storage roadmap? Let us know in the comments.