Since vSphere 5 introduced VMFS-5, there have been some big improvements in ease of use for storage management. Rick Vanover shares a few tips on the new clustered file system version.
I’ve long thought that VMFS is the most underrated technology VMware has ever produced, so when vSphere 5 was released, I was really happy to see the improvements that came with VMFS-5. They go right along with the other new storage features, such as Storage DRS and datastore clusters, and on top of that, VMFS-5 has become a lot simpler to use.
In earlier posts, I covered the VMFS-5 datastore and its new capabilities, as well as how to upgrade a volume from VMFS-3 to VMFS-5 (though I recommend a fresh format on a new volume from the SAN if possible). These may be good reading ahead of time to get a handle on what to do with your VMFS-3 datastores, should you have them.
For VMFS-5, the new limit of approximately 64TB for a single volume is one of the biggest benefits I see among the new features. It really simplifies how storage is laid out. Previously, we’d have multiple VMFS-3 volumes “stacked” on top of the same physical drives on the SAN, and competing volumes on shared spindles aren’t good for performance. In my current work, I find myself with a lot of mid-size iSCSI storage systems. These are capable of provisioning iSCSI targets over 2TB, so for each tray of drives (logical drives on a single bus, even if the trays are interconnected as one bus), I’ve been provisioning a single VMFS-5 volume for the whole thing. This way, no drive is accessed by more than one VMFS-5 volume at any given time. This helps me in the following areas:
- Performance monitoring: I can now associate one VMFS-5 volume’s I/O patterns directly to the disk behavior. Previously, other datastores would impact each other when the same underlying disk systems were used.
- Ease of use: Less granular administration is required to provision datastores and the VMs that live within them.
- Less cluttered storage interface: I have to say, I like having the datastore inventory for a host and every wizard in the vSphere Client simplified by just having fewer datastores listed. This is a much more pleasant experience.
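To put the one-volume-per-tray layout in rough numbers, here is a minimal sketch. The ~2TB and ~64TB figures are the approximate single-extent volume limits for VMFS-3 and VMFS-5, and the tray size is a hypothetical example, not from any particular array:

```python
import math

VMFS3_MAX_TB = 2    # approximate single-extent VMFS-3 volume limit
VMFS5_MAX_TB = 64   # approximate single-extent VMFS-5 volume limit

def volumes_needed(tray_capacity_tb, volume_limit_tb):
    """Minimum number of volumes required to present a tray of this capacity."""
    return math.ceil(tray_capacity_tb / volume_limit_tb)

tray_tb = 24  # hypothetical tray, e.g. 12 x 2TB SATA drives
print(volumes_needed(tray_tb, VMFS3_MAX_TB))  # 12 VMFS-3 volumes sharing the same spindles
print(volumes_needed(tray_tb, VMFS5_MAX_TB))  # 1 VMFS-5 volume covering the whole tray
```

The second result is the whole point: one datastore per tray means each drive serves exactly one volume, which is what makes the I/O monitoring and administration simpler.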
Another practical tip for VMFS-5 volumes is to carry this simplicity into better datastore names. That means reflecting the LUN identifier from the SAN controller (LUN 0, for example) and the tier of storage (SATA, SAS, SSD, FC, etc.) directly in the datastore name. Additional indicators may include the storage network (Fibre Channel or iSCSI), should multiple storage networks be in use.
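As a sketch of one such naming convention (the pattern and the array label are my own hypothetical choices, not a VMware standard; adjust the fields to whatever your SAN exposes):

```python
def datastore_name(lun, tier, transport, array="san01"):
    """Build a descriptive datastore name from LUN, drive tier, and transport.

    'array' is a hypothetical label for the SAN controller; the resulting
    pattern is e.g. 'san01-iscsi-sata-lun0'.
    """
    return "{0}-{1}-{2}-lun{3}".format(array, transport.lower(), tier.lower(), lun)

print(datastore_name(0, "SATA", "iSCSI"))  # san01-iscsi-sata-lun0
```

A name like this tells you at a glance which network, which tier, and which LUN a datastore lives on, without opening the SAN console.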
So far with VMFS-5, I’m really happy with the ease of use. I didn’t mention it above, but the unified block size of 1 MB also means we no longer have to pick a block size (1, 2, 4, or 8 MB) at format time, a VMFS-3 decision that capped the maximum file size and couldn’t be changed without a reformat. Little changes like these have collectively made a big difference with VMFS-5 in terms of ease of use for managing storage in vSphere environments.
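To show why that format-time choice used to matter, here is a quick sketch of the VMFS-3 block-size-to-maximum-file-size relationship (the figures are the commonly documented approximate limits; I’ve rounded 2TB-minus-512-bytes to 2TB):

```python
# VMFS-3 tied the maximum file (VMDK) size to the block size chosen at
# format time -- and that choice could not be changed without a reformat.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size (MB) -> max file (GB)

def min_vmfs3_block_size_mb(vmdk_size_gb):
    """Smallest VMFS-3 block size able to hold a VMDK of the given size."""
    for block_mb in sorted(VMFS3_MAX_FILE_GB):
        if vmdk_size_gb <= VMFS3_MAX_FILE_GB[block_mb]:
            return block_mb
    raise ValueError("VMDK exceeds the ~2TB VMFS-3 file limit")

print(min_vmfs3_block_size_mb(300))  # a 300 GB VMDK needed at least a 2 MB block
```

With VMFS-5’s unified 1 MB block, this whole table goes away: there is no sizing decision to get wrong at format time.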
What tips have you learned with VMFS-5 so far? What have you changed with your storage practices for block-based systems thus far with vSphere 5? Share your comments below!