When Storage vMotion was first introduced with VI3 in late 2007, I thought it was the coolest thing ever. Or at least the coolest thing since regular vMotion (virtual machine migration from host to host). Since then, we've seen plenty of improvements to VM migration technologies. Storage vMotion allows a running VM to be moved from one datastore to another. My storage practice generally has me using VMFS volumes for vSphere VMs, but NFS datastores are also supported for Storage vMotion.
Since vSphere 5, there has been a new disk format option when a Storage vMotion task is performed. The option is presented on the second step of the wizard and is easy to miss. This new format option is shown in Figure A below:
Figure A
The wizard now presents three options for formatting the VMDKs during this task (a scripted alternative is sketched after the list):
- thick provisioned (lazy zeroed)
- thick provisioned (eager zeroed)
- thin provisioned
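If you would rather script the migration than click through the wizard, the same move-and-convert operation can be requested through the vSphere API. Below is a minimal pyVmomi (Python) sketch, not a production script: the vCenter address, credentials, VM name (MyVM), and datastore name (Datastore2) are placeholders, certificate checking is disabled, and error handling is omitted.

```python
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim
import ssl

# Placeholder connection details; adjust for your environment.
ssl_ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl_ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "MyVM")          # VM to migrate (placeholder)
target_ds = find_by_name(vim.Datastore, "Datastore2")  # destination datastore (placeholder)

# Relocate the VM's storage and convert its disks to thin ('sparse') on the way;
# 'flat' would request a thick format instead.
spec = vim.vm.RelocateSpec()
spec.datastore = target_ds
spec.transform = vim.vm.RelocateSpec.Transformation.sparse

WaitForTask(vm.RelocateVM_Task(spec))
Disconnect(si)
```

Note that the transform property only distinguishes thick ("flat") from thin ("sparse"); choosing eager versus lazy zeroed for a thick disk is a wizard (or per-disk backing) decision, so treat this strictly as a thin-conversion example.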
The thin provisioned disk is pretty straightforward: the VMDK consumes only the space on the VMFS volume that it needs and grows as more disk space is used. Be aware that the consumed space never goes back down; if the VM writes 100 GB of data and that data is later deleted inside the guest, the thin provisioned footprint still reflects the 100 GB of growth. Subsequent disk writes and data growth on the thin provisioned VM will, however, reuse space within that 100 GB region.
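To see that footprint for yourself, the VM summary exposes both the committed space (what the VM actually occupies on the datastore) and the uncommitted space (what it could still grow into). A small sketch, reusing the connection and the find_by_name() helper from the example above, with "MyVM" again a placeholder:

```python
# Reuses si and find_by_name() from the sketch above; "MyVM" is a placeholder name.
vm = find_by_name(vim.VirtualMachine, "MyVM")

storage = vm.summary.storage
committed_gb = storage.committed / (1024 ** 3)
provisioned_gb = (storage.committed + storage.uncommitted) / (1024 ** 3)

# For a thin-provisioned VM, committed space grows as blocks are written and
# does not shrink when files are later deleted inside the guest.
print(f"Committed on datastore: {committed_gb:.1f} GB of {provisioned_gb:.1f} GB provisioned")
```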
The thick provisioned options are a little different. Let's start with eager zeroed VMDKs. With this format, the entire VMDK is zeroed out up front: zeroes are written across the disk's full size when it is created (or when the Storage vMotion completes), rather than on first use. This option is also available when provisioning a VM and is required for MSCS clusters and fault-tolerant (FT) VMs. Further, eager zeroed VMDKs have a (minimal) first-write performance gain, as noted in the vSphere Performance Best Practices Guide.
The lazy zeroed thick format also takes up the full size of the VMDK on the VMFS volume (no thin provisioning benefit), but each new region of the disk is zeroed only on its first write. Subsequent writes that touch previously unused blocks of the VMDK incur that same zeroing overhead. If you have a thick provisioned disk and don't know which format it is, this VMware KB will show you how to determine the format.
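If you would rather check from a script than follow the KB steps, the same information is available from each disk's backing object: a thin disk has thinProvisioned set, an eager zeroed thick disk has eagerlyScrub set, and a lazy zeroed thick disk has neither. Another short sketch, again reusing the connection and helper from the first example:

```python
# Reuses si and find_by_name() from the first sketch; "MyVM" is a placeholder name.
vm = find_by_name(vim.VirtualMachine, "MyVM")

for dev in vm.config.hardware.device:
    if not isinstance(dev, vim.vm.device.VirtualDisk):
        continue
    backing = dev.backing
    # Flat VMDKs on VMFS use FlatVer2BackingInfo; RDMs and other backings are skipped.
    if not isinstance(backing, vim.vm.device.VirtualDisk.FlatVer2BackingInfo):
        continue
    if backing.thinProvisioned:
        fmt = "thin provisioned"
    elif backing.eagerlyScrub:
        fmt = "thick provisioned (eager zeroed)"
    else:
        fmt = "thick provisioned (lazy zeroed)"
    print(f"{dev.deviceInfo.label}: {fmt}")
```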
In my practice, I use thin provisioning extensively. I occasionally use thick provisioning, and when I do, I use the eager zeroed format to avoid the first-write overhead of the lazy format. Do you use either of the thick provisioned formats? If so, share your strategy below.