One of the more recent features available in modern editions of ESXi is the ability to leverage SSDs for caching. This can be a cost-efficient stopgap for your memory-constrained servers. But, before we go too far, it is important to note that this is a swap improvement feature, not a storage I/O feature. Don't be confused by all of the I/O acceleration technology out there; host SSD caching helps as an additional memory management technique. The idea is that dedicating a slice of SSD storage on one or more ESXi hosts will improve memory performance if swapping occurs.

ESXi hosts detect whether a disk is SSD or non-SSD, and the result is displayed in the vSphere Web Client. In Figure A, you can see that two datastores (both local storage) are designated as SSD drives and the rest are not:
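The same flag is visible at the command line: `esxcli storage core device list` reports an `Is SSD` field per device. As a minimal sketch, here is how you might pick out the SSD-backed devices from that output; the sample text below is illustrative, not captured from a real host:

```python
# Sketch: extract SSD-backed devices from `esxcli storage core device list`
# output. The sample below is a made-up, abbreviated example of that format.
sample = """\
naa.600508b1001c0a1f
   Display Name: Local Disk (naa.600508b1001c0a1f)
   Is SSD: true
naa.600508b1001c0b2e
   Display Name: Local Disk (naa.600508b1001c0b2e)
   Is SSD: false
"""

def ssd_devices(esxcli_output: str) -> list[str]:
    """Return device names whose 'Is SSD' field is true."""
    ssds, current = [], None
    for line in esxcli_output.splitlines():
        if not line.startswith(" "):          # device name lines are unindented
            current = line.strip()
        elif line.strip().lower() == "is ssd: true":
            ssds.append(current)
    return ssds

print(ssd_devices(sample))  # → ['naa.600508b1001c0a1f']
```

This is handy when a drive behind a RAID controller is misdetected as non-SSD and you want to audit what each host actually sees.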

Figure A

The benefit of this feature is that a configurable amount of the SSD drive services host swapping. To be clear, swapping is still bad and memory is still faster than an SSD, but host caching is a nice intermediate solution. It is configured per ESXi host, as shown in Figure B:

Figure B

In this example, 20 GB of the SSD disk are available to the host as a preferred target for swapping guest VM memory. This will be materially better than a rotational drive (where the .vswp file is stored with the guest) but still not as good as actual memory. The key here is to have a good handle on actual memory usage on the hosts as well as on individual VMs. If not all hosts in a cluster have host caching configured on SSDs, migrating VMs can make monitoring and managing memory resources a bit of a moving target.
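It's worth sizing that 20 GB against the worst case. A powered-on VM's .vswp file is as large as its configured memory minus its memory reservation, so the cache can only absorb a fraction of total potential swap. A back-of-the-envelope sketch, with a hypothetical VM inventory:

```python
# Back-of-the-envelope sizing: how much of the worst-case swap demand does
# a 20 GB host cache cover? The VM inventory below is hypothetical.
CACHE_GB = 20

# (configured memory GB, memory reservation GB) per VM
vms = [(8, 0), (16, 4), (4, 4), (32, 8)]

# Worst case, a VM can swap (configured - reservation): the size of its .vswp
worst_case_swap_gb = sum(mem - resv for mem, resv in vms)
coverage = CACHE_GB / worst_case_swap_gb

print(f"worst-case swap demand: {worst_case_swap_gb} GB")  # → 44 GB
print(f"cache coverage: {coverage:.0%}")                   # → 45%
```

If coverage is low and the hosts really do overcommit, the overflow spills back to the .vswp files on regular datastores, so the cache is a buffer, not a guarantee.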

The goal is to avoid swapping, but if you must, consider leveraging an SSD for it. Figure C shows a VM in a very memory-constrained cluster that is swapping over 1 GB of memory for just one VM, making access to those memory pages slow, as this environment does not have an SSD host cache:
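To put that 1 GB of swapped memory in perspective, here is a rough, order-of-magnitude illustration; the throughput figures are assumptions for the sketch, not benchmarks of any particular hardware:

```python
# Rough illustration (assumed order-of-magnitude numbers, not benchmarks) of
# why faulting 1 GB of swapped guest memory back in hurts far more on a
# rotational disk than on an SSD.
SWAPPED_MB = 1024

# Approximate random-read throughput in MB/s -- assumptions for illustration
throughput = {"RAM": 10000, "SSD": 250, "HDD (random I/O)": 2}

for medium, mbps in throughput.items():
    print(f"{medium:>16}: ~{SWAPPED_MB / mbps:,.1f} s to page 1 GB back in")
```

The exact numbers will vary wildly by device, but the gap between the rows is the point: swap on rotational disk is minutes of stalls, swap on SSD is seconds.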

Figure C

Swapping is the least desirable form of memory management for an ESXi host. ESXi's memory management is very efficient, but if the other techniques can't deliver the required memory to the VMs, swapping will occur. Those other techniques include the balloon driver, transparent page sharing, and memory compression. See the vSphere 5 Documentation Center for information on memory overcommitment with vSphere.

I realize I've said swapping is not ideal, but the reality is that adding memory or more hosts isn't always an option. Have you used host caching for production clusters? It's great for labs, but production is another matter. Share your comments below!