Virtualization

How to configure host SSD caching on ESXi

A recent feature in VMware ESXi lets you use host SSD caching as a stopgap memory-management technique for memory-constrained servers.

One of the more recent features available in modern editions of ESXi is the ability to leverage SSDs for caching. This can be a cost-efficient stopgap for memory-constrained servers. But before we go too far, it is important to note that this is a swap-improvement feature, not a storage I/O feature. Don’t be confused by all of the I/O acceleration technology out there; host SSD caching is an additional memory-management technique. The idea is that dedicating a slice of SSD storage on one or more ESXi hosts will soften the performance hit if swapping happens.

ESXi hosts detect whether a disk is an SSD or not, and the result is displayed in the vSphere Web Client. In Figure A, you can see that two datastores (each on local storage) are designated as SSD drives and the rest are not:

Figure A
hostcaching10-28-FigA.jpg
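If you prefer the command line, the same detection can be checked from the ESXi Shell. The device list reports an "Is SSD" flag, and a genuine SSD that is not auto-detected can be tagged with a SATP claim rule; a sketch is below, where the naa.xxxxxxxx device identifier is a placeholder you would replace with your own:

```shell
# List storage devices and whether ESXi detected each one as an SSD
esxcli storage core device list | grep -E "Display Name|Is SSD"

# If a real SSD is not auto-detected, tag it via a SATP claim rule
# (naa.xxxxxxxx is a placeholder for your device identifier)
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL \
    --device=naa.xxxxxxxx --option=enable_ssd

# Reclaim the device so the new rule takes effect
esxcli storage core claiming reclaim --device=naa.xxxxxxxx
```

After the reclaim, the device list (and the vSphere Web Client) should show the drive as an SSD, making it eligible for host caching.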

The benefit of this feature is that a configurable amount of the SSD drive services the host swapping. Now, it’s important to note that swapping is still bad and memory is still faster than an SSD, but host caching is a nice intermediate solution. It is configured per ESXi host, as shown in Figure B:

Figure B
hostcaching10-28-FigB.jpg

In this example, 20 GB of the SSD disk is made available to the host as a preferred target for swapping guest VM memory. This will be materially better than a rotational drive (where the .vswp file is stored with the guest) but still not as good as actual memory. The key is to keep a good handle on actual memory usage on the hosts as well as on individual VMs. If not all hosts in a cluster have host caching configured on SSDs, migrating VMs can make monitoring and managing memory resources a bit of a moving target.

The goal is to avoid swapping, but if you must swap, consider leveraging an SSD for it. Figure C shows a VM in a very memory-constrained cluster that is swapping over 1 GB of memory for a single VM; those memory pages are served slowly because this environment does not have an SSD host cache:

Figure C
hostcaching10-28-FigC.jpg
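To spot this kind of swapping without the Web Client, esxtop on the host shows current and target swap per VM. A rough sketch of the workflow, using esxtop's memory-screen field names:

```shell
# Launch esxtop on the ESXi host, then press 'm' for the memory screen.
# The SWCUR column shows how much memory (MB) each VM currently has
# swapped out; SWTGT is the swap target the host is working toward.
esxtop

# For a one-shot, scriptable snapshot of the same counters, use batch mode
esxtop -b -n 1 > memstats.csv
```

A VM with a persistently non-zero SWCUR, like the one in Figure C, is exactly the case where an SSD-backed host cache pays off.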

Swapping is the least desirable form of memory management for an ESXi host. ESXi’s memory-management efficiencies are great, but if the other techniques can’t reclaim enough memory for the VMs, swapping will occur. The other techniques include the balloon driver, transparent page sharing, memory compression, and more. See the vSphere 5 Documentation Center for information on memory overcommitment with vSphere.

I realize I keep saying swapping is not ideal, but the reality is that adding memory or more hosts isn’t always an option. Have you used host caching for production clusters? It’s great for labs, but production is another thing. Share your comments below!

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

1 comment
jonking_007

Hi Rick


This seems a little bit backwards to me. If you have the ability to put SSDs into a host server when you see swapping happening, surely you have the ability to put more RAM in. It's cheaper and faster!


I appreciate that SSDs "could" be hot-plugged in, and that this is harder for RAM, but I would have thought that the drive bays on servers are more precious for storage and would want to be utilised with disks.


There must be some other reason why it's beneficial to use this functionality; otherwise I'd just expect people to move VMs to another host and put more RAM in.


thoughts?


JK