
Top reasons hosts won't go into standby with vSphere power management feature

vSphere's DPM feature allows hosts to be idled when not in use. IT Jedi Rick Vanover shares the top issues to look out for to ensure hosts can go into standby mode.

I use vSphere’s DPM (Distributed Power Management) capability extensively. While this subset of DRS (Distributed Resource Scheduler) is not commonly embraced, I find it critical for conserving power and cooling in my environments. As I’ve used it more and more, I’ve learned about a number of issues, ranging from slight configuration inconsistencies to more serious problems, that can prevent a host from going into standby mode. Making sure these don’t impede your vSphere cluster’s ability to leverage DPM can save some troubleshooting down the road! Without any further delay, here is my list!

  1. Local storage is in use. This is the most obvious culprit; if a running VM has any mapping to the host's local datastore (even a mounted CD-ROM .ISO file), the VM can't be migrated off and the host will not go into standby mode.
  2. Only one host in a cluster. Another host in the same cluster must send the magic packet to resume a standby host, so a single-host cluster (even if additional hosts are on the same network) cannot leverage DPM.
  3. DPM not configured in the DRS cluster properties. The power management (DPM) setting only becomes available in a cluster's properties once DRS is enabled, and it must then be switched on explicitly. Further, decide whether DPM should run in manual or automatic mode.
  4. Ensure HA and DRS rules can be met. If complicated HA and DRS rules require all nodes of the cluster to stay powered on at all times, there may never be an eligible host for DPM to send into standby.
  5. Ensure DRS is licensed. DPM is a feature of DRS, and is available with vSphere Enterprise or higher.
  6. Ensure that the network is not blocking the magic packet. The host can be sent into standby when DRS is configured and DPM is enabled, but the host must be resumed via a magic packet (Wake on LAN packet). Make sure the network topology supports this transmission.
  7. Ensure the vMotion vmnic interface is set to Auto negotiate. A hard-coded speed and duplex setting is a frequent cause of DPM failure; auto negotiation is required for the NIC to receive the magic packet sent from another host while in standby.
  8. Templates are inventoried on a host. Templates don't migrate via DRS, so DPM won't be able to make the host a power-down candidate until they are evacuated.
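Several of the points above come down to the Wake-on-LAN "magic packet" a peer host sends to resume a standby host. If you suspect the network is blocking it (point 6), one quick way to test is to craft and send a packet yourself. Here is a minimal sketch in Python; the MAC address and broadcast address are placeholders you would replace with the standby host's vMotion NIC values.

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 bytes of 0xFF followed
    by the target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255",
                      port: int = 9) -> None:
    """Broadcast the magic packet over UDP; port 9 (discard) is the
    conventional choice for Wake-on-LAN."""
    packet = build_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))
```

For example, `send_magic_packet("00:50:56:aa:bb:cc")` from a machine on the same subnet should wake the host; if it doesn't, but waking works from inside the cluster's own network segment, a router or switch along the path is likely dropping the broadcast.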

There are countless other ways that DPM can fail to send a host into standby, but when these criteria are met, DPM is a great energy saver in the data center. Do you use DPM? If so, share your tips for making it work correctly below.
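The checklist above can be folded into a quick triage script. This is a sketch only: it assumes you have already gathered each host's facts into a plain dict (by whatever means fits your environment), and every key name here is my own invention rather than anything from the vSphere API.

```python
def dpm_standby_blockers(host: dict) -> list:
    """Return the checklist reasons why a host is not a DPM
    power-down candidate. `host` holds facts gathered elsewhere;
    all key names are assumptions for illustration."""
    reasons = []
    if host.get("local_storage_in_use"):
        reasons.append("local storage is in use")
    if host.get("cluster_host_count", 0) < 2:
        reasons.append("only one host in the cluster")
    if not host.get("dpm_enabled"):
        reasons.append("DPM not enabled in DRS cluster settings")
    if not host.get("drs_licensed"):
        reasons.append("DRS/DPM not licensed (Enterprise or higher)")
    if not host.get("vmotion_nic_autonegotiate"):
        reasons.append("vMotion vmnic not set to auto negotiate")
    if host.get("templates_on_host"):
        reasons.append("templates inventoried on the host")
    return reasons
```

Running this per host gives you a short list of likely suspects before you start digging through cluster settings by hand.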

About

Rick Vanover is a software strategy specialist for Veeam Software, based in Columbus, Ohio. Rick has years of IT experience and focuses on virtualization, Windows-based server administration, and system hardware.

1 comment
David Frans

Thank you for your article; it pointed me to the reason why it was not working in my case: vMotion needs to be enabled in the VMkernel port properties on all hosts in the cluster. Obviously... (I just recreated the VMkernel port, so the setting was lost)