More and more data centers are being built, consuming ever more energy. Over the years, energy has become a dominant cost factor for data center operators. Exploiting low-power idle modes is an immediate remedy for reducing data center power consumption. The authors use simulation to quantify the difference in energy consumption caused exclusively by virtual machine schedulers. Besides demonstrating the inefficiency of widespread default schedulers, they present their own optimized scheduler, OptSched, which reduces cumulative machine uptime by up to 60.1% across a range of realistic simulation scenarios. They evaluate the effect of data center composition, runtime distribution, virtual machine sizes, and batch requests on cumulative machine uptime.
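To make the headline metric concrete, the sketch below (not the paper's simulator or OptSched; the machine count, slot capacity, and request trace are illustrative assumptions) contrasts cumulative machine uptime under a round-robin "spread" policy and a first-fit "consolidate" policy. Consolidation packs VMs onto fewer machines, leaving the rest fully idle so they can drop into a low-power mode.

```python
# Illustrative sketch only -- not the paper's simulator or OptSched.
# It contrasts cumulative machine uptime under two placement policies:
# round-robin "spread" vs. first-fit "consolidate". All parameters
# (machine count, capacity, request trace) are made-up assumptions.
from dataclasses import dataclass, field

@dataclass
class Machine:
    capacity: int                                  # concurrent VM slots
    jobs: list = field(default_factory=list)       # (start, end) intervals

def place(requests, machines, policy):
    """Assign each VM request (start, end) to a machine."""
    for i, (start, end) in enumerate(requests):
        if policy == "spread":
            m = machines[i % len(machines)]        # round-robin spreading
        else:                                      # first fit: lowest-index
            m = next(m for m in machines           # machine with a free slot
                     if sum(s <= start < e for s, e in m.jobs) < m.capacity)
        m.jobs.append((start, end))

def busy_time(intervals):
    """Length of the union of a machine's busy intervals: the time it
    must stay powered on. Idle gaps allow a low-power mode."""
    total, cur_start, cur_end = 0.0, None, None
    for s, e in sorted(intervals):
        if cur_end is None or s > cur_end:         # gap -> close current run
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = s, e
        else:                                      # overlap -> extend run
            cur_end = max(cur_end, e)
    if cur_end is not None:
        total += cur_end - cur_start
    return total

if __name__ == "__main__":
    requests = [(t, t + 4) for t in range(0, 40, 2)]   # staggered toy trace
    for policy in ("spread", "consolidate"):
        machines = [Machine(capacity=4) for _ in range(10)]
        place(requests, machines, policy)
        uptime = sum(busy_time(m.jobs) for m in machines)
        print(f"{policy:11s}: cumulative machine uptime = {uptime:.0f}")
```

On this toy trace the consolidating policy cuts cumulative machine uptime roughly in half (42 vs. 80 time units), mirroring the direction, though not the magnitude, of the effect the paper measures.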