Microsoft is making it easy to deploy and manage a fleet of virtual machines in Azure. Here's what you need to know about the new Virtual Machine Scale Sets feature.
Microsoft has added a feature to Azure called Virtual Machine Scale Sets (VMSS) that lets you deploy and maintain a fleet of homogeneous virtual machines (VMs) as a single set. The feature is useful for managing and scaling elastic workloads such as big data and containers, and it enables true autoscale without the need to pre-provision VMs. Azure Container Service, Azure's managed microservices platform, is built on VMSS, which it uses to manage an elastic cluster of Apache Mesos nodes.
It took Microsoft a couple of years to add this autoscale capability to Azure. Until now, autoscaling in Azure has required pre-provisioning VMs within an availability set. The availability set feature avoids a single point of failure by ensuring that at least one VM is running at any given time. The number of pre-provisioned VMs depends on the upper bound of the scaling range.
For example, to run the web tier on up to five VMs during peak traffic, the administrator pre-provisions five VMs with a policy that scales the VMs based on CPU utilization. The drawback of this approach is that the VMs need to be provisioned upfront. VMSS is a more efficient way of implementing autoscale in Azure.
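As a rough sketch, a CPU-based scaling policy like the one described above can be expressed as an ARM autoscale resource along the following lines; the scale set name (myScaleSet), thresholds, and capacity bounds here are illustrative placeholders rather than values from a published template.

```json
{
  "type": "Microsoft.Insights/autoscaleSettings",
  "apiVersion": "2015-04-01",
  "name": "cpuAutoscale",
  "location": "[resourceGroup().location]",
  "properties": {
    "enabled": true,
    "targetResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'myScaleSet')]",
    "profiles": [
      {
        "name": "cpuProfile",
        "capacity": { "minimum": "1", "maximum": "5", "default": "1" },
        "rules": [
          {
            "metricTrigger": {
              "metricName": "Percentage CPU",
              "metricResourceUri": "[resourceId('Microsoft.Compute/virtualMachineScaleSets', 'myScaleSet')]",
              "timeGrain": "PT1M",
              "statistic": "Average",
              "timeWindow": "PT5M",
              "timeAggregation": "Average",
              "operator": "GreaterThan",
              "threshold": 75
            },
            "scaleAction": {
              "direction": "Increase",
              "type": "ChangeCount",
              "value": "1",
              "cooldown": "PT5M"
            }
          }
        ]
      }
    ]
  }
}
```

A production policy would typically pair this scale-out rule with a matching scale-in rule that decreases the count when CPU drops below a lower threshold.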
VMSS can be created from Windows Server images, Linux platform images, and custom images. All the VMs created as part of a VMSS share common attributes such as networks, subnets, storage, and VM extensions; this architecture lets Azure efficiently manage a common set of resources that make up an application. The autoscale policy is defined and managed by the Azure Application Insights service, which monitors the usage and performance of web applications. Based on the telemetry and alerts generated by this service, VMSS can launch or terminate VMs.
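To illustrate how those shared attributes come together, a trimmed VMSS resource definition in an ARM template might look like the sketch below. The names, VM size, and image are assumed placeholders, and several required properties (such as the OS disk and virtual network definitions) are omitted for brevity, so this fragment is not deployable as-is.

```json
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "name": "myScaleSet",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Standard_A1", "tier": "Standard", "capacity": 3 },
  "properties": {
    "upgradePolicy": { "mode": "Manual" },
    "virtualMachineProfile": {
      "osProfile": {
        "computerNamePrefix": "vmss",
        "adminUsername": "azureuser",
        "adminPassword": "[parameters('adminPassword')]"
      },
      "storageProfile": {
        "imageReference": {
          "publisher": "Canonical",
          "offer": "UbuntuServer",
          "sku": "14.04.2-LTS",
          "version": "latest"
        }
      },
      "networkProfile": {
        "networkInterfaceConfigurations": [
          {
            "name": "nicconfig",
            "properties": {
              "primary": true,
              "ipConfigurations": [
                {
                  "name": "ipconfig",
                  "properties": {
                    "subnet": { "id": "[variables('subnetRef')]" }
                  }
                }
              ]
            }
          }
        ]
      }
    }
  }
}
```

The key point is that the image, OS profile, and network configuration are defined once in the virtualMachineProfile and stamped out across every VM in the set.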
VMSS can be configured through the Azure Preview Portal or the command line, using Azure Resource Manager (ARM) templates. Microsoft has published ARM templates and samples on GitHub, which customers can use as starting points for deploying elastic workloads. The getting started guide provides a walkthrough of the feature.
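For orientation, deploying one of those templates with the cross-platform Azure CLI might look like the following; the resource group name, deployment name, and template file are placeholders, and the exact syntax should be checked against the getting started guide.

```shell
# Switch the cross-platform Azure CLI into Resource Manager mode
azure config mode arm

# Create a resource group to hold the scale set and its related resources
azure group create myResourceGroup westus

# Deploy the ARM template that describes the scale set
# (vmss-template.json stands in for one of the published samples)
azure group deployment create -g myResourceGroup -n vmssDeployment -f vmss-template.json
```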
Why some companies are moving to elastic infrastructures
One of the key attributes of the cloud is elasticity: the ability to shrink and expand the underlying infrastructure of an application. Amazon pioneered the concept with its EC2 Auto Scaling service; through integration with CloudWatch alarms and Elastic Load Balancing (ELB), EC2 instances can be launched and terminated dynamically. Many AWS customers, including Netflix and Parse, use this feature to implement elastic infrastructure. Google Compute Engine added a comparable autoscaler feature earlier this year.
With big data and containerized workloads becoming mainstream, public cloud IaaS platforms are moving toward efficient elastic infrastructure. Stateless components of an application, such as the web and application tiers, make ideal candidates for autoscale; they also offer better utilization of VMs, delivering a better return on investment. The data tier is typically deployed on managed database platforms such as Amazon RDS, Azure SQL Database, and Google Cloud SQL. This architectural pattern is becoming popular among web-scale startups and enterprises with big compute and big data workloads.