My enterprise service must be able to scale out to accommodate increased demand. It’s one of my 12 principles of operational readiness.
I know my current service has a problem because I ran a stress test: when I increased demand, the response time became unacceptable. I want to give clients a good service by adding machines during busy periods, and I want to save money by removing machines during quiet periods.
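A stress test can be as simple as pointing a load generator at the service and watching the response times. A minimal sketch using ApacheBench (the URL is a placeholder, and the request counts are just a starting point):

```shell
# Fire 10,000 requests at the service, 100 concurrently,
# and report response times and percentiles.
ab -n 10000 -c 100 http://my-service.example.com/
```

Ramp up the `-c` concurrency value across runs and watch the "Time per request" figure; the point where it climbs past your acceptable threshold is where the fleet needs to grow.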
Scale up or scale out? Manual or automatic?
There are two ways of beefing up the set of machines that make up a distributed architecture: scaling up or scaling out. Scaling up is increasing the resources available to one machine. A struggling database server can be given more CPU, memory and disk space. Scaling out is adding more machines. Web servers benefit from scaling out. Any decent IaaS service offers both methods.
I want to scale out by adding EC2 machines when the service is busy and scale back in during idle periods by terminating machines. It takes time, engineering and a little bit of art to get the size of the server fleet right.
A server fleet can be scaled out manually or automatically. An enterprise may run a 24-hour bridge where team members keep an eye on operations. They have a limited ability to scale manually: constantly monitoring performance, managing snapshots, AMIs and instances, and adding or removing EC2 machines to match demand. There is even the customer benefit of the personal touch. However, the bigger the fleet, the more the old problems creep in.
Amazon’s auto scaling command line tools
Auto Scaling is one of the killer apps of the IaaS world. It actually isn’t useful to most people, but it certainly is an attention grabber. Why would a small company, running a customer service that can already easily cope with demand from all its customers, want to automatically scale out?
The Amazon secret sauce that makes automatic scaling possible is a set of Auto Scaling Command Line Tools. Amazon supply these Java-based tools for system administrators to use at the CLI (Command Line Interface). Auto scaling features are not popular enough to have made it into the AWS console (although apparently ylastic offer auto scaling in their console).
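Before any of the `as-*` commands will run, the tools need to know where Java, the tools themselves and your credentials live. A sketch of the environment setup, assuming a Linux machine; the install paths and credential file location are examples, not gospel:

```shell
# Environment for the Auto Scaling Command Line Tools (example paths)
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
export AWS_AUTO_SCALING_HOME=/opt/AutoScaling-1.0.61.0
export AWS_CREDENTIAL_FILE=$HOME/.aws-credentials
export PATH=$PATH:$AWS_AUTO_SCALING_HOME/bin

# Smoke test: list any existing auto scaling groups.
# An empty response means the tools and credentials are working.
as-describe-auto-scaling-groups
```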
It’s pretty complicated. Carrying out a complicated IT procedure using a command line interface is enough to wipe the smile off a person’s face. I will break down these steps and show how they work over the next few posts:
- Make an AMI as the basis for new EC2 machines.
- Check the Amazon EC2 API Tools.
- Install the Auto Scaling Command Line Tools.
- Install the Amazon CloudWatch Command Line Tools.
- Add a Launch Configuration.
- Add an Auto Scaling Group.
- Add an Auto Scaling Policy.
- Add CloudWatch monitors.
- Add Auto Scaling notification.
- Thrash the nuts off that bad boy.
- Clean up.
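To give a flavour of where this series is heading, the middle steps boil down to a handful of commands. A rough sketch, assuming the tools are installed; the names, AMI ID and availability zone are placeholders:

```shell
# Describe how new EC2 machines should be built
as-create-launch-config my-launch-config \
  --image-id ami-12345678 --instance-type t1.micro

# Define the fleet: where it runs and how big it may grow
as-create-auto-scaling-group my-group \
  --launch-configuration my-launch-config \
  --availability-zones us-east-1a \
  --min-size 1 --max-size 4

# Define what a scale-out event does: add one machine.
# This prints a policy ARN used by the alarm below.
as-put-scaling-policy scale-up \
  --auto-scaling-group my-group \
  --adjustment=1 --type ChangeInCapacity

# Fire the policy when the group's average CPU stays above 80%
mon-put-metric-alarm high-cpu \
  --metric-name CPUUtilization --namespace AWS/EC2 \
  --dimensions "AutoScalingGroupName=my-group" \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions <policy-arn-from-previous-step>
```

A matching scale-in policy and low-CPU alarm handle the quiet periods; I will walk through each command properly in the later posts.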
Make an AMI as the basis for new EC2 machines
The first task is to make an AMI to use as the basis for new EC2 machines. An AMI (Amazon Machine Image) is basically an image of the root disk. It is easily copied to use as the heart of many new EC2 machines.
I set up one instance the way I want it, copy it to an AMI then use that image to fire up more instances. It’s a simple way of duplicating servers without the hassle of application installation, content copying, system upgrades, and so on.
There are drawbacks to this method, such as an image becoming out of date over time and the difficulty of keeping track of many types of image for different purposes. There are more flexible ways, such as firing up plain instances then adding servers using a configuration management app like Puppet, but these are harder to master. Firing up ready-made instances from an AMI is fine for these first steps in auto scaling.
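With the EC2 API Tools, turning a running instance into an AMI is a one-liner. A sketch, assuming an EBS-backed instance; the instance ID and names are placeholders:

```shell
# Snapshot the running instance's root disk into a reusable image
ec2-create-image i-1234abcd \
  --name "web-server-v1" \
  --description "Baseline web server for auto scaling"
```

The command returns an AMI ID, which is exactly what the Launch Configuration step needs later on.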
Next time, I will create the AMI, fire up an instance manually, and check it is good.