
Puppet automation brings a cloud technology stack to life

Nick Hardiman explains how to design and build a cloud technology stack by automating the Puppet Labs automation system.

 

 Image: iStock/solarseven
 

In part one of this cloud automation series, I listed the technology we'll use and provided details about the architecture and the steps involved in the build process. Now I'm going to run a little test on Amazon Web Services (AWS): I'm going to use the Puppet automation system to install an Apache web server. Rather than set up Puppet manually, I am going to make life hard for myself by automating the automation.

 I'll be using command line tools and cloud automation tricks to build two machines. Here's what I want to happen: commands from the new AWS CLI toolkit create the machines, the cloud-init setup system installs Puppet, and finally a Puppet agent installs the Apache service.
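Here's a rough sketch of that flow in shell. Everything in it is a placeholder for illustration (the AMI ID, key pair name, and user-data file name are made up); the real commands come later in the series.

  # Sketch only: ami-xxxxxxxx, my-key-pair, and install-puppet.txt are placeholders.
  # install-puppet.txt is a cloud-config file that, at minimum, contains:
  #   #cloud-config
  #   packages:
  #     - puppet

  # 1. Ask AWS for a machine and hand it the cloud-config as user data.
  #    cloud-init reads the user data at first boot and installs Puppet.
  aws ec2 run-instances \
    --region eu-west-1 \
    --image-id ami-xxxxxxxx \
    --instance-type t1.micro \
    --key-name my-key-pair \
    --user-data file://install-puppet.txt

  # 2. On the new machine, the Puppet agent asks the Puppet master for its
  #    catalogue, which tells it to install Apache:
  #      sudo puppet agent --test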

Design and build

The design and build of a project runs through a few steps like these:

  1. Scribble a simple design. Describe the new system without getting bogged down in the details.
  2. Produce a complex design. Get bogged down in the details.
  3. Collect the tools.
  4. Build the system.

Different IT workers approach the work in different ways. A solutions architect completes step 1 but is too important to do any more. A sysadmin gets stuck on step 3 for his entire career. A developer plunges straight into step 4 on the live production platform.  I kick off with a simple design that outlines the technology stack.

The technology stack

A technology stack is a series of layers that hide horrible technical complexity. It's the IT equivalent of theater: actors don't have to understand what happens backstage, just as application developers don't have to understand what sysadmins do.

The technology stack, from the bottom to the top, looks like this:

The foundation: AWS machines

This is a test system with two Amazon EC2 machines. One machine hosts a Puppet master and the other hosts a Puppet agent and Apache. Apache provides the only customer service in this test system. Here are the two machines (a tagging sketch follows the list):

  • p-agent-machine: the machine running the customer service and the Puppet agent.
  • p-master-machine: the machine running the Puppet master. Only sysadmins (not customers) use this machine.
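Those names can be recorded as EC2 Name tags once the machines exist. A minimal sketch, using made-up instance IDs:

  # Sketch only: i-11111111 and i-22222222 stand in for the real instance IDs.
  aws ec2 create-tags --region eu-west-1 \
    --resources i-11111111 \
    --tags Key=Name,Value=p-master-machine

  aws ec2 create-tags --region eu-west-1 \
    --resources i-22222222 \
    --tags Key=Name,Value=p-agent-machine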

AWS provides many shapes and sizes of virtual machines, including the DB1 database server, the G2 graphics machine, and the M3 office mule. Which one is most suitable?

The test system doesn't have to deal with heavy loads, or High Availability (HA), or anything very much. I pick the smallest, cheapest machines I can use: t1.micro.

The OS in the middle: Ubuntu Server 14.04 (Trusty Tahr)

I'm building the applications on top of Ubuntu Server 14.04 (Trusty Tahr), the upcoming Long Term Support (LTS) release of the free Ubuntu OS. Version 14.04 is at the tail end of its development and testing phase and won't enter production support for several weeks; when it does, it will be officially supported until 2019.

Ubuntu images for AWS

Ubuntu provides a big list of Ubuntu images: a few recent releases that are ready for use with AWS EC2. Ubuntu also provides a list of Trusty images for the next LTS release. Each of these images is an Amazon Machine Image (AMI): the disk an EC2 machine boots from.

I want the next big thing, so I choose from the list of Trusty images. A development release like this changes a lot in the run-up to release, so these images are rebuilt every day.

Here's where it starts to get technical. You have to pick the line with the right values for you. I picked this one.

  • Region: eu-west-1. Have you chosen an AWS region yet?
  • Arch: amd64. Use a modern 64-bit machine, not an antique 32-bit machine.
  • Root store: ebs. The other root store choices are hvm (for GPU instances) and instance-store (cheaper and ephemeral).
  • ami: ami-50b64527. This label is required to start a new EC2 machine. If you like using a web UI, the big Launch button takes you to the EC2 section of your AWS management console.
  • ec2 command: ec2-run-instances ami-50b64527 -t t1.micro --region eu-west-1 --key ${EC2_KEYPAIR_EU_WEST_1}. If you prefer a CLI, this is the command to launch your new machine; an equivalent for the newer AWS CLI toolkit is sketched below.
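That ec2-run-instances command comes from the older EC2 API tools. This series leans on the newer AWS CLI toolkit instead, so here's what I'd expect the equivalent to look like (a sketch; daily AMI IDs come and go, so check the Trusty list for the current one):

  # Rough equivalent using the newer aws CLI.
  aws ec2 run-instances \
    --region eu-west-1 \
    --image-id ami-50b64527 \
    --instance-type t1.micro \
    --key-name "${EC2_KEYPAIR_EU_WEST_1}"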

If you want an Ubuntu production release, don't use the list of Trusty images; choose from the big list of Ubuntu images.

Picking the right image means deciphering even more fields. Here's the full list for Trusty, showing an older build (check out that Release field); a quick cross-check with the AWS CLI is sketched after the list.

  • Zone: eu-west-1. This is the AWS region, not an Availability Zone.
  • Name: trusty. Abbreviation of "Ubuntu Server 14.04 (Trusty Tahr)."
  • Version: devel. Trusty is not production-ready and won't be for weeks.
  • Arch: amd64.
  • Instance Type: ebs. This is the root store.
  • Release: 20140111. A datestamp that shows this is several weeks old.
  • AMI-ID: ami-ec50a19b. It's not the same image as the daily build (see ami in the list above); the IDs don't match.
  • AKI-ID: aki-71665e05. The Amazon Kernel Image, the special Linux kernel that boots this image on AWS EC2.
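A quick way to cross-check those fields is to ask EC2 to describe the image itself. A sketch, assuming the AWS CLI is already installed and configured:

  # Print the image's name, architecture, root store, and kernel image,
  # which should match the table above.
  aws ec2 describe-images \
    --region eu-west-1 \
    --image-ids ami-ec50a19b \
    --query 'Images[0].[Name,Architecture,RootDeviceType,KernelId]' \
    --output text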

Applications on top: cloud-init, Puppet, and Apache

All applications are installed from packages. Some are installed in the Trusty OS already, such as cloud-init and the supporting Python things that make it go. Others must be added, such as Puppet and its Ruby bits and pieces, and the Apache2 server.

You don't have to worry about all these extra supporting packages. Ubuntu uses Debian's APT (Advanced Package Tool), hooked up to Ubuntu's software repositories. APT has a built-in dependency system that automatically installs all the required supporting packages.
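For example, once a Trusty machine is up, adding Puppet or Apache is a one-line job, and APT quietly drags in Ruby and everything else they need. A minimal sketch, assuming the standard Ubuntu repositories:

  # Refresh the package index, then let APT resolve and install the
  # supporting packages automatically.
  sudo apt-get update
  sudo apt-get install -y puppet

  # The Apache web server installs the same way.
  sudo apt-get install -y apache2

  # Curious about what a package drags in? Ask APT.
  apt-cache depends puppet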

What's next?

The hands-on hacking fun starts with an install of the new AWS CLI toolkit. 
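If you want a head start, this is one common way to install it (a sketch, assuming Python and pip are already on your workstation):

  # Install the AWS CLI from PyPI, then store credentials and a default region.
  sudo pip install awscli
  aws configure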

 

