To offer one example, a company I worked with needed to test application scalability under heavy user load. The test in question needed to run 100 simultaneous browser instances all generating significant traffic over a two-day period.
The old way would have been to scrounge up 100 machines from somewhere, manually install the operating system and application software stack, and then fire them off. A slightly newer way would be to encapsulate those 100 instances in virtual machines (VMs) and have the virtualization software launch them. Depending on VM density, somewhere between five and 20 physical machines would be needed, not to mention the required investment in virtualization software. And with either alternative, at the end of the two days you would have been left with unused hardware.
Instead, this company created 100 Amazon Elastic Compute Cloud (EC2) instances from a single machine image stored in Amazon's S3 storage service -- services collectively referred to as Amazon Web Services (AWS) -- and ran the entire two-day test in the cloud, with no leftover hardware to dispose of afterward.
With results like these, the question becomes: how can I best take advantage of this environment? The approaches below range from simple to more complex. Each successively more complex use of Amazon Web Services encompasses more of the development and test process, using automation to tie the stages together while leveraging the characteristics of Amazon's cloud.
Using AWS for testing
If you want to replicate the above use of AWS -- that is, take a preconfigured application and move one or more instances into the cloud for testing -- the process is relatively straightforward. Naturally, you need an AWS account; there isn't space in this article to cover signup, but the AWS site has full instructions.
Amazon provides command-line tools to take an existing machine and convert it into an Amazon Machine Image (AMI), which can be stored in S3 and then instantiated as a running EC2 instance. To bundle a running system's volume into an image, use the command ec2-bundle-vol (ec2-bundle-image does the same for a prebuilt loopback image file); to upload the bundle to S3, use ec2-upload-bundle. (See the EC2 Developer Guide for a full overview of the command-line tools.)
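As a sketch, the bundle-and-upload sequence might look like the following. The key and certificate file names, account number, and bucket name are all placeholders; ec2-register, part of the same tool set, is the final step that makes the uploaded bundle launchable.

```shell
# Run on the machine being imaged: bundle the local volume into an AMI.
# Key file, certificate file, and account ID below are placeholders.
ec2-bundle-vol -d /mnt \
  -k pk-placeholder.pem -c cert-placeholder.pem \
  -u 111122223333

# Upload the resulting bundle to an S3 bucket (bucket name is a placeholder).
ec2-upload-bundle -b my-ami-bucket \
  -m /mnt/image.manifest.xml \
  -a "$AWS_ACCESS_KEY" -s "$AWS_SECRET_KEY"

# Register the bundle so it can be launched as an EC2 instance.
ec2-register my-ami-bucket/image.manifest.xml
```

These commands require live AWS credentials, so the sequence is a template rather than something to paste verbatim.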
The simplest way to leverage AWS in the testing world is to convert the entire application setup into one or more S3 images and then set the running instances up to communicate with one another. Amazon provides a couple of ways to configure each EC2 instance so they can all communicate. One is a Firefox plugin called ElasticFox; the second is an Ajax-enabled AWS management page. Each lists all of the AMIs running under an account and offers access to individual instances for configuration. Typical configuration tasks include opening the ports the application needs in order to run as well as creating (if necessary) Elastic IP (EIP) addresses: static public IP addresses that remain assigned to your account within EC2. This capability allows different parts of the application to reliably find one another via known IP addresses. There is an extra, though minor, charge for each elastic IP address.
By default, Amazon lets you reserve up to five EIPs per account; to reserve more, you can submit a request to Amazon at Request to Increase Elastic IP Address Limit. EIPs are a limited resource and should be used sparingly, typically only for EC2 instances that are accessed directly through the Internet. If your application includes a large farm of servers (e.g., Apache Web servers), configure a load-balancing proxy such as HAProxy that is reached via an EIP. The remaining EC2 instances can then be reached by name via an external DNS service such as DNSMadeEasy, or you can run your own DNS server on an EC2 instance with its own EIP.
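Using the EC2 API tools, opening the application's ports and wiring an EIP to a front-end instance might look like this sketch; the port numbers, instance ID, and IP address are illustrative placeholders.

```shell
# Open the ports the application needs in the default security group
# (port numbers here are illustrative).
ec2-authorize default -p 80
ec2-authorize default -p 443

# Allocate an Elastic IP address (the command prints the new address),
# then bind it to the front-end instance -- e.g., the HAProxy machine.
# The instance ID and address below are placeholders.
ec2-allocate-address
ec2-associate-address -i i-10a64379 198.51.100.7
```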
Once the application is up and running, whatever tests need to be run can be executed. These can either be run from local systems, or, if appropriate, the tests themselves can be uploaded to EC2 instances and run inside AWS. If the testing procedure is likely to be run repeatedly, I recommend that the tests themselves be placed in permanent S3 AMI instances to facilitate loading and execution. (See below regarding "burning" AMI images in S3.)
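If the tests are driven from local systems, a small driver script can fan them out to the EC2 instances over SSH. Everything here is hypothetical: hosts.txt is assumed to hold the public DNS names reported by ec2-describe-instances, and the key file and remote test command are stand-ins for whatever your harness actually uses.

```shell
# Hypothetical driver: run the load-test client on each EC2 instance
# listed in hosts.txt, in parallel, then wait for all of them to finish.
while read host; do
  ssh -i testkey.pem "root@$host" '/opt/tests/run-load-test.sh' &
done < hosts.txt
wait   # block until every remote test client has exited
```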
Integrated development and test in AWS
Moving testing into the cloud is an obvious win -- less hardware, ease of scale and so on. But even better is to leverage AWS for an integrated development and test process. Let's examine how you might go about doing this.
EC2 instances (executing AMIs) are, essentially, empty machine containers into which you place software to create your execution instance. While starting from an existing physical (or, for that matter, local virtual) machine is perfectly fine, you can also leverage pre-existing AMIs offered on AWS; these serve as a starting point onto which you load additional software to create the desired EC2 instance. A third alternative is to create an AMI from scratch. It's more work, but it is the "purest" form of an AMI.
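For the second route, the public AMIs that Amazon publishes can be listed and launched with the API tools; the AMI ID and keypair name below are placeholders.

```shell
# List the public AMIs that Amazon itself publishes.
ec2-describe-images -o amazon

# Launch one instance of a chosen image (AMI ID and keypair name are
# placeholders), then connect to it and layer your own software on top.
ec2-run-instances ami-235fba4a -n 1 -k my-keypair
```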
Once the image is in the desired form, an S3 version must be burned. Because EC2 instances are launched from images stored in S3, burning the S3 version is a critical step in the development process.
Something to keep in mind for development is the fact that creating S3 images is not instantaneous. It takes on the order of five to 15 minutes.
A further aspect of AWS that affects the development process is the fact that run-time modifications to an EC2 instance are not persistent. Perhaps the best way to understand this is to think of an AMI instance as something like a live CD: Modifications can be made to the running system that affect that instance, but when the machine is shut down, none of those changes are stored permanently. Therefore, run-time modifications are not present the next time the system is run.
This has serious consequences for the tight iterative development process common to today's development environments. The rapid code/build/unit test approach founders on the nonpersistent aspect of run-time modifications. Of course, one could build a new S3 image every time some portion of the application system changes, but given the time burning the S3 image takes, this is unpalatable for these environments.
Fortunately, Amazon provides a persistent storage mechanism whose contents survive instance shutdown: Elastic Block Store (EBS). The catch is that EBS does not cover the entire AMI -- that's what S3 is for. EBS persists only the file systems configured to use it, and an EBS volume must be attached after the initial bootup of the AMI. This means a command must be executed to perform the attachment. While this command can be run manually, it is better placed in the AMI's startup scripts to ensure it is performed consistently.
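A sketch of the volume setup and attachment might look like this; the volume ID, instance ID, availability zone, device name, and mount point are placeholders, and the mkfs step applies only the first time the volume is used.

```shell
# From a management host: create a 10 GB volume in the instance's zone
# and attach it (volume ID, instance ID, and zone are placeholders).
ec2-create-volume --size 10 -z us-east-1a
ec2-attach-volume vol-4d826724 -i i-10a64379 -d /dev/sdf

# On the instance -- typically from a startup script: make a file system
# the first time only, then mount the volume where the application
# expects its changeable files.
# mkfs.ext3 /dev/sdf        # first use only -- destroys existing data
mkdir -p /mnt/ebs
mount /dev/sdf /mnt/ebs
```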
All files within the image that are modified during system use should be located on an EBS file system; this ensures that all run-time changes persist across individual AMI sessions. Data to consider for inclusion on the EBS volume includes database files, user home directories and, crucially, code changes to the application itself. If the files that will change include system files that must live in particular places in the file tree (e.g., configuration files in /etc), create symbolic links from those locations to files that reside on EBS.
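The linking approach can be demonstrated locally. In this sketch, paths under /tmp stand in for /etc and for the EBS mount point, and the config file name is made up for illustration; on a real AMI the link would live in /etc and point into the mounted EBS volume.

```shell
# Stand-ins for the EBS mount point and /etc (placeholders for the demo).
EBS_MOUNT=/tmp/ebs-demo/mnt/ebs
SYS_DIR=/tmp/ebs-demo/etc
mkdir -p "$EBS_MOUNT/etc" "$SYS_DIR"

# Move the changeable file onto the EBS volume once...
echo "max_connections=100" > "$EBS_MOUNT/etc/myapp.conf"

# ...and point the expected system location at it with a symbolic link.
ln -sf "$EBS_MOUNT/etc/myapp.conf" "$SYS_DIR/myapp.conf"

# Edits made through the system path now land on persistent storage.
echo "log_level=debug" >> "$SYS_DIR/myapp.conf"
```

After this, the application reads and writes its configuration at the usual system path, while the bytes actually live on the EBS volume and survive instance shutdown.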
With changeable systems available, the full development and test lifecycle can be supported in EC2. Developers can continue the code/build/unit test quick cycle they are used to. When the system reaches a state of stability appropriate for testing, the AMI can be instantiated for test purposes, separate from the development instances that are being actively worked on.
AWS can be a valuable resource for organizations seeking greater efficiency and reduced hardware costs. However, using AWS can impose changes on the development and test process. More specifically, the typical rapid code/build/test cycle needs to be adjusted to reflect the characteristics of the AWS infrastructure -- especially the need for changeable files to be segregated into file systems stored on EBS.
This was first published in April 2009