After an enterprise has chosen to implement cloud-based servers, the next decisions include nailing down the required server size and spelling out the expected return on investment. Determining your resource requirements (CPU, storage, memory) is the easy part; knowing what economic value you will receive from a cloud vendor is trickier.
Unlike the hardware in your server room, the performance of cloud services can change overnight. Enterprise IT has full visibility into its own hardware but typically has zero visibility into a cloud server provider's hardware infrastructure. And a cloud provider that grows too quickly might not be able to keep up with demand and might oversubscribe hardware or bandwidth.
One of the more attractive aspects of the public cloud is the ability to simply "turn off" a virtual server for days or weeks on end. In general, when a cloud server isn't running, you're charged only a nominal fee for the disk storage the server image occupies.
In a multi-vendor cloud performance benchmark test of cloud service providers, or "Compute as a Service" vendors, The Tolly Group of Tolly Enterprises LLC evaluated Amazon Web Services (AWS), Dimension Data, IBM and Rackspace. These providers represent just a few on the market, but the results can help enterprise IT evaluate any cloud services it is considering. Full results are available in The Tolly Group's document #213131. It's important to note that Dimension Data commissioned the study.
While the study focused on performance, not price, its initial findings can have a big impact on running costs. Of the four services evaluated, AWS and Dimension Data implemented the expected pay-when-you-run model. With IBM and Rackspace, once you provisioned a virtual server, you began paying the full "server running" cost -- even if you shut the server down.
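The cost difference between the two billing models can be sketched with simple arithmetic. The hourly and storage rates below are hypothetical placeholders, not actual vendor pricing:

```python
# Sketch of the two billing models described above, using hypothetical
# rates -- not actual vendor pricing.
HOURS_PER_MONTH = 730

def pay_when_running(hourly_rate, hours_running, storage_fee=5.00):
    """AWS/Dimension Data style: pay compute only while the server runs,
    plus a nominal monthly fee for the stopped image's disk storage."""
    return hourly_rate * hours_running + storage_fee

def pay_when_provisioned(hourly_rate):
    """IBM/Rackspace style (as tested): pay the full 'server running'
    rate for every hour the server exists, even when shut down."""
    return hourly_rate * HOURS_PER_MONTH

# A test server used 40 hours per week (~160 hours per month)
rate = 0.20  # hypothetical $/hour
print(pay_when_running(rate, 160))   # 37.0
print(pay_when_provisioned(rate))    # 146.0
```

Under these assumed rates, a part-time test server costs roughly four times as much when billed for every provisioned hour.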
The IBM and Rackspace approach all but kills the usefulness of cloud servers in a test environment. In testing, users want to maintain various images and run different operating system or application versions -- all of which can be brought up and down as needed. With the IBM and Rackspace products, this simply becomes untenable.
As a solution, Rackspace offered a workaround: clone the virtual machine, keep the clone, delete the original machine, and then create a new instance from the clone when the virtual machine is needed again.
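The workaround amounts to a sequence of state transitions. The sketch below simulates them with an in-memory dict standing in for the cloud -- it is not the real Rackspace API, just an illustration of why deletion, not shutdown, is what stops the billing:

```python
# Mock illustration of the Rackspace workaround described above:
# snapshot the server, delete it (to stop billing), and rebuild from
# the snapshot later. The "cloud" here is just an in-memory dict.
cloud = {"servers": {}, "images": {}}

def create_server(name, image):
    cloud["servers"][name] = {"image": image, "billing": True}

def snapshot(server, image_name):
    cloud["images"][image_name] = dict(cloud["servers"][server])

def delete_server(name):
    del cloud["servers"][name]  # billing stops only on deletion

# Provision, then "park" the server without paying compute charges:
create_server("test-vm", "test-image")
snapshot("test-vm", "test-vm-clone")
delete_server("test-vm")
assert "test-vm" not in cloud["servers"]  # no longer billed

# Later, recreate the instance from the saved clone:
create_server("test-vm", cloud["images"]["test-vm-clone"]["image"])
```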
One vCPU size does not fit all
A key element of VM performance benchmark testing is the number and characteristics of the virtual CPU (vCPU). Among services tested, all but AWS use vCPU as a basic element when defining small, medium and large server sizes -- with systems generally having 1, 2 and 4 vCPUs, respectively. In this test, IBM and Rackspace didn't offer 1 vCPU servers.
Amazon, however, has created a synthetic unit of compute power that it calls the Elastic Compute Unit, or ECU. Simply put, AWS states that one ECU represents the equivalent compute capacity of a 1.0 to 1.2 GHz 2007 Opteron or Xeon processor. Amazon's small, medium and large instances have 1, 2 and 4 ECUs, respectively, and 1, 1 and 2 vCPUs, respectively. For this test, we chose to match one Amazon ECU to one vCPU from Dimension Data, IBM and Rackspace.
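The sizing figures above can be captured in a small lookup table. The sketch below encodes the matching rule the test used (one Amazon ECU to one competitor vCPU):

```python
# AWS instance sizes carry both an ECU rating and a vCPU count,
# per the figures above; the other vendors size purely by vCPU.
AWS_SIZES = {
    "small":  {"ecu": 1, "vcpu": 1},
    "medium": {"ecu": 2, "vcpu": 1},
    "large":  {"ecu": 4, "vcpu": 2},
}

def matched_vcpus(aws_size):
    """The test matched one Amazon ECU to one competitor vCPU, so an
    AWS size is compared against servers with that many vCPUs."""
    return AWS_SIZES[aws_size]["ecu"]

print(matched_vcpus("large"))  # 4 -> compared against 4-vCPU servers
```

Note that Amazon's medium instance, matched this way, goes up against 2-vCPU servers despite exposing only a single vCPU itself.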
Results show Dimension Data's 1 vCPU performance was significantly better than Amazon's 1 ECU/1 vCPU performance. Interestingly, when comparing systems with 2 vCPUs/ECUs, Amazon's performance improved over its 1 ECU performance, showing that the 2 ECU option offers some additional compute power even though it is still a single vCPU. AWS may deliver better CPU performance as the ECU count increases, but each hour will also cost more to run. If you choose to benchmark vCPU to vCPU, you would compare Amazon's 4 ECU and 8 ECU options to other vendors' 2 and 4 vCPU offerings.
RAM performance benchmarks
Memory performance is important for memory-intensive applications. Results showed that, for systems with equivalent memory, the number of memory operations per second varied dramatically. For example, in tests with the large server configuration, Dimension Data delivered 18,542 operations per second; IBM and Rackspace delivered 8,772 and 7,818 operations per second, respectively. Amazon delivered 3,200 operations per second.
Local file performance
The number of file operations per second on the local server disk varied significantly across the cloud servers tested. On the large system configuration, Dimension Data delivered 3,473 operations per second; Amazon had the next highest performance at 1,342 operations per second; and Rackspace and IBM were significantly lower with 659 and 527 file operations per second, respectively.
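The spread in these two benchmarks is easier to see as ratios. The sketch below normalizes the reported large-configuration figures against the slowest vendor in each test:

```python
# Relative performance computed from the reported large-configuration
# figures above (operations per second, higher is better).
RESULTS = {
    "memory_ops": {"Dimension Data": 18542, "IBM": 8772,
                   "Rackspace": 7818, "Amazon": 3200},
    "file_ops":   {"Dimension Data": 3473, "Amazon": 1342,
                   "Rackspace": 659, "IBM": 527},
}

def relative(scores):
    """Normalize each vendor's score against the slowest vendor."""
    low = min(scores.values())
    return {vendor: round(ops / low, 1) for vendor, ops in scores.items()}

print(relative(RESULTS["memory_ops"]))
print(relative(RESULTS["file_ops"]))
```

By this measure, the fastest service delivered roughly 5.8x the slowest vendor's memory throughput and 6.6x its local file throughput.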
Comparing bandwidth performance of cloud servers
Beyond single-system performance, network bandwidth among systems is arguably the next most important benchmark. While all of the servers tested were nominally outfitted with virtual Gigabit Ethernet adapters, results varied dramatically, falling both under and over 1 Gbps.
In the large system test, Rackspace demonstrated 479 Mbit/s of bi-directional traffic, compared to 1.24 Gbps for AWS and 1.86 Gbps for IBM. Dimension Data delivered 4.46 Gbps of system-to-system throughput.
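Comparing each measured figure against the adapter's nominal 1 Gbps rating puts these results in context. (Because the traffic is bi-directional, a full-duplex Gigabit link can carry up to 2 Gbps in aggregate, so figures above 100% are possible.)

```python
# Measured bi-directional system-to-system throughput from the
# large-system figures above, versus the nominal 1 Gbps rating of
# the virtual Gigabit Ethernet adapter.
NOMINAL_MBPS = 1000  # 1 Gbps nominal adapter speed

THROUGHPUT_MBPS = {
    "Rackspace": 479,
    "Amazon": 1240,
    "IBM": 1860,
    "Dimension Data": 4460,
}

for vendor, mbps in THROUGHPUT_MBPS.items():
    print(f"{vendor}: {mbps / NOMINAL_MBPS:.0%} of nominal line rate")
```

Rackspace's result falls well short of even a single direction's line rate, while Dimension Data's exceeds what a physical full-duplex Gigabit adapter could carry at all.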
Cloud servers that appear similar or equal on paper deliver dramatically different results when benchmarked. Ironically, it's unlikely that physical systems with similar specs would deliver such dramatically different results. So while the cloud can simplify certain elements of IT infrastructure, it complicates others -- making system benchmarking even more important when deciding which cloud service provider to use.
About the author:
Kevin Tolly is founder of The Tolly Group, which has been a leading provider of third-party validation and testing services for more than two decades.