What's not to like about cloud servers? Deploying a cloud server means there's no more hardware to buy, you can choose the best server size for your needs and you can pay as you go. So, what's the problem? Well, only the first of these statements -- no more hardware -- is universally true.
Before examining the performance metrics of the cloud providers on your short list, carefully compare their cloud servers on three points:
- How fast is the cloud server's vCPU?
- How quickly do memory and disk respond?
- What is the actual network throughput?
Cloud servers are all "virtual" servers: they run on physical hardware shared with other virtual servers. Virtual server platforms allow administrators to provision servers by specifying the CPU, memory and disk characteristics each system will have when it is brought online.
Naturally, cloud providers offer systems that are different "sizes" with regard to power and price. Offerings typically have two key dimensions: CPU and memory. For a basic orientation, it is safe to think of "small" as 1 vCPU and 2 GB of RAM allocated, "medium" as 2 vCPUs and 4 GB of RAM, and "large" as 4 vCPUs and 8 GB of RAM.
Several vendors offer similar technologies but use slightly different terminology. For example, Amazon Web Services' price/performance model is based on what it calls the "EC2 Compute Unit." Amazon defines it this way: "One EC2 Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0- to 1.2-GHz 2007 Opteron or 2007 Xeon processor."
For the sake of professional sanity, think of a single ECU as being the equivalent of a single vCPU from other cloud server vendors. Therefore, two ECUs would equal 2 vCPUs from other vendors.
In addition to vCPU and RAM, cloud server providers also specify the amount of disk capacity available on each server, though capacities vary too widely to generalize. Provisioning additional disks is a standard option, as different applications and users can have dramatically different disk requirements.
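Once a server is provisioned, it's worth confirming that the OS actually sees the vCPUs, RAM and disk you paid for. A minimal sketch using standard Linux utilities (assuming a Linux cloud server):

```shell
# Sanity-check what a newly provisioned Linux cloud server reports.
nproc        # number of vCPUs visible to the OS
free -h      # total and available RAM
lsblk        # attached disks and their capacities
df -h /      # usable space on the root filesystem
```

Comparing this output against the provider's published specifications for the instance size is a quick first check before running any benchmarks.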
While every server will have network connectivity, cloud service providers differ on how (or if) they advertise the network bandwidth available to cloud servers of different sizes. You can most likely expect to see a Gigabit Ethernet connection specified.
Remember: All of this is virtual. The virtualized hardware interacts with the cloud server OS in the same way that real hardware does; however, it does not necessarily deliver the same performance as it would if the OS were running on the physical hardware.
Cloud server vendors make provisioning decisions about how many virtual servers to run on a physical server, as well as how many real CPUs and how much memory each physical server should have. These decisions directly affect the provider's bottom line and the performance of your end users' applications.
Measuring up cloud service providers
After comparing cloud servers on these three criteria, you're ready to evaluate performance benchmark tests. And there are several ways to benchmark cloud server performance.
One way to benchmark CPU, RAM and disk use (among other things) is through a third-party tool. The open source Phoronix Test Suite (PTS), for example, runs on all major OS platforms -- including Windows and Linux.
The selection of tests PTS offers is vast -- mind-boggling, even -- but a basic installation includes core tests of CPU, RAM and local file services. C-Ray is a compute-intensive benchmark that stresses the CPU, RAMspeed stresses memory operations, and PostMark exercises the file performance of the local server disk. These tools can quickly show the differences between vendor offerings.
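The three tests above can be run individually from the PTS command line. A sketch, assuming the Phoronix Test Suite is already installed and the named test profiles are available in its repository:

```shell
# Run the three core PTS benchmarks discussed above.
# "benchmark" installs a test profile (if needed) and then runs it.
phoronix-test-suite benchmark pts/c-ray      # CPU (compute-intensive ray tracing)
phoronix-test-suite benchmark pts/ramspeed   # memory/RAM operations
phoronix-test-suite benchmark pts/postmark   # local disk file performance
```

Running the same profiles on each candidate cloud server yields directly comparable numbers.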
If your cloud server supports external clients -- either in the same cloud data center or across the Internet -- you will be driving the Ethernet adapter. You'll want to know how much actual throughput you can expect from your virtual adapter.
One way to determine this is to use Iperf, another open source benchmarking tool that's hosted at Google Code. Combining PTS and Iperf gives enterprises a straightforward way to benchmark the essential elements of a cloud server.
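A typical Iperf throughput test uses two machines: the cloud server under test runs in server mode, and a second machine drives traffic at it. A sketch, with a placeholder address standing in for the server's IP:

```shell
# On the cloud server under test:
iperf -s                        # listen for incoming test connections

# On a client machine (same data center, or across the Internet);
# 198.51.100.10 is a placeholder for the server's address:
iperf -c 198.51.100.10 -t 30    # run a 30-second TCP throughput test
```

Running the client from inside the same cloud data center and then from across the Internet shows both the virtual adapter's ceiling and the real-world throughput external users will see.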
Because cloud servers allow you to pay as you go, booting a server turns on the meter. When you don't need the server -- as is often the case in test-and-development scenarios -- shut it down and stop the clock. Be aware, though, that some providers keep billing for stopped servers or their attached storage, so check your provider's policy.
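On AWS, for example, stopping and restarting a server is a single CLI call. A sketch, assuming a configured AWS CLI; the instance ID shown is hypothetical:

```shell
# Stop an idle EC2 instance to halt compute billing
# (i-0123456789abcdef0 is a made-up example ID).
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Later, bring it back when testing resumes:
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```

Scripting this into a schedule for test-and-development servers is a common way to keep the meter off overnight and on weekends.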
About the author:
Kevin Tolly is founder of The Tolly Group, which has been a leading provider of third-party validation and testing services for more than two decades.