Cedexis gives its users a little bit of phone-home code (although it would more accurately be called phone-around-randomly code) for their websites that sends out requests and tracks performance of those requests from a list of global providers. Cedexis provides a rough cut of similar data on their site, but this report is much more detailed.
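The mechanics are simple to sketch. Cedexis hasn't published its client here, so the following is a minimal illustration of the general technique, not its actual code: time a request to each provider's test object and tally how often it succeeds.

```python
import time

def probe(url, fetch):
    """Time one request to a provider's test object.

    `fetch` is any callable that retrieves the URL (a browser XHR in the
    real phone-around client; injected here so the sketch is testable).
    Returns (elapsed_seconds, succeeded).
    """
    start = time.monotonic()  # monotonic clock: immune to wall-clock jumps
    try:
        fetch(url)
        return time.monotonic() - start, True
    except Exception:
        return time.monotonic() - start, False

def availability(results):
    """Percentage of probes that succeeded -- the kind of number
    (e.g. GAE's ~93%) the report aggregates across millions of requests."""
    ok = sum(1 for _, succeeded in results if succeeded)
    return 100.0 * ok / len(results)
```

The real client runs from end users' browsers worldwide, which is exactly why the results skew toward wherever the most users are (here, the EU).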
It covers the usual suspects in cloud: Amazon Web Services, GoGrid, Joyent, Rackspace, Microsoft Azure (congrats, Microsoft, you're one of the cloud big boys) and Google App Engine. It ran 300 million requests and compiled the results.
Note: The report notes that Cedexis is an EU firm and most of the collected data is weighted toward measuring the cloud from the EU looking out.
A cursory survey of the interesting results goes as follows:
The cloud divide
The digital divide in global availability is stark. This map might as well be a look at just first-world countries and developing nations because it so clearly shows where network access is plentiful and where it is not.
This matters because cloud computing has the potential to offset this apparent lack of access. Those living in a low-access region now have easy, robust, affordable options to run online technologies from anywhere they like. A smart person in Africa or the hinterlands of Australia can, for all intents and purposes, run a business anywhere in the world. This has been a long, slow trend for a while; cloud computing accelerates it. Enterprises, too, can look at these performance indicators and quickly get solid insights into where and how to locate resources to gain the most advantage.
Google kind of stinks a little in France
Google App Engine (GAE) availability came in surprisingly low: roughly 93% for simple HTTP availability. The report says that Google was contacted and graciously examined the results with Bitcurrent and Cedexis. The determination was that Google App Engine servers are in the U.S., and since Cedexis busts past online caching to make direct requests from cloud providers, performance suffered. This is a blow to one of the favorite sacred cows of cloud, ubiquity (or the appearance of it). While Google has an impressive collection of servers and pipe all over the world, it's not working with magic. Physics matter.
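"Busting past" caching usually means making each request unique so no CDN or proxy along the way can answer it, forcing the round trip all the way to the provider's servers. The report doesn't spell out Cedexis's exact method; a common technique, sketched here, is appending a throwaway query parameter:

```python
import uuid
from urllib.parse import urlencode

def cache_bust(url):
    """Append a unique query parameter so intermediate caches miss
    and the request is served directly by the origin (the cloud provider).

    Hypothetical helper for illustration; parameter name "cb" is arbitrary.
    """
    separator = "&" if "?" in url else "?"
    return url + separator + urlencode({"cb": uuid.uuid4().hex})
```

Because every probe skips the caches, the measurement reflects the true distance to the provider's data centers, which is why U.S.-hosted services look slow and flaky from EU vantage points.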
Google also declared that another test probably had a 5% error rate caused by client-side or user-created application errors. They could be right -- maybe enough people don't fully "get" GAE and are building apps with a 5% failure rate -- but it does smack faintly of "blame the victim." If I were considering a home for my Web-based application, I might pick a provider a little less finicky. On the other hand, GAE is, for all intents and purposes, free. You can pile your extra money in one hand to balance out the headaches in the other.
Other providers ain't so hot either
Joyent was also branded with a less-than-sparkling 94.5% availability, probably for exactly the same reason as Google: most of the tests came from the EU and Joyent is firmly ensconced in U.S. data centers. Unlike Google, they don't have a giant engineering and marketing team to throw at poor benchmarks. Run the same tests from San Bernardino or even Japan and I bet the picture would be rosier. Regardless, the lesson is clear: location makes a difference for different services.
Service-level agreement? What service-level agreement?
Amazon Web Services (AWS) gets the treatment, too. Thanks to its marvelously engineered system of overlapping, seamless-looking regions and Availability Zones around the world, it's painfully easy to compare how each region performs. The results, however, are slightly underwhelming, especially given the (possibly inflated) expectations.
"Amazon is a perfect test case," says the report, "since the company has four zones around the world."
"Why, that doesn't look like the SLA I was promised!" might go the cry from the user base. Relax; AWS never promised that your Web requests would get 99.95% uptime, only that your instances would be available. "Buyer beware" is in full force in the cloud, it seems.
This report is a fascinating look at real-world performance from cloud providers. It's important to note that it's heavily weighted to the region which produced the most tests (the EU) and therefore not necessarily an accurate picture of how your cloud of choice will perform from where you are. It's equally important to understand that the significance of this kind of analysis is that it exists, and the cloud providers can't stop you from looking at it, nor can they easily fudge the results. It's another brick in the foundation that's going to make public clouds a truly useful and mainstream part of the IT world: useful, fair and objective performance data.
Bitcurrent joins a few other projects of this kind, including Guy Rosen's State of the Cloud and live trackers like CloudSleuth, CloudClimate.com and benchmarker CloudHarmony. All I can say is, keep it coming. I can tell you exactly how any given piece of computing hardware performs -- like my RAM or a hard drive -- accurately, objectively and for free in about five minutes. Cloud needs to be the same.
Carl Brooks is the Senior Technology Writer for SearchCloudComputing.com. Contact him at email@example.com.