Explaining how cloud works in the real world

In this week's episode of Cloud Cover TV, Carl Brooks explains everyone's role in this new cloud computing world.

We discuss:

  • The new cloud computing reference architecture from NIST
  • How to define cloud users, cloud brokers and cloud providers
  • Cloud benchmarks from Bitcurrent
  • Monitoring the performance of the major cloud companies
  • Analyzing cloud connectivity around the globe
  • Are cloud providers telling the truth about availability?

For the rest of the episodes, check out the Cloud Cover TV archive.

Read the full transcript from this video below. Please note the transcript is for reference only and may include limited inaccuracies. To suggest a transcript correction, contact editor@searchsecurity.com.

Carl Brooks: For Cloud Cover TV, I'm Carl Brooks here in Boston. Today we're going to talk about some new information released by NIST, the National Institute of Standards and Technology, the federal organization that sets standards for all sorts of things, including how far apart the threads on screws are and standards in weights and measures, and that also famously defined cloud computing. They did that a couple of years ago, and they did it so well that most everybody agrees with it. They have now come out with a cloud reference architecture, which is supposed to be for anybody who is confused about how cloud computing is going to work for their own enterprise model.

The reason they've done this is because they feel that there's a need to
sort of guide the conversation in how people talk about cloud computing,
especially in how they approach it for both enterprise and federal use.
It's helpful to have at least a ground floor for people to talk about it.
When NIST originally defined cloud standards they defined it basically
completely in functional terms and they did not do it in reference with any
vendors, with any industry groups. They just looked at the models that
were out there and decided this was it. It is elastic, it is scalable, it
is self service, it is online, so things we're all familiar with now.

The cloud reference architecture defines the basic parties that you're going to find if you decide to look at cloud computing as a big-picture part of your enterprise. This is not for the guy developing new software on his laptop or working online; in that case, all you need is an Internet connection and a cloud service provider on the other end. This is for people who have to look at all the intermediaries in the chain between themselves and a cloud provider.

So there's cloud users, that's you or me, there are cloud service providers
like Amazon or Rackspace, there are cloud brokers who intermediate. They
don't really exist yet but clearly sometime in the future they're going to
be part of it. There are cloud carriers; these are the telecoms, basically. They're the ones who get your Internet connection from point A to point B. And, interestingly, there are cloud auditors. These are all parts of
NIST's new reference framework. We'll put a link to this up on our page;
you can look at it in detail.

Very quickly: cloud customer, that's you; cloud provider, Amazon; cloud auditor, this is new. This is the person in an organization who's responsible for making sure that cloud computing use measures up to organizational standards. This hasn't really been an issue so far; everyone using cloud is doing it online and making it up as they go along, and security hasn't really been a big concern. Cloud carriers, this basically just means network providers, although some are starting to specialize in certain ways toward delivering cloud solutions. Mostly this just means how you get piped. And cloud brokers, like I said, don't exist per se. There are a couple of interesting examples, like Zimory from a few years ago, and SpotCloud from Enomaly.

Eventually, people will be brokering, buying and selling cloud capacity. Possibly managed service providers will fit into this place at some point.
But really the NIST cloud architecture reference is for the consumer, the
enterprise user, the organizational user who, you know, you might be
spending a couple of million bucks a year on what you pay through Verizon
or AT&T. You're going to be interested in where they fit into the cloud
paradigm as you look at the organization. Your cloud provider, on the
other end, you're going to be interested in how they fit into your auditing
and compliance models.

So the NIST reference architecture is an interesting and vendor neutral
and basically industry neutral way to start and look at how cloud fits into
the organization. So we definitely consider it worth a look and an interesting development. The old standards, the definitions of cloud that NIST put out, have become broadly accepted. This cloud reference architecture might also become a de facto place to start talking about cloud.

So this time I've got cloud benchmarks, a new set. New information on how
clouds actually work in the real world, which is a subject of some debate.
Most cloud providers don't really care to break it out. Most Internet
service providers don't really care to do the work to find out how well
they perform, because they're just sitting in the middle between you and
whatever you're using for a cloud provider. However, Bitcurrent, a consulting organization now part of CloudOps, headed up by Alistair Croll, has put a significant amount of work and time into a new report that benchmarks nine different cloud providers all around the world, and there are some interesting conclusions.

The providers are the ones you think of: Amazon is on there, Microsoft Azure is on there, so they definitely hit the mark in terms of being considered a worldwide cloud service delivery platform, and Rackspace is on there. And what they did is they used a software platform, I don't know exactly what it's called, it's called Cedexis [sp]; we'll put a link up to it so you can look at it. It's an automated testing platform that basically
puts up a little tiny webpage, asks for a response from a cloud in various
locations and it calculates how well it can do that. It does that
repeatedly and it calculates availability. They did something like 300
million tests for these nine providers and they came up with some
interesting conclusions especially as they look around the world. Yes I
printed this out, I'm probably a bad person.
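The probe loop described above, a tiny page fetched repeatedly from many locations, with availability computed from the success rate, can be sketched roughly as follows. This is a minimal illustration with hypothetical endpoint URLs, not the actual testing platform, which runs measurements at vastly larger scale:

```python
# Minimal sketch of an availability probe. The ENDPOINTS dictionary is
# hypothetical; a real benchmark would probe from many geographic
# vantage points, not just one machine.
import urllib.request

# Hypothetical tiny test pages hosted on each cloud provider.
ENDPOINTS = {
    "provider-a": "http://provider-a.example.com/probe.html",
    "provider-b": "http://provider-b.example.com/probe.html",
}

def probe(url, timeout=5):
    """Return True if the tiny test page answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def availability(results):
    """Fraction of successful probes, e.g. 97 of 100 -> 0.97."""
    return sum(results) / len(results)
```

Run `probe()` on each endpoint on a schedule, collect the booleans, and `availability()` gives the measured uptime fraction for that provider over the test window.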

As you go around the world, there's a digital divide that's been talked
about. What this means in terms of cloud computing is that in the US,
Canada, Japan, very small parts of Southeast Asia, connectivity is great,
uptime and availability are roughly all equal. When you hit South Africa, the rest of Africa, South America, even Australia, it starts to fall off. Performance starts to suffer because there's not as much connectivity
there and this is just a function of development across various parts of
the world.

Interestingly, connectivity is extremely good in Europe, partly because they have a lot of very well-developed Internet exchanges and all that sort of stuff. So, very clearly, if you want your application to perform well, or if you're somewhere less well connected, you might actually want to run a service, or place data or an application, closer to customers in other parts of the world; cloud computing can be an advantage here. You can see some data in this report that might tell you where you want to place an application, even if you're somewhere remote that doesn't have the greatest connectivity. So it's interesting.

Another interesting finding is that availability does not match up in the real world. That is, measured with this little testing program, availability doesn't match up to what cloud providers claim. For example, AWS was very easy to test across and compare different availability zones. Availability for simple web requests generally ran about 97% or so, as opposed to what you would expect from Amazon's SLAs. You might want to squint at those in this case, because they say they get 99.5% uptime, but when you run a whole bunch of big batch tests, that uptime is actually going to be closer to 97%. I don't think you can put them on the hook for this, but it's worthwhile to know.
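To see why that gap between a claimed and a measured percentage matters, it helps to convert availability into implied downtime. A quick back-of-the-envelope calculation over a 30-day month:

```python
# Convert an availability percentage into implied downtime per
# 30-day month. Illustrative arithmetic only, not a statement about
# any provider's actual SLA terms or credits.
HOURS_PER_MONTH = 30 * 24  # 720 hours

def downtime_hours(availability_pct):
    """Hours of downtime implied by an availability percentage."""
    return HOURS_PER_MONTH * (1 - availability_pct / 100)

# 99.5% availability implies roughly 3.6 hours of downtime a month,
# while 97% implies roughly 21.6 hours -- about six times as much.
```

The point is that a few percentage points of availability translate into many extra hours of unreachability per month, which is why measured figures are worth comparing against the claims.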
Google App Engine actually turned out surprisingly low in terms of its availability and its performance: 93% availability across various parts of the world. It's not really pitched as an enterprise-class platform, it's basically for developers to play on, but again, worth knowing. Another company that does very interesting research on this is Compuware. They put up a free website called CloudSleuth, a resource much like the one Bitcurrent built their report from, and it's also worth checking out. So if you want to know exactly how clouds perform, there are a couple of good places to look, and we've got some interesting benchmarks now. Thank you very much.
