K. Scott Morrison, CTO and chief architect for Layer 7 Technologies, has a pedigree that includes IBM and medical imaging at the University of British Columbia. He's a prolific author and an expert on compliance, governance and standards-based application architecture and security. Morrison says that security should be the primary concern for cloud users, and that they may be bringing more assumptions about security into the cloud than they actually realize.
What are the concerns with building out into the cloud? It's easy and relatively cheap: all the virtual machines are out there, waiting to be used.
K. Scott Morrison: It's kind of a double whammy. Right now, certainly, you can take your app and build it on your own version of Red Hat or whatever. There's lots of that kicking around, but you've really got to know how to secure that app. When you think about it, the problem with moving out into the cloud is that everything you put out there has to be secured basically to the same level that you would secure an application in your DMZ.
In other words, you're out there in the Wild West and everything has to be hardened for potential attack from literally any angle, and that's hard stuff to do. It's rocket science stuff.
Are people bringing assumptions about their own data centers, about what's inside and what's outside and where the edge is, out to the cloud and making mistakes this way?
KSM: Absolutely. The problem is that security is a very subtle thing. There's a lot of "lore" involved and a lot of "best practices," and people get into a certain mindset. If that mindset was trained in a DMZ-based, edge-outward environment where you've got good perimeter security, you've got a reasonable level of physical security, [then] there's a lot you can just delegate to a baseline of enterprise security.
The minute you get out there in the cloud, literally every communication hop you make, no matter how trivial it is, is suddenly risky. If you have two processes talking to each other, all of a sudden you've got to seriously think about a security model between those processes because any communication at all is subject to some level of interception.
So it's a vastly higher level of risk than people assume, even for accepted transmissions over the internet or email.
KSM: Oh, yeah. Take a typical three-tier Web application that people build, where maybe they've got an HTTP server in the DMZ and, let's say, a Java app server in the secure zone, and they're connecting up to an LDAP directory, and somewhere in the back there's a database server the app server is calling -- each of the components has some kind of communication hop, and usually it's between physical machines.
So they design with this [on-premise] model where they harden the HTTP server, and they know how to do that; it's the only thing sitting in the DMZ, so let's lock that down, and so on.
But once we've made that jump from the Web server to the app server, all of a sudden we start to relax things: we make our connections to the database without SSL, we go out and bind to the LDAP directory without doing any cryptography, just because we suddenly assume, hey, we're secure here. That's a problem, because none of those assumptions are valid once you get out into a cloud environment.
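Morrison's point about cleartext internal hops can be sketched concretely. The snippet below is a minimal illustration, not anything from the interview: it uses only Python's standard-library `ssl` module to build a hardened client context that an app server could reuse for every internal hop (database, LDAP or otherwise), and the hostnames in the usage comment are hypothetical placeholders.

```python
import ssl

def hardened_client_context() -> ssl.SSLContext:
    """TLS context for internal hops that would be cleartext on-premise.

    In a cloud deployment every hop is treated as hostile, so the peer's
    certificate is always verified, no exceptions for 'internal' traffic.
    """
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject servers without a valid cert
    ctx.check_hostname = True             # and certs that don't match the host
    return ctx

# Usage (hypothetical hosts): wrap the raw socket before any bytes flow.
# with socket.create_connection(("db.internal.example", 5432)) as raw:
#     with hardened_client_context().wrap_socket(
#             raw, server_hostname="db.internal.example") as tls:
#         ...  # same application protocol as before, now encrypted
```

The design point is that the same context applies to the app-to-database hop and the app-to-LDAP hop alike; once the perimeter assumption is gone, there is no class of "internal" connection that gets to skip verification.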
It's not all that bad, is it? Cloud providers do provide some basic protections, right?
KSM: Look at a company like Amazon: They do a good job of isolation between images and such, and they have some good practices in how they've locked down images. There are certain protocols they block, like gratuitous ARP and so forth, but somebody is going to figure out exploits in the end.
In general, because they don't reveal a great deal about their data centers (this is all of them, Rackspace, Amazon, GoGrid or whoever), we've got vague ideas about how they're laid out. But they won't tell you how they administer things or lock things down, so you have to assume the worst. You have to go in there with an extremely defensive posture right at the beginning.
That's what people have to recognize -- you're giving up control because you want the advantages of commoditization, but you have to recognize how that boundary of control affects your hidden assumptions about security.
Boundary of control is easy right now -- it stops at the edge of my network, which is where I start with security. If not the network, where do I set my boundary lines?
KSM: The message we try to give to people is: reassert control where you can. The way you do that is through application-layer protocols, because that is the one place -- particularly in the Infrastructure as a Service world, the Amazon EC2 world -- where you've still got a couple of degrees of freedom. You can still go in there and control the stream of communications in or out of your applications. Focus security on the applications.
Stop focusing on trying to secure networks and focus again on trying to secure services and applications. That's what's really important.
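One way to read "secure services and applications" rather than networks is to authenticate each message at the application layer, so integrity doesn't depend on any assumption about the wire. The sketch below is a hypothetical illustration using an HMAC over each message with Python's standard library; the shared key and function names are assumptions, and a real deployment would more likely use mutual TLS or signed tokens.

```python
import hashlib
import hmac
import secrets

# Shared secret between two cooperating services; in practice this would be
# provisioned out of band (e.g. via a secrets manager), not generated inline.
SHARED_KEY = secrets.token_bytes(32)

def sign(payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Tag a message so the receiving service can verify who sent it."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Check a received message's tag; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(payload, key), tag)
```

Because the check travels with the message itself, it holds regardless of how many untrusted hops sit between the two processes -- which is the degree of freedom Morrison says you keep in an IaaS environment.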
That's not trivial. Some applications don't play nice with others, not all services are designed to cough up all of the auditing that you want and so on.
KSM: That is fundamentally a problem. In a lot of ways, you have to think differently when you move out into the cloud. If you want high availability and elasticity, you have to change your approach. The security is different, the scalability is different, the way you architect is different. If you embrace that, move into that and recognize that, you can start to make use of elasticity and on-demand processing, and you can do it securely, because you can start to build your services so they do play nicely within this model.
Carl Brooks is the Technology Writer for SearchCloudComputing.com. Contact him at firstname.lastname@example.org.