Fifteen minutes in the cloud

Avoiding cloud lock-in, self-service bloat and private cloud traps

Bridget Botelho

Most large companies that have built private clouds hope to move to hybrid and public clouds eventually, but their data is holding them hostage.

These IT pros are concerned about the way their data and applications will run on unknown public cloud systems, about the security of their data, and about their data becoming locked in to a single public cloud provider. Others believe they'll be able to cloud burst as needed during peak periods, but using public cloud that way isn't so simple.

Andrew Hillier, chief technology officer and co-founder of capacity management software provider CiRBA Inc., has worked with a number of Fortune 500 companies on their private and public cloud projects. Here, he gives an insider's view of the benefits those companies reap from private clouds, the value of open cloud computing platforms, and ways to avoid private cloud traps.

Companies that want to keep their data in house are building private clouds. What are some of the challenges IT admins come across with this model?

Andrew Hillier: They are starting from a position of having a virtual environment and moving toward a cloud, and the first thing they do is put up a self-service portal to let people access their own capacity.

The self-service portal causes demand to go out of control. Users begin asking for whatever they want, and you have no way of planning what that will do to your environment. Right off the bat, we see a lot of really big VMs [virtual machines] because users ask for way too much. It's like people who go to a restaurant and ask for an eight-person table even though there are only two of them … you can't open up the demand side of the equation without sorting out the capacity side.

There need to be controls in place to make sure you make the best use of capacity; if you don't sort that out from the start, [the self-service portal] will make things worse.
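
As a rough sketch of the kind of demand-side control Hillier describes, the Python snippet below validates self-service requests against a per-project quota before any capacity is allocated. The limits, names and numbers are illustrative assumptions, not CiRBA's product logic or any specific platform's API.

    from dataclasses import dataclass

    @dataclass
    class Quota:
        """Illustrative per-project limits for a self-service portal."""
        max_vcpus_per_vm: int = 8   # cap on any single VM request
        vcpu_budget: int = 64       # total vCPUs the project may consume
        vcpus_in_use: int = 0

    def validate_request(quota: Quota, requested_vcpus: int) -> str:
        """Reject oversized 'eight-person table' requests up front."""
        if requested_vcpus > quota.max_vcpus_per_vm:
            return f"denied: {requested_vcpus} vCPUs exceeds the per-VM cap of {quota.max_vcpus_per_vm}"
        if quota.vcpus_in_use + requested_vcpus > quota.vcpu_budget:
            return "denied: project vCPU budget exhausted"
        quota.vcpus_in_use += requested_vcpus
        return f"approved: {quota.vcpus_in_use}/{quota.vcpu_budget} vCPUs now allocated"

    q = Quota()
    print(validate_request(q, 16))  # an oversized request is denied
    print(validate_request(q, 4))   # a right-sized request is approved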

What percent of your customers who have built private clouds also use public clouds?

Hillier: The vast majority of our customers run private clouds, though there may be someone in there using public cloud for something. But among the core IT groups, very few, if any, leverage external public clouds at this point.

They choose architectures that would allow it but aren't anywhere near using it. The next step for them is to use some external capacity to augment their internal capacity, but big banks and insurance companies are a long way from that -- and we have some very large customers who are very forward-thinking. They are on their second-generation internal clouds and call their virtual environments "legacy" at this point.

Companies are building API-driven environments. So when they talk to their internal infrastructure, they are talking to it the same way they would any other environment and can use the same tools. This is very clever, because if they ever do need external capacity, they are ready for it. That desire for interoperability drives things like OpenStack.
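
As a rough illustration of that API-driven approach, the sketch below uses the openstacksdk Python client. Because an OpenStack endpoint looks the same whether it sits in your own data center or at an external provider, the identical code can inventory either one; the cloud names here are hypothetical entries in a local clouds.yaml, not real environments.

    import openstack

    # The same client code works against any OpenStack endpoint; only the
    # named cloud (configured in clouds.yaml) differs. "internal" and
    # "provider" are hypothetical names for this sketch.
    for cloud_name in ("internal", "provider"):
        conn = openstack.connect(cloud=cloud_name)
        print(f"servers in {cloud_name}:")
        for server in conn.compute.servers():
            print(f"  {server.name} ({server.status})")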

There is a huge concern about lock-in at this point. I won't name names, but having a bunch of VMs from one hypervisor vendor doesn't necessarily lock you in to that vendor, because you can still turn on VMs from another vendor. But choosing a cloud stack that is not flexible locks you in.

Because once you move your data to that cloud, you won't be able to move it to other clouds, or back to your own private cloud?

Hillier: Exactly, you lose that interoperability and you narrow your options for what you're able to do from an external provider perspective. That is driving a lot of cloud decisions right now. People are integrating their internal infrastructure with OpenStack because it gives them complete control over their future.

If you have a private cloud, you may want to move some workloads to the public cloud during peak periods. Sounds like a no-brainer -- but in reality, you need to have the network infrastructure to support that type of traffic to and from a public cloud, or else your application performance will suffer, right?

Hillier: Yes. We are involved in a lot of projects like that where people talk about [cloud] bursting. What we are seeing from a practical perspective, to your exact point, is that taking an app and bursting it to pick up extra capacity for a peak load is extremely complicated, from a data perspective.

For example, we are finding that even transferring the disk image for the VM you want to start can take a very long time; it is network-intensive. People have this vision that they can just turn on extra capacity whenever they want, but the data can't just be moved back and forth. It's absolutely not that simple.
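
A quick back-of-the-envelope calculation shows why the disk image is the bottleneck. The image size and link speeds below are illustrative assumptions, and the math ignores protocol overhead, so real transfers would be slower.

    # Rough transfer-time math for moving a VM disk image to a public cloud.
    image_gb = 100  # assumed disk image size in gigabytes

    for name, mbps in {"100 Mbps WAN": 100, "1 Gbps link": 1_000, "10 Gbps link": 10_000}.items():
        seconds = image_gb * 8 * 1000 / mbps  # GB -> gigabits -> megabits
        print(f"{name}: about {seconds / 60:.0f} minutes for a {image_gb} GB image")

Even on a dedicated 1 Gbps link, a 100 GB image takes well over 10 minutes to move, which is why on-demand bursting rarely works second to second.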

One thing we see as viable is seasonal bursting. If you are a bank with 401(k) season coming up, and you know those applications will be way busier, you can request external capacity a month or two ahead, get it set up properly with the apps, test it, get it running and bring it online, use it for a month and turn it off again.

[Cloud bursting] doesn't have to be second to second, but don't buy servers and have them sit there idle 11 months of the year for when you need the extra capacity; do [bursting] in an extremely controlled way.

Are there workloads that shouldn't be moved back and forth between private and public clouds? What are they and why?

Hillier: There definitely are; we have private cloud customers with real-time trading apps that aren't likely to be in their internal cloud because they are so special-purpose, and many people don't want to put their databases in a cloud, though there are Database as a Service clouds. … People are also thinking about renting small, medium and large [instances] from external providers to create a hybrid model, but there are many ways to do it.
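
One way to picture that small/medium/large hybrid model is a catalog that maps internal workload requirements onto a provider's instance types. The flavor names and sizes below are hypothetical, not any real provider's offerings.

    # Hypothetical catalog mapping T-shirt sizes to external instance flavors.
    FLAVORS = {
        "small":  {"vcpus": 2, "ram_gb": 4,  "provider_type": "ext.small"},
        "medium": {"vcpus": 4, "ram_gb": 16, "provider_type": "ext.medium"},
        "large":  {"vcpus": 8, "ram_gb": 32, "provider_type": "ext.large"},
    }

    def pick_flavor(vcpus_needed: int, ram_gb_needed: int) -> str:
        """Return the smallest catalog flavor that fits the workload."""
        for spec in FLAVORS.values():  # dicts preserve insertion order
            if spec["vcpus"] >= vcpus_needed and spec["ram_gb"] >= ram_gb_needed:
                return spec["provider_type"]
        raise ValueError("workload too large for any catalog flavor")

    print(pick_flavor(3, 8))  # -> ext.medium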

Take IBM and SoftLayer; with them, you can have a [public] cloud but have them turn on a physical box if you need to. You can say, 'Give me 12 blades, and install VMware and OpenStack or whatever,' and have an environment that looks exactly like what you run within your own four walls. 

That is fascinating because it really lowers the barrier to cloud; you aren't doing anything different from what you already run, other than its physical location. It is a great baby step to hybrid, and we are working with them on the analytics around that: the on-premises analytics to determine when customers will need capacity, and the [capacity] controls. It is the bursting use case where you can predict the extra capacity you'll need.
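
A minimal sketch of the kind of on-premises analytics he mentions: averaging each calendar month's utilization across prior years and flagging the months where external capacity should be requested ahead of time. All of the numbers are made up for illustration.

    # Toy seasonal forecast from two years of monthly CPU utilization (%).
    history = [
        [42, 45, 48, 51, 47, 44, 43, 45, 50, 55, 60, 78],  # year 1
        [44, 46, 50, 53, 49, 45, 44, 47, 52, 58, 63, 82],  # year 2
    ]
    BURST_THRESHOLD = 70  # assumed utilization at which bursting is needed

    forecast = [sum(year[m] for year in history) / len(history) for m in range(12)]
    for month, util in enumerate(forecast, start=1):
        if util >= BURST_THRESHOLD:
            print(f"month {month}: forecast {util:.0f}% -- provision external capacity ahead of the peak")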

Using a private cloud model versus traditional IT infrastructure appears to offer minimal potential for extra efficiency and cost savings. Why do it at all?

Hillier: If you go to virtual, that is more efficient than a physical environment, but virtual and cloud don't look much different from each other in terms of use of capacity. It is really about the agility. …

Before virtualization, people would buy their own servers, but there was no sharing or centralized IT. … With virtualization you share resources centrally for higher efficiency, but the users still don't have access to the capacity they need. With the cloud, end users can ask for their own capacity and it is shared and centralized.

It's that agility that is the big driver -- end users can get the extra capacity they want and [IT] can respond faster to their needs.

