
On-premises vs. cloud: What's more cost-effective for your apps?

Some organizations rush into a cloud migration, assuming cost savings are a guarantee. But not all applications are meant for the public cloud, and moving them may cost you more.

Consuming resources only when you need them seems like the most obvious way to increase efficiency. While you can shut down a server to save pennies on power and cooling when it's not in use, you can't recoup any of the capital costs. And most OS or software licensing models don't care how often you use the application. So, when you're able to pay for the bundled resource, delivered as a service, only when you need it, of course you save money -- except when you don't.

Many applications just aren't suited to run in a public cloud, for either technological or financial reasons, said David Linthicum, senior VP at Cloud Technology Partners, based in Boston. To avoid paying more than they need to, organizations should carefully consider their application costs in an on-premises vs. cloud environment.

"It could be as many as 50% of applications in a traditional enterprise, and the average is about 30 to 40%," Linthicum said. "You have to do the triage and understand the application portfolio -- otherwise you will end up making dumb decisions and moving workloads to the cloud that will end up costing you more money."

Applications that are tightly coupled to a database, or that would require a large amount of redevelopment work to run efficiently in a public cloud, are workloads best left running on-premises, Linthicum said.

"Some applications were just built less efficient and are going to use more resources than they should in a cloud provider," he said. "So, very much like a 30-year-old refrigerator, it's going to take more power than a new model."

Ultimately, much of the on-premises vs. cloud cost comparison comes down to whether the application is designed to run in the cloud, or how much work it will take to redesign it, said Erik Peterson, director of technology strategy at Veracode, an application security company based in Burlington, Mass., that runs on Amazon Web Services (AWS).

"Most people think they're going to start by lifting and shifting an existing application into AWS," Peterson said. "But they often don't realize the mental shift that's required with the move to cloud."

For decades, companies have spent a lot of money to ensure their critical workloads remained functioning in the event of a failure, investing in redundant systems sized to meet peak demand. In many ways, public clouds turn this dynamic on its head, offering an elastic platform with the expectation of failure. Unsurprisingly, workloads designed for one infrastructure paradigm often don't easily translate to the other. For example, when deploying on-premises workloads, administrators commonly allocate enough resources to accommodate expected demand spikes. But if you apply this same principle to public cloud workloads, you end up paying -- often per hour -- for much more than you need.
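
To put rough numbers on that difference, here's a simple back-of-the-envelope comparison in Python; the hourly rate, instance counts and peak window are all illustrative assumptions, not real prices or a real workload profile:

# Rough, hypothetical comparison of "size for peak" vs. "scale with demand."
# All figures are illustrative assumptions, not real cloud or hardware prices.

HOURS_PER_MONTH = 730
HOURLY_RATE = 0.20          # assumed cost of one cloud instance per hour
PEAK_INSTANCES = 10         # capacity needed during the daily spike
BASELINE_INSTANCES = 3      # capacity needed the rest of the time
PEAK_HOURS_PER_DAY = 4

# On-premises habit carried to the cloud: run peak capacity around the clock.
always_peak = PEAK_INSTANCES * HOURLY_RATE * HOURS_PER_MONTH

# Elastic approach: pay for peak capacity only during the spike.
peak_hours = PEAK_HOURS_PER_DAY * 30
elastic = (PEAK_INSTANCES * HOURLY_RATE * peak_hours
           + BASELINE_INSTANCES * HOURLY_RATE * (HOURS_PER_MONTH - peak_hours))

print(f"Sized for peak 24/7: ${always_peak:,.2f}/month")
print(f"Scaled with demand:  ${elastic:,.2f}/month")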

Organizations should first evaluate their reasons -- aside from cost -- for moving an existing application to the cloud. Then, if there are compelling business reasons to proceed, businesses should approach a cost comparison skeptically, said Mindy Cancila, research director for cloud computing at Gartner.

"Typically, when I talk with clients who want to build a model for comparing costs, the first thing we recommend is they look for other benefits that are driving cloud adoption first," Cancila said. "The reason being that cost models are layered with inaccuracies."

Overlooked costs, such as facilities and power delivery, can skew comparisons if they aren't accounted for. Gartner has built a cost comparison model for clients to understand the economics of on-premises vs. cloud environments. But an accurate comparison from any model requires that organizations crunch the numbers and look closely at everything that goes into delivering a workload to end users.

Gartner recommends that enterprises move to a per-virtual machine (VM) cost component for compute, because it is the most logical point of comparison between on-premises infrastructure and the public cloud. "But, again, most companies don't have that level of clarity or transparency," Cancila said. "Most are not tying spending to VMs or even to different teams."
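
Gartner's actual model isn't spelled out here, but a minimal sketch of the kind of per-VM roll-up it implies might look like the following; every cost category and figure is a placeholder assumption, not Gartner's methodology:

# Illustrative per-VM cost roll-up for an on-premises environment.
# All inputs are placeholder assumptions; a real model needs actual invoices,
# facilities bills, staffing allocations and utilization data.

annual_costs = {
    "server_hardware_amortized": 120_000,   # capex spread over useful life
    "hypervisor_and_os_licenses": 45_000,
    "storage_and_network": 60_000,
    "power_and_cooling": 30_000,
    "facilities_share": 25_000,             # often-overlooked line item
    "admin_staff_allocation": 150_000,
}

vm_count = 200
hours_per_year = 8_760

total = sum(annual_costs.values())
per_vm_hour = total / vm_count / hours_per_year

print(f"On-premises cost per VM-hour: ${per_vm_hour:.3f}")
# Compare that figure against the hourly rate of a similarly sized cloud VM.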

A new age for new apps

Comparing on-premises vs. cloud costs for many workloads is difficult, but it's worth the effort, Cancila said. Cloud providers have levels of infrastructure efficiency that are out of reach for most organizations, and they benefit from next-generation hardware that is unavailable to enterprise IT shops. While shifting an existing application to the cloud may not always offer a clear financial advantage, businesses looking to build or deploy a new application should consider a cloud deployment first -- either hosted with an infrastructure as a service provider or as a software as a service option.

"Over time, we just don't think you can compete in the type of model where you're comparing public cloud costs with on-premises, and that's true for most all workloads," Cancila said.

Increasingly, new companies, or those looking to deliver new workloads, consider cloud services to avoid large capital expenses for servers and storage.

"We started a new company when [Google] App Engine was in beta -- so we never owned a server in our office," said Dale Hopkins, chief architect at Vendasta Technologies, a sales and marketing software provider based in Saskatoon, Sask.

"The cost of on-prem is too high for our applications, and we don’t have any IT staff," Hopkins said. "So, we chose right away that we wanted to use managed cloud as the core to our business from when we first opened the doors."

Over time, as Google's cloud services evolved and more competitors emerged, Vendasta continued to reap the financial benefits.

"[Google] has made some significant strides over the last eight years on their pricing," Hopkins said. "Basically, across the board, we pay less than we used to."

While there is money to be saved, most organizations will encounter a variety of challenges, Veracode's Peterson said. A business also needs to recognize that a change in platform should be accompanied by a change in culture. While performing a security audit of a customer's AWS environment, Peterson's team helped uncover an unexpected problem in the client's account.

Tag, you're it

Adding information to an instance's metadata not only helps employees triage when a problem occurs, but also builds a culture of accountability. Veracode employs a tagging policy that requires that each new AWS EC2 instance include the following information, applied in the sketch after this list:

  • Who is responsible for the instance
  • What environment it’s used in (production or test)
  • The product or team the resource supports
  • Who to contact when something goes wrong
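
Here is a minimal sketch of how such tags might be applied with the AWS SDK for Python (boto3); the tag keys, values and region are illustrative placeholders rather than Veracode's actual schema:

# Illustrative only: apply an accountability tag set to EC2 instances with boto3.
# Tag keys and values are placeholders, not Veracode's real schema.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

REQUIRED_TAGS = [
    {"Key": "Owner", "Value": "jane.doe"},                  # who is responsible
    {"Key": "Environment", "Value": "test"},                # production or test
    {"Key": "Product", "Value": "billing-service"},         # product or team supported
    {"Key": "OnCallContact", "Value": "ops@example.com"},   # who to contact when it breaks
]

def tag_instance(instance_id: str) -> None:
    """Attach the required accountability tags to one instance."""
    ec2.create_tags(Resources=[instance_id], Tags=REQUIRED_TAGS)

# Example sweep: tag anything that is missing an Owner tag.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            if "Owner" not in tags:
                tag_instance(instance["InstanceId"])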

 "We discovered they were spending over $10,000 a month on disk storage volumes that they'd completely forgotten about," Peterson said. "A developer had created a system that generated disk volumes but never cleaned anything up. There wasn't a connection between who was paying the bill and who was doing the work."

Creating policies that enforce accountability and allow organizations to track resources is the most important step to ensure a company's cloud investment doesn't become a liability, he said. Veracode relies on CloudHealth Technologies, a third-party cloud management tool, to track and manage its AWS resources.

Large customers often rely on multiple AWS accounts, but built-in tools from Amazon don't allow users to track costs across different accounts. "In our case, we have over 20 different accounts," Peterson said. "If you want a holistic view across all of your accounts, the only way you're going to get that is if you have some sort of third-party service or write your own code to do it."
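
For teams that go the write-your-own-code route, one possible approach, assuming a consolidated-billing setup in which the payer account can call the Cost Explorer API, is to group spend by linked account; the dates below are placeholders:

# Illustrative sketch: pull one month of spend per linked account from the
# payer account via the Cost Explorer API. Dates and setup are assumptions.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-06-01", "End": "2024-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        account_id = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"Account {account_id}: ${amount:,.2f}")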

The next level of cost optimization

Organizations typically don't begin to explore cost optimization techniques until they have a good process to track their cloud spending, employees with the right expertise and enough scale in their use of cloud services. In the future, Cancila said she expects to see a new breed of tools -- from both cloud providers and third parties -- designed to help organizations optimize cloud spending.

Even today, larger cloud users find ways to slash costs. For example, AWS offers an option called Reserved Instances, in which customers commit to capacity for a one- or three-year term in exchange for a discounted rate. Assuming a company can accurately plan capacity needs, "you can shave 20 to 30% off your bill with some smart Reserved Instances purchasing," Peterson said.
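
As a rough illustration of that math, with hypothetical rates rather than real AWS prices:

# Hypothetical numbers only -- real Reserved Instance pricing varies by
# instance type, region, term and payment option.
HOURS_PER_YEAR = 8_760

on_demand_rate = 0.20       # $/hour, assumed
ri_upfront = 400.00         # one-time payment, assumed
ri_hourly = 0.10            # discounted $/hour, assumed

on_demand_annual = on_demand_rate * HOURS_PER_YEAR
reserved_annual = ri_upfront + ri_hourly * HOURS_PER_YEAR

savings = 1 - reserved_annual / on_demand_annual
print(f"On-demand: ${on_demand_annual:,.2f}/year")
print(f"Reserved:  ${reserved_annual:,.2f}/year")
print(f"Savings:   {savings:.0%}")   # lands in the 20-30% range Peterson cites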

The next evolution in cloud frugality may build on another way of buying EC2 capacity. Amazon EC2 Spot Instances let customers bid on spare computing capacity. That unused capacity on Amazon's servers would otherwise sit idle, so, to improve efficiency and make a buck, Amazon sells it at rock-bottom prices to the highest bidders. Customers specify the maximum price they're willing to pay, and as long as the market rate -- set by other customers' bids -- stays at or below that price, they get the capacity at a significant discount. When the market price exceeds their bid, however, their instances are terminated.
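
Here is a minimal boto3 sketch of such a request; the AMI ID, instance type, key pair and maximum price are all placeholder assumptions:

# Illustrative sketch: request spare capacity with a maximum price ("bid").
# AMI ID, instance type, key pair and price are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",                # the most we're willing to pay per hour
    InstanceCount=1,
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "m5.large",
        "KeyName": "my-key-pair",             # placeholder key pair
    },
)

request = response["SpotInstanceRequests"][0]
print("Spot request:", request["SpotInstanceRequestId"], request["State"])
# If the market price later rises above the maximum, the instance can be
# reclaimed, so the workload has to tolerate interruption.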

"You can have systems that do unbelievable amounts of work for pennies," Peterson said. "It's difficult for companies to re-architect their apps to take advantage of that, but when I’ve seen companies make the investment, it pays back very quickly."

Spot Instances are geared more toward workloads that aren't considered critical or time-sensitive, but it is possible to build a resilient application that doesn't fail if a single instance -- or a group of instances -- is killed, Linthicum said. In fact, given the portability advantages containerization offers, the next step could be the movement of workloads across different instance types, or even across different cloud providers, automated by cost triggers.

"I could even build out automated processes that seek out the most efficient platform," Linthicum said. "That's a little science fiction right now, but it's certainly possible with the technology we have today." 

