Cloud value proposition heightened with proper cost controls

Public cloud marketing hype promises cost savings, but without controls for cloud's unpredictable costs, your budget can easily be blown.


Cloud's value proposition can be staggering. When implemented correctly, cloud technology drives business agility, improves service resiliency, reduces costs and promises to improve how IT departments respond to changing business requirements.

Most people think technology is the key to success in the cloud, but in reality it comes down to controlling costs. After all, what value does cloud technology bring if its costs are unpredictable? Luckily, a number of techniques can minimize cloud cost variance: using cloud services instead of instances, right-sizing, reserved instances and autoscaling. Putting these techniques into practice will ensure cloud's value is realized with confidence and within your budget.

1. Learn to love cloud services

The first step in managing cloud costs is to evaluate which components of the current environment can be transitioned to cloud services rather than recreated as individual instances in the cloud. For example, a cloud service such as Amazon Relational Database Service (RDS) can replace existing database instances. The value of this approach is that there are no license costs and no additional RAM or CPU to inflate the bill. By adopting a cloud service, you pay only for the service functionality you actually use, reducing overall complexity and removing large up-front fixed costs.


Replacing a complex mirrored database design with RDS really saves money: You avoid the cost of multiple compute instances, multiple operating systems and multiple database licenses, as well as the complex configuration of mirroring, communication, backups and monitoring. This typically results in cost savings and a reduction in complexity of 25% to 50%, depending on database size and complexity, to say nothing of the operational savings that come from doing configuration systematically through a Web interface.

Cloud services aren't limited to relational databases. Examples include load balancing, big data computation, backup, DNS and application messaging queues, to name a few. All of these services reduce the operational complexity of planning, configuring and managing the infrastructure in order to deliver the actual service.
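As a rough illustration of how little setup a managed service needs, the Python sketch below provisions a mirrored (Multi-AZ) MySQL database on RDS with a single API call using the boto3 SDK. The identifier, instance class, storage size and credentials are placeholders chosen for the example, not recommendations.

    # Minimal sketch: provision a managed, mirrored MySQL database on Amazon RDS
    # instead of building and mirroring your own database instances.
    # Identifiers, sizes and credentials are placeholders for illustration.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="orders-db",       # hypothetical name
        Engine="mysql",
        DBInstanceClass="db.m5.large",          # pick a class from your right-sizing exercise
        AllocatedStorage=100,                   # GB
        MasterUsername="admin",
        MasterUserPassword="change-me",         # store real credentials in a secrets manager
        MultiAZ=True,                           # RDS manages the standby replica for you
        BackupRetentionPeriod=7,                # RDS also manages backups
    )

Everything the mirrored design required you to build by hand -- the standby server, replication, backups -- is reduced to parameters on one call.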

2. Right-size your instances

Once you have determined the services you want to use, the focus shifts to the remaining servers. These servers have to go into cloud instances -- but which size? For this you need to go through a "right-sizing" exercise to determine the proper instance size into which to move each server. Right-sizing is simple and straightforward: Monitor your existing servers and capture utilization for a period of time, then size your cloud instances using the steady-state average utilization. This requires a monitoring tool that captures server CPU, memory, disk and network use. LogicMonitor is one easy-to-deploy monitoring tool that will capture this information.

Understanding CPU, memory and disk use over a period of 30 to 90 days is critical in determining the steady-state utilization. You don't want to over- or under-size your instances. You want to right-size them to fit your average use and correlate that use to a key metric such as user sessions. But what about spikes in utilization? Don't worry about that just yet -- we'll get to autoscaling. Remember, elasticity is part of what makes the cloud valuable.
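To make the arithmetic concrete, here is a small Python sketch that takes utilization samples exported from your monitoring tool and reports the steady-state averages and peaks used to pick an instance size. The thresholds and sample data are illustrative assumptions, not vendor guidance.

    # Minimal right-sizing sketch: compute steady-state utilization from
    # 30 to 90 days of hourly samples and suggest whether the current size fits.
    # The thresholds and sample data below are illustrative assumptions.
    from statistics import mean

    def right_size(cpu_pct, mem_pct):
        """cpu_pct, mem_pct: lists of hourly utilization samples (0-100)."""
        avg_cpu, avg_mem = mean(cpu_pct), mean(mem_pct)
        peak_cpu, peak_mem = max(cpu_pct), max(mem_pct)
        if avg_cpu < 30 and avg_mem < 40:
            advice = "downsize one instance class; let autoscaling absorb the peaks"
        elif avg_cpu > 70 or avg_mem > 75:
            advice = "move up one instance class"
        else:
            advice = "current class matches the steady-state load"
        return {"avg_cpu": avg_cpu, "avg_mem": avg_mem,
                "peak_cpu": peak_cpu, "peak_mem": peak_mem, "advice": advice}

    # Example: hourly samples exported from your monitoring tool (truncated here)
    cpu = [25, 30, 28, 90, 27, 26]
    mem = [45, 50, 48, 60, 47, 46]
    print(right_size(cpu, mem))

The point is simply to size to the averages and let autoscaling, covered below, handle the peaks.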

3. Classify your workloads

Once you have identified which cloud services to use and have properly sized the instances for the remaining servers, the next step is to classify the workload in terms of whether it is always running or displays variable use. Does the service run every once in a while or is it a mission-critical, always-on instance? We approach these two types of workloads differently in a cloud environment.


A good rule of thumb is that if a server is used for less than 60% of the month or less than 40% of the year, then it is a good fit for traditional cloud instances. Examples include development environments or seasonal work. To avoid overspending on instances that only need to be online for part of the month or year, use the traditional no-commit cloud instances. 

In most cases, a cloud instance that runs all month (720 to 750 hours) costs more than simply renting a traditional hosted server from a service provider. In those cases, reserved or dedicated instances are a cost-effective way to run steady-state, always-on workloads. Reserved or dedicated instances are instances you commit to run for at least one year; in exchange for that commitment, the cloud provider gives a discount. The discount is significant and can range from 50% to 75% compared with traditional on-demand cloud instances. There is a small upfront cost for reserved or dedicated instances, but the hourly rate is reduced. With Amazon Reserved Instances, for example, you can turn off the instance without incurring the hourly charge.
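The break-even arithmetic is easy to check for your own workloads. The short Python sketch below compares a year of on-demand hours against a one-year reserved commitment; the hourly rates and upfront fee are made-up figures for illustration, not actual cloud pricing.

    # Minimal sketch: compare one year of on-demand cost against a one-year
    # reserved instance. All prices are placeholders, not actual cloud pricing.
    HOURS_PER_YEAR = 8760

    on_demand_rate = 0.10        # $/hour, placeholder
    reserved_rate = 0.04         # $/hour after the commitment, placeholder
    reserved_upfront = 150.00    # one-time fee, placeholder

    on_demand_cost = on_demand_rate * HOURS_PER_YEAR
    reserved_cost = reserved_upfront + reserved_rate * HOURS_PER_YEAR
    savings = 1 - reserved_cost / on_demand_cost

    print(f"On-demand: ${on_demand_cost:,.2f}/year")
    print(f"Reserved:  ${reserved_cost:,.2f}/year")
    print(f"Savings:   {savings:.0%}")  # about 43% with these placeholder rates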

If your workloads are not time-sensitive and can be turned on or off at any time based on demand, there's an even cheaper option: Amazon Spot Instances. Spot instances are the cheapest way to run in a cloud environment. They let you name your price and bid on unused AWS capacity; if your bid is the highest, your workload runs -- until the spot price exceeds your bid. While the use case for spot instances is specific and not for everyone, they are a way to save money on cloud instances.
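If spot capacity does fit your workload, a request looks roughly like the boto3 sketch below. The bid price, AMI ID and instance type are placeholder assumptions.

    # Minimal sketch: bidding for spot capacity with the boto3 EC2 client.
    # The bid price, AMI ID and instance type are placeholder assumptions.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.request_spot_instances(
        SpotPrice="0.03",                 # your maximum bid, in dollars per hour
        InstanceCount=1,
        Type="one-time",                  # run once; reclaimed if the market outbids you
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",   # hypothetical machine image
            "InstanceType": "m5.large",
        },
    )
    print(response["SpotInstanceRequests"][0]["State"])

Because the instance can be reclaimed whenever the spot price rises above your bid, workloads should checkpoint their progress or be safe to restart.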

4. Embrace the magic of autoscaling

Autoscaling is another tool used to control cloud costs. Autoscaling instantiates more instances of your application components based on triggers that can be load- or performance-based. It is a bit complex and requires insight and knowledge of your application, as well as of the following key metrics:

  • the number of sessions or transactions your app can run with its minimal configuration;
  • the number of additional sessions or transactions the app could run if you add a Web or app server to the configuration;
  • the triggers to grow and shrink the environment; and
  • the base configuration needed to meet your steady-state user load.

By the numbers

30 to 90 days: The amount of time needed to monitor systems for an accurate right-sizing exercise

60%: Server utilization of less than 60% of the month (or less than 40% of the year) means it's likely a good fit for traditional cloud instances

720 to 750 hours: Standard amount of time in a month that a server instance runs

50% to 75%: Amount of cost savings possible when using reserved or dedicated instances from a cloud provider

Take, for instance, a three-tier application with a load balancer, Web server, application server and database with the following attributes: One Web server, one application server and one database server instance can run 1,000 sessions. Adding a pair of Web and app servers increases the number of possible sessions by 500.

The application has a steady-state utilization of 1,500 sessions, with peak bursts to 2,500. The growth trigger is latency on the Web server of greater than seven seconds over a five-minute period.

Within your cloud environment, create an image of your Web and app server configurations. Once the trigger is tripped, the cloud system instantiates another Web and application server pair. Autoscaling is built into several cloud provider control panels, as well as external cloud control panels from vendors such as RightScale, ServiceMesh, Scalr and Dell Enstratius. With autoscaling, you can run with a lower number of instances for 80% to 90% of the time and only increase the number of instances when the application needs it. The net effect is that you are not running the number of instances needed to handle peak loads that only occur 10% to 20% of the year. This obviously cuts costs, as you only use the precise number of instances needed to run your application.
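As a rough sketch of what that looks like in practice, the boto3 calls below attach a scale-out policy to an Auto Scaling group and wire it to a CloudWatch alarm that fires when average load balancer latency stays above seven seconds over a five-minute period, mirroring the trigger in the example. The group, load balancer and alarm names are placeholder assumptions; other providers and the control panels mentioned above expose the same concepts through their own interfaces.

    # Minimal autoscaling sketch with boto3: add capacity to the Web/app tier
    # when the latency trigger trips. Group, load balancer and alarm names are
    # placeholder assumptions.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Scale-out policy: add one instance built from the Web/app server image
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-app-tier",
        PolicyName="scale-out-on-latency",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Trigger: average load balancer latency above 7 seconds over a 5-minute period
    cloudwatch.put_metric_alarm(
        AlarmName="web-latency-high",
        Namespace="AWS/ELB",
        MetricName="Latency",
        Dimensions=[{"Name": "LoadBalancerName", "Value": "web-lb"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=1,
        Threshold=7.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

A matching scale-in policy and alarm (latency well below the threshold) shrinks the environment back to its base configuration once the peak passes.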

5. Don't forget the cloud management system

The last and possibly the most important step is to employ a cloud management system that controls the provisioning and decommissioning of your cloud instances and continually tracks your cloud spend against your allocated budget. Several such tools are available, with both SaaS and on-premises delivery models, including RightScale, Scalr and Dell Enstratius. These tools add value with features such as policy automation, workflows and support for advanced monitoring triggers. It is vital that you not only establish best practices but also employ a system that provides oversight of and visibility into your cloud environment.

Follow this formula and you will have an effective means of reducing the overall cost of your cloud environment.

About the author
Robert Green is principal cloud strategist at Enfinitum.

This was first published in June 2014
