Many enterprises think their cloud bills are more expensive than necessary -- and most are right. Those high costs, however, are not due to providers' hidden fees but often to the way organizations consume cloud services.
Elastic resources, such as those in the cloud, have elastic cost structures, and that means you need to pay attention to the way you deploy, redeploy and scale applications.
There are three processes, in particular, that an enterprise needs to manage carefully to stay within budget and develop a solid cloud cost management strategy:
- Deployment is when you set the baseline resource consumption for your cloud applications. It's best to calculate the average load of your application, add a safety margin based on how variable that load is and then size your cloud resources to that value. This reduces the need to scale later on.
- Scalability is the process through which you add resources to handle an increase in load. To support this process, enterprises often create configurations that can run up their cloud bills considerably. As a result, application scalability is one of the largest sources of cloud cost management problems.
- Fault management ensures that, when an application component fails because of a hosting or network problem, it is replaced. The replacement component, however, might introduce additional costs and perform less efficiently, which creates a need for scaling.
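The sizing rule in the deployment bullet above -- average load plus a variability-based safety margin -- can be sketched as a short calculation. The samples and the safety factor here are illustrative assumptions, not provider-recommended values:

```python
import statistics

def baseline_capacity(load_samples, safety_factor=2.0):
    """Size a deployment to average load plus a margin scaled by variability.

    load_samples: observed load measurements (e.g., requests/sec or vCPUs).
    safety_factor: standard deviations of headroom to reserve -- a tunable
                   assumption, not a provider-recommended value.
    """
    avg = statistics.mean(load_samples)
    variability = statistics.stdev(load_samples)
    return avg + safety_factor * variability

# Example: hypothetical hourly vCPU usage samples over a day
samples = [8, 9, 7, 10, 12, 9, 8, 11]
print(round(baseline_capacity(samples), 1))
```

Sizing to this value rather than to peak load avoids paying for idle capacity most of the time, while the margin absorbs routine spikes without triggering a scaling event.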
With these three areas in mind, here are some best practices to craft a cloud cost management strategy.
1. Understand traffic charges
Enterprises tend to focus on hosting costs, but most providers also charge for traffic that flows in and out of the public cloud. This movement of information is sometimes referred to as a border-crossing charge, and the specific fees associated with it vary widely. So, dig into your provider's pricing plan, and ask questions.
These traffic costs can especially add up during scaling and cloud bursting. Avoid scaling and hosting strategies that create additional border crossings, especially between the data center and the cloud. When you replace a failed data center component with a cloud component, you might incur charges for the traffic that crosses that border. To save money, try to back up data center resources with other data center resources, and back up cloud resources with other cloud resources. Write these policies into your orchestration and DevOps tools to avoid additional cloud cost management troubles.
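A quick back-of-the-envelope estimate helps when comparing backup placements. This sketch uses a hypothetical replication volume and per-GB rate -- check your provider's actual pricing tiers before relying on any figure:

```python
def egress_cost(gb_transferred, rate_per_gb):
    """Estimate border-crossing (egress) charges for a given data volume.

    rate_per_gb is illustrative -- real providers use tiered pricing,
    so read your provider's rate card for actual numbers.
    """
    return gb_transferred * rate_per_gb

# Backing up a data center component with a cloud replica incurs
# cross-border traffic; backing it up in the same data center does not.
cross_border_gb_per_month = 500   # assumed replication volume
illustrative_rate = 0.09          # $/GB, a hypothetical figure
print(f"${egress_cost(cross_border_gb_per_month, illustrative_rate):.2f}/month")
```

Even a modest per-GB rate adds up when replication or failover traffic crosses the border every month, which is why keeping backups on the same side of the boundary usually wins.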
2. Use web services carefully
Nearly every cloud provider offers specialized services for mobile, web, event processing, AI and more. In most cases, these services are priced based on usage, which means applications that continuously use one of these services could cause your cloud costs to rise significantly.
These hosted web services aren't usually portable to the data center, so you risk overscaling in the cloud to respond to increased load. To avoid this, use web services only where consumption can be bounded; you can specify scalability limits in your cloud contract or build limits into the apps themselves.
Serverless computing is an area of especially high risk in terms of cost. These services follow a usage-based pricing model and scale with the number of events or requests. It's often difficult to set boundaries on event generation, so plan serverless applications carefully to make sure you don't end up paying more if a flood of requests comes in.
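One way to build the boundary into the application itself, as suggested above, is a guard that sheds requests once an estimated budget is exhausted. This is a minimal sketch; the budget and per-invocation cost are assumptions, and a real deployment would read them from billing APIs or provider quotas:

```python
class SpendGuard:
    """Reject serverless invocations once an estimated budget is spent.

    monthly_budget and cost_per_invocation are illustrative assumptions;
    production code would source these from billing data or quotas.
    """
    def __init__(self, monthly_budget, cost_per_invocation):
        self.monthly_budget = monthly_budget
        self.cost_per_invocation = cost_per_invocation
        self.spent = 0.0

    def allow(self):
        if self.spent + self.cost_per_invocation > self.monthly_budget:
            return False  # shed the request instead of paying for it
        self.spent += self.cost_per_invocation
        return True

# A flood of 600,000 requests against a $100 budget at $0.0002 each:
guard = SpendGuard(monthly_budget=100.0, cost_per_invocation=0.0002)
accepted = sum(guard.allow() for _ in range(600_000))
print(accepted)  # acceptance stops once the budget is exhausted
```

The point is not the exact mechanism but that the application, not the event source, enforces the ceiling -- otherwise a flood of requests translates directly into a flood of charges.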
Other recent cloud provider features, such as the integration of Kubernetes services with applications, also create added risk for overconsumption of web services, as they make it more difficult to assess the impact of scaling. Be sure to set boundaries on how much access users have to these services.
3. Factor in availability, resiliency costs
When enterprises run workloads in multiple availability zones, they need to account for that in their cloud cost management strategy. Before you solidify a backup plan with availability zones, look carefully at other options, including data center hosting. In addition, dedicated cloud hosting, which provides a single-tenant environment, might cost more upfront but could reduce the number of times you'll have to scale the applications.
As mentioned above, it's not uncommon for the redeployment of a failed component to result in lower performance than the original. If you frequently redeploy components, you might need to scale additional instances to manage quality of experience, which becomes a vicious circle. The best approach is to redeploy a work-equivalent configuration when something fails and then add the number of instances required to match the performance of the failed element.
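The "work-equivalent" rule above is simple arithmetic: if a replacement instance delivers only a fraction of the original's throughput, scale the instance count up to compensate. The performance figures here are illustrative, not provider benchmarks:

```python
import math

def replacement_instances(original_count, relative_performance):
    """Instances needed for a redeployed configuration to match the failed one.

    relative_performance: throughput of a replacement instance as a fraction
    of the original (e.g., 0.8 if the replacement runs 20% slower).
    """
    return math.ceil(original_count / relative_performance)

# Example: 4 failed instances, replacements run at 80% of original throughput
print(replacement_instances(4, 0.8))  # 5
```

Redeploying at this adjusted count up front avoids the vicious circle of degraded quality of experience triggering further rounds of reactive scaling.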