With all the available choices, it isn't easy to pick the right cloud instances to match specific workloads. As those workloads change over time, organizations need to frequently review the number of instances they run -- and their sizes -- to optimize delivery and lower costs.
Luckily, there are some common red flags that suggest it might be time to resize your cloud instances. These include:
- Long runtimes;
- The inability to respond to increased demand; and
- The need to add more instances to support a particular workload.
The three signs above indicate that a current cloud instance size is too small for the application it runs. In this case, admins should look at cloud usage reports to determine what they need -- more memory, virtual CPU cores, storage and so on. A trial deployment, either with a sandbox environment or a live operation, helps admins evaluate the impact of a new instance size.
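As a rough sketch of that review process, the red flags above can be checked programmatically against usage data. The metric names and thresholds below are illustrative assumptions, not fields from any specific cloud provider's reporting API.

```python
# Flag instances that may be undersized, based on the red flags above.
# Metric names and thresholds are hypothetical examples.

def undersized_flags(metrics):
    """Return the red flags raised by one instance's usage metrics."""
    flags = []
    if metrics["avg_job_runtime_s"] > metrics["target_runtime_s"]:
        flags.append("long runtimes")
    if metrics["peak_cpu_pct"] >= 95 and metrics["throttled_requests"] > 0:
        flags.append("cannot absorb demand spikes")
    if metrics["instances_added_last_30d"] > 0:
        flags.append("horizontal scaling to compensate")
    return flags

# Sample usage report for one instance (made-up numbers).
sample = {
    "avg_job_runtime_s": 340,
    "target_runtime_s": 180,
    "peak_cpu_pct": 97,
    "throttled_requests": 12,
    "instances_added_last_30d": 2,
}
print(undersized_flags(sample))
```

An instance that raises any of these flags is a candidate for a trial deployment at the next size up.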
Consider the costs
When choosing an instance, consider its size. Larger cloud instances remove bottlenecks and allow apps to run more efficiently, which reduces the total instance count an organization needs. That reduced count offsets the higher price of a larger instance type and should lead to overall savings, along with better runtimes.
Bursty workloads, however, require a little more attention. Larger cloud instances are likely the best choice for the baseline load, but admins should analyze workload spike patterns -- especially spike duration. To get the lowest overall total cost of ownership, it might be best to buy long-term instances, such as Amazon Elastic Compute Cloud Reserved Instances, for the baseline load, some larger instance types to support some of the bursts, and some smaller instances on the spot market to support remaining demand peaks.
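The arithmetic behind that mix can be sketched as follows. All prices, instance counts and peak hours here are made-up numbers chosen only to illustrate the comparison, not actual cloud rates.

```python
# Illustrative cost comparison for a bursty workload: reserved capacity
# for the baseline plus spot capacity for peaks, versus running
# everything on-demand. All rates and hours are hypothetical.

HOURS_PER_MONTH = 730

def monthly_cost(baseline_instances, peak_instances, peak_hours,
                 on_demand_rate, reserved_rate, spot_rate):
    # Option A: cover baseline and peaks entirely with on-demand capacity.
    on_demand = (baseline_instances * HOURS_PER_MONTH
                 + peak_instances * peak_hours) * on_demand_rate
    # Option B: reserved instances for the steady baseline,
    # spot instances only during demand peaks.
    mixed = (baseline_instances * HOURS_PER_MONTH * reserved_rate
             + peak_instances * peak_hours * spot_rate)
    return on_demand, mixed

a, b = monthly_cost(baseline_instances=4, peak_instances=6, peak_hours=50,
                    on_demand_rate=0.10, reserved_rate=0.06, spot_rate=0.03)
print(f"all on-demand: ${a:.2f}/mo, reserved + spot: ${b:.2f}/mo")
```

The longer and more predictable the spikes, the more it pays to shift part of the peak capacity from spot to reserved or larger instance types.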
Many admins downsize their cloud instances compared to the in-house resources they would use to support a particular workload -- but this is a mistake. There is usually a minimum level of resources -- such as compute, dynamic random access memory and networking -- that an app needs to run smoothly. Going below that threshold forces the app to spend significant resources on overhead, such as thrashing files in and out of memory, just to overcome bottlenecks. Admins can break up the workload to run on multiple instances to reduce per-instance size requirements, but that just means there is a new sweet spot, in terms of instance size, for admins to identify.
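That "minimum viable instance" point can be expressed as a simple filter: discard any candidate size below the app's resource floor, then pick the cheapest of what remains. The instance names, sizes and prices below are hypothetical.

```python
# Sketch of right-sizing with a resource floor: an instance smaller than
# the app's minimum forces thrashing, so filter candidates by that floor
# before comparing prices. All sizes and prices are hypothetical.

APP_FLOOR = {"vcpus": 4, "ram_gb": 16}  # minimum for smooth operation

CANDIDATES = [
    {"name": "small",  "vcpus": 2, "ram_gb": 8,  "usd_hr": 0.05},
    {"name": "medium", "vcpus": 4, "ram_gb": 16, "usd_hr": 0.10},
    {"name": "large",  "vcpus": 8, "ram_gb": 32, "usd_hr": 0.20},
]

def cheapest_viable(candidates, floor):
    """Cheapest candidate that meets or exceeds the app's resource floor."""
    viable = [c for c in candidates
              if c["vcpus"] >= floor["vcpus"] and c["ram_gb"] >= floor["ram_gb"]]
    return min(viable, key=lambda c: c["usd_hr"])

print(cheapest_viable(CANDIDATES, APP_FLOOR)["name"])
```

The "small" size is cheapest per hour but falls below the floor, so it never wins -- which is the point of the paragraph above.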