With all the available choices, it isn't easy to pick the right cloud computing instances to match specific workloads.
As those workloads change over time, organizations need to frequently review the number of instances they run -- and their sizes -- to optimize delivery and lower costs.
Luckily, there are some common red flags that suggest it might be time to resize your cloud instances. These include:
- Long runtimes;
- The inability to respond to increased demand; and
- The need to add more instances to support a particular workload.
The three signs above indicate that a current cloud computing instance size is too small for the application it runs. In this case, admins should look at cloud usage reports to determine what they need -- more memory, virtual CPU cores, storage and so on. A trial deployment, either with a sandbox environment or a live operation, helps admins evaluate the impact of a new instance size.
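The check described above can be sketched in code. This is a minimal illustration, not a vendor tool: the metric names, thresholds and sample format are assumptions, and a real deployment would pull these figures from the provider's usage reports.

```python
# Sketch: flag instances whose usage reports suggest they are undersized.
# The 85%/90% thresholds and the (cpu, memory) sample format are
# illustrative assumptions, not provider defaults.

def needs_resize(samples, cpu_limit=0.85, mem_limit=0.90):
    """Return True if sampled utilization suggests the instance is too small.

    samples: list of (cpu_fraction, mem_fraction) tuples from a usage report.
    """
    if not samples:
        return False
    saturated = sum(1 for cpu, mem in samples
                    if cpu > cpu_limit or mem > mem_limit)
    # Flag the instance if it was saturated in most sampled intervals.
    return saturated / len(samples) >= 0.5

report = [(0.92, 0.70), (0.88, 0.95), (0.40, 0.50), (0.91, 0.93)]
print(needs_resize(report))  # three of four intervals are saturated
```

An instance that trips this check is a candidate for more vCPUs or memory; a trial deployment then confirms whether the larger size actually clears the bottleneck.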
Consider the costs
When choosing an instance, consider its size. Larger cloud computing instances remove bottlenecks and allow apps to run more efficiently, which reduces the total instance count an organization needs. This offsets the higher price of a larger instance type, and should lead to overall savings, coupled with better runtimes.
Bursty workloads, however, require a little more attention. Larger cloud computing instances are likely the best choice for the baseline load, but admins should analyze workload spike patterns -- especially spike duration. To get the lowest total cost of ownership, it might be best to buy long-term instances, such as Amazon Elastic Compute Cloud Reserved Instances, for the baseline load, some larger instance types to support some of the bursts, and some smaller instances on the spot market to support remaining demand peaks.
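The arithmetic behind that reserved-plus-spot mix can be sketched as follows. The hourly rates, instance counts and burst duration here are illustrative assumptions, not published AWS prices; the point is only that paying for the peak around the clock costs far more than covering it during the burst hours alone.

```python
# Sketch: compare an all-on-demand fleet sized for the peak against a
# reserved-baseline-plus-spot-burst mix. All rates are illustrative.

HOURS_PER_MONTH = 730

def on_demand_cost(peak_instances, rate=0.10):
    # Provision for the peak 24/7, at the on-demand rate.
    return peak_instances * rate * HOURS_PER_MONTH

def mixed_cost(baseline, peak, burst_hours,
               reserved_rate=0.06, spot_rate=0.03):
    # Reserved instances cover the baseline all month; cheaper spot
    # instances cover the extra demand only during the burst hours.
    reserved = baseline * reserved_rate * HOURS_PER_MONTH
    spot = (peak - baseline) * spot_rate * burst_hours
    return reserved + spot

print(on_demand_cost(peak_instances=10))                 # 730.0
print(mixed_cost(baseline=4, peak=10, burst_hours=100))  # 193.2
```

The same comparison, rerun with real quotes and the organization's own spike-duration data, tells admins where the break-even point sits for their workload.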
Many admins downsize their cloud instances compared to the in-house resources they would use to support a particular workload -- but this is a mistake. The problem is that there is usually a minimum amount of resources -- such as compute, dynamic random access memory and networking -- that an app needs to run smoothly. Going below that threshold forces the app to spend significant resources overcoming bottlenecks -- for example, thrashing files in and out of memory. Admins can break up the workload to run on multiple instances to reduce size requirements, but that just means there is a new sweet spot, in terms of instance size, for admins to identify.
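The hidden cost of splitting a workload can be made concrete with a small model. The per-instance floor of one vCPU for the OS and runtime is an illustrative assumption, but it shows why every additional instance adds fixed overhead on top of its share of the work.

```python
# Sketch: total vCPUs consumed when a workload is split across N instances,
# where each instance pays a fixed resource floor (OS, runtime, caches).
# The 16-vCPU workload and 1-vCPU floor are illustrative assumptions.
import math

def total_vcpus(workload_vcpus, n_instances, floor_per_instance=1):
    # Each instance carries the fixed floor on top of its workload share.
    per_instance = math.ceil(workload_vcpus / n_instances) + floor_per_instance
    return per_instance * n_instances

for n in (1, 2, 4, 8):
    print(n, total_vcpus(workload_vcpus=16, n_instances=n))
```

The more finely the workload is split, the more total capacity goes to overhead rather than work -- which is exactly why the sweet spot has to be found again after any repartitioning.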