Paying only for what you use is a major cloud computing selling point, especially to budget-conscious IT execs.
But using only what you "need" is easier said than done if developers must roll their own code to automatically add or remove compute resources in response to changing demand.
Autoscaling has been the most common economic justification for enterprise cloud computing. Microsoft introduced built-in autoscaling management for Windows Azure cloud services, websites and mobile services at its 2013 Build Developers Conference. Since then, it has been adding features to appeal to even the most critical enterprise DevOps teams and finance execs.
Amazon Web Services has offered autoscaling in its Elastic Compute Cloud (EC2) public cloud since 2009; Microsoft Windows Azure, however, didn't feature it until this year. Previously, enterprises could add autoscaling to Azure through Paraleap Technologies' third-party AzureWatch service. The Azure team has since incrementally improved its autoscaling, monitoring and diagnostics features.
The need for autoscaling arose because public-facing websites and services exhibit a combination of predictable and unpredictable traffic variations that can cause unacceptable response times or even total outages. Unanticipated viral events or publicity can multiply Web server load in just an hour or two; as a result, Internet startups that suddenly gain notoriety have often been knocked completely out of service.
DevOps teams can customize data center orchestration software from a variety of sources, such as Microsoft System Center or Puppet Labs Enterprise, to match on-premises resources with cyclic traffic demands. However, most startups or enterprises can't realistically devote capital investment to data center facilities that are used only for a fraction of the day or a few times per year.
Autoscaling techniques and management for cloud resources
Cloud computing service providers use hardware and software load balancing to simplify resource allocation from clusters of servers, such as the Windows Azure fabric, to individual subscribers, as well as to automate recovery from hardware failures. Windows Azure moved from hardware to software load balancing for improved throughput and reliability in conjunction with migration to a new 10-Gbps flat network topology called Quantum 10 (Q10). Additional monitoring features implemented for the Q10 architecture facilitated autoscaling management.
Figure 1. Windows Azure Management Portal's Scale (Preview) page for OakLeaf Systems' new Android MiniPCs and TVBoxes demonstration WAWS, with a minimum of one and maximum of three instances scaled by CPU usage.
"Autoscale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance," said Microsoft Vice President Scott Guthrie in a blog post, touting the benefits of Windows Azure AutoScale Service (WAAS) and management features. "Once configured, it will regularly adjust the number of instances running in response to the load in your application," Guthrie wrote.
Guthrie also noted that WAAS currently supports two load metrics -- CPU percentage and, for cloud services and Windows Azure Virtual Machines (WAVMs) only, storage queue depth -- but Microsoft will continue to add support for more services. Enterprises can set up WAAS on the Windows Azure Management Portal's new Scale (Preview) page (see Figure 1).
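Conceptually, a CPU-percentage rule like the one shown in Figure 1 reduces to a threshold check bounded by minimum and maximum instance counts. The following Python sketch is illustrative only; the thresholds and function name are hypothetical, not part of Azure's actual API:

```python
def autoscale_decision(avg_cpu_percent, current_instances,
                       min_instances=1, max_instances=3,
                       scale_up_at=80, scale_down_at=60):
    """Return a new instance count for a CPU-percentage autoscale rule.

    Instance limits mirror the Figure 1 settings (min 1, max 3);
    the CPU thresholds are illustrative defaults, not Azure's.
    """
    if avg_cpu_percent > scale_up_at and current_instances < max_instances:
        return current_instances + 1          # scale out under load
    if avg_cpu_percent < scale_down_at and current_instances > min_instances:
        return current_instances - 1          # scale in when idle
    return current_instances                  # within the target band
```

For example, `autoscale_decision(85, 1)` returns 2, while `autoscale_decision(95, 3)` stays at the configured maximum of 3.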
Windows Azure autoscaling enhancements
Windows Azure now supports autoscaling its Windows Azure Mobile Services (WAMS) Backend-as-a-Service offering based on daily API use, Guthrie's blog post noted:
When this feature is enabled, Windows Azure will periodically check the daily number of API calls to and from your Mobile Service and will scale up by an additional unit if you are above 90% of your API quota (until reaching the set maximum number of instances you wish to enable).
At the beginning of each day (UTC), Windows Azure will then scale back down to the configured minimum. This enables you to minimize the number of Mobile Service instances you run -- and save money.
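The quoted rule boils down to a daily-quota check with a reset at the start of each UTC day. This Python sketch is a hypothetical illustration of that logic; the unit limits and function name are assumptions, not Azure's implementation:

```python
from datetime import datetime, timezone

def mobile_service_units(api_calls_today, daily_quota, current_units,
                         min_units=1, max_units=6, now=None):
    """Apply the quoted WAMS rule: add one unit when daily API calls
    exceed 90% of quota (up to the configured maximum), and reset to
    the minimum at the start of each UTC day.

    Unit limits are hypothetical examples, not Azure defaults.
    """
    now = now or datetime.now(timezone.utc)
    if now.hour == 0 and now.minute == 0:     # start of the UTC day
        return min_units                      # scale back down
    if api_calls_today > 0.9 * daily_quota and current_units < max_units:
        return current_units + 1              # above 90% of quota
    return current_units
```

With a 10,000-call quota, `mobile_service_units(9500, 10000, 2, now=...)` at midday returns 3, and any check at 00:00 UTC returns the configured minimum.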
Figure 2. The Set Up Schedule Times dialog enables admins to specify different autoscaling settings for day, defined as 8:00 AM to 8:00 PM by default, and night, as well as weekdays and weekends.
Microsoft also reported the extension of WAAS to Azure Service Bus Queues, which will spin up new virtual machines or cloud services to handle increased workloads. The addition of Schedule rules for autoscaling came a month later, in August. These rules let you establish different scale settings for different times of the day by clicking the Set Up Schedule Times button shown in Figure 1 to open the dialog of the same name (see Figure 2).
If you specify different scale settings for weekdays and weekends, the Edit Scale Settings for Schedule list (shown in Figure 1) gains a choice of recurring schedules for scaling on weekdays, weeknights and weekends. You can combine these settings with those for CPU usage. The August update also enables the monitoring of autoscaling trends over time and alerting of autoscale failures.
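The schedule rules amount to selecting a settings profile from the current hour and day of the week. The following minimal Python sketch uses the default day window from Figure 2 (8:00 AM to 8:00 PM); the profile names are illustrative, not Azure's:

```python
from datetime import datetime

def schedule_profile(dt, day_start=8, day_end=20):
    """Pick an autoscale settings profile the way the Set Up Schedule
    Times dialog does: day (8:00 AM-8:00 PM by default) vs. night,
    plus weekday vs. weekend. Profile names here are hypothetical.
    """
    weekend = dt.weekday() >= 5               # Saturday=5, Sunday=6
    daytime = day_start <= dt.hour < day_end
    if weekend:
        return "weekend"
    return "weekday-day" if daytime else "weeknight"
```

For instance, a Monday at 10:00 AM maps to the "weekday-day" profile, the same Monday at 10:00 PM to "weeknight", and any Saturday to "weekend".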
The management portal calculates and reports the estimated amount that autoscaling WAVMs can save on your monthly bill, compared with leaving all instances allocated and running. AWS autoscaling requires and charges for CloudWatch Auto Scaling Group metrics; Windows Azure autoscaling is free for compute services -- except for the Free and Shared Web Sites tiers.
About the author:
Roger Jennings is a data-oriented .NET developer and writer, a Windows Azure MVP, principal consultant at OakLeaf Systems and curator of the OakLeaf Systems Inc. and Android MiniPCs and TVBoxes blogs. He's also the author of more than 30 books on the Windows Azure platform, Microsoft operating systems (Windows NT and 2000 Server), databases (SQL Azure, SQL Server and Access), .NET data access, Web services and InfoPath 2003. More than 1.25 million English copies of his books are in print, and they have been translated into more than 20 languages.