Examine DR in the cloud from all angles
A comprehensive collection of articles, videos and more, hand-picked by our editors
If you ranked certain cloud features by the total revenue they could generate or the total user spending they could affect, cloud bursting and failover would be at the top of both lists. Many enterprises consider these two features equally important and find that the processes can support each other. In fact, the best strategy for hybrid cloud deployment is to combine cloud bursting and failover.
Cloud bursting, or workload overflow processing, occurs when an application's presented workload exceeds its capacity. It allows an enterprise to spin up additional instances of the application in the cloud to relieve the overload that would otherwise degrade users' quality of experience. It's a perfect example of how public cloud resources can augment internal IT resources, and it makes economic sense if it avoids costly capacity oversupply in the data center.
Cloud bursting requires two key technical elements: an application design that permits multiple instances to run at a time, and a mechanism to load-balance work among all the instances -- whether they're running in the data center or the public cloud.
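Those two elements can be sketched in a few lines. This is a minimal illustration, not a real orchestration API: the instance names are hypothetical, and a production system would spin cloud instances up and down automatically rather than swap pools by hand.

```python
from itertools import cycle

# Hypothetical instance pool; names are illustrative only.
on_prem = ["dc-app-1", "dc-app-2"]
cloud = ["cloud-app-1"]  # spun up only when bursting

def make_scheduler(instances):
    """Round-robin work distribution across all live app instances."""
    pool = cycle(instances)
    return lambda: next(pool)

# Normal operation: only data-center instances serve requests.
pick = make_scheduler(on_prem)
print([pick() for _ in range(4)])  # alternates dc-app-1, dc-app-2

# Burst: workload exceeds data-center capacity, so cloud
# instances join the same rotation.
pick = make_scheduler(on_prem + cloud)
print([pick() for _ in range(3)])  # dc-app-1, dc-app-2, cloud-app-1
```

The key property is that the scheduler treats data-center and cloud instances identically; where an instance runs is invisible to the work-distribution logic.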
Failover, or disaster recovery, in the cloud also makes sense to buyers. Many enterprises already have considered or partially implemented standby data centers to keep their applications running in case a major failure disables some or all of the normal data center resources.
In a failover strategy, the emphasis is often on major incidents, such as hurricanes or power failures that take out an entire geographic area. In many cases, there's an expected outage period while an enterprise shifts from its primary resources to the standby. In most cases, the applications will be running in one place or the other and traditional mechanisms such as the domain name system (DNS) can redirect work to the standby data center on a failure -- then back to the production data center when the failure has passed.
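The DNS-style redirection described above reduces to a simple rule: answer with the production address while the primary site is healthy, and with the standby address during a failure. The sketch below assumes hypothetical addresses and an in-process health flag; a real deployment would rely on DNS TTLs and external health probes.

```python
# Hypothetical site addresses (documentation-range IPs).
PRODUCTION = "203.0.113.10"   # primary data center
STANDBY = "198.51.100.20"     # standby data center

def resolve(primary_healthy: bool) -> str:
    """DNS-style redirection: point clients at the standby during a
    failure, and back to production once the failure has passed."""
    return PRODUCTION if primary_healthy else STANDBY

print(resolve(True))   # normal operation -> production address
print(resolve(False))  # site failure -> standby address
```

Note that this model assumes the application runs in only one place at a time, which is exactly the limitation cloud bursting removes.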
Clearly, cloud bursting represents a more agile approach for a disaster strategy. If growth in an application's workload can trigger cloud bursting, a reduction in available resources to the application -- server or even data center failure -- could also trigger it. This DR strategy could deal with not only a complete data center failure but also limited equipment, software or even network failures. Overall, successful cloud bursting is a useful strategy for building hybrid cloud applications.
Creating a robust cloud bursting implementation
Issues associated with resource failures, not growth in application workloads, help determine whether cloud bursting can work to build hybrid apps. Users need access to any additional copies of an application, and the copies require access to databases and other resources. Are both of these conditions possible using cloud bursting in disaster recovery mode?
User access to these multiple app copies requires a form of load balancing. In most cases, enterprises would include a Layer 3 switch, application delivery controller or similar device in their data center between the WAN gateway and data center servers. This device would then switch work among copies of an application as needed. If cloud bursting is implemented -- as it sometimes is -- by connecting the public cloud "behind" these on-premises load-balancing devices, then loss of the data center takes down not only the servers but also the load-balancing devices. In this case, there will be no way to access public cloud resources.
Instead, the best approach to create a robust implementation of cloud bursting is to start with a network-based load-balancing strategy.
Load Balancing as a Service is a new feature in the Grizzly release of OpenStack and is increasingly being implemented by cloud vendors. Savvy users could also build such an application and host it in the cloud. With this approach, all load balancing occurs in the cloud, with data center application resources linked to cloud resource allocation. That means a data center failure won't result in a loss of application connectivity.
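The benefit of hosting the balancer in the cloud can be shown with a short sketch. The backend names and health flags below are illustrative, not part of any LBaaS API: the point is that when the data center fails, its backends drop out of the pool but the balancer itself survives, so clients keep their connectivity.

```python
import random

# Cloud-hosted balancer's view of its backends (names hypothetical).
backends = {
    "dc-app-1": True,     # data-center instances
    "dc-app-2": True,
    "cloud-app-1": True,  # cloud instance
}

def route_request():
    """Send the request to any healthy backend, wherever it runs."""
    live = [name for name, healthy in backends.items() if healthy]
    if not live:
        raise RuntimeError("no healthy backends")
    return random.choice(live)

# Data-center failure: its backends go dark, but because the balancer
# runs in the cloud, routing to cloud instances continues unbroken.
backends["dc-app-1"] = backends["dc-app-2"] = False
print(route_request())  # only cloud-app-1 remains eligible
```

Contrast this with the on-premises device described earlier, where a data-center failure would have taken the balancer down along with the servers.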
The issue of data or application access is more complicated. In workload-driven cloud bursting, it's safe to assume that app data is still available in the data center and cloud-based copies of the application can access it. If a resource failure triggers cloud bursting -- particularly a failure that takes down an entire data center -- then database resources in the data center are unavailable, and cloud copies of the application cannot access the data.
Moving a company's entire database to the cloud is likely impractical for cost, security and compliance reasons. The only alternative is to improve the reliability of the database elements of applications: protect data storage and query processors with additional backup power and cooling, or even provide hot-standby copies of key application data in alternate locations. While this will certainly add to the cost, it will still almost certainly be less expensive than maintaining a complete backup data center.
Both cloud bursting and DR will likely demand some application optimization. Online transaction processing may need to be adapted to update multiple database copies to maintain a standby copy that's up-to-date. Enterprises may also need to analyze application workflows to understand which app components can be replicated to improve performance and reliability, as well as how to maintain data integrity if multiple components are accessing the same database at the same time. While these types of application issues aren't new to seasoned architects, it's still possible that cloud bursting and disaster recovery processes will demand some new accommodations. It's even more likely that an application that accomplishes both at the same time -- cloud bursting and failover -- will require special design. Testing and review are critical to achieving business goals.
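The dual-write adaptation mentioned above -- applying each transaction to both the primary and a hot-standby copy -- can be sketched as follows. This uses SQLite purely for illustration, with a hypothetical table; real OLTP systems would typically use replication machinery rather than application-level dual writes, which carry their own consistency risks.

```python
import sqlite3

# Two database copies: primary plus a hot standby (in-memory for demo).
primary = sqlite3.connect(":memory:")
standby = sqlite3.connect(":memory:")
for db in (primary, standby):
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

def record_order(order_id, amount):
    """Apply the transaction to every copy so the standby stays current."""
    for db in (primary, standby):
        with db:  # each `with` block commits on success, rolls back on error
            db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, amount))

record_order(1, 19.99)
record_order(2, 5.00)

# Both copies now hold identical data, so either can serve a failover.
for db in (primary, standby):
    print(db.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 2
```

This is also where the data-integrity concern arises: if multiple application components write to the same copies concurrently, the update ordering across copies must be controlled, which is exactly the kind of accommodation the design review should test for.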
About the author:
Tom Nolle is president of CIMI Corporation, a strategic consulting firm specializing in telecommunications and data communications since 1982.