Taking an application to the cloud isn't simply a matter of generating a machine image. A cloud migration can compromise the capabilities the data center provided to support application integrity, performance, security and compliance. While all of these are important, the first test of any application is whether it meets business goals -- and its performance is critical in making sure it does.
Application performance management (APM) is both a set of practices and a toolkit, and it's one of the most crucial elements in workers' quality of experience (QoE). It's critical to either transfer current APM practices to the cloud or replace them with cloud-friendly equivalents.
Traditional APM focuses on performance enhancement through one or more of the following:
- Compression of traffic to improve effective throughput;
- Prioritization of time- or performance-critical traffic at the access edge;
- Specialized replacements for Transmission Control Protocol that respond better to packet loss;
- Multiple parallel paths through the network to increase effective bandwidth; and
- Load balancing of traffic across multiple servers.
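As a minimal sketch of the first technique on the list, the snippet below compresses a repetitive payload and estimates the resulting effective-throughput gain. The payload, link speed and compression level are all illustrative assumptions, not figures from any real APM product:

```python
import zlib

# Hypothetical illustration: compressing a highly repetitive payload before
# transmission raises effective throughput on a fixed-bandwidth link.
payload = b"GET /catalog/item?id=42 HTTP/1.1\r\nHost: example.com\r\n" * 200
compressed = zlib.compress(payload, 6)  # level 6: balanced speed vs. ratio

ratio = len(payload) / len(compressed)
link_kbps = 1_000  # assumed link speed, kilobits per second
effective_kbps = link_kbps * ratio  # the same bits on the wire carry more data

print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes")
print(f"ratio: {ratio:.1f}x -> effective throughput ~{effective_kbps:.0f} kbps")
```

Real traffic compresses far less predictably than this repetitive sample, and -- as discussed later in the article -- compression also adds processing delay, so the gain is situational.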
All of these would normally be implemented using an appliance or software agent at both ends of the connection, user and server. When the server side is moved into the cloud, this implementation can be problematic, particularly if it involves network appliances. Cloud operators rarely allow users to install equipment in their data centers, and even if they do, the assignment of an application to a cloud virtual machine may put the app far from its accelerating appliances.
The solution to this problem is to use APM software tools rather than hardware. For that to be effective, the APM tool must be in the form of network middleware, included in the application's machine image. Software-based agents at the server side can still be paired with appliances if desired, and the combination will support at least some of the capabilities of current APM practices.
Fixing bottlenecks in cloud application performance
Load balancing poses specific problems, not only because appliances aren't likely to be accepted into a cloud provider's network, but because in the cloud, multiple application instances may not reside in the same data center. There are mechanisms for software-based, distributed load balancing using DNS, but they may require tweaks to the application to work.
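A hedged sketch of how DNS-based distribution can work: one name maps to several instance addresses, and each lookup picks among them. The hostname, addresses and random-selection policy below are invented for illustration; real deployments rely on round-robin or geo-aware DNS, and the application tweak implied is that any instance must be able to serve any user:

```python
import random

# Simulated authoritative DNS data: several A records for one name.
# Addresses are from documentation ranges and are purely illustrative.
dns_records = {
    "app.example.com": ["203.0.113.10", "198.51.100.22", "192.0.2.7"],
}

def resolve(name: str) -> str:
    """Pick one instance address, simulating client-side round-robin DNS."""
    return random.choice(dns_records[name])

# Each lookup may land on an instance in a different data center.
chosen = {resolve("app.example.com") for _ in range(100)}
print(f"distinct addresses used across 100 lookups: {sorted(chosen)}")
```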
The next step is to address network performance issues arising from the cloud. Cloud providers typically connect to users via the Internet, which offers only best-effort service. When companies want better performance, they must use virtual private networks (VPNs) -- and not all cloud providers will allow VPN connections to their clouds. Even where VPNs are supported, there may be limitations on which network operators' VPN services can be used, and their use may affect cloud features, such as the ability to distribute hosting points across geographic zones. A cloud planner must know every VPN option available from every potential cloud provider and attempt to find out how long such arrangements will remain available.
Performance management in the cloud may also offer options not readily available in the data center. About one-third of cloud planners intend to use "cloud bursting" to push work from the data center to the cloud in response to increased load. Many native cloud applications already have the capability to "horizontally scale," or automatically instantiate new copies of themselves to increase the number of users or transactions that can be served. The challenge is that this scaling requires careful planning and likely changes to the applications themselves.
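The cloud-bursting decision can be sketched as simple capacity arithmetic: burst only the overflow beyond what the on-premises pool can serve. The capacity figures and ceiling-division policy below are assumptions for illustration, not from any provider:

```python
DATACENTER_CAPACITY = 500   # assumed requests/sec the on-premises pool handles
INSTANCE_CAPACITY = 100     # assumed requests/sec one cloud instance absorbs

def cloud_instances_needed(load_rps: int) -> int:
    """Cloud copies to instantiate for the load exceeding on-premises capacity."""
    overflow = max(0, load_rps - DATACENTER_CAPACITY)
    return -(-overflow // INSTANCE_CAPACITY)  # ceiling division

for load in (300, 500, 650, 1200):
    print(f"load {load} rps -> {cloud_instances_needed(load)} cloud instance(s)")
```

A production autoscaler would also need hysteresis (scale-down thresholds below scale-up ones) to avoid thrashing, which is part of the careful planning the article mentions.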
To understand the benefits of and requirements for horizontal scaling, first build a simple workflow diagram showing how a given user or transaction is served by the application. In many cases, the work will pass through a Web server, an application server and a database server. Analysis of current performance will tell you which of these elements are actual performance bottlenecks. If a given application uses 80% of its processing resources in the Web server while users browse product catalogs, then replicating the Web server (and the catalogs) will have a significant impact on the application's performance. If only 20% of the time is spent there, providing more copies of the Web server is unlikely to improve performance -- which means the bottleneck lies elsewhere.
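The 80%/20% reasoning above can be made concrete with an Amdahl's-law-style estimate: replicating a tier only helps in proportion to the time spent in it. The function below is an illustrative model, not a measurement tool:

```python
def speedup(fraction_in_tier: float, replicas: int) -> float:
    """Overall speedup when one tier, taking `fraction_in_tier` of total time,
    is spread across `replicas` copies; everything else is unchanged."""
    new_time = (1 - fraction_in_tier) + fraction_in_tier / replicas
    return 1 / new_time

# 80% of time in the Web tier: doubling it cuts total time substantially.
print(f"80% in web tier, 2 copies: {speedup(0.8, 2):.2f}x")
# Only 20% there: extra Web servers barely help; the bottleneck is elsewhere.
print(f"20% in web tier, 2 copies: {speedup(0.2, 2):.2f}x")
```

With 80% of the time in the replicated tier, two copies yield roughly a 1.67x speedup; with only 20%, about 1.11x, confirming that replication should target the measured bottleneck.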
Database access often creates a new cloud bottleneck because companies are reluctant to pay the access and storage costs -- and to accept the security and compliance risks -- associated with migrating critical application data to the cloud. In this case, some cloud applications must pull data across the network from on-premises systems. In cloud bursting, this is almost always required because the primary data will still reside in the data center where the application normally runs.
Creating an efficient data pathway is critical, and that may mean adding APM measures to the database connection between the cloud and the data center. The problem is that compression typically adds delay, and for some data access strategies, that's as bad as low capacity. The best approach is to plan for a database server that accepts queries rather than block-level I/O commands. This reduces the data volume and eliminates the accumulated delay of moving individual records across the interface. Query/result exchanges are also less delay-sensitive, so you can add compression and other APM features to the application-to-database connection to improve performance.
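To illustrate why query-shipping moves less data than block-level access, the sketch below uses an in-memory SQLite table with invented data: a query returns only the matching rows across the notional cloud-to-data-center interface, whereas block-style access would ship the whole table and filter on the cloud side:

```python
import sqlite3

# Invented on-premises data set: 10,000 orders, 10% in the "east" region.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
rows = [(i, "east" if i % 10 == 0 else "west", i * 1.5) for i in range(10_000)]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# Query-shipping: only the 1,000 matching rows cross the interface.
matching = conn.execute(
    "SELECT id, total FROM orders WHERE region = ?", ("east",)
).fetchall()

# Block-level access would have to move all 10,000 rows before filtering.
print(f"rows moved with a query: {len(matching)} of {len(rows)} total")
```

The ten-to-one reduction here is an artifact of the invented data, but the principle holds generally: the database server does the filtering where the data lives, so only results traverse the cloud link.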
About the author:
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecommunications and data communications since 1982.