Application performance modeling has been an appealing notion for decades, so it's no surprise that it's also appealing to cloud planners. A cursory online search reveals a host of scholarly articles on the topic -- an indication that users modeling cloud applications may be on the leading edge. To stay on track with your own modeling plans, understand the principal performance modeling approaches, gather the right data to feed your model and benchmark models against reality to ensure your results represent the real world.
Performance modeling helps developers and architects understand how an application will perform under load by simulating the application's performance in some way. How the simulation works varies, from being tightly coupled to application code to relating only to deployable components. Reports from users suggest that most make mistakes at first by selecting an approach that is either unsuitable or unsustainable.
Choosing the right application performance model
Tightly coupled performance modeling is like DevOps: it has to be integrated with development from the start and updated every time the code changes. This type of performance modeling can provide planners and architects with very accurate information about how changes in code, demand or deployment model will affect performance. The problem is that it involves a considerable amount of work, and many users can't justify the effort. In addition, it's useful only for code developed internally, not for applications purchased from a third party (unless application performance modeling is incorporated into that party's own development).
Where you have control of the code or inherit application performance modeling hooks inserted by the developer, and where application performance must be managed rigorously (transactional applications in the financial industry, for example), it may be worthwhile to consider tightly coupled modeling tools. For these, look either to your application development tool provider (HP, IBM and Microsoft) or to well-known third parties like Tibco. Microsoft and IBM (Research) also offer a list of useful articles on this topic.
For most users of packaged software and many developers who can't justify the time needed to develop a tightly integrated application performance modeling capability, a component-level model may be the best approach. Component modeling is easily applied to software that's divided into deployable components, including either SOA (ESB or CORBA) or RESTful coupling of elements. In a component model, each component is described in terms of its response characteristics, and the component relationships are also described to provide a mapping. This map is then used to drive a modeling run.
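The component map described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical component names and service times, not a real modeling tool: each component is described by its response characteristics, and the relationship map is walked to produce a projected end-to-end figure for a modeling run.

```python
# Hypothetical component-level performance model. Each entry describes a
# component's response characteristics (here, a fixed service time in ms)
# and the downstream components it calls per request.
COMPONENT_MAP = {
    "web_front_end": (5.0, ["order_service"]),
    "order_service": (12.0, ["inventory_db", "payment_gateway"]),
    "inventory_db": (8.0, []),
    "payment_gateway": (30.0, []),
}

def projected_response_ms(component: str) -> float:
    """Walk the component map and sum service times along the call chain."""
    service_time, downstream = COMPONENT_MAP[component]
    return service_time + sum(projected_response_ms(d) for d in downstream)

print(projected_response_ms("web_front_end"))  # 5 + 12 + 8 + 30 = 55.0
```

Real tools replace the fixed service times with measured distributions or analytical expressions, but the map-then-run structure is the same.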
Three approaches to application performance modeling
Application performance modeling is based on three approaches: discrete event, analytical and statistical. Discrete event modeling is primarily useful in tightly coupled modeling applications. Analytical models incorporate an expression or code structure that simulates the actual interior behavior of a component; statistical models simply graph the range of performance observed based on internal and external (load, capacity) variables. Analytical tools include JMT and PDQ; statistical performance modeling tools include SPSS (from IBM, the most popular of the modeling tools), Tibco's SpotFire (also popular), Minitab and Statsoft.
To use either analytical or statistical modeling, it's necessary to measure performance under a range of conditions. The most common measurement is offered load or demand, and this is a good place to begin, provided there are no significant performance differences among the types of transactions an application supports. If there are, then it will be necessary to measure performance by transaction type to achieve accurate results. Analytical modeling can also be augmented with knowledge of the algorithms used in each component, because its goal is to represent component behavior based on functionality rather than simply drawing data plots.
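A statistical model in its simplest form is just a curve fit over such measurements. The sketch below (load levels and response times are hypothetical) fits a least-squares line to response time versus offered load, separately for each transaction type, then projects performance at a load that wasn't tested:

```python
# Minimal statistical-model sketch: fit y = a + b*x by ordinary least squares.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Response time (ms) measured at several offered loads (req/s), kept
# separate per transaction type because their costs differ.
observations = {
    "query":  ([10, 20, 40, 80], [12.0, 14.0, 18.0, 26.0]),
    "update": ([10, 20, 40, 80], [25.0, 31.0, 43.0, 67.0]),
}

for txn, (load, resp) in observations.items():
    a, b = fit_line(load, resp)
    print(f"{txn}: projected at 100 req/s = {a + b * 100:.1f} ms")
```

Packages like SPSS or Minitab do the same job with far richer model families (nonlinear fits, confidence intervals), but the principle is identical: observed data in, a performance projection out.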
However, most users likely will find that a successive-approximation approach to setting up a performance model is best for them. First, establish a component map that assumes each component has constant performance implications and then test its results against the real application in a range of conditions. The model should allow you to look at the projected performance at various points, which then can be compared to observed application performance. Where the two are fairly well correlated, no further work is needed. But where there are discrepancies, it will be necessary to evaluate the component's logic and perhaps model discrete logic paths as "sub-components." Users experienced with building models may be able to short-cut this approach by developing a model with some sub-component structure based on application workflows and transaction handling.
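The validation step of this successive-approximation loop is straightforward to automate. In this sketch the component names, timings and the 15% tolerance are all hypothetical; the point is simply to flag components whose projected and observed performance diverge enough to warrant sub-component modeling:

```python
# Compare a model's projected per-component response times against observed
# values; components off by more than the tolerance are candidates for
# "sub-component" modeling of their discrete logic paths.
TOLERANCE = 0.15  # flag components off by more than 15% (arbitrary choice)

projected = {"web_front_end": 5.0, "order_service": 12.0, "inventory_db": 8.0}
observed  = {"web_front_end": 5.2, "order_service": 19.5, "inventory_db": 8.1}

def components_needing_submodels(projected, observed, tolerance=TOLERANCE):
    flagged = []
    for name, proj in projected.items():
        error = abs(observed[name] - proj) / proj
        if error > tolerance:
            flagged.append(name)
    return flagged

print(components_needing_submodels(projected, observed))  # ['order_service']
```

Here only the component with a large discrepancy is flagged; the well-correlated ones need no further work, exactly as described above.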
One advantage of tightly coupled modeling is that it disciplines an organization to change its model when the application changes. With component-level modeling, on the other hand, it's easy to forget that, which means it's wise to include a validation of the application performance model in the application lifecycle management (ALM) goal set.
For cloud apps, better tools may be coming
Generally, ALM will include a load test as part of the lifecycle progression, and this is the best point at which to include application performance modeling validation. First, ask whether the test data suite has been changed; if it has, the model may have to be updated to reflect a different expectation of workload volume and type. Second, run the performance model before the load test to obtain the expected performance for the test data suite. Compare this with the results of the actual load test, which of course must now collect performance data at the component level to serve as the reference.
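Both checks can be wired into a test pipeline with very little code. The sketch below is hypothetical in its inputs and its 10% tolerance: it fingerprints the test data suite to detect a change that would invalidate the model's workload assumptions, then compares the model's expected figure with the measured load-test result.

```python
import hashlib

def suite_changed(old_suite: bytes, new_suite: bytes) -> bool:
    """Detect a changed test data suite via content hashes; a change means
    the model's workload assumptions may need updating before validation."""
    return hashlib.sha256(old_suite).digest() != hashlib.sha256(new_suite).digest()

def validate(expected_ms: float, measured_ms: float, tolerance: float = 0.10) -> bool:
    """Pass if the measured load-test result is within tolerance of the
    model's expectation."""
    return abs(measured_ms - expected_ms) / expected_ms <= tolerance

# Step 1: has the test data suite changed since the model was built?
if suite_changed(b"transaction mix v1", b"transaction mix v2"):
    print("Test data suite changed: review the model's workload assumptions.")

# Step 2: compare model expectation with the load-test measurement.
print(validate(expected_ms=55.0, measured_ms=58.0))  # within 10% -> True
```

A failed validation then triggers the successive-approximation work described earlier rather than silently letting the model drift from reality.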
For cloud applications, you should expect to deploy some application performance monitoring tools and network monitoring tools in order to both populate and validate performance models. However, where third-party applications are used or where you have a single application middleware provider, you should first look for monitoring tools from them. But be prepared to extend your monitoring into areas like traffic probes if you want to obtain the best results.
Users are often disappointed in the state of application performance modeling tools -- particularly for cloud modeling -- because they require considerable skill to use. But because the cloud raises the profile of the issue, keep an eye on vendors; better tools may come along.