With the growing buzz around the cloud, many organizations are evaluating use cases for moving applications to the cloud and analyzing which makes the most sense. But simply moving your applications to the cloud is not necessarily the best goal, said Michelle Cantara, research vice president at Gartner. "Producing specific outcomes when you move to the cloud is the goal. While cloud does reduce capital costs, there is no empirical evidence that it will reduce the total cost of ownership in the long term."
Better cloud use cases include accelerating time to business outcomes, reducing capital expenses, or reinventing business processes to increase customer satisfaction and grow revenue, she said. However, the link between a cloud service provider's performance and business outcomes is murky. While Gartner research has found that 66% of organizations say they manage cloud service providers based on business outcomes, only 27% tie that management to actual service-level agreements (SLAs).
Getting clear on what cloud means
An internal private cloud deploys a cloud-based infrastructure in your own data center. An external private cloud is a hosted model, where an external service provider hosts the cloud service specifically for you. You should isolate your environments from each other, however, because of privacy and security concerns, Cantara said.
A community cloud leverages a restricted community, such as partners or suppliers in a multi-enterprise business process network. Public clouds are what people often mean when they talk about the cloud: external cloud environments such as Amazon Web Services, Rackspace and Azure for IaaS, Expedia for travel, and Salesforce for sales automation.
Applications for customer relationship management are more likely to be used in public clouds. In contrast, applications such as those for enterprise resource planning, which integrate more back-end data and carry higher security requirements, are more likely to run in a private cloud. Cantara said that organizations in emerging countries are more likely to use public rather than private clouds because they lack the infrastructure for private clouds.
But no cloud can stand alone. Few of them can live completely separate from the corporate center. As a result, Cantara expects that hybrid clouds will become more common.
Cloud maturity extends opportunities
Integrating applications across clouds initially presented too many challenges for many kinds of enterprise applications. The main focus for using the cloud was on developing and testing Web applications rather than commercial, off-the-shelf applications, said Robert Green, principal cloud strategist at Enfinitum Consulting.
From a cost perspective, moving development and testing systems is smart because the servers and the meter can be turned off when developers stop working. "You don't need to run the servers when developers are asleep at home," Green said. Make sure you use an automation control panel to reduce the time and effort required to spin up and spin down cloud instances.
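The spin-up/spin-down logic Green describes amounts to a simple schedule check. As a minimal sketch, the working hours below are hypothetical, and a real control panel or cron job would call a cloud provider's API to actually stop and start the instances:

```python
from datetime import time

# Hypothetical dev/test working window: weekdays, 07:00 to 19:00.
WORK_DAYS = {0, 1, 2, 3, 4}                # Monday through Friday
WORK_START, WORK_END = time(7, 0), time(19, 0)

def should_run(weekday: int, now: time) -> bool:
    """Return True if dev/test instances should be powered on right now."""
    return weekday in WORK_DAYS and WORK_START <= now < WORK_END

# An automation control panel or scheduler would evaluate this periodically
# and stop or start instances, so the meter only runs while developers work.
```

The payoff is exactly the one Green names: the meter stops overnight and on weekends instead of running around the clock.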
"Now we are seeing a lot of folks migrating to Software as a Service from a services standpoint for applications such as those for productivity (Office 365) and for offline backup and archiving (Dropbox), and for the growth of development and testing platforms," Green explained. Many organizations are adopting the cloud as a mainstream way to save money on internal and commercial off-the-shelf (COTS) applications. These have few requirements for tying into internal systems and minimal requirements for connecting to data centers and mainframes.
But with the improvement of the underlying infrastructure, Green sees more organizations moving three-tier applications to the cloud. Such applications involve integration to databases, Web servers and Web-based clients.
Get performance on budget
The big challenge is speed, Green said. Multi-tier applications tend to have high requirements for input/output operations per second (IOPS) and for RAM directly attached to virtual machines (VMs). As a result, performance is not always stable when these applications are moved to the cloud.
If you have 10 servers running internally, moving to 10 servers in the cloud might seem cheaper. But once you add in the full cost structure of moving the equivalent performance to the cloud, it's not always so clear that the cost will be less.
[IaaS providers] will continue to innovate to capture business because they cannot compete on price alone.
Robert Green, Enfinitum Consulting
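Green's server-count caution can be made concrete with back-of-the-envelope arithmetic. Every figure below is invented purely for illustration, not a real price from any provider:

```python
# All figures are illustrative, made-up monthly costs.
on_prem_monthly = 10 * 450.0            # 10 internal servers: amortized hardware, power, admin

# Matching the internal performance may require premium cloud options.
cloud_compute = 10 * 300.0              # 10 comparable instances, base price
cloud_premium_storage = 10 * 120.0      # SSD / provisioned-IOPS surcharge
cloud_egress = 200.0                    # data-transfer charges
cloud_monthly = cloud_compute + cloud_premium_storage + cloud_egress

# The instance price alone looks clearly cheaper than on-prem,
# but the full cost structure narrows the gap considerably.
print(cloud_compute, cloud_monthly, on_prem_monthly)
```

Comparing only `cloud_compute` against `on_prem_monthly` overstates the savings; the honest comparison uses `cloud_monthly`.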
To address this, many cloud providers are offering higher-performance IOPS capabilities as part of their offerings, such as all solid-state drive (SSD)-enabled platforms. This can hide some of the performance constraints and make cloud infrastructure suitable for a wider range of applications.
Green said it is important to focus on understanding the performance metrics. It is good practice to baseline your applications internally before moving to the cloud; that baseline can then be compared with the performance of the same applications on the cloud platform. That way, you get an apples-to-apples comparison.
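That apples-to-apples comparison can be mechanized with a small helper. In this sketch, the metric names, numbers, and 10% tolerance are all hypothetical, and every metric is treated as higher-is-worse (latency, utilization):

```python
def compare_baseline(baseline, cloud, tolerance=0.10):
    """Return metrics where the cloud run is more than `tolerance` worse
    than the internal baseline (all metrics here are higher-is-worse)."""
    regressions = {}
    for metric, base in baseline.items():
        moved = cloud.get(metric)
        if moved is not None and moved > base * (1 + tolerance):
            regressions[metric] = (base, moved)
    return regressions

# Hypothetical numbers from the same load test run internally, then in the cloud.
baseline = {"p95_latency_ms": 120.0, "cpu_util_pct": 55.0}
cloud = {"p95_latency_ms": 180.0, "cpu_util_pct": 58.0}
print(compare_baseline(baseline, cloud))  # only the latency metric regressed
```

Without the internal baseline captured first, there is nothing to feed into such a comparison, which is Green's point.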
Provision performance, not just VMs
Cloud providers are getting better at addressing performance considerations. They are starting to understand that performance matters and are offering more options. Green said, "The great thing about the cloud is that, as consumers start to complain and make adoption changes, providers are offering more options."
Major Infrastructure-as-a-Service providers like Amazon, Azure and Rackspace have been focused on competing on price. But, Green noted, "The only way to differentiate is that, as one comes out with something cool, faster or better, the others will do the same."
He expects to see new focuses, such as provisioned IOPS throughput or all-SSD instances. "They will continue to innovate to capture business because they cannot compete on price alone," Green said.
The economics can improve if you invest the time to right-size cloud configurations for what the application actually requires. For example, if your COTS or application server is specified to require 8 GB of RAM but uses only 2 GB in practice, you don't need to provision the cloud machine for 8 GB. You can probably get away with 2.5 GB. That reduction cuts the cost you would otherwise forklift into the cloud.
Develop appropriate metrics
The process of baselining the current state starts with identifying key application metrics. "If you don't have those metrics captured, you will not know if the cloud environment is a good fit for you," Green said.
Once you have identified metrics, look at what could add value if you move to the cloud. You want to capitalize on synergies: if you are going to migrate an application, you might as well improve it, too. As you think about your future state, consider how you can leverage capabilities such as autoscaling and right-sizing.
Also consider what you want things to look like from a budget and operational standpoint. Once you have outlined these in terms of your cloud use cases, you can approach migration as an informed exercise. You will know from Day 1 whether it will work.
"You see a lot of folks jump in without thoughtful analysis," Green said. The move saves money in the beginning, but then the applications run slower, so teams instantiate more VMs than they initially planned or keep them running longer. Furthermore, these migrations are often implemented without governance, so instances that should be decommissioned keep running.
With horizontal scaling, you can scale the application by adjusting only one component of a three-tier application, for example, the Web server, without touching the associated application or database servers. With vertical scaling, all components need to grow simultaneously.
Typical Web applications can scale horizontally, but it all depends on how the application is developed. COTS software, on the other hand, tends to scale vertically. These characteristics can vary, though, so to make an accurate assessment, it's important to examine exactly what you plan to move to the cloud.
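The distinction can be sketched on a hypothetical three-tier footprint: horizontal scaling changes one tier's node count, while vertical scaling grows every tier's machine size together. The node counts and RAM sizes below are made up:

```python
# Hypothetical three-tier footprint (all numbers are illustrative).
tiers = {
    "web": {"count": 2, "ram_gb": 4},
    "app": {"count": 2, "ram_gb": 8},
    "db":  {"count": 1, "ram_gb": 16},
}

def scale_horizontally(tiers, tier, extra_nodes):
    tiers[tier]["count"] += extra_nodes        # only one component changes

def scale_vertically(tiers, factor):
    for spec in tiers.values():                # all components grow simultaneously
        spec["ram_gb"] = int(spec["ram_gb"] * factor)

scale_horizontally(tiers, "web", 2)            # web: 2 -> 4 nodes; app and db untouched
```

Whether the real application tolerates the horizontal path depends on how it was developed, which is why the assessment step above matters.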
Once you have identified how the system scales, there is an opportunity to configure auto-scaling to be more efficient. This allows you to reduce the footprint to the bare minimum. Auto-scaling tools can then grow or shrink all required infrastructure on demand based on preconfigured triggers.
The next step lies in identifying cloud use cases that would allow you to maintain performance as demand grows. Triggers could include metrics such as database queue depth, log file size, or CPU and memory utilization.
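A threshold-trigger policy of the kind described can be sketched as follows. The metric names and limits are hypothetical examples of the triggers mentioned above:

```python
# Hypothetical trigger thresholds for the metrics mentioned above.
TRIGGERS = {
    "db_queue_depth": 100,    # pending database requests
    "log_file_mb": 500,       # log growth as a proxy for load
    "cpu_util_pct": 80,
    "mem_util_pct": 75,
}

def desired_action(metrics, current_nodes, min_nodes):
    """Scale out if any trigger fires; otherwise shrink toward the bare minimum."""
    if any(metrics.get(name, 0) > limit for name, limit in TRIGGERS.items()):
        return "scale_out"
    if current_nodes > min_nodes:
        return "scale_in"
    return "hold"
```

An auto-scaling tool evaluating this policy on a timer would grow or shrink the footprint on demand, keeping it at the bare minimum between load spikes.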
At the enterprise level, most organizations focus on the automation layer that sits on top of the IaaS layer, rather than scaling capabilities built into the PaaS layer. In many cases, they leverage tools like ServiceMesh, ScaleXtreme or RightScale, which make it easy to remotely control applications.
Today, the typical enterprise is more focused on these tools, rather than leveraging some of the auto-scaling capabilities being built into new platforms like Apache Stratos, Green said. "They have thousands of applications. The development effort to revamp that to a platform would be uneconomical. The bolt-ons let them get 80% of the value with only 20% of the cost."
The bolt-ons can also support migration across clouds on a case-by-case basis. A good migration tool lets you migrate from internal infrastructure to an IaaS platform. Then you can use a tool like ScaleXtreme to manage the blueprint once it is in the public IaaS cloud.
These tools can also help to simplify the development required to manage workloads across clouds from a single monitoring application. "It is really about building a portfolio of cloud assets, and then leveraging a tool to bring this together in terms of governance and access," Green said.