Cloud computing is evolving into a new, more mature phase. The focus of cloud planning and deployment has shifted from remote hosting of inefficient applications to the support of the cloud as a kind of virtual application platform for developers to exploit. At the same time, businesses are finding that some clouds support specific missions better than others.
The collision of these two trends demonstrates the need to better understand how application structures and deployment choices affect the management of cloud workloads in multicloud configurations.
In modern terminology, application units hosted on premises or in the cloud are workloads. And while the cloud has changed the notion of workloads, the implications of those changes for workload management, particularly in multicloud deployments, haven't been fully appreciated. To manage multicloud workloads, you have to think about them differently and then plan and execute your management strategy accordingly.
In the cloud, everything should center on the movement of information. The resources committed to cloud workloads are tied together by workflows, the paths along which information moves. This means you should start multicloud workload management by looking at workloads and workflows as a unit: the network, the hosting and any web-service features, such as database services, that the application might use. Keep this deployment-unit notion in mind as you plan for multicloud operations.
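The deployment-unit idea can be sketched as a simple data structure that groups the workload (hosting) and workflow (network and web-service) elements so they are planned, deployed and monitored together. This is a minimal illustration; the field and component names are assumptions, not any vendor's schema:

```python
# Minimal sketch of a "deployment unit": workloads plus the workflows
# and web services that connect them, managed as one object.
# All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DeploymentUnit:
    name: str
    workloads: list                                  # application components to host
    workflows: list                                  # information flows connecting them
    web_services: list = field(default_factory=list) # e.g., managed database services

    def elements(self):
        """Everything that must be deployed and monitored together."""
        return self.workloads + self.workflows + self.web_services

unit = DeploymentUnit(
    name="order-entry",
    workloads=["web-tier", "app-tier"],
    workflows=["web->app", "app->db"],
    web_services=["managed-db"],
)
```

Treating `unit.elements()` as the thing you deploy, cost out and monitor, rather than each piece separately, is the core of the deployment-unit approach described in this article.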
The unit structure of cloud workloads
Effectively managing multicloud deployment units means taking care of three key elements:
- the planning and cost-analysis component, which manages costs and helps you decide where things should be run;
- the deployment-automation piece, which simplifies deployment and redeployment of applications; and
- the cloud-monitoring aspect, which watches for problems that affect any of the clouds or the workflows that move among them.
The workload and workflow planning step starts with predicting and monitoring the cost and usage of cloud resources. Price your application needs on public clouds to pick the best fit, and then analyze how application changes affect your costs and your provider selections. The tools available for this divide into one group that analyzes cloud pricing for applications across multiple cloud vendors and a second group that monitors application performance in the cloud. Both are readily available on a per-cloud-provider basis -- from Amazon, IBM and Microsoft, for example. Cloud software tools from Cisco, Dell, Hewlett Packard Enterprise, IBM, Oracle and Microsoft also include per-cloud analysis.
Multicloud users can piece together their information from cloud-specific tools, but it's probably best to look at this more holistically.
For multicloud cost analysis and even dynamic cloud cost management, some of the key tools are CloudAware, Cloudyn and RightScale. Cirba has a suite of tools for cloud, multicloud and hybrid cloud. The key to selecting a multicloud cost management tool for your cloud workloads is to pick one that works with all your providers and that offers both planning and support for dynamic costs.
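The kind of comparison these tools perform can be sketched in a few lines. The sketch below uses made-up providers and rates (they are assumptions, not real prices) and combines hosting cost with data-movement cost, since a workload that looks cheapest on instance price alone may not be cheapest once its workflows are included:

```python
# Hypothetical multicloud cost comparison. Provider names and all
# rates below are illustrative assumptions, not real prices.
HOURLY_RATES = {          # $/hour for a comparable instance
    "provider_a": 0.10,
    "provider_b": 0.12,
    "provider_c": 0.09,
}
EGRESS_RATES = {          # $/GB moved out of the cloud
    "provider_a": 0.09,
    "provider_b": 0.05,
    "provider_c": 0.12,
}

def monthly_cost(provider, hours=730, egress_gb=500):
    """Combine hosting (workload) and data-movement (workflow) costs."""
    return (HOURLY_RATES[provider] * hours
            + EGRESS_RATES[provider] * egress_gb)

best = min(HOURLY_RATES, key=monthly_cost)
```

Note that `provider_c` has the lowest hourly rate but is not the best fit for this workload once egress is counted; that is exactly why cost analysis should cover the whole deployment unit, not just the hosting.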
The second category of tools deploys and sustains applications in the cloud, a function usually described today as DevOps. Public cloud providers offer their own DevOps tools, but for multicloud, you'll normally need a single, overall DevOps capability. Some DevOps tools help you manage scripts to describe deployment and redeployment steps -- the imperative model -- while others define states that represent the correct operation and generate the necessary commands to maintain those states -- the declarative approach.
You'll have options with cloud automation tools. Chef is the most popular imperative tool today, and Puppet the most widely used declarative one. If you have a strong IT operations team that already uses scripts, then Chef is easily adopted. Otherwise, consider Puppet. Alternative tools such as Ansible are worth a look if you're not already firmly committed to operations automation.
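The difference between the two models can be illustrated with a short sketch. This is not Chef or Puppet syntax, just a hypothetical illustration of imperative steps versus declarative state convergence:

```python
# Illustrative contrast between automation models.
# (Not Chef or Puppet syntax; names are assumptions.)

# Imperative model: a script lists the exact steps, run in order,
# whether or not each one is actually needed.
def deploy_imperative(steps, run):
    for step in steps:
        run(step)

# Declarative model: you state the desired end state; the tool
# compares it with the observed state and issues only the commands
# needed to converge.
def deploy_declarative(desired, observed, make_command):
    return [make_command(key, value)
            for key, value in desired.items()
            if observed.get(key) != value]

executed = []
deploy_imperative(["install nginx", "start nginx"], executed.append)

commands = deploy_declarative(
    desired={"nginx": "running", "app": "deployed"},
    observed={"nginx": "running"},   # nginx has already converged
    make_command=lambda key, value: f"set {key} -> {value}",
)
```

In the imperative case, both steps run unconditionally; in the declarative case, only the state that has drifted (`app`) generates a command. That redeployment-on-drift behavior is why declarative tools suit teams without an established scripting practice.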
Monitoring's essential role
While multicloud tools are useful for cost management, they're absolutely critical for monitoring multicloud deployments, because many applications will deploy across multiple clouds or burst from one into another.
Cloud and network vendors such as Cisco offer multicloud monitoring. In addition, some cost-management platforms, such as RightScale, provide assistance in cloud planning, failure reduction and cost management that cross over into monitoring or supplement its use.
Specialized tools for performance monitoring of cloud workloads include workload-focused tools, such as ManageEngine, and workflow-focused tools, such as Boundary. Unless you want to manually integrate cloud workload and cloud workflow information to get application status, you should consider products like these that can -- together or separately -- help you cross the bridge to what's thought of as deployment-unit planning and operations.
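What that integration amounts to can be sketched simply: fold per-workload and per-workflow health into a single deployment-unit status, so a broken network path degrades the unit just as a failed component would. Component and link names below are illustrative assumptions:

```python
# Hypothetical sketch: combine workload (hosting) and workflow
# (network) health checks into one deployment-unit status.
# Names and the True/False health model are illustrative assumptions.
def unit_status(workloads, workflows):
    """A deployment unit is healthy only if every hosted component
    and every connection between them is healthy."""
    checks = {**workloads, **workflows}
    problems = [name for name, healthy in checks.items() if not healthy]
    return ("healthy", []) if not problems else ("degraded", problems)

status, problems = unit_status(
    workloads={"web-tier": True, "db-service": True},
    workflows={"web->db link": False},   # the network path is down
)
```

Here both workloads report healthy, yet the unit is degraded because a workflow is down; monitoring workloads alone would have missed the problem.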
Workflows stitch workloads together, and it is this stitching, the movement of information, that actually delivers application results. Cloud networking is the untold story of workload management, and cloud connectivity is difficult to plan and debug. Traditional cloud and network management and monitoring are widely supported through network equipment vendors (Cisco, Juniper Networks, etc.), through VPN providers and through independent tools, such as those from NetScout.
Identifying a network workflow problem isn't as good as preventing one. Here, some simple rules can be helpful.
First, use special VPN cloud-connect services to link your multicloud environment to your organization's VPN. The internet doesn't offer the kind of service-level agreements or quality of service guarantees that VPNs do, and you can't manage a multicloud arrangement without some assurance of how the network resources will perform. The best situation is where all your multicloud providers will attach directly to your corporate VPN.
Second, think in terms of deployment units that include both the hosting (workload) and network (workflow) elements to reduce both effort and errors. The ideal DevOps -- or policy-management -- tool is one that has specific support for each provider in your multicloud environment and lets you define your deployment units as a single element to be deployed. If your current tool supports these capabilities, there's no need to change. If not, look to see which tools are best supported by your primary cloud provider.
Third, set strict boundaries on where deployment units can be hosted. Base these limits on price and performance, and enforce them with the policy management or DevOps tools you employ.
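A placement-boundary check of this kind can be sketched as a simple gate evaluated before deployment. The policy fields, provider names and thresholds below are illustrative assumptions, not any tool's actual policy format:

```python
# Hypothetical placement-policy sketch: a candidate hosting location
# must satisfy every boundary before a deployment unit lands there.
# Field names, providers and thresholds are illustrative assumptions.
POLICY = {
    "allowed_providers": {"provider_a", "provider_b"},  # approved clouds
    "max_hourly_cost": 0.15,                            # price boundary, $/hour
    "max_latency_ms": 40,                               # performance boundary
}

def placement_allowed(provider, hourly_cost, latency_ms, policy=POLICY):
    """Return True only if the candidate placement passes every rule."""
    return (provider in policy["allowed_providers"]
            and hourly_cost <= policy["max_hourly_cost"]
            and latency_ms <= policy["max_latency_ms"])
```

In practice, the same check would run inside your DevOps or policy-management tool at deployment and redeployment time, so a workload can never drift outside its approved price and performance boundaries.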
No matter which tools you pick for multicloud workload management, it's the concept of workflows and deployment units that will ultimately determine the success of your approach. Every cloud decision is both a hosting and a connection decision, and getting both right is the key to effective workload management in multicloud deployments.