Many enterprises either already have a private cloud, plan to build one or have at least considered an in-house cloud as an option. If you're on the private cloud bandwagon but remain unfamiliar with the tools that make it work, read on.
This tutorial looks at private cloud computing tools that unleash the power of automation and orchestration, monitoring and service catalogs. While these features are important, they are not yet fully understood in the context of virtualized, or private cloud, environments.
Enabling orchestration and automation
Although automation and orchestration are often used interchangeably, there is a subtle difference between the two terms. Automation is usually associated with a single task, and orchestration is associated with a process that involves workflow around several automated tasks. If you're looking to better understand the value and importance of automation (and orchestration) in a private cloud environment, one of the best ways is to contrast server provisioning in a traditional data center with virtual server provisioning in a virtualized environment.
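The distinction can be sketched in a few lines of code. This is a hypothetical, simulated example (the function and resource names are invented for illustration, not drawn from any vendor's API): each small function is an *automation* that performs one task, while the orchestration function is the workflow that coordinates them.

```python
# Minimal sketch of automation vs. orchestration (simulated, no real APIs).
# Each function below is an "automation": a single automated task.

def provision_vm(name: str, cpus: int, memory_gb: int) -> dict:
    """Automated task: allocate a virtual machine (simulated)."""
    return {"name": name, "cpus": cpus, "memory_gb": memory_gb}

def allocate_storage(vm: dict, size_gb: int) -> dict:
    """Automated task: attach a storage volume (simulated)."""
    vm["storage_gb"] = size_gb
    return vm

def configure_network(vm: dict, vlan: int) -> dict:
    """Automated task: place the VM on a VLAN (simulated)."""
    vm["vlan"] = vlan
    return vm

def orchestrate_server_request(name: str) -> dict:
    """Orchestration: a workflow that coordinates several automated tasks."""
    vm = provision_vm(name, cpus=2, memory_gb=4)
    vm = allocate_storage(vm, size_gb=100)
    vm = configure_network(vm, vlan=42)
    return vm

server = orchestrate_server_request("app-01")
```

In a traditional data center, each of those three steps would be a separate manual request to a separate team; here the orchestration layer runs them as one coordinated process.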
Server virtualization can reduce the time it takes to provision servers, but it does not decrease the time spent on installation and configuration. IT staffs that rely on labor-intensive management tools and manual scripts cannot keep up with the continuous stream of configuration changes needed to maintain access and security in step with a private cloud's dynamic provisioning and virtual machine (VM) movement. This is why automating these processes is an essential element of a private cloud.
Orchestration is key; it is the automated coordination and management of servers, storage, security and networks to deliver services to users. An orchestration function resides between cloud services and the cloud infrastructure. It is based on policies that define relationships among users, servers, storage, security and networks. Policies are automatically translated in real time into device configurations that dynamically provision whatever resources are necessary. For example, the orchestration tool communicates the CPU and memory requirements for provisioning a virtual server to the hypervisor management system.
All of these functions -- allocating CPUs for a virtual server; allocating storage; setting up routers, firewalls or switches to support the newly provisioned virtual server -- are automated. The orchestration function coordinates all of the automated configuration changes across all systems and hardware; it is a single point of control. Without automation and orchestration tools, IT would have to manually re-provision and optimize resources every time the smallest change in the environment is made.
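The policy-translation idea above can be illustrated with a toy sketch. The policy fields and layer names here are hypothetical, chosen only to show the fan-out from one policy to per-layer device configurations; real orchestration tools work against actual device APIs.

```python
# Toy sketch: fan a single service policy out into per-layer configurations.
# All field names are illustrative, not taken from any real product.
policy = {
    "service": "web-tier",
    "cpu": 4,
    "memory_gb": 8,
    "storage_gb": 200,
    "allowed_ports": [80, 443],
}

def translate_policy(policy: dict) -> dict:
    """Translate one policy into configs for hypervisor, storage and firewall."""
    return {
        "hypervisor": {"cpus": policy["cpu"], "memory_gb": policy["memory_gb"]},
        "storage": {"volume_gb": policy["storage_gb"]},
        "firewall": {"allow": policy["allowed_ports"]},
    }

configs = translate_policy(policy)
```

The orchestration function's value is exactly this single point of control: one policy change ripples out to every affected layer without a human touching each device.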
Automation and orchestration, however, will not solve all your problems. They may help get changes to the infrastructure completed quickly, but those changes have to be recorded almost simultaneously so the orchestration function has the up-to-date configuration data needed to make decisions like allocating CPUs and storage. The rapidity of change stemming from automation and self-service in private cloud environments requires a more efficient approach to configuration management and change management -- processes that live inside the IT organization. Tools like configuration management databases (CMDBs) are available to record these changes in real time.
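A CMDB's role in this loop can be reduced to two ideas: an append-only change history and an always-current view of configuration items. The sketch below is a hypothetical toy, not a real CMDB product, but it shows why recording changes as they happen keeps the orchestration layer's decisions grounded in current state.

```python
# Toy CMDB sketch: record every change and keep current state queryable.
import datetime

class SimpleCMDB:
    """Append-only change log plus a current-state view of config items."""

    def __init__(self):
        self.log = []    # full change history, in order
        self.state = {}  # current attributes of each configuration item

    def record_change(self, item: str, attrs: dict) -> None:
        """Record a change the moment it happens, then update current state."""
        self.log.append((datetime.datetime.now(), item, dict(attrs)))
        self.state.setdefault(item, {}).update(attrs)

cmdb = SimpleCMDB()
cmdb.record_change("vm-app-01", {"cpus": 2})
cmdb.record_change("vm-app-01", {"cpus": 4})  # e.g. an automated scale-up
```

Because the log and the state are updated together, an orchestration function querying `cmdb.state` sees the scale-up immediately rather than after a nightly reconciliation.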
Automation and orchestration tools for the private cloud
What tools are available to handle automation and orchestration? LineSider Technologies (recently acquired by Cisco) and CA Technologies are two of the several companies that offer automation tools.
LineSider OverDrive focuses on networks and automates the provisioning and deployment of network services in cloud environments. When resources are moved and/or changed, policy-driven OverDrive modifies and changes the underlying network infrastructure. OverDrive sits between an LDAP directory, a hypervisor manager and device controllers. It manages routing and virtual private networks (VPNs), switching and VLANs, and firewalls and their access control lists.
CA Technologies offers the CA Automation Suite for Data Centers. This suite includes CA Server Automation, CA Virtual Automation, CA Process Automation and CA Configuration Automation. CA Automation Suite for Data Centers is an attempt by CA Technologies to automate server provisioning, processes and configuration management. It provides support for Windows, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, AIX, Solaris and HP-UX, as well as Hyper-V and VMware ESX.
There are other automation tools from vendors, such as IBM with its Tivoli Service Automation Manager and HP with its Cloud Service Automation offering. Of these tools and many others, LineSider OverDrive comes closest to what an automation tool should be.
Monitoring your private cloud performance
Monitoring ensures that applications meet specific performance targets. It'll also help answer questions like:
- What is the response time from storage devices?
- What is the performance of an application?
- How is my compute and storage bandwidth being used?
Virtualization, however, has added a layer of abstraction to traditional monitoring; we can no longer measure performance just by looking at physical devices. As network virtualization is adopted, network operations teams have struggled to look past the abstraction and determine what is actually happening at the physical level. New performance monitoring tools provide insight into the infrastructure for both physical and virtual elements, allowing operations staff to make better decisions about how to configure and allocate workloads in virtual environments.
If you look at the evolution of IT -- from mainframes with shared resources to client/server with dedicated resources and now back to shared resources on low-cost hardware -- we have systems that behave differently. We have dependencies in virtualized environments that did not exist in client/server. The way that we monitor and manage is changing because we no longer have clear lines of dependencies. Interactions have grown much more complex than those in the client/server world.
So how do application performance tools work? They monitor memory utilization, CPU utilization and performance metrics. The application is associated with the guest operating systems; the guest operating system is associated with the hypervisor running on a physical server. The associations continue with a network port down to the storage resources. Monitoring provides the linkage all the way through the infrastructure to the application.
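That chain of associations can be modelled as a simple lookup that a monitoring tool walks from the application down to storage. The component names below are invented for illustration; real tools discover this topology automatically rather than hard-coding it.

```python
# Sketch: the dependency chain a monitoring tool walks, from application
# through guest OS, hypervisor and host, down to the storage resource.
# Component names are hypothetical.
topology = {
    "billing-app": "guest-os-7",
    "guest-os-7": "hypervisor-2",
    "hypervisor-2": "host-b",
    "host-b": "datastore-3",
}

def dependency_chain(component: str, topology: dict) -> list:
    """Follow each component to the resource beneath it until we hit bottom."""
    chain = [component]
    while chain[-1] in topology:
        chain.append(topology[chain[-1]])
    return chain

chain = dependency_chain("billing-app", topology)
```

When an application slows down, this linkage is what lets an operator ask whether the cause is the guest OS, the hypervisor, the host or the data store, instead of guessing at the physical layer alone.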
SolarWinds provides one of the most complete sets of monitoring tools on the market. It provides monitoring for network, storage, application, server and virtualization performance management. This set of tools monitors the cloud stack from top to bottom, down to the devices themselves.
One particular SolarWinds product, the Hyper9 Virtualization Manager, provides visibility into the health of CPUs, memory and networks in a virtual environment. It allows guest virtual servers to be mapped from the application all the way down to the data stores. If, for example, you add a fourth virtual server and suffer a sudden performance drop, you can track back and look at the disk resources, what I/O resources are being used and the host that the servers are running on. The potential is there to very quickly identify any bottlenecks and make immediate changes.
Another company to consider is AccelOps; its monitoring tools capture and analyze information about the network infrastructure. IT staff can use AccelOps to access status, events, trends, and configuration data about networks, network devices, systems, applications and virtual environments. Alerts can also be set up to send out alarms on performance or memory allocation problems. And if you want to investigate a security issue, AccelOps offers a recap of any recent changes made to a virtual server. AccelOps deployment involves installing the AccelOps application as a VM on a VMware ESX platform.
Nimsoft also provides monitoring software for private clouds. Its software monitors servers, network devices, databases and applications, along with virtualized environments like ESX, vSphere, Hyper-V and Citrix XenServer. Nimsoft works with cloud providers such as Rackspace, Amazon, Salesforce.com and Google; it also integrates with CMDBs and service desks.
Service catalogs in the cloud
Service catalogs are now being viewed as a core part of cloud computing. A service catalog contains a list of automated services that are available via a self-service portal. It exists to demonstrate service availability and trigger steps in the provisioning of many types of enterprise services. A service catalog is typically a front-end Web-based listing of services, products and pricing delivered by the back-office IT infrastructure.
For an organization to receive the full benefits of cloud, users must be able to request the services they need and IT must be able to respond to those requests quickly. The service catalog allows users to serve themselves by choosing from a menu of cloud service offerings. IT organizations that implement private clouds should provide a service catalog to establish standards, provide users with convenient online access to cloud services, and help orchestrate automation of services.
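At its core, a service catalog is a menu of standardized offerings that a self-service request resolves against. The sketch below is a hypothetical toy (offering names, sizes and prices are invented) showing how a catalog enforces standards: a user can only request what IT has defined.

```python
# Toy service catalog sketch: a menu of standard offerings behind a
# self-service portal. All entries and prices are illustrative.
catalog = {
    "small-linux-vm":  {"cpus": 1, "memory_gb": 2,  "monthly_price": 25},
    "medium-linux-vm": {"cpus": 2, "memory_gb": 4,  "monthly_price": 45},
    "large-db-server": {"cpus": 8, "memory_gb": 32, "monthly_price": 220},
}

def request_service(offering: str) -> dict:
    """Self-service request: resolve an offering against the catalog and
    hand the standard specification to provisioning (here, return it)."""
    if offering not in catalog:
        raise KeyError(f"'{offering}' is not in the service catalog")
    return {"offering": offering, **catalog[offering]}

order = request_service("small-linux-vm")
```

Because requests can only reference catalog entries, the catalog simultaneously gives users convenience and gives IT a standard specification it can hand straight to the orchestration layer.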
Part of the service catalog design challenge is to ensure that the catalog is well integrated with the necessary components required for a seamless workflow: service desk, CMDBs and provisioning and change management tools.
newScale is one of several companies that provide service catalog software; its RequestCenter product gives users an easy-to-use service catalog. HP has introduced an HP Service Manager Service Catalog that is integrated with a number of HP products. BMC Cloud LifeCycle Management includes a policy-driven service catalog, and CA Oblicore Guarantee provides the capability to create service catalogs.
Tips to enhance your private cloud
Too many "private clouds" are being created today without automation, without sufficient monitoring and without service catalogs. Those private cloud implementations will have a hard time realizing all the benefits of cloud computing.
There are many companies, big and small, supplying tools for each of these important functions. Some, such as LineSider and Oblicore, have been acquired by larger companies such as Cisco and CA Technologies, respectively, and integrated with other products to form more complete cloud management suites. Most of these tools are so new and untested in production environments that you should take a close look at how the vendors' reference customers use them. If a vendor doesn't have reference customers, beware.
Using tools from acquired companies may lock you in to the larger companies that purchased them. This happens frequently when acquisitions occur -- one company's management tools get buried inside a larger set of products and are no longer marketed and sold separately.
Of the three functions discussed earlier, monitoring tools are the most likely to be insufficient in virtual environments. The tendency is to try to use whatever monitoring tools you used in the traditional data center, but these will not provide sufficient, if any, monitoring of traffic between virtual components. Local communication between virtual servers can go largely unmonitored; traffic that runs through a virtual switch is practically invisible because it never hits the wire. To ensure the optimal private cloud experience, virtual traffic between VMs needs to be monitored.
ABOUT THE AUTHOR
Bill Claybrook is a marketing research analyst with over 35 years of experience in the computer industry with the last dozen years in Linux, Open Source, and Cloud Computing. Bill was Research Director, Linux and Open Source, at The Aberdeen Group in Boston and a competitive analyst/Linux product-marketing manager at Novell. He is currently President of New River Marketing Research and Directions on Red Hat. He holds a Ph.D. in Computer Science.
This was first published in April 2011