
Moving to a private cloud: Unveiling the myths

Some think moving from a virtualized data center to a private cloud requires just a little management software here and some automation there. It’s not quite that easy.


As IT managers embark on building a private cloud, they may have to confront past assumptions and practices. Some of the prevailing wisdom that has defined their data center infrastructure may not be valid in a cloud. And while vendors often profess that cloud automation and management are relatively turnkey, those on the front lines can attest otherwise.

A private cloud resides inside a company’s data center and offers control of IT resources. It automates workflow and eliminates manual configuration tasks, from shifting workloads to setting up firewall rules and configuring routers. Some thus refer to the cloud layer as a “manager of managers” of sorts that allows data center operators to move application workloads; reallocate memory, storage and other IT resources where they need the most oomph; and consolidate data and management in a single “location.”

For most data centers, though, a private cloud’s “nirvana state” of automated management requires retooling of existing infrastructure and processes. You can’t just slap cloud management software on top of existing servers, storage and networks and call it a private cloud; if you do, the infrastructure won’t work the way it’s supposed to. So let’s examine some general misconceptions about virtualization infrastructure and consider the changes required for a private cloud environment.

Myth 1: VM automation is simple

Businesses with mature virtualization practices have now taken the next logical step: building a private cloud so users can dial up virtual machines (VMs) without requiring a team of people to create and define them. Application owners should be able to dial up a virtual machine from the private cloud on demand just as they can with an external provider. That way, administrators won’t get bogged down with the day-to-day issues of the virtualization layer and will eliminate the possibility of human error when provisioning new VMs.

But creating, provisioning and managing virtual machines in the cloud differs from existing data center management practices. In a virtual infrastructure, existing change management routines dictate the process of creating new VMs, and these processes often strive to eliminate VM sprawl. In a cloud environment, however, the challenge is to develop a user-driven environment without exacerbating sprawl. Additionally, VM templates—which provide standardized hardware and software settings to create new VMs—likely include only a base OS, service packs and other patches. Given their fear of performance problems, most organizations have steered clear of installing full-blown applications and services into these templates.

In a private cloud, however, one goal is to allow end consumers to create new applications and services on demand. When end consumers log in to a cloud portal, they expect a service catalog to offer more than a couple of virtual applications that contain merely a base OS build. They want a complete service or application.

So you need to confront the assumptions and procedures of the past. In the case of templates, this means going “up the stack” and installing services and applications into VMs. You need to work closely with the stakeholders who traditionally manage these applications and gain approval for VM configuration. And before they can be included in a service catalog, VMs need considerable testing and verification. So you need proper controls to ensure that VM sprawl in a virtual infrastructure doesn’t become VM sprawl in the cloud. It’s going to take considerable balancing to empower end users with these new freedoms while also maintaining corporate standards.
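To make that balancing act concrete, here is a minimal Python sketch of the kind of admission control a cloud team might put in front of its service catalog. The template fields, status values and the publish_to_catalog helper are hypothetical illustrations of the process, not features of any particular cloud product:

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    DRAFT = "draft"        # built, but not yet tested
    TESTED = "tested"      # passed functional and performance testing
    APPROVED = "approved"  # signed off by the application stakeholders

@dataclass
class VmTemplate:
    name: str
    base_os: str
    applications: list = field(default_factory=list)  # the "up the stack" payload
    status: Status = Status.DRAFT

def publish_to_catalog(template: VmTemplate, catalog: list) -> None:
    """Expose only fully tested, approved templates to end consumers."""
    if template.status is not Status.APPROVED:
        raise ValueError(f"{template.name} has not cleared testing and sign-off")
    catalog.append(template)
```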

One way to allow that freedom is to offer pre-packaged services that end consumers can use off the shelf without the need for excessive tweaking and customization. You can also simplify the configuration and provisioning process by creating “classes” of virtual machines—such as platinum, gold, silver and bronze—in the automation engine. By designating such classes, IT managers can pre-establish various VM templates from which to choose, and users can get access to templates with a range of applications and services. A tiered approach helps control performance and consumption and creates realistic expectations for departmental units about their resource consumption. Tiered models limit the amount of CPU or memory and help set the stage for chargeback or showback policies.
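One minimal way to model those classes is a lookup table in the automation layer. The tier names echo the examples above, but the CPU, memory and rate figures below are invented for illustration:

```python
# Illustrative tier definitions; the caps and daily rates are assumptions,
# not values from any specific cloud platform.
VM_TIERS = {
    "platinum": {"vcpus": 8, "memory_gb": 32, "rate_per_day": 40.00},
    "gold":     {"vcpus": 4, "memory_gb": 16, "rate_per_day": 20.00},
    "silver":   {"vcpus": 2, "memory_gb": 8,  "rate_per_day": 10.00},
    "bronze":   {"vcpus": 1, "memory_gb": 4,  "rate_per_day": 5.00},
}

def template_for(tier: str) -> dict:
    """Return the pre-approved settings for a tier, so consumers never
    specify raw CPU or memory figures themselves."""
    try:
        return VM_TIERS[tier]
    except KeyError:
        raise ValueError(f"Unknown tier {tier!r}; choose from {sorted(VM_TIERS)}")
```

Because every VM inherits its limits from a tier, chargeback or showback reporting reduces to counting VM-days per tier.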

A VM-first policy: While the move to a cloud-based model doesn’t exclude physical servers, the more virtualized your existing infrastructure, the easier the transition to a cloud will be. If you haven’t done so already, adopt a “VM-first policy,” in which new services and applications are virtualized by default. Then, only when it’s demonstrated that these services cannot perform well virtualized, deploy them on dedicated physical servers.

Additionally, it may be time to rethink physical servers that were originally excluded from the early phases of virtualization. These physical boxes may have been performance-sensitive servers that were considered too tricky to virtualize. With the major advances in hypervisors, it’s time to push these systems out of the nest and into the virtualization layer.

Finally, it’s time to review the policies and change management routines that have been enforced on VMs. Are they still valid, or are they a throwback to how things were done in the physical world? Now that virtualization has proven its mettle with production workloads in the data center, a more aggressive policy is required.

Myth 2: Provisioning storage is simple

In a cloud-based environment, provisioning adequate storage is acknowledged as a central pain point. In a private cloud, storage is multi-tenant, but this model can create technology problems and IT turf wars.

Architectural differences: Server virtualization and enterprise-grade storage technologies have evolved on separate paths. As a result, attempts to marry the two and, thus, gain the benefits of a cloud environment are often a kludge. An enterprise running a decent-sized storage area network (SAN) appliance, for example, must have direct access to the appliance even to set up a storage pool to boot a single VM. Compare that with a standard virtualized server: a single image file with its virtual disk space already embedded in it, which assumes the user operates on a host capable of processing instructions (i.e., with CPU) and talking directly to onboard storage. The ideal host environment for virtualization is a massive single server with as many cores, as much RAM and as much direct-attached storage as possible. But that’s not how infrastructure built from individual servers and a SAN works. This is not to say that high-end, enterprise-grade storage and virtualization can’t work together, though.

So it’s important for private cloud architects to take a long, hard look at how storage interacts with the overall data center architecture. Chances are that even if your storage pool is best of breed and virtualized, it was set up for day-to-day needs and requires little hands-on management. When you link virtualized resources together into infrastructure-agnostic pools with broader access, your storage management interface isn’t going to “just work” with VMs seamlessly.

Storage access: In traditional virtualization environments, access to storage is strictly controlled, and virtualization administrators may engage in weekly or daily battles to get necessary storage. In a cloud, with a mere click of the mouse, end consumers can access many gigabytes or even terabytes of costly storage with less oversight than they had previously. So the challenge is twofold: shepherding cultural and technological change.

The job of the cloud administrator is to present storage in a way that is easy to consume yet also reinforces the concept that there is no free lunch. As end consumers select items from a service catalog, the best cloud automation software makes them aware of the cost of storage through chargeback processes.
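The arithmetic behind that awareness can be as simple as the sketch below; the per-gigabyte rates are invented for illustration, and real figures would come from the storage team and finance:

```python
# Hypothetical monthly rates per GB by storage class.
STORAGE_RATES_PER_GB_MONTH = {"ssd": 0.90, "fc": 0.45, "sata": 0.12}

def monthly_storage_cost(size_gb: int, storage_class: str) -> float:
    """Show the consumer the recurring bill before the storage is provisioned."""
    return size_gb * STORAGE_RATES_PER_GB_MONTH[storage_class]

# A 500 GB Fibre Channel volume at the assumed rate:
print(f"${monthly_storage_cost(500, 'fc'):.2f}/month")  # -> $225.00/month
```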

Today, a raft of storage management plug-ins for virtualization platforms such as VMware’s virtualization suite, vSphere, allows admins to provision new storage directly from VMware’s management console. These plug-ins save a huge amount of time and automate processes that, even with the help of scripting tools, are time sinks. Still, while plug-ins are a boon, storage teams may hesitate to grant virtualization administrators the rights to use them, since broadening access reduces their iron-fisted control over storage array consumption.
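The same kind of reporting can also be scripted against the vSphere API. As a rough sketch, the open source pyVmomi Python bindings can enumerate datastores and their free space, the read-only first step of any storage provisioning workflow (the hostname and credentials here are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; a lab setup might also need to
# relax certificate checking as shown here.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        free_gb = ds.summary.freeSpace / 1024**3
        print(f"{ds.summary.name}: {free_gb:.0f} GB free")
finally:
    Disconnect(si)
```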

Myth 3: Configuring networks is simple

For your infrastructure to be cloud-ready, networks also need an overhaul. While private clouds mask underlying differences at the infrastructure layer to allow for scale and dynamism, this homogeneity creates new network bandwidth and provisioning challenges.

Bandwidth: Even if your network is humming along, with 1 Gigabit Ethernet bandwidth and a handful of solid links to serve everyone’s needs, you may still have bandwidth problems waiting in the wings. So get ready to invest in tools for monitoring network congestion. If you virtualize everything you can and start serving all these resources from the network, with users able to provision them on their own, the bottlenecks will arise relatively quickly.
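The arithmetic underneath such monitoring is simple enough to sketch; the counter values below are invented, and in practice they would come from SNMP polls or the switch vendor’s own tooling:

```python
def link_utilization(octets_t0: int, octets_t1: int,
                     interval_s: float, link_bps: float = 1e9) -> float:
    """Percent utilization of a link, from two samples of its interface
    byte counter (counter wrap is ignored for brevity)."""
    bits = (octets_t1 - octets_t0) * 8
    return 100.0 * bits / (interval_s * link_bps)

# Two samples taken 60 seconds apart on a 1 GbE link (values invented):
print(f"{link_utilization(4_200_000_000, 9_800_000_000, 60):.1f}% busy")  # 74.7% busy
```

A link that sits at that level for sustained periods is usually worth investigating before self-service provisioning arrives.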

If VM sprawl is an issue for your IT shop, a private cloud will pose even bigger problems. You might have a team standing up handfuls of servers simultaneously and creating massive loads that disrupt other operations. Now imagine them doing it from home and clogging your entire operation’s Internet connection until you can corral them. If you’re also planning virtual desktop infrastructure or workspace virtualization, the headaches multiply: client/server designs do the work at both ends of the network and exchange information, while cloud computing does most of the work in the data center and streams it continuously to the user.

To combat these issues, consider reallocating and expanding bandwidth for resource-hungry users before implementing cloud strategies. Many IT shops have a kind of “fairness doctrine” in place, in which all parts of the organization get an equal share of company network resources whether they need it or not. Instead, plan on careful segregation of the different kinds of users and have the headroom in place to accommodate that allocation of resources.

A virtualized environment that consolidates numerous physical servers into a smaller number won’t necessarily add to network traffic, so traffic hasn’t been a big consideration in resource allocation. But revamping your data center into a private cloud means delivering ever more services over your network to users who come and go as they please. Consider your bandwidth needs and think hard about an upgrade.

VLAN tagging: Virtualized networks also need to separate VMs so that one cloud tenant’s data stays private from another’s. That requires mechanisms that let these tenant networks share the same physical network link without compromising or leaking information between them.

To allow access to a physical network, most cloud automation software uses the virtual local area network (VLAN) tagging model. This approach requires the network team to pre-create pools of VLAN IDs on a physical switch. When a new VM or virtual application is created, a cloud end consumer consumes one of these VLAN IDs without having to ask the network team to set it up.

But VLANs defined on a physical switch are not “free.” Most physical switches support only a limited number of VLAN definitions (the 802.1Q standard allows just 4,094 usable IDs), and that name space can be consumed much faster than expected. The biggest change here is convincing a network team that creating VLANs up front, whether or not they are ever used, is a good idea. In some respects, it flouts a generation of best practices that counsels IT managers to configure only what is needed to protect resources from being hijacked by nefarious intruders.
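A toy allocator makes the exhaustion problem concrete. This is an illustration of the bookkeeping only, not any vendor’s implementation, and the pool boundaries are arbitrary:

```python
class VlanPool:
    """Model of a pre-created VLAN ID pool that the cloud layer
    draws from as consumers create new virtual applications."""
    def __init__(self, first_id: int, last_id: int):
        self.available = set(range(first_id, last_id + 1))
        self.in_use = {}  # vlan_id -> tenant

    def allocate(self, tenant: str) -> int:
        if not self.available:
            raise RuntimeError("VLAN pool exhausted; reclaim IDs or extend the pool")
        vlan_id = self.available.pop()
        self.in_use[vlan_id] = tenant
        return vlan_id

    def release(self, vlan_id: int) -> None:
        self.in_use.pop(vlan_id, None)
        self.available.add(vlan_id)

pool = VlanPool(100, 149)           # 50 IDs pre-created by the network team
vid = pool.allocate("finance-app")  # one self-service request consumes one ID
```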

Virtual switches: IT managers need a bulletproof strategy for the logical configuration and management of the virtual switches (vSwitches) that provide VM connectivity. Virtualization admins may need to reexamine their default settings, which originally may have been created for a server consolidation project. Most virtual switches, for example, have a set number of “ports” into which a VM can be “plugged,” much like a conventional physical device such as a 48-port switch. Of course, in the virtualization world, you can have far more “ports” than you can in the physical world.

Most virtual switches use a static model for assigning ports to VMs. This pool of static ports can quickly become depleted, so a virtualization administrator has to look closely at vSwitch settings to allow for a more dynamic model, one that creates and destroys ports on vSwitches as they are needed.
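The contrast between the static and dynamic models fits in a few lines of Python. The port counts below are placeholders, and real behavior depends on the hypervisor:

```python
class VSwitch:
    """Contrast a fixed port pool with one that grows on demand."""
    def __init__(self, ports: int = 128, elastic: bool = False):
        self.capacity = ports
        self.connected = 0
        self.elastic = elastic

    def plug_in_vm(self) -> None:
        if self.connected >= self.capacity:
            if not self.elastic:
                raise RuntimeError("vSwitch port pool depleted")
            self.capacity *= 2  # dynamic model: grow the pool as needed
        self.connected += 1

static_switch = VSwitch(ports=128)                 # fails on the 129th VM
elastic_switch = VSwitch(ports=128, elastic=True)  # grows instead of failing
```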

Myth 4: Private clouds are simple

While vendors are on a mission to “cloudify” their services and tout the path to a private cloud as simple and easy (with their help, of course), IT managers should take heed. Reflect on your experience with other IT projects—a software migration or a legacy hardware upgrade—and the technology change and personnel upheaval it takes to get there.

A private cloud infrastructure is no different. A true private cloud model means rethinking all the infrastructure elements that make up your data center—and the people who manage those IT resources. Don’t be afraid to roll up your sleeves and challenge the vendor’s take. It’s going to take a whole lot of change—and change management—to get there.

About the author:
Mike Laverick is an IT instructor with 17 years of experience in technologies including Novell, Windows and Citrix Systems. Since 2003, he has been involved with the VMware community and is a VMware forum moderator as well as a member of the London VMware User Group Steering Committee. He is the owner and author of the virtualization blog RTFM Education, where he publishes free guides and utilities for VMware users. He is also writing a book on building a cloud with VMware vSphere as the foundation.

This was last published in May 2011
