For most organizations, designing and managing a private cloud is a tectonic shift in existing IT operations. All layers of the data center stack require retooling to ensure solid network, storage and application performance, secure data exchange, and flexibility in a cloud environment.
Changes can be welcome as cloud designers rethink old processes and methods. Cloud computing affects everyone in IT.
In the first part of this series, we explored how enterprises must rethink networking and security for private and hybrid cloud. In part two, we look at how cloud computing affects legacy applications and forces IT managers to shift away from traditional data center management practices. We also look at how licensing, fees and chargeback differ in the age of cloud.
What about our legacy applications?
Enterprises are built on legacy applications. These applications assume a traditional operating system, such as Microsoft Windows, running on a traditional server. The challenges of moving legacy applications to a private cloud are often the same as those in traditional virtualization projects, including performance problems and trouble migrating highly customized applications.
New cloud-based approaches, such as VMware's SpringSource, offer radically different models for designing applications, but they also change how applications are deployed and supported. These application-based clouds can considerably improve developers' lives: they aim to mask the complexities of OSes and networking from application developers, enabling them to write software that can be deployed internally.
Google App Engine and VMware vFabric are good examples. But while masking these complexities enables applications to work and scale in cloud environments, system administrators' lives, in turn, can get more complex. How do these applications get backed up? How are they monitored? How are they secured? Environments such as VMware vFabric Server are delivered as appliances, whose black box–like nature foils traditional attempts to manage them.
Cloud can eliminate IT silos
Cloud projects also disrupt entrenched departmental silos and functions. Because a cloud makes resources more dynamic and can strain performance and data security, siloed IT teams are often forced to come together to manage the system as a whole. But in many cases, one department's efforts to enhance a cloud deployment can undermine the work of another.
Network professionals, for example, spend a lot of time worrying about how data moves around the data center. They size switch interconnections just right for workloads. They configure routers and firewalls to maximize efficiency. They tweak everything and monitor it thoroughly. And then system administrators come along and break all these assumptions with live migration, hundreds of guests per host, trunked network ports and other virtualization tactics.
To boot, the systems guys now work with technologies that have traditionally been the domain of networking, such as firewalling, intrusion detection and prevention systems, and network segmentation and design. Storage professionals share some of these challenges. Their traditional usage model for a storage area network is disrupted. Storage arrays choke under all the seemingly random I/O from cloud hosts. Security models for networks, storage and applications all need revision, too.
Change and configuration management becomes taxing. Even system administrators -- often cited as the cause of all this chaos -- are thrown into the mix as separate departments that previously ran their own servers are forced together into a single cloud mandated by management.
Change is difficult, and the transition to a cloud causes great anxiety as we rethink traditional IT. There are solutions to these problems, though. When it comes down to it, storage, networking, systems and security teams have to communicate about requirements and concerns. They also have to move at a comfortable pace for everyone that allows problems to be identified and resolved before they become overwhelming. Rather than being seen as a time sink, a cloud deployment offers an opportunity to rethink existing practices and fix the broken processes that IT has endured for years.
Private cloud licenses, support and chargeback
In addition to the disruptive nature of cloud technologies, cloud licensing adds complexity. Increasingly complex systems that need tweaking, troubleshooting and monitoring threaten to eat into cloud cost savings through lost staff time.
Private clouds are composed of layers of software, from common virtualization technology at the bottom, management layers in the middle and user interfaces on the top. Each layer needs a different tool, and with each tool comes a license fee and a yearly support cost. Each tool also requires staff time to install as well as ongoing time to support the tool with patches and upgrades. Additionally, integration work is often needed for user access via corporate Active Directory or LDAP instances or between financial systems and cloud chargeback and reporting products.
A private cloud's chargeback-based billing system, where IT charges individual departments for their usage, is also daunting. Like a monthly phone bill, chargeback involves variable-rate charges that can catch departments unaware or prompt user resistance. Even choosing an accounting method can be problematic. Do you charge based on resources consumed, or do you charge a flat fee? Flat fees are nice for budget estimations, but they may not be fair, since departments running small virtual servers subsidize those running large ones. If you charge based on resource consumption, you have to meter that consumption as well, which adds complexity and monitoring work for staff.
Charging based on resource consumption can also invite political battles. Tracking CPU usage can be particularly contentious because it's highly variable. When a department receives a bill for CPU usage, it may challenge why it has to pay for IT tasks, such as server patching, that were previously "free."
Too much focus on the costs charged back can also prompt those being billed to game their usage to minimize the bill, which usually undermines the efficiency of the entire cloud environment. As a result, many chargeback systems take simpler approaches, implementing a base charge plus charges for RAM and disk allocations.
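To make the simpler allocation-based model concrete, here is a minimal sketch of such a chargeback calculation. All of the rates, virtual machine names and sizes are hypothetical, invented purely for illustration; a real system would pull allocations from the cloud management layer and rates from the finance side.

```python
# Hypothetical chargeback sketch: base charge plus RAM and disk allocations.
# Rates and VM figures below are invented for illustration only.

BASE_FEE = 50.00          # flat monthly charge per virtual server
RATE_PER_GB_RAM = 8.00    # monthly charge per GB of allocated RAM
RATE_PER_GB_DISK = 0.10   # monthly charge per GB of allocated disk

def monthly_charge(ram_gb: float, disk_gb: float) -> float:
    """Bill one VM on what it is *allocated*, not what it consumes."""
    return BASE_FEE + ram_gb * RATE_PER_GB_RAM + disk_gb * RATE_PER_GB_DISK

# A department's (hypothetical) virtual servers: (name, RAM GB, disk GB)
vms = [("web01", 4, 100), ("db01", 16, 500), ("app01", 8, 200)]

bill = sum(monthly_charge(ram, disk) for _, ram, disk in vms)
print(f"Department total: ${bill:.2f}")  # → Department total: $454.00
```

Because the bill depends only on allocations, it stays predictable month to month, which is exactly why many shops accept its rough fairness over the political friction of metering CPU consumption.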
Now, more than ever, cloud architects need to communicate and work actively with network, security and systems counterparts on design, support and processes.
These interconnected technologies and practices require an interconnected plan. Only in breaking down internal borders can companies truly cope with these technology shifts and begin to focus on strategic business goals.
ABOUT THE AUTHOR
Bob Plankers is a virtualization and cloud architect at a major Midwestern university. He is also the author of The Lone Sysadmin blog.