Preparing for a move to the cloud includes vital steps, such as analyzing technical requirements and implementing security protocols. However, even with the best planning, you can still encounter obstacles. Once you've prepped for a cloud migration project, you need to explore the impact on data center configuration management, networks and storage.
The hybrid cloud puzzle involves several complex pieces, but none of them is insurmountable; new and better solutions arrive every month. If you and your organization take the nontechnical messages of cloud computing -- namely centralization and automation -- to heart, you will become more flexible, better able to take advantage of solutions as they emerge and, most likely, able to save money in the process.
Building service catalogs, templates to automate configuration management
A primary benefit of public clouds is the ability to dynamically scale systems and resources to match workloads. This saves money because you don't need to size your system for a yearly peak workload, just for today's workload. But to rapidly scale systems, staff will need to build and maintain good virtual machine templates to use with these tools. They will also likely need to explore some automated configuration management.
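The savings from sizing for today's workload rather than the yearly peak can be sketched with some simple arithmetic. The instance counts, hours and hourly rate below are invented for illustration, not drawn from any real provider's pricing:

```python
# Illustrative arithmetic (all rates invented): cost of running peak
# capacity year-round versus scaling capacity to match a seasonal peak.

HOURLY_RATE = 0.10    # hypothetical cost per instance-hour
HOURS_PER_YEAR = 8760

def yearly_cost(instances_by_period):
    """instances_by_period: list of (instance_count, hours) tuples."""
    return sum(count * hours * HOURLY_RATE for count, hours in instances_by_period)

# Suppose peak demand needs 20 instances, but only ~10% of the year;
# 5 instances suffice the rest of the time.
static = yearly_cost([(20, HOURS_PER_YEAR)])
elastic = yearly_cost([(20, 876), (5, HOURS_PER_YEAR - 876)])
```

Under these made-up numbers, the statically sized system costs roughly three times as much per year as the elastically scaled one; the point is the shape of the calculation, not the specific figures.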
Implementing configuration management with tools like Chef and Puppet isn't simple, but it opens the door to extreme levels of automation and change control, which saves staff time, prevents outages and assists with security by keeping all OS configurations in sync. As with authentication, consider your goals so that you can design these systems to be robust during site outages. Staff may also need training, and you may need to build additional infrastructure -- such as separate configuration repositories and servers, firewall rules, etc. -- to support these new tools.
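The core idea behind tools like Puppet and Chef is declarative, idempotent configuration: describe the desired state, compute only the changes needed to reach it, and applying the same configuration twice changes nothing. The sketch below illustrates that model in plain Python; the setting names are hypothetical and this is not either tool's actual API:

```python
# Minimal sketch of the declarative model behind configuration
# management tools: diff desired state against actual state and
# return only what must change, so a second run is a no-op.

def plan_changes(desired: dict, actual: dict) -> dict:
    """Return the settings that must change to reach the desired state."""
    return {key: value for key, value in desired.items()
            if actual.get(key) != value}

desired = {"ntp_server": "ntp.example.com", "ssh_root_login": "no"}
actual = {"ntp_server": "pool.ntp.org", "ssh_root_login": "no"}

changes = plan_changes(desired, actual)
# Only ntp_server differs, so only it appears in the plan;
# running plan_changes(desired, desired) would return an empty dict.
```

Keeping every OS in sync, as the article describes, amounts to running this diff-and-apply loop continuously across the fleet.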
Retrofitting networking to your cloud migration project
Networking is central to what makes the cloud possible. A successful hybrid cloud implementation depends on good networking practices, excellent and comprehensive monitoring, and rapid troubleshooting. Adding reliable, highly available connectivity to multiple sites, load balancing, dynamic scaling and security requires staff time and considerable skill.
Moving workloads out of a data center to a public cloud can stress an organization's external network connections. You may choose to make a single network connection redundant to help guarantee that a problem with one provider doesn't take all your company's products offline. These tasks aren't simple and need to be planned carefully with a network engineering team. It also is important that the application and system administrators work together with the network engineers for sizing and troubleshooting.
More traffic on network connections may mean more traffic through firewalls, intrusion-detection devices and intrusion-prevention devices that were never sized for that amount of traffic. Scaling them up and adding redundancy is a must to prevent single points of failure from taking hybrid cloud applications offline. Likewise, intrusion detection and prevention systems need to be configured so that communications from white-listed remote hosts aren't interrupted.
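A quick way to frame the sizing question above is a headroom check: does a device's rated throughput exceed the expected peak by a comfortable safety margin? The numbers and margin below are illustrative, not a recommendation for any particular device:

```python
# Simple sanity check (numbers are illustrative) that a firewall's
# or IDS/IPS device's rated throughput leaves headroom over the
# expected peak traffic after the cloud migration adds load.

def has_headroom(rated_mbps: float, expected_peak_mbps: float,
                 margin: float = 0.5) -> bool:
    """True if rated capacity exceeds the peak by the given safety margin."""
    return rated_mbps >= expected_peak_mbps * (1 + margin)

# A 1 Gbps appliance facing a 600 Mbps peak passes with a 50% margin;
# the same appliance facing a 700 Mbps peak does not.
ok = has_headroom(1000, 600)
too_tight = has_headroom(1000, 700)
```

Redundant pairs of appliances should each pass this check alone, so that losing one device doesn't leave the survivor undersized.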
Implementing service management
A robust monitoring technology indicates the state and performance of every system in your data center. But as you move to the cloud, are these systems extensible, and will they work for the cloud? Perhaps. The technologies for on-premises virtual environments may work for public cloud environments as well. Other considerations might emerge, such as disaster recovery. If the primary site is down, how can you manage and monitor systems? Perhaps you choose to replicate your management services as well, or create a secondary monitoring system at the alternate site.
Real-time performance metrics are also important, and access to them depends on the cloud provider you choose. Performance metrics ensure that technical staff can troubleshoot a problem, help inform the automatic scaling features of hybrid clouds and are often used for chargeback, billing and reporting. Using a monitoring tool or service that can automatically trigger scaling up or down is a key part of the move toward a hybrid cloud, but it is often overlooked until later in the process. A chargeback process that is aware of up-to-the-minute charges from cloud providers is also a must. Choose tools with good programming interfaces and have IT staff that can configure and manage those tools and integrate them into your company's business processes.
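The metric-driven scaling logic described above typically reduces to a pair of thresholds: scale out above a high-water mark, scale in below a low-water mark, and otherwise hold steady. The sketch below is a generic illustration of that pattern, not the API of any particular monitoring tool, and the thresholds are arbitrary:

```python
# Hypothetical sketch of the scaling decision a monitoring tool's
# metrics feed: thresholds with a dead band in between, so capacity
# doesn't flap up and down on small fluctuations.

def scaling_decision(cpu_percent: float,
                     high: float = 75.0, low: float = 25.0) -> str:
    """Map a CPU utilization reading to a scaling action."""
    if cpu_percent > high:
        return "scale-out"
    if cpu_percent < low:
        return "scale-in"
    return "hold"
```

In practice the same metric stream drives chargeback and reporting, which is why the article recommends tools with good programming interfaces: one well-instrumented pipeline can feed all three uses.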
Good service management techniques don't stop once a service is partially or completely in the cloud. Adapting internal configuration management databases and other tools to the cloud is important. Some of this work is strictly process-oriented, rather than technological, though there are likely good integration possibilities. In some cases, tracking certain assets in a traditional configuration management database is impossible, given the dynamic nature of the cloud.
Moving from a private cloud to a hybrid cloud requires planning and implementation work throughout a data center. Basic assumptions that have built up over decades need to be rethought, tools need to be re-evaluated and all parts of an infrastructure likely need to be changed in a careful way. Having clear goals in mind informs much of this work, which is often about communication just as it is about technical implementation.
Don't ignore storage and backup
In the race to the cloud, IT management often overlooks storage and backup needs. But with good communication of business requirements and solid work on technical requirements, these problems can be mitigated.
First, not all cloud storage is the same. Consider that most on-premises storage is sized in two ways: performance and price per gigabyte. But in the cloud you often see only one fee: price per gigabyte. When you select a public cloud provider, inquire about performance options. Many inexpensive-seeming providers use slower SATA disk arrays to drive down costs. But if your applications require additional performance, you may find yourself without options. Many providers have begun to add service tiers that guarantee certain levels of storage performance, and selecting a provider that does so allows you to save money where performance isn't necessary but spend money selectively to make performance-sensitive applications work well. Choosing a provider that allows you to move dynamically between these tiers may be of interest, especially as unanticipated performance requirements crop up.
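The tiering trade-off above is easy to see with some back-of-the-envelope arithmetic. The per-gigabyte rates and the 20% "hot data" fraction below are invented for illustration only:

```python
# Illustrative arithmetic (all prices invented): keeping only the
# performance-sensitive fraction of data on a faster, pricier tier
# versus putting everything on the fast tier.

def monthly_cost(gb_fast: float, gb_cheap: float,
                 fast_rate: float = 0.25, cheap_rate: float = 0.05) -> float:
    """Monthly storage bill for data split across two tiers."""
    return gb_fast * fast_rate + gb_cheap * cheap_rate

total_gb = 10_000
hot_fraction = 0.2   # assume 20% of data needs the fast tier

tiered = monthly_cost(total_gb * hot_fraction, total_gb * (1 - hot_fraction))
all_fast = monthly_cost(total_gb, 0)
# Under these made-up rates, tiering costs a fraction of all-fast storage.
```

A provider that lets you move data between tiers dynamically turns this from a one-time sizing decision into an ongoing optimization.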
Second, backup needs are often overlooked with hybrid clouds. Do you plan to use your legacy system to back up cloud-based virtual machines? How will that affect network traffic? Just as important, how will that affect your bill, given that most providers charge per gigabyte of traffic moved off their network? The cloud provider may offer cost-effective internal backup options, but those will require different processes and procedures for restoring data than your established systems. Also consider encrypting backups, especially on third-party shared services. Backup encryption is not a simple thing: it requires procedural changes to store encryption keys securely, as well as testing of restores and of encryption key changes.
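The billing question above is worth estimating before committing to a backup design. The backup sizes, schedule and egress rate below are hypothetical, chosen only to show the shape of the calculation:

```python
# Back-of-the-envelope estimate (all figures invented) of what pulling
# cloud VM backups to a legacy on-premises backup system costs when
# the provider bills per gigabyte of egress traffic.

def monthly_egress_cost(full_gb: float, incremental_gb: float,
                        fulls_per_month: int = 4,
                        incrementals_per_month: int = 26,
                        rate_per_gb: float = 0.09) -> float:
    """Monthly egress bill for a weekly-full, daily-incremental schedule."""
    moved_gb = (full_gb * fulls_per_month
                + incremental_gb * incrementals_per_month)
    return moved_gb * rate_per_gb

# Example: 500 GB fulls and 20 GB incrementals move 2,520 GB a month.
cost = monthly_egress_cost(full_gb=500, incremental_gb=20)
```

Even modest backup sets can generate a recurring egress charge, which is why a provider's internal backup offering can be worth its different restore procedures.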
About the author
Bob Plankers is a virtualization and cloud architect at a major Midwestern university and author of The Lone Sysadmin blog.