
No democracy for apps in the cloud?

In the cloud, not all applications are created equal, and moving some applications there could spell disaster. Here are seven steps to create a rock-solid strategy for porting apps to the cloud.


Just because you want to move an application to the cloud doesn’t mean that you should. In the cloud, not all applications are created equal, and some are downright wrong for the infrastructure model. 

To make the right decision about which apps to move, you need a solid migration strategy. You need to consider your application portfolio and your business requirements to prevent problems such as poor application performance and latency, data leakage, or issues with compliance or other regulations. Applications subject to regulation or those that are business-critical, for example, are often poor candidates for cloud migration. And legacy applications may not stand up to the customization required for a move to the cloud. 

But when it comes to these decisions, you don’t have to fend for yourself. You can rely on established best practices to prevent disaster. Here’s how to develop a foolproof strategy for moving the right applications to the cloud, which starts by outlining clear objectives, then focuses on your application portfolio’s characteristics and business requirements to determine best fit. 

1. Define your cloud objectives

The first task is to identify why you want to move a given application to the cloud. Is your goal to save costs or to scale an application quickly to meet new business demand? Sometimes your goals clearly align with the applications you want to move to the cloud, enabling you to save money and become more responsive to business needs -- and avoid costly infrastructure investments to expand capacity. 

But other use cases won’t fulfill these goals, particularly applications that are mission-critical, resource-intensive, or those that house sensitive data. If you have to retool a legacy application to move it to the cloud, for example, it may drain staff time and, ultimately, money. Does the resource cost justify the move? If not, consider hosting or another alternative. 

The same applies to workload-intensive applications that require extremely low latency and have steep disk I/O requirements or may pose performance tradeoffs that are unacceptable for business users. In such use cases, revisit your model for managing applications in-house. 
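
To put a rough number on the earlier question of whether the resource cost justifies the move, a break-even sketch like the one below can help. It is written in Python with entirely illustrative figures (staff rates, hosting costs); substitute your own estimates before drawing any conclusions.

```python
# Rough break-even sketch: does the projected infrastructure saving justify the
# one-off cost of retooling a legacy application for the cloud? All figures are
# illustrative assumptions, not benchmarks.

def months_to_break_even(retooling_cost, monthly_onprem_cost, monthly_cloud_cost):
    """How many months of cloud savings it takes to recoup the retooling effort."""
    monthly_saving = monthly_onprem_cost - monthly_cloud_cost
    if monthly_saving <= 0:
        return None  # the cloud never pays back the migration effort
    return retooling_cost / monthly_saving

# Example: 30 staff-days of retooling at $800 per day, versus $4,000 per month
# on premises and $2,500 per month in the cloud.
payback = months_to_break_even(30 * 800, 4_000, 2_500)
print(f"Break-even after {payback:.1f} months" if payback else "No payback from migration")
```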

2. Understand scalability and redundancy

Scalability. Cloud computing is all about scale and the ability to ramp up additional resources on demand as workloads change. So the easiest applications to move to the cloud are those with built-in scale-out capabilities and redundancy. 

Historically, IT departments have used the scale-up approach, adding more memory and CPU to servers to improve performance. But with cloud computing, the far simpler method is to scale out -- that is, to add more nodes to the system, often by spinning up a new virtual machine (VM) when peak demand occurs. Ideally, these new VMs can be deployed rapidly without the application owner needing to go through a convoluted post-configuration process. These additional VMs can be spawned on demand, then destroyed when no longer needed -- or left in standby mode, ready for the next spike in demand. Scale-out architectures suit the cloud, which requires immediate, on-demand access to these scalable resources. It's much more difficult to add resources in the form of CPU or memory on the fly: not every guest operating system supports this functionality, and, depending on the features of the OS and the capabilities of your hypervisor, you may find that an OS needs a reboot for the change to be applied. 
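
To make the pattern concrete, here is a minimal scale-out sketch in Python. The CloudClient class is a hypothetical stand-in for whatever provisioning API your provider or private cloud exposes; the thresholds, pool names and VM limits are illustrative assumptions, not recommendations.

```python
# Minimal scale-out loop: add VMs when demand spikes, remove them when the spike
# passes. CloudClient is a hypothetical wrapper around whatever provisioning API
# your provider or private cloud exposes; it is sketched here only to show the
# pattern, not to document any real SDK.

class CloudClient:
    def average_cpu(self, pool):             # fraction of CPU in use across the pool
        raise NotImplementedError
    def vm_count(self, pool):                # VMs currently serving the pool
        raise NotImplementedError
    def provision_vm(self, pool, template):  # spin up a new VM from a template
        raise NotImplementedError
    def decommission_vm(self, pool):         # destroy (or park) the newest VM
        raise NotImplementedError

def autoscale(cloud, pool, template, high=0.75, low=0.25, min_vms=2, max_vms=10):
    """One pass of the scale-out decision; run it on a schedule with cooldowns."""
    load = cloud.average_cpu(pool)
    count = cloud.vm_count(pool)
    if load > high and count < max_vms:
        cloud.provision_vm(pool, template)   # scale out for peak demand
    elif load < low and count > min_vms:
        cloud.decommission_vm(pool)          # scale back in once the spike passes
```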

Resiliency and redundancy. If an application lacks built-in resiliency and poses a potential single point of failure, an organization has to spend time and money retrofitting the application to build in this functionality. This may require you to shoehorn availability technology into the guest OS to protect services that previously had none or, alternatively, enable a virtualization provider’s VM availability, such as Microsoft Hyper-V’s Failover Clustering or VMware’s High Availability. 

Whatever your decision -- and it may very well be a combination of both virtualization-enabled and added-on availability -- it will undoubtedly increase the cost of moving an application to the cloud. Even if your data center has these technologies on board, they still have to be managed and maintained, which only increases complexity when compared with applications that have built-in “self-healing” capabilities. 
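
As a crude illustration of the first option -- bolting availability onto the guest OS -- here is a minimal watchdog sketch in Python. It assumes a Linux guest with systemd; the service name and polling interval are placeholders, and a real retrofit would use a proper clustering or monitoring product rather than a loop like this.

```python
import subprocess
import time

# Crude in-guest watchdog: poll a service and restart it if it stops running.
# This is the kind of bolt-on availability described above -- far weaker than
# hypervisor-level HA or an application designed to self-heal. The service name
# and interval are placeholders for illustration.

SERVICE = "example-app"   # placeholder service name
CHECK_INTERVAL = 30       # seconds between health checks

def service_is_healthy(name):
    # systemctl returns 0 when the unit is active (Linux/systemd assumed)
    return subprocess.call(["systemctl", "is-active", "--quiet", name]) == 0

def watchdog():
    while True:
        if not service_is_healthy(SERVICE):
            subprocess.call(["systemctl", "restart", SERVICE])
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    watchdog()
```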

But the reality is that, today, applications with this built-in design for scale-out and availability are few and far between. Despite occasional sightings of this rare creature in the wild, they remain largely an endangered species compared with their natural predator: legacy applications that don’t scale, and don’t have built-in resiliency. 

3. Identify cloud-friendly applications

Now evaluate the applications you consider cloud candidates to determine whether they can achieve the scalability and redundancy that the environment requires. Here is a sample checklist of attributes to consider, though it may not encompass every consideration in your own environment; a rough scoring sketch follows the list. 

  • Business criticality. How central is this application to the business? What are the potential costs if the application were to go down? Mission-critical applications are rarely good candidates for a move to the cloud. 
  • Resource use. Does this application consume a lot of compute resources? If so, it isn’t likely to be a good candidate for the cloud. 
  • Availability. How many nines of uptime are expected of this application? Will moving it to the cloud change that degree of uptime? If the application requires four or five nines of uptime, it probably isn’t a good candidate for the cloud. Moreover, be wary of providers that claim to guarantee this level of reliability; companies like Google and Microsoft claim only three. 
  • Resilience. Does the application lack built-in resiliency and pose a potential single point of failure? If so, the organization has to spend time and money retrofitting the software to build in this functionality. 
  • Portability. Is the application easy to move to the cloud? Is it based on Java, .NET or another language? Cloud providers such as Google and Amazon use different underlying architectures, which quickly becomes problematic if you're considering moving applications from one provider to another. This becomes especially apparent if you use a Platform as a Service offering that is tied to a specific programming language. 
  • Scalability. Can this application scale, and do you need it to scale for peak demand times?
  • Application dependencies. Does the application rely on other software, such as a database, to run? The greater the dependencies, the less likely it’s a fit for migration to the cloud. 
  • Data security. Does the application house data that is subject to strict security requirements or compliance regulations? Applications that contain sensitive data or that are subject to regulation are poor candidates for the cloud. 
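
As promised above, here is a rough Python sketch that turns the checklist into a per-application score. The attributes, weights and verdict threshold are illustrative assumptions only; tune them to reflect your own portfolio and priorities.

```python
# Rough cloud-candidacy scoring for the checklist above. The attributes, weights
# and threshold are illustrative assumptions -- adjust them to your environment.

WEIGHTS = {
    "mission_critical": -3,        # criticality counts heavily against migration
    "resource_hog": -2,
    "needs_four_nines": -2,
    "single_point_of_failure": -1,
    "hard_to_port": -2,
    "can_scale_out": +3,
    "heavy_dependencies": -1,
    "regulated_data": -3,
}

def cloud_score(app_traits):
    """app_traits: dict mapping attribute name -> bool for one application."""
    return sum(WEIGHTS[attr] for attr, present in app_traits.items() if present)

crm = {
    "mission_critical": True, "resource_hog": False, "needs_four_nines": True,
    "single_point_of_failure": True, "hard_to_port": False,
    "can_scale_out": False, "heavy_dependencies": True, "regulated_data": True,
}
test_env = {
    "mission_critical": False, "resource_hog": False, "needs_four_nines": False,
    "single_point_of_failure": False, "hard_to_port": False,
    "can_scale_out": True, "heavy_dependencies": False, "regulated_data": False,
}

for name, traits in [("CRM system", crm), ("test/dev environment", test_env)]:
    score = cloud_score(traits)
    verdict = "worth evaluating for the cloud" if score >= 0 else "probably keep in-house"
    print(f"{name}: score {score}, {verdict}")
```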

Now you can consider these attributes in light of the applications in your infrastructure. If your application is a resource hog, for example, placing it in the cloud will likely only introduce or augment performance problems. Similarly, if your app relies on other software, such as a database, to run, or is subject to data privacy concerns, it probably isn't a good candidate for cloud migration. For these reasons, many organizations have begun the process of porting applications to the cloud by targeting email programs, disaster recovery, and test and development environments. Such applications are often natural fits for the cloud: They may need elastic resources for peak volumes, they aren't mission-critical, and they don't house sensitive company or customer data. You can start with these applications and gain production-level experience while minimizing the risks. 

Another key aspect of this step is consulting with stakeholders to reality-check your findings. You may discover that an application's owners have solid automation routines in place for installation and configuration that can be seamlessly integrated into the cloud deployment process. Alternatively, you may discover that an application is resistant to being ported to a cloud environment because of security or auditing processes. 

4. Select a resource consumption model

Generally, you can consume a private or public cloud in three formats: allocation, reservation and pay as you go. With the allocation model, you assign a percentage of CPU/memory from a virtualization cluster, which controls the resource pools and per-VM defaults. Critically, only a certain percentage of those resources are guaranteed or reserved. So if you set your allocation policy at 75%, you have 25% unreserved resources. If you exceed the 75% value, it's anyone's guess whether those CPU/memory resources will be available. 

With the reservation model, these percentages are set to 100%, and you are guaranteed 100% of the megahertz or gigabytes you reserve. This can be costly; if you set too high a reservation, you pay for resources you may never use. 

Finally, the pay-as-you-go model -- often the most attractive -- is based on variable consumption of compute resources, and the cost varies according to what you consume. But as with all pay-as-you-go models -- such as cell phones -- there is a risk of receiving a larger-than-expected bill if applications' resource demands vary. 
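
To make the differences tangible, here is a small Python sketch comparing what one month might cost under each model. Every price and usage figure is a made-up assumption, chosen only to show how the arithmetic differs between allocation, reservation and pay as you go.

```python
# Illustrative comparison of the three consumption models over one month.
# Prices and usage figures are fabricated purely to show how the math differs.

RATE_PER_GHZ_HOUR = 0.05      # assumed unit price
CLUSTER_CAPACITY_GHZ = 100    # assumed slice of the virtualization cluster
HOURS = 730                   # roughly one month

def allocation_cost(percent_allocated):
    # You pay for the allocated share whether or not you use it; anything above
    # the guaranteed slice is best-effort and not modeled here.
    return CLUSTER_CAPACITY_GHZ * percent_allocated * RATE_PER_GHZ_HOUR * HOURS

def reservation_cost(reserved_ghz):
    # 100% guaranteed, 100% paid for -- even if it sits idle.
    return reserved_ghz * RATE_PER_GHZ_HOUR * HOURS

def pay_as_you_go_cost(hourly_usage_ghz):
    # Billed purely on consumption; cheap when quiet, surprising when busy.
    return sum(usage * RATE_PER_GHZ_HOUR for usage in hourly_usage_ghz)

steady = [20] * HOURS              # flat 20 GHz all month
spiky = [10] * 700 + [90] * 30     # mostly quiet with a month-end spike

print(f"Allocation (75%): ${allocation_cost(0.75):,.0f}")
print(f"Reservation (40 GHz): ${reservation_cost(40):,.0f}")
print(f"Pay as you go, steady load: ${pay_as_you_go_cost(steady):,.0f}")
print(f"Pay as you go, spiky load:  ${pay_as_you_go_cost(spiky):,.0f}")
```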

5. Identify roadblocks

Of course, you may still encounter objections to migrating applications to the cloud, and it’s critical to address these challenges head-on. Some challenges are technical and architectural, but some involve human obstacles. 

Application portability. Users want assurance that they can bring cloud-based workloads back into their data center if circumstances change. But clouds like Amazon Web Services use virtual machine images, which are proprietary and difficult to map to enterprise networks. While the industry has begun to move toward standard application programming interfaces and other common standards for clouds, vendors haven't coalesced around common practices, and providers want to preserve product differentiation and stave off commoditization. Still, the industry has made some strides in making workloads independent of the hypervisor, enabling interoperability with multiple virtualization platforms. Organizations including the DMTF, IEEE, the Open Cloud Initiative and others have also pushed for common standards throughout the market. But these efforts are still nascent, and new methods of abstracting resources are necessary to improve application portability. 

Security. Another primary roadblock is the objection that public and hybrid clouds pose security risks. IT managers are concerned about the risks of data leakage in a multi-tenant environment—not to mention the lack of control over their data. 

Given the immaturity of many cloud management products and vendors' slow moves to develop cloud security standards, IT managers are rightly concerned about data insecurity. As recently as July 2011, Gartner Inc. analyst Neil MacDonald characterized cloud computing standards as "nascent" and insufficient. 

One reason that data security in the cloud is slow going is that the market has placed greater focus on network security, creating technologies that allow for secure multi-tenancy, such as VMware's vShield technologies. 

But vendors have placed less emphasis on securing the data itself, as opposed to securing network packets. Many analysts believe that the public cloud will inevitably require some level of data encryption to address concerns about data interception. (What hasn't been discussed much is the additional overhead that such a system places on a cloud platform as each bit and byte is encrypted.)

IT managers can deal with some of these objections directly by reminding application owners that security starts at home, not with a cloud provider. First, check whether the current application set is up to date with all known security patches and configured with unnecessary features turned off to close potential gateways for hackers. Second, make application configuration the focus of security, compliance and performance concerns. This focus forces application owners to own the "problem" rather than object to cloud-based applications out of amorphous security paranoia. 

Compliance. Nearly every major industry has government-imposed regulations to meet, and in some cases, independent bodies impose additional regulations to be part of the club. Additionally, many cloud compliance requirements deal directly with local or central government, and these requirements are precisely the ones that public cloud vendors are inexperienced at delivering. Failure to meet compliance is the responsibility of the business, not the cloud provider, so simply blaming someone else is not a solution. Many think that businesses will want to buy insurance to cover themselves for breaches and noncompliance, but as the Sony PlayStation Network breach in April 2011 shows, there's no guarantee that an insurance company will accept liability and pay out on the policy. Companies must be prepared to accept responsibility for security breaches as well.

That's why many industry watchers predict that hybrid clouds are the inevitable answer to this compliance anxiety. Organizations will opt to hold data and compliance-sensitive applications in-house on a private cloud for the moment and restrict their use of public cloud to applications that aren't politically sensitive. 

6. Test a deployment strategy

One of the key components of cloud computing is the ability to rapidly spin up new applications from an existing catalog. If your infrastructure doesn’t have this automation built in, however, it takes time to develop and test. Rigorous testing with beta users helps to confirm that the service runs acceptably and reliably. 

Beta testers should encompass a broad swath of users: Give business users, administrators and developers a chance to evaluate the benefits and the limitations of the cloud from their perspective. Application experts can use the sandbox to run functionality and performance testing on the application in the cloud to see how it behaves compared with the traditional environment and whether any differences are acceptable. 
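
As one small example of that comparison, the Python sketch below times a single health-check endpoint in both environments. The URLs are placeholders, and a real beta would exercise full functional and load-test suites rather than one request loop.

```python
import statistics
import time
import urllib.request

# Minimal response-time comparison between the existing deployment and the cloud
# pilot. The URLs are placeholders; a real functional/performance test would
# cover far more than a single endpoint.

ENDPOINTS = {
    "on-premises": "http://app.internal.example/health",
    "cloud pilot": "http://app.cloud.example/health",
}

def sample_latency(url, samples=20):
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        urllib.request.urlopen(url, timeout=10).read()
        timings.append((time.perf_counter() - start) * 1000)  # milliseconds
    return timings

for name, url in ENDPOINTS.items():
    timings = sample_latency(url)
    print(f"{name}: median {statistics.median(timings):.0f} ms, "
          f"worst {max(timings):.0f} ms")
```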

7. Select a network model

For the cloud model to work, you need a networking design that can accommodate virtualized, multitenant resources. 

At a simple level, resource sharing can take place by creating pools of virtual LANs (VLANs) at the physical switch, which are then addressed by the hypervisors' virtual switch configuration. (VLANs enable information and resource sharing across locations as if they were all under one roof.) Virtual switches are then presented automatically to the cloud automation layer to be consumed by tenants. 

But VLANs have their drawbacks; these models require a significant number of VLANs to be created up front as a pool of resources on a physical switch. Network administrators are often hesitant to create numerous VLANs in bulk that aren’t designated for immediate use because they view VLANs as the main avenue to control traffic and ensure network security. 

New alternatives allow cloud administrators to segment a network without excessive use of VLANs. VMware Inc.'s vShield Edge appliance, for example, can create "network isolation-backed" network pools. These pools use a MAC-in-MAC encapsulation process to add additional bytes to the standard Ethernet frame, which creates multiple network IDs within a single VLAN. The process is analogous to the 802.1Q VLAN tagging standard that many VMware admins have enabled in their physical and virtual environments, which allows many VLANs to be accessed through a network interface card (NIC) team. (In a team, two or more physical NICs are bonded together logically to create the impression of a single pipe. A NIC team aggregates bandwidth and offers redundancy should a NIC in the team fail.) With this MAC-in-MAC method, the same number of networks can be supported with fewer VLANs, and network administrators receive fewer requests. Fundamentally, it allows for the more dynamic and automated approach to creating new networks that cloud computing requires. 

This networking design approach comes with caveats as well. The MAC-in-MAC process adds 24 bytes to the overall Ethernet frame, so you may need to adjust the maximum transmission unit (MTU) value on your physical and virtual switches to prevent fragmentation of packets as they pass through devices still configured with the default of 1,500 bytes. Those devices need to be reconfigured to an MTU of 1,524 bytes or greater. If they aren't, every time a 1,524-byte (or larger) frame traverses a network device configured for 1,500 bytes, it gets split up into smaller units. This fragmentation can degrade performance and affect the reliability of secure protocols such as SSL. 
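
A quick way to spot the problem is to probe the path with the don't-fragment bit set, as in the hedged Python sketch below. It assumes Linux ping flags (-M do and -s), and the host names are placeholders; other operating systems use different options.

```python
import subprocess

# Quick check for the MTU issue described above: send a ping with the
# don't-fragment bit set and a payload sized so the IP packet exceeds 1,500
# bytes. If a device in the path is still configured for an MTU of 1,500, the
# ping fails instead of being silently fragmented. Linux ping flags assumed
# (-M do prohibits fragmentation, -s sets payload size); hosts are placeholders.

HOSTS = ["esx01.lab.example", "esx02.lab.example"]
PAYLOAD = 1524 - 28   # 1,524-byte IP packet minus 20-byte IP + 8-byte ICMP headers

for host in HOSTS:
    result = subprocess.run(
        ["ping", "-c", "3", "-M", "do", "-s", str(PAYLOAD), host],
        capture_output=True,
    )
    status = "OK" if result.returncode == 0 else "fragmentation or drop -- check MTU"
    print(f"{host}: {status}")
```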

Ensuring that all devices in the path of communication are configured for the correct MTU can be a management headache, and it's difficult to diagnose which device caused the fragmentation. So while changing the MTU value is a relatively trivial task, it must be done consistently across the affected network, and that can introduce an initial administrative burden for the network team, depending on the number of devices that need the update. 

In contrast to these methods, Nimbula, a cloud automation startup based in Palo Alto, Calif., uses a "security list" method as its access-control mechanism. Currently built on the Kernel-based Virtual Machine (KVM) hypervisor, Nimbula uses the Dom0 partition to store the mapping data and then controls access from one VM to another. While these methods of network isolation are innovative, they are also exceedingly new, and cloud service providers may not be ready to support them. 

Conclusion

By necessity, the process of deploying an application to the cloud varies based on organizations’ environments, business requirements and application portfolios. Just as the migration to server virtualization required a flexible approach to arrive at the end game, the journey to the cloud will require new technical, business and project management skills. 

As with other initiatives, planning and developing a migration roadmap is critical. Start by clearly defining the goals of migrating a given application and then identify applications whose internal characteristics are receptive to a cloud, such as those that offer an easy way to deploy new VMs as part of a scale-out approach and that -- ideally -- are designed with built-in redundancy. Companies have already had success with targeted application migration. Retail platforms that need quick scale-out to accommodate peaks in customer demand are a good example. 

Next, stakeholders need to agree on a resource consumption model for network, memory, CPU and disk resources that allows for easy adoption and acceptance among the various parties -- while also fitting into potential budget constraints. 

As you map out the various technical considerations and your cloud service-level agreement model, however, you also need to identify your company’s internal roadblocks to the cloud, such as the security and compliance requirements that often make the prospect of cloud migration a political hot potato. And applications and data subject to regulation should be kept in-house. Also carefully consider the departments most affected by the move and how to broker their investment in a cloud strategy. 

Testing your cloud deployment is also critical. Identify beta users who can give you a taste of the production requirements and the snafus you’ll likely encounter. 

Finally, remember that users are at the center of the cloud model. Guaranteeing system and application uptime and performance is a critical objective of any successful cloud migration strategy.

About the author:
Mike Laverick is a former VMware instructor with 17 years of experience in technologies such as Novell, Windows, Citrix and VMware. He has been involved with the VMware community since 2003. Laverick is a VMware forum moderator and member of the London VMware User Group. He is also the man behind the virtualization website and blog RTFM Education, where he publishes free guides and utilities for VMware customers. Laverick received the VMware vExpert award in 2009, 2010 and 2011. Since joining TechTarget as a contributor, he has also found time to run the weekly podcasts "The Chinwag" and "The Vendorwag." Laverick helped found the Irish and Scottish VMware user groups and now regularly speaks at larger regional events organized by the Global VMUG in North America, EMEA and APAC. He has published several books on VMware Virtual Infrastructure 3, vSphere 4, Site Recovery Manager and View.

