This content is part of the Essential Guide: Virtualization to cloud: Planning, executing a private cloud migration

Four network latency gotchas of private cloud

Cloud-enabling the wrong VM could have some serious consequences. Make one of these mistakes and you could sabotage your cloud efficiency.

If you believe the hype of virtualization platform vendors, you’d think the cloud is a perfect host for every virtual machine. Whether you’re connecting local and remote assets using VMware vCloud Connector or clicking the “Create Cloud” button in Microsoft System Center Virtual Machine Manager 2012, moving VMs to a cloud has never been easier.

But the easy option isn’t always the best option. Before pushing any VMs to the cloud, IT admins need to determine whether it even makes sense. And such decisions mirror those of server virtualization -- determining what to move from physical to virtual (P2V). With cloud, P2V has become V2C (virtual to cloud).

Network latency: The efficiency killer
When determining whether a VM is a good match for the cloud, network latency becomes a major concern and stands to be the biggest cloud efficiency killer. Here are the top four network latency “gotchas” to keep in mind when you’re making your next V2C decision.

Gotcha #1: Your Internet connection. Offloading the processing of VM activities to a cloud provider can free up in-house resources. However, your network connection can create a bottleneck when trying to relay activity results back to the data center.

Keep in mind the amount of throughput each VM needs when building network capacity between your data center and the Internet. Network measurement tools are a must to ensure efficiency.
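Before any measurement tooling is in place, a rough capacity check can flag an obvious bottleneck. The sketch below is a back-of-envelope illustration only; the VM names, per-VM throughput figures and uplink capacity are hypothetical, not drawn from any real environment.

```python
# Back-of-envelope check: will the aggregate cloud-VM traffic fit the uplink?
# All names and figures below are hypothetical examples, not measurements.

# Peak throughput each cloud-candidate VM relays back to the data center (Mbps)
vm_peak_mbps = {"web01": 40, "db01": 120, "batch01": 25}

uplink_mbps = 200   # hypothetical Internet link capacity
headroom = 0.70     # plan for roughly 70% sustained utilization at most

aggregate = sum(vm_peak_mbps.values())
budget = uplink_mbps * headroom

print(f"Aggregate VM demand: {aggregate} Mbps, usable uplink: {budget:.0f} Mbps")
if aggregate > budget:
    print("Bottleneck likely: measure per-VM traffic before moving these VMs.")
```

A check like this is no substitute for real measurement, but it shows why per-VM throughput numbers matter before a V2C move.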

Gotcha #2: Your traffic patterns. A slow Internet connection becomes less critical when network traffic occurs mostly between colocated VMs. In addition to measuring the aggregate network requirements of each cloud candidate, you also need to quantify which VMs each one is communicating with.

Obtaining this level of detail for a VM's network usage used to be challenging. Current network flow monitoring technologies such as NetFlow, sFlow, J-Flow and IPFIX can help. Flow monitoring delivers the added level of detail that helps admins isolate inside-the-cloud traffic from that which will later be separated by the Internet.

Until recently, tools for measuring network flows were available only to large enterprise customers with expensive equipment. Affordable, commercial flow monitoring tools are now available for even micro-IT shops, and a handful of open source monitoring tools are an option for those on a limited budget.
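The core idea behind separating inside-the-cloud traffic from Internet-bound traffic can be sketched with a few lines of code. The flow records below are invented for illustration, formatted loosely like what a NetFlow or sFlow collector might export; the subnet and addresses are assumptions, not real deployments.

```python
# Classify hypothetical flow records into intra-cloud vs. Internet-crossing
# traffic. The subnet, addresses and byte counts are made up for illustration.
from ipaddress import ip_address, ip_network

cloud_net = ip_network("10.20.0.0/16")  # assumed subnet holding colocated VMs

# (source, destination, bytes) tuples as a flow collector might report them
flows = [
    ("10.20.1.5", "10.20.1.9", 9_000_000),    # VM-to-VM inside the cloud
    ("10.20.1.5", "203.0.113.7", 1_200_000),  # VM relaying to the data center
    ("10.20.2.3", "10.20.1.5", 4_500_000),    # another intra-cloud flow
]

intra = sum(b for s, d, b in flows
            if ip_address(s) in cloud_net and ip_address(d) in cloud_net)
external = sum(b for _, _, b in flows) - intra

print(f"Inside-the-cloud: {intra} bytes, crossing the Internet: {external} bytes")
```

Real flow tools do far more, but the classification step -- grouping flows by whether both endpoints sit inside the cloud -- is what tells you which VMs can move together without dragging traffic across your Internet link.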


Gotcha #3: Your usage patterns. While it may seem obvious, business users' usage patterns can also affect a cloud-connected network. For example, hosted file services and cloud-based apps are becoming more prominent with the rise of Microsoft Office 365 and Google Apps for Business.

While office applications in the cloud offload the administration of complex services, they do so by relocating storage into the cloud. Highly distributed businesses that aren’t structured around a brick-and-mortar office infrastructure are particularly suited for moving these services to a public cloud.

On the other hand, businesses with well-established data centers and a central location may want to think twice. The cost and time needed to upload and download documents from a cloud service will likely outweigh the benefits.
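The time cost is easy to estimate with simple arithmetic. The document size and uplink speed below are hypothetical round numbers chosen only to show the calculation.

```python
# Rough cost-of-distance estimate: how long does a document set take to move
# over a given uplink? Both figures below are hypothetical examples.

doc_mb = 25        # size of a typical shared document set, in megabytes
uplink_mbps = 50   # assumed office Internet uplink, in megabits per second

seconds = (doc_mb * 8) / uplink_mbps  # convert MB to megabits, then divide
print(f"{doc_mb} MB takes about {seconds:.0f} seconds each way at {uplink_mbps} Mbps")
```

Multiplied across hundreds of users opening and saving files all day, even a few seconds per transfer adds up -- which is exactly the trade-off a centrally located business should weigh before relocating storage to the cloud.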

Gotcha #4: Your provider-to-provider networking. Companies hoping to completely eliminate the risk of a cloud provider outage affecting IT operations are interested in full-provider high availability.

This cloud-to-cloud network latency can be the most challenging to characterize before implementation. There simply aren't effective tools for measuring provider-to-provider throughput short of placing a few servers in each location and monitoring the traffic between them. Nevertheless, IT shops with extreme high-availability requirements shouldn't neglect monitoring the connections among providers.
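The "throw a few servers in each location" approach can start as small as timing a TCP handshake from a probe VM at one provider to a peer at the other. This is a minimal sketch of that idea, not a production monitoring tool; the peer hostname in the usage comment is hypothetical.

```python
# Minimal provider-to-provider probe sketch: time the TCP handshake to a peer
# server placed at the other provider. Hostnames here are hypothetical.
import socket
import time

def tcp_connect_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the TCP connect (handshake) time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the elapsed time
    return (time.perf_counter() - start) * 1000.0

# Example usage from a probe VM (requires network access to the peer):
# print(f"RTT to peer: {tcp_connect_ms('peer.other-provider.example'):.1f} ms")
```

Run on a schedule from probe VMs at each provider, even a crude timer like this builds the baseline latency picture that no off-the-shelf tool currently provides.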

Resource-bound becomes network-bound
IT’s glacial shift from server virtualization to a cloud-friendly architecture has changed where bottlenecks exist. Early virtualization environments were largely resource bound, suffering from shortfalls in processor, memory and storage capacity but generally were well connected via the network.

While the cloud effectively removes resource boundaries, it does so at the cost of pushing that processing back to local equipment. As a result, an investment in network monitoring technology is a good bet for future cloud builds.


Greg Shields, Microsoft MVP, is a partner at Concentrated Technology. Get more of Greg's Jack-of-all-trades tips and tricks at


Join the conversation



The title of the article says private clouds but the content is about public clouds.
Thanks for your comment. The author is actually referring to hybrid clouds that combine public and private clouds.