
What users can expect as cloud providers add new data centers

As the price wars die down, cloud providers are turning to new territory to expand their data center footprints. But will these new data centers affect users' workloads?

It seems like every quarter, cloud giants Amazon Web Services, Microsoft Azure and Google Cloud Platform spend billions of dollars on new data centers and infrastructure to support their cloud services around the world. But what happens to customers' data as a result of these massive and ongoing expansions?

The answer, it seems, is not much -- as long as users craft a strong service-level agreement (SLA) when they sign up for these services.

Providers rapidly add new data centers

Cloud providers have data centers spread across the world. They break out their services by geographic region: a group of largely autonomous mega data centers that work together to ensure availability and response time. Each data center in a region operates as an independent unit, responsible for its own power, cooling and network connections. If one piece of hardware encounters a problem or goes down, others in the region pitch in to keep the workloads running.
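
To see how this geography surfaces to customers, consider the short sketch below. It is only an illustration, written with the AWS SDK for Python (boto3), and it assumes valid AWS credentials are already configured; it lists the regions visible to an account and the availability zones inside each one.

# Sketch: enumerate the regions and availability zones an AWS account can see.
# Assumes boto3 is installed and AWS credentials are configured locally.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for region in ec2.describe_regions()["Regions"]:
    name = region["RegionName"]
    zones = boto3.client("ec2", region_name=name).describe_availability_zones()["AvailabilityZones"]
    print(name + ": " + ", ".join(z["ZoneName"] for z in zones))

Each region reports its own set of zones, which is the same independence providers lean on when one facility runs into trouble.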

The top cloud vendors continue to expand into new regions at a breakneck pace. Microsoft, for instance, has grown its Azure presence to 34 regions globally, with plans to add four more in 2017.

This expansion has happened for a couple of reasons. First, demand for cloud providers' services continues to rise. Research and advisory firm Gartner expects global public cloud service revenue to increase 18% in 2017, to $246.8 billion, up from $209.2 billion in 2016.

Consequently, providers can more easily justify the cost of new data centers. "Margins are high right now," said Carl Brooks, analyst at 451 Research.

In addition, providers have turned to leading-edge technologies, such as automation and machine learning, to increase system performance and drive down costs. "Google has robots that perform routine storage management functions," Brooks said.

Compliance regulations are another factor, according to David Cappuccio, vice president and analyst at Gartner. In a growing number of countries, such as France and Germany, the government requires cloud providers to have a local presence.

Shifting workloads

As providers build these new data centers, their workloads could shift. Rather than being processed in London, an Azure application for a business in Denmark, for example, could run in Frankfurt, Germany, because that data center is closer to the customer. But for existing cloud users and their data, it's not likely that much will change.

One reason for that is that providers build many of these new sites for new applications, such as those for the internet of things or mobile devices. Older workloads will largely stay on their existing systems, since moving them to improve response time or maximize processing is often too tedious a chore for providers to undertake.

What's more, an SLA typically spells out items such as where and how information will be processed. A user may not be able to pinpoint the exact virtual machine on which a workload runs, but the SLA usually outlines the geographic region where their data is processed.
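
The practical counterpart on the user's side is to pin resources to the contracted region explicitly rather than rely on defaults. The fragment below is a minimal, hypothetical sketch using boto3: the bucket name is a placeholder, and eu-central-1 (Frankfurt) stands in for whatever region the agreement actually names.

# Sketch: create an S3 bucket pinned to a specific region (Frankfurt here),
# then confirm where it landed. Bucket name is a placeholder; assumes boto3
# and valid AWS credentials.
import boto3

s3 = boto3.client("s3", region_name="eu-central-1")
s3.create_bucket(
    Bucket="example-sla-pinned-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-central-1"},
)
print(s3.get_bucket_location(Bucket="example-sla-pinned-bucket")["LocationConstraint"])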

The visibility and performance question

Full system visibility is a capability businesses typically lose when they move to the cloud. The provider has tools to manage its network, but these don't always offer the same level of visibility a business has when it runs its own data centers. And the more geographic regions a provider adds, the more potential hops a transaction may take as it moves from the user to the central processing site.
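
One rough, provider-agnostic way to gauge that distance from the outside is to time connections to the regional endpoints themselves. The sketch below is purely illustrative: it measures TCP connect times to the public EC2 endpoints in three European regions, assumes outbound network access, and simply reports which region answers fastest from wherever it runs.

# Sketch: compare TCP connect times to regional AWS EC2 endpoints.
# Region list is illustrative; requires outbound network access on port 443.
import socket
import time

REGIONS = ["eu-west-1", "eu-west-2", "eu-central-1"]  # Ireland, London, Frankfurt

for region in REGIONS:
    host = "ec2." + region + ".amazonaws.com"
    start = time.monotonic()
    try:
        with socket.create_connection((host, 443), timeout=5):
            print("%s: %.1f ms to connect" % (region, (time.monotonic() - start) * 1000))
    except OSError as exc:
        print("%s: unreachable (%s)" % (region, exc))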

The challenge businesses face when managing these connections is linking on-premises management tools with those in the cloud. Providers, however, continue to take steps to improve visibility. In October 2016, for example, VMware and Amazon Web Services (AWS) partnered to roll out VMware Cloud on AWS, a platform that runs VMware vSphere, Virtual SAN and NSX on bare-metal AWS infrastructure.
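
Short of a full hybrid platform, a common stopgap is to pull provider-side metrics into whatever dashboard the business already runs on premises. The sketch below is illustrative only: it fetches an hour of average CPU utilization for one EC2 instance from Amazon CloudWatch, with a placeholder instance ID; the same pattern applies to the monitoring APIs of the other providers.

# Sketch: fetch an hour of average CPU utilization for one EC2 instance from
# CloudWatch, e.g. to feed an on-premises monitoring dashboard.
# Instance ID is a placeholder; assumes boto3 and valid AWS credentials.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-central-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,  # five-minute data points
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))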

Cloud vendors also try to ensure that enterprises do not experience delays as the vendors expand and build new data centers. Some of the top providers, for example, have started to create instances of their offerings on premises at major colocation facilities, such as those from Equinix, Digital Realty and Telecity, according to Gartner's Cappuccio. In this case, users avoid the congestion that can arise on the public internet and instead ride a private connection to these colocation data centers.

Gradually, cloud providers will continue to add more management and redundancy to ensure businesses feel comfortable moving workloads from on premises to the cloud.

Next Steps

Negotiate your cloud SLA

Discover AWS regions and availability zones

What do you need for cloud storage availability?

This was last published in March 2017
