Forget price wars -- the real fight between cloud providers in 2016 was a data center arms race.
Over the past year, a steady stream of new public cloud data centers opened across the world, as providers raced to meet customer demand. Cumulatively, the infrastructure expansions represent billions of dollars invested to get closer to customer hubs -- and there's no sign this will slow down.
Market leader Amazon Web Services (AWS) added four new regions this year, bringing its total to 15 regions and 40 availability zones, with each availability zone comprising one or more data centers in separate facilities. Three more regions -- in the U.K., France and China -- and seven availability zones are slated to open in 2017, with more expected to be added to that list as the year goes on.
Microsoft Azure, its next closest competitor, doubled its capacity in Europe in 2016. It added multiple data centers in Canada, Germany and the U.K. to reach a total of 30 regions globally. Microsoft plans to open eight more in 2017, with multiple new regions in France, Korea and the U.S.
Google, which is in the midst of its own push to catch AWS and Azure, added two new regions this year to bring its total to six. In 2017, it plans to open roughly one new region per month, including Sydney; Sao Paulo, Brazil; Frankfurt, Germany; Mumbai, India; Singapore; London; Finland; and northern Virginia.
There are two main drivers of cloud data center expansion: data residency laws and proximity to customers. The former is particularly critical in nations where personally identifiable information must be kept within borders. In those countries, a provider without a local region can be a nonstarter for some customers.
Wipro Ltd., an IT consulting company and AWS partner, had Canadian customers who waited to put workloads on AWS until a region opened in that country earlier this month. The same scenario has played out in the Middle East, where some companies limit their use of cloud services in European regions, particularly for heavily regulated workloads, said Varun Dube, global practice head of AWS for Wipro.
Vendors can't just open one cloud data center if they truly want to compete for business. They must open multiple facilities, because customers will want to back up their data in another location within that same jurisdiction for redundancy and disaster recovery.
"A business running in Germany has requirements to keep data in Germany," said Al Sadowski, research vice president of infrastructure at 451 Research. "If you're Amazon and you want their business, you'd better have multiple availability zones in a country, or your customer isn't going to come to your business."
Despite the perceived arms race, vendors say they make their decisions about expansion independent from the activities of other players in the market.
"Obviously, we watch the competition and global footprint of everyone, but for IBM, we really focus on demand we're seeing," said John Considine, general manager of cloud infrastructure services at IBM. "There's a lot of commonality there ... but rather than saying, 'Look at what they're doing there,' what we do is see demand from our customers and identify if that demand is real."
IBM opened data centers in Dallas; Oslo, Norway; Seoul, South Korea; South Africa; and the U.K., and it expanded capacity in 13 existing facilities.
Proximity matters for cloud customers
Customers may want more information on cloud providers' investments to gauge their commitment to a particular region, but the scope of cloud data center expansion can be opaque. Sometimes, providers build from scratch; other times, they lease space in existing facilities. Customers aren't allowed inside these data centers -- vendors most often cite security concerns -- and apples-to-apples comparisons between facilities are nearly impossible, because vendors don't disclose the number of servers or the actual capacity of their data centers.
Still, customers can cut through that opacity in a way that wasn't always possible with traditional data center models: they can run test environments across providers to get a sense of which is the best fit for their needs.
Multiplay, a gaming service company in Southampton, England, requires very low latency for the multiplayer online games it hosts. That requirement has typically meant bare-metal servers in its own data centers or colocation facilities, but the company is expanding into the public cloud, too.
"It takes a while to get it set up and spun up, and if you've got a game that's incredibly popular, you can't react as quickly as you need to react to capacity demands," said Isaac Douglas, online services sales manager at Multiplay.
Multiplay has relationships with AWS, Azure and Google Cloud Platform, but it uses Google as its preferred cloud provider because it offers the best workload performance, particularly for automation and scalability, Douglas said. It varies by continent, but generally gameplay becomes jittery when an end user is more than 120 miles from a hosting facility, so Multiplay still runs workloads in several AWS regions where Google doesn't yet operate.
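Multiplay's 120-mile rule of thumb can be put in perspective with a back-of-the-envelope propagation estimate. The figures below -- fiber signal speed at roughly two-thirds the speed of light and a route-meandering multiplier -- are illustrative assumptions, not numbers from Multiplay:

```python
# Rough estimate of the physical latency floor between a player and a game server.
# Assumptions (illustrative, not from Multiplay): light in fiber travels at
# about 2/3 c, and real network routes are roughly twice the straight-line distance.

FIBER_SPEED_MPS = 2.0e8      # ~2/3 the speed of light, in meters per second
METERS_PER_MILE = 1609.34
ROUTE_FACTOR = 2.0           # assumed multiplier for indirect fiber paths

def min_round_trip_ms(distance_miles: float) -> float:
    """Theoretical round-trip propagation delay over fiber, in milliseconds."""
    one_way_meters = distance_miles * METERS_PER_MILE * ROUTE_FACTOR
    return 2 * one_way_meters / FIBER_SPEED_MPS * 1000

print(f"{min_round_trip_ms(120):.1f} ms")  # propagation floor at ~120 miles
```

The point is that propagation is only the floor; routing hops, queuing and server processing push observed latency well above it, which is why physical proximity to a region matters so much for fast-paced games.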
That's why Multiplay was particularly pleased when Google opened a new facility in Oregon, which gives the company proximity to the West Coast, home to a big gaming scene and the largest concentration of game developers.
"By Google expanding, it means we get to use our preferred supplier and better serve our customers and better serve the players," Douglas said.
Trevor Jones is a news writer with SearchCloudComputing and SearchAWS. Contact him at [email protected].