- Trevor Jones, Site Editor
In discussions about cloud, networking often gets short shrift compared to compute and storage resources. But that might change, as more options for complex and data-hungry applications enter the market.
Some form of partitioned connection to the public cloud has become the de facto standard for enterprises. In many ways, the most straightforward solutions -- VPNs and direct connections -- aren't all that different from what companies have done for years. But that doesn't mean there aren't challenges, particularly for IT pros with workloads in multiple environments and a globally distributed user base.
Major cloud providers each offer their own flavor of dedicated networking into their global data centers. Services such as Amazon Web Services (AWS) Direct Connect and Microsoft Azure ExpressRoute provide private connections from enterprise data centers to the public cloud or a colocation facility. Google, which has put a concerted effort into being more enterprise-friendly, added a similar service in September called Dedicated Interconnect.
Smaller companies typically lack the locations to go that route, while midsize businesses often turn to VPNs or dedicated lines, said John Engates, CTO at Rackspace, a managed service provider that works with major public cloud providers. Larger organizations tend to already have private networks that link their regional offices and headquarters, so using a service such as AWS Direct Connect is a logical way to tap public cloud resources.
"A lot of these companies already have relationships with Equinix and other colocation providers, so it becomes a pretty seamless process to extend that global network right into the facility," Engates said.
For companies that move data to and from the cloud, these dedicated cables provide security, consistency and better throughput. They're also popular options for new customers conducting large-scale migrations to the public cloud.
Hightail, a file-sharing and collaboration provider based in Campbell, Calif., moved petabytes of data from its private data center to AWS. The company, formerly known as YouSendIt, weighed several options, including AWS Direct Connect and Snowball, the shippable data transfer appliance.
Hightail found the speeds of AWS Direct Connect sufficient and decided to skip Snowball. The company moved its data over the course of three months and completed the entire migration in six months. That approach might not work for organizations in more remote locations, but the connection between Hightail's data center in the Bay Area and AWS was good, said Shiva Paranandi, senior vice president of technology at Hightail.
"We had to do a lot of checks and balances to make sure the data was right … and to verify the security and reliability," Paranandi said. "After we did the initial few terabytes of data, our confidence just kept growing."
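The kind of verification Paranandi describes usually comes down to comparing checksums of each object before and after transfer. A minimal sketch of that check, using SHA-256 (the actual tooling and hash choice in Hightail's migration are not described in the article):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large migration objects
    never have to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_transfer(local_path, remote_digest):
    """Compare the local file's digest against the digest reported
    for the uploaded copy; a mismatch flags the object for re-send."""
    return sha256_of_file(local_path) == remote_digest
```

Running this over an initial few terabytes, as Hightail did, builds confidence that the pipeline preserves data before committing to the full petabyte-scale move.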
Cloud network services mature
The public cloud has come a long way in addressing network issues. When the cloud was in its infancy a decade ago, there were few firewall rules, said Dustin Kirkland, vice president of product development at Canonical, who was an engineer at IBM in 2006, when it could take months to stand up a server.
"The explosion in cloud in the early days was that it was fast, easy and cheap, and I've got access to everything I need," he said. "Things have changed and certainly for the better for security."
One of the most important changes was advanced policy mechanisms that administrators could apply on a per-user basis, Kirkland said.
"Before, it was very difficult for a CIO at a traditional enterprise to put parts of their infrastructure in the cloud, given the security posture and features that it offered -- primarily from an inbound-outbound firewall perspective," he said.
A business can securely connect to the public cloud in other ways, including with something as simple as a Secure Sockets Layer (SSL) tunnel to encapsulate traffic. Multiple endpoints raise the complexity, though, and create the need for key management, certificate management for participating machines and bidirectional authentication.
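The bidirectional (mutual) authentication mentioned above means each endpoint verifies the other's certificate, not just the client verifying the server. A minimal sketch of configuring such an endpoint with Python's standard `ssl` module; the certificate paths are hypothetical placeholders, and a real deployment would load certificates issued by the organization's CA:

```python
import ssl

def make_mutual_tls_context(ca_file=None):
    """Build a TLS context suitable for one end of an encrypted
    tunnel with mutual authentication."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS
    ctx.verify_mode = ssl.CERT_REQUIRED           # verify the peer's cert
    ctx.check_hostname = True
    if ca_file:
        # Trust only the organization's own CA, not the system defaults
        ctx.load_verify_locations(cafile=ca_file)
    # For bidirectional auth, this endpoint also presents its own cert:
    # ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")
    return ctx
```

Each machine participating in the tunnel needs its own key pair and certificate, which is exactly the key- and certificate-management burden the paragraph above describes.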
Connecting the edge
Edge computing represents a divergence from the decades-long shift toward centralized IT infrastructure and presents new challenges for cloud connectivity.
AWS has pushed further than other vendors to address edge issues. Its Greengrass software pushes cloud networking and limited compute capabilities outside AWS data centers so users can crunch data locally before periodically shipping it -- often compressed and filtered -- back to Amazon. That data can then be run through machine learning algorithms inside the public cloud and sent back to the edge device to improve manufacturing performance.
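The "compressed and filtered" step is the core of this pattern: the edge device discards routine readings locally and ships only a compressed batch of interesting ones upstream. A generic sketch of that idea, independent of any particular Greengrass API; the reading format and threshold are illustrative assumptions:

```python
import gzip
import json

def prepare_batch(readings, threshold):
    """Filter sensor readings at the edge, then compress the
    survivors into a payload small enough to ship periodically."""
    # Keep only anomalous readings; routine values never leave the device
    filtered = [r for r in readings if abs(r["value"]) > threshold]
    # Compress the filtered batch before it crosses the network
    payload = json.dumps(filtered).encode("utf-8")
    return gzip.compress(payload)
```

The cloud side decompresses the batch, feeds it into the machine learning pipeline and pushes updated models or parameters back out to the edge.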
But the factory floor isn't the only place where networking limits the use of the public cloud. Mobile phones, drones and internet of things devices often don't stay in the same location, which can translate to inconsistent connectivity speeds. Another example could be a self-driving car, Kirkland said. It's one thing for the vehicle to talk to Google Apps for traffic and weather, where latency isn't an issue, but there are basic automotive operations that the car needs to make locally.
"What it can't do is ask the cloud, 'Can I change lanes now or should I brake?'" he said. "It may or may not come back with an answer in time, and bad things may happen."
One of the biggest networking obstacles is the cost of getting data out of the public cloud. Major providers let customers move their data in for free, but pulling it back out can cost more than 10 cents per gigabyte.
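At per-gigabyte rates, those fees compound quickly at scale. A back-of-envelope sketch; the rate is the illustrative figure from the paragraph above, not any provider's published pricing, and real pricing is tiered by volume and destination:

```python
def egress_cost_usd(gigabytes, rate_per_gb=0.10):
    """Rough egress estimate at a flat illustrative rate;
    actual provider pricing is tiered and varies by region."""
    return gigabytes * rate_per_gb

# Pulling 50 TB back out at $0.10/GB comes to roughly $5,120 --
# a one-time annoyance for a disaster recovery restore, but
# prohibitive if repeated across clouds for production workloads.
```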
Egress fees aren't much of an issue for some of the most common uses of public cloud, such as test and development and disaster recovery. For test and development, much of that data can simply be wiped when the server is taken down after the user moves the workload in-house. In backup or disaster recovery, that one-time cost to pull the data out can be framed as the cost of doing business when faced with an outage.
But as organizations put more production workloads into the public cloud, particularly in hybrid and multicloud scenarios, egress costs can be prohibitive. And since crunching massive data sets has become a popular way to use the cloud, the answer for some customers is to not pull data out of a given cloud.
Multiplay, a gaming service company in Southampton, England, prefers to run most of its workloads on private bare metal, but it uses AWS and Google Cloud Platform in certain locations to get as close to the end user as possible.
Multiplay creates VMs, with attached storage, that effectively act as jump boxes to pull in a user's file data all at once. The company doesn't use direct cloud network services because the files it sends are only about 20 GB. For Multiplay, the more important connection is between the end user and the cloud, not the company and the cloud. All its cloud workloads are session-based, so rather than transfer data from one cloud to another, it simply starts a new session in a different cloud as needed.
"We're never doing egress out of a cloud company unless it's actual game traffic," said Will Lowther, senior business development manager at Multiplay. "Our egress is heavily taking input from running simulations and sending out results 20 times a second for each second it's running. That's the only data we ever want to leave the cloud."
Colocation adds options for cloud network services
It's becoming more common for businesses with workloads in multiple clouds to host the data inside a colocation facility. Those facilities could be in major metro areas, or they could reside near one of the massive data center farms that cloud providers have in rural outposts across the globe.
Because of those tightly coupled connections, customers can stage the data in a colocation facility and effectively bring any of the major public clouds to that facility via a SaaS application to run analytics or other popular cloud-based services.
CDM Smith, an engineering and consulting firm in Boston with 160 global offices, ran data on a patchwork of outdated networking tools before it moved to Equinix three years ago. Now the company has its data peered at multiple locations worldwide and has the flexibility to choose the right cloud for the right workload.
CDM Smith stores its data in AWS, Azure and Zadara Storage, but it built in enough network support to weather most any storm. What's more, it projects saving more than $1 million this year by closing two data centers and eliminating third-party tools that similar services on AWS and Azure made redundant.
"We could theoretically have multiple network outages and still have failover and redundancy," said Michael G. Woods, director of business technology at CDM Smith. "We have no single point of failure to get them [to go down]. It would take multiple catastrophic events in multiple locations."
Other vendors try to use networking as an edge in the public cloud, too. VMware NSX and Cisco Application Centric Infrastructure use software-defined networking to integrate with the major public cloud providers. Google, which likes to remind potential customers that its cloud runs on the same network as Gmail, YouTube and its other billion-user services, recently tiered its cloud network services to give customers a lower-cost option. It claims that tier is just as fast as what is found on the other hyperscale clouds.