
Q&A: AWS Docker integration marks new era for Linux containers

In this Q&A, Docker CEO Ben Golub discusses integration with AWS and future development of the hugely popular container technology.

Docker, Inc. CEO Ben Golub has overseen one of the most meteoric rises of a technology since the emergence of Ruby on Rails in 2005.

Over the past 19 months, his company has popularized a new approach to container technology that promises cloud-scale application portability. Amazon Web Services (AWS) was the third of the "big three" cloud vendors to deepen its support for Docker, when it launched an Elastic Compute Cloud (EC2) Container Service at its re:Invent conference this month.

We caught up with Golub following the conference to ask him about AWS Docker integration, how the EC2 Container Service differs from competitors' offerings, how Docker differs from previous generations of container technology, and how it will be developed going forward.

AWS announced its EC2 Container Service at re:Invent. What's different about this service vs. AWS's previous integration with Elastic Beanstalk and Google’s Kubernetes?

Ben Golub: With AWS, what you're seeing is significantly greater use of native Docker interfaces and greater integration with Docker Hub, our hosted service that provides registries and access to over 50,000 Dockerized applications, languages and frameworks. That allows people to use things like private registries and workflow functionality. There is also significant new integration into the AWS infrastructure, so [the difference is] the ability to use Availability Zones and security features, and the whole range of instances.

Kubernetes is a project sponsored by Google, and then there's the Google Container Engine, which is the Google Compute Engine equivalent of the AWS EC2 Container Service. We were very happy that the AWS integration used native Docker interfaces and integrated with Docker Hub.
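For a concrete sense of what that integration looks like in practice, here is a minimal sketch of launching a Docker Hub image on the EC2 Container Service from the AWS CLI. It is only illustrative: the cluster name, task family and resource sizes are hypothetical, and the exact fields may differ from what the preview service expects.

# Create a cluster, register a task definition that points at an image from
# Docker Hub (hypothetical family name and sizing), then run one copy of it.
aws ecs create-cluster --cluster-name demo-cluster

aws ecs register-task-definition --family web-demo \
    --container-definitions '[{"name": "web", "image": "nginx", "cpu": 256,
        "memory": 128, "portMappings": [{"containerPort": 80, "hostPort": 80}],
        "essential": true}]'

aws ecs run-task --cluster demo-cluster --task-definition web-demo --count 1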

Containers are by no means a new technology. Why is Docker getting so much attention right now? What has changed?


Golub: We tend to use the shipping container as an analogy a lot. Prior to Docker, what you had [were] sealed boxes. There was great isolation technology that was developed for Linux, Solaris and BSD, but it wasn't usable by developers, it wasn't portable, and there wasn't a standard interface; as a result, there also wasn't an ecosystem around it.

Prior to Docker, containers were a low-level technology used in places like Google, but they required specialized teams and specialized infrastructure, and they were largely a tool for ops rather than developers.

We provide clean interfaces and make it very easy to integrate Docker with existing tools and source code, so that building the container is no more difficult than writing your code. We've provided the ability to move Docker containers between any two servers, which is compatibility on the operations side. But also, the way Docker has been structured, we define all of the layers and dependencies of an application, so that as a developer you can make sure that what's working on your laptop will work in staging and production and as it scales across clusters.

It also means there's not a lot of rework, so if you're making a minor change to your application, you’re able to leverage all the work you’ve done before or the work others have done before.
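As a rough illustration of that layering and reuse (a minimal sketch; the base image, application files and image name are hypothetical), each Dockerfile instruction becomes a cached layer, so a small code change only rebuilds the layers that come after it:

# Write a Dockerfile whose instructions each become a cached layer.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python python-pip
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY . /app
CMD ["python", "/app/server.py"]
EOF

docker build -t myorg/web .

# After editing only the application code, a rebuild reuses the cached OS and
# dependency layers; only the final layers are rebuilt before pushing.
docker build -t myorg/web .
docker push myorg/web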

In the past, containers have not been considered good candidates for highly available, critical applications. How can users ensure availability if the underlying OS dies? What tools are out there for this?

Golub: A lot of the issues around availability and security have gotten much, much better over the last couple of years. Docker containers are so cheap and fast to create and destroy that a lot of the pain people felt in the past, and the way they thought about achieving high availability and state, has been changing.

It's incredibly easy to run multiple versions of the same application on different hosts. The way that we're recommending people build applications is that they put each process into its own Docker container and provide ways for them to interface so that things like storage can be persistent even as you move, create and destroy applications. People are able to use containers within VMs, but they're also able to use containers in entirely new ways that mitigate a lot of the concerns we used to have about availability back when every application was tightly bound to infrastructure.
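A minimal sketch of that pattern on a single host (the image, container and path names are hypothetical): each process gets its own container, and the data sits on a host-mounted volume so it persists even as containers are destroyed and recreated.

# One process per container: a database whose data lives on the host,
# and a stateless web tier linked to it.
docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres:9.3
docker run -d --name web --link db:db -p 80:5000 myorg/web

# The web container can be destroyed and recreated freely; no state is lost,
# because the database's files live on the host-mounted volume.
docker rm -f web
docker run -d --name web --link db:db -p 80:5000 myorg/web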

A lot of the existing thinking around things like HA was created when applications lived a long time, were monolithic and ran on a single server, so you'd worry about the server dying … it's sort of the difference between NORAD and the Internet -- having one thing that's very powerful but catastrophic if it fails, versus being redundant, where if one element breaks, you don't notice because you can just reroute.

In the Docker view of distributed applications, each component of the application is Dockerized and can be run across multiple servers, so you worry a lot less about any individual server going down, any individual disk failing, and so on. Certainly, if you have a bunch of containers running on an OS and that OS fails, then the containers won't be able to run until you restart the OS, but that's the same problem that exists with VMs. In fact, it's even worse with VMs, because you have a bunch of VMs on a host, each VM has its own guest operating system that can fail, and it's running on top of a hypervisor that can fail, on top of a host OS that can fail.

There are also some concerns in the market about Docker's security. How will Docker improve its security stance in the future?

Golub: Docker has been significantly hardened and is able to leverage things like SELinux and AppArmor to provide greater isolation. I also believe, fundamentally, that in addition to the work being done to make Docker itself more secure, you can run Docker in a layered way, for example running Docker inside a VM. And with the things we are rolling out around container signing, you can actually get to a much more secure state, because every application is composed of small elements, so what they are and where they came from can be assured and managed via policy, and you also reduce the attack surface.
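As a sketch of that layered approach (the image name is hypothetical, and some of these docker run flags appeared in Docker releases around or after this interview), a container running inside a VM can additionally be given an SELinux label, have unneeded Linux capabilities dropped, and run with a read-only root filesystem to shrink its attack surface:

# Inside a VM or hardened host, constrain the container itself: apply an
# SELinux label, drop capabilities the process doesn't need, and keep the
# root filesystem read-only (writable paths are mounted explicitly).
docker run -d --name web \
    --security-opt label:type:svirt_lxc_net_t \
    --cap-drop ALL --cap-add NET_BIND_SERVICE \
    --read-only -v /srv/web-tmp:/tmp \
    myorg/web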

Docker is still a young company. How should we expect the technology and the company to evolve over the next couple of years?

Golub: We've tripled the size of the company since the beginning of the year and have added a lot of great executive talent. Our focus now is on the Docker project, [and] making sure that it really does the multi-container model of orchestration very well. We are also launching services around management and orchestration of Docker containers.

How does Docker the company sustain its position in the market with these big players offering services based on the open-source project?

Golub: What we’re hearing from a lot of enterprises is that what they want from Docker is a set of commercial software that helps them manage both the software development lifecycle and the ability to run containers that are spread across on-premises and multiple different [public] clouds.

Beth Pariseau is senior news writer for SearchAWS. Write to her at bpariseau@techtarget.com or follow @PariseauTT on Twitter.


Join the conversation

1 comment


Of course, as with all the Unix things, Docker had to come to AWS. The biggest value I see for this is in packaging virtual environments, build/deploy pipelines, and in enabling blue/green deployments. My only hope is more writing, explanations and training on how to do the stuff -- right now it is explained at the conceptual level, but most of the hard-core "how to do it" seems to be a black art.

I blame the death of the paper book, and the fact that by the time a book is published, it is obsolete. :-)
