People often talk about how containers can help organizations overcome application portability issues in cloud computing and on other platforms. Along with slimming down resource requirements and enhancing scalability, the ability to move containerized applications from one cloud platform to another is high on the list of container advantages. But how realistic is this portability notion? What should decision-makers know before they commit?
Let's get the disappointment out of the way first. While there are many advantages to using containers, organizations are finding that portability is not one of them -- at least, not in the practical sense.
For true portability, organizations still need to look to virtualization. Comparing virtualization to containers, however, is a bit like comparing a tractor-trailer to a school bus. Both move things from place to place along a road, but each has specific uses through which it can drive the most value. Imagine putting youngsters in the back of a truck or filling a school bus with cargo.
The truth about containerized applications
Portability means that an application can be moved from one host environment to another. While we typically think of this as a cloud-to-cloud shift, we can also move containers from on-premises platforms, such as Windows 7, SPARC or Linux, to cloud-based platforms running the same or different operating systems on a public cloud.
The work needed to port an application from one platform to another depends on the exact circumstances. Many IT teams envision a simple lift-and-shift scenario, where they move a binary from one environment to another, or move source code that compiles cleanly without modification. Those who have done this in real life know that such best-case, lift-and-shift porting experiences are rare.
Porting applications, whether or not they're in containers, requires a great deal of planning to deal with the incompatibilities between environments. Adopting containers provides no assurance that your containerized applications will be portable from cloud to cloud, desktop to cloud or between whatever platforms you're attempting to move across. The reality is that containers are just a fancy way of packaging applications along with all of their OS dependencies.
So, why use containers? Why not?
Containers do provide some portability advantages. You can build an application once, containerize it and run it on any host platform that supports containers and runs the same operating system.
Boiled down to its essence, a container hosts an application on Linux. You can then move the container just by relocating its image, as long as the destination host supports the same OS. Thus, porting an application from one Linux flavor to another is a pretty simple process -- though it was also a simple process before containers came along.
The portability issues arise with container platforms such as Docker, which don't support porting across different types of operating systems. Indeed, you can't take containerized applications meant for Linux and run them on Windows, and it won't work the other way around, either. If you're looking to move a containerized application to macOS, good luck -- it's not supported. Nor can you move it to Android.
Never mind trying to take a container image created for an x86 system and run it on an ARM server. That still isn't possible, even though container platforms such as Docker run on ARM. Compare virtual machines to containers, and VMs would support most of these porting use cases.
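The constraint described above can be captured in a minimal sketch. This is not Docker's actual API -- just an illustrative function (a hypothetical name for this example) encoding the rule that a container image only runs where both the operating system and the CPU architecture match the host:

```python
# Illustrative sketch, not a real Docker call: a container image built for one
# OS/architecture pair runs only on a host with that same pair.
def image_runs_on_host(image_os: str, image_arch: str,
                       host_os: str, host_arch: str) -> bool:
    """Return True when the image's platform matches the host's platform."""
    return image_os == host_os and image_arch == host_arch

# A Linux/amd64 image runs on a Linux/amd64 host...
print(image_runs_on_host("linux", "amd64", "linux", "amd64"))    # True
# ...but not on a Windows host, and not on an ARM server.
print(image_runs_on_host("linux", "amd64", "windows", "amd64"))  # False
print(image_runs_on_host("linux", "amd64", "linux", "arm64"))    # False
```

A hypervisor, by contrast, can present whatever virtual hardware the guest OS expects, which is why VMs cover more of these porting scenarios.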
The good news
So what does all this mean for your organization's cloud computing strategy?
Containers do give developers the ability to add structure to the deployment of applications. Containers can be run under an orchestration engine, such as Kubernetes, which provides both scalability and failover -- when handled correctly.
This, however, comes at a cost: scaling doesn't happen on its own. For example, containerized applications need to be designed and written as stateless applications.
You can maintain state within a container, but the better approach is to maintain state outside of it. This cleanly separates behavior from data, which is desirable because all container instances can then participate in the operations of a single application, as long as they can read from and write to a common state store.
The advantage of this approach is that you can launch as many containers as you need, and each one can jump right in and pick up a workload, using the shared state as its starting point. This lets the containers scale by taking advantage of clusters. Once a container has carried out a specific behavior, it writes its state and returns to the resource pool, and other application containers can pick up from there, working with the shared state to carry out whatever they are programmed to do.
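The stateless pattern described above can be sketched in a few lines. In this sketch, an in-memory dict stands in for an external state store such as Redis or a database (an assumption for illustration; real container instances would use a network-accessible store), and each call to the worker function simulates one interchangeable container instance:

```python
# Shared, external state: a dict here, standing in for Redis, a database or
# another store that every container instance can reach over the network.
shared_state = {"processed": 0, "queue": ["a", "b", "c", "d"]}

def handle_one_item(state: dict) -> None:
    """Simulate one stateless container instance: pick up the next work item
    from the shared state, process it, then write the result back."""
    if not state["queue"]:
        return  # nothing to do; return to the resource pool
    item = state["queue"].pop(0)
    # ... the actual work on `item` would happen here ...
    state["processed"] += 1

# Any number of interchangeable instances can jump right in: here, two
# simulated instances each pick up a workload from the same queue.
for _ in range(2):
    handle_one_item(shared_state)
print(shared_state["processed"])  # 2
```

Because the workers keep nothing locally, launching more of them is purely additive: each new instance reads the common state and joins in.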
Another issue to keep in mind is that containerized applications need to survive restarts and outages. This means that state not only tracks the progression of application operations, but also lets the application resume where it left off if the cloud provider, network, physical server or another resource goes down.
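The resume-after-restart behavior can be sketched with a simple checkpoint. In this sketch, a JSON file stands in for durable storage (an assumption for illustration; in practice the checkpoint would live in a database or object store that outlives any one host), so a freshly started instance picks up at the step where the previous one stopped:

```python
import json
import os
import tempfile

# A unique directory per run, so the example always starts from a clean slate.
# The file stands in for durable storage outside the container.
CHECKPOINT = os.path.join(tempfile.mkdtemp(), "checkpoint.json")

def load_checkpoint() -> int:
    """Return the number of steps already completed, or 0 on a first run."""
    try:
        with open(CHECKPOINT) as f:
            return json.load(f)["last_done"]
    except FileNotFoundError:
        return 0

def run_steps(total: int) -> int:
    """Process steps up to `total`, checkpointing after each one."""
    for step in range(load_checkpoint(), total):
        # ... the work for step `step` would happen here ...
        with open(CHECKPOINT, "w") as f:
            json.dump({"last_done": step + 1}, f)
    return load_checkpoint()

print(run_steps(3))  # first instance completes steps 0-2 -> 3
# Simulate an outage and restart: a new instance resumes at step 3, not step 0.
print(run_steps(5))  # -> 5
```

The second call does not redo finished work; it reads the checkpoint and continues, which is exactly the property an orchestrator relies on when it replaces a failed instance.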
What this all means is that containers are helpful in some cases, but not all. Clearly, the notion that containers free us from having to deal with portability and compatibility challenges is a bit of a stretch. Put to use properly, containers can provide some portability advantages between specific sources and targets.
The challenge is to get beyond the hype and the misinformation. Some shiny object always comes along and promises a little more than the technology can actually deliver. We should be used to that by now.