Containers in cloud computing simplify and accelerate application deployment, but the ease with which users spin them up can result in overuse. When this happens in the public cloud, container sprawl can drive up costs alarmingly.
Fortunately, container sprawl is manageable, but enterprises need to take control early and work to keep that control. And above all, remember that not all container sprawl management practices address cloud costs.
The container sprawl challenge
VMs were the first popular virtualization strategy, but it was clear that companies could take virtualization too far, complicating both host management and application deployment.
Containers in cloud computing, and in the data center, offer a way to create virtual hosts that share an OS and some middleware on a physical server. This enables organizations to deploy more containers per server than they could with VMs. This also means, however, that the number of hosts in a data center can multiply even more and, because container systems are easier to deploy, organizations don't encounter management complexity as quickly as they do with VMs.
In the public cloud, container sprawl management is a challenge, but cost can be a bigger one. If containers in cloud computing proliferate, provider charges can increase drastically. And even worse, most recommended steps to overcome container sprawl are intended to reduce management complexity, with little impact on cloud cost.
If you want to control public cloud charges for containerized applications, reduce the number of container hosts you deploy. Evaluate these three options to accomplish that -- and to save money.
Option one: Combine application components
Many container users overcomponentize: they break applications into loadable images that are smaller than they need to be. Don't separate components unless you plan to reuse them independently. Ideally, combined components should be adjacent in workflows, because that shortens data paths and improves performance. Fewer application components also mean less complicated operations and easier, cheaper management.
Review the components of any containerized applications in your data center that are targeted for public cloud deployment. To reduce hosting charges, ensure you have the minimal useful number of containers before you shift to the cloud.
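To see why the container count matters to the bill, consider a rough sketch. The $0.04 per container-hour rate below is a hypothetical figure for illustration, not any provider's actual price:

```python
# Hypothetical illustration: fewer resident containers means a smaller bill.
# The $0.04 per container-hour rate is assumed, not a real provider price.
HOURS_PER_MONTH = 730
RATE_PER_CONTAINER_HOUR = 0.04

def monthly_cost(num_containers: int) -> float:
    """Monthly charge when each container is billed for every hour it runs."""
    return num_containers * RATE_PER_CONTAINER_HOUR * HOURS_PER_MONTH

before = monthly_cost(12)  # application split into 12 fine-grained containers
after = monthly_cost(5)    # the same application after merging adjacent components
print(f"before: ${before:.2f}  after: ${after:.2f}  saved: ${before - after:.2f}")
```

Whatever the real rate is, the arithmetic holds: hosting charges scale roughly linearly with the number of containers you keep resident.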
Option two: Combine VMs and containers in cloud computing
The second option to reduce the costs of sprawl is to combine VMs and containers in the public cloud. To do this, host your container system, such as Docker, inside an infrastructure-as-a-service platform. If you use many public cloud containers via a container service, you will likely be charged per container. But if you host a VM in the cloud and then create your own container hosting image in it, you could end up with a lower overall charge per container. However, this isn't a guarantee, and there are still issues to address with this model.
For example, inserting a VM layer between the container OS and bare metal will hurt performance. Users report that, at best, you'll lose about 25% -- and as much as 40% -- of machine performance versus running containers in cloud computing directly. You'll need significant cost benefits to justify this approach, so choose your applications carefully. If containers host application components that consume few resources -- little I/O, CPU or memory -- but must stay resident most of the time, this VMs-and-containers approach can work.
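A break-even sketch makes the tradeoff concrete. Every rate and capacity figure below is an assumption for illustration; substitute your provider's actual prices:

```python
import math

# Break-even sketch for the VMs-plus-containers option.
# All rates and capacities are assumed figures, not real provider prices.
HOURS_PER_MONTH = 730
CONTAINER_SERVICE_RATE = 0.04  # assumed $/container-hour on a container service
VM_RATE = 0.50                 # assumed $/hour for one IaaS VM
VM_CAPACITY = 30               # containers one VM could host with no overhead
PERF_LOSS = 0.25               # the best-case 25% performance loss cited above

def service_cost(containers: int) -> float:
    """Monthly cost if the provider bills per container."""
    return containers * CONTAINER_SERVICE_RATE * HOURS_PER_MONTH

def vm_cost(containers: int) -> float:
    """Monthly cost if you rent VMs and run your own container hosting in them."""
    usable = VM_CAPACITY * (1 - PERF_LOSS)  # overhead shrinks usable capacity
    vms_needed = math.ceil(containers / usable)
    return vms_needed * VM_RATE * HOURS_PER_MONTH

n = 40
print(f"{n} containers: service ${service_cost(n):.2f} vs VMs ${vm_cost(n):.2f}")
```

Under these assumed numbers the VM route wins, but a pricier VM or a 40% performance loss can flip the result, which is why the savings aren't guaranteed.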
Option three: Go serverless
The third option is to replace some containerized components with serverless components. This addresses the problem of sprawl directly, because it lets users pay for the processes they actually use, rather than the hosting points they consume. The problem is that organizations often need to redesign applications or components to run in a serverless model.
With serverless computing, applications are divided into a series of simple components that are loaded and run when and where organizations need them. Like VM hosting of a container system, serverless computing works best when you have many containerized components that run infrequently; in fact, it is made for that kind of application. You can have thousands of application pieces on call, and if the call never comes, you pay nothing.
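That billing difference can be sketched as follows. The rates are made up for illustration, loosely modeled on the per-request plus per-GB-second structure serverless platforms commonly use:

```python
# Sketch: why rarely invoked components favor serverless billing.
# All rates below are assumptions, not any provider's actual prices.
HOURS_PER_MONTH = 730

def resident_container_cost(rate_per_hour: float = 0.04) -> float:
    """A resident container is billed whether or not it handles any requests."""
    return rate_per_hour * HOURS_PER_MONTH

def serverless_cost(invocations: int, seconds_per_call: float = 0.2,
                    gb: float = 0.128, per_million: float = 0.20,
                    per_gb_second: float = 0.0000167) -> float:
    """Pay per call and per GB-second of execution; zero calls cost zero."""
    return (invocations / 1_000_000 * per_million
            + invocations * seconds_per_call * gb * per_gb_second)

idle = serverless_cost(0)          # the call never comes: you pay nothing
light = serverless_cost(100_000)   # light monthly traffic
always_on = resident_container_cost()
print(f"idle: ${idle:.2f}  light: ${light:.2f}  container: ${always_on:.2f}")
```

For an infrequently used component, the serverless charge stays near zero while the resident container accrues charges around the clock.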
Before you adopt serverless computing, review the available frameworks from major cloud providers, such as Amazon Web Services, Google Cloud Platform and Microsoft Azure. Serverless computing is more than just a different kind of programming; it's a whole new application model, and you'll need to grasp that full context to take advantage of it.