While many applications are deployed in the cloud as virtual machines, the use of containers continues to rise.
Containers break applications down into small, interrelated pieces -- an architecture known as microservices -- which improves scalability, security, cloud resource efficiency and manageability.
Organizations using OpenStack, an open source cloud computing platform, are now faced with the challenge of running containers and OpenStack together. But there are certain OpenStack modules that can help.
OpenStack is not a single cloud platform, but a collection of interrelated software components that pool, provision and manage resources across the data center. Users interact with OpenStack components through a web dashboard, command-line tools or REST APIs. As of the OpenStack Liberty release in October 2015, at least 16 components contribute to the data center cloud environment -- and at least four are involved in the deployment and management of containers.
OpenStack Magnum, for example, is a recently added API service that supports container orchestration engines including Docker Swarm, Google's Kubernetes and Apache Mesos. Magnum makes containers a self-service option for public and private cloud providers. It complements existing OpenStack modules, such as Nova virtual machine (VM) instances, Cinder volumes and Trove databases. Magnum can scale containers to a large number of instances, allow applications to respawn after a disruption and run far more container instances on the same hardware than would be possible with traditional VMs.
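As a rough sketch of that self-service workflow, the Liberty-era Magnum command-line client lets an operator define a cluster template (a "baymodel") and then launch a container cluster (a "bay") from it. The image, keypair and network names below are hypothetical placeholders for illustration:

```shell
# Define a baymodel describing how cluster nodes should be built
# (image, keypair and network names are hypothetical examples).
magnum baymodel-create --name k8s-model \
    --image-id fedora-21-atomic \
    --keypair-id testkey \
    --external-network-id public \
    --coe kubernetes

# Launch a two-node Kubernetes bay from that baymodel;
# Magnum drives Heat to build the underlying Nova instances.
magnum bay-create --name k8s-bay \
    --baymodel k8s-model \
    --node-count 2
```

Because these commands provision real cloud resources, they only make sense against a running OpenStack deployment with Magnum installed.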
Three other OpenStack modules -- Heat, Nova and Ironic -- work closely with Magnum.
Magnum uses the Heat module to orchestrate an operating system image that contains a container orchestration engine such as Docker Swarm or Kubernetes. It then runs that image either in a VM created by Nova, or on a bare-metal system provisioned through Ironic.
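The Heat step can be pictured as a small Heat Orchestration Template (HOT). This is a hedged sketch -- the image, flavor, key and network names are hypothetical -- showing a single Nova server booted from a container-ready image:

```yaml
heat_template_version: 2015-10-15

resources:
  container_host:
    type: OS::Nova::Server
    properties:
      # Hypothetical names -- substitute values from your cloud.
      image: fedora-21-atomic    # an image bundling a container engine
      flavor: m1.small
      key_name: testkey
      networks:
        - network: private
```

Magnum generates far more elaborate templates than this, but the principle is the same: Heat turns a declarative description into running Nova instances.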
OpenStack containers on bare metal vs. VMs
Organizations typically deploy containers as virtual entities running on top of a common operating system kernel. However, a common OS negates the isolation and security benefits that come with VMs. To meet isolation and security requirements, organizations can deploy containers within a VM. A third deployment option is bare-metal deployment, where container applications can be deployed without VMs. OpenStack supports containers on VMs or on bare metal.
For example, containers started with the Magnum OpenStack module run on top of Nova VM instances that are created through Heat orchestration. However, some applications require performance or portability that VM-based deployments can't always provide; developers would rather deploy an image straight to a server to start the desired application. In that case, the Ironic service can provision instances directly on system hardware using a single lightweight image, with no hypervisor layer in between.
Ironic uses hardware-level server features such as the pre-boot execution environment (PXE) and the intelligent platform management interface (IPMI) to provision and control system hardware, and additional features and functionality can be added to Ironic through vendor-specific plug-ins. For example, Hewlett Packard Enterprise provides a OneView management platform driver for Ironic. Application developers can switch between VM and bare-metal environments to develop and refine an application for proper deployment in the cloud.
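As an illustrative sketch of how a physical server enters the Ironic inventory, the Liberty-era Ironic client enrolls a node with the PXE/IPMI driver; the IPMI address and credentials below are hypothetical placeholders:

```shell
# Enroll a bare-metal node using the PXE + IPMI driver
# (the address and credentials below are hypothetical).
ironic node-create -d pxe_ipmitool \
    -i ipmi_address=192.0.2.10 \
    -i ipmi_username=admin \
    -i ipmi_password=secret
```

Once enrolled and inspected, the node becomes schedulable capacity that Nova can hand to Magnum-created clusters in place of a VM.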