Containers and microservices continue to change the way enterprises design, build and deploy applications. But, to make that shift successfully, enterprises also require a new set of tools.
Red Hat has entered the container arena with OpenShift, an open source platform that helps developers and operations staff accelerate application development and deployment for on-premises, private cloud and hybrid cloud environments.
Red Hat OpenShift Container Platform is based on Docker, Kubernetes and Red Hat Enterprise Linux (RHEL).
Developers can use OpenShift to facilitate a project's entire pipeline. It supports workflows and self-service provisioning of IT resources, so developers can provision containers, pull code from a version control system, execute and test a new build in Docker-ready containers, and then deploy those new containers or entire applications.
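In practice, that self-service workflow is typically driven through OpenShift's `oc` command-line client. As a rough, hypothetical sketch -- the cluster URL, project and repository names here are illustrative, not from the platform documentation -- the sequence might look like:

```shell
# Hypothetical developer workflow against an OpenShift cluster (all names illustrative).
# Log in and create a self-service project (a Kubernetes namespace with its own quotas and policies).
oc login https://openshift.example.com:8443
oc new-project demo-app

# Pull code from version control and build it into a Docker-formatted container image.
oc new-app https://github.com/example/demo-app.git --name=demo-app

# Follow the build, then expose the deployed service to outside traffic.
oc logs -f bc/demo-app
oc expose svc/demo-app
```

The `oc new-app` step is where OpenShift stitches the pipeline together: it creates the build configuration, image stream, deployment configuration and service in one pass.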
Operations teams can also use OpenShift. Orchestration and automation are core attributes of the platform and assist with container and application builds, deployments, scaling and performance monitoring -- although operations staff still monitor and manage the underlying resources and container instances.
Operations teams can also define policies in Kubernetes to manage container-based applications. Orchestration and automation support extends to clustering and scheduling as well, which enables OpenShift to offer automatic scaling and load balancing to keep microservices-based workloads performing effectively as traffic loads change.
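That automatic scaling can be configured from the same `oc` client. A minimal sketch -- the deployment name, replica counts and CPU threshold below are assumed values, not recommendations -- creates a Kubernetes horizontal pod autoscaler:

```shell
# Hypothetical: keep the demo-app deployment between 2 and 10 replicas,
# scaling on roughly 75% average CPU utilization.
oc autoscale dc/demo-app --min=2 --max=10 --cpu-percent=75
```

Behind the scenes, this is the standard Kubernetes HorizontalPodAutoscaler object; OpenShift's router then load-balances traffic across whatever replica count the autoscaler settles on.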
Other deployment options
As an on-premises option, Red Hat OpenShift Container Platform may not be possible -- or desirable -- for every enterprise. Red Hat offers two other deployment models to choose from: Dedicated and Online.
OpenShift Dedicated provides enterprise users with a private, high-availability OpenShift cluster delivered as SaaS. Users can deploy the cluster in their preferred region of AWS or Google Cloud Platform, but Red Hat operates both options as a service.
The Red Hat OpenShift Online option also offers access to OpenShift as a hosted public cloud service. The primary difference between Dedicated and Online is that Dedicated users make a long-term financial commitment to the platform, while Online users pay hourly, on demand -- a model almost identical to that of other public clouds.
Portability is a crucial hallmark of container technology, and containers built on Red Hat OpenShift Container Platform will run on any system that supports Docker containers. In addition, containers are ideally stateless instances, which do not retain data as part of their normal execution. However, stateful instances remain important components of many applications, so the platform offers persistent storage that supports stateful services when required.
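Attaching that persistent storage amounts to requesting a persistent volume claim for the workload. As an illustrative sketch -- the deployment name, claim name, size and mount path are all assumptions for this example:

```shell
# Hypothetical: back a stateful service with a 1 GiB persistent volume claim,
# mounted at /var/lib/data inside its containers.
oc set volume dc/demo-db --add --name=data \
    --type=persistentVolumeClaim --claim-name=demo-db-data \
    --claim-size=1Gi --mount-path=/var/lib/data
```

The claim survives container restarts and rescheduling, which is what lets an otherwise ephemeral container host a database or similar stateful service.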
Requirements and integration
Red Hat OpenShift Container Platform is built from several core components, including an API/authentication engine, a data store, a scheduler and a management platform. All of this runs on a RHEL OS.
The minimum system requirements for a master host running version 3.9 include:
- a server running RHEL 7.3 or later or RHEL Atomic Host 7.4.5 or later;
- at least four virtual CPUs;
- 16 GB of memory; and
- 40 GB of disk space.
Depending on your installation preferences, there may be additional requirements, and nonmaster nodes carry slightly lighter requirements. Always refer to the current documentation when you install OpenShift on premises.
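Those minimums are easy to sanity-check before an install attempt. The following is a small, unofficial sketch -- not part of Red Hat's installer -- that compares a Linux host against the version 3.9 master figures listed above:

```shell
# Unofficial pre-flight sketch: compare this Linux host against the documented
# OpenShift 3.9 master minimums (4 vCPUs, 16 GB memory, 40 GB disk).
MIN_CPUS=4
MIN_MEM_GB=16
MIN_DISK_GB=40

cpus=$(nproc)
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
disk_gb=$(df --output=avail -B1G / | tail -n 1 | tr -d ' ')

[ "$cpus" -ge "$MIN_CPUS" ]       || echo "WARN: only $cpus vCPUs (need $MIN_CPUS)"
[ "$mem_gb" -ge "$MIN_MEM_GB" ]   || echo "WARN: only ${mem_gb} GB RAM (need ${MIN_MEM_GB} GB)"
[ "$disk_gb" -ge "$MIN_DISK_GB" ] || echo "WARN: only ${disk_gb} GB free disk (need ${MIN_DISK_GB} GB)"
```

This only spot-checks capacity; the official installer performs its own, far more thorough host checks.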
After installation, teams can also take advantage of OpenShift's integrations with CI/CD and operational tools, such as:
- HAProxy for application routing, clustering and high availability;
- Fluentd as an open source data collector;
- Jenkins as an open source automation server;
- Kibana as an open source data visualization tool; and
- Cassandra for metrics gathering.
OpenShift is also compatible with Docker tools, such as Builder and Registry. Potential adopters can review the OpenShift compatibility matrix to determine which tools or frameworks to integrate with the platform.
There are some alternatives to Red Hat OpenShift Container Platform that cover public, private and on-premises environments. Popular options, other than Docker and Kubernetes, include:
- Google Kubernetes Engine: A managed public cloud service for the orchestration of container-based workloads across clusters.
- Azure Container Service: Another container orchestration service, based in the public cloud, that optimizes deployment, management and operations of Kubernetes. Users can also choose an alternative orchestrator, such as DC/OS.
- AWS Fargate: A public cloud service that enables users to run containers as a managed service, especially when integrated with Amazon's Elastic Container Service and Elastic Container Service for Kubernetes.
- Rancher: A container orchestration tool with unified cluster, application and policy management, as well as Kubernetes support.
- Cloud Foundry Container Runtime: A tool that builds on Kubernetes to simplify container integration, deployment and management in highly available Kubernetes clusters.
- Datacenter Operating System (DC/OS): An offering based on the Apache Mesos kernel that can manage machines and deploy containers, services and applications.