What is container orchestration?

As containers enter production at scale, they need to be managed. Smaller deployments can get by with a combination of customized tools or shell scripts, but the manual intervention required is high. Containerization platforms have therefore introduced “orchestration tools” that, as the industry puts it, ‘orchestrate containers’. As organizations deploy and manage thousands of containers, container orchestration tools help deploy, manage, and network them.

Through container orchestration, the lifecycle of containers in the operating environment is managed.

How is container orchestration used?

  • Service Management: Provision and deploy containers, manage their availability and failover, manage communication between the processes in a container and external environments, and monitor containers and hosts for operational readiness – for example, authentication and security
  • Scheduling: Enable and allocate the resources an application's containers require at scheduled times, and schedule actions such as deploying, restarting, upgrading, or stopping containers as operational needs dictate
  • Resource Management: Manage the use of hardware resources – memory, processor capacity, storage, communication ports, and networks – across containers and for individual containers. Balance load by moving containers to less loaded hosts, and restart and relocate affected containers to stable hosts when a host machine fails
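In Kubernetes, for example, scheduling and resource management are typically expressed declaratively. The sketch below is a hypothetical Pod spec (the names `web` and the `nginx:1.25` image are illustrative) showing how per-container CPU and memory requests and limits guide the scheduler's placement decisions:

```yaml
# Hypothetical Pod spec illustrating resource requests and limits.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      resources:
        requests:          # minimum the scheduler must find on a host
          cpu: "250m"
          memory: "128Mi"
        limits:            # hard caps enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```

The scheduler places the Pod only on a node with at least the requested CPU and memory free, while the limits cap what the container may consume once running.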

Popular Container Orchestration tools

Kubernetes, which was created by Google and is now maintained by the Cloud Native Computing Foundation, has become the default orchestration system used across platforms. Kubernetes is a platform for automating the deployment, scaling, and operation of containers across compute environments and hosts. Many platforms have built additional capabilities with Kubernetes as the base. Some examples of orchestration tools and their variations follow:

  • Docker Swarm: Docker's native clustering and orchestration tool, which turns a group of Docker hosts into a single virtual host and schedules containers across them
  • D2IQ Mesosphere Marathon: A container orchestration platform that runs on Apache Mesos and the D2IQ DC/OS distribution, designed for long-running applications and services
  • Amazon ECS: Amazon Elastic Container Service, a fully managed AWS service for deploying, managing, and scaling containers on AWS infrastructure
  • Azure Service Fabric: Azure Service Fabric is a platform to package, deploy, and manage microservices and containers.
  • Azure Kubernetes Service (AKS): Facilitates the deployment, management, and operations of Kubernetes. AKS offers serverless Kubernetes with Azure Active Directory security and governance options at enterprise-scale.
  • Google Kubernetes Engine (GKE): Formerly Google Container Engine, GKE is built on Kubernetes and facilitates running and managing containers on Google Cloud.
  • Red Hat Advanced Cluster Management for Kubernetes: A management solution created to manage hybrid cloud-native applications running in container environments. Provides visibility, policy governance, and control for containerized environments.
  • Red Hat Quay: A container and application registry providing secure storage, distribution, and deployment of containers on any infrastructure. Red Hat Quay.io is a hosted version of Red Hat Quay.
  • CoreOS Tectonic: An enterprise Kubernetes platform that lets you deploy containers on hosts in a cluster and distribute services across it; following Red Hat's acquisition of CoreOS, its capabilities were integrated into Red Hat's portfolio.
  • Cloud Foundry's BOSH: A cloud-agnostic open source tool for release engineering, deployment, and lifecycle management of complex distributed systems.
  • Cloud Foundry's KubeCF: An application runtime for Kubernetes.

How does container orchestration work?

Container orchestration tools, such as Kubernetes, are supplied with information about where container images are located, where logs need to be stored, how to establish connectivity/networking, processor/CPU limits, metadata, user-defined labels, and memory availability. This is done through a configuration file, written in either YAML or JSON.
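As a sketch, a Kubernetes Deployment manifest carries exactly this kind of information; the image name, registry, port, and labels below are illustrative, not prescriptive:

```yaml
# Illustrative Deployment manifest; all names are example values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  labels:
    app: api                                    # user-defined label
spec:
  replicas: 3                                   # desired number of instances
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0   # where the container image lives
          ports:
            - containerPort: 8080               # network connectivity
          resources:
            limits:
              cpu: "1"                          # CPU limit
              memory: "512Mi"                   # memory limit
```

The orchestrator continuously reconciles the cluster toward this declared state, for example restarting containers as needed to keep three replicas running.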

Based on the requirements and restrictions in the configuration file, the orchestration tool deploys the container to an appropriate cluster host and manages the container's lifecycle. Orchestration tools can keep configuration files under version control, allowing the same container/application/microservice to be deployed across multiple environments: development, test, and production.

Typical Orchestration Components (Kubernetes as sample)

Cluster components in an orchestration tool/platform include a control plane and nodes (workers). The worker node(s) host the Pods (sets of running containerized applications) that are the components of a larger overall solution/application. The control plane (the orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers) manages the worker nodes and the Pods in the cluster. In production environments, the control plane normally runs across multiple machines, and a cluster usually runs multiple nodes, providing high availability and fault resilience.

Summary

Container orchestration is the use of tools to automate the deployment, management, and networking of containers. Orchestration focuses on three core areas of automated container management: Service Management (configuration, provisioning, availability, security, health monitoring), Scheduling (scaling, starting, stopping), and Resource Management (load balancing, resource allocation).