What is containerization?

The classic case for a container - Containers resolved a dilemma familiar to most developers building cloud-native applications: varying computing environments meant that an application tested under Python 2.x could fail because the production environment ran Python 3.x, or that the application was tested against one Secure Sockets Layer (SSL) library while the production environment had another installed. Variations in security settings, file storage and network setup could likewise cause the application to fail.

Microservices - As computing evolved toward cloud-native and serverless deployments, there was a need for lighter software applications that could be:

  • Deployed independently, and maintained and tested by smaller teams
  • Coupled loosely with other business applications and processes

These requirements led to the concept of ‘microservices’. Microservices enabled legacy technology stacks to be broken into smaller logical units that run independently in cloud environments, allowing complex applications to be tested quickly and deployed reliably.

Microservices, being loosely coupled and independently deployable, needed a platform that supported lightweight deployment.

Emergence of containers and containerization

Container technology emerged as the preferred deployment platform for microservices because containers are light, modular and portable.

To allow applications (like microservices) to operate across computing environments, the concept of containerization emerged. Virtual machines (VMs) were in use before containers, but VMs are bulkier for microservices because each VM carries its own OS image. A container is a logical packaging of an application that isolates it from the environment in which it runs; it is an abstraction at the application layer. This allows container-based applications to be deployed with ease, regardless of whether the target environment is a private data center, the public cloud, or even a personal computer.

Though the concept of containers appeared more than a decade ago, built into Linux as LXC, with other implementations in Solaris Containers, FreeBSD Jails and AIX Workload Partitions, most developers will associate Docker with the start of the modern container era.

Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating system kernel and therefore use fewer resources than virtual machines. The software that hosts the containers is called Docker Engine. Docker was first released in 2013 and is developed by Docker, Inc.

A single container can run a microservice or a software process that is part of a larger application. The container holds all the necessary executables, binary code, libraries, and configuration files.
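As a sketch, such a packaging is typically described in a Dockerfile. The base image, file names and port below are hypothetical, chosen only to illustrate how code, libraries and configuration are bundled together:

```dockerfile
# Base image supplies the OS-level libraries and the language runtime
FROM python:3.12-slim

# Copy the application code and its declared dependencies into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# Runtime configuration the service needs
ENV PORT=8080
EXPOSE 8080

# The executable the container runs when started
CMD ["python", "app.py"]
```

Everything the application needs is declared here, so the resulting image behaves the same in any environment that can run containers.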

The Open Container Initiative (OCI), run by the Linux Foundation, develops industry standards for container formats and container runtime software across platforms. The starting point of the standards was Docker's technology, since Docker was an early provider of containers.

The project's sponsors include AWS, Google, IBM, HP, Microsoft, VMware, Red Hat, Oracle and Twitter, as well as Docker and CoreOS.

Containerization adoption due to CI/CD – DevOps

Container and microservices adoption has been led by organizations that have transitioned to modern development and application patterns like DevOps and CI/CD as the way they build, release, and run software.

  • CI/CD or CICD generally refers to the combined practices of continuous integration and either continuous delivery or continuous deployment. CI/CD bridges the gaps between development and operation activities and teams by enforcing automation in building, testing and deployment of applications.
  • DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It aims to shorten the systems development life cycle and provide continuous delivery with high software quality. DevOps is complementary with Agile software development; several DevOps aspects came from Agile methodology.
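To make the CI/CD idea concrete, a pipeline definition might look like the following sketch (GitHub Actions syntax; the workflow name, registry URL and `make test` target are illustrative assumptions, not from the original text):

```yaml
# Illustrative CI/CD pipeline: test, build a container image, push to a registry
name: build-and-ship
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests                 # continuous integration
        run: make test
      - name: Build container image
        run: docker build -t example.registry.io/myapp:${{ github.sha }} .
      - name: Push image                # continuous delivery
        run: docker push example.registry.io/myapp:${{ github.sha }}
```

Each push triggers the same automated build-test-publish sequence, which is what makes containers a natural fit for CI/CD.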

Container deployment basics

Container deployment is the act of installing container images into a computing environment, which could be a private or public cloud or bare-metal servers. In a typical containerized production environment it is common to have multiple containers deployed at once; large-scale operations might deploy hundreds or thousands of containers a day.

The containerization platform

Containers are deployed using containerization platforms such as Docker Desktop, Red Hat OpenShift, D2IQ (Mesosphere) DC/OS, Amazon Web Services ECS/EKS, Microsoft Azure Container Service and Google Kubernetes Engine (GKE), among others.

Starting with the Container Image

The first step in a container deployment is to build a container image for your container. This can be done by creating a new image or reusing an existing image from a container repository. Each containerization platform hosts its own container image registry.
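With Docker as an example platform, building a new image or reusing a registry image might look like this (the image name and tag are hypothetical):

```shell
# Build a new image from a local Dockerfile in the current directory
docker build -t myregistry.example.com/orders-service:1.0 .

# ...or reuse an existing image from a public registry
docker pull nginx:1.25

# List the images now available locally
docker images
```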

Container deployment general steps

Container deployment can be generalized into three steps:

  • Assess/test : the microservice/application code is tested to ensure it delivers the intended outcome, and to expose the system dependencies needed for correct results
  • Compile : the assessed code is compiled with the base image to form the deployable container image, which is registered in the container registry hub after testing and verification
  • Deploy : the container is deployed/activated with the relevant deploy CLI of the containerization platform
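Assuming a Docker-based platform, these three steps can be sketched as CLI commands (the test target, image name and registry are illustrative assumptions):

```shell
# 1. Assess/test the application code (hypothetical test target)
make test

# 2. Compile: build the deployable image and register it in the registry
docker build -t myregistry.example.com/orders-service:1.0 .
docker push myregistry.example.com/orders-service:1.0

# 3. Deploy/activate with the platform's CLI, e.g. plain Docker...
docker run -d -p 8080:8080 myregistry.example.com/orders-service:1.0

# ...or a Kubernetes-based platform
kubectl create deployment orders --image=myregistry.example.com/orders-service:1.0
```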

Container orchestration - As large numbers of containers enter production, they need to be managed. Containerization platforms have introduced orchestration tools that, as the industry puts it, ‘orchestrate containers’. As organizations deploy and manage thousands of containers, orchestration tools help deploy, manage and network them; through container orchestration the lifecycle of containers in the operating environment is managed.

How is container orchestration used?

  • Service Management : Establish and deploy containers, manage availability and failover of containers, manage communication of the processes in the container with external environments, monitor containers and hosts from an operational readiness perspective – for example, authentication and security
  • Scheduling : As per application requirements within containers – enable and allocate required resources at scheduled times, schedule actions like deploy, restart, upgrade, stop containers as per operative needs
  • Resource Management : Between containers and for individual containers - manage usage of hardware resources like memory, processor capacity, storage, communication ports and networks. Balance load by moving containers to hosts that are less loaded, and restart and move affected containers to stable hosts when a host machine fails
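A Kubernetes Deployment manifest is one way these orchestration capabilities are expressed declaratively; the sketch below uses hypothetical names and resource figures to show availability (replicas) and resource management (requests/limits):

```yaml
# Illustrative Kubernetes Deployment for a containerized microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 3                  # availability/failover: keep 3 copies running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: myregistry.example.com/orders-service:1.0
          resources:
            requests:          # resource management: scheduling hints
              cpu: 250m
              memory: 128Mi
            limits:            # hard caps for the individual container
              cpu: 500m
              memory: 256Mi
```

The orchestrator continuously reconciles the running state against this declared state, restarting or rescheduling containers as needed.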

Considerations when choosing a platform for containerization – End users and enterprises have multiple considerations before committing to a specific platform. For example, at a high level:

  • What development approaches are currently in use – DevOps, CI/CD or other?
  • How would the existing collaboration and workflow for the development teams and operations change?
  • How would the existing IT environment architecture scale and migrate to the new architecture, would the existing operational needs impact application packaging and dependencies? Database, storage and network impacts?
  • Implications to infrastructure monitoring and security?

How does containerization actually work?

A container consists of an executable application/microservice running on a host OS. In a complex environment, thousands of containers run concurrently and orchestrators manage the application delivery; this works because each container runs small, isolated processes/services within itself.

  • Layer 1 – Infrastructure : The hardware layer, with CPU(s), disk storage and network interfaces
  • Layer 2 – Host operating system : The host OS kernel is installed and runs on the hardware infrastructure. The OS software coordinates between Layer 1 and Layer 3
  • Layer 3 – Container runtime / platform : The container runtime/engine consists of the software needed to coordinate between the host OS and the deployed container. Layer 3 normally consists of a daemon service with API and CLI interfaces
  • Layer 4 – Supporting libraries and binaries : Encapsulated within the container, Layer 4 consists of the code and libraries needed to communicate to/from external interfaces through Layer 3, the container platform
  • Layer 5 – Application code / code for a service (microservice) : Encapsulated within the container, Layer 5 consists of the code/application that the container was set up to deliver
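The layering above can be observed directly: because containers share the host kernel (Layer 2) rather than carrying their own, the kernel version reported inside a container matches the host's, even when the container ships a different distribution's userland. A quick check, assuming Docker is installed:

```shell
# Kernel version on the host
uname -r

# Same kernel version reported from inside an Alpine-based container
docker run --rm alpine uname -r
```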


Containers encapsulate discrete components of application logic as microservices, provisioned only with the minimal resources needed to do their job. Applications and dependencies run on top of a containerization platform, which in turn runs on a host OS and compute environment of choice, allowing deployed containers to work independently of host OS dependencies. Microservices is an architectural style that structures an application as a collection of loosely coupled, fine-grained services communicating over lightweight protocols. Containerization is the preferred deployment platform for microservices.