Containers vs Virtual Machines (VMs)

Virtual machines (VMs)

A VM simulates a physical machine: an application, its supporting binaries, and an operating system (OS image) are encapsulated together, so a VM is an abstraction of the hardware layer. VM technology lets one physical server run the equivalent of multiple servers (each of which is called a VM). So, while multiple VMs run on one physical machine, each VM has its own copy of an OS, its applications, and their related files, libraries, and dependencies.

VMs were introduced before containers and transformed deployment by allowing multiple applications to run efficiently on existing machines. Before VMs, it was normal for each application to run on a separate physical machine.

Virtualization is achieved through hardware support and managed by a hypervisor. A hypervisor can be a combination of hardware, firmware, or software that deploys, monitors, and manages VMs. A hypervisor supports multiple VMs, each with its own OS.


Containers

A container is a logical packaging of an application with its required binaries encapsulated into it and is an abstraction of the software layer. The encapsulated container image is isolated from the environment in which it runs. This allows container-based applications to be deployed with ease, regardless of the target environment, which could be a private data center, the public cloud, or even a personal computer. A single container can run a microservice, or even a single software process belonging to a larger application. Containers rely on OS-level virtualization rather than hardware virtualization and are operated through an orchestrator, usually provided by the containerization platform; the orchestrator manages the resources used by containers and facilitates OS-level communication.
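As a sketch of this logical packaging, a container image can be defined in a Dockerfile. The base image, file paths, port, and application name below are illustrative assumptions, not taken from any specific project:

```dockerfile
# Illustrative Dockerfile: packages one application and its
# supporting binaries/libraries into a single container image.
FROM debian:stable-slim              # base image supplies the Bin/Lib layer
COPY ./myapp /usr/local/bin/myapp    # hypothetical application binary
EXPOSE 8080                          # port the app is assumed to listen on
CMD ["/usr/local/bin/myapp"]         # process started when the container runs
```

Rebuilding the image from an updated definition like this, then redeploying, is how a container picks up new binaries or libraries.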

A comparison of additional differences and similarities between VMs and containers, organized by characteristic, follows:

Instance security

Containers: An individual container is isolated from other containers and from the host. However, container boundary security can be compromised if best practices are not followed. Suggested practices include:
  • Use verified container base images to prevent malicious code from entering the container
  • Disable the 'privileged' flag on containers to prevent unrestricted resource access
  • Restrict the communication protocols and processes allowed in and out of the container, to prevent the container from being hijacked
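Several of these practices map directly onto container runtime settings. The following Docker Compose fragment is one hedged illustration; the service name, image reference, and port are hypothetical:

```yaml
services:
  myapp:                                    # hypothetical service name
    image: registry.example.com/myapp:1.4   # pinned image from a verified registry (illustrative)
    privileged: false                       # keep the 'privileged' flag disabled
    cap_drop:
      - ALL                                 # drop Linux capabilities the app does not need
    read_only: true                         # read-only root filesystem limits tampering
    ports:
      - "8080:8080"                         # expose only the ports the app requires
```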

Virtual machines: A VM provides strong isolation from other VMs and from the host. This allows different apps to be hosted by VMs on the same server or cluster.

Because the OS image is part of the VM, the associated security protocols can be implemented on the VM; in addition, the OS resources and management tools are available to the app. This makes VMs more secure than containers.

VM platform security capabilities have also been leveraged to isolate each container in a lightweight VM, for example, Microsoft's Hyper-V isolation mode.

Portability and deployment

Containers:
  • Containers do not have an OS image and use the host OS, making them inherently lighter and faster to deploy. The Bin/Lib (binaries and libraries) in a container image contain the code needed to communicate with the orchestrator, which runs on the host OS and relays communication between the container and external applications. This makes containers easy to port across hosts
  • OS bin/lib files in a container are easily updated by editing the container's image definition (for example, a Dockerfile) and redeploying the container. This allows quick fixes to containers when the host OS is updated
  • An individual container can be deployed through the CLI, while the orchestrator can be used to deploy multiple containers
Virtual machines:
  • VMs run a full OS inside them, making them heavier and slower to deploy than containers. The hypervisor, which runs on the host, communicates between the VMs and the host OS. The VM's advantage is that the OS best suited to an application is encapsulated with it
  • Updating a VM to ensure that new OS updates are installed can mean upgrading, or at times creating, a new VM. This takes more effort and time when many VMs are in production
  • An individual VM can be deployed through the CLI, while a virtualization management application such as VMware vSphere can be used to deploy multiple VMs
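To illustrate the container side of the single-versus-many deployment point: one container can be started from the CLI (for example, `docker run`), while an orchestrator such as Kubernetes deploys and maintains multiple containers from a declarative manifest. The names, replica count, and image reference below are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # hypothetical deployment name
spec:
  replicas: 3                # orchestrator keeps three container instances running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.4   # illustrative image reference
```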
Persistent storage

Containers: Use local storage for a single node, or SMB shares for storage shared by multiple nodes or servers.

Virtual machines: Use a virtual hard disk (VHD) for local storage for a single VM, or an SMB file share for storage shared by multiple servers.

Load balancing

Containers: The orchestrator starts and stops containers to manage resource use based on load and availability.

Virtual machines: The hypervisor starts and stops VMs. VM load balancing, which manages load and improves availability, can also involve moving running VMs to other servers in a cluster.

Fault tolerance

Containers: When a cluster node fails, the orchestrator recreates a new node and restarts on it the containers that were running on the failed node.

Virtual machines: When a server fails, the hypervisor fails the VMs over to another server in the cluster and restarts the VM OS on the working server.

Networking

Containers: Containers use virtual network adapters.

Virtual machines: Each VM uses an isolated view of a virtual network adapter.


The primary difference between a VM and a container is that a container does not include an OS: multiple containers run on a single OS instance, while each VM includes its own OS instance, allowing multiple OS instances to run on one physical machine. Running software in containers generally uses less space and memory than running it in separate VMs, since each VM requires its own copy of the OS. Containers can also be run within VMs.