Reflection on Containerization Technology

Containerization technology is a natural consequence of the rapid growth in the number of software applications and the resulting pressures to utilize computational resources efficiently, manage software dependencies, scale easily, and package software so it can be moved from one operating system (OS) to another without duplication.

Consider a classical deployment architecture: the OS is installed directly on the hardware, multiple applications run on that OS, and each application has its own dependencies, libraries, and specific OS requirements. This architecture naturally creates conflicts between applications; for example, one application may require a specific version of a library while another requires a different version of the same library. To resolve this issue, the applications must run in isolation.

One way to achieve this isolation is through virtualization. In this architecture, a hypervisor is introduced on top of the infrastructure (and on top of the host OS, depending on the type of hypervisor) to run guest operating systems along with the applications and their dependencies. Although resources are utilized more efficiently than in the traditional architecture, virtual machines still leave resources underutilized, since each one carries the overhead of a full guest OS.

The second approach to isolation is containerization. Here, a container engine (e.g., Linux Containers (LXC), Docker, or rkt) is introduced on top of the OS and packages the application together with its libraries and dependencies into a container. Each container is managed independently by the container engine, which communicates with the OS and allocates only the resources required to run the containers.
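As a hedged illustration of how a container engine packages an application together with its libraries, a minimal Dockerfile sketch might look like the following (the file names `requirements.txt` and `app.py` and the base-image tag are illustrative assumptions, not taken from the text above):

```dockerfile
# Minimal sketch: package a Python application with its dependencies
# (base-image tag and file names are illustrative assumptions)
FROM python:3.11-slim

WORKDIR /app

# Pin the application's library dependencies inside the image,
# so they cannot conflict with other applications on the same host
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application itself into the image
COPY app.py .

# Command the container engine runs when the container starts
CMD ["python", "app.py"]
```

Building and running such an image (e.g., `docker build -t myapp .` followed by `docker run myapp`) produces an isolated container whose libraries are independent of any other application running on the same OS.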

Compared to virtual machines, this yields nearly full utilization of resources in the container architecture. Moreover, containers (particularly Docker containers) can start in a few milliseconds, whereas a virtual machine may take a few minutes to boot.

Although Docker containers have become very popular in the software development community for building, shipping, deploying, and scaling applications, not all versions of Windows support them. Another consideration is that Docker containers were designed for applications with a command-line interface, which is usually not the case for Windows applications, where a graphical user interface is the norm. Although there are ways to run containers on Windows, it is still not as straightforward as running them on a Linux machine. Therefore, in the near future we should expect significant improvements in Windows-based platforms for running Docker containers.
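For context, running containers on a Windows machine today typically goes through Docker Desktop, using the WSL 2 backend for Linux containers or switching to Windows containers for Windows-based images. A hedged sketch of the relevant commands (the exact setup depends on the Windows edition and Docker Desktop configuration):

```powershell
# Install the WSL 2 backend that Docker Desktop uses for Linux containers
wsl --install

# With Docker Desktop running, a Linux container works much as on Linux:
docker run --rm alpine echo "hello from a Linux container"

# After switching Docker Desktop to Windows containers,
# Windows-based images become available:
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello
```

The extra setup steps, and the split between Linux and Windows container modes, are part of why the experience is less straightforward than on a native Linux machine.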


The Missing Introduction To Containerization

The Differences Between Linux and Windows Containers

Microsoft’s Docker Strategy: The Future of Windows Containers
