Containers: a closer look at a technology that is revolutionizing infrastructure

Simplified development and optimized resource management in an isolated environment

By Omar Naghmouchi, Software Engineer


In an increasingly digital world, challenges such as fault tolerance, scalability, and maintainability are becoming ever more important.

Containers, combined with design patterns, and in particular microservice architecture, are revolutionizing the way IT systems are designed to better meet the needs of the new digital era.


An entire ecosystem has emerged around containers, allowing developers to adopt the technology quickly. Solutions to create, orchestrate, secure, and benchmark containers have appeared and keep appearing every day.

This ecosystem seems so magical that we often forget that behind the magic, the basic principle of containers is simple. This is what we will explore together in this article.


What is a container?

A container is a collection of isolated processes that run on a host. It enables the creation of an isolated space necessary for the execution of a program.

The container includes all the dependencies needed to run the program, from the code and its configuration to the libraries on which it depends.


What is a container for?


Containers for easy deployment

Building a package that carries its configuration and dependencies, ready to be deployed and run on physical servers, on VMs, or in the cloud, is a major advantage. This portability solves the issues raised by deploying to different environments (development, integration, production …).

In addition, for a program designed to be distributed, containers, thanks to their ease of management (creation, launch, stop, destruction …), guarantee smooth scaling and fault tolerance.

Easy deployment, with no downtime, also allows the application to evolve quickly by fixing bugs and adding new features.


Containers for optimal resource management while preserving isolation

Containers make it possible to deploy multiple applications efficiently on the same infrastructure. A container is very light compared to a VM, weighing only a few megabytes. Where VMs rely on a hypervisor, installed directly on the hardware or on a host OS, to run a guest OS, containers all share the host's operating system, running as ordinary processes, with isolation managed at the kernel level.


Let’s build a container!

A container is a set of isolated processes.

To isolate them, we need to:

  • Assign a container-specific file system.
  • Restrict what it can see about other processes and the system.
  • Manage the resources that it can use (memory, processor, disk …).



To assign a container-specific file system and change its root, the Linux kernel provides chroot (change root), which changes the apparent root directory of the calling process.

Thus, with chroot, the process can be made to believe it is at the root ‘/’, while, from the system's point of view, it is confined to a sub-directory.
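As a minimal sketch (an illustration, not a real runtime), the following Python script forks a child that first enters a new user namespace, which grants it the CAP_SYS_CHROOT capability without real root privileges, and then chroots into a freshly created directory. It assumes a Linux kernel that allows unprivileged user namespaces; the CLONE_NEWUSER constant comes from <linux/sched.h>.

```python
import ctypes, os, tempfile

CLONE_NEWUSER = 0x10000000  # constant from <linux/sched.h>
libc = ctypes.CDLL(None, use_errno=True)

# Prepare a minimal "root" containing a single marker directory.
new_root = tempfile.mkdtemp()
os.mkdir(os.path.join(new_root, "bin"))

pid = os.fork()
if pid == 0:
    # A new user namespace grants the process full capabilities inside it,
    # including CAP_SYS_CHROOT, so no real root privileges are needed.
    if libc.unshare(CLONE_NEWUSER) != 0:
        os._exit(2)  # the kernel forbids unprivileged user namespaces here
    os.chroot(new_root)
    os.chdir("/")
    # From this process's point of view, '/' now holds only what we put there.
    os._exit(0 if os.listdir("/") == ["bin"] else 1)

_, status = os.waitpid(pid, 0)
code = os.waitstatus_to_exitcode(status)
msg = {0: "inside the chroot, '/' contains only bin",
       2: "skipped: unprivileged user namespaces unavailable"}.get(code, "unexpected")
print(msg)
```

Note that chroot only changes the file-system view; the namespaces and cgroups described next are still needed for real isolation.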



To restrict the visibility of resources to the container, we use namespaces.

Thus, two processes that share the same namespace see the same changes to the corresponding resource. Conversely, two processes in two different namespaces are completely segregated with respect to the resources controlled by that namespace.


Namespace   Constant        Isolates
IPC         CLONE_NEWIPC    System V IPC, POSIX message queues
Network     CLONE_NEWNET    Network devices, stacks, ports, etc.
Mount       CLONE_NEWNS     Mount points
User        CLONE_NEWUSER   User and group IDs
UTS         CLONE_NEWUTS    Hostname and NIS domain name
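For example, a process can be given its own hostname by entering a new UTS namespace. The sketch below pairs CLONE_NEWUTS with CLONE_NEWUSER so that no root privileges are required (assuming the kernel allows unprivileged user namespaces); the hostname "container-demo" is a hypothetical value chosen for the demo. The child's hostname changes, while the parent's does not:

```python
import ctypes, os, socket

# Constants from <linux/sched.h>.
CLONE_NEWUSER = 0x10000000
CLONE_NEWUTS = 0x04000000
libc = ctypes.CDLL(None, use_errno=True)

host_before = socket.gethostname()
r, w = os.pipe()

pid = os.fork()
if pid == 0:
    os.close(r)
    # Enter a new user namespace (for privileges) plus a private UTS namespace.
    if libc.unshare(CLONE_NEWUSER | CLONE_NEWUTS) != 0:
        os.write(w, b"skipped")
        os._exit(0)
    name = b"container-demo"  # hypothetical hostname for the demo
    libc.sethostname(name, len(name))
    os.write(w, socket.gethostname().encode())
    os._exit(0)

os.close(w)
child_host = os.read(r, 64).decode()
os.waitpid(pid, 0)
host_after = socket.gethostname()
print("child saw hostname:", child_host)
print("parent still sees:", host_after)
```

The same pattern applies to the other namespaces in the table: pass the corresponding CLONE_* flag to unshare (or clone) and the child gets its own private copy of that resource.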


Cgroups (control groups) are collections of processes. The kernel attaches different resource controllers to them:

  • cpuacct: records the CPU cycles consumed by a group
  • memory: limits the RAM and cache a group may use
  • devices: allows or denies access to a device
  • net_cls: tags a group's network packets so traffic control can classify them
  • blkio: controls access to block devices (hard disk …)
  • cpuset: pins a group to specific CPUs and memory nodes


Thus, by tuning these controllers, we can decide which resources and devices a container is allowed to use.
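We can already observe this from user space: every Linux process belongs to a cgroup, which the kernel reports under /proc. The sketch below reads the current process's membership and shows, as comments only, how a memory limit would be imposed; the writes require root and assume the cgroup v2 hierarchy mounted at /sys/fs/cgroup, with /sys/fs/cgroup/demo as a hypothetical group name.

```python
# Every Linux process already lives in a cgroup; the kernel reports it here.
with open("/proc/self/cgroup") as f:
    membership = f.read().strip()
print("cgroup membership:", membership)

# Under cgroup v2, limiting a group is just writing to controller files, e.g.:
#   mkdir /sys/fs/cgroup/demo                      # create a new group
#   echo 100M > /sys/fs/cgroup/demo/memory.max     # cap its RAM at 100 MB
#   echo $$   > /sys/fs/cgroup/demo/cgroup.procs   # move this shell into it
# (root privileges are required, so the writes are shown as comments only)
```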

By combining these three capabilities of the Linux kernel, we can create a container in which to execute a set of processes.




Knowing how a container works is rewarding.

Implementing your own container runtime, however, is another matter.

Tools like Docker make it possible to build images while also handling networking, security, portability to other operating systems (Windows, macOS), and orchestration (Docker Swarm, Kubernetes).



I wrote this article following my participation in Paris Container Day. I was able to attend several interesting talks, including the one by Liz Rice, during which she live-coded a basic container from scratch. Her code is available on GitHub.


