Docker internals and Docker evolution. Docker did not evolve in a month or two; it took years to evolve.
1. How containerization evolved
Containerization was actually included in the Linux 2.6.24 kernel to provide operating system-level virtualization and allow a single host to operate multiple isolated Linux instances, called Linux Containers (LXC).
Purpose of LXC
LXC is based on Linux control groups (cgroups) and gives applications complete resource isolation (processor, memory and I/O access). Linux Containers also offer complete isolation of the container's namespaces, so file systems, user IDs, network IDs and other elements usually associated with an operating system are unique to each container.
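The same building blocks can be exercised directly with the LXC userspace tools. A minimal sketch, assuming the lxc package is installed; the container name demo and the distribution choices are only placeholders:

```sh
# Create a container from the "download" template (distribution, release and
# architecture are illustrative choices, not requirements).
sudo lxc-create -t download -n demo -- --dist ubuntu --release jammy --arch amd64

sudo lxc-start -n demo      # boot the container
sudo lxc-attach -n demo     # open a shell inside its namespaces
sudo lxc-ls --fancy         # list containers with state and IP address
sudo lxc-stop -n demo       # stop it again
```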
In the same way, Docker uses this container technology but creates a layer above LXC for packaging, deployment and migration of workloads to different hosts.
The actual working of Docker is based on LXC container-based virtualization, also called operating system virtualization. One of the first container technologies on x86 was actually on FreeBSD, in the form of FreeBSD Jails.
Containers work on the concept of process-level virtualization. Process-level virtualization has been used by technologies like Solaris Zones and BSD Jails for years. The drawback of those systems is that they need custom kernels and cannot run on mainstream kernels. As opposed to Solaris Zones and BSD Jails, LXC containers have been gaining popularity in recent years because they can run on any Linux platform. This led to the adoption of containerization by various cloud-based hosting services.
In Linux-based containers there are two main concepts involved: namespaces and cgroups.
2. Namespaces in Docker
In Linux there are six kinds of namespaces which offer process-level isolation for Linux resources. Namespaces ensure that each container sees only its own environment and doesn't affect or get access to processes running inside other containers. In addition, namespaces provide restricted access to file systems, like chroot, by giving each container its own directory structure. A short sketch of namespaces in action follows the list below.
- PID namespace
- NET namespace
- IPC namespace
- MNT namespace
- UTS namespace
- USER namespace
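You don't need Docker to see namespaces in action; the util-linux unshare tool drives the same kernel feature. A minimal sketch, not how Docker itself invokes it:

```sh
# Start a shell in new PID, mount and UTS namespaces.
sudo unshare --pid --fork --mount --uts /bin/bash

# Inside the new namespaces:
hostname demo-container        # changes the hostname only in this UTS namespace
mount -t proc proc /proc       # remount /proc so ps reflects the new PID namespace
ps aux                         # the shell shows up as PID 1

# Back on the host, list all namespaces and the processes that own them:
lsns
```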
3. Cgroups (control groups) in Docker
Cgroups (control groups) are a feature of the Linux kernel for accounting, limiting and isolating resources. They provide a means to restrict the resources a process can use. For example, you can restrict an Apache web server or a MySQL database to use only a certain amount of disk I/O.
A Linux container is basically a process, or a set of processes, that runs in an isolated environment on the host system.
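A minimal sketch of the same idea, first against the cgroups v2 filesystem by hand and then through Docker's resource flags. The group name demo and the limits are arbitrary, and the controller files only appear if the parent's cgroup.subtree_control enables them:

```sh
# By hand, against the cgroups v2 hierarchy (assumed mounted at /sys/fs/cgroup):
sudo mkdir /sys/fs/cgroup/demo
echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max         # cap memory at 100 MB
echo "50000 100000" | sudo tee /sys/fs/cgroup/demo/cpu.max  # 50 ms of CPU per 100 ms period (50%)
echo $$ | sudo tee /sys/fs/cgroup/demo/cgroup.procs         # move the current shell into the group

# The same kind of limits expressed through Docker:
docker run -d --memory=100m --cpus=0.5 --name limited-web nginx
```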
4. Copy-on-write file system
In a normal file system like ext4, new data is written over the existing data in place. A copy-on-write file system, by contrast, never overwrites live data; instead it writes every update to unused blocks on the disk and then switches over to the new copy.
One of the file systems used by Docker is Btrfs. Resources are handled using copy-on-write (COW) when the same data is used by multiple tasks. When an application requests data from a file, the data is loaded into memory or cache, and each application normally has its own memory space. When multiple applications request the same data, COW keeps only one copy in memory and points all the applications at it. Only an application that changes the data is given its own private copy.
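The same behaviour is easy to observe at the image-layer level; a quick sketch using standard Docker commands:

```sh
docker pull ubuntu:22.04
docker history ubuntu:22.04      # the read-only layers every container of this image shares
docker run -d --name cow-demo ubuntu:22.04 sleep 300
docker exec cow-demo sh -c 'echo hello > /etc/demo.txt'
docker diff cow-demo             # only the changed file sits in the container's writable layer
docker rm -f cow-demo            # clean up
```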
Docker uses a plain-text configuration language to define and control the configuration of the application container. This configuration file is called a Dockerfile.
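A minimal sketch of such a file, written out and built from the shell; the Python base image and app.py are placeholders for illustration, not part of any particular project:

```sh
cat > Dockerfile <<'EOF'
# base image layer
FROM python:3.12-slim
# working directory inside the container
WORKDIR /app
# copy the application code into the image (app.py must exist next to the Dockerfile)
COPY app.py .
# default command when the container starts
CMD ["python", "app.py"]
EOF

docker build -t demo-app .   # each instruction above becomes an image layer
docker run --rm demo-app
```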
Docker makes use of Linux kernel facilities such as cgroups, namespaces and SELinux to provide isolation between containers. At first Docker was a front end for the LXC container management subsystem, but release 0.9 introduced libcontainer, a native Go library that provides the interface between user space and the kernel.
Containers sit on top of a union file system, such as AUFS, which allows for the sharing of components such as operating system images and installed libraries across multiple containers.
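You can ask a running engine which of these building blocks it is actually using; a quick sketch with docker info format strings:

```sh
docker info --format 'storage driver: {{.Driver}}'            # e.g. overlay2, btrfs or aufs
docker info --format 'cgroup driver: {{.CgroupDriver}}'       # e.g. systemd or cgroupfs
docker info --format 'security options: {{.SecurityOptions}}' # e.g. seccomp, apparmor, selinux
```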
5. Why Docker is good to use
Why is Docker good to use, and how does it save time and money in an enterprise? This is a very good question, and you can find the answer as follows:
- It replaces virtual machines and provides lightweight infrastructure to develop, test and deploy
- It is better for prototyping software without bothering about infrastructure
- Packaging software as a Docker image captures its dependencies
- It supports a microservice architecture
- It helps to reduce debugging overhead
- It enables continuous delivery (CD)
6. How does Docker run?
There are two modes in which a Docker container runs:
1. Foreground (attached) mode: the container runs in your terminal on a temporary basis and stops when you exit.
2. Detached mode: the container runs in the background, and you can attach to it and detach from it at any time.
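A short sketch of both modes with the standard CLI:

```sh
# Foreground (attached) mode: the container shares your terminal and is removed on exit.
docker run -it --rm ubuntu:22.04 /bin/bash

# Detached mode: the container keeps running in the background.
docker run -dit --name demo ubuntu:22.04 /bin/bash
docker attach demo    # re-attach to the container's main process
# press Ctrl-p Ctrl-q in the attached session to detach without stopping it
docker ps             # the container is still running
docker stop demo
```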
In the next session we are going to discuss:
- The difference between containers and images, where to get images, and how to pull and push images on Docker Hub
- The layering feature in Docker
- Hands-on Docker to build an application (a Java/Python program or service)
Happy Learning