Inception of Containerization— Docker

Linu Bajy · Jan 20, 2023

The What and Why of Docker

Before the inception of Docker, let’s understand how an application used to be deployed. An application had various services: a frontend and a DB (let’s keep it simple :) ). The Dev team builds the project using various dependencies (packages).

For example: a developer, Mariam, starts building a Python application. She installs Python first and then, to run it, she installs the framework she needs, Flask. She is then able to deploy the application successfully. All good till now.

In the traditional way, she hands over the app code along with a set of instructions for installing the dependencies needed to deploy the application.

Now, Jason from the Operations team goes through the instructions and installs the dependencies. This takes time, and it repeats a lot of steps that Mariam has already configured. On top of that, the versions of the dependencies used by different services can vary, and this can cause conflicts. This was called the ‘Matrix of Hell’.

Fast forward to the present day, and the concept of containerization comes into the picture. A container is basically a small, isolated box where you put a service along with all its required dependencies. So we can create one container for the frontend where the Python application is running, and another container running the DB service. All these containers live in the Docker Host and are linked by an internal network.

Today, Mariam describes the application in a Dockerfile and sends it to Jason along with the application code. Jason just has to build the Dockerfile to create the image and run the containers, and voila! The application is deployed. Jason did not have to install dependencies like before; he just had to build the Dockerfile along with the app code.
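
For illustration, a minimal Dockerfile for such a Flask application might look like the sketch below. The file names, versions, and image name here are assumptions of mine, not part of the original example.

# Dockerfile (illustrative sketch; file names and versions are assumed)
FROM python:3.11-slim               # base image with Python preinstalled
WORKDIR /app                        # working directory inside the image
COPY requirements.txt .             # requirements.txt is assumed to list Flask
RUN pip install -r requirements.txt
COPY . .                            # copy the application code into the image
CMD ["python", "app.py"]            # start the app when the container runs

Jason would then build and run it with something like:

docker build -t mariam-app .        # 'mariam-app' is a made-up image name
docker run -d -p 5000:5000 mariam-app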

Virtualization vs Containerization

Every time I think about this technology, it amazes me! Now, what exactly have they done? In the virtual machine world, the key piece is the hypervisor; with containers, it is the container engine.

In virtual machines, the kernel was the base layer that let the operating system interact with the underlying hardware, and an OS would be installed over it. A hypervisor lets several such guest operating systems, each with its own kernel, run on the same machine.

Fun fact: CentOS and Fedora share the same Linux kernel; they differ only because of the software installed on top of it.

With containers, the kernel and the OS are still present, but on top of them a container engine (such as Docker) is installed and running, and that is how the magic happens: containers share the host’s kernel instead of each bringing a full OS of their own.

There can be multiple containers, each with its own dependencies, connected via an internal network.
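
As a rough sketch of how that internal network could be set up (the network and container names below are made up, and in practice the MySQL container needs a root password to start):

docker network create app-net                          # user-defined bridge network
docker run -d --name db --network app-net -e MYSQL_ROOT_PASSWORD=secret mysql
docker run -d --name web --network app-net mariam-app  # the frontend can now reach the DB at hostname 'db'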

Images And Containers

An image is basically layers upon layers of software and packages built on top of a base. There are many ready-made images we can use: an image can be an OS like ubuntu, or a service like MySQL. These images are available in Docker’s public registry. An image can be pulled using the command:

docker pull ubuntu

To see the layers of an image and their sizes, we can use the command:

docker history ubuntu

Once the image is pulled, we can list the images present locally using the command:

docker images

An image will not do any good unless it is run, and when an image is in the running state, it is called a container.

docker run ubuntu

Now there’s a catch with OS images. A container is typically meant to run a service or a process. Ubuntu is just an OS, and if there is nothing running inside it, the container stops and exits, and that’s why we do not see any running ubuntu containers when we list them.

We can check the container and its status using the command :

docker ps -a                    # -a lists all containers, both running and exited

So how do we fix this little problem? Well, one solution would be to run an application inside the container. Here, let’s keep it simple and just run the sleep command with an argument of 30. While the command runs, the container will be up and running, and once the command finishes its execution, the container will exit.

docker run -d ubuntu sleep 30
# -d runs the container in detached mode: execution happens in the
# background and we get back to the prompt once the container has started.

To see only the containers that are currently running, we use the command:

docker ps

And to clean up, we need to remove the container as well as the image. We first need to ensure that the ubuntu container is stopped. For that we use the command:

docker stop <container-id>

Now we can remove the container from our local workspace using the command:

docker rm <container-id>

But uh-oh! When I try to remove the ubuntu image, it says that the image is being used by another container.

Since I had run the docker run command twice, it created two containers from the image. But since I had deleted only one of the containers, Docker shows an error saying the image is still in use. Hence, we need to delete all the containers that are using the image we intend to delete. How smart, Docker!

So now all the containers using the ubuntu image have been deleted.
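
As a side note, and purely as an assumption on my part rather than a step from the walkthrough above, the stopped containers created from the ubuntu image could also have been removed in one go:

docker rm $(docker ps -a -q --filter ancestor=ubuntu)   # remove all stopped containers created from the ubuntu image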

But we still have the image in our local workspace. To remove the image, we use the command:

docker rmi <image-id>    # the 'i' in 'rmi' stands for image

Our workspace is now cleaned up after our little experiment :) Further, you can make use of the docker exec command to run commands inside a running container for debugging purposes.
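
For example, to open a shell inside a running container for debugging (a quick sketch, assuming the container has bash available):

docker exec -it <container-id> bash              # interactive shell inside a running container
docker exec <container-id> cat /etc/os-release   # or run a single command and see its output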

DockerHub

So, locally, Mariam created the app code and a file with a set of instructions (the Dockerfile) to containerize it. From the Dockerfile she built an image, which, when run, creates a container. Mariam has two options to get these to Jason.

One: copy the files to a USB drive and send it to Jason (or maybe via email? I don’t know if this is practiced anywhere; it seems like a terrible idea).

Or two: push the image to a repository from where Jason (or, in fact, anyone who has the authority) can download the image and run it to get the containers.

If you were smart, you would choose the second method over the first, for it ensures that only the required people get access to these files.

So now Mariam pushes the image to DockerHub, a Docker repository used to store Docker images. On the free and open source plan there can only be one private repo and the rest have to be public, so an organisation has to purchase the Enterprise version for more private repos.
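
The push itself is just a couple of commands; the username, repository, and tag below are placeholders:

docker login                                   # authenticate against DockerHub
docker tag mariam-app mariam/mariam-app:v1     # tag the local image as <username>/<repo>:<tag>
docker push mariam/mariam-app:v1               # upload the image to the repository

Jason can then pull it on his side with docker pull mariam/mariam-app:v1.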

By the end of this article, we know what issues Docker solved when it came into existence, some basic Docker commands, how to create containers, and a bit about DockerHub. Here, we made use of images created by the community, but what if we want to customize them and create an image of our own? Let’s explore that in the next article :)
