Software can behave differently on different machines. This is a common blocker: a developer writes new code, tests it on their own machine, and it works, so they push it to the team’s repository. When another team member pulls that code, it may behave differently, and it may behave differently again when deployed.
That’s where Docker helps. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers. A container packages the environment with everything the application needs: libraries, dependencies, and so on. It essentially lets developers run an application in the same environment on different machines.
In which situations can Docker be used?
- As version control for the entire app’s operating system
- To distribute and collaborate on our app’s OS with a team
- To run code on our machine in the same environment as our server
Docker uses a client-server architecture. When the user (via the Docker client) executes a command such as docker build, docker pull, or docker run, it is the daemon that actually carries it out. The Docker client and daemon communicate through a REST API, over UNIX sockets or a network interface.
The Docker daemon (dockerd) manages Docker objects and listens for Docker API requests. A daemon can also communicate with other daemons.
The Docker client is ‘the gate’ through which users interact with Docker. When a user runs a command like ‘docker run’, the client sends the request to dockerd. The client can communicate with more than one daemon.
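As a small sketch of this flow (these need a running Docker daemon; the TCP host in the last command is purely illustrative):

```shell
# The client parses the command and sends an API request to dockerd;
# the daemon pulls the image if needed, creates the container, and runs it.
docker run hello-world

# Show which client and which daemon (server) are talking to each other.
docker version

# The client can also talk to a different daemon, e.g. over TCP
# (host and port here are placeholders, not real values).
docker -H tcp://192.168.1.10:2375 ps
```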
A Docker registry stores Docker images. When we use the docker pull or docker run commands, the required images are pulled from our configured registry. When we use the docker push command, our image is pushed to our configured registry.
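A typical pull/tag/push round trip looks roughly like this (the registry host and repository names are illustrative; pushing also requires docker login and a daemon):

```shell
# Pull an image from the configured registry (Docker Hub by default).
docker pull python:3.8

# Re-tag the image so it points at a different registry
# (registry.example.com and myteam are placeholders).
docker tag python:3.8 registry.example.com/myteam/python:3.8

# Push the tagged image to that registry.
docker push registry.example.com/myteam/python:3.8
```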
When we use Docker, we’re creating and using images, containers, networks, etc.
An image is a read-only template that contains the instructions for creating a Docker container. To build our own image, we create a Dockerfile, which uses a simple syntax to define the steps needed to build the image and run it.
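A minimal Dockerfile sketch for a Python app, assuming a hypothetical app.py and requirements.txt (each instruction adds a layer to the image):

```dockerfile
# Base image to build on.
FROM python:3.8
# Directory inside the image where subsequent instructions run.
WORKDIR /app
# Copy the dependency list first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the application code.
COPY . .
# Default command when a container starts from this image.
CMD ["python", "app.py"]
```

Building and running it would then be `docker build -t myapp .` followed by `docker run myapp`.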
A container is a runnable instance of an image. We can create, start, stop, move, or delete a container using the Docker API or CLI.
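The lifecycle the sentence above describes maps onto CLI commands like these (the container names are illustrative, and a running daemon is assumed):

```shell
# Create a container from an image without starting it.
docker create --name web nginx

# Walk it through its lifecycle: start, stop, remove.
docker start web
docker stop web
docker rm web

# Or create and start in one step, detached, then list running containers.
docker run -d --name web2 nginx
docker ps
```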
Virtual Machine vs Docker Container

Virtual machine:
- Hardware-level process isolation
- Each VM has a separate OS
- Boots in minutes
- Large in size (a few GBs)

Docker container:
- OS-level process isolation
- Containers can share the OS
- Boots in seconds
- Lightweight (KBs/MBs)
Example of Implementation
In our group project, we used Docker to run and ‘host’ our application. To configure it, we needed to set up the image and the registry. As explained above, to configure the image we make a Dockerfile:
So what does this Dockerfile do?
Line 1: Specifies the environment we want to run our app in; we use Python version 3.8.
Line 4: Creates the directory /app and makes it our working directory. All subsequent steps run in that directory.
Lines 7–9: Set environment variables.
Lines 11–23: Install the dependencies the app needs.
Lines 28–34: ‘Pull’ all files from GitLab and run the required processes.
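The Dockerfile itself was shown as a screenshot; the steps above can be sketched roughly as follows. Every name, version pin, environment variable, repository URL, and command below is a placeholder, not the project’s actual value:

```dockerfile
# Line 1: environment to run the app in — Python 3.8.
FROM python:3.8
# Line 4: create /app and make it the working directory.
RUN mkdir -p /app
WORKDIR /app
# Lines 7-9: set environment variables (name and value are placeholders).
ENV PYTHONUNBUFFERED=1
# Lines 11-23: install needed dependencies (packages are placeholders).
RUN apt-get update && apt-get install -y git
# Lines 28-34: pull the files from GitLab and run the needed processes
# (the URL and the run command are placeholders).
RUN git clone https://gitlab.example.com/group/project.git .
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```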
To make the GitLab runner ‘communicate’ with the registry, we have a Build_and_Deploy stage that specifies it:
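The stage appeared as a screenshot; a rough sketch of what such a .gitlab-ci.yml stage can look like is below. The job name, script commands, and registry host are illustrative; only the image name comes from the project:

```yaml
stages:
  - Build_and_Deploy

build_and_deploy:
  stage: Build_and_Deploy
  # The runner executes this job inside the image below.
  image: public.ecr.aws/r0o1w4r4/docker
  script:
    # Placeholder build-and-push commands.
    - docker build -t myapp .
    - docker push registry.example.com/myteam/myapp
```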
We use the image from public.ecr.aws/r0o1w4r4/docker; its configuration was already set up in the Dockerfile.
When we run the GitLab runner, we can see that it uses the image we specified: