What is Docker?
Docker is an open-source tool that helps in developing, building, deploying, and executing software in isolation. How does it do this, you ask? In short, by containerizing the complete software.
What is a Container?
So, once again: a container wraps up the code, the dependencies, and the environment required to run that code in a single package. These containers are used for the development, deployment, testing, and management of software.
To get a better understanding of containers, let’s study them in comparison to VMs (virtual machines). I’m sure you already know what a VM is.
Containers vs VM
I’ll be using these criteria to compare a Container and a VM:
- Operating Systems
- Architecture
- Isolation
- Efficiency
- Portability
- Scalability
- Deployment
Operating system:
- Containers contain only the bare-minimum parts of the operating system required to run the software. Updates are easy and simple to do.
- VMs contain the complete operating system that is normally used on general-purpose systems. Updates are time-consuming and tough.
Architecture:
- Containers share the host system’s kernel and acquire resources through it.
- VMs are completely isolated from the host system and acquire resources through a piece of software called the hypervisor.
Isolation:
- A container’s isolation isn’t as complete as a VM’s, but it is adequate for most use cases.
- A VM is completely isolated from its host system and is also more secure.
Efficiency:
- Containers are far more efficient, as they utilise only the most necessary parts of the operating system. They act like any other software on the host system.
- VMs are less efficient, as they have to manage a full-blown guest operating system. VMs also have to access host resources through a hypervisor.
Portability:
- Containers are self-contained environments that can easily be used on different Operating systems.
- VMs aren’t that easily ported with the same settings from one operating system to another.
Scalability:
- Containers are very easy to scale; they can be added and removed quickly based on requirements because they are so lightweight.
- VMs aren’t as easy to scale because they are heavyweight in nature.
Deployment:
- Containers can be deployed easily using the Docker CLI or via cloud services such as AWS or Azure.
- VMs can be deployed using PowerShell together with Virtual Machine Manager (VMM), or via cloud services such as AWS or Azure.
Why do we need Containers?
Now that we understand what containers are, let’s see why we need containers.
It allows us to maintain a consistent development environment. I have already talked about this when we were discussing the issues we faced before containers were a thing.
It allows us to deploy software as microservices. I will get into what microservices are in another blog, but for now, understand that software these days is often deployed not as one single unit, but rather as a set of smaller services; this is known as a microservice architecture. And Docker helps us launch software in multiple containers as microservices.
Again, what is Docker?
With all that context, this definition should make more sense: Docker is an open-source tool that helps in developing, building, deploying, and executing software in isolation.
It is developed and maintained by Docker Inc., which first introduced the product in 2013. It also has a very large community that contributes to Docker and discusses new ideas.
Docker Environment
The Docker environment is basically all the things that make up Docker. They are:
- Docker Engine
- Docker Objects
- Docker Registry
- Docker Compose
- Docker Swarm
Docker Engine:
The Docker Engine is, as the name suggests, the technology that allows for the creation and management of all the Docker processes. It has three major parts:
- Docker CLI (Command Line Interface) – This is what we use to give commands to Docker. E.g. docker pull or docker run.
- Docker API – This is what communicates the requests the users make to the Docker daemon.
- Docker Daemon – This is what actually does all the work, i.e. creating and managing all of the Docker processes and objects.
So, for example, if I write the command $ sudo docker run ubuntu, I am using the Docker CLI. The request is communicated to the Docker daemon through the Docker API, and the daemon processes it and acts accordingly.
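You can actually see this client/daemon split for yourself. The following command prints two sections, one for the CLI client and one for the daemon (server) it talked to over the API:
$ sudo docker version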
Docker Objects:
There are many objects in Docker that you can create and make use of. Let’s look at them:
- Docker Images – These are basically blueprints for containers. They contain all of the information required to create a container, such as the OS, working directory, environment variables, etc.
- Docker Containers – We already know about this.
- Docker Volumes – Containers don’t store anything permanently once they’re removed; Docker volumes allow for persistent storage of data. They can be easily and safely attached to and detached from different containers, and they are also portable from one system to another. Think of volumes as hard drives for containers (see the short example after this list).
- Docker Networks – A Docker network is basically a connection between one or more containers. One of the more powerful things about Docker containers is that they can easily be connected to one another and even to other software, which makes them very easy to isolate and manage.
- Docker Swarm Nodes & Services – We haven’t learned about docker swarm yet, so it will be hard to understand this object, so we will save it for when we learn about docker swarm.
Docker Registry:
To create containers we first need images, and to create images we need to build them from text files called Dockerfiles. You can run multiple containers from a single image.
Since images are so important, they need to be stored and distributed. For this we need a dedicated storage location, and this is where Docker registries come in. Docker registries are dedicated storage locations for Docker images, from which images can easily be distributed to wherever they are required.
The Docker images can also be versioned inside of a Docker Registry just like source code can be versioned.
You have many options for a Docker registry. One of the most popular is Docker Hub, which is again maintained by Docker Inc. You can upload your Docker images to it without paying, but they will be public; if you want to make them private, you will have to pay for a subscription.
There are some alternatives, but they are rarely entirely free; there is usually a free tier, and once you cross its limit you will have to pay. Some alternatives are: Amazon ECR (Elastic Container Registry), JFrog Artifactory, Azure Container Registry, Red Hat Quay, Google Container Registry, Harbor, etc.
You can also host your own registry if you have the infrastructure and resources to do so, and some organisations do exactly that.
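To show what distribution looks like in practice, here is a small sketch of pushing an image to Docker Hub (replace myusername with your own Docker Hub username):
$ sudo docker login
$ sudo docker tag ubuntu myusername/ubuntu:v1
$ sudo docker push myusername/ubuntu:v1
The tag command gives the local image a name that includes your repository, and push uploads it to the registry so it can be pulled from any other machine.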
Docker Compose:
Docker Compose is a tool within the Docker ecosystem that is used to define and launch multiple containers at the same time. Normally, when you run a container using the docker run command, you can only run one container at a time. So when you need to launch a whole bunch of services together, you first define them in a docker-compose.yml file and then launch them using the docker-compose command.
It’s a very useful tool for development, testing, staging, and production purposes alike.
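As a minimal sketch of what that looks like (the service names and images here are purely illustrative), a docker-compose.yml might contain:

version: "3"
services:
  web:
    image: httpd:latest
    ports:
      - "8080:80"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example

You would then bring both services up together with:
$ sudo docker-compose up -d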
Docker Swarm:
Docker Swarm is a slightly more advanced topic; I won’t cover it entirely, but I will give you an idea of what it is. A Docker swarm is, by definition, a group of physical or virtual machines that are running Docker and have been configured to join together in a cluster. So when we want to manage a bunch of Docker containers together, we group the machines into a cluster and manage them from there.
Officially, Docker Swarm is an orchestration tool that is used to group, deploy, and update multiple containers. People usually make use of it when they need to deploy an application with multiple containers.
There are two types of nodes on a Docker swarm:
- Manager Nodes – Used to manage the cluster and dispatch tasks to the other nodes.
- Worker Nodes – Used to perform the tasks assigned to them.
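Just to give you a taste (a sketch; web is a placeholder service name), initialising a swarm and running a replicated service looks like this:
$ sudo docker swarm init
$ sudo docker service create --replicas 3 --name web httpd
$ sudo docker service ls
swarm init turns the current machine into a manager node, and service create asks the swarm to keep three replicas of the httpd container running across the cluster.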
Docker Architecture
Let’s explore the architecture of Docker. Since we now know about all of its components, we will understand the architecture much better.
Docker has three main parts: the Docker CLI, which allows us to communicate our requests to Docker; the Docker Host, which performs all the processing and creation of objects; and the Docker Registry, a dedicated storage place for Docker images. And of course there is also the Docker API, which handles all the communication between them.
Let’s consider three commands here:
- $ docker build
- $ docker pull
- $ docker run
And now let’s study what happens when each of these commands is executed.
$ docker build
This command is used to give the order to build an image. When we run docker build through the Docker CLI, the request is communicated to the Docker daemon, which processes it, i.e. reads the instructions and then creates an image accordingly.
Let’s say that the image to be created is based on Ubuntu. We tell Docker to build it using the command: $ sudo docker build -t ubuntu . (the trailing dot is the build context, the directory containing your Dockerfile). Once the daemon receives the request, it will start building the image based on the Dockerfile you have written.
$ docker pull
This command is used to give the order to pull an image from the Docker registry. When we run it, the request is communicated to the registry through the Docker daemon, and once the image is found it is downloaded and stored on the host system, ready for use.
Let’s say we want to pull an Apache web server image for hosting our website. For that we use the command: $ sudo docker pull httpd (httpd is the name of the official Apache HTTP Server image on Docker Hub). Once the daemon gets the request, it will look for that image in the Docker registry, and if it finds it, it will download the image and store it locally.
$ docker run
This command is used to run an image and create a container out of it. When we run this command, the request is communicated to the Docker daemon, which then selects the image mentioned in the request and creates a container out of it.
Let’s say we want to create a container based on the ubuntu image we built earlier. For this we use the command:
$ sudo docker run ubuntu
Once the daemon gets this request, it will refer to the image named ubuntu and create a container out of it.
So this, in short, is how Docker functions as a tool to create containers.
Docker Common Commands
There are a few common commands you will need to know to get started. I will list each one and then explain what it does.
The following command prints the version of the Docker tool installed on your system, and it is also a good way to check whether you have Docker installed at all:
$ sudo docker --version
The following command is used to pull images from the Docker registry.
$ sudo docker pull <name of the image>
So this is what it should look like when you want to pull the Ubuntu image from the Docker registry:
$ sudo docker pull ubuntu
Both of the following commands list all the Docker images stored on the system you run them on.
$ sudo docker images
or $ sudo docker image ls
So if you had only one image (an Ubuntu one), the list would show just that single entry, with columns for the repository, tag, image ID, creation time, and size.
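For reference, the output is shaped roughly like this (the ID, age, and size below are made-up illustrative values):
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
ubuntu       latest   2b7cc08dcdbb   2 weeks ago   77.8MB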
The following command will run an existing image and create a container based on that image.
$ sudo docker run <name of the image>
So if you wanted to create an Ubuntu container based on an Ubuntu image it will look something like this:
$ sudo docker run -it -d ubuntu
Now here you will notice that I included two flags, -it and -d:
- -it – interactive flag (with a terminal attached); it allows you to interact with the container that was created.
- -d – detach flag; it allows the container to run in the background.
Once you have created a running container, you will want to know whether it has exited or is still running. For that you can use the following command, which lists all the running containers:
$ sudo docker ps
Now if you want to see both exited and running containers, you can go ahead and add the -a flag like so:
$ sudo docker ps -a
To stop a container you can use this command:
$ sudo docker stop <name of the container>
If you want to kill a container then you can use this command:
$ sudo docker kill <name of the container>
If you want to remove any container, whether stopped or running, you can go ahead and use this command:
$ sudo docker rm -f <name of the container>
Note: the stop, kill, and rm commands differ in that stop shuts a container down gracefully (giving its process time to exit), kill terminates the container immediately, and rm removes the container entirely, which is basically a clean-up operation.
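Putting these together, a typical container lifecycle session might look like this (demo is just a placeholder container name):
$ sudo docker run -it -d --name demo ubuntu
$ sudo docker ps
$ sudo docker stop demo
$ sudo docker rm demo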
Dockerfile
Now that we know the most basic Docker commands, we can move on to learning how to create our own image.
To create a new, custom image you need to write a text file called a Dockerfile. In this file, you mention all of the instructions that let Docker know what to include in the image.
To understand this better let’s look at an example.
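Here’s a small Dockerfile matching the description below (a sketch; the exact original file isn’t shown here):
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y apache2
WORKDIR /var/www/html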
Here we are trying to create an image with the latest version of Ubuntu as the base, running commands to update the instance and install the Apache web server on it. And finally, the working directory is set to /var/www/html.
These are called instructions. There are many different types of instructions to use, such as:
FROM
Syntax: FROM <image>
It is an instruction that informs Docker about the base image to be used for the container. So basically, if you have an image in mind whose properties you wish to inherit, you can mention it using this instruction. It is always the first instruction in a Dockerfile, but you can use it multiple times (for example, in multi-stage builds).
ADD
Syntax: ADD <source> <destination>
It is used to add new sources, from your local directory or from a URL, to the filesystem of the image at the designated location.
You can include multiple items as the source, you can even make use of wildcards, and if the destination you mention does not exist, Docker will create it.
COPY
Syntax: COPY <source> <destination>
It is used to copy new sources, from your local directory only, to the filesystem of the image at the designated location.
You can include multiple items as the source, you can even make use of wildcards, and if the destination you mention does not exist, Docker will create it.
It’s similar to ADD, the difference being that ADD can also fetch sources from a URL (and automatically extract local tar archives), whereas COPY can’t.
RUN
Syntax: RUN <command>
This instruction is used to run specific commands inside the image while it is being built. For example, if you want to update the Ubuntu package lists, you can use the instruction like so:
RUN apt-get update
WORKDIR
Syntax: WORKDIR <path>
This instruction sets the working directory, so that subsequent instructions (such as RUN, COPY, and CMD) execute in that specific directory during the build of the image.
CMD
Syntax: CMD ["executable", "param1", "param2"]
This instruction tells the container what command to run by default when it starts. Only the last CMD in a Dockerfile takes effect, and it can be overridden by arguments passed to docker run.
VOLUME
Syntax: VOLUME <path>
This instruction creates a mount point at the specified path, marking it as a location for externally mounted volumes.
EXPOSE
Syntax: EXPOSE <port>
This instruction tells Docker which port the container listens on. On its own this only applies to the internal container network; the host will not be able to reach the container on this port unless you publish it at run time (for example with the -p flag).
ENTRYPOINT
Syntax: ENTRYPOINT ["executable", "param1", "param2"]
This instruction sets the command that runs when your container starts, together with its parameters.
The difference between CMD and ENTRYPOINT is that the ENTRYPOINT command is not overridden at runtime; arguments passed to docker run (and any values given by a CMD instruction) are appended to the ENTRYPOINT as parameters instead.
LABEL
Syntax: LABEL <key>=<value>
This instruction is used to add metadata to your image. You need to make use of quotes and backslashes if you want to include spaces. If an image already carries a label with the same key, the new value replaces the old one. You can use the docker inspect command to see an image’s labels.
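To tie several of these instructions together, here is a sketch of a complete Dockerfile (app.sh is a hypothetical startup script you would provide alongside it):
FROM ubuntu:latest
LABEL version="1.0"
WORKDIR /app
COPY app.sh /app/
RUN chmod +x app.sh
EXPOSE 8080
ENTRYPOINT ["./app.sh"]
CMD ["--default-mode"]
Here ENTRYPOINT fixes the program that always runs, while CMD only supplies a default argument that docker run can override.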
Once you have written a Dockerfile, you can build an image from it using the docker build command like so:
$ sudo docker build -t <name of the image> .
Since instructions such as RUN, COPY, and ADD each create a new image layer, it is important to write your Dockerfile in as optimised a way as possible, using the fewest instructions you can.
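A common optimisation, for example, is chaining shell commands into a single RUN instruction so that they produce one layer instead of two:
RUN apt-get update && apt-get install -y apache2
instead of:
RUN apt-get update
RUN apt-get install -y apache2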
Conclusion
So, today we learned a lot about Docker: it is a tool we use to maintain consistency across the software development pipeline, and it also helps us manage and deploy software as microservices. It has many different components that help make it the amazing tool it is. I would recommend that anyone reading this keep learning more and more about Docker, especially if you want to get into DevOps, as what I have covered here is just a drop in the ocean.