Docker
Docker Reference Page
Updated: 03 September 2023
Based on this Cognitive Class Course
Prerequisites
- Docker
- WSL2 for Windows (ideally) (if using Windows)
Run a Container
Introduction
Containers are a group of processes that run in isolation; these processes must all be able to run on a shared kernel
Virtual machines are heavy and include an entire operating system, whereas containers contain only what is necessary. Containers do not replace virtual machines
Docker is a toolset for managing containers that integrates into our CI/CD pipelines, allowing us to ensure that all our running environments are identical. Furthermore, Docker provides a standard interface for developers to work with
Running a Container
To run a container on our local machine we use the Docker CLI
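A sketch of the command being described, using the ubuntu image and the top program mentioned below:

```shell
# Run the ubuntu image, executing top as the container's process
# -t allocates a pseudo-terminal so top can render its display
docker container run -t ubuntu top
```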
This command will look for the ubuntu image locally, which it will not find, and will then check for the image online, after which it will run the ubuntu container with the top command. This can be seen with the following output
It is important to note that the container does not have its own kernel but instead runs on the host kernel; the Ubuntu image only provides the file system and tools
We can view our running containers with
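The listing step above might look like:

```shell
# List containers that are currently running; add -a to include stopped ones
docker container ls
```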
We can interact with the container by executing a shell inside it, where

- `-it` states that we want to interact with the shell
- `be81304e2786` is the container ID
- `bash` is the tool we want to use to inspect our container
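Putting those pieces together, the command might look like this (the container ID comes from the text above; yours will differ):

```shell
# Open an interactive bash shell inside the running container
docker container exec -it be81304e2786 bash
```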
Now that we are in the container we can view our running processes with
To get out of our container and back to our host we run
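The two steps above, run from inside the container's shell, might be:

```shell
# Inside the container: list the processes running in its namespace
ps -ef

# Return to the host shell
exit
```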
Running Multiple Containers
Just run another container, basically
Nginx
Mongo
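The two additional containers might be started as follows (the host ports and container names here are illustrative):

```shell
# Run an Nginx container in the background, mapping host port 8080 to port 80
docker container run --detach --publish 8080:80 --name nginx nginx

# Run a MongoDB container, mapping host port 8081 to Mongo's default port 27017
docker container run --detach --publish 8081:27017 --name mongo mongo
```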
If you run into the following error, simply restart Docker
We can list running containers and inspect one that we chose with
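Listing and inspecting might look like this (the container name assumes the Nginx container above was named nginx; any name or ID works):

```shell
# List running containers, then inspect one by name or ID
docker container ls
docker container inspect nginx
```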
It is important to remember that each container includes all the dependencies that it needs to run
A list of available Docker images can be found here
Remove Containers
We can stop containers with
Then remove all stopped containers with
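The stop and clean-up steps might be (the container name is illustrative):

```shell
# Stop a container by name or ID
docker container stop nginx

# Remove all stopped containers (prompts for confirmation)
docker container prune
```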
CI/CD with Docker Images
Introduction
A Docker image is an archive of a container that can be shared, and from which containers can be created
Docker images can be shared via a central registry; the default registry for Docker is Docker Hub
To create an image we use a Dockerfile which has instructions on how to build our image
Docker images are made of layers; each image layer is built on top of the layers before it. Because of this, we only need to rebuild layers that have changed, so we try to keep the area where we are making modifications towards the bottom of our Dockerfile in order to prevent unnecessary layers from being rebuilt constantly
Create a Python App
Make a simple Python app, containing the following, in the directory that you want your app to be in
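A sketch of what app.py might contain, written here as a shell heredoc so it can be created from the same terminal; the route and message are illustrative, and binding to 0.0.0.0 is needed so the app is reachable from outside the container:

```shell
# Create a minimal Flask app that listens on all interfaces on port 5000
cat > app.py <<'EOF'
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "hello world!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
EOF
```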
This app will simply use Flask to expose a web server on port 5000 (the default Flask port)
Note that the concepts used for this app can be used for any application in any language
Create and Build the Docker Image
Create a file named Dockerfile in the same directory with the following contents
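Assembled from the instructions explained below, the Dockerfile looks like:

```dockerfile
FROM python:3.6.1-alpine

RUN pip install flask

CMD ["python", "app.py"]

COPY app.py /app.py
```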
So, what does this file do?
- `FROM python:3.6.1-alpine` is the starting point for our Dockerfile; each Dockerfile needs this to select the base layer we want for our application. We use the `-alpine` tag to ensure that changes to the parent dependency are controlled
- `RUN pip install flask` executes a command that is necessary to set up our image for our application, in this case installing a package
- `CMD ["python","app.py"]` is what is run when our container is started; this is only run once for a container. We are using it here to run our app.py, and we can leave it where it is since, unlike the other lines, it will not yield any changes to layers
- `COPY app.py /app.py` says that Docker should copy the file in the local directory to our image; this is at the end as it is our source code, which changes frequently and hence should affect as few layers as possible
From the directory of our application we can build our image
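The build step might look like this; the python-hello-world tag matches the repository name used later in this section:

```shell
# Build the image from the Dockerfile in the current directory and tag it
docker image build -t python-hello-world .
```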
If you run into the following error, you may need to ensure that your encoding is UTF-8
We can then view our image in the list with
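The listing step above might be:

```shell
# List the images available locally
docker image ls
```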
We can run our image with
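A sketch of the run command, using the tag from the build step and the ports described below:

```shell
# Run in the background, mapping host port 5001 to container port 5000
docker container run -p 5001:5000 -d python-hello-world
```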
The `-p` option maps port 5001 on our host to port 5000 of our container
Navigating to http://localhost:5001 with our browser, we should see the response from our application
If we do not get a response from our application, and our application is not shown in the list of running containers, we can view our logs for information. We use the string that was output when we ran docker run, as this is the ID of the container we tried to run
We can view our container logs with
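The logs command might look like this, with the container ID placeholder filled in from the docker run output:

```shell
# View the logs of a container by its ID
docker container logs <container-id>
```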
Push to a Central Registry
We can push our docker images to Docker Hub by logging in, tagging our image with our username, and then pushing the image
Note that `<USERNAME>/python-hello-world` refers to a repository to which we want to push our image
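The login, tag, and push steps described above might look like:

```shell
# Authenticate against Docker Hub
docker login

# Tag the local image into our Docker Hub repository, then push it
docker tag python-hello-world <USERNAME>/python-hello-world
docker push <USERNAME>/python-hello-world
```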
Thereafter we can log into Docker Hub via our browser and see the image
Deploy a Change
We can modify our app.py
file and simply rebuild and push our update
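The rebuild-and-push cycle might be:

```shell
# Rebuild the image with the same tag, then push the updated version
docker image build -t <USERNAME>/python-hello-world .
docker push <USERNAME>/python-hello-world
```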
We can view the history of our image with
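The history command might look like:

```shell
# Show the layers of an image and the instruction that created each one
docker image history <USERNAME>/python-hello-world
```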
Removing Containers
We can remove containers the same as before
Container Orchestration with Swarm
Introduction
Orchestration addresses issues like scheduling and scaling, service discovery, server downtime, high availability, and A/B testing
Orchestration solutions work by letting us declare our desired state, which they then maintain
Create a Swarm
We will be using Play-With-Docker for this part
Click on Add a new instance to add three nodes
Thereafter initialize a swarm on Node 1 with
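The initialization command might look like this; eth0 is the interface Play-With-Docker instances typically use, and yours may differ:

```shell
# Initialize this node as the first manager of a new swarm
# --advertise-addr picks the interface other nodes will use to reach it
docker swarm init --advertise-addr eth0
```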
This will output something like the following
We can then add a manager to the swarm with
We can then run the docker swarm join command from the other two nodes; then, on node 1, we can view the swarm with
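The commands involved might look like this; the token and manager IP placeholders come from the swarm init output on your own nodes:

```shell
# On node 1: print the command a new manager would use to join
docker swarm join-token manager

# On nodes 2 and 3: join the swarm using the command from the init output
docker swarm join --token <TOKEN> <MANAGER-IP>:2377

# Back on node 1: list the nodes in the swarm
docker node ls
```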
Deploy a Service
On node 1 we can create an nginx service
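A sketch of the service creation, where the service name nginx1 and the image tag are illustrative:

```shell
# Create an nginx service with one replica, publishing port 80 on the swarm
docker service create --detach=true --name nginx1 --publish 80:80 nginx:1.12
```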
We can then list the services we have created with
We can check the running container of a service with
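The two listing steps above might be (assuming the service was named nginx1):

```shell
# List the services in the swarm
docker service ls

# List the tasks (containers) of a specific service
docker service ps nginx1
```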
Because of the way the swarm works, if we send a request for a specific service it will automatically be routed to a container that has nginx running; we can test this from each node
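Testing from a node might look like:

```shell
# From any node in the swarm, requests to the published port reach the service
curl http://localhost:80
```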
Scale the Service
If we want to replicate our service instances we can do so with
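The scaling step might be (the replica count and service name are illustrative):

```shell
# Declare five replicas; Swarm creates containers until the state matches
docker service update --replicas=5 --detach=true nginx1
```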
When we update our service replicas, Docker Swarm recognises that we no longer match the declared service state and therefore creates more instances of the service
We can view the running services
We can send many requests to the node and we will see that the request is being handled by different nodes
We can view our service logs with
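The logs command might look like:

```shell
# Follow the aggregated logs from all of the service's replicas
docker service logs nginx1
```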
Rolling Updates
We can do a rolling update of a service with
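A sketch of a rolling update, where the new image version is illustrative:

```shell
# Roll the service over to a new image version
docker service update --image nginx:1.13 --detach=true nginx1
```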
We can fine-tune our update process with
- `--update-parallelism` specifies the number of containers to update immediately
- `--update-delay` specifies the delay between finishing updating a set of containers and moving on to the next set
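Combining those options, a fine-tuned rolling update might look like:

```shell
# Update two containers at a time, waiting 30 seconds between sets
docker service update --image nginx:1.13 \
  --update-parallelism 2 --update-delay 30s nginx1
```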
After a while we can view our nginx service instances to see that they have been updated
Reconciliation
Docker Swarm will automatically maintain the state we declare; for example, if a node goes down, the containers that were running on it will automatically be rescheduled onto other nodes
How Many Nodes?
We typically aim to have between three and seven manager nodes in order to correctly apply the consensus algorithm, which requires more than half of our manager nodes to be in agreement on the state. The following is advised
- Three manager nodes tolerate one node failure
- Five manager nodes tolerate two node failures
- Seven manager nodes tolerate three node failures
It is possible to have an even number of manager nodes, but this adds no additional value in terms of consensus
However, we can have as many worker nodes as we like; the number of workers is inconsequential to consensus