A Dive into Docker

With the rise of new development methodologies such as Continuous Delivery, long gone are the days where a Software Engineer pushes code into the abyss and hopes it comes out unscathed on the other side.  We are seeing a shift in the industry where the traditional walls between Development, Quality Assurance and Operations are slowly being broken down; these roles are merging and a new breed of Engineer is emerging.  The buzzword “DevOps” has become prominent in the industry and, as a result, we are seeing project development teams that are more agile, more efficient and able to respond more quickly to change.  This shift has led to a rise of new tools and frameworks that help us automate deployment, automate testing and standardise infrastructure.

One of the tools at the forefront of this transformation is Docker, an open platform for developers and sysadmins to build, ship, and run distributed applications.  Before diving further into this practical exercise I would suggest having a read of What is Docker?

Before beginning the exercise you will need to install Docker.  I use boot2docker on Mac OS X; for installation details for your platform visit Docker Installation.  Another option is to use a cloud provider to run your Docker host: Digital Ocean provide Docker-ready servers running in the cloud for as little as $0.007/hour, an especially attractive option if you are limited by bandwidth or resources.


A few basics

Docker Image

A Docker image is a read-only blueprint for a container; an example blueprint might be the Ubuntu operating system, or a CentOS one. Every container that you run in Docker is based on a Docker image.

Dockerfile

A Dockerfile contains instructions that tell Docker how to build a Docker image. Docker images are layered and so can be extended; this allows you to stack extra functionality on top of existing base images. A commonly used base image is ubuntu:latest, a blueprint of the base installation of an Ubuntu distribution.
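As a minimal sketch, a Dockerfile extending ubuntu:latest might look like this (the installed package and default command are purely illustrative):

```dockerfile
# Start from the Ubuntu base image on Docker Hub
FROM ubuntu:latest

# Stack extra functionality on top of the base image as a new layer
RUN apt-get update && apt-get install -y curl

# Default command to run when a container is started from this image
CMD ["curl", "--version"]
```

Each instruction produces a new layer on top of the previous one, which is what makes images cheap to extend and rebuild.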

Docker Container

A Docker container can be thought of as a lightweight, self-contained instance of a virtual machine running a Linux distribution (usually with modifications); containers are extremely cheap to start and stop.  Docker containers are spawned from a Docker image and should be considered stateless/ephemeral resources.
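To illustrate how cheap containers are to create and destroy, a typical throwaway lifecycle looks something like this (a sketch assuming Docker is installed; the container name is illustrative):

```shell
# Spawn a container from the ubuntu image and run a one-off command
docker run --name throwaway ubuntu:latest echo "hello from a container"

# The container has exited; remove it, and any state inside it is gone
docker rm throwaway
```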

Docker Hub

Docker Hub brings Software Engineering DRY principles to the system infrastructure world. It is a global repository platform that holds Dockerfiles and images. There are already images available that run Ubuntu, Red Hat, MySQL, RabbitMQ, MongoDB and nginx, to name just a few.
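Images on Docker Hub can be searched for and pulled straight from the command line (a sketch assuming Docker is installed):

```shell
# Search Docker Hub for MySQL images
docker search mysql

# Pull the official MySQL image to the local image cache
docker pull mysql
```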


Diving into Docker

Let’s dive straight into Docker. We are going to build a simple infrastructure that hosts a self-contained instance of WordPress, a popular blogging tool used by many organisations and writers across the world.  The infrastructure will include an nginx server to route/proxy requests, a WordPress application server to host the user interface and a MySQL database to provide storage.  Once complete our infrastructure will look something like this:

[Diagram: nginx proxy routing requests to the WordPress application container, backed by the MySQL database container]


The database container

Let’s start by creating our MySQL database container. Luckily for us MySQL has already been “dockerised” and is available to pull via Docker Hub; the defaults are fine, so there is no need to write our own Dockerfile or build any new images.  A new container can be started using the docker run command.

The first run may take some time while images are downloaded, they will be cached for subsequent runs.

docker run --name wordpress-db -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql
So what just happened here?  We asked Docker to run a new container using the MySQL base image:
--name – the name to assign to the new container
-e – sets an environment variable for the container, in this case the root password for the MySQL instance; documentation for the available configuration options can be found in the MySQL Docker Hub documentation
-d – tells Docker to run the container in the background as a detached process
mysql – the name of the Docker image to use, pulled from Docker Hub
Edit: Please note that in order to maintain any data across containers, a VOLUME should be configured to ensure data persists.  For the sake of simplicity we will omit this here, but be aware that deployments involving state should carefully consider the durability of data across the life-cycle of containers.
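As a sketch of what that might look like, a host directory can be mounted over the MySQL data directory with the -v flag (the host path here is illustrative):

```shell
# Mount a host directory over /var/lib/mysql so the database
# files survive the container being removed and recreated
docker run --name wordpress-db \
  -e MYSQL_ROOT_PASSWORD=mysecretpassword \
  -v /srv/wordpress-db-data:/var/lib/mysql \
  -d mysql
```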

The application container

Now let’s move on to running the WordPress application container; again, this has already been “dockerised” and resides in the Docker Hub WordPress repository.
docker run --name wordpress-app --link wordpress-db:mysql -d wordpress
--link wordpress-db:mysql – this tells Docker to create a network link to the wordpress-db container (which we created earlier), making network communication possible between the two containers.  The value has two parts: the left-hand side signifies the container to connect to (wordpress-db), and the right-hand side is the hostname alias given to it inside this container (mysql)
Let’s now run docker ps to see what containers we have running:
docker ps

CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS               NAMES
c39600354fcb        wordpress:latest    "/entrypoint.sh apac   About a minute ago   Up About a minute   80/tcp              wordpress-app       
20e66802e914        mysql:latest        "/entrypoint.sh mysq   About a minute ago   Up About a minute   3306/tcp            wordpress-db        
We can see two containers running as expected on ports 80 and 3306. Let’s open a shell on the wordpress-app container and check that we can talk to wordpress-db:
docker exec -i -t wordpress-app bash
ping mysql
64 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.085 ms
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.127 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.108 ms

Excellent, the wordpress-app container can talk to the wordpress-db container.  Exit the bash session; if desired, you can check the logs for your running containers.

docker logs wordpress-app

Great, everything is looking good so far.


The nginx container

It is fairly common for web applications to be fronted by an HTTP proxy.  This provides advantages such as control of request routing, auditing, security, logging, caching, load balancing, hosting static content and more.  nginx is a commonly used HTTP proxy server.  As we are supplying custom nginx configuration, we will need to create a new Dockerfile that defines a new image containing it:

mkdir wordpress-nginx
cd wordpress-nginx
vi default.conf

server {
    listen       80;
    server_name  localhost;

    error_log /var/log/nginx/error.log warn;

    location / {
        proxy_pass http://wordpress-app:80/;
        proxy_redirect http://server_name http://wordpress-app:80/;
        proxy_set_header   Host               $host;
        proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto  http;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
Notice we have routed inbound requests from / to the wordpress-app container on port 80. Now let’s create a Dockerfile that defines how to build our nginx container image:
vi Dockerfile

FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
FROM nginx – the FROM instruction tells Docker to pull the nginx base image from Docker Hub
COPY default.conf /etc/nginx/conf.d/default.conf – this instruction takes the file default.conf from the current directory and copies it into the container image at /etc/nginx/conf.d/default.conf
Now all that is left is to build our new docker image and run a container using the image:
docker build -t wordpress-nginx .
docker run -d --name=wordpress-nginx --link=wordpress-app:wordpress-app -p 80:80 wordpress-nginx
docker ps

CONTAINER ID        IMAGE                    COMMAND                CREATED             STATUS              PORTS                         NAMES
2b9f99664249        wordpress-nginx:latest   "nginx -g 'daemon of   3 seconds ago       Up 2 seconds        443/tcp, 0.0.0.0:80->80/tcp   wordpress-nginx     
c39600354fcb        wordpress:latest         "/entrypoint.sh apac   9 minutes ago       Up 3 minutes        80/tcp                        wordpress-app       
20e66802e914        mysql:latest             "/entrypoint.sh mysq   9 minutes ago       Up 4 minutes        3306/tcp                      wordpress-db        
You may notice we gave the argument -p 80:80; this tells Docker to publish port 80 of the container on port 80 of the Docker host, so it can be accessed externally.

Hey Presto

Now browse to http://DOCKER_HOST_IP/ in your browser and voilà, WordPress is ready to go. Follow the WordPress setup prompts to configure your instance and you should soon see the following page:

[Screenshot: WordPress admin console]

So to recap, we have learnt some of the fundamental concepts of Docker by making practical use of the resources available on Docker Hub to build a self-contained running instance of WordPress, all with just a few Docker commands. I hope this post serves as a good introduction for you to start Dockerising your own application infrastructure and to reap the many benefits that Docker brings.

If you enjoyed this post, I’d be very grateful if you’d help it spread by emailing it to a friend, or sharing it on Twitter or LinkedIn. Thank you for reading!

Edit: This post is also available in Chinese; thank you to dockerone.com for the translation – 深入浅出Docker (translated by 崔婧雯, proofread by 李颖杰)

5 thoughts on “A Dive into Docker”

  1. Nice introduction, especially the basics.

    I do have some remarks about the nginx/wordpress/mysql setup – although they already touch on more advanced usage:

    – You create a new nginx image with specific configuration baked into it. Configuration is state, and docker images/containers in general should be stateless. It would be better to store the configuration file externally and mount it as a volume with “–volume /pathto/config/default.conf:/etc/nginx/conf.d/default.conf”
    – while “links” would initially seem like a good idea, they are only good for short-lived containers. The problem is that when you would restart your mysql container, it would get a different IP, and your wordpress would not be updated.
    – The mysql and wordpress data directories should be mounted as volumes, so you can stop and delete the containers, upgrade your image and start a newer version of it without losing data.


    • Hello Bart,

      Welcome and thank you for reading my post, some excellent considerations that you have raised, especially if you wanted to run this in any kind of production environment. (I have now added an edit that mentions the importance of VOLUME for persistent data)

      In regards to using links you’re totally right; I have recently come across this issue in projects. I think a more elegant solution is to use a private DNS server to resolve the IPs dynamically. Another thing I have found to be really handy is using Fig to sit on top of Docker; this has taken away some of the pain of managing links, but not all.

      Regards
      Rama

