UPDATE: Since the time of writing this post, Docker has become much more mainstream. Some of the APIs have evolved and there is also native installation options for Mac available (instead of boot2docker). The docker tutorials are the best way to get started, https://docs.docker.com/get-started/. In addition docker-compose is useful for spinning up groups of containers as demonstrated in this post.
With the rise of new development methodologies such as Continuous Delivery, long gone are the days when a Software Engineer pushes code into the abyss and hopes it comes out unscathed on the other side. We are seeing a shift in the industry where the traditional walls between Development, Quality Assurance and Operations are slowly being broken down; these roles are merging and a new breed of Engineer is emerging. The buzzword “DevOps” has become prominent in the industry, and as a result we are seeing project development teams that are more agile, more efficient and able to respond more quickly to change. This shift has led to a rise of new tools and frameworks that help us automate deployment, automate testing and standardise infrastructure.
One of the tools at the forefront of this transformation is Docker, an open platform for developers and sysadmins to build, ship, and run distributed applications. Before diving further into this practical exercise I would suggest having a read of What is Docker?
Before beginning the exercise you will need to install Docker. I use boot2docker on Mac OS X; for installation details for your platform visit Docker Installation. Another option is to use a cloud provider to run your Docker host: Digital Ocean provides Docker-ready servers in the cloud for as little as $0.007/hour, which is an especially attractive option if you are limited by bandwidth or local resources.
A few basics
Docker Image
A Docker image is a read-only blueprint for a container; an example blueprint might be the Ubuntu operating system, or CentOS. Every container that you run in Docker is based on a Docker image.
Dockerfile
A Dockerfile contains the instructions that tell Docker how to build a Docker image. Docker images are layered and so can be extended, which allows you to stack extra functionality on top of existing base images. A commonly used base image is ubuntu:latest, which is a blueprint of the base installation of an Ubuntu distribution.
Docker Container
A Docker container can be thought of as a lightweight, self-contained instance of a virtual machine running a Linux distribution (usually with modifications); containers are extremely cheap to start and stop. Docker containers are spawned from a Docker image, and they should be treated as stateless, ephemeral resources.
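To get a feel for how cheap containers are to create and throw away, you can run a one-off command in a fresh container. This is a sketch that assumes a working Docker installation with access to the ubuntu image:

```shell
# Run a single command in a brand-new container; --rm removes the
# container as soon as the command exits, so nothing is left behind.
docker run --rm ubuntu:latest echo "hello from an ephemeral container"

# Listing all containers (including stopped ones) shows it is gone.
docker ps -a
```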
Docker Hub
Docker Hub brings the Software Engineering DRY principle to the systems infrastructure world: it is a global repository platform that holds Dockerfiles and images. There are already images available for ubuntu, redhat, mysql, rabbitmq, mongodb and nginx, to name just a few.
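For example, you can search Docker Hub from the command line and pull an image ahead of time so it is cached locally (a sketch; the image names shown are the official repositories):

```shell
# Search Docker Hub for MySQL images (official and community).
docker search mysql

# Download the latest ubuntu image so it is cached on this Docker host.
docker pull ubuntu:latest

# List the images now available locally.
docker images
```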
Diving into Docker
The database container
Let’s start by creating our MySQL database container. Luckily for us, MySQL has already been “dockerised” and is available to pull from Docker Hub; the defaults are fine, so there is no need to write our own Dockerfile or build any new images. A new container can be started using the docker run command.
The first run may take some time while images are downloaded, they will be cached for subsequent runs.
docker run --name wordpress-db -e MYSQL_ROOT_PASSWORD=mysecretpassword -d mysql
--name | the name to assign to the new container |
-e | sets an environment variable for the container, in this case the root password for the MySQL instance; documentation for the available configuration can be found in the MySQL Docker Hub documentation |
-d | tells Docker to run the container in the background as a detached process |
mysql | the name of the Docker image to use; this is pulled from Docker Hub |
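Before moving on it is worth confirming that MySQL started cleanly. A quick sketch of how you might check (the exact log wording depends on the version of the mysql image):

```shell
# The container should appear with a status of "Up ...".
docker ps

# Tail the MySQL startup output; look for a "ready for connections"
# style message indicating the server is accepting clients.
docker logs wordpress-db
```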
The application container
Next, let’s start the WordPress application container, again using an official image pulled from Docker Hub:

docker run --name wordpress-app --link wordpress-db:mysql -d wordpress
--link wordpress-db:mysql | tells Docker to create a network link to the wordpress-db container (which we created earlier), making network communication possible between the two containers. The value has two parts: the left-hand side names the container to connect to (wordpress-db), and the right-hand side is the hostname alias under which it will be reachable from this container (mysql) |
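At the time of writing, --link works by injecting an /etc/hosts entry and a set of environment variables into the linking container. A sketch of how to inspect what the link provides (the exact variables will vary with the image versions in use):

```shell
# The alias "mysql" resolves via an entry Docker adds to /etc/hosts.
docker exec wordpress-app cat /etc/hosts

# Docker also exposes the linked container's connection details as
# MYSQL_* environment variables inside wordpress-app.
docker exec wordpress-app env | grep MYSQL
```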
docker ps

CONTAINER ID        IMAGE               COMMAND                CREATED              STATUS              PORTS      NAMES
c39600354fcb        wordpress:latest    "/entrypoint.sh apac   About a minute ago   Up About a minute   80/tcp     wordpress-app
20e66802e914        mysql:latest        "/entrypoint.sh mysq   About a minute ago   Up About a minute   3306/tcp   wordpress-db
docker exec -i -t wordpress-app bash
ping mysql

64 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.085 ms
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.127 ms
64 bytes from 172.17.0.2: icmp_seq=2 ttl=64 time=0.108 ms
Excellent, the wordpress-app container can talk to the wordpress-db container. Exit the bash session; if desired, you can check the logs for your running containers.
docker logs wordpress-app
Great, everything is looking good so far.
The nginx container
It is fairly common for web applications to be fronted by an HTTP proxy, which provides advantages such as control of request routing, auditing, security, logging, caching, load balancing, hosting of static content and more. Nginx is a widely used HTTP proxy server. As we are creating a custom nginx configuration, we will need to write a new Dockerfile that defines a new image containing it:
mkdir wordpress-nginx
cd wordpress-nginx
vi default.conf

server {
    listen 80;
    server_name localhost;
    error_log /var/log/nginx/error.log warn;

    location / {
        proxy_pass http://wordpress-app:80/;
        proxy_redirect http://server_name http://wordpress-app:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
vi Dockerfile

FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
FROM nginx | the FROM instruction tells Docker to pull the nginx base image from Docker Hub |
COPY default.conf /etc/nginx/conf.d/default.conf | takes the file default.conf from the current directory and copies it into the image at /etc/nginx/conf.d/ |
docker build -t wordpress-nginx .
docker run -d --name=wordpress-nginx --link=wordpress-app:wordpress-app -p 80:80 wordpress-nginx
docker ps

CONTAINER ID        IMAGE                    COMMAND               CREATED         STATUS         PORTS                         NAMES
2b9f99664249        wordpress-nginx:latest   "nginx -g 'daemon of  3 seconds ago   Up 2 seconds   443/tcp, 0.0.0.0:80->80/tcp   wordpress-nginx
c39600354fcb        wordpress:latest         "/entrypoint.sh apac  9 minutes ago   Up 3 minutes   80/tcp                        wordpress-app
20e66802e914        mysql:latest             "/entrypoint.sh mysq  9 minutes ago   Up 4 minutes   3306/tcp                      wordpress-db
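With all three containers up, the WordPress install screen should be reachable over plain HTTP on port 80 of the Docker host. A quick smoke test (if you are using boot2docker, substitute the VM’s IP, reported by boot2docker ip, for localhost):

```shell
# Fetch just the response headers through the nginx proxy; expect an
# HTTP 200 or a 30x redirect to the WordPress setup page.
curl -I http://localhost/
```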
Hey Presto
So to recap, we have learnt some of the fundamental concepts of Docker by making practical use of the resources available in Docker Hub to build a self-contained running instance of WordPress, all with just a few Docker commands. I hope this post will serve as a good introduction for you to start Dockerising your own application infrastructure and reaping the many benefits that Docker brings.
If you enjoyed this post, I’d be very grateful if you’d help it spread by emailing it to a friend, or sharing it on Twitter or LinkedIn. Thank you for reading!
Edit: This post is also available in Chinese, thank you to dockerone.com for the translation – 深入浅出Docker(翻译:崔婧雯校对:李颖杰)
Nice introduction, especially the basics.
I do have some remarks about the nginx/wordpress/mysql setup – although they already touch on more advanced usage:
– You create a new nginx image with specific configuration baked into it. Configuration is state, and Docker images/containers should in general be stateless. It would be better to store the configuration file externally and mount it as a volume with --volume /pathto/config/default.conf:/etc/nginx/conf.d/default.conf
– while “links” would initially seem like a good idea, they are only good for short-lived containers. The problem is that when you would restart your mysql container, it would get a different IP, and your wordpress would not be updated.
– The mysql and wordpress data directories should be mounted as volumes, so you can stop and delete the containers, upgrade your image and start a newer version of it without losing data.
Hello Bart,
Welcome and thank you for reading my post, some excellent considerations that you have raised, especially if you wanted to run this in any kind of production environment. (I have now added an edit that mentions the importance of VOLUME for persistent data)
In regards to using links you’re totally right; I have recently come across this issue in projects. I think a more elegant solution is to use a private DNS server to resolve the IPs dynamically. Another thing I have found really handy is using Fig on top of Docker; it has taken away some of the pain of managing links, but not all.
Regards
Rama
Thanks for your great article. I think it will be valuable for Chinese readers, so we have translated it and hope you can add a link in your article: http://dockerone.com/article/189
Hi DockerOne,
Thank you for your feedback and thanks for providing a translation on your website. I have added a link at the bottom of my article.
Kind Regards
Rama
Nice article! Well written and informative. You should follow up with an article on automating the three parts using Fig.