If you’ve ever worked on a large piece of software, I’m sure you’ve endured the pain of setting up a complex development environment. Installing and configuring a database, message broker, web server, worker processes, local SMTP server (and who knows what else!) is time-consuming for every developer joining a project. This guide will show you how to set up a Docker development environment that lets you and new developers get up and running in minutes, even on the most complex system.
The code for the guide is available here.
Grab Docker for your operating system here. Docker is available for all modern operating systems. For most users, this will also include Docker Compose. Once installed, keep Docker running in the background to use Docker commands!
Your Dockerfile is the blueprint for your container. Use it to create your desired environment: install any language runtimes you need and any dependencies your project relies on. Luckily, most languages have a base image that you can inherit from. We’ll dig into this further with the Dockerfile example below.
Your Dockerfile doesn’t need to include any instructions for installing a database, cache server, or other tools. Each container should be built around a single process. Other processes would normally be defined in other Dockerfiles, but you don’t even need to worry about that here; in this example, we use 3 ready-made containers for our databases and message broker.
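As a sketch of what this looks like in practice, a Dockerfile matching the description below might be as simple as the following (the Node version tag is an assumption — pin whichever version your project targets):

```dockerfile
# Inherit from the official Node base image; the tag here is an assumption.
FROM node:8

# Install the Yarn package manager (preferred here over the default npm).
RUN npm install --global yarn
```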
The Dockerfile above does a couple of things. First, we inherit from the Node base image, which means we get the instructions from that image’s Dockerfile (including whatever base image it inherits from). Second, I install the Yarn package manager, since I prefer it over the default Node.js package manager. Note that while my preferred language here is Node.js, this guide is language-independent: set up your container for whatever language runtime you prefer to work in.
Give it a try: run docker-compose build and see what happens.
A few sections ago, I mentioned Docker Compose, a tool that lets you declaratively define your container formation. You can define multiple process types that all run concurrently in different containers and communicate with one another over the network. Docker makes exposing interfaces between containers easy through what it calls links. The beauty here is that it’s as simple as working with multiple processes on a single machine, but you can be sure there are no tightly coupled components that might not work in a production environment!
Let’s walk through this example:
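The original compose file isn’t reproduced here, but based on the walkthrough that follows, it likely resembled this sketch. The service commands, entry-point filenames, port, image tags, and environment variable names (DATABASE_URL, REDIS_URL, AMQP_URL) are all assumptions:

```yaml
version: '2'

services:
  web:
    build: .
    command: node web.js      # entry-point filename is an assumption
    volumes:
      - .:/app
    working_dir: /app
    ports:
      - "3000:3000"
    links: [postgres, rabbitmq, redis]
    environment:
      DATABASE_URL: postgres://postgres@postgres:5432/postgres
      AMQP_URL: amqp://rabbitmq:5672
      REDIS_URL: redis://redis:6379

  # worker and clock are built from the same directory; only the command differs.
  worker:
    build: .
    command: node worker.js
    volumes:
      - .:/app
    working_dir: /app
    links: [postgres, rabbitmq, redis]
    environment:
      DATABASE_URL: postgres://postgres@postgres:5432/postgres
      AMQP_URL: amqp://rabbitmq:5672
      REDIS_URL: redis://redis:6379

  clock:
    build: .
    command: node clock.js
    volumes:
      - .:/app
    working_dir: /app
    links: [rabbitmq]
    environment:
      AMQP_URL: amqp://rabbitmq:5672

  shell:
    build: .
    command: bash
    volumes:
      - .:/app
    working_dir: /app
    links: [postgres, rabbitmq, redis]
    environment:
      DATABASE_URL: postgres://postgres@postgres:5432/postgres
      AMQP_URL: amqp://rabbitmq:5672
      REDIS_URL: redis://redis:6379

  postgres:
    image: postgres:9.6

  rabbitmq:
    image: rabbitmq:3.6

  redis:
    image: redis:3.2
```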
We have 7 different containers in our formation: web, clock, worker, shell, postgres, rabbitmq, and redis. That’s a lot! In a production environment, these processes might each run on a separate physical server, or they might all run on a single machine.
Notice how the web, clock, worker, and shell containers are all built from the current directory, so each of those 4 processes runs in a container created from the image we defined in our Dockerfile. The postgres, rabbitmq, and redis containers, on the other hand, are created from prebuilt images found on the Docker Store. Starting containers from these images is much quicker than installing each of the tools on your local machine.
Take a look at the volumes key. Here, we mount our current directory at /app. The working_dir key then indicates that all commands will be run relative to this directory.
Now, take a look at the links key present on the locally built containers. This exposes a network interface between the container and the containers listed. Notice how we use the name of the link as the hostname in our environment variables: we link the containers, then expose the URI for each of our linked services as an environment variable.
Try running one of the services with the command docker-compose up web.
Our server architecture includes 3 process types that run your application code: a web process that serves web requests and pushes work onto a job queue; a worker process that pulls jobs off the queue and does the work; and a clock process that is effectively a cron runner, pushing work onto the job queue on a schedule.
Our architecture also includes 3 other services that you commonly see in web server architecture: a Postgres database, a Redis datastore, and a RabbitMQ message broker.
Here’s a minimal implementation of the 3 aforementioned processes that also showcases the usage of our 3 data backends:
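The original code isn’t shown here; below is a hedged sketch of what the web process might look like. The filename, queue name, table name, and the DATABASE_URL/REDIS_URL/AMQP_URL variables are assumptions, and it presumes the express, pg, callback-based redis (v3), and amqplib packages are installed and the composed services are running:

```javascript
// web.js — serves web requests and pushes work onto the job queue (a sketch).
const express = require('express');
const { Pool } = require('pg');
const redis = require('redis');
const amqp = require('amqplib');

const app = express();
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const cache = redis.createClient(process.env.REDIS_URL);
let channel; // AMQP channel, set once the connection is established below

// Insert :something into Postgres, then render everything inserted so far.
// (Creating the hypothetical "things" table is omitted here.)
app.get('/postgres/:something', (req, res) => {
  pool.query('INSERT INTO things (name) VALUES ($1)', [req.params.something])
    .then(() => pool.query('SELECT name FROM things'))
    .then(result => res.send(result.rows.map(r => r.name).join('\n')));
});

// Count visits to this page in Redis and display the count.
app.get('/redis', (req, res) => {
  cache.incr('visits', (err, count) => res.send(`Visits: ${count}`));
});

// Push :msg onto the job queue for the worker process to log.
app.get('/rabbit/:msg', (req, res) => {
  channel.sendToQueue('jobs', Buffer.from(req.params.msg));
  res.send('queued');
});

amqp.connect(process.env.AMQP_URL)
  .then(conn => conn.createChannel())
  .then(ch => { channel = ch; return ch.assertQueue('jobs'); })
  .then(() => app.listen(3000));
```

Along the same lines, worker.js would consume the same queue (channel.consume with a console.log and an ack), and clock.js would call channel.sendToQueue on a one-minute setInterval.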
There are example endpoints for each of the different components of our architecture. Visiting /postgres/:something inserts something into the Postgres database and renders a view containing all of the table’s contents. Visiting /redis counts the number of visits to that page and displays the count. Visiting /rabbit/:msg sends a message to the worker process; check the terminal logs to see it arrive. The clock process also runs continuously, sending a message to the worker process once every minute. Not bad for a one-minute setup!
I like to write a simple script so I don't have to memorize as many commands:
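The script itself isn’t reproduced here; this is a sketch of what such a manage.sh wrapper might look like. The subcommand names are taken from the usage below, but their exact docker-compose invocations are assumptions:

```shell
#!/bin/bash
# manage.sh — a thin wrapper around docker-compose (a sketch; adjust to taste)
set -e

case "$1" in
  start) docker-compose up web worker clock ;;       # run the full stack
  build) docker-compose build ;;                     # rebuild after Dockerfile/dependency changes
  shell) docker-compose run --rm shell ;;            # interactive session in the shell container
  run)   shift; docker-compose run --rm shell "$@" ;; # one-off command in the container
  *)     echo "Usage: ./manage.sh {start|build|shell|run <command>}" ;;
esac
```

Remember to make it executable with chmod +x manage.sh before running it.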
Done! Now we don’t need to worry about remembering docker-compose commands; to run our entire server stack, we simply run ./manage.sh start. If we need to build our containers again because we changed our Dockerfile or need to install new dependencies, we can run ./manage.sh build.
Our shell container exists so that we can shell into our environment or run one-off commands in the context of our container. Using the script above, you can run ./manage.sh shell to start a terminal session in the container. If you want to run a single command, use ./manage.sh run <command>.
If you're familiar with the difficulty caused by complex development environments running on your local machine, then investigating a Docker-powered development environment could save you time. There is a bit of setup involved, but the productivity gained in the long term by using a tool like Docker pays for itself.