In this post, we'd like to share our experience of automating the start, testing, and configuration of big projects using docker-compose. A few simple changes to the development process have made our team more efficient and more focused on product development.
tl;dr: a handful of docker-compose techniques that let anyone on the team start, test, and configure a big project with a single command. An example project is available on GitHub.
Docker in 2017
According to Docker's CEO Ben Golub at DockerCon 2016, the number of applications running inside Docker has increased by 3100% over the last two years, and Docker powers some 460 thousand applications worldwide. It is unbelievable!
If you haven't adopted Docker yet, I would recommend reading this impressive whitepaper about Docker adoption. Docker has changed the way we build applications and has become a critical part of development and DevOps workflows. In this post, we take it for granted that you are already using Docker, and we are going to give you one more good reason to keep using it.
The issue
At the beginning of my career, when I was a developer building web products in C# and ASP.NET, starting an application in a development environment was not an easy task. You had to install the databases and tools the application needed to run, and update your local config files to match your machine's settings: the database port, the path to the local uploads folder, and so on. All these steps were usually poorly documented, so we spent an enormous amount of time just getting an app to start.
Any product is quite simple at the beginning, but it grows bigger with time, which means adding new tools to the project, such as another database or a message queue. With the growing popularity of microservices, applications are also often split into smaller services instead of one monolithic monster. Any of these changes usually requires the attention of the whole team: a developer who introduces a change that breaks local environments commonly writes a long email describing the steps required to keep the project working. Once, an overseas developer did a huge refactoring of the product, wrote an email with the steps needed to make local environments work again, and went to sleep. I guess you know what happened next. Right, he forgot to mention a few quite important details. As a result, the next day was a complete waste of time for most of the team.
Developers do not like to write documentation, and the steps for launching a project usually live only in the minds of team members. As a result, launching the project becomes a painfully difficult task, particularly for newcomers.
As an engineer, I like to automate everything around me. I believe that running, testing, and deploying an application should each be a single step. That allows the team to focus on the things that really matter: developing and improving the product. Automating everything was harder ten years ago, but it has become very simple now, and everyone should do it. The sooner the better.
Easy start with docker-compose
Docker-compose is a simple tool that lets you run multiple Docker containers with a single command. Before going into further details, I should tell you more about the project structure. We use a "monorepo", where the codebase of every service (web application, API, background processors) is stored in its own root-level directory. Every service has its own Dockerfile describing its dependencies. You can see how it looks by checking out our sample GitHub repo.
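To make this concrete, here is an approximate layout of such a monorepo. It is an illustration assembled from the paths used throughout this post, not an exact listing of the sample repo:

```
.
├── docker-compose.yml
├── docker-compose.local-tests.yml
├── .env
├── bin/
│   └── start.sh
├── web/
│   ├── Dockerfile.dev
│   └── src/
└── api/
    ├── Dockerfile.dev
    └── src/
```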
Let's start by automating a simple application that consists of a single Node.js service with MongoDB as its only dependency. All docker-compose configuration is described in a `docker-compose.yml` file, which is usually stored in the root directory of the repository.
```yaml
version: '2'
services:
  web:
    build:
      context: ./web
      dockerfile: Dockerfile.dev
    volumes:
      - "./web/src:/web/src"
    ports:
      - "8080:8080"
  mongo:
    command: mongod
    image: mongo:3.2.0
    ports:
      - "27100:27017" # map to a non-standard port to avoid conflicts with a locally installed MongoDB
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```
To start the project you need only one command:
```sh
$ docker-compose up
```
During the first start, all Docker containers will be built or downloaded. This should look easy to understand if you have used Docker before, but a few details still deserve attention:
- `context: ./web` — this is how you specify the path to the service's source code within the monorepo.
- `dockerfile: Dockerfile.dev` — we use a separate `Dockerfile.dev` for development environments. For production, we copy the source code into the image, while for development we mount it as a volume, so you don't need to rebuild the container every time the code changes. (A sketch of such a development Dockerfile follows this list.)
- `volumes: - "./web/src:/web/src"` — this is how the code is mounted into the container as a volume.
- Docker-compose automatically links containers with each other, so you can access MongoDB from the web service by the service name: `mongodb://mongo:27017`.
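For illustration, a development Dockerfile for the web service might look roughly like this. It is only a minimal sketch based on the description above, not the exact file from our repo; the Node base image, the presence of a package.json in ./web, and the npm start command are assumptions:

```dockerfile
# Dockerfile.dev (sketch) -- note there is no COPY of the application source:
# in development, docker-compose mounts ./web/src into the container instead.
FROM node:6

WORKDIR /web

# Install dependencies inside the image; they change far less often than the code.
COPY package.json ./
RUN npm install

EXPOSE 8080

# ./web/src is mounted over /web/src at run time, so code changes are picked up
# without rebuilding the container.
CMD ["npm", "start"]
```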
Always use the --build argument
By default, `docker-compose up` won't rebuild containers if they already exist on the host. You can force a rebuild with the `--build` argument. This is usually needed when the project's third-party dependencies or the Dockerfile itself change. In our team, we always run `docker-compose up --build`. Docker caches container layers nicely and won't rebuild them if nothing has changed. Using `--build` all the time may add a few seconds to the startup time, but it guarantees that you never run the application with outdated third-party dependencies.
Tip: You can abstract the way you start your project using a simple shell script like this one.
```sh
#!/bin/sh
docker-compose up --build "$@"
```
This gives you the freedom to change the options and tools used to run the application later on, while the command stays as simple as `./bin/start.sh`.
Partial start
In the docker-compose.yml example below, some services depend on each other. For instance:
```yaml
api:
  build:
    context: ./api
    dockerfile: Dockerfile.dev
  volumes:
    - "./api/src:/app/src"
  ports:
    - "8081:8081"
  depends_on:
    - mongo
```
In this case, the `api` service needs a database in order to run. When running docker-compose, you can append a service name to start only that service along with its dependencies: `docker-compose up api`. This command starts MongoDB first and the API afterwards. In large projects, there are always parts that you need only from time to time.
Different team members may need different parts of the application to do their work. For example, a frontend developer who works on the landing site doesn't need to run the entire project; they can start just the landing site.
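For instance, if the landing site were defined as its own service in docker-compose.yml (the service name `landing` here is hypothetical, not part of the example repo), that developer could start it and only its dependencies with:

```sh
$ docker-compose up --build landing
```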
>/dev/null noisy logs
Some tools produce a lot of logs that are hardly useful and only distract attention. In our example repository, we simply turned off MongoDB's logs by setting its logging driver to none:
```yaml
mongo:
  command: mongod
  image: mongo:3.2.0
  ports:
    - "27100:27017"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  logging:
    driver: none
```
Multiple docker-compose files
By default, `docker-compose up` looks for a `docker-compose.yml` file in the current directory.

In some cases (we'll talk about this in a moment) you might need multiple docker-compose config files. To use another configuration file, pass the `--file` argument to docker-compose:
```sh
docker-compose --file docker-compose.local-tests.yml up
```
So why might you need multiple config files? The first use case is splitting a big compose project into several smaller ones. Interestingly, even if you run compose files separately, you can still link services between them. For example, you can put infrastructure containers (databases, queues, etc.) and application containers into separate docker-compose files.
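A rough sketch of the application-side file under such a split is shown below; the file names, the project name `app`, and the network wiring are assumptions for illustration, not taken from our repo. The infrastructure file would define `mongo` as before and be started separately:

```yaml
# docker-compose.apps.yml (hypothetical) -- application containers only.
# The infrastructure file (with the mongo service) is started separately:
#   docker-compose --file docker-compose.infra.yml up -d
# Assuming COMPOSE_PROJECT_NAME=app, that project creates a network named
# app_default; joining it as an external network lets the services below
# reach the database by service name, e.g. mongodb://mongo:27017.
version: '2'
services:
  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    volumes:
      - "./api/src:/app/src"
    ports:
      - "8081:8081"
networks:
  default:
    external:
      name: app_default
```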
Running tests
Our test suite includes multiple kinds of tests: unit, integration, UI, and linting. Every service has its own set of tests. Integration and UI tests require the api and web frontend to be running.
At the beginning, it seemed that we had to run the tests every time we started the main compose file. We soon figured out that this was very time-consuming, and sometimes we needed a bit more control over which tests to run. So we defined a separate compose file specifically for the tests:
```yaml
version: '2'
services:
  api-tests:
    image: app_api
    command: npm run test
    volumes:
      - "./api/src:/app/src"
  web-tests:
    image: app_web
    command: npm run test
    volumes:
      - "./web/src:/app/src"
```
Our tests compose file requires the main docker-compose project to be up in order to run. Integration tests connect to the development version of the `api` service, and UI tests connect to the `web` frontend. Essentially, the tests compose file just starts containers from the images built by the main docker-compose file. When you need to run tests for only one service, you can use a partial start, for example:
```sh
docker-compose --file docker-compose.local-tests.yml up api-tests
```
This command runs only the `api` tests.
Containers prefix
By default, the names of all containers run by docker-compose are prefixed with the name of the parent directory (in our case, the checkout directory). That directory name can vary between development environments, so the tests compose file listed above might not work, because the prefix (`app_`) is used to refer to images built from the main docker-compose file. To keep the prefix consistent across environments, you can define an `.env` file in the directory from which you run docker-compose:
```
COMPOSE_PROJECT_NAME=app
```
This keeps the prefix the same for all containers regardless of the parent directory name.
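As a quick illustrative check (assuming the project name `app` as above), you can list the images the main compose file has built; their names match exactly what the tests compose file references in its image: fields:

```sh
# docker-compose tags the images it builds as <project>_<service>,
# e.g. app_api and app_web when COMPOSE_PROJECT_NAME=app.
docker images "app_*"
```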
Conclusion
Docker-compose is a very useful and flexible way of running your project.
When we onboard new developers, their first-day task is usually to ship a small feature or bug fix to production. Our getting started guide looks like this:
- Install Docker and Docker-compose
- Clone the GitHub repo
- Run `./bin/start.sh` in a terminal
To help you understand this post better, we have an example project in a GitHub repository. Share your experience and ask questions.
We hope this article has been useful and will help make your project better :)