Step-by-step: Building a Node.js server (2021 edition) — Part 2/4 — Docker

Koa.js, GraphQL, MongoDB, Docker, Mocha, Typescript

If you want to use the server starter directly without going through the tutorial, you can find the code on Github. Links to the next parts are at the bottom of this page.

In Part I we built a basic Koa.js server with Typescript and improved our workflows with some tooling. The next step would be to set up a database to store and retrieve data. We will use MongoDB, a NoSQL database. But we’d like to have a single and simple way to install it on each developer’s machine instead of relying on tedious manual configuration. We’d also like to make the installation process deterministic, with configuration stored in files instead of set by the OS itself (environment variables).

So let’s containerize our apps: the existing server and the future MongoDB database. It will make installing the whole back-end (which will now be considered a suite of services) much easier, in a single step.

Requirements: knowing the basics of Docker and MongoDB. We won’t cover the installation of Docker as it is OS-dependent and requires specific tweaking for each setup. Please refer to the getting started and/or installation guides provided by Docker.

Robin level: dockerize the server

With docker installed on your machine (see the requirements above), create the following Dockerfile:
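The original embedded gist is not reproduced here; a minimal multi-stage Dockerfile consistent with the walkthrough below might look like this (Node 14, a `dist` build output folder, and an `npm run build` script are assumptions):

```dockerfile
FROM node:14 AS builder
WORKDIR /usr/src/app

COPY . .

RUN npm install && npm run build

FROM node:14-alpine
WORKDIR /usr/src/app
ARG SERVER_PORT
COPY package*.json ./
RUN npm install --production
COPY --from=builder /usr/src/app/dist ./dist
ENV SERVER_PORT=${SERVER_PORT}

EXPOSE ${SERVER_PORT}

CMD ["node", "dist/server.js"]
```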

(Update the Node and Typescript versions if needed.)

These are the instructions to dockerize the existing server. Let’s get through the file:

  1. We use a first container as the builder. It copies our entire working directory into /usr/src/app in the container (lines 2 & 4).
  2. It installs the devDependencies and builds the project (line 6).
  3. We use a second container (runtime-container) that only contains the generated files, with no “useless” dev dependencies (lines 8 to 14).
  4. On lines 10 and 16 we define and expose ${SERVER_PORT}. We won’t use it yet, and it will remain undefined. However, the Koa server (src/server.ts) defaults to 3100, and we can explicitly expose the container’s port with the -p argument when running Docker. We will only use the ${SERVER_PORT} argument later in this tutorial.

Run the following commands to build and run the server container:

docker build -t <name> .
docker run -p 3100:3100 -d <name>

Go to http://localhost:3100/health and check that we still have a 200 — OK response.

Batman level: dockerize the whole back-end

The benefit of using containers grows larger when our application is composed of many different services, like our back-end is. So let’s add a MongoDB service, in a container.

Add a docker-compose.yml file. Docker-compose files operate at a higher level than Dockerfiles: while a Dockerfile configures a single container/service, a docker-compose file describes all the containers/services at once.

Things only get slightly more complex from here on, I promise. I recommend keeping the Docker Compose API reference nearby if needed.
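The original gist isn’t embedded here; a docker-compose.yml along these lines is consistent with the setup described in this part (the variable names such as MONGO_USERNAME, the mongo-express settings, and the port mappings are assumptions):

```yaml
version: "3.4"

# Local variable (YAML anchor): an alias for the string "database".
x-database-name: &DB_NAME database

services:
  database:
    image: mongo:4
    environment:
      MONGO_INITDB_DATABASE: *DB_NAME
      MONGO_USERNAME: $MONGO_USERNAME
      MONGO_PASSWORD: $MONGO_PASSWORD
    volumes:
      # Initialization script, run only on first start (empty volume).
      - ./init-mongo.sh:/docker-entrypoint-initdb.d/init-mongo.sh:ro
      # Persist data on the host between runs; managed by Docker.
      - mongodb_data_container:/data/db
    ports:
      - "27017:27017"

  mongo-express:
    image: mongo-express
    depends_on:
      - database
    environment:
      ME_CONFIG_MONGODB_SERVER: *DB_NAME
      ME_CONFIG_BASICAUTH_USERNAME: $MONGO_EXPRESS_USERNAME
      ME_CONFIG_BASICAUTH_PASSWORD: $MONGO_EXPRESS_PASSWORD
    ports:
      - "8081:8081"

  server:
    build: .
    depends_on:
      - database
    environment:
      SERVER_PORT: $SERVER_PORT
    ports:
      - "3100:3100"

volumes:
  mongodb_data_container:
```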

  • Starting from docker-compose file format 3.4, we can set local variables. That’s what we’re doing here with DB_NAME, because we’re going to use it many times in the file. Consider it an alias for the string “database”.
  • We are setting up many Environment Variables for both containers (all the variables prefixed with a dollar ($) sign). More info on that below.
  • We are defining 3 containers/services: database, mongo-express and server.
  • For database we’re using the Docker image made by the community (mongo:4).
  • The database container is using volumes. The first volume is used to copy/paste a database initialization script into the container. More info on that below. The second one (mongodb_data_container) is used to persist data on the host when the container is turned off. It’s managed by Docker.
  • mongo-express is a “Web-based MongoDB admin interface” (repo). It will be super useful to browse the database from our machine through the database container. 🤯
  • For server we’re using the custom Docker image we prepared at the previous step with the Dockerfile (see line 39).

Here’s what it looks like:

The back-end architecture (Docker containers)

🔐 A way to manage environment variables

How do we set the variables mongo needs without hard-coding them in the docker-compose file, where they could end up pushed to the remote repository and exposed publicly? We can use .env files!

I recommend having a .gitignored .env file, copy/pasting the keys from a git-tracked sample.env file and filling the values with your own secrets. This way, each team member can have their own secret variables.

Not hard-coding the values in the docker-compose file will also be useful later when we deal with multiple environments (like development, testing, production).

So let’s write the sample.env file, duplicate it and rename the copy .env:
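Assuming the variable names used throughout this setup (they are illustrative, not prescribed), sample.env could look like this, with each developer filling in real values in their .env copy:

```
MONGO_USERNAME=your_username
MONGO_PASSWORD=your_password
MONGO_EXPRESS_USERNAME=mongo_express_user
MONGO_EXPRESS_PASSWORD=mongo_express_password
SERVER_PORT=3100
```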

Add this line to the .gitignore file if it’s not already there:

.env

A script to initialize our database

Before storing and retrieving data with MongoDB, we need a database and a user to interact with it. This is done by adding an initialization script to a specific volume in Docker (see line 18 of the docker-compose.yml file). Here is an example of a script adding a database and a user named as in the .env file. Feel free to tweak the script or to rewrite it. MongoDB init scripts can also be written in JavaScript (thread).
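A sketch of such an init-mongo.sh, assuming the MONGO_USERNAME and MONGO_PASSWORD variables are passed into the database container’s environment (MONGO_INITDB_DATABASE is set by the mongo image from the compose file):

```bash
#!/usr/bin/env bash
set -e

# Runs inside the mongo container the first time the data volume is
# created (scripts in /docker-entrypoint-initdb.d/ only run once).
mongo <<EOF
use $MONGO_INITDB_DATABASE
db.createUser({
  user: "$MONGO_USERNAME",
  pwd: "$MONGO_PASSWORD",
  roles: [{ role: "readWrite", db: "$MONGO_INITDB_DATABASE" }]
})
EOF
```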

Adding init-mongo.sh at the root is the only thing we need to do; Docker will copy it to the right place automatically. It runs only if the Docker volume is empty (so only the first time), which lets us keep working on the same database afterwards.

Final touches

Finally, add a .dockerignore file to avoid putting useless files in the containers:
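A minimal .dockerignore for this project might be (the exact entries depend on your repository):

```
node_modules
dist
.env
.git
```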

We are now ready to see if everything works! Start the containers by running:

docker-compose up [-d] [--build]

Add -d to start the containers in detached mode and --build to force a rebuild.

Give it a few seconds to pull the images and build the containers. You should then be able to:

  • Browse http://localhost:3100/health and get a 200 — OK as before. This is our server.
  • Browse http://localhost:8081 and log in, using the mongo express credentials you listed in the .env file. This is a way to visualize the database we created with our init-mongo.sh script and which is living inside the database container.
  • Note: To turn the containers off, use docker-compose down.

Note: it’s still possible to access the database container with a connection string, for example in MongoDB Compass from the host:

mongodb://your_username:your_password@localhost:27017/database?authSource=database

Injecting variables at runtime

Containers are great, but when developing it’s better to instantly see the changes we made to the code rather than rebuilding the containers and waiting for them to start.

So the ideal workflow would be to start only the database and optionally the mongo-express containers, and to keep our server on the host machine. But how will the same environment variables we defined in the .env file be injected into the server? There’s a package to answer that need:

npm i -D dotenv

We then have a small adjustment to make to our nodemon.json file. Replace the “exec” line with the following:

"exec": "npx ts-node --require dotenv/config ./src/server.ts"
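For context, the full nodemon.json might then look like this (the watch and ext settings are assumptions carried over from the Part I tooling):

```json
{
  "watch": ["src"],
  "ext": "ts",
  "exec": "npx ts-node --require dotenv/config ./src/server.ts"
}
```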

Dotenv will take the variables defined in our .env file and inject them into our server, each time it finds process.env.VARIABLE. Remember this line at the start of our src/server.ts file?

const port = process.env.SERVER_PORT || 3100
src/server.ts, line 6

Now the server will use the $SERVER_PORT variable defined in the .env file, instead of defaulting to 3100.
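The fallback behaviour can be made explicit in a small helper; `resolvePort` is a hypothetical function, not part of the starter code, shown here just to illustrate the env-var-with-default pattern:

```typescript
// Resolve the port to listen on: prefer SERVER_PORT from the
// environment (injected by dotenv), fall back to 3100 otherwise.
function resolvePort(env: Record<string, string | undefined>): number {
  const parsed = Number(env.SERVER_PORT);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 3100;
}

console.log(resolvePort({}));                      // 3100 (fallback)
console.log(resolvePort({ SERVER_PORT: "4000" })); // 4000
```

Passing `process.env` to such a helper keeps the default in one place and guards against non-numeric values.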

We can then safely run the following commands and properly develop our server locally, interacting with the containerized database:

docker-compose up [-d] [--build] database mongo-express
npm run start

Now our server and database are ready to talk to each other. 🕺🏻

In Part III, we will write some boilerplate code to set up GraphQL and MongoDB. Click the link below to go to Part III:

Software developer — theopnv.com