
Use containers for Go development

Prerequisites

Work through the steps of the Run your image as a container module to learn how to manage the lifecycle of your containers.

Introduction

In this module, you'll take a look at running a database engine in a container and connecting it to the extended version of the example application. You will see some options for keeping persistent data and for wiring up the containers so they can talk to each other. Finally, you'll learn how to use Docker Compose to manage such multi-container local development environments effectively.

Local database and containers

The database engine you are going to use is called CockroachDB. It is a modern, cloud-native, distributed SQL database.

Instead of compiling CockroachDB from source or using the operating system's native package manager to install it, you are going to use the Docker image for CockroachDB and run it in a container.

CockroachDB is compatible with PostgreSQL to a significant extent and shares many conventions with it, particularly the default names of environment variables. So, if you are familiar with Postgres, don't be surprised to see familiar environment variable names. Go modules that work with Postgres, such as pgx, pq, GORM, and upper/db, also work with CockroachDB.
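
To see what this compatibility looks like in practice, here is a minimal, hypothetical sketch that opens a connection to a locally running CockroachDB node using the standard database/sql package and the lib/pq Postgres driver. It assumes an insecure single-node instance listening on localhost:26257, with the mydb database and the totoro user that are created later in this guide.

package main

import (
	"database/sql"
	"fmt"
	"log"

	// lib/pq speaks the Postgres wire protocol, which CockroachDB also understands.
	_ "github.com/lib/pq"
)

func main() {
	// A Postgres-style connection string pointed at CockroachDB's default port.
	connStr := "host=localhost port=26257 dbname=mydb user=totoro sslmode=disable"

	db, err := sql.Open("postgres", connStr)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Ping verifies that the server is reachable and speaks the protocol the driver expects.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to CockroachDB with a Postgres driver")
}

The same connection string would work against a regular PostgreSQL server; only the host and port would differ.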

For more information on the relationship between Go and CockroachDB, refer to the CockroachDB documentation, although this isn't necessary to continue with this guide.

Storage

The whole point of a database is to have a persistent store of data. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. So, before you start CockroachDB, create a volume for it.

To create a managed volume, run:

$ docker volume create roach
roach

You can view the list of all managed volumes in your Docker instance with the following command:

$ docker volume list
DRIVER    VOLUME NAME
local     roach

Networking

The example application and the database engine are going to talk to one another over the network. There are different kinds of network configuration possible, and you're going to use what's called a user-defined bridge network. It provides you with a DNS lookup service so that you can refer to your database engine container by its hostname.

The following command creates a new bridge network named mynet:

$ docker network create -d bridge mynet
51344edd6430b5acd121822cacc99f8bc39be63dd125a3b3cd517b6485ab7709

Just as with managed volumes, there is a command to list all networks set up in your Docker instance:

$ docker network list
NETWORK ID     NAME          DRIVER    SCOPE
0ac2b1819fa4   bridge        bridge    local
51344edd6430   mynet         bridge    local
daed20bbecce   host          host      local
6aee44f40a39   none          null      local

Your bridge network mynet has been created successfully. The other three networks, named bridge, host, and none, are the default networks and were created by Docker itself. While it isn't relevant to this guide, you can learn more about Docker networking in the networking overview section.

Choose good names for volumes and networks

As the saying goes, there are only two hard things in computer science: cache invalidation and naming things. And off-by-one errors.

When choosing a name for a network or a managed volume, it's best to pick a name that indicates its intended purpose. This guide aims for brevity, so it uses short, generic names.

Start the database engine

Now that the housekeeping chores are done, you can run CockroachDB in a container and attach it to the volume and network you just created. When you run the following command, Docker pulls the image from Docker Hub and runs it for you locally:

$ docker run -d \
  --name roach \
  --hostname db \
  --network mynet \
  -p 26257:26257 \
  -p 8080:8080 \
  -v roach:/cockroach/cockroach-data \
  cockroachdb/cockroach:latest-v20.1 start-single-node \
  --insecure

# ... output omitted ...

Notice the clever use of the tag latest-v20.1 to make sure you're pulling the latest patch version of 20.1. The diversity of available tags depends on the image maintainer. Here, your intent was to have the latest patched version of CockroachDB while not straying too far away from the known-working version over time. To see the tags available for the CockroachDB image, go to the CockroachDB page on Docker Hub.

Configure the database engine

Now that the database engine is live, there is some configuration to do before your application can start using it. Fortunately, it's not a lot. You must:

  1. Create a blank database.
  2. Register a new user account with the database engine.
  3. Grant that new user access rights to the database.

You can do that with the help of CockroachDB's built-in SQL shell. To start the SQL shell in the same container where the database engine is running, type:

$ docker exec -it roach ./cockroach sql --insecure

  1. In the SQL shell, create the database that the example application is going to use:

     CREATE DATABASE mydb;

  2. Register a new SQL user account with the database engine. Use the username totoro.

     CREATE USER totoro;

  3. Give the new user the necessary permissions:

     GRANT ALL ON DATABASE mydb TO totoro;

  4. Type quit to exit the shell.

Here is an example of interacting with the SQL shell.

$ sudo docker exec -it roach ./cockroach sql --insecure
#
# Welcome to the CockroachDB SQL shell.
# All statements must be terminated by a semicolon.
# To exit, type: \q.
#
# Server version: CockroachDB CCL v20.1.15 (x86_64-unknown-linux-gnu, built 2021/04/26 16:11:58, go1.13.9) (same version as client)
# Cluster ID: 7f43a490-ccd6-4c2a-9534-21f393ca80ce
#
# Enter \? for a brief introduction.
#
root@:26257/defaultdb> CREATE DATABASE mydb;
CREATE DATABASE

Time: 22.985478ms

root@:26257/defaultdb> CREATE USER totoro;
CREATE ROLE

Time: 13.921659ms

root@:26257/defaultdb> GRANT ALL ON DATABASE mydb TO totoro;
GRANT

Time: 14.217559ms

root@:26257/defaultdb> quit
oliver@hki:~$

Meet the example application

Now that you have started and configured the database engine, you can switch your attention to the application.

The example application for this module is an extended version of the docker-gs-ping application you used in the previous modules. You have two options:

  • You can update your local copy of docker-gs-ping to match the new extended version presented in this chapter; or
  • You can clone the docker/docker-gs-ping-dev repository. The latter approach is recommended.

To check out the example application, run:

$ git clone https://github.com/docker/docker-gs-ping-dev.git
# ... output omitted ...

The application's main.go now includes database initialization code, as well as the code to implement a new business requirement:

  • An HTTP POST request to /send containing a { "value" : string } JSON must save the value to the database.

You also have an update for another business requirement. The requirement was:

  • The application responds with a text message containing a heart symbol ("<3") to requests to /.

And now it's going to be:

  • The application responds with a string containing the count of messages stored in the database, enclosed in parentheses.

    Example output: Bonjour, Docker ! (7)

The complete source code listing of main.go follows.

package main

import (
	"context"
	"database/sql"
	"fmt"
	"log"
	"net/http"
	"os"

	"github.com/cenkalti/backoff/v4"
	"github.com/cockroachdb/cockroach-go/v2/crdb"
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {

	e := echo.New()

	e.Use(middleware.Logger())
	e.Use(middleware.Recover())

	db, err := initStore()
	if err != nil {
		log.Fatalf("échec de l'initialisation du magasin : %s", err)
	}
	defer db.Close()

	e.GET("/", func(c echo.Context) error {
		return rootHandler(db, c)
	})

	e.GET("/ping", func(c echo.Context) error {
		return c.JSON(http.StatusOK, struct{ Status string }{Status: "OK"})
	})

	e.POST("/send", func(c echo.Context) error {
		return sendHandler(db, c)
	})

	httpPort := os.Getenv("HTTP_PORT")
	if httpPort == "" {
		httpPort = "8080"
	}

	e.Logger.Fatal(e.Start(":" + httpPort))
}

type Message struct {
	Value string `json:"value"`
}

func initStore() (*sql.DB, error) {

	pgConnString := fmt.Sprintf("host=%s port=%s dbname=%s user=%s password=%s sslmode=disable",
		os.Getenv("PGHOST"),
		os.Getenv("PGPORT"),
		os.Getenv("PGDATABASE"),
		os.Getenv("PGUSER"),
		os.Getenv("PGPASSWORD"),
	)

	var (
		db  *sql.DB
		err error
	)
	openDB := func() error {
		db, err = sql.Open("postgres", pgConnString)
		return err
	}

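	// Retry opening the store with exponential backoff, in case the first attempts fail while the database is still starting up.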
	err = backoff.Retry(openDB, backoff.NewExponentialBackOff())
	if err != nil {
		return nil, err
	}

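	// Create the message table on startup if it doesn't exist yet, so a fresh database is usable immediately.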
	if _, err := db.Exec(
		"CREATE TABLE IF NOT EXISTS message (value TEXT PRIMARY KEY)"); err != nil {
		return nil, err
	}

	return db, nil
}

func rootHandler(db *sql.DB, c echo.Context) error {
	r, err := countRecords(db)
	if err != nil {
		return c.HTML(http.StatusInternalServerError, err.Error())
	}
	return c.HTML(http.StatusOK, fmt.Sprintf("Bonjour, Docker ! (%d)\n", r))
}

func sendHandler(db *sql.DB, c echo.Context) error {

	m := &Message{}

	if err := c.Bind(m); err != nil {
		return c.JSON(http.StatusInternalServerError, err)
	}

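	// crdb.ExecuteTx runs the closure in a transaction and retries it automatically on CockroachDB's retryable errors.
	// The ON CONFLICT clause below makes the insert an upsert, so posting the same value twice keeps a single row.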
	err := crdb.ExecuteTx(context.Background(), db, nil,
		func(tx *sql.Tx) error {
			_, err := tx.Exec(
				"INSERT INTO message (value) VALUES ($1) ON CONFLICT (value) DO UPDATE SET value = excluded.value",
				m.Value,
			)
			if err != nil {
				return c.JSON(http.StatusInternalServerError, err)
			}
			return nil
		})

	if err != nil {
		return c.JSON(http.StatusInternalServerError, err)
	}

	return c.JSON(http.StatusOK, m)
}

func countRecords(db *sql.DB) (int, error) {

	rows, err := db.Query("SELECT COUNT(*) FROM message")
	if err != nil {
		return 0, err
	}
	defer rows.Close()

	count := 0
	for rows.Next() {
		if err := rows.Scan(&count); err != nil {
			return 0, err
		}
		rows.Close()
	}

	return count, nil
}

The repository also includes the Dockerfile, which is almost exactly the same as the multi-stage Dockerfile introduced in the previous modules. It uses the official Docker Go image to build the application and then builds the final image by placing the compiled binary into the much slimmer, distroless image.

Regardless of whether you updated the old example application or checked out the new one, this new Docker image has to be built to reflect the changes to the application source code.

Build the application

You can build the image with the familiar build command:

$ docker build --tag docker-gs-ping-roach .

Run the application

Now, run your container. This time you'll need to set some environment variables so that your application knows how to access the database. For now, you'll do this right in the docker run command. Later you'll see a more convenient method with Docker Compose.

Note

Since you're running your CockroachDB cluster in insecure mode, the value for the password can be anything.

In production, don't run in insecure mode.

$ docker run -it --rm -d \
  --network mynet \
  --name rest-server \
  -p 80:8080 \
  -e PGUSER=totoro \
  -e PGPASSWORD=myfriend \
  -e PGHOST=db \
  -e PGPORT=26257 \
  -e PGDATABASE=mydb \
  docker-gs-ping-roach

There are a few points to note about this command.

  • You map container port 8080 to host port 80 this time. Thus, for GET requests you can get away with literally curl localhost:

    $ curl localhost
    Bonjour, Docker ! (0)
    

    Or, if you prefer, a proper URL would work just as well:

    $ curl http://localhost/
    Bonjour, Docker ! (0)
    
  • The total number of stored messages is 0 for now. This is fine, because you haven't posted anything to your application yet.

  • You refer to the database container by its hostname, which is db. This is why you had --hostname db when you started the database container.

  • The actual password doesn't matter, but it must be set to something to avoid confusing the example application.

  • The container you've just run is named rest-server. These names are useful for managing the container lifecycle:

    # Don't do this just yet, it's only an example:
    $ docker container rm --force rest-server
    

Test the application

In the previous section, you've already tested querying your application with GET and it returned zero for the stored message counter. Now, post some messages to it:

$ curl --request POST \
  --url http://localhost/send \
  --header 'content-type: application/json' \
  --data '{"value": "Bonjour, Docker !"}'

The application responds with the contents of the message, which means it has been saved in the database:

{ "value": "Bonjour, Docker !" }

Send another message:

$ curl --request POST \
  --url http://localhost/send \
  --header 'content-type: application/json' \
  --data '{"value": "Bonjour, Oliver !"}'

And again, you get the value of the message back:

{ "value": "Bonjour, Oliver !" }

Run curl and see what the message counter says:

$ curl localhost
Bonjour, Docker ! (2)

In this example, you sent two messages and the database kept them. Or did it? Stop and remove all your containers, but not the volumes, and try again.

First, stop the containers:

$ docker container stop rest-server roach
rest-server
roach

Then, remove them:

$ docker container rm rest-server roach
rest-server
roach

Verify that they're gone:

$ docker container list --all
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

And start them again, database first:

$ docker run -d \
  --name roach \
  --hostname db \
  --network mynet \
  -p 26257:26257 \
  -p 8080:8080 \
  -v roach:/cockroach/cockroach-data \
  cockroachdb/cockroach:latest-v20.1 start-single-node \
  --insecure

And the service next:

$ docker run -it --rm -d \
  --network mynet \
  --name rest-server \
  -p 80:8080 \
  -e PGUSER=totoro \
  -e PGPASSWORD=myfriend \
  -e PGHOST=db \
  -e PGPORT=26257 \
  -e PGDATABASE=mydb \
  docker-gs-ping-roach

Lastly, query your service:

$ curl localhost
Bonjour, Docker ! (2)

Great! The count of records from the database is correct, even though you didn't just stop the containers, you also removed them before starting new instances. The difference is in the managed volume for CockroachDB, which you reused. The new CockroachDB container has read the database files from the disk, just as it normally would if it were running outside the container.

Wind down everything

Remember that you're running CockroachDB in insecure mode. Now that you've built and tested your application, it's time to wind everything down before moving on. You can list the containers that you're running with the list command:

$ docker container list

Now that you know the container IDs, you can use docker container stop and docker container rm, as demonstrated in the previous modules.

Stop the CockroachDB and docker-gs-ping-roach containers before moving on.

Better productivity with Docker Compose

At this point, you might be wondering if there is a way to avoid having to deal with long lists of arguments to the docker command. The toy example you used in this series requires five environment variables to define the connection to the database. A real application might need many, many more. Then there is also a question of dependencies. Ideally, you want to make sure that the database is started before your application is run. And spinning up the database instance may require another Docker command with many options. But there is a better way to orchestrate these deployments for local development purposes.

In this section, you'll create a Docker Compose file to start your docker-gs-ping-roach application and CockroachDB database engine with a single command.

Configure Docker Compose

In your application's directory, create a new text file named compose.yaml with the following content.

version: "3.8"

services:
  docker-gs-ping-roach:
    depends_on:
      - roach
    build:
      context: .
    container_name: rest-server
    hostname: rest-server
    networks:
      - mynet
    ports:
      - 80:8080
    environment:
      - PGUSER=${PGUSER:-totoro}
      - PGPASSWORD=${PGPASSWORD:?database password not set}
      - PGHOST=${PGHOST:-db}
      - PGPORT=${PGPORT:-26257}
      - PGDATABASE=${PGDATABASE:-mydb}
    deploy:
      restart_policy:
        condition: on-failure
  roach:
    image: cockroachdb/cockroach:latest-v20.1
    container_name: roach
    hostname: db
    networks:
      - mynet
    ports:
      - 26257:26257
      - 8080:8080
    volumes:
      - roach:/cockroach/cockroach-data
    command: start-single-node --insecure

volumes:
  roach:

networks:
  mynet:
    driver: bridge

This Docker Compose configuration is super convenient as you don't have to type all the parameters to pass to the docker run command. You can declaratively do that in the Docker Compose file. The Docker Compose documentation pages are quite extensive and include a full reference for the Docker Compose file format.

The .env file

Docker Compose will automatically read environment variables from a .env file if it's available. Since your Compose file requires PGPASSWORD to be set, add the following content to the .env file:

PGPASSWORD=whatever

The exact value doesn't really matter for this example, because you run CockroachDB in insecure mode. Make sure you set the variable to some value to avoid getting an error.

Merging Compose files

The file name compose.yaml is the default file name that the docker compose command recognizes if no -f flag is provided. This means you can have multiple Docker Compose files if your environment has such requirements. Furthermore, Docker Compose files are... composable (pun intended), so multiple files can be specified on the command line to merge parts of the configuration together. The following list is just a few examples of scenarios where such a feature would be very useful:

  • Using a bind mount for the source code for local development but not when running the CI tests;
  • Switching between using a pre-built image for the frontend for some API application vs creating a bind mount for source code;
  • Adding additional services for integration testing;
  • And many more...

This guide doesn't cover any of these advanced use cases.

Variable substitution in Docker Compose

One of the really cool features of Docker Compose is variable substitution. You can see some examples in the Compose file, environment section. By means of an example:

  • PGUSER=${PGUSER:-totoro} means that inside the container, the environment variable PGUSER shall be set to the same value as it has on the host machine where Docker Compose is run. If there is no environment variable with this name on the host machine, the variable inside the container gets the default value of totoro.
  • PGPASSWORD=${PGPASSWORD:?database password not set} means that if the environment variable PGPASSWORD isn't set on the host, Docker Compose will display an error. This is OK, because you don't want to hard-code default values for the password. You set the password value in the .env file, which is local to your machine. It is always a good idea to add .env to .gitignore to prevent secrets from being checked into version control.

Other ways of dealing with undefined or empty values exist, as documented in the variable substitution section of the Docker documentation.

Validating Docker Compose configuration

Before you apply changes made to a Compose configuration file, there is an opportunity to validate the content of the configuration file with the following command:

$ docker compose config

When this command is run, Docker Compose reads the file compose.yaml, parses it into a data structure in memory, validates where possible, and prints back the reconstruction of that configuration file from its internal representation. If this isn't possible due to errors, Docker prints an error message instead.

Build and run the application using Docker Compose

Start your application and confirm that it's running.

$ docker compose up --build

You passed the --build flag, so Docker Compose builds the image before starting the containers.

Note

Docker Compose is a useful tool, but it has its own quirks. For example, no rebuild is triggered on the update to the source code unless the --build flag is provided. It is a very common pitfall to edit one's source code, and forget to use the --build flag when running docker compose up.

Since your set-up is now run by Docker Compose, it has been assigned a project name, so you got a new volume for your CockroachDB instance. This means that your application will fail to connect to the database, because the database doesn't exist in this new volume. The terminal displays an authentication error for the database:

# ... omitted output ...
rest-server             | 2021/05/10 00:54:25 failed to initialise the store: pq: password authentication failed for user totoro
roach                   | *
roach                   | * INFO: Replication was disabled for this cluster.
roach                   | * When/if adding nodes in the future, update zone configurations to increase the replication factor.
roach                   | *
roach                   | CockroachDB node starting at 2021-05-10 00:54:26.398177 +0000 UTC (took 3.0s)
roach                   | build:               CCL v20.1.15 @ 2021/04/26 16:11:58 (go1.13.9)
roach                   | webui:               http://db:8080
roach                   | sql:                 postgresql://root@db:26257?sslmode=disable
roach                   | RPC client flags:    /cockroach/cockroach <client cmd> --host=db:26257 --insecure
roach                   | logs:                /cockroach/cockroach-data/logs
roach                   | temp dir:            /cockroach/cockroach-data/cockroach-temp349434348
roach                   | external I/O path:   /cockroach/cockroach-data/extern
roach                   | store[0]:            path=/cockroach/cockroach-data
roach                   | storage engine:      rocksdb
roach                   | status:              initialized new cluster
roach                   | clusterID:           b7b1cb93-558f-4058-b77e-8a4ddb329a88
roach                   | nodeID:              1
rest-server exited with code 0
rest-server             | 2021/05/10 00:54:25 failed to initialise the store: pq: password authentication failed for user totoro
rest-server             | 2021/05/10 00:54:26 failed to initialise the store: pq: password authentication failed for user totoro
rest-server             | 2021/05/10 00:54:29 failed to initialise the store: pq: password authentication failed for user totoro
rest-server             | 2021/05/10 00:54:25 failed to initialise the store: pq: password authentication failed for user totoro
rest-server             | 2021/05/10 00:54:26 failed to initialise the store: pq: password authentication failed for user totoro
rest-server             | 2021/05/10 00:54:29 failed to initialise the store: pq: password authentication failed for user totoro
rest-server exited with code 1
# ... omitted output ...

Because of the way you set up your deployment using restart_policy, the failing container is being restarted every 20 seconds. So, in order to fix the problem, you need to log in to the database engine and create the user. You've done it before in Configure the database engine.

This isn't a big deal. All you have to do is connect to the CockroachDB instance and run the three SQL commands to create the database and the user, as described in Configure the database engine.

So, log in to the database engine from another terminal:

$ docker exec -it roach ./cockroach sql --insecure

And run the same commands as before to create the database mydb, the user totoro, and to grant that user the necessary permissions. Once you do that (and the example application container restarts automatically), the rest-server stops failing and restarting and the console goes quiet.

It would have been possible to reuse the volume you had created earlier, but for the purposes of this example it's more trouble than it's worth, and it also provides an opportunity to show how to introduce resilience into your deployment via the restart_policy Compose file feature.

Testing the application

Now, test your API endpoint. In the new terminal, run the following command:

$ curl http://localhost/

You should receive the following response:

Bonjour, Docker ! (0)
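
If you prefer to exercise the endpoints from Go rather than curl, here is a small, hypothetical sketch that posts a message and then reads the greeting back. It assumes the Compose stack is up and the service is published on port 80 of localhost, as in the Compose file above.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	// POST a message to /send, mirroring the curl example above.
	body := strings.NewReader(`{"value": "Bonjour, Docker !"}`)
	resp, err := http.Post("http://localhost/send", "application/json", body)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()

	// GET / to read the greeting with the current message count.
	resp, err = http.Get("http://localhost/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	greeting, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(greeting)) // e.g. "Bonjour, Docker ! (1)"
}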

Shutting down

To stop the containers started by Docker Compose, press ctrl+c in the terminal where you ran docker compose up. To remove those containers after they've been stopped, run docker compose down.

Detached mode

You can run containers started by the docker compose command in detached mode, just as you would with the docker command, by using the -d flag.

To start the stack, defined by the Compose file in detached mode, run:

$ docker compose up --build -d

Then, you can use docker compose stop to stop the containers and docker compose down to remove them.

Further exploration

You can run docker compose to see what other commands are available.

Wrap up

There are some tangential, yet interesting points that were purposefully not covered in this chapter. For the more adventurous reader, this section offers some pointers for further study.

Persistent storage

A managed volume isn't the only way to provide your container with persistent storage. It is highly recommended to get acquainted with available storage options and their use cases, covered in Manage data in Docker.

CockroachDB clusters

You ran a single instance of CockroachDB, which was enough for this example. But it's possible to run a CockroachDB cluster, which is made of multiple instances of CockroachDB, each instance running in its own container. Since the CockroachDB engine is distributed by design, it would take surprisingly few changes to this procedure to run a cluster with multiple nodes.

Such a distributed set-up offers interesting possibilities, such as applying Chaos Engineering techniques to simulate parts of the cluster failing and evaluating your application's ability to cope with such failures.

If you're interested in experimenting with CockroachDB clusters, the CockroachDB documentation covers running a multi-node cluster in Docker.

Other databases

Since you didn't run a cluster of CockroachDB instances, you might be wondering whether you could have used a non-distributed database engine. The answer is 'yes', and if you were to pick a more traditional SQL database, such as PostgreSQL, the process described in this chapter would have been very similar.

Next steps

In this module, you set up a containerized development environment with your application and the database engine running in different containers. You also wrote a Docker Compose file which links the two containers together and provides for easy starting up and tearing down of the development environment.

In the next module, you'll take a look at one possible approach to running functional tests in Docker.