
Manage swarm service networks

This page describes networking for swarm services.

Swarm and types of traffic

A Docker swarm generates two different kinds of traffic:

  • Control and management plane traffic: This includes swarm management messages, such as requests to join or leave the swarm. This traffic is always encrypted.

  • Application data plane traffic: This includes container traffic and traffic to and from external clients.

Key network concepts

The following three network concepts are important to swarm services:

  • Overlay networks manage communications among the Docker daemons participating in the swarm. You can create overlay networks, in the same way as user-defined networks for standalone containers. You can also attach a service to one or more existing overlay networks, to enable service-to-service communication. Overlay networks are Docker networks that use the overlay network driver.

  • The ingress network is a special overlay network that facilitates load balancing among a service's nodes. When any swarm node receives a request on a published port, it hands that request off to a module called IPVS. IPVS keeps track of all the IP addresses participating in that service, selects one of them, and routes the request to it, over the ingress network.

    The ingress network is created automatically when you initialize or join a swarm. Most users do not need to customize its configuration, but Docker allows you to do so.

  • The docker_gwbridge is a bridge network that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network. By default, each container a service is running is connected to its local Docker daemon host's docker_gwbridge network.

    The docker_gwbridge network is created automatically when you initialize or join a swarm. Most users do not need to customize its configuration, but Docker allows you to do so.

Tip

See also Networking overview for more details about Swarm networking in general.

Firewall considerations

Docker daemons participating in a swarm need to be able to communicate with each other over the following ports:

  • Port 7946 TCP/UDP for container network discovery.
  • Port 4789 UDP (configurable) for the overlay network (including ingress) data path.
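As an illustrative sketch only, on a host that uses ufw (ufw is an assumption, not part of Docker; adapt the commands to your firewall), opening these ports might look like this:

```shell
# Illustrative sketch: ufw is an assumption; use your own firewall tooling.
sudo ufw allow 7946/tcp   # container network discovery
sudo ufw allow 7946/udp   # container network discovery
sudo ufw allow 4789/udp   # overlay network data path (VXLAN)
# Manager nodes additionally listen on 2377/tcp for cluster management.
sudo ufw allow 2377/tcp
```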

Take special care when configuring networking in a Swarm. See the tutorial for an overview.

Overlay networking

When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:

  • An overlay network called ingress, which handles the control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default.
  • A bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
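You can confirm that both networks exist with docker network ls. The network IDs below are placeholders, but the driver and scope columns match what Docker reports for these two networks:

```shell
$ docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
...            docker_gwbridge   bridge    local
...            ingress           overlay   swarm
```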

Create an overlay network

To create an overlay network, specify the overlay driver when using the docker network create command:

$ docker network create \
  --driver overlay \
  my-network

The above command doesn't specify any custom options, so Docker assigns a subnet and uses default options. You can see information about the network using docker network inspect.

When no containers are connected to the overlay network, its configuration is not very exciting:

$ docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "fsf1dmx3i9q75an49z36jycxd",
        "Created": "0001-01-01T00:00:00Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": []
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": null,
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": null
    }
]

In the above output, notice that the driver is overlay and that the scope is swarm, rather than local, host, or global scopes you might see in other types of Docker networks. This scope indicates that only hosts which are participating in the swarm can access this network.

The network's subnet and gateway are dynamically configured when a service connects to the network for the first time. The following example shows the same network as above, but with three containers of a redis service connected to it.

$ docker network inspect my-network
[
    {
        "Name": "my-network",
        "Id": "fsf1dmx3i9q75an49z36jycxd",
        "Created": "2017-05-31T18:35:58.877628262Z",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "Containers": {
            "0e08442918814c2275c31321f877a47569ba3447498db10e25d234e47773756d": {
                "Name": "my-redis.1.ka6oo5cfmxbe6mq8qat2djgyj",
                "EndpointID": "950ce63a3ace13fe7ef40724afbdb297a50642b6d47f83a5ca8636d44039e1dd",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            },
            "88d55505c2a02632c1e0e42930bcde7e2fa6e3cce074507908dc4b827016b833": {
                "Name": "my-redis.2.s7vlybipal9xlmjfqnt6qwz5e",
                "EndpointID": "dd822cb68bcd4ae172e29c321ced70b731b9994eee5a4ad1d807d9ae80ecc365",
                "MacAddress": "02:42:0a:00:00:05",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            },
            "9ed165407384f1276e5cfb0e065e7914adbf2658794fd861cfb9b991eddca754": {
                "Name": "my-redis.3.hbz3uk3hi5gb61xhxol27hl7d",
                "EndpointID": "f62c686a34c9f4d70a47b869576c37dffe5200732e1dd6609b488581634cf5d2",
                "MacAddress": "02:42:0a:00:00:04",
                "IPv4Address": "10.0.0.4/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "moby-e57c567e25e2",
                "IP": "192.168.65.2"
            }
        ]
    }
]

Customize an overlay network

There may be situations where you don't want to use the default configuration for an overlay network. For a full list of configurable options, run the command docker network create --help. The following are some of the most common options to change.

Configure the subnet and gateway

By default, the network's subnet and gateway are configured automatically when the first service is connected to the network. You can configure these when creating a network using the --subnet and --gateway flags. The following example extends the previous one by configuring the subnet and gateway.

$ docker network create \
  --driver overlay \
  --subnet 10.0.9.0/24 \
  --gateway 10.0.9.99 \
  my-network

Using custom default address pools

To customize subnet allocation for your Swarm networks, you can optionally configure them during swarm init.

For example, the following command is used when initializing Swarm:

$ docker swarm init --default-addr-pool 10.20.0.0/16 --default-addr-pool-mask-length 26

Whenever a user creates a network, but does not use the --subnet command line option, the subnet for this network will be allocated sequentially from the next available subnet from the pool. If the specified network is already allocated, that network will not be used for Swarm.

Multiple pools can be configured if discontiguous address space is required. However, allocation from specific pools is not supported. Network subnets will be allocated sequentially from the IP pool space and subnets will be reused as they are deallocated from networks that are deleted.

The default mask length can be configured and is the same for all networks. It is set to /24 by default. To change the default subnet mask length, use the --default-addr-pool-mask-length command line option.

Note

Default address pools can only be configured on swarm init and cannot be altered after cluster creation.

Overlay network size limitations

Docker recommends creating overlay networks with /24 blocks. The /24 overlay network blocks limit the network to 256 IP addresses.

This recommendation addresses limitations with swarm mode. If you need more than 256 IP addresses, do not increase the IP block size. You can either use dnsrr endpoint mode with an external load balancer, or use multiple smaller overlay networks. See Configure service discovery for more information about different endpoint modes.

Configure encryption of application data

Management and control plane data related to a swarm is always encrypted. For more details about the encryption mechanisms, see the Docker swarm mode overlay network security model.

Application data among swarm nodes is not encrypted by default. To encrypt this traffic on a given overlay network, use the --opt encrypted flag on docker network create. This enables IPsec encryption at the VXLAN level. This encryption imposes a non-negligible performance penalty, so you should test this option before using it in production.
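For example, the following creates an overlay network whose application traffic is encrypted (my-encrypted-network is an illustrative name):

```shell
$ docker network create \
  --driver overlay \
  --opt encrypted \
  my-encrypted-network
```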

Note

You must customize the automatically created ingress to enable encryption. By default, all ingress traffic is unencrypted, as encryption is a network-level option.

Attach a service to an overlay network

To attach a service to an existing overlay network, pass the --network flag to docker service create, or the --network-add flag to docker service update.

$ docker service create \
  --replicas 3 \
  --name my-web \
  --network my-network \
  nginx

Service containers connected to an overlay network can communicate with each other across it.

To see which networks a service is connected to, use docker service ls to find the name of the service, then docker service ps <service-name> to list the networks. Alternately, to see which services' containers are connected to a network, use docker network inspect <network-name>. You can run these commands from any swarm node which is joined to the swarm and is in a running state.
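As a quick sketch, reusing the my-web service and my-network names from the examples on this page:

```shell
$ docker service ls                    # find the name of the service
$ docker service ps my-web             # list the service's tasks
$ docker network inspect my-network    # see which containers are connected
```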

Configure service discovery

Service discovery is the mechanism Docker uses to route a request from your service's external clients to an individual swarm node, without the client needing to know how many nodes are participating in the service or their IP addresses or ports. You don't need to publish ports which are used between services on the same network. For instance, if you have a WordPress service that stores its data in a MySQL service, and they are connected to the same overlay network, you do not need to publish the MySQL port to the client, only the WordPress HTTP port.

Service discovery can work in two different ways: internal connection-based load-balancing at Layers 3 and 4 using the embedded DNS and a virtual IP (VIP), or external and customized request-based load-balancing at Layer 7 using DNS round robin (DNSRR). You can configure this per service.

  • By default, when you attach a service to a network and that service publishes one or more ports, Docker assigns the service a virtual IP (VIP), which is the "front end" for clients to reach the service. Docker keeps a list of all worker nodes in the service, and routes requests between the client and one of the nodes. Each request from the client might be routed to a different node.

  • If you configure a service to use DNS round-robin (DNSRR) service discovery, there is not a single virtual IP. Instead, Docker sets up DNS entries for the service such that a DNS query for the service name returns a list of IP addresses, and the client connects directly to one of these.

    DNS round-robin is useful in cases where you want to use your own load balancer, such as HAProxy. To configure a service to use DNSRR, use the flag --endpoint-mode dnsrr when creating a new service or updating an existing one.
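For example, the following creates a service in DNSRR mode; my-dnsrr-service is an illustrative name, and you would typically place your own load balancer, such as HAProxy, in front of it:

```shell
$ docker service create \
  --replicas 3 \
  --name my-dnsrr-service \
  --network my-network \
  --endpoint-mode dnsrr \
  nginx
```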

Customize the ingress network

Most users never need to configure the ingress network, but Docker allows you to do so. This can be useful if the automatically-chosen subnet conflicts with one that already exists on your network, or you need to customize other low-level network settings such as the MTU, or if you want to enable encryption.

Customizing the ingress network involves removing and recreating it. This is usually done before you create any services in the swarm. If you have existing services which publish ports, those services need to be removed before you can remove the ingress network.

While no ingress network exists, existing services which do not publish ports continue to function but are not load-balanced. Services which do publish ports, such as a WordPress service which publishes port 80, are affected.

  1. Inspect the ingress network using docker network inspect ingress, and remove any services whose containers are connected to it. These are services that publish ports, such as a WordPress service which publishes port 80. If any such service is still running, the next step fails.

  2. Remove the existing ingress network:

    $ docker network rm ingress
    
    WARNING! Before removing the routing-mesh network, make sure all the nodes
    in your swarm run the same docker engine version. Otherwise, removal may not
    be effective and functionality of newly created ingress networks will be
    impaired.
    Are you sure you want to continue? [y/N]
    
  3. Create a new overlay network using the --ingress flag, along with the custom options you want to set. This example sets the MTU to 1200, sets the subnet to 10.11.0.0/16, and sets the gateway to 10.11.0.2.

    $ docker network create \
      --driver overlay \
      --ingress \
      --subnet=10.11.0.0/16 \
      --gateway=10.11.0.2 \
      --opt com.docker.network.driver.mtu=1200 \
      my-ingress
    
    Note

    You can name your ingress network something other than ingress, but you can only have one. An attempt to create a second one fails.

  4. Restart the services that you stopped in the first step.

Customize the docker_gwbridge

The docker_gwbridge is a virtual bridge that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network. Docker creates it automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a Docker device. It exists in the kernel of the Docker host. If you need to customize its settings, you must do so before joining the Docker host to the swarm, or after temporarily removing the host from the swarm.

You need to have the brctl application installed on your operating system in order to delete an existing bridge. The package name is bridge-utils.

  1. Stop Docker.

  2. Use the brctl show docker_gwbridge command to check whether a bridge device exists called docker_gwbridge. If so, remove it using brctl delbr docker_gwbridge.

  3. Start Docker. Do not join or initialize the swarm.

  4. Create or re-create the docker_gwbridge bridge with your custom settings. This example uses the subnet 10.11.0.0/16. For a full list of customizable options, see Bridge driver options.

    $ docker network create \
    --subnet 10.11.0.0/16 \
    --opt com.docker.network.bridge.name=docker_gwbridge \
    --opt com.docker.network.bridge.enable_icc=false \
    --opt com.docker.network.bridge.enable_ip_masquerade=true \
    docker_gwbridge
    
  5. Initialize or join the swarm.
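The stop/remove/start sequence in steps 1 through 3 can be sketched as follows, assuming a systemd-based host (the systemctl commands are an assumption; use your platform's service manager):

```shell
$ sudo systemctl stop docker         # step 1: stop Docker
$ brctl show docker_gwbridge         # step 2: check whether the bridge exists
$ sudo brctl delbr docker_gwbridge   # ...and remove it if it does
$ sudo systemctl start docker        # step 3: start Docker; don't join the swarm yet
```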

Use a separate interface for control and data traffic

By default, all swarm traffic is sent over the same interface, including control and management traffic for maintaining the swarm itself and data traffic to and from the service containers.

You can separate this traffic by passing the --data-path-addr flag when initializing or joining the swarm. If there are multiple interfaces, --advertise-addr must be specified explicitly, and --data-path-addr defaults to --advertise-addr if not specified. Traffic about joining, leaving, and managing the swarm is sent over the --advertise-addr interface, and traffic among a service's containers is sent over the --data-path-addr interface. These flags can take an IP address or a network device name, such as eth0.

This example initializes a swarm with a separate --data-path-addr. It assumes that your Docker host has two different network interfaces: 10.0.0.1 should be used for control and management traffic and 192.168.0.1 should be used for traffic relating to services.

$ docker swarm init --advertise-addr 10.0.0.1 --data-path-addr 192.168.0.1

This example joins the swarm managed by host 192.168.99.100:2377 and sets the --advertise-addr flag to eth0 and the --data-path-addr flag to eth1.

$ docker swarm join \
  --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv-8vxv8rssmk743ojnwacrr2d7c \
  --advertise-addr eth0 \
  --data-path-addr eth1 \
  192.168.99.100:2377

Publish ports on an overlay network

Swarm services connected to the same overlay network effectively expose all ports to each other. For a port to be accessible outside of the service, that port must be published using the -p or --publish flag on docker service create or docker service update. Both the legacy colon-separated syntax and the newer comma-separated value syntax are supported. The longer syntax is preferred because it is somewhat self-documenting.

  • -p 8080:80 or -p published=8080,target=80
    Map TCP port 80 on the service to port 8080 on the routing mesh.

  • -p 8080:80/udp or -p published=8080,target=80,protocol=udp
    Map UDP port 80 on the service to port 8080 on the routing mesh.

  • -p 8080:80/tcp -p 8080:80/udp or -p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp
    Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh.
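For example, using the longer syntax to publish a web service (my-web and the port numbers are illustrative):

```shell
$ docker service create \
  --name my-web \
  --replicas 2 \
  --publish published=8080,target=80 \
  nginx
```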
