PolarSPARC

Docker Bridge Network


Bhaskar S 10/08/2018


Overview

In Docker, the default network driver is the bridge driver.

A bridge network operates at Layer 2 (the Data Link layer) of the OSI network model. In Docker, a bridge network is implemented as a virtual device (in software) that allows the various Docker containers connected to the same bridge network (on a single host) to communicate with each other, while isolating them from the Docker containers connected to a different bridge network.

Setup

All the commands will be executed on an Ubuntu 18.04 LTS (bionic) based Linux desktop.

Ensure Docker is installed by following the installation steps from the article Introduction to Docker.

Before we get started, we will need to install the bridge-utils package, which contains a utility program called brctl to create and manage virtual bridge devices in Linux.

$ sudo apt-get install bridge-utils

We should be ready to get started now.

Hands-on with Docker Bridge Networking

Once Docker is installed on a host machine, the first time the Docker daemon starts up, it creates a virtual bridge device called docker0.

To confirm this, execute the following command in a terminal window:

$ brctl show

The following would be a typical output:

Output.1

bridge name  bridge id   STP enabled interfaces
docker0   8000.0242ea41387c no

To list all the networks created by Docker on the host, execute the following command:

$ docker network ls

The following would be a typical output:

Output.2

NETWORK ID          NAME                  DRIVER              SCOPE
7b8cd1a892be        bridge                bridge              local
5c1dd3c2f31e        host                  host                local
78860686c08d        none                  null                local

From Output.2, we see that the default bridge network uses the bridge driver. Also, a SCOPE of local indicates a single-host network.
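
If the host has many networks defined, one could optionally narrow the listing down to just the bridge networks using a filter, as shown below:

$ docker network ls --filter driver=bridge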

To display detailed information about the bridge network, execute the following command:

$ docker network inspect bridge

The following would be a typical output:

Output.3

[
    {
        "Name": "bridge",
        "Id": "7e86596c2ead077805fab4a32de3136720a72bf9f34ea1be883afd0aa7a6ea21",
        "Created": "2018-10-07T13:23:36.164779253-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
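
Rather than scanning the full JSON output, one could optionally use the --format option (which takes a Go template) to extract just the subnet and gateway of the bridge network, as shown below:

$ docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' bridge

Based on Output.3, this should display 172.17.0.0/16 172.17.0.1.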

To list all the network interfaces on the host, execute the following command:

$ ip link show

The following would be a typical output:

Output.4

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp2s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
    link/ether 12:34:56:78:90:ab brd ff:ff:ff:ff:ff:ff
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
    link/ether cd:ef:01:23:45:67 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:0a:95:43:ea brd ff:ff:ff:ff:ff:ff
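
The docker0 interface in Output.4 is the virtual bridge device from Output.1. To verify that it has been assigned the gateway IP address from Output.3, one could optionally execute the following command:

$ ip addr show docker0

The inet field in the output should show 172.17.0.1/16, matching the Gateway of the default bridge network.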

To fetch a pre-built Docker Image for the latest version of Alpine from the Docker Hub registry and store it on the host, execute the following command:

$ docker pull alpine

The following would be a typical output:

Output.5

Using default tag: latest
latest: Pulling from library/alpine
4fe2ade4980c: Pull complete 
Digest: sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528
Status: Downloaded newer image for alpine:latest
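
To verify that the Alpine image is now stored on the host, one could optionally execute the following command:

$ docker image ls alpine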

We will now create and launch two Docker containers named dc-1 and dc-2 respectively, using the Alpine image we just downloaded. The -d option runs the containers in detached (background) mode, while the -t option allocates a pseudo-TTY, which keeps the default Alpine shell running.

Execute the following commands:

$ docker run -dt --name dc-1 alpine

$ docker run -dt --name dc-2 alpine

To list all the running Docker containers, execute the following command:

$ docker ps

The following would be a typical output:

Output.6

CONTAINER ID    IMAGE     COMMAND      CREATED           STATUS          PORTS       NAMES
0cd0fa5356db    alpine    "/bin/sh"    6 seconds ago     Up 5 seconds                dc-2
fd6f950035a8    alpine    "/bin/sh"    13 seconds ago    Up 11 seconds               dc-1

We will re-run the inspect command on the bridge network by executing the following command:

$ docker network inspect bridge

The following would be a typical output:

Output.7

[
    {
        "Name": "bridge",
        "Id": "7b8cd1a892be50ab128cc0ffa545db6e53a59efa0223718c3047efa672c9066d",
        "Created": "2018-10-08T07:36:26.09516181-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0cd0fa5356db7ffa93c95dc1609a106349e43b19a21963c4109f09e42231cc45": {
                "Name": "dc-2",
                "EndpointID": "c0c1712a6d4a450b357d79ee42056c468ce43476179a46f479bcaf355e0e12c6",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            },
            "fd6f950035a8d0ea62abee4eabe33f3a28bd5aa73217731d1dc6926099580158": {
                "Name": "dc-1",
                "EndpointID": "c20334e562f45bde07b2bc164a2244b0f6a14f0cdae3379667e50ebbe68419ba",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

From Output.7, we see the two Docker containers along with their MAC and IP addresses.

When a Docker container is created, the Docker daemon automatically creates a pair of virtual network (veth) interfaces, assigning one end of the pair as the eth0 interface inside the container's network namespace, while dynamically attaching the other end to the docker0 bridge in the host's network namespace.

Let us re-run the command to list all the network interfaces on the host by executing the following command:

$ ip link show

The following would be a typical output:

Output.8

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp10s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 12:34:56:78:90:ab brd ff:ff:ff:ff:ff:ff
3: wlp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
    link/ether cd:ef:01:23:45:67 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default 
    link/ether 02:42:ea:41:38:7c brd ff:ff:ff:ff:ff:ff
5: vetha6aed86@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 86:1d:96:f2:bc:d4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
6: vethb212b16@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default 
    link/ether 96:e0:bd:71:fe:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1

From Output.8, we see the two new virtual network interfaces, whose names start with veth.
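
To see the container end of one of the veth pairs, one could list the eth0 interface from inside the Docker container dc-1, as shown below (this assumes the ip utility bundled with the Alpine image):

$ docker exec dc-1 ip link show eth0

The @ifN suffix on each interface name indicates the index of its peer interface: for example, the host side vetha6aed86@if10 in Output.8 is paired with the interface at index 10 inside one of the containers. The actual interface names and indices will differ on your host.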

Figure-1 below illustrates the virtual network interfaces of the two Docker containers running on the host:

[Figure-1: Docker Bridge]

Open a new terminal window and attach to the Docker container dc-1 by executing the following command:

$ docker exec -it dc-1 sh

At the container's shell prompt, one can ping the other Docker container using its IP address, and one can also ping the host's IP address. However, trying to ping the Docker containers by their names dc-1 and dc-2 will not work, as the default bridge network docker0 provides no automatic DNS resolution.
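
For example, assuming the IP addresses from Output.7 (they may differ on your host), pinging the Docker container dc-2 by its IP address from the dc-1 shell prompt will succeed:

$ ping -c 2 172.17.0.3

However, pinging dc-2 by its container name will fail with a bad address error:

$ ping -c 2 dc-2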

This is where a user-defined custom bridge network comes in handy as it provides support for automatic DNS name resolution.

Let us now stop and clean-up the two running Docker containers dc-1 and dc-2 by executing the following commands:

$ docker stop 0cd0fa5356db fd6f950035a8

$ docker rm 0cd0fa5356db fd6f950035a8

The above commands will also remove the corresponding virtual network interfaces from the host's namespace.

To create a user-defined custom bridge network on the host, execute the following command:

$ docker network create -d bridge --subnet 10.5.0.1/16 --gateway 10.5.0.1 ps-bridge

The following would be a typical output:

Output.9

a0a644ed328955ca8f3318d4cf57160a51d4962176d4366db272713a55c246db

To list all the networks created by Docker on the host, execute the following command:

$ docker network ls

The following would be a typical output:

Output.10

NETWORK ID          NAME                  DRIVER              SCOPE
7b8cd1a892be        bridge                bridge              local
5c1dd3c2f31e        host                  host                local
78860686c08d        none                  null                local
a0a644ed3289        ps-bridge             bridge              local

To display detailed information about the custom ps-bridge network, execute the following command:

$ docker network inspect ps-bridge

The following would be a typical output:

Output.11

[
    {
        "Name": "ps-bridge",
        "Id": "a0a644ed328955ca8f3318d4cf57160a51d4962176d4366db272713a55c246db",
        "Created": "2018-10-08T20:17:59.437419108-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.5.0.1/16",
                    "Gateway": "10.5.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

To display the list of network bridges on the host, execute the following command:

$ brctl show

The following would be a typical output:

Output.12

bridge name  bridge id   STP enabled interfaces
br-a0a644ed3289   8000.0242792bb990 no    
docker0   8000.0242ea41387c no

Let us re-run the command to list all the network interfaces on the host by executing the following command:

$ ip link show

The following would be a typical output:

Output.13

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp10s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 12:34:56:78:90:ab brd ff:ff:ff:ff:ff:ff
3: wlp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN mode DORMANT group default qlen 1000
    link/ether cd:ef:01:23:45:67 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:ea:41:38:7c brd ff:ff:ff:ff:ff:ff
5: br-a0a644ed3289: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:79:2b:b9:90 brd ff:ff:ff:ff:ff:ff

From Output.13, we see the new virtual bridge interface, whose name starts with br-.

Let us now re-create and launch the two Docker containers named dc-1 and dc-2 respectively, this time on the custom bridge network ps-bridge, using the Alpine Docker image.

Execute the following commands:

$ docker run -dt --name dc-1 --network ps-bridge alpine

$ docker run -dt --name dc-2 --network ps-bridge alpine

We will re-run the inspect command on the custom ps-bridge network by executing the following command:

$ docker network inspect ps-bridge

The following would be a typical output:

Output.14

[
    {
        "Name": "ps-bridge",
        "Id": "a0a644ed328955ca8f3318d4cf57160a51d4962176d4366db272713a55c246db",
        "Created": "2018-10-08T20:17:59.437419108-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.5.0.1/16",
                    "Gateway": "10.5.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "5ce4cc7d6d2347288bf5a27c97925ed67e218ee3e3db6ff114bfe690477b35ef": {
                "Name": "dc-2",
                "EndpointID": "65cc718e901345634b62cf1e7cfc3ba7488afe63e147deb1e65398576e6c727a",
                "MacAddress": "02:42:0a:05:00:03",
                "IPv4Address": "10.5.0.3/16",
                "IPv6Address": ""
            },
            "80ab49fe31ab46072bd0d15b93077b4abfe1fb49c6e992898fb991a0d220de5f": {
                "Name": "dc-1",
                "EndpointID": "14ab7d5ea9dcd5e1bd2cc4a28871658b088c6f6c850aa24da6cac337132db36c",
                "MacAddress": "02:42:0a:05:00:02",
                "IPv4Address": "10.5.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

From Output.14, we see the two Docker containers along with their MAC and IP addresses.

Open a new terminal window and attach to the Docker container dc-1 by executing the following command:

$ docker exec -it dc-1 sh

At the container's shell prompt, one can ping the other Docker container using either its IP address or its container name (dc-1 or dc-2).
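
For example, assuming the IP addresses from Output.14, both of the following commands at the dc-1 shell prompt should succeed, since a user-defined bridge network provides an embedded DNS server that resolves the container name dc-2 to its IP address:

$ ping -c 2 10.5.0.3

$ ping -c 2 dc-2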

If we create and launch Docker container(s) on the default docker0 network, they will be isolated from the Docker containers in the custom ps-bridge network.
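
To illustrate this isolation, one could launch a hypothetical third container, say dc-3, on the default docker0 network and attempt to ping the Docker container dc-1 (at 10.5.0.2 per Output.14) from it, as shown below:

$ docker run -dt --name dc-3 alpine

$ docker exec dc-3 ping -c 2 10.5.0.2

The ping should fail (100% packet loss), since traffic between the two bridge networks is blocked. Once done with this check, stop and remove dc-3 in the same manner as the other containers. Alternatively, attaching the running container to the custom network using docker network connect ps-bridge dc-3 would allow it to reach dc-1 and dc-2.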

Let us now stop and clean-up the two running Docker containers dc-1 and dc-2.
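
This time, the containers can be stopped and removed using their names instead of their container IDs, by executing the following commands:

$ docker stop dc-1 dc-2

$ docker rm dc-1 dc-2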

To remove the user-defined custom ps-bridge network, execute the following command:

$ docker network rm ps-bridge
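
After removing the network, re-running the brctl show command should confirm that the corresponding br- bridge interface has also been removed from the host:

$ brctl show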

That's it for Docker bridge networking.

References

Introduction to Docker

Use bridge networks

Networking with standalone containers



© PolarSPARC