networking · docker · iproute

Apply NetEM WAN delay on a docker container interface


I want to apply a NetEm delay to the egress traffic of a Docker container. Usually, I run:

# /sbin/tc qdisc add dev $INTERFACE root netem delay ${DELAY}ms

The issue is that I have no idea which interface the container is connected to.

For example, I am running the following container:

docker run --rm -it alpine /bin/sh

and then I ping 8.8.8.8:

/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=44 time=39.783 ms
64 bytes from 8.8.8.8: seq=1 ttl=44 time=39.694 ms

What I want to do is add the NetEm rule from my host machine and see the ping time change.

If I run ifconfig, I see several virtual Ethernet (veth) interfaces (since other containers are running), but I don't know which one is connected to the container I am interested in:

# ifconfig
veth09fa1c5 Link encap:Ethernet  HWaddr 96:73:c9:15:93:b8  
          inet6 addr: fe80::9473:c9ff:fe15:93b8/64 Scope:Link
          .....

vethf05ef93 Link encap:Ethernet  HWaddr ca:ea:97:ef:cd:9d  
          inet6 addr: fe80::c8ea:97ff:feef:cd9d/64 Scope:Link
          .....

I believe that I have to apply the NetEm rule to one of these interfaces. Is that correct?


Solution

  • The veth route seems less straightforward, but I think it might be doable based on this answer; a sketch of the idea follows.
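
    One common way to match a container to its host-side veth peer (a sketch of that approach; the peer index 17 and the matching interface name are illustrative values) is to read the peer ifindex of the container's eth0 and look it up among the host's interfaces:

    container> cat /sys/class/net/eth0/iflink
    17
    host> grep -l '^17$' /sys/class/net/veth*/ifindex
    /sys/class/net/veth09fa1c5/ifindex
    # so in this example veth09fa1c5 would be the host-side peer to target with tc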

    However, by default (using the bridge driver), traffic to and from your container's virtual interface goes through the default bridge network interface, docker0.

    You could set up the NetEm rule there, but it would slow down all your other containers too. If it's an option, running your container on a separate network (created with docker network create) is a cleaner way to do this for experimentation/testing.

    docker network create slownet 
    docker network inspect slownet
    [
        {
            "Name": "slownet",
            "Id": "535e40d880716a27efe1fd3fada62bdc4d9fa13bde09279de650fa53f13f7cdd",
            "Scope": "local",
            "Driver": "bridge",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "172.19.0.0/16",
                        "Gateway": "172.19.0.1/16"
                    }
                ]
            },
            "Internal": false,
            "Containers": {},
            "Options": {},
            "Labels": {}
        }
    ]
    ifconfig
     .... 
    br-535e40d88071 Link encap:Ethernet  HWaddr 02:42:4E:B6:F8:C2  
              inet addr:172.19.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
              inet6 addr: fe80::42:4eff:feb6:f8c2%32727/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:180 errors:0 dropped:0 overruns:0 frame:0
              TX packets:180 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:14368 (14.0 KiB)  TX bytes:16888 (16.4 KiB)
    .... 
    
    # so br-535e40d88071 is the interface 
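
    Rather than eyeballing ifconfig, you can also derive the name directly: for user-defined bridge networks, Docker names the Linux bridge br- followed by the first 12 characters of the network ID. A minimal sketch, using the slownet network created above:

    host> BRIDGE="br-$(docker network inspect -f '{{.Id}}' slownet | cut -c1-12)"
    host> echo $BRIDGE
    br-535e40d88071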
    

    Let's spin up the container and start pinging:

    host> docker run -ti --rm --net=slownet alpine sh    
    container> ping 8.8.8.8 
    PING 8.8.8.8 (8.8.8.8): 56 data bytes
    64 bytes from 8.8.8.8: seq=114 ttl=37 time=0.251 ms
    

    And then add the NetEm rule:

    host> tc qdisc add dev br-535e40d88071 root netem delay 100ms
    

    Once the rule is in place, I see the increase in latency:

    64 bytes from 8.8.8.8: seq=115 ttl=37 time=0.693 ms
    64 bytes from 8.8.8.8: seq=116 ttl=37 time=101.086 ms
    64 bytes from 8.8.8.8: seq=117 ttl=37 time=104.056 ms
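
    Once you are done, the delay can be adjusted or removed with the matching tc subcommands, and the test network cleaned up:

    host> tc qdisc change dev br-535e40d88071 root netem delay 50ms   # adjust the delay
    host> tc qdisc del dev br-535e40d88071 root                       # remove the qdisc
    host> docker network rm slownet                                   # remove the test network (no containers may be attached)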