Emulating Realistic Network Conditions with tc in Docker
The Linux command tc can add latency to a network interface, constrain its bandwidth, and force it to randomly drop packets. It can be applied to the virtual Ethernet interface (veth) inside a Docker container to test software under realistic network conditions.
Example: Modifying Latency and Drop Rate
This example walks through the steps to modify the traffic settings for a Docker container and verify the change using ping.
Start an Alpine shell in a container with the NET_ADMIN capability, which allows its network settings to be modified:
$ docker run --cap-add NET_ADMIN -it alpine /bin/ash
Inside the container, add the iproute2 package to install the tc command; it's not included in the base Alpine image.
# apk add iproute2
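If the install worked, tc should now be on the path; printing its version is a quick sanity check (the exact string depends on the iproute2 release):

# tc -V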
Then, try doing a ping (e.g., ping 8.8.8.8) before and after changing the traffic discipline. This example adds 100 ms of delay with a 25% packet loss rate. You should observe an increased time for the pings to return, along with many of the echo requests being dropped.
# tc qdisc add dev eth0 root netem delay 100ms loss 25.0%
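With BusyBox's ping, a fixed-count run makes the loss easy to read off (the -c flag limits the number of echo requests):

# ping -c 20 8.8.8.8

The summary line at the end reports round-trip times and the packet loss percentage, which should sit near 25% once the netem qdisc above is in place.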
The current tc settings can be viewed with tc -s qdisc.
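For example, the output can be scoped to the container's interface (the exact fields vary across iproute2 versions, but the netem parameters and the sent/dropped packet counters should appear):

# tc -s qdisc show dev eth0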
The general forms of tc's qdisc commands are:
tc qdisc add dev DEV root QDISC QDISC-PARAMETERS
tc qdisc change dev DEV root QDISC QDISC-PARAMETERS
tc qdisc del dev DEV root
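As a concrete sequence, continuing with the netem qdisc on eth0:

# tc qdisc add dev eth0 root netem delay 100ms loss 25.0%
# tc qdisc change dev eth0 root netem delay 200ms loss 10%
# tc qdisc del dev eth0 root

The change form adjusts the parameters of the existing qdisc in place, and del removes it entirely, restoring the interface's default behavior.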
TODO
tc is often used to shape traffic. In a future post I will work through an example of how to do this and verify its performance using iperf3. The command below uses a token bucket filter (tbf) to limit egress traffic. Note that change can only adjust an existing qdisc of the same kind, so replace (or del followed by add) is needed here to swap the netem qdisc for a tbf.
tc qdisc replace dev eth0 root tbf rate 10kbit burst 1540 latency 10ms
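As a rough preview of that verification, the sketch below assumes an iperf3 server is already running somewhere reachable from the container (SERVER_IP is a placeholder for its address):

# apk add iperf3
# iperf3 -c SERVER_IP -t 10

With the tbf qdisc in place, the reported throughput should be capped near the configured rate.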