networking, tcp, iperf, congestion-control

How does a TCP flow's rate change when congestion occurs?


Thank you for clicking my question.

I'm doing a small experiment to understand the behavior of TCP flows under congestion.

In this experiment, I found that the TCP flow's rate is much higher than I expected when congestion occurs.

I thought its rate should drop to at most half of its previous value, because of congestion control.

But it didn't in my experiment.

Could you give me a little hint? Below is my experiment.

Thank you again.



I made a small network using Mininet, consisting of two hosts and a switch.

H1 - S1 - H2, with all link bandwidths set to 80 Mbps.
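For reference, a topology like this can be created with the standard Mininet launcher. The command below is only a sketch of one way to do it (not necessarily the exact command used here); it assumes TC-based links and the OVS kernel switch:

#Mininet (sketch): one switch, two hosts, 80 Mbps links
sudo mn --topo single,2 --switch ovsk --link tc,bw=80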

Then I generated traffic from H2 to H1 using iperf3, as below:

#H1
iperf3 -s -p 1212 -f m -i 1

#H2
iperf3 -c 10.0.0.1 -p 1212 -t 10000 -f m -b 70M

This means H2 sends TCP traffic to H1 at 70 Mbit/s.

(iperf3 controls this target rate at the application layer.)
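As a side note, the client-side iperf3 report on Linux also shows retransmission (Retr) and congestion-window (Cwnd) columns, and the reporting interval can be made shorter than one second. A sketch of such a run (not part of the original experiment):

#H2: optional client-side view with 100 ms reports, to watch Retr/Cwnd
iperf3 -c 10.0.0.1 -p 1212 -t 30 -f m -b 70M -i 0.1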

Then we can see the report at H1 (server side):

-----------------------------------------------------------
Server listening on 1212
-----------------------------------------------------------
Accepted connection from 10.0.0.2, port 51786
[ 17] local 10.0.0.1 port 1212 connected to 10.0.0.2 port 51788
[ ID] Interval           Transfer     Bandwidth
[ 17]   0.00-1.00   sec  7.49 MBytes  62.8 Mbits/sec                  
[ 17]   1.00-2.00   sec  8.14 MBytes  68.3 Mbits/sec                  
[ 17]   2.00-3.00   sec  8.54 MBytes  71.7 Mbits/sec                  
[ 17]   3.00-4.00   sec  8.60 MBytes  72.2 Mbits/sec                  
[ 17]   4.00-5.00   sec  7.98 MBytes  66.9 Mbits/sec                  
[ 17]   5.00-6.00   sec  8.80 MBytes  73.9 Mbits/sec                  
[ 17]   6.00-7.00   sec  8.21 MBytes  68.9 Mbits/sec                  
[ 17]   7.00-8.00   sec  7.77 MBytes  65.1 Mbits/sec                  
[ 17]   8.00-9.00   sec  8.30 MBytes  69.7 Mbits/sec                  
[ 17]   9.00-10.00  sec  8.45 MBytes  70.9 Mbits/sec                  
[ 17]  10.00-11.00  sec  8.32 MBytes  69.7 Mbits/sec   

At this point, I throttled the S1 port (s1-eth1, the egress port for traffic from H2 to H1) using Linux tc:

sudo tc qdisc del dev s1-eth1 root
sudo tc qdisc add dev s1-eth1 root tbf rate 40mbit latency 10ms burst 1mbit
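To confirm the shaper is installed and to watch for drops, the qdisc statistics can also be checked (an extra step, not part of the original procedure):

#S1: show the tbf qdisc with its sent/dropped/overlimit counters
tc -s qdisc show dev s1-eth1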

Then the result is as follows:

[ 17]   0.00-1.00   sec  7.76 MBytes  65.0 Mbits/sec                  
[ 17]   1.00-2.00   sec  8.09 MBytes  67.9 Mbits/sec                  
[ 17]   2.00-3.00   sec  8.53 MBytes  71.5 Mbits/sec                  
[ 17]   3.00-4.00   sec  8.47 MBytes  71.0 Mbits/sec                  
[ 17]   4.00-5.00   sec  8.08 MBytes  67.8 Mbits/sec                  
[ 17]   5.00-6.00   sec  8.09 MBytes  67.9 Mbits/sec                  
[ 17]   6.00-7.00   sec  8.74 MBytes  73.3 Mbits/sec                  
[ 17]   7.00-8.00   sec  7.81 MBytes  65.6 Mbits/sec                  
[ 17]   8.00-9.00   sec  8.35 MBytes  70.0 Mbits/sec                  
[ 17]   9.00-10.00  sec  4.56 MBytes  38.3 Mbits/sec                  
[ 17]  10.00-11.00  sec  4.56 MBytes  38.2 Mbits/sec                  
[ 17]  11.00-12.00  sec  4.56 MBytes  38.2 Mbits/sec                  
[ 17]  12.00-13.00  sec  4.56 MBytes  38.2 Mbits/sec                  
[ 17]  13.00-14.00  sec  4.56 MBytes  38.2 Mbits/sec       

As you can see, its rate is about 40 Mbps.

I thought that when congestion occurs, TCP would go back to slow start and its rate would become much smaller. But it didn't.

I checked the iperf3 source code, but it just generates the requested amount of TCP traffic at the application layer, so it shouldn't affect the behavior of the TCP congestion-control algorithm.

Why did it happen? I don't know...

Could you give me a little hint? I'd appreciate it!


Solution

  • First, some parameters have to be set properly before the experiment.

    1) When setting the link bandwidth, we can also set the burst size (and MTU size) in the tc configuration (a combined sketch of these settings follows this list).

    This seems to affect how much the iperf TCP rate fluctuates.

    When the burst size is small, the rate fluctuates more, or is lower, than I expected.

    2) We can also set the MTU size (jumbo frame size) at the switch.

    Because I use OVS (Open vSwitch), I set that parameter as described on the site.

    3) We can also set the MTU size on the network interface card using ethtool; see this site.
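    The commands below are only a rough sketch of what these three settings can look like; the interface names and values (burst, MTU) are placeholders rather than the ones from my experiment, and note that the interface MTU is more commonly changed with ip link than with ethtool:

    #S1: shaper with an explicitly larger burst (placeholder values)
    sudo tc qdisc replace dev s1-eth1 root tbf rate 40mbit latency 10ms burst 300kb

    #S1: raise the MTU of an OVS port (placeholder value)
    sudo ovs-vsctl set interface s1-eth1 mtu_request=9000

    #H2: raise the MTU on the host NIC (h2-eth0 is a placeholder name)
    sudo ip link set dev h2-eth0 mtu 9000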


    After setting the above parameters properly, we can observe the TCP rate.

    The reason the TCP rate doesn't become much smaller despite the congestion seems to be the very small RTT.

    As shown in the slide, TCP transmits a window of packets every RTT.

    In my experiment, the RTT is very small because there is only a single link between the two hosts.

    Therefore, on a one-second timescale it looks as if the host never decreases its rate, but it actually does: the congestion window drops whenever a loss occurs, and with a sub-millisecond RTT it grows back within a few milliseconds, so the per-second averages simply settle at the shaper's ~40 Mbps limit.
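    One way to see this directly (a sketch, not something from the original experiment) is to sample the sender's congestion window much faster than once per second, for example with ss on H2 while iperf3 is running:

    #H2: print the congestion window roughly every 50 ms; cwnd should dip on loss and recover within milliseconds
    while true; do ss -tin dst 10.0.0.1 | grep -o 'cwnd:[0-9]*'; sleep 0.05; done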