One of the questions I hate is “who’s using my bandwidth?!?” — and not at all because I was the child consuming all of the available dial-up (28.8Kbps) bandwidth downloading the latest FreeBSD or Linux distribution image. In fact, this was the age of magazines with CDs that contained Mandrake, RedHat, or, if I was lucky, Slackware. Debian, which is my current go-to, wasn’t on my radar until many years later. I even recall ordering a copy of the latest FreeBSD by mail to run on one of the 386 boxes I’d collected1. I digress…
“Who’s using my bandwidth?” is a good question, because it implies that someone (or something) is consuming more bandwidth than they should. The major protocols that dominate the internet are TCP and UDP (the latter largely in the form of DNS and QUIC), so surely there is some fairness built into them?
The question has a short answer, and a much longer one.
Short Answer
If we look at TCP and QUIC specifically, they do care about fairness: both have congestion control built in and back off when they detect congestion. The challenge is that EVERY TCP connection on every device manages its own congestion control, which means some connections may never reach equilibrium, leaving some endpoints with more than their fair share.
UDP itself has no congestion control and can blast traffic at any rate, which can flood a connection or even DoS a host if it fails to throttle/coalesce CPU interrupts (e.g. ethtool InterruptThrottleRate and coalescing settings).
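To make the contrast concrete, here is a minimal Python sketch (the destination address, port, and packet count are hypothetical, and nothing is listening on the far end) showing that a UDP socket will happily send as fast as the loop spins — the protocol itself provides no pacing or feedback:

```python
import socket

# UDP is fire-and-forget: no handshake, no ACKs, no congestion window.
# The only throttle on this sender is how fast the loop runs.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * 1400  # just under a typical Ethernet MTU

def blast(dest=("127.0.0.1", 9999), count=1_000):
    """Send `count` datagrams back-to-back with zero pacing."""
    for _ in range(count):
        sock.sendto(payload, dest)  # never waits on the receiver
    return count * len(payload)

sent = blast()
print(f"pushed {sent} bytes with no feedback from the network")
```

A TCP socket in the same loop would stall as soon as its congestion and receive windows filled; this one never does.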
Quality of Service on contended connections can also help with fairness. Wendell Odom and Michael J. Cavanaugh do a fantastic job of explaining this in their book on QoS.
The Longer Answer
TCP has undergone radical changes over the 40+ years since RFC 793 was released in September 1981. The original RFC didn’t address congestion at all, which led to the congestion collapse events of the mid-1980s and, in turn, to several important changes, which I outline in this video.
The major changes I’ll outline here include:
- Congestion window
- TCP Slow Start
- Exponential backoffs
Congestion windows were introduced so that a sender backs off when congestion is detected via retransmission timeouts. Slow Start is used at the beginning of each new connection (and after an idle period): it doubles the congestion window every round trip from an initially low value (e.g. 10 × the Maximum Segment Size) until congestion is detected. Exponential backoff timers were introduced for a couple of reasons: to reduce the likelihood of global synchronisation, and to give a problem time to resolve while reducing unnecessary traffic.
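Both mechanisms can be sketched in a few lines. This is a back-of-the-envelope simulation, not real kernel behaviour; the 1460-byte MSS, the 10-segment initial window (per RFC 6928), and the 60-second RTO cap are assumed values:

```python
MSS = 1460        # assumed Ethernet-sized segment
IW = 10 * MSS     # RFC 6928 initial congestion window

def slow_start(cwnd=IW, rtts=5):
    """Slow Start: the congestion window doubles every round trip
    until loss (or another congestion signal) is detected."""
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        cwnd *= 2
    return history

def backoff_rto(rto=1.0, retries=5, cap=60.0):
    """Exponential backoff: each retransmission timeout doubles the
    timer (up to a cap), giving the network time to drain."""
    timers = []
    for _ in range(retries):
        timers.append(rto)
        rto = min(rto * 2, cap)
    return timers

print(slow_start())   # bytes in flight per RTT
print(backoff_rto())  # seconds waited per successive retry
```

The doubling window is what lets a new connection probe for capacity quickly; the doubling timer is what keeps a broken path from being hammered with retransmissions.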
Most Internet TCP connections never get out of Slow Start during their lifetime, which means they never reach their fair share of bandwidth: they never discover the limits of the path, and never cause the congestion that would trigger other connections to slow down.
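A quick worked example shows why. Assuming the same 1460-byte MSS and 10-segment initial window as above, and a hypothetical ~100 KB web object, the transfer completes in a handful of round trips while the window is still doubling:

```python
MSS = 1460
cwnd = 10 * MSS          # RFC 6928 initial window
object_size = 100_000    # assumed ~100 KB web object

sent, rtts = 0, 0
while sent < object_size:
    sent += cwnd         # one window's worth of data per round trip
    cwnd *= 2            # still in Slow Start: window doubles each RTT
    rtts += 1

print(f"delivered {sent} bytes in {rtts} RTTs without ever seeing a loss")
```

Three round trips, no loss, no congestion signal: the connection finishes before it ever probes the link’s capacity.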
To add to this challenge I’ve also looked at 17 TCP Congestion Control Algorithms (including the most popular CUBIC, BBR, and Reno), and most struggle to achieve equilibrium with bulk transfer traffic like iPerf.
For the congestion window to grow, in either Slow Start or congestion avoidance, ACKs need to return to the sender to confirm that data was received. This natural flow control means that TCP is self-limiting (unlike UDP). AQM schemes like CoDel (Controlled Delay) exploit this: a router can effectively slow a TCP sender by injecting small amounts of delay, the same idea as an overwhelmed receiver acknowledging packets slowly, which causes the sender to back off.

I’m currently using this successfully at home with the CAKE (Common Applications Kept Enhanced) implementation on my Internet router to improve user (my family and I) experience and reduce the impact of bufferbloat. The beauty of CoDel is that we can target expected latency rather than just bandwidth (which in my case is variable).
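For reference, enabling CAKE on a Linux router is a one-liner with `tc`. The interface name and bandwidth here are placeholders — substitute your WAN interface and a rate just below your measured uplink:

```shell
# Replace the default qdisc on the WAN interface with CAKE, shaped
# slightly below the real uplink rate so the queue builds here (where
# CoDel's delay target applies) rather than in the modem's buffer.
tc qdisc replace dev eth0 root cake bandwidth 18Mbit diffserv4

# Inspect the qdisc and its per-tin statistics (delays, drops, marks).
tc -s qdisc show dev eth0
```

Shaping below the link rate is the key step: it moves the bottleneck queue onto the router, where CAKE can keep it short.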
If you got this far, thanks for reading!
1. My favourite FreeBSD OS book: The Complete FreeBSD ↩︎