You probably now have a boot sector virus.
Remember those?
That's only true if every Internet user is also a Netflix user. If that were the case, Netflix would account for a lot more than 30% of the Internet's traffic.
Apples and Oranges...
I'm not denying the symptoms; all I'm saying is that the buffer causing the jitter is the window size, and it's not the network operator that chooses or configures that. The total 'buffers' between you and the receiver will never hold more than your window size. The TCP window size is the maximum amount of data that can be sent and not yet acknowledged by the receiver. When the window size is reached, you stop transmitting until you receive acks.
You can modulate the window size on an existing TCP/IP connection at the (endpoint) application level to control latency. I've done it and it works.
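A minimal sketch of the kind of knob being described, assuming a Linux-like sockets API: shrinking SO_RCVBUF bounds the receive window the kernel can advertise, which in turn caps the sender's data in flight. (Setting it before connecting is the most reliable; exact behavior is OS-specific.)

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel for a small receive buffer; the advertised TCP window
# can never exceed what this buffer can hold.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 16 * 1024)
# Read the effective value back (Linux doubles the request to account
# for bookkeeping overhead, so it may differ from what was asked for).
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(effective)
s.close()
```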
I read the article, and the author himself admits: "(I’m not a full fledged TCP expert)."
I'm not saying there isn't a problem with latency, bandwidth and saturation. This 'bufferbloat' is just something he made up, and then he attributes his network behaviour findings to it. That doesn't mean 'bufferbloat' is anything that exists and causes anything. When I say that, I don't deny the symptoms; I'm only saying that the symptoms are not caused by what is claimed.
There are no such buffers. The author just made up the word and never actually says which buffers he's referring to; he is only describing a symptom and drawing the wrong conclusion.
The article can say whatever it wants, such buffers still don't exist.
There is no network 'operator' that configures this. In TCP/IP the only buffering there is is the window size, and the sending host sets that size.
I guess people don't read the RFCs anymore; they just think they know how the Internet communicates based on nonsense and hearsay.
That site won't have a 1GB SNDBUF, so that won't happen. ('man 7 socket', then search for SO_SNDBUF).
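You can check this claim on your own machine. A quick sketch, assuming a stock Linux/BSD-style sockets API: read back the default SO_SNDBUF that 'man 7 socket' documents, and note it's nowhere near 1 GB.

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Default send-buffer size the kernel assigns to a fresh TCP socket;
# typically tens or hundreds of kilobytes on stock configurations.
default_sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()
print(default_sndbuf)
```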
It's amazing how many people, Gettys unfortunately included, apparently (note: of course I did not RTFA before I posted this), don't know how TCP/IP really works.
There is no 'bufferbloat because RAM is getting cheaper'. What he is seeing is what happens when you try to saturate your link. It's sort of a Heisenberg principle of communication: if you want low latency and low (or no) packet loss on a connection, then the offered load can't exceed the available bandwidth, and for any instant that it does, you get either a buffered or a dropped packet.
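The either-buffered-or-dropped trade-off above can be sketched with a toy queue model (all rates and sizes here are made-up numbers for illustration, not measurements of any real link):

```python
# Toy model: when offered load exceeds link capacity, each excess
# packet is either queued (adding latency) or dropped (tail drop).
LINK_RATE = 10      # packets the link can forward per tick
ARRIVAL_RATE = 12   # packets offered per tick (oversubscribed)
BUFFER_CAP = 50     # queue slots at the bottleneck

queue = 0
dropped = 0
for tick in range(100):
    queue += ARRIVAL_RATE              # offered load arrives
    if queue > BUFFER_CAP:             # buffer is full: tail drop
        dropped += queue - BUFFER_CAP
        queue = BUFFER_CAP
    queue = max(0, queue - LINK_RATE)  # link drains at capacity

# Queueing delay a newly arriving packet would see, in ticks.
latency_ticks = queue / LINK_RATE
print(queue, dropped, latency_ticks)
```

The queue fills until the buffer caps it; from then on latency sits at the buffer depth and the overflow shows up as drops, which is the "buffered or dropped" choice the comment describes.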
Hmm... "proprietary encryption"?
Cobol programmers are down in the dumps.