Comment HFSC to the rescue (Score 1) 147

Bufferbloat is definitely still a thing.

I've been using this script for years to drop packets early and improve latency. It uses HFSC (built into Linux since forever) and works great:

https://gist.github.com/eqhmco...

from that:

Congestion avoidance algorithms (such as those found in TCP) do a great job of allowing network endpoints to negotiate transfer rates that maximize a link's bandwidth usage without unduly penalizing any particular stream. This allows bulk transfer streams to use the maximum available bandwidth without affecting the latency of non-bulk (e.g. interactive) streams.

In other words, TCP lets you have your cake and eat it too -- both fast downloads and low latency all at the same time.

However, this only works if TCP's aforementioned congestion avoidance algorithms actually kick in. The most reliable method of signaling congestion is to drop packets. (There are other ways, such as ECN, but unfortunately they're still not in wide use.)
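
(If you want to play with ECN on your own Linux boxes, it's a single standard kernel knob; a value of 1 means "request ECN on outgoing connections", which only helps if the bottleneck queue actually marks packets.)

# Ask for ECN on outgoing TCP connections (0 = off, 2 = only accept it if the
# peer requests it, 1 = request it ourselves and accept it).
sysctl -w net.ipv4.tcp_ecn=1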

Dropping packets to make the network work better is kinda counter-intuitive. But, that's how TCP works. And if you take advantage of that, you can make TCP work great.
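
The gist has all the details, but the basic shape is just a few tc commands. Roughly (interface name and rate here are placeholders; set the rate a bit below your real upload speed so the queue builds on the Linux box instead of in the modem):

DEV=eth0        # your WAN-facing interface
RATE=9000       # kbit, roughly 90% of measured upload bandwidth

# Replace the default FIFO with HFSC; unclassified traffic lands in class 1:10.
tc qdisc add dev $DEV root handle 1: hfsc default 10

# Cap that class just under the uplink rate (sc = service curve, ul = upper limit).
tc class add dev $DEV parent 1: classid 1:10 hfsc sc rate ${RATE}kbit ul rate ${RATE}kbit

# Put SFQ on the leaf so no single bulk flow can hog the (now short) queue.
tc qdisc add dev $DEV parent 1:10 handle 10: sfq perturb 10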

Comment whatever you use, use HFSC (Score 3, Informative) 104

I just use a fanless box (made by cappuccino pc, but there are other vendors too) with several Ethernet ports (at least two, for WAN and LAN) running standard Debian.

But then I apply Linux's best-kept traffic-shaping secret, HFSC. See https://gist.github.com/eqhmco... . You should be able to apply that same script to any Linux distro or mini-distro.

The idea is you do AQM first, and QoS only later or even not at all, to get both low latency for interactive TCP sessions and high throughput for bulk sessions.

AQM is all about dropping packets to throttle TCP before it overruns your ISP's bandwidth cap and starts piling up in the modem's buffers. When done properly, it works amazingly well, and HFSC + SFQ can do it properly.
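
Once it's running, you can watch it work (eth0 is again just a placeholder for your WAN interface):

# Per-qdisc statistics; a growing "dropped" counter on the SFQ leaf means the
# box is signaling congestion itself instead of letting the modem's buffer fill.
tc -s qdisc show dev eth0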

Comment Re:Where's the incentive? (Score 2) 134

You appear to be confused about the issue. This is not about capacity and oversubscription. This is about a pathology of queueing.

To be fair, it's about both.

Large queues are a problem, but they can be mitigated by adding more capacity (bandwidth). It doesn't matter how deep the queue can be if it's never used -- it doesn't matter how many packets can be queued if there's enough bandwidth to push every packet out as soon as it's put in the queue.
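
To put rough numbers on it: assume a 256 KB buffer in the modem and a 10 Mbit/s uplink. 256 KB is about 2 Mbit, and 2 Mbit / 10 Mbit/s = 200 ms of queueing delay once that buffer fills. Upgrade the same link to 100 Mbit/s and a full buffer costs only 20 ms; with enough headroom it rarely fills at all.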

That said, your point about AQM being a valid solution to congestion is, of course, right on:

To avoid large (tens of milliseconds or more) queue backlogs on congested links, you use Active Queue Management. The idea with AQM is, if you have to queue packets (because you don't have enough bandwidth to push everything out in under 10 or 20 milliseconds), then start dropping packets (or ECN-marking them), so TCP's congestion control algorithms kick in.

Dropping newly arriving packets once the queue is already full, instead of queueing them, is known as tail-drop, and it's actually one of the worst ways to do AQM. RED (marking or dropping packets *before* the queue becomes full) and head-drop are better for latency and throughput. However, even simple tail-drop on a sanely sized queue can *drastically* reduce latency on an oversubscribed (congested) link. AQM really works, and it works quite well.
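
On Linux, RED is a stock qdisc. Something like this (the numbers are purely illustrative for a ~10 Mbit link, and you'd normally hang it under a shaper class rather than at the root unless the NIC itself is the bottleneck):

# Probabilistically mark or drop once the average backlog passes "min" bytes,
# mark/drop everything past "max", and tail-drop at the hard "limit"; the ecn
# flag marks ECN-capable flows instead of dropping them.
tc qdisc add dev eth0 root red limit 400000 min 30000 max 90000 \
    avpkt 1000 burst 55 ecn probability 0.02 bandwidth 10mbit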

TCP attempts to divide traffic for different streams evenly among all the flows passing through it.

Well, no, it doesn't. Each stream tries to fight for its own bandwidth, backing off when it notices congestion (dropped or ECN-marked packets). That means the first stream going over the congested link will use the bulk of the bandwidth, because it will already be transmitting at full speed before other streams try to ramp up. The other streams won't be able to ramp up to match the first stream, as they will constantly encounter congestion, and the first stream won't back off enough to let them catch up. To truly enforce fairness between streams, you need fair queueing at the bottleneck, such as SFQ.
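
On Linux that's SFQ's whole job: hash flows into their own queues and serve them round-robin. A bare-bones example (in practice you'd attach it as a leaf under your shaper, as in the script above, since it only helps where the bottleneck queue actually lives):

# Per-flow queues served round-robin; re-seed the hash every 10 seconds so
# colliding flows don't stay stuck sharing a queue.
tc qdisc add dev eth0 root sfq perturb 10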

ISPs have a VERY LARGE incentive to do this.

ISPs certainly use AQM on their core routers, but they have an incentive NOT to use AQM where it really matters: on the congested link between your computer and the gateway. In other words, they don't set up proper AQM on the cable modem or DSL modem.

They don't set up AQM there because they have another incentive: maximizing speed-test results. AQM by definition sacrifices a little raw throughput, and raw speed-test numbers are what customers seem to care about above all else. People don't call support to say they're seeing over 100 ms of latency; they call to say they're paying for 10 Mbit/s and they want to see 10 Mbit/s on the speed-test site.

I don't have any faith that ISPs are going to fix this any time soon. However, AQM really does make a huge difference in the quality of one's internet connection. So much so that the first thing I do when setting up any new shared network (e.g. home or office network) is put a Linux box in between the cable/DSL modem and the rest of the network. There are many AQM scripts out there, but this one is mine: http://serverfault.com/questions/258684/automatically-throttle-network-bandwidth-for-users-causing-bulk-traffic/277868#277868

My script sets up HFSC and SFQ, as well as an ingress filter, to drop packets before they start filling up the large cable/DSL modem buffers. It does a bang-up job of reducing latency; I can hardly internet without AQM in place any more.
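
The ingress half is the less obvious part. Stripped down, it looks something like this (eth0 and 9mbit are placeholders; the rate should sit a bit under your real downstream speed so the drops happen here rather than upstream):

# Attach the ingress qdisc and police all inbound IP traffic to just under the
# downstream rate, dropping the excess so TCP backs off before the modem's
# buffer ever gets a chance to fill.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip prio 50 u32 \
    match ip src 0.0.0.0/0 \
    police rate 9mbit burst 50k drop flowid :1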

You can do the same thing (or at least a similar thing) with some of the SoHo Linux routers running DD-WRT and the like. Most of the scripts for those focus on QoS first and AQM second (if at all), which is a huge mistake. Maybe someday we'll have off-the-shelf SoHo routers that can do *proper* AQM. Now there's a start-up idea if I ever had one.

Comment Re:Paper and gasoline-based dinosaurs (Score 1) 136

I can't disagree at all that the CMS is bizarre. However, I don't think that centralization is necessarily a bad thing.

Yes, I'm sure you could have done a great job locally for your paper, building out your own web infrastructure. However, ideally, a centralized group can do a great job for everybody, not for just one paper.

The former relies on top-notch staff at each paper, while the latter enables one set of top-notch staff to make things better for every paper.

There are trade-offs on both sides, and nobody would say that we've executed well by any measure. But, at least with a central authority, it is *possible* to get everybody to not suck. It's a lot harder, and we continue to fail at it, but I don't think the alternative is viable; at least, not in the way you describe it.

Staff at local papers must have a central system that they can contribute to, use, and mold into what they need to deliver good local content. That's a hard system to build; again, the alternative is each paper building its own system, which I'd argue is even less maintainable in the long run.
