
Comment I am inclined to put this in the "win" column (Score 1) 52

As someone who helped put together one of the biggest filings with the FCC on this matter, with 260+ other people...

http://fqcodel.bufferbloat.net......

(in addition to 1300? 1700? filings from other orgs)

And later met in person with many of the top people there:
https://www.fcc.gov/ecfs/filin...

I am inclined to put this result in the "win" column, provisionally.

June 2 came and went, TP-Link's router firmware returned to being field-upgradable, and other manufacturers did nothing to make flashing third-party firmware any harder than it already was. Hopefully our arguments buttressed the legal case ongoing at the time against TP-Link (I knew there was one, but not against whom, or over what; I hope to get more details).

This does not mean the war is won, however. Binary-blob firmware that completely controls the radio certainly remains a problem - but progress is being made: the 802.11ac mt76 chipset uses a very thin firmware, I am not aware of 5 GHz ath9k chips requiring blobs, and other binary-only firmwares are improving to support the APIs that fq_codel on wifi needs.
http://blog.cerowrt.org/post/f...

(Recently a few new *major* chipsets had wifi drivers submitted to the Linux kernel, but I haven't looked at what, exactly, the firmware controls. The state of most wifi drivers and firmware is thoroughly depressing - and a very smart, fast co-processor is seemingly needed to run at very high rates.)

Five things I learned from this exercise:

1) If a legalistic solution can be vague, it will be. It can then be spun many ways for many audiences. Read Ed Bernays.
Still, what is said publicly sometimes continues to matter, and the FCC has said some very nice things.

2) The FCC was not the enemy, but a harried organization attempting to fulfill its mandates. As minimally outlined, their problem was the FAA complaining about wifi interference with weather radars, and the first solution was overbroad. After the kerfuffle, they have a much better understanding of the role of open source, third-party firmware - of the usefulness of user control, better security, and more frequent updates.

The FCC has WAY bigger problems than Linux wifi. Among other things, the number of wireless-capable devices requiring certification and testing is skyrocketing.

https://twitter.com/FCC is a good source for the FCC's other concerns.

3) If you really want attention in D.C., make a good argument with a lot of well-known people, file it somewhere inside the agency's process, then issue (buy) a press release and make the biggest stink you can.
As it turned out, many of the recommendations we made above cannot be implemented inside the FCC's mandates, but rather the FTC's.

4) Chipmakers can no longer hide behind the argument that the FCC will not let them open up their firmware.

5) The best "proof of the pudding" I can think of would be to push a new product with much more (or entirely) open wifi firmware through the FCC's processes, using the CRDA library to enforce the regulatory rules. Lining up a vendor willing to try that has so far not happened; although I expected a few mt76 chipsets to enter the US market by now, I have not been actively watching their RSS feed for progress.

All in all, honestly, I do think we moved the dial a few notches in the right direction, and I'm going to sleep pretty well tonight.

Comment Trying mainly to get code *maintained* properly (Score 2) 173

Dear Bruce:
In your slashdot posting today you mischaracterized our efforts as attempting to "open source" all routers (as have multiple other reporters and people).
I lost sleep for years trying to create a third *option* - neither "open source" nor "closed source" - for making society's safety-critical source code *public*, versus what is currently buried in unauditable binary blobs. In this letter, I tried to shift the core FCC licensing requirements to mandating that the source code at the lowest layers of the network stack be "public, maintained, and regularly updated".

What license is slapped on this "public" code I totally do not care about - it could mandate you have to sell off your first born child, or slit your throat after reading, for all I care.
I care only that the sources be public, buildable, maintained and updated.
http://www.bufferbloat.net/pro...
Open source and closed source alike have been doing a terrible job of maintenance, and in the embedded market - aside from higher-end devices like Android and mainline OSes like Red Hat/Ubuntu - devices are not being updated. That is the *real problem* here that we are trying to solve.
thx in advance for any efforts you might make to correct your messaging, particularly when talking about our efforts! I have been busting my b**ls to make these points with every reporter I've talked to.
Aside from that... I think extremely highly of your characterization of the problem's stakeholders, the overall quality of your letter is even better than ours, and your proposed solution is quite possibly one that could succeed (although I would perhaps shoot for a new licensing regime that made the git committer more responsible - it is very worthy of discussion!)
I am totally willing to discuss restrictions on "how public" things become - and how fast they become so! - particularly as I am well aware of the dismal code quality in many mission- and public-safety-critical pieces of software out there. Mandating that all of it be made public at once would induce a terrifying amount of risk to society as a whole; a staged approach towards making the core blobby bits public would be best.
...which is why I have tried to initially limit the call to merely opening up the binary blobs going into wifi, particularly as efforts against the current 802.11ac trend toward ever more closed firmware have failed so dismally, and wifi is far less safety-critical than many other things.
I would dearly like, also, to fix the DSL drivers and firmware worldwide, at least in part because I strongly suspect, in light of Snowden's revelations, that quite a lot of it is compromised already. They need only 50 lines of code or so, and a firmware update, to eliminate the bufferbloat in them - and to verify to the FCC that the firmware really does what the authors say on the tin.
Sincerely,
Dave Taht
lead author, the CeroWrt project's letter to the FCC
http://fqcodel.bufferbloat.net...

Submission + - Ask the FCC to switch to sane software engineering practices for wifi! (google.com) 2

mtaht writes: The CeroWrt project is collecting signatures for a letter to the FCC strongly suggesting they adopt saner software engineering practices for certifying wifi devices, instead of the pending regulations.

You can view the letter (signed by Dave Täht, Vint Cerf and many other notables) and add your signature,
here.

Comment Friends don't let friends run factory firmware (Score 2) 52

The article recommends updating the firmware to the latest provided by the vendor - which is quite often no help. First, check whether that latest firmware actually contains the fix... But preferably, install better third-party firmware - like openwrt - designed by people who care about your security, reliability, and uptime.

Submission + - DSLreports new bufferbloat test (internetsociety.org)

mtaht writes: While I have long advocated using professional tools like netperf-wrapper's rrul test suite to diagnose and fix bufferbloat issues, there has long been a need for a simpler, web-based test. Now dslreports has incorporated bufferbloat testing into their speedtest. What sort of bloat do slashdot readers experience? Give the test a shot at http://www.dslreports.com/speedtest

Has anyone here got around to applying fq_codel against their bloat?
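For the curious, what these tests boil down to is simple: measure latency twice, once idle and once with the link saturated, and look at the difference. A minimal Python sketch of that idea (the letter-grade thresholds here are illustrative guesses of mine, not dslreports' actual grading scale):

```python
from statistics import median

def bloat_ms(idle_rtts, loaded_rtts):
    """Bufferbloat is the extra queueing delay a saturated link adds:
    the median RTT under load minus the median idle RTT, in ms."""
    return median(loaded_rtts) - median(idle_rtts)

def grade(bloat):
    """Letter grade for added latency; thresholds are illustrative only."""
    for letter, limit in (("A", 30), ("B", 60), ("C", 200), ("D", 400)):
        if bloat < limit:
            return letter
    return "F"

# A badly bloated cable modem: ~20 ms idle, ~520 ms while uploading.
added = bloat_ms([19, 20, 21], [510, 520, 540])  # 500 ms of bloat
```

Half a second of added delay under load is exactly what a single fat FIFO in front of a slow uplink produces, and it is what fq_codel removes.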

Submission + - Virgin Media censors talk of "bufferbloat" on their discussion forums (blogspot.com)

mtaht writes: Given that bufferbloat is now fixed by fq_codel and the sqm-scripts for anyone who cares to install openwrt or its derivatives on their home router (or use any random Linux box for the job), AND standardization efforts for the relevant algorithms are nearing completion in the IETF, I went and posted a short, helpful message about how to fix it on a bufferbloat-related thread about Virgin Media's cable modems... And they deleted the post and banned my IP... for "advertising". I know I could post again via another IP and try to get them to correct their mistake, but it is WAY more fun to try to annoy them into more publicly acknowledging their enormous bufferbloat problems and releasing a schedule for their fixes. Naturally I figured the members of slashdot could help Virgin and their customers understand their bufferbloat problems better. My explanations of how they can fix their bufferbloat are now here.
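For anyone wondering what fq_codel actually does under the hood: per-flow fair queuing plus the CoDel dropping law. Here is a toy Python model of just the CoDel half, using the published defaults (5 ms target, 100 ms interval) - a simplified sketch of the control law, not the real kernel qdisc, which handles many details omitted here:

```python
from math import sqrt

TARGET = 0.005    # 5 ms: acceptable standing queue delay
INTERVAL = 0.100  # 100 ms: how long delay may exceed target before acting

class CoDel:
    """Toy model of CoDel's dropping decision. Times are in seconds."""
    def __init__(self):
        self.first_above = None  # deadline set when delay first exceeds target
        self.dropping = False
        self.count = 0           # drops so far in the current dropping state
        self.drop_next = 0.0

    def dequeue(self, sojourn, now):
        """Return True to drop the packet whose queue delay is `sojourn`."""
        if sojourn < TARGET:
            # queue delay is fine: leave dropping state, forget history
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            self.first_above = now + INTERVAL
            return False
        if not self.dropping:
            if now >= self.first_above:
                # delay stayed above target a whole interval: start dropping
                self.dropping = True
                self.count = 1
                self.drop_next = now + INTERVAL
                return True
            return False
        if now >= self.drop_next:
            # control law: drop faster the longer delay stays high
            self.count += 1
            self.drop_next = now + INTERVAL / sqrt(self.count)
            return True
        return False
```

Feed it packets whose queue delay sits persistently at, say, 20 ms and it starts shedding packets at an accelerating rate until the sending TCPs back off; feed it sub-5 ms delays and it never drops anything. fq_codel runs one of these per flow, which is why a bulk upload can no longer wreck a voip call.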

Submission + - Gogo airline network blocks youtube.. when they could just fix their bufferbloat (reed.com) 1

mtaht writes: David Reed (best known for the end-to-end argument) rants about Gogo Inflight's interception of https and suggests: "It is the wrong solution... there’s a technically better one. I use GoGo a lot. I’ve discovered that their system architecture suffers from “bufferbloat” (the same problem that caused Comcast to deploy Sandvine DPI gear to discover and attack bittorrent with “forged TCP” packet attacks, and jump-started the political net neutrality movement by outraging the Internet user community). Why does that matter? Well, if GoGo eliminated bufferbloat, streaming to the airplane would not break others’ connections, but would not work at all, with *no effort on Gogo’s part* other than fixing the bufferbloat problem. [The reason is simple — solutions to bufferbloat eliminate buffering *in the network*, thereby creating "fair" distribution of capacity among flows. That means that email and web surfing would get a larger share than streaming or big FTP's, and would not be disrupted by user attempts to stream YouTube or Netflix. At the same time, YouTube and Netflix connections would get their fair share, which is *not enough* to sustain video rates — though lower-quality video might be acceptable, if those services would recode their video to low-bitrate for this limited rate access]."
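Reed's fair-share point is easy to put in numbers. A sketch with made-up figures - the link capacity, flow count, and streaming-rate floor below are my assumptions for illustration, not Gogo's actual numbers:

```python
def fair_share_mbps(capacity_mbps, active_flows):
    """Under fair queuing, each backlogged flow gets an equal share of
    link capacity; capacity unused by idle flows is redistributed."""
    return capacity_mbps / active_flows

# Hypothetical aircraft uplink shared by many passengers' flows:
share = fair_share_mbps(10.0, 40)   # each flow gets 0.25 Mbps
needs_video = share >= 2.5          # assumed rough floor for SD streaming
```

With 40 active flows on a 10 Mbps link, each video stream's fair share is 0.25 Mbps - far below any sustainable video rate - while short web and email flows, which need only a fraction of that, sail through unharmed. No blocking of youtube required.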

Submission + - Help stamp out CVS and SVN in our lifetime (ibiblio.org)

mtaht writes: ESR is collecting specifications and donations toward a new high-end machine to be used for massive CVS and SVN repository conversions, after encountering problems converting the whole of NetBSD over to git.

What he's doing now sort of reminds me of holding a bake sale to build a bomber, but he's well on his way toward a Xeon-class machine or better for the work.

What else can be done to speed up adoption of git and preserve all the computer history kept in source code repositories?

Comment netperf-wrapper from bufferbloat.net (Score 1) 294

Over at bufferbloat.net we have developed several pretty accurate bandwidth and latency measurement tests that work at speeds up to 40GigE. We wrap the linux-netdev community's popular "netperf" tool with something that can aggregate and plot the results, called "netperf-wrapper". The most popular test in the suite is called "rrul", which stands for "Realtime Response Under Load", but there are many others.

It has been used to extensively tune several fair queuing and AQM algorithms, notably "fq_codel", which is in cerowrt, openwrt, and many other third-party firmwares. It's been used to debug network hardware, wifi, and cable modems, and most recently during work on the 40GigE batch-bql patchset now entering the Linux kernel. Some examples of using it to tune a smarter queue management system against modern-day cable modems: http://burntchrome.blogspot.co... http://snapon.lab.bufferbloat....

There are also netperf-wrapper results for 40GigE, DSL, and wifi spread around the Internet. The intermediate format netperf-wrapper uses to store its results is json, parsable by anything, and I keep hoping someone will get around to writing a web interface for the datafiles...

Nothing else I've ever seen even comes close to netperf-wrapper for finding good, accurate, long-term numbers and multiple forms of anomaly. Pretty much all the web-based tests get increasingly inaccurate above 20Mbits. Single-threaded TCP tests are bad also, as they generally result in someone defeating TCP congestion avoidance in pursuit of the best benchmark numbers. [2] Far more important to the debloaters is not the bandwidth attained but the latency induced while getting it. [1]

We maintain several public servers for netperf-wrapper, all connected via a gigE connection to the internet. Thus far we haven't overloaded them (nor advertised them widely), but if you want to give netperf-wrapper a try and can't set up your own netperf server on the other side, feel free to ping us on the bloat mailing list for addresses on various continents.

[1] A brief rant: Bandwidth != speed. Bandwidth is capacity/interval. Real perceived speed is obtained via low latency.

[2] I really hate that the web network measurement tests don't simultaneously measure ping while running their uploads and downloads. IF ONLY those tests did that, people would start to realize that there is a huge tradeoff between good latency and high bandwidth, and that they are doing their networks in by optimizing for bandwidth only. Networks engineered for speedtest's current test *suck* for voip and gaming. I'd like to petition them to at least report ping times under load to the 98th percentile.
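That 98th-percentile figure is trivial for a test to compute from its ping samples. A minimal nearest-rank sketch in Python (my own helper, not part of netperf-wrapper or any existing speedtest):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample at or below which
    `pct` percent of the sorted samples fall."""
    s = sorted(samples)
    rank = max(1, math.ceil(pct * len(s) / 100.0))
    return s[rank - 1]

# RTTs (ms) collected while upload + download run: report the p98,
# not the pretty-looking median, to expose the latency spikes.
loaded_rtts = [22, 25, 30, 31, 35, 40, 48, 55, 180, 410]
p98 = percentile(loaded_rtts, 98)
```

Reporting the 98th percentile (here 410 ms, the worst spike) instead of an average is exactly what exposes a bloated network that a bandwidth-only number hides.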
