Comment: Re:Motorola Atrix (Score 3, Informative) 110

by Jay Carlson (#43318807) Attached to: Why Your Next Phone Will Include Biometric Security

Apple buying the vendor for the fingerprint stack might have something to do with Motorola dropping the ATRIX 4G fingerprint sensor.

The ATRIX 4G was supposed to get an ICS upgrade. There was a "leak" of a partially functional version. My guess is that the licensing issues with Authentec/Apple broke down. Guess Motorola didn't negotiate any long-term contract options.

It's a shame how AT&T handled pricing for the LXDE subsystem. The X server implemented on the NVidia framebuffer/compositing layer was pretty nice. In theory Android 4.2.2 should support non-mirrored HDMI better, so hopefully I can get a Linux desktop bigger than 1280x720 on this Galaxy S3.

Comment: Re:All of them (Score 1) 477

by Jay Carlson (#40907731) Attached to: Most Useful Scripting Language To Learn?

"but there is no guarantee that awk is present on the remote machine, unlike grep, which is ubiquitous and part of the standard base. See, awk isn't, which experience has told me, but not you."

Which century did you have this experience in? awk has shipped with Unix since Version 7, and it's mandatory in POSIX, the SUS, and the LSB. Every "charge for each utility program" Unix has been swept from the face of the earth.
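To make the point concrete, here's a minimal sketch: on any POSIX system, awk is as safe a bet as grep, and it does things grep can't. The log file and its contents are hypothetical, just for illustration.

```shell
# Hypothetical data for illustration.
printf 'ok 1\nfail 2\nok 3\n' > /tmp/demo.log

# grep and POSIX awk produce the same matches:
grep 'ok' /tmp/demo.log
awk '/ok/' /tmp/demo.log

# awk also does what grep can't, e.g. sum a field of the matches:
awk '/ok/ { sum += $2 } END { print sum }' /tmp/demo.log   # prints 4
```

Everything above is in the POSIX awk spec, so it works the same on any SUS-conforming system; busybox awk (see below) is the usual place surprises hide.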

If you're talking embedded systems, you're talking about busybox quirks or worse, and it's not really a good thing to lecture people about that situation without caveats.

Comment: Re:Jim Gettys did the world a great service with t (Score 1) 525

by Jay Carlson (#34807014) Attached to: Bufferbloat — the Submarine That's Sinking the Net

"By taking the high road and not pointing fingers he is able address an issue in such a way that a lot of the people who did contribute to this problem can recognize what they have done and own it, without being labelled, accused or feeling attacked."

But we aren't all bozos on this bus, and pretending "everybody contributed to it" is not necessarily the best way to fix this particular problem, or to reduce the likelihood of this kind of engineering failure in the future. Understanding how this happened is important.

My elevator version of what happened: the bellhead model of a communication service is a reliable circuit-switched connection. "Reliable" sounds good, and circuits are a familiar model. But the Internet is based on a model of best-effort delivery of packets. Every product group experienced in Internet infrastructure knew horror stories about confusing TCP. New entrants did not know this, or had system design teams tilted towards bellhead decision-making.

Cisco has all the cool toys for queue management in their routers. Are they bozos? People who have even skimmed the Linux traffic shaping HOWTO are sensitized to the issues. They're not bozos.
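For anyone who hasn't skimmed that HOWTO: the non-bozo fix it teaches is to replace the default oversized FIFO on an interface with a fairer queue discipline. A minimal sketch (the device name is illustrative, and this needs root):

```shell
# Swap the default pfifo_fast qdisc on eth0 for Stochastic Fairness
# Queueing, as described in the Linux traffic shaping HOWTO:
tc qdisc add dev eth0 root sfq perturb 10

# Inspect the result and its packet/drop counters:
tc -s qdisc show dev eth0
```

This doesn't shrink the buffers themselves, but it keeps one bulk flow from monopolizing the queue and starving everything else behind it.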

I have a copy of the first edition of Comer in front of me (the 1988 one that talked about the inevitable transition from TCP to OSI TP4.) The advice to implementors of gateways tells you to read RFC 1009 very carefully, which has a bunch of congestion cites, including John Nagle's (he's downthread) RFC 970 explaining why infinite buffers are a disaster. These are foundational documents of the Internet, and sure, they're from 1987 and routing to a T1 by processing over 9,000 packets a second is no longer something you would need a supermini for (you probably get faster computers free with your breakfast cereal.) But scanning forward through the RFCs you'll see lots and lots of very pointed advice to the effect of "please do not confuse TCP or you'll be sorry."
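Nagle's point about big buffers is easy to quantify: worst-case queuing delay is buffer size divided by the link's drain rate. A back-of-the-envelope sketch (the numbers are illustrative, not from any particular device):

```shell
# Worst-case queuing delay = buffer size / link rate.
# Example: a 1 MB buffer draining into a 1 Mbit/s uplink.
awk 'BEGIN {
    buf_bits = 1 * 1024 * 1024 * 8   # 1 MB buffer, in bits
    link_bps = 1 * 1000 * 1000       # 1 Mbit/s link
    printf "%.1f seconds of queue\n", buf_bits / link_bps
}'
# prints: 8.4 seconds of queue
```

Eight-plus seconds of standing queue is exactly the pathology RFC 970 warned about: TCP's RTT estimator and congestion control fall apart long before the buffer ever fills.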

So some of the people building the network hardware with these problems weren't alive when this was being figured out. They didn't do their homework; fine. The people running the companies designing and building the hardware don't have that excuse, and it was their job to either get a clue or hire one. Their customers are going to be the ones paying to fix this.

So if you're buying Internet infrastructure, you might want to look for companies (and more particularly, product groups) hanging out on nanog and participating in IETF. That's no proof their products aren't fighting the Internet, but it probably correlates.

My current guess is that organizational decision-making was tilted towards bellhead thinking for a variety of reasons (stereotype: they dress better and do nicer PowerPoint architecture.) Skimming through documentation of bearers such as 1xRTT makes it pretty clear that the design center was "reliable pipe first, then put packets on it." Which makes perfect sense if your company has history in non-Internet telecoms--your senior people are the ones who shipped products that did reliable circuit-switched pipes. But that's just wrong if you're doing IP, and for reasons known in the Internet communications world for decades.

I've been trying to figure out whether I wanted to link some version of this to the blog posts. I figure it's safely out of sight here and won't interfere with the public diplomacy.
