Well, the internet probably does need more bandwidth to support Netflix.
And I'm not a fan of QoS to get better streaming video either. But is Cerf giving up on fixing the known problem with streaming (and any realtime internet work), namely bufferbloat? I heard about that from Jim Gettys (thanks to a tweet from John Carmack). There's a two-page intro in IEEE magazine, a (more interesting, IMHO) PDF slide presentation with nice graphs, and further advice, documents, and code on the bufferbloat website.
See, the problem with streaming isn't just bandwidth, it's latency, and the variability thereof. We always measure, and marketers always talk about, bandwidth, but rarely if ever latency. Thus ISPs don't optimize for it as a rule. The result? You get these occasional 6-second lags and other phenomena, and little economic incentive to track or fix them. (And certain data ISPs are at least mildly incented to look the other way, since it protects their VOIP/PSTN revenues.)
How about ISPs actually implement ECN to deal with it? How about router manufacturers design for this (or we all switch to OpenWrt)? How about we techies develop tools to help consumers monitor line latency (ping times) over time? How about consumers actually learn to care about latency, or we educate them? It's not "too complicated" for consumers to understand: consumers can differentiate between velocity ("what's your car's top speed?") and acceleration ("how quickly can it go from 0 to 60?"), so I'm sure we could get them to understand bandwidth versus latency. It's just not well measured or monitored right now. (I think we need a better phrase/metric that captures the notion of latency the way "0 to 60" does for cars.)
If you want to help develop measures of latency, use Bismark (or vote for it in the FCC open apps competition), or come up with an open-source ping-until-quit tool that logs timestamps over long periods and displays the results graphically and/or competitively. Better yet, make a phone app that does this, hooks into Google's (or whoever's) maps, and shares the data so fellow consumers can see which areas of the phone-company networks really suck. (I'm open to hearing about other tools. I used to use a freeware one, but it went payware, and the best tools I know of are DSLReports's SmokePing and their other tools.)
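For a flavor of what such a ping-until-quit tool would report, here's a minimal sketch (the function name and the jitter formula are my own choices, not any existing tool's) that summarizes a series of already-collected round-trip times, surfacing the variability that bandwidth numbers hide:

```python
import statistics

def latency_report(rtts_ms):
    """Summarize round-trip times (in ms) the way a ping-until-quit
    logger might: steady-state latency plus the variability (jitter)
    that actually hurts realtime traffic."""
    srt = sorted(rtts_ms)
    n = len(srt)
    return {
        "min_ms": srt[0],
        "median_ms": statistics.median(srt),
        "p95_ms": srt[min(n - 1, int(0.95 * n))],
        "max_ms": srt[-1],
        # jitter here = mean absolute difference of consecutive samples
        "jitter_ms": (statistics.mean(abs(a - b)
                                      for a, b in zip(rtts_ms, rtts_ms[1:]))
                      if n > 1 else 0.0),
    }
```

A line with a 20 ms median but an occasional 500 ms spike looks great by the usual marketing numbers and terrible by the p95/jitter ones, which is exactly the point.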
100x greater bandwidth may make recorded video download faster, but it won't solve the core problems with realtime video (streaming or video conferencing), nor web conferencing, nor necessarily online gaming. I sure as hell don't want the internet's quality to become as lousy as cell phones', and that's what'll happen over time if we don't keep the ISPs we pay big bucks to focused on fixing the problems.
(Another too-late post...)
The difference between US "Fair Use" and UK "Fair Dealing" is at least somewhat described at https://secure.wikimedia.org/wikipedia/en/wiki/Fair_dealing#United_Kingdom and in greater detail by a McGill Law Review PDF paper linked there.
As I read it, broadly the UK system enumerates a restricted, fixed set of allowed exceptions, while the US system allows any conceivable use to be "fair" pursuant to a set of factors (which in practice are weighed by the courts as needed).
Apparently the UK system doesn't explicitly allow "parody" which is one reason this comes up (as sort of referenced in the BBC article).
But I suspect the UK copyright minister isn't really interested in promoting "parody"; it's more about trying not to strangle the next Google before it can be invented in the UK. Ask the broader business/economic question: "Google has taken such liberties with copyright fair use in their business model... man, that worked out well... why couldn't Google have been invented in the UK? Oh yeah, the copyright system is really picky about what is/isn't allowed, and this is anti-innovation; maybe we should legislate with liberty-allowing principles/tests rather than explicitly enumerating consumers' rights, and let the courts sort it out."
I've now looked at the Microsoft patent. While I don't know the specifics of the SGI O2 implementation, I doubt they were "processing each of the copies of the frame in parallel, using a different channel of the multiple channels of the GPU" as described in Claim 1. However, IANAL (or should I say IANAPA...)
The claims seem to revolve around handling certain parts of video encoding in a GPU vs certain parts in the CPU but the site is slashdotted so I can't review it at the moment.
All that said, if I were looking for prior art, I would look at SGI patents for SGI's Indigo IMPACT and/or IMPACT Compression board hardware (e.g. see http://www.wordiq.com/definition/SGI_Indigo2) and even better, the slightly later "O2" workstation graphics they implemented in 1997 (see http://www.wordiq.com/definition/SGI_O2 ). The IMPACT graphics video handling was done all in hardware off the CPU as far as I know, but the O2 had a unified memory architecture and integrated graphics in such a way that some video texture operations were handled on the graphics chipset (the MJPEG compression?) and some in the CPU (texture storage in general purpose RAM). Whether this split of CPU/GPU operations matches the claims MS is patenting, I don't know and would welcome informed comment.
(More broadly, I would add that I thought PCs were doing video decoding on the GPU as far back as Nvidia's Riva TNT if not the slightly earlier Riva 128 (1998). Don't know any implementation specifics tho.)
(Sorry I'm posting so late on this topic...)
Google's blog claims that if cars drive themselves, this could reduce greenhouse emissions.
This seems like wishful thinking at best or greenwashing at worst... autonomous cars will be a disaster that will increase greenhouse gases substantially.
1) Because now more people can afford, in terms of their time, to drive farther for work. So they will. And
2) If transportation of raw materials is cheaper because drivers aren't needed, the volume of material transported will go up, assuming demand for goods is somewhat elastic with respect to price. With the amount of material transported increasing, the gasoline required for that transport, and thus the carbon emitted, will increase.
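To make the elasticity argument in 2) concrete, here's a back-of-the-envelope sketch (the function name and every number in it are made-up assumptions for illustration, not data):

```python
def freight_fuel_multiplier(price_drop_frac, demand_elasticity):
    """Constant-elasticity approximation: %change in quantity shipped
    ~= elasticity * %change in price.  If fuel burned scales with
    volume shipped, the same multiplier applies to emissions."""
    return 1.0 + (-demand_elasticity) * price_drop_frac

# Made-up illustration: suppose drivers are ~30% of trucking cost,
# so autonomy cuts shipping prices ~30%; with unit elasticity (-1.0),
# volume shipped -- and fuel burned -- rises ~30%.
```

The point isn't the specific numbers; it's that any nonzero elasticity turns a cost reduction into more ton-miles, and more ton-miles into more carbon unless fuel per ton-mile falls at least as fast.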
Color me a pessimist. Autonomous cars will be great for human freedom, and for human safety, but reduced greenhouse emissions is one thing that will not be a benefit.
Now if Google could build us some nice carpool-sharing app hooked to Google directions, with a reputation engine for the fellow passengers (perhaps in conjunction with their autonomous car work) to avoid unpleasant passenger surprises, *that* I could see helping reduce greenhouse emissions.
The interrupts and NOPs interfered greatly with the network cards, causing the whole thing to come crashing down when more than a couple of the computers were running at a time. It took at least a couple of days for the sysadmin to sort it out.
RIP George, thanks for introducing me to the Internet and I'm sorry that you didn't get to stick around for Linux and
I have a parenthetical aside regarding the word "you".
To take a step back, and at the risk of over-rationalizing this poster's intent, the sloppy language might be due to sloppy thinking, and the sloppy thinking is likely a fruit of a sloppy language: Modern English. Although it happened before I was born, I've become increasingly aware of the decline of the English language in one specific case... we've lost the words to distinguish the second-person singular "you" from the second-person plural "you-as-a-group"/"you all". Currently "you" could mean "just you" or it could mean "you and your community"/"you all"/"all of you".
In older English (e.g. Shakespeare or the King James Bible), there is "thou" if it's directed at a single person, and "ye" if it's directed at you-as-a-group. (Nobody ever explained that to me as a kid! I thought they were the same!) (And thee/you are the corresponding forms used for objects of sentences.) And unfortunately, none of the modern multi-word alternatives for a plural "you" slips off the tongue easily or has a neutral connotation, e.g. "you all" or "the lot of you" or "you people". (Although "y'all" and "youse guys" are regional equivalents of the old "ye".)
For the English-speaking Christians out there, this means if Paul is telling his listeners that "you" should do something, it might (as seen in the Greek or the Spanish or the
I have no idea whether this loss of a distinct second-person plural reflects Western individualism, helped cause it, or is a coincidence, but I don't think it's a complete coincidence.
It certainly affects the nature of dialog between peoples. Lacking a distinct second-person plural, the speaker typically must either A) attach a label to the other party to (semi-)accurately describe them as a group (rather than addressing them as "you all" in a more personal, relational way), or B) just use "you", which can then lead to a situation like the one we all just witnessed, where the speaker is accused of an ad-hominem attack when that may not have been their intention. Their true intention was to lump you together with some unspecified other people, which is a little different from attacking you personally. In any case, neither option A) nor B) leads to particularly friendly relationship-building outcomes among groups.
Actually none of my comments were meant to discuss re-using a one-time pad. I'm not sure which of my comments you're referring to, but perhaps you are thinking of my comment about splitting a 1TB one-time pad into 10 components, each for use with one of 10 different parties. That's not re-use. Otherwise, I completely agree with your comments that using a one-time pad multiple times is, by definition, no longer a one-time pad and has much different security properties.
I've been thinking this same thing (using USB keys for a OTP, and "why don't we do that?") for a couple years now, but 10 minutes after reading your post, the following problems/"considerations" with the USB OTP approach did start to enter my mind:
1) I can see that with a big 2TB pad, you'd also want/need to cycle through pads... the longer you keep the same pad without destroying it, the more data an attacker can get with rubber-hose cryptography if they recover your pad... by coming to your (or his/her) house with a gun and ripping the USB key off your neck. Or seizing it when you/they travel.
2) Also, the other trouble I can foresee with OTPs is that you need one of them for each person you need to communicate with securely. Typically, if you are doing something needing this security, you are not doing it with just one other person... you need to communicate with multiple parties. Once you have 5-10 parties to communicate securely with, the OTP gets a little cumbersome. Carrying around 5-10 USB keys and keeping them straight? And I can't envision it working with 200+ counterparties (a USB-OTP-for-the-web scenario). If you partition your 1TB USB key into, say, 10 parts, one for each counterparty, you still have problems. You still need to get 1/10th of that USB key to each of the other parties without giving them the other 9/10ths of the key. (Or your whole gang could use a set of the same 1TB keys, trading off convenience against the chances of an informant/leaker... and if you're paranoid enough to be using a 1TB OTP, why make that tradeoff?) And don't the counterparties need to communicate among themselves, so they need their own web of keys?
3) There is the little problem of USB-PC security: wouldn't putting the USB key in a PC expose your whole OTP to the perhaps-infected PC? How does this actually work?
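As a sketch of how the partitioning in 2) might work mechanically (the function names and the offset-tracking scheme are my own assumptions, not an existing protocol), the pad is just bytes, encryption is XOR, and each counterparty gets a disjoint slice:

```python
import secrets

def make_pad(nbytes):
    """Generate pad material; a real pad needs a hardware RNG you trust,
    not just the OS CSPRNG."""
    return bytearray(secrets.token_bytes(nbytes))

def partition(pad, parties):
    """Split one pad into disjoint per-counterparty regions, so the
    slice you hand Alice never reveals the bytes reserved for Bob."""
    size = len(pad) // parties
    return [pad[i * size:(i + 1) * size] for i in range(parties)]

def otp_xor(message, pad_slice, offset):
    """Encrypt (or decrypt -- XOR is symmetric) using pad bytes starting
    at `offset`.  Both sides must track the offset in lockstep and never
    reuse a byte; returns the ciphertext and the next unused offset."""
    if offset + len(message) > len(pad_slice):
        raise ValueError("pad exhausted -- time to exchange a new key")
    out = bytes(m ^ p
                for m, p in zip(message, pad_slice[offset:offset + len(message)]))
    return out, offset + len(message)
```

Even this toy version makes the logistics visible: every counterparty pair needs its own slice, both ends must agree on offsets, and a finite pad runs out, which is exactly the cumbersomeness described above.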
One can see that subversion-resistant secure random number generation, secure transport, secure key usage, and secure key destruction are all required to make OTPs actually secure.
I predict someone will attempt to market USB one-time pads within 5 years as a sort of snake-oil bandaid, and I can see a distant future where they get used, but I don't see them becoming widely or securely used particularly soon. (Disclaimer: bank tokens that give you 5-digit codes for authenticating transactions do make a lot more sense to me, however, and might be one targeted use of this technology.)
P.S. I have not read the security literature on one-time pads. Forgive me if I'm stating the obvious.
P.P.S. I was kind of stunned last week though when getting a mini-SD card for my phone that I can, for $50, get something that is literally the width/length/thickness of my pinkie fingernail that contains 8GB.
In other words, the KDE team destroyed a perfectly functioning desktop environment to build a better Weatherbug.
This is a perfect summary of my reaction to KDE4. Mod parent up.
A computer without COBOL and Fortran is like a piece of chocolate cake without ketchup and mustard.