Comment: Re:Copyrighting History (Score 1) 208

by swillden (#49504945) Attached to: Joseph Goebbels' Estate Sues Publisher Over Diary Excerpt Royalties

It seems that the bigger problem here is that modern copyright is so unreasonably long that historical documents are still under copyright. Anything beyond the original 28-year copyright term is really robbing the next generation of history.

While I know all copyright issues are sensitive on /. and I hate going against the stream here, note that the next generation is not really robbed of its history. They just have to pay for it.

Assuming the copyright owner can be found, and is willing to sell.

The basis for Eldred v. Ashcroft was that the celluloid of many old films is rapidly degrading, but because copyright ownership is muddled it's impossible to find anyone from whom the right to republish the films can be purchased, so the films are being lost forever.

Comment: Re:Ok.... Here's the thing, though ..... (Score 4, Insightful) 97

by swillden (#49504883) Attached to: Utilities Battle Homeowners Over Solar Power

The power companies are all moving towards "smart meter" technologies anyway. Why not make sure they've put one in that can monitor the output of a PV solar (or even a wind turbine) installation while they're at it?

For that matter, it seems perfectly reasonable to require the homeowner to install such a meter as part of a solar installation, as a condition of being able to sell power to the utility -- or even to push power into the grid at all.
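
To make concrete what such a meter would need to track, here is a toy sketch of the bookkeeping it enables (the rates and field names are made up for illustration, not any utility's actual tariff):

# Hypothetical sketch of net-metering bookkeeping a bidirectional smart
# meter could support. Rates and field names are illustrative only.

from dataclasses import dataclass

@dataclass
class MeterReading:
    imported_kwh: float   # energy the home drew from the grid
    exported_kwh: float   # energy the PV array pushed into the grid

def monthly_bill(reading: MeterReading,
                 retail_rate: float = 0.12,     # $/kWh the homeowner pays
                 feed_in_rate: float = 0.05) -> float:  # $/kWh the utility credits
    """Net the two registers instead of assuming power flows only one way."""
    charge = reading.imported_kwh * retail_rate
    credit = reading.exported_kwh * feed_in_rate
    return charge - credit

july = MeterReading(imported_kwh=420.0, exported_kwh=310.0)
print(f"Net bill: ${monthly_bill(july):.2f}")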

Comment: Re:What is wrong with SCTP and DCCP? (Score 5, Interesting) 62

by swillden (#49503565) Attached to: Google To Propose QUIC As IETF Standard

SCTP, for one, doesn't have any encryption.

Good, there is no reason to bind encryption to the transport layer except to improve the reliability of the channel in the face of active denial (e.g., a TCP RST attack).

I disagree. To me there's at least one really compelling reason: To push universal encryption. One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP/2. I don't think that will happen with QUIC.

But there are other reasons as well, quite well-described in the documentation. The most significant one is performance. QUIC achieves new connection setup with less than one round trip on average, and restart with none... just send data.
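
To illustrate the "just send data" part, here is a toy sketch of the idea (this is not the real QUIC handshake, and all the names are invented): a client that has cached the server's configuration from an earlier connection can put application data in its very first flight, while a cold client spends a round trip learning it first.

# Toy illustration of QUIC-style setup vs. resumption. This is NOT the
# real QUIC handshake; the class and field names are invented.

class ToyClient:
    def __init__(self):
        self.cached_server_config = None  # learned on a previous connection

    def round_trips_before_data(self) -> int:
        """How many round trips pass before application data is in flight."""
        if self.cached_server_config is not None:
            # Restart: encrypt to the cached config and put the request in
            # the very first packet, so zero setup round trips.
            return 0
        # New connection: one round trip to fetch the server's config and
        # keys (QUIC's worst case), then the request goes out.
        self.cached_server_config = "server-config-from-first-reply"
        return 1

client = ToyClient()
print(client.round_trips_before_data())  # 1: brand-new connection
print(client.round_trips_before_data())  # 0: restart, just send data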

Improvements to TCP help everything layered on top of it.

True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least. That would not only slow adoption, it would also mean a whole lot of additional design complexity forced by backward compatibility requirements. QUIC, on the other hand, will be rolled out in applications, and it doesn't have to be backward compatible with anything other than previous versions of itself. It will make its way into the OS stacks, but systems that don't have it built in will continue using it as an app library.

Not having stupid unnecessary dependencies means I can benefit from TLS improvements even if I elect to use something other than IP to provide an ordered stream or I can use TCP without encryption and not have to pay for something I don't need.

So improve and use those protocols. You may even want to look to QUIC's design for inspiration. Then you can figure out how to integrate your new ideas carefully into the old protocols without breaking compatibility, and then you can fight your way through the standards bodies, closely scrutinized by every player that has an existing TLS or TCP implementation. To make this possible, you'll need to keep your changes small and incremental, and well-justified at every increment. Oh, but they'll also have to be compelling enough to get implementers to bother. With hard work you can succeed at this, but your timescale will be measured in decades.

In the meantime, QUIC will be widely deployed, making your work irrelevant.

As for using TCP without encryption so you don't have to pay for something you don't need, I think you're both overestimating the cost of encryption and underestimating its value. A decision that a particular data stream doesn't have enough value to warrant encrypting it is guaranteed to be wrong if your application or protocol is successful. Stuff always gets repurposed, and sufficient re-evaluation of security requirements is rare (even assuming the initial evaluation wasn't just wrong).

TCP+TFO + TLS extensions provide the same zero RTT opportunity as QUIC without reinventing wheels.

Only for restarts. For new connections you still have all the TCP three-way handshake overhead, followed by all of the TLS session establishment. QUIC does it in one round trip, in the worst case, and zero in most cases.

There was much valid (IMO) criticism of SPDY: that it really only helped already well-optimized sites -- like Google's -- perform significantly better. Typical sites aren't any slower with SPDY, but they aren't much faster either, because they're so inefficient in other areas that request bottlenecks aren't their problem, so fixing those bottlenecks doesn't help. But QUIC will generally cut between two and four RTTs out of every web browser connection. And, of course, it also includes all of the improvements SPDY brought, plus new congestion management mechanisms which are significantly better than what's in TCP (so I'm told, anyway; I haven't actually looked into that part).
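
To put rough numbers on where those round trips go (the counts below are simplifying assumptions about plain TLS 1.2 and resumption behavior, not measurements of any particular stack):

# Back-of-the-envelope time-to-first-byte comparison. The round-trip
# counts are simplifying assumptions, not measurements of any real stack.

RTT_MS = 100  # assume a 100 ms round trip to the server

setup_rtts = {
    "TCP + TLS 1.2, new connection": 3,  # TCP handshake + two TLS round trips
    "TCP+TFO + TLS 0-RTT, restart":  0,  # the parent's point: restarts can match
    "QUIC, new connection":          1,  # worst case
    "QUIC, restart":                 0,  # just send data
}

for name, rtts in setup_rtts.items():
    # Add one more round trip for the actual request/response itself.
    print(f"{name:32} {(rtts + 1) * RTT_MS} ms to first byte")

On this toy accounting a new QUIC connection saves two or three round trips relative to a fresh TCP+TLS connection, which is the same ballpark as the two-to-four-RTT figure above.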

I'm not saying the approach you prefer couldn't work. It probably could. In ten to twenty years. Meanwhile, a non-trivial percentage of all Internet traffic today is already using QUIC, and usage is likely to grow rapidly as other browsers and web servers incorporate it.

I think the naysayers here have forgotten the ethos that made the Internet what it is: Rough consensus and running code first, standardization after. In my admittedly biased opinion (some of my friends work on SPDY and QUIC), Google's actions with SPDY and QUIC aren't a violation of the norms of Internet protocol development, they're a return to those norms.

Comment: Re:Simple (Score 3, Interesting) 224

by swillden (#49502187) Attached to: Ask Slashdot: What Features Would You Like In a Search Engine?

False analogy. There's a huge difference between a personal assistant, who by definition *I* know personally, and a faceless business entity whom I know not at all (read: adversarial entity) scraping 'enough' information about me to presume it knows me sufficiently to second-guess what I want and give me that instead of what I requested.

Not really.

I'd say there's a good argument that all of the information I give Google actually exceeds what a personal assistant would know about me. The real difference (thus far) lies in the assistant's ability to understand human context, which Google's systems lack. But that's merely a problem to be solved.

Note, BTW, that I'm not saying everyone should want what I want, or be comfortable giving any search engine enough information to be such an ideal assistant. That's a personal decision. I'm comfortable with it... but I'm not yet getting the search results I want.

Comment: Re:Simple (Score 1) 224

by swillden (#49502045) Attached to: Ask Slashdot: What Features Would You Like In a Search Engine?

Why would I want crappy results? I want it to give me what I want, which by definition isn't "crappy".

And you think a system built by man can divine what you and everyone else wants at the moment you type it in? That'll be the day. Until then, assume I know what I want and not your system.

I think systems built by man that know a sufficient amount about me, my interests, and my needs can. We're not there yet, certainly, but the question was what I want... and that's it.

Put it this way: Suppose you had a really bright personal assistant who knew pretty much everything about you and could see what you are doing at any given time, and suppose this assistant also had the ability to instantly find any data on the web. I want a search engine that can give me the answers that assistant could.

Comment: Re:Does it report seller's location and ID? (Score 1) 140

by swillden (#49501287) Attached to: Google Helps Homeless Street Vendors Get Paid By Cashless Consumers
Sure, but that requires only very coarse -- city-level, at most -- geolocation. If I were reviewing this product for launch, I'd tell them that they can use location as a risk signal, but must coarsen it to avoid making it possible to use it for people-tracking.
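
The coarsening can be as crude as snapping coordinates to a wide grid before they ever leave the device; something like this sketch (the 0.1-degree cell size is an illustrative assumption, not any product's actual policy):

# Hypothetical sketch of coarsening a location before using it as a risk
# signal. The 0.1-degree grid (roughly 10 km) is an illustrative choice.

def coarsen(lat: float, lon: float, grid_deg: float = 0.1):
    """Snap coordinates to the nearest coarse grid point."""
    def snap(value: float) -> float:
        return round(round(value / grid_deg) * grid_deg, 6)
    return snap(lat), snap(lon)

print(coarsen(40.74844, -73.98566))  # (40.7, -74.0): city-level at best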

Comment: Re:What the fuck is the point of the ISP middleman (Score 1) 43

by swillden (#49501277) Attached to: Google Ready To Unleash Thousands of Balloons In Project Loon

If local ISPs are involved, then what the fuck is the point of this?

Not really ISPs, at least as we traditionally think of them. Mobile network operators.

Why the fuck is there still this useless ISP middleman?

The MNO in question isn't the middleman, it's the service provider. It provides service to the balloons, which relay it to regions that are too remote to service now.

For crying out loud, this whole problem exists in the first place because the local ISPs weren't able or willing to invest in the infrastructure needed to provide Internet access to these regions.

No, most of these regions aren't served because it's uneconomical. It's not that no one is willing to invest, it's that it's not an "investment" if you know up front that the ROI will be negative. Putting up a bunch of cell towers to serve remote African farmers, for example, doesn't pan out economically because there's no way the farmers can afford to pay high enough fees to cover the costs of all the infrastructure. Project Loon aims to fix this by radically lowering the cost of serving those regions, to a point where it is economical, so the fees the people in the region can afford to pay are sufficient to make serving them profitable.

As for why Google is partnering with MNOs rather than deploying its own connectivity: I don't know, but I'd guess a couple of reasons. First, I expect it will be feasible to scale faster by partnering with entities that already have a lot of the infrastructure in place, particularly when you consider all of the legal and regulatory hurdles (which in many areas means knowing who to bribe, and how -- Google, like most American companies, would not be very good at that). Second, by working through local companies Google will avoid getting into power struggles with the local governments. Google is helping local businesses grow, not replacing them.

(Disclaimer: I work for Google, but I don't know anything more about this than what I see/read in the public press.)

Comment: Re:What's the problem? (Score 1) 180

by swillden (#49497259) Attached to: Social Science Journal 'Bans' Use of p-values

There really aren't any good ways to measure those other effects. If you knew how your experiment was biased, you'd try and fix it.

Randomized sampling goes a long way, but only if you have a large enough population. This is one of the problems of social sciences. A randomized 10% subsample from 100 subjects ain't gonna cut it. A randomized subsample from 10,000,000 people isn't going to get funded.

Why wouldn't a randomized subsample from 10M people get funded? The required sample size doesn't grow as the population does.
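
A quick sanity check with the standard sample-size formula for a proportion (95% confidence, 5% margin of error) plus the finite-population correction shows why:

# Required sample size is essentially flat in population size. Standard
# formula for estimating a proportion within a given margin of error at
# 95% confidence, with the finite-population correction applied.

import math

def sample_size(population: int, margin: float = 0.05, z: float = 1.96,
                p: float = 0.5) -> int:
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population requirement
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

for pop in (100, 10_000, 10_000_000):
    print(f"population {pop:>10,}: about {sample_size(pop)} respondents needed")

Going from ten thousand people to ten million barely moves the answer; the margin of error and confidence level drive the sample size, not the population.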

Comment: Re:What's the problem? (Score 4, Insightful) 180

by swillden (#49495247) Attached to: Social Science Journal 'Bans' Use of p-values

Actually, p-values are about CORRELATION. Maybe *you* aren't well-positioned to be denigrating others as not statistical experts.

I may be responding to a troll here, but, no, the GP is correct. P-values are about probability. They're often used in the context of evaluating a correlation, but they needn't be. Specifically, a p-value is the probability that the observed statistical result (which may be a correlation), or one at least as extreme, could arise purely from the random selection of a particularly unrepresentative sample when there is no real effect. Good sampling techniques can't eliminate the possibility that your random sample just happens to be non-representative, and the p-value measures how likely a fluke of that size is. A p-value of 0.05 means that pure sampling luck would produce a result this strong only about 5% of the time.
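
To make that concrete, here is a small simulation of exactly that kind of sampling luck (the effect size and group sizes are arbitrary):

# Simulate what a p-value measures: how often pure sampling luck, with no
# real effect at all, produces a difference at least as large as the one
# observed. Effect size and group sizes are arbitrary.

import random
import statistics

random.seed(1)

def simulated_p_value(observed_diff: float, n: int, sims: int = 20_000) -> float:
    """Draw two groups from the SAME population; any difference is luck."""
    hits = 0
    for _ in range(sims):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        if abs(statistics.mean(a) - statistics.mean(b)) >= observed_diff:
            hits += 1
    return hits / sims

# With 30 subjects per group, how often does chance alone produce a gap of 0.5?
print(simulated_p_value(observed_diff=0.5, n=30))  # around 0.05, give or take

Note that nothing in that simulation says anything about a biased sampling procedure or a badly designed experiment, which is exactly the limitation described below.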

The problem with p-values is that they only describe one way the experiment could have gone wrong, but people interpret them to mean overall confidence or, even worse, significance of the result, when they really only describe confidence that the sample wasn't skewed by bad luck in random sampling. The sample could have been biased because the sampling methodology wasn't good. The result could have been meaningless because it finds an effect which is real, but negligibly small. It could be meaningless because the experiment was just badly constructed and didn't measure what it thought it was measuring. There could be lots and lots of other problems.

There's nothing inherently wrong with p-values, but people tend to believe they mean far more than they do.

Comment: Re:Does it report seller's location and ID? (Score 2) 140

by swillden (#49490627) Attached to: Google Helps Homeless Street Vendors Get Paid By Cashless Consumers

The phone then reports this seller's ID to some central server. Does it also report geolocation data?

I seriously doubt it. I don't see how location reporting for a payment transaction in which location data is irrelevant could possibly pass Google's privacy policy review process. Collection of data not relevant to the transaction is not generally allowed[*], and if the data in question is personally identifiable (mappable to some specific individual), then a really compelling reason for collection is required, as well as tight internal controls on how the data is managed and who has access. I don't see what could possibly justify it in this case, and I can see a lot of risk in collecting it.

FYI, Google product teams have to develop privacy design docs for all new products, and the designs have to be reviewed by the privacy team (or their delegates) and pass the privacy review before they can be launched. Although Google set these processes up before the FTC settlement, I believe they became part of the consent decree and are now mandated by the FTC and validated in regular audits, so Google can't skip or violate them without potentially-significant consequences.

Disclaimer: I'm not a Google spokesperson and this is not an official statement. It is my personal perspective on the process and requirements. However, I'm a Google engineer who's been involved in launching privacy-sensitive products, so I think my perspective is accurate. I also do security reviews of Google projects, which sometimes touches on privacy issues (though privacy review is separate from security review, as it should be).

[*] Just to head off a likely riposte: No, StreetView Wifi collection and the Safari do-not-track workaround are not counterexamples. They predated the privacy review processes and, as I understand it, were part of the motivation for establishing the processes.

Comment: Re:Not fully junk (Score 1) 310

In fact, by decapitating this girl and digging her brain out of her skull, they've guaranteed she is forever dead.

As opposed to what? Cremation? Burial in a box at temperatures well above freezing? You can't seriously argue that this approach makes it less likely that she could be repaired and restarted at some point in the future than typical corpse disposal methods.
