Comment: Re: "Surge Pricing" (Score 1) 70

by swillden (#49505663) Attached to: How Uber Surge Pricing Really Works

If I own a store and there's a civil emergency, I won't even open my store. I would use the products for the safety/survival of my family.

On the other hand, if there aren't any silly laws in place preventing you from selling your goods at 10X the normal price, maybe you will only keep aside what your family really needs and sell the rest, thus making important goods available to the public when they're really needed. But if that's illegal, yeah, might as well keep them for yourself. When things get back to normal you can continue selling whatever you didn't use at the normal price -- same as you were able to sell it for during the emergency, but without taking the risk of selling something you might need.

Restrictions on scarcity pricing are a bad idea and serve only to create even more scarcity.

Comment: Re:Copyrighting History (Score 1) 239

by swillden (#49504945) Attached to: Joseph Goebbels' Estate Sues Publisher Over Diary Excerpt Royalties

It seems that the bigger problem here is that modern copyright is so unreasonably long, historical documents are still under copyright. Anything over the original 28 year copyright term is really robbing the next generation of history.

While I know all copyright issues are sensitive on /. and I hate going against the stream here, note that the next generation is not really robbed of history. They just have to pay for it.

Assuming the copyright owner can be found, and is willing to sell.

The basis for Eldred v. Ashcroft was that the celluloid of many old films is rapidly degrading, but because the copyright ownership is muddled it's impossible to find anyone from whom the right to republish the films can be purchased, so the films are being lost forever.

Comment: Re:Ok.... Here's the thing, though ..... (Score 5, Insightful) 288

by swillden (#49504883) Attached to: Utilities Battle Homeowners Over Solar Power

The power companies are all moving towards "smart meter" technologies anyway. Why not make sure they've put one in that can monitor the output of a PV solar (or even a wind turbine) installation while they're at it?

For that matter, it seems perfectly reasonable to require the homeowner to install such a meter as part of a solar installation, as a condition of being able to sell power to the utility -- or even to push power into the grid at all.

Comment: Re:What is wrong with SCTP and DCCP? (Score 5, Interesting) 71

by swillden (#49503565) Attached to: Google To Propose QUIC As IETF Standard

SCTP, for one, doesn't have any encryption.

Good, there is no reason to bind encryption to transport layer except to improve reliability of the channel in the face of active denial (e.g. TCP RST attack).

I disagree. To me there's at least one really compelling reason: To push universal encryption. One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP2. I don't think that will happen with QUIC.

But there are other reasons as well, quite well-described in the documentation. The most significant one is performance. QUIC achieves new connection setup with less than one round trip on average, and restart with none... just send data.

Improvements to TCP help everything layered on top of it.

True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least. That would not only slow adoption, it would also mean a whole lot of additional design complexity forced by backward compatibility requirements. QUIC, on the other hand, will be rolled out in applications, and it doesn't have to be backward compatible with anything other than previous versions of itself. It will make its way into the OS stacks, but systems that don't have it built in will continue using it as an app library.

Not having stupid unnecessary dependencies means I can benefit from TLS improvements even if I elect to use something other than IP to provide an ordered stream or I can use TCP without encryption and not have to pay for something I don't need.

So improve and use those protocols. You may even want to look to QUIC's design for inspiration. Then you can figure out how to integrate your new ideas carefully into the old protocols without breaking compatibility, and then you can fight your way through the standards bodies, closely scrutinized by every player that has an existing TLS or TCP implementation. To make this possible, you'll need to keep your changes small and incremental, and well-justified at every increment. Oh, but they'll also have to be compelling enough to get implementers to bother. With hard work you can succeed at this, but your timescale will be measured in decades.

In the meantime, QUIC will be widely deployed, making your work irrelevant.

As for using TCP without encryption so you don't have to pay for something you don't need, I think you're both overestimating the cost of encryption and underestimating its value. A decision that a particular data stream doesn't have enough value to warrant encrypting it is guaranteed to be wrong if your application/protocol is successful. Stuff always gets repurposed, and sufficient re-evaluation of security requirements is rare (even assuming the initial evaluation wasn't just wrong).

TCP+TFO + TLS extensions provide the same zero RTT opportunity as QUIC without reinventing wheels.

Only for restarts. For new connections you still have all the TCP three-way handshake overhead, followed by all of the TLS session establishment. QUIC does it in one round trip, in the worst case, and zero in most cases.
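To make the round-trip arithmetic concrete, here's a back-of-the-envelope sketch. The RTT counts are illustrative assumptions based on the discussion above (TCP's three-way handshake costs one round trip before data, a full TLS 1.2 handshake adds two, while QUIC folds transport and crypto setup into one round trip for a new server and zero on repeat connections); the names and constants are mine, not from any spec.

```python
# Rough comparison of handshake delay before the first byte of
# application data, for a given network round-trip time.

def setup_delay_ms(rtt_ms, rtts_before_data):
    """Time spent on connection setup before application data flows."""
    return rtt_ms * rtts_before_data

RTT_MS = 100  # assumed RTT, e.g. a mobile connection

# Assumed round-trip counts per scenario (illustrative, not normative):
scenarios = {
    "TCP + TLS 1.2 (new)":    3,  # 1 (TCP) + 2 (full TLS handshake)
    "TCP + TLS 1.2 (resume)": 2,  # 1 (TCP) + 1 (abbreviated TLS)
    "QUIC (new server)":      1,  # combined transport + crypto setup
    "QUIC (repeat server)":   0,  # cached server config, just send data
}

for name, rtts in scenarios.items():
    print(f"{name:24s} {setup_delay_ms(RTT_MS, rtts):4d} ms")
```

On a 100 ms link the gap between three handshake round trips and zero is 300 ms per connection, which is roughly the "two to four RTTs" of savings discussed below.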

There was much valid (IMO) criticism of SPDY, that it really only helped really well-optimized sites -- like Google's -- to perform significantly better. Typical sites aren't any slower with SPDY, but aren't much faster, either, because they are so inefficient in other areas that request bottlenecks aren't their problem, so fixing those bottlenecks doesn't help. But QUIC will generally cut between two and four RTTs out of every web browser connection. And, of course, it also includes all of the improvements SPDY brought, plus new congestion management mechanisms which are significantly better than what's in TCP (so I'm told, anyway; I haven't actually looked into that part).

I'm not saying the approach you prefer couldn't work. It probably could. In ten to twenty years. Meanwhile, a non-trivial percentage of all Internet traffic today is already using QUIC, and usage is likely to grow rapidly as other browsers and web servers incorporate it.

I think the naysayers here have forgotten the ethos that made the Internet what it is: Rough consensus and running code first, standardization after. In my admittedly biased opinion (some of my friends work on SPDY and QUIC), Google's actions with SPDY and QUIC aren't a violation of the norms of Internet protocol development, they're a return to those norms.

Comment: Re:Simple (Score 3, Interesting) 241

by swillden (#49502187) Attached to: Ask Slashdot: What Features Would You Like In a Search Engine?

False analogy. There's a huge difference between a personal assistant, who by definition *I* know personally, and a faceless business entity who I know not at all (read adversarial entity) scraping 'enough' information about me to presume it knows me sufficiently to second guess what I want and give me that instead of what I requested.

Not really.

I'd say there's a good argument that all of the information I give Google actually exceeds what a personal assistant would know about me. The real difference (thus far) lies in the assistant's ability to understand human context, which Google's systems lack. But that's merely a problem to be solved.

Note, BTW, that I'm not saying everyone should want what I want, or be comfortable giving any search engine enough information to be such an ideal assistant. That's a personal decision. I'm comfortable with it... but I'm not yet getting the search results I want.

Comment: Re:Simple (Score 1) 241

by swillden (#49502045) Attached to: Ask Slashdot: What Features Would You Like In a Search Engine?

Why would I want crappy results? I want it to give me what I want, which by definition isn't "crappy".

And you think a system built by man can divine what you and everyone else wants at the moment you type it in? That'll be the day. Until then, assume I know what I want and not your system.

I think systems built by man that know a sufficient amount about me, my interests, and my needs can. We're not there yet, certainly, but the question was what I want... and that's it.

Put it this way: Suppose you had a really bright personal assistant who knew pretty much everything about you and could see what you are doing at any given time, and suppose this assistant also had the ability to instantly find any data on the web. I want a search engine that can give me the answers that assistant could.

Comment: Re:Does it report seller's location and ID? (Score 1) 140

by swillden (#49501287) Attached to: Google Helps Homeless Street Vendors Get Paid By Cashless Consumers
Sure, but that requires only very coarse -- city-level, at most -- geolocation. If I were reviewing this product for launch, I'd tell them that they can use location as a risk signal, but must coarsen it to avoid making it possible to use it for people-tracking.

Comment: Re:What the fuck is the point of the ISP middleman (Score 2) 46

by swillden (#49501277) Attached to: Google Ready To Unleash Thousands of Balloons In Project Loon

If local ISPs are involved, then what the fuck is the point of this?

Not really ISPs, at least as we traditionally think of them. Mobile network operators.

Why the fuck is there still this useless ISP middleman?

The MNO in question isn't the middleman, it's the service provider. It provides service to the balloons, which relay it to regions that are too remote to service now.

For crying out loud, this whole problem exists in the first place because the local ISPs weren't able or willing to invest in the infrastructure needed to provide Internet access to these regions.

No, most of these regions aren't served because it's uneconomical. It's not that no one is willing to invest, it's that it's not an "investment" if you know up front that the ROI will be negative. Putting up a bunch of cell towers to serve remote African farmers, for example, doesn't pan out economically because there's no way the farmers can afford to pay high enough fees to cover the costs of all the infrastructure. Project Loon aims to fix this by radically lowering the cost of serving those regions, to a point where it is economical, so the fees the people in the region can afford to pay are sufficient to make serving them profitable.

As for why Google is partnering with MNOs rather than deploying their own connectivity? I don't know but I'd guess a couple of reasons. First, I expect it will be feasible to scale faster by partnering with entities who already have a lot of the infrastructure in place, particularly when you consider all of the legal and regulatory hurdles (which in many areas means knowing who to bribe, and how -- Google, like most American companies, would not be very good at that). Second, by working through local companies Google will avoid getting into power struggles with the local governments. Google is helping their local businesses to grow, not replacing them.

(Disclaimer: I work for Google, but I don't know anything more about this than what I see/read in the public press.)

Comment: Re:It doesn't work that way. (Score 1) 110

by DerekLyons (#49500315) Attached to: An Engineering Analysis of the Falcon 9 First Stage Landing Failure

what we're saying is that arranging for velocity AND position to be 'null' at the same time is harder than simply arranging for velocity to be null and position to be +/- 100m(or so).

*Sigh*
 
I understand what you're saying - but as with my previous reply, you don't grasp the problem.
 
The appearance of the vehicle "working hard at the last second" during the first attempt was a consequence of running out of hydraulic fluid - and would have occurred regardless of the size of the target. The appearance of the vehicle "working hard at the last second" during the last attempt was a consequence of the throttle valve not operating to spec - and would have occurred regardless of the size of the target.
 
From the point of view of the final landing sequence it's not all that much easier to arrange for velocity to be null and position to be +/- 100m than it is to arrange the same +/- 1m. Selecting a landing point occurs at a relatively high altitude (and on a, relatively speaking, relaxed timeline) and final trim starts around a kilometer or so up (AIUI). From there, jittering the variables (burn time and timing, gimbal angles, and throttle settings) a tiny amount one way or another to maintain targeting isn't a substantial burden (on the software or the hardware) compared to the much larger problem of nulling your velocities.

You're talking about some kind of articulated arm (which can survive being essentially inside rocket exhaust)

I think you're picturing something different. I'm picturing something pretty big that comes in from the sides, staying well away from the exhaust.

That just makes an already heavy, complex, and expensive system even heavier, more complex, and more expensive than I envisioned.

Comment: Re:I guess he crossed the wrong people (Score 1) 306

by fermion (#49498205) Attached to: Columbia University Doctors Ask For Dr. Mehmet Oz's Dismissal
Most guilty people will immediately try to become the victim. Ignore the fact that I convince gullible people to buy junk that at best is useless and at worst will harm them. Ignore the fact that I use my medical degree to trick people. Look at the big bad corporation over here that wants to attack me. Ignore the fact that I am in the arms of a big bad corporation that airs my TV show and wants ratings no matter what.

My problem with Dr. Oz is not that he appears to be an unethical charlatan who will prostitute himself to any snake oil salesman who asks. My problem, in the few shows I have seen, is that he is actively teaching his audience bad science. This is not surprising, as doctors are not scientists. For instance, there was one show on fat where his depiction of fat was completely inaccurate. The demonstration was there to be visually exciting, but at the expense of any real science. I can imagine the people who saw it going to their doctor and arguing a point, thinking Dr. Oz is right and their doctor is wrong.

It is entertainment. I agree that persons who are fundamentally entertainers and not seriously committed to medicine should probably not be on the medical staff.
