
Comment Re:Communication has never been secure (Score 1) 562

I think you spelled "reality" wrong :-) Never say or do anything you wouldn't want your mother to see on the front page of tomorrow's newspaper.

Good advice when making public statements or comments.

When having a private discussion with trusted people, the government and any other peeping toms who think they have a right to it can eat random noise.

Comment Re:Communication has never been secure (Score 2) 562

Snail mail and land line phones were never secure, all it took was a search warrant/court order (really easy to get) and the police had it. Email is no different.

Sure they are, you just need to add your own security on top. People have always been able to break out their favorite secret book and one-time-pad (OTP) their message, or speak in code.
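
For the curious, a one-time pad is about as simple as crypto gets. A minimal Python sketch (os.urandom stands in for the truly random pad you'd actually need; the pad must be at least as long as the message and never reused):

    import os

    def otp(message: bytes, pad: bytes) -> bytes:
        # XOR each byte with the pad; the same call decrypts.
        # Security requires the pad be truly random, as long as
        # the message, shared secretly, and used exactly once.
        assert len(pad) >= len(message)
        return bytes(m ^ p for m, p in zip(message, pad))

    pad = os.urandom(32)              # stand-in for a real random pad
    ct = otp(b"meet at dawn", pad)
    assert otp(ct, pad) == b"meet at dawn"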

All the ranting about the NSA and government intrusion just diverts from the fact that: 1) if you don't want anyone to hear what you say, don't say it.

Unacceptable.

2) if you don't want anyone to read what you write, don't write it down.

See above.

The USA founding fathers lived with the knowledge that they would be held accountable for what they said and wrote, and today it's no different.

Really? So while negotiating and working to build consensus, it was all out there for anyone to know their bargaining positions? There was no need for secrecy?

Comment Hack the planet (Score 1) 77

In the real world any serious attack would have been conducted in stealth far in advance, with damage triggered at a time of the attacker's choosing.

In the fantasy world the military brass operate in, repelling a "cyber attack" means sitting in front of an oversized console while "god" yells "Rabbit... flu shot? Someone talk to me."

Comment Re:Disgusting (Score 1) 95

Insurance externalizes internalities.

No, it doesn't.

In what way does it not? With insurance someone else is paying the bill even when you fuck up. You will feel some additional pain, but most of it is offloaded.

There are ways to turn costs or sudden losses into externalities via publicly provided or covered insurance, but that's not a consequence of all insurance.

My remarks are limited to "most insurance".

It's been no easier in the past to deal with sudden catastrophes than it is now.

I'm not so sure. In isolation this is an easy case to make... hey, a tree fell on my house and now I can afford to fix it... but there are also downsides and opportunity costs.

The hospital industry is a good example of what happens when you allow externalities to run rampant: huge increases in overall share of GDP for little measurable improvement in outcomes. Worse, most of the expenditures go toward dealing with the consequences of diseases that normally occur only when people fail to take proper care of themselves.

In any event, disagreement is not grounds for a -1 Troll mod, and +4 Insightful is hardly deserved by those who veer off topic.

Comment Changes for vendors sake (Score 1) 640

I sincerely hope in the year 2020 there is an operating system in existence I would happily want to upgrade to.

Commercial vendors are spending too much time "playing games" and not enough time providing actual value to end users. I fear by 2020 things will only get worse, yet it is also clear MS has belatedly learned some lessons.

The final end of support for Windows 15 will be January 19th, 2038.
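
If the date seems oddly specific: that is the second a signed 32-bit time_t rolls over. Quick check in Python:

    import datetime

    # Largest second representable in a signed 32-bit time_t.
    print(datetime.datetime.fromtimestamp(2**31 - 1,
                                          tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00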

Comment Re:Disgusting (Score -1, Troll) 95

Even if this were true, what makes it a "pile of dogshit that smells"? Insurance does serve a very useful role in our society.

Insurance externalizes internalities. It seems necessary because its existence over many decades has fucked up society enough to make it that way.

Comment Re:HTTP isn't why the web is slow (Score 1) 161

SPDY will allow later requests to be answered before the first one. You seem to be focusing on the aspect of re-using old stale connections. I'm talking about the many dozens of connections needed on the initial visit to a web site right now.

When I mention head of line blocking I am referring to the transmission of the overall stream of data transported via TCP. Whatever structure SPDY imposes, the stream itself is subject to head-of-line blocking. Multiple unrelated assets multiplexed within a single stream are at the mercy of the properties of that stream. Multiple unrelated parallel streams are able to operate *independently* of each other.

The problem occurs normally (bad luck, initial congestion window limits) and especially on lossy networks such as high latency wireless links: you end up blocking for an RTT, or worse an RTO, and during that time nothing is delivered with SPDY. If instead parallel TCP streams are used, the remaining streams are able to continue transmission.
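
A toy model of the difference (assumed numbers, not a packet-level simulation): four assets of ten segments each, one early segment lost and recovered by a 200 ms RTO. On one multiplexed stream, in-order delivery means nothing behind the hole reaches the application until the retransmit lands; on parallel streams only the unlucky asset waits.

    # Toy model: 4 assets x 10 segments, 1 ms per segment,
    # segment 5 of asset 0 is lost, retransmitted after a 200 ms RTO.
    SEG_MS, RTO_MS, ASSETS, SEGS = 1, 200, 4, 10
    hole_fills_at = 5 * SEG_MS + RTO_MS   # when the retransmit arrives

    # One multiplexed stream: every asset is stuck behind the hole.
    single = [max((a + 1) * SEGS * SEG_MS, hole_fills_at)
              for a in range(ASSETS)]

    # Parallel streams: only asset 0 eats the RTO.
    parallel = [max(SEGS * SEG_MS, hole_fills_at) if a == 0
                else SEGS * SEG_MS for a in range(ASSETS)]

    print("multiplexed delivery (ms):", single)    # [205, 205, 205, 205]
    print("parallel delivery (ms):  ", parallel)   # [205, 10, 10, 10]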

The RFC itself says that it's vulnerable to replay attacks.

Of course it absolutely is.

Even more so than what's currently in use.

To conduct a replay attack you need to be able to get a copy of the packet in order to replay it. If you can do that, you can already own the TCP channel; I don't know how things can get any worse. In either case, with or without Fast Open, adding security on top (e.g. TLS) is often helpful.
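
For reference, here is roughly what using RFC 7413 looks like on Linux. A sketch, not production code: it assumes a kernel with TFO enabled via the net.ipv4.tcp_fastopen sysctl, and data only rides in the SYN on repeat connections once a cookie has been cached.

    import socket

    # Server: accept up to 16 pending Fast Open connections.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.IPPROTO_TCP, socket.TCP_FASTOPEN, 16)
    srv.bind(("0.0.0.0", 8080))
    srv.listen()

    # Client: sendto() with MSG_FASTOPEN connects and sends in one
    # call; the payload goes in the SYN if a cookie is cached,
    # otherwise the kernel falls back to a normal handshake.
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.sendto(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n",
               socket.MSG_FASTOPEN, ("127.0.0.1", 8080))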

Comment Re:HTTP isn't why the web is slow (Score 1) 161

There's a different type of HOL blocking specific to multiplexed HTTP pipelining (at the next highest protocol layer). If one resource is slow to load because of being dynamic, it can hold up the entire queue.

This makes little sense. HTTP/1.1 pipelining is only even possible if the size of the content is known a priori. It's hard to imagine more than limited cases where you can know the size in advance of taking the time to generate it.
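
To make the framing problem concrete: pipelining is just firing requests back-to-back on one connection and trusting in-order responses, and the client can only tell where one response ends and the next begins from a declared length. A rough sketch (ignores chunked encoding, and assumes a server that still honors pipelining, which many no longer do):

    import socket

    s = socket.create_connection(("example.com", 80))
    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
              b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    buf = b""
    while chunk := s.recv(4096):
        buf += chunk

    # Each response is headers, a blank line, then exactly
    # Content-Length bytes; without that a-priori length the start
    # of the second response would be ambiguous.
    for _ in range(2):
        head, _, rest = buf.partition(b"\r\n\r\n")
        length = 0
        for line in head.split(b"\r\n")[1:]:
            name, _, value = line.partition(b":")
            if name.lower() == b"content-length":
                length = int(value)
        print(head.split(b"\r\n")[0], length, "body bytes")
        buf = rest[length:]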

I do agree there are multiple instances at multiple layers that can have the effect of stalling the pipeline.

My understanding is that your browser cookies and user agent string would be re-sent with every request using RFC7413. That's not small.

It's insignificant; what matters for senders is latency.

And it can't handle POST requests safely, meaning fragmented protocols.

I hope you're kidding. There are no useful transaction semantics defined for POST requests or any other HTTP verb. Any assumption that this is somehow safe today is wrong. It can only be made safe by application layer detection.
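
By "application layer detection" I mean something like an idempotency key: the client names the operation and the server deduplicates it. A hypothetical sketch (the header name, store, and handler are made up for illustration):

    # Hypothetical server-side dedup of POSTs via a client-supplied
    # idempotency key. Names here (X-Idempotency-Key, _seen) are
    # illustrative, not from any particular framework.
    _seen: dict[str, bytes] = {}

    def handle_post(headers: dict[str, str], body: bytes) -> bytes:
        key = headers.get("X-Idempotency-Key")
        if key is None:
            return apply_side_effects(body)   # legacy, unsafe to replay
        if key in _seen:
            return _seen[key]                 # replay: return cached result
        result = apply_side_effects(body)
        _seen[key] = result
        return result

    def apply_side_effects(body: bytes) -> bytes:
        # Stand-in for whatever the POST actually does.
        return b"ok:" + body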

Comment Re:HTTP isn't why the web is slow (Score 1) 161

The only reason you've given for HTTP/2.0 being worse is that it's not already an RFC.

It is worse because it is HOL'd and requires additional resources to manage state persistence for idle TCP channels. The other solutions leverage stateless cookies without the speculative tradeoffs inherent in sitting on idle sessions. This is a BFD when you're servicing thousands of concurrent requests.

SPDY and by extension HTTP/2.0 does not have head of line blocking issues. The requests are multiplexed, but tagged, and requests can be answered out of order.

*Everything* implemented over TCP has head of line blocking issues. This property is inherent in the definition of a stream, which is what TCP implements. The only way around it is multiple independent streams. It does not matter how the protocol is structured or what it does, as long as it is doing it within a single TCP stream.

Head of line blocking is really only an issue for dynamic content.

Why?

Pipelining all of your static resources through a single connection to a single subdomain is more efficient than multiple requests.

Even in the case where RFC 7413 has not been deployed this isn't always true, especially over low bandwidth/lossy links. If one stream has to eat an RTT, or worse an RTO, other streams can continue to transmit unimpeded. It is important to avoid cherry-picking simulation results. Not all of them are positive.
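
This is exactly why browsers historically opened several connections per host and sites sharded assets across subdomains. A quick sketch of the parallel-streams approach (the URLs are placeholders):

    import concurrent.futures
    import urllib.request

    # Placeholder asset list; each fetch gets its own TCP connection,
    # so a retransmission timeout on one cannot stall the others.
    urls = [f"http://example.com/asset{i}.js" for i in range(4)]

    def fetch(url: str) -> tuple[str, int]:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return url, len(resp.read())
        except OSError:
            return url, -1   # placeholder URLs will likely 404

    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        for url, size in pool.map(fetch, urls):
            print(url, size, "bytes")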

Comment Re:Shrug (Score 1) 161

Browsers on the other hand are supposed to take invalid HTML and try to do something useful with it. If browser developers didn't have to spend so much time trying to make their code interpret invalid syntax, they could probably fix a lot of the other bugs that actually affect valid code.

While it may well be more difficult to write a forgiving HTML parser, the effort is an insignificant rounding error in the context of the effort needed to produce a modern browser stack.
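
Every mainstream HTML library already does this kind of recovery. For instance, Python's stdlib parser happily walks malformed markup rather than rejecting it:

    from html.parser import HTMLParser

    # Like a browser, a tolerant parser emits what it can rather
    # than rejecting invalid markup outright.
    class Dump(HTMLParser):
        def handle_starttag(self, tag, attrs):
            print("open:", tag)
        def handle_data(self, data):
            if data.strip():
                print("text:", data.strip())

    # Unclosed <b> and <i>, stray </p>: still parses.
    Dump().feed("<p>unclosed <b>bold <i>and italic</p>")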

Comment Re:versus 20 years for IPv6. 2002 cutover to IPv6 (Score 1) 161

Thirteen years later, 95% of internet traffic is still IPv4. Ten or twenty years from now, do we want to be using a better version of HTTP, or still be using HTTP/1.1 and talking about HTTP/2?

I don't care if we're still using HTTP/1.0 a hundred years from now. IPv6 is needed to solve an actual problem and offers real benefit to users who need to communicate directly with their peers - especially those currently stuck behind carrier NATs without a global address of their own.

HTTP/2 isn't going to make anyone's online experience any better or faster. Even today, with our quad core multi-GHz CPUs, GPUs, several GB of RAM, and dozens of Mbits of bandwidth, sites still take forever to load... the only thing that has changed is that instead of loading actual content, more time is spent on massive data collection and cross-domain spying. The problem that needs solving isn't technical, it is political.

Comment Re:Shrug (Score 1) 161

1: mechanisms for interoperability were bolted on later, not included as core features that every client and router should support and enable by default. The result is that relays for the transition mechanisms are in seriously short supply on the internet and often cause traffic to be routed significantly out of its way.

The Internet is a production network. You either deploy IPv6 fully, in a production quality manner, or don't do it at all. The mistake was developing transition mechanisms in the first place; they have done nothing but get in the way of adoption.

there was lots of dicking around with trying to solve other problems at the same time rather than focusing on the core problem of address shortage. For example, for a long time it was not possible to get IPv6 PI space because of pressure from people who wanted to reduce routing table size.

Not everyone in the world has access to the buying power enjoyed by rich western states. *Someone* ultimately has to pay for PI space, rinky-dink multi-homing, and lazy TE shenanigans. It is a political calculation who that should be.

Stateless autoconfiguration and the elimination of NAT seemed like good things at the time, but they raised privacy issues and added considerable complexity to home/small business deployments.

The reality is that IPv6 privacy extensions were widely deployed into a landscape already dominated by browser fingerprinting, browser cookies, plugin cookies, and DNS fingerprinting.
