
Comment: Re:Instead... (Score 1) 308

I'd be the first to agree that Google shouldn't get to dictate how the Web works and that sometimes Google, or at least some of its...

They shouldn't get to dictate how the web works, but then you basically say "don't like it, don't use Google"... I'm confused... are sites able to choose not to use Google? What sites get most of their traffic from a different search engine?

However, if you're relying on Google's service for most/all of your visitors to find your site at all, you have to play by their rules if you want the best treatment from them. This is the basic principle of SEO, and it's as old as search engines themselves.

What happens when those rules begin to stray toward principles fewer people agree with? Google is more or less a monopoly.

Comment: Google imposing itself on the world (Score 1) 308

You can always count on the Internet to implement good concepts poorly and parade the result as cutting edge technological innovation.

One thing I've always liked about HTML, from the very early days before even CSS or Google, was the "promise" of targeting vast arrays of client form factors with the same information. This sounded great but proved to be mostly unnecessary and divorced from reality.

The rise of "responsive" sites more often than not translates into frustrated desktop, laptop and tablet users facing sites that resemble pre-tables-era childish web layouts, boasting comically large fonts and painfully low information density. Paradoxically, these "features" persist even when viewed from my mobile phone, which has the same display resolution as a large HD TV or desktop monitor. Very few designers appear to actually be capable of building single scalable sites that don't suck.

There was a time when mobile sites were necessary. Given the proliferation of display sizes, LTE, and multi-core multi-GHz processors with GBs of RAM, that time has come and gone. Google is trying to catch up to a need that was for the most part already solved by hardware and software innovation and no longer exists.

If you're going to punish sites for offering something that a naive, non-human algorithm judges not "appealing" to a human using a client of a specific form factor or capability, then do so across the board without bias. When I do a Google search from my desktop, penalize all search results that consist of mobile-handset-optimized sites with comically large fonts and childish layouts.

What determines the worth of a website to me has never been layout; it has been content and a lack of annoying BS. All "looks over brains" does is give legions of spam-trap link-baiting sites an even greater advantage.

Stupid all around, to say nothing of the negative implications as people wake up to the dangers of aggregating power into the hands of so few.

Comment: IPv6 is good for something (Score 2) 335

by WaffleMonster (#49517417) Attached to: Why the Journey To IPv6 Is Still the Road Less Traveled

I quite like the vastly increased difficulty of scanning the whole IPv6 Internet. As soon as Comcast fixes their business-class IPv6 support, remote access via IPv4 is going bye-bye. I'm sick of looking at all this crap in my logs. If random fools want to spam me, they are going to have to work for it.
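The scale difference is easy to put numbers on. A back-of-envelope sketch, assuming a hypothetical scanner rate of one million probes per second (the rate is an illustration, not a measurement):

```python
# Why brute scanning even a single IPv6 /64 is hopeless compared to
# sweeping the entire IPv4 address space.
addresses_ipv4 = 2 ** 32        # whole IPv4 Internet
addresses_slash64 = 2 ** 64     # one customer's IPv6 subnet
probes_per_sec = 10 ** 6        # assumed scanner throughput

ipv4_hours = addresses_ipv4 / probes_per_sec / 3600
slash64_years = addresses_slash64 / probes_per_sec / (365 * 24 * 3600)

print(f"IPv4 full sweep: ~{ipv4_hours:.1f} hours")
print(f"One /64 sweep:   ~{slash64_years:,.0f} years")
```

At that rate, all of IPv4 falls in about an hour, while a single /64 takes on the order of half a million years, which is the "work for it" the comment refers to.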

Comment: Re:Just say no (Score 1) 84

by WaffleMonster (#49507951) Attached to: Google To Propose QUIC As IETF Standard

And WaffleMonster just wants to bash Google for his/her own benefit (i.e. paid by MS).

I was in the room for the delivery of one-sided Google presentations touting the benefits of aggressive congestion schemes and ICW settings during TCPM meetings. I honestly fear the ease with which clients and servers can cheat and tinker for competitive advantage with completely userland stacks embedded in applications.

My wording was probably a little unfair and oversimplified, yet I believe in the general theme. I don't work for a large corporation and have little interest in this space other than the promotion of an open, decentralized Internet.

Comment: Re:What is wrong with SCTP and DCCP? (Score 1) 84

by WaffleMonster (#49507115) Attached to: Google To Propose QUIC As IETF Standard

I disagree. To me there's at least one really compelling reason: To push universal encryption.

Too bad the goal was not pushing universal security instead.

One of my favorite features of QUIC is that encryption is baked so deeply into it that it cannot really be removed. Google tried to eliminate unencrypted connections with SPDY, but the IETF insisted on allowing unencrypted operation for HTTP2. I don't think that will happen with QUIC.

What we need are systems that are actually secure, not ones that pretend to be. It's hard to get excited about a "feature" that is worthless against most threats.

True, but TCP is very hard to change. Even with wholehearted support from all of the major OS vendors, we'd have lots of TCP stacks without the new features for a decade, at least.

Is there a technical barrier with respect to TCP and/or TLS that makes addressing these issues unrealistic? The Linux kernel has had TFO support for years, and it didn't take decades for SYN cookies or SACK to hit all but the long tail. There must be at least a dozen or two TLS extensions by now. How hard can it be, from a technical perspective, to tweak session tickets or whatever versus throwing everything out and reinventing wheels?

QUIC, on the other hand, will be rolled out in applications, and it doesn't have to be backward compatible with anything other than previous versions of itself.

Not sure retrofitting IP stacks into applications qualifies as a feature. More of the same thing duplicated everywhere usually means a bigger headache for all.

Then you can figure out how to integrate your new ideas carefully into the old protocols without breaking compatibility, and then you can fight your way through the standards bodies, closely scrutinized by every player that has an existing TLS or TCP implementation. To make this possible, you'll need to keep your changes small and incremental, and well-justified at every increment. Oh, but they'll also have to be compelling enough to get implementers to bother. With hard work you can succeed at this, but your timescale will be measured in decades.

These are political, not technical, arguments. Heads of state routinely whine about not being "king", yet there are advantages to not so easily having your way.

I totally understand why Google is doing what they're doing. If you own the net stack and the browser and the servers, and you are a big enough player, this is a completely understandable and logical approach to accomplishing your goal.

As for using TCP without encryption so you don't have to pay for something you don't need, I think you're both overestimating the cost of encryption and underestimating its value.

I'll assume it can cost nothing in terms of CPU, I/O or time.

A decision that a particular data stream doesn't have enough value to warrant encryption is guaranteed to be wrong if your application/protocol is successful.

Not everyone has the same problems or constraints. Be cautious about mapping your experiences onto the rest of the world and drawing conclusions from them.

I may need to use a specialized security layer.

I may care about the ability for agents to passively monitor a telemetry stream.

I may care only about integrity of a message.

I may be prohibited by law or corporate rules from encrypting data.

I may have a mission critical system where any use of encryption is an unnecessary potential for failure.

Only for restarts. For new connections you still have all the TCP three-way handshake overhead, followed by all of the TLS session establishment. QUIC does it in one round trip, in the worst case, and zero in most cases.

Yes, at a minimum any TCP + TLS solution would require two separate unavoidable round trips - one for TCP and a second for TLS, regardless of construction, if state / cookies / tickets / whatever were not available for TCP and/or TLS. So what?
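The round-trip counts both sides are arguing over can be tallied directly. A rough sketch of time-to-first-byte at an assumed 50 ms RTT (the counts are the textbook handshake round trips, not measurements):

```python
# Handshake round trips before application data can flow, multiplied
# out at an illustrative 50 ms round-trip time.
rtt_ms = 50
handshakes = {
    "TCP + TLS 1.2 (full)":   1 + 2,  # 3-way handshake, then two TLS round trips
    "TCP + TLS 1.3 (full)":   1 + 1,  # TLS 1.3 cut the handshake to one round trip
    "QUIC (first contact)":   1,
    "QUIC (resumed, 0-RTT)":  0,
}
for name, rtts in handshakes.items():
    print(f"{name}: {rtts} RTT = {rtts * rtt_ms} ms before data flows")
```

The gap the comment shrugs at is those first-contact round trips; with cookies/tickets in hand, both approaches converge toward zero.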

Comment: Re:What is wrong with SCTP and DCCP? (Score 4, Interesting) 84

by WaffleMonster (#49503239) Attached to: Google To Propose QUIC As IETF Standard

SCTP, for one, doesn't have any encryption.

Good - there is no reason to bind encryption to the transport layer except to improve the reliability of the channel in the face of active denial (e.g. a TCP RST attack), a feature QUIC does not provide.

Managing transport and encryption in a single protocol makes the resulting system more brittle and complex. Improvements to TCP help everything layered on top of it. Improvements to TLS help everything layered on top of it.

Not having stupid unnecessary dependencies means I can benefit from TLS improvements even if I elect to use something other than IP to provide an ordered stream, or I can use TCP without encryption and not have to pay for something I don't need.

QUIC integrates a TLS layer into it, in a way that avoids a lot of connection setup time.

TCP with TFO, plus TLS extensions, provides the same zero-RTT opportunity as QUIC without reinventing wheels.
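For the TFO half of that claim, the kernel API is small. A minimal client sketch, assuming Linux (`tfo_send` is a hypothetical helper name; `MSG_FASTOPEN` is the real Linux flag, exposed by Python's `socket` module where the platform defines it):

```python
import socket

# MSG_FASTOPEN is Linux-specific (0x20000000); fall back to the raw
# constant on platforms where Python doesn't expose it.
MSG_FASTOPEN = getattr(socket, "MSG_FASTOPEN", 0x20000000)

def tfo_send(host, port, payload):
    """Open a TCP connection carrying data in the SYN via TCP Fast Open.

    On first contact the kernel falls back to a normal three-way
    handshake while it obtains a TFO cookie from the server; later
    connects send `payload` in the SYN itself, saving one round trip.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # sendto() with MSG_FASTOPEN performs connect() + send() in one call.
    s.sendto(payload, MSG_FASTOPEN, (host, port))
    return s
```

Stacking a TLS session ticket (or TLS 1.3 0-RTT data) on top of that SYN payload is what closes the remaining gap with QUIC's resumed-connection case.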

I have yet to hear a coherent architectural justification for QUIC that makes sense... The reason Google pushes it is entirely *POLITICAL*: it is the path of least resistance, granting them full access to the TCP stack and congestion algorithms without having to build consensus with any other stakeholder.

Comment: Re:in my opinion this guy is like Jenny McCarthy (Score 2) 319

by WaffleMonster (#49498511) Attached to: Columbia University Doctors Ask For Dr. Mehmet Oz's Dismissal

or do you just stand against genetic engineering as we currently practice because you have an ignorant fear of what you don't understand?

I fear the properties of Roundup Ready GMO crops are being leveraged to optimize labor costs during production, increasing the load of Roundup leaching into the food supply.

People say Roundup is safe, yet nobody has been able to square this with the warning labels and handling instructions printed on bottles purchased from Home Depot. They also choose to ignore the fact that glyphosate has been labeled a Group 2A carcinogen.

But more than anything I fear the ignorance of people engaged in some forms of genetic engineering. It's worth keeping in mind that it was difficult to see the cancer signal attributable to the atomic blasts of WWII. If there were a problem that did not present immediately or dramatically in a significant percentage of people, I have no confidence the cause would have any prayer of being seen or traced. A common trick is to say there is no statistically significant basis for an assumption... which in and of itself is fair, until you begin to understand the range of problems that could exist under that same banner. Given numerous classic historical examples of successful industry efforts to increase uncertainty and downplay risks, I am not predisposed to trust corporations whose objective function is not aligned with my own best interests.

Comment: Re:Must hackers be such dicks about this? (Score 2) 268

by WaffleMonster (#49495001) Attached to: FBI Accuses Researcher of Hacking Plane, Seizes Equipment

To anyone who has a shred of fear of flying, the game of "screwing with the pilots for laughs" is not fucking funny.

Your fears are your problem and do not constitute an excuse for an irrational response.

The Twitter comments were not known to anyone on the flight. Those who would normally have followed his comments would be his hax0r buddies, who understand the context and are familiar with the issues.

So he's scaring people and breaking/threatening-to-break his word, and they're being dicks to him. This may not be statutory justice, but it's poetic.

Being a dick to an LEA who is threatening you to back off when they are in the wrong... sorry, I don't see the issue.

All they are doing is discouraging research and attention, making the industry less safe and more likely to let manufacturers and airlines make riskier design choices in the absence of pressure to do otherwise.

But if his frustration with Boeing and Airbus is going to drive him to be a fear-mongering troll, then any inconvenience caused him by the FBI seems utterly fair.

The media, politicians and the security-industrial complex are fear-mongering trolls. They routinely and intentionally stoke fear for financial gain and self-promotion while being fully aware of their deceptions.

A researcher who honestly believes something to be true is not a troll. You may disagree with his conclusions or characterizations but disagreement alone does not make someone a troll.

The idea that harassment by an LEA is somehow deserved, even for crazy anti-social fear-mongering trolls, is disappointing. Freedom cannot exist in the absence of tolerance. Being a professional LEA is fundamentally incompatible with reacting in kind to someone who makes you mad.

Comment: Re:What? Why discriminate? (Score 1) 699

by WaffleMonster (#49480939) Attached to: 'We the People' Petition To Revoke Scientology's Tax Exempt Status

How is scientology any less of a religion than christianity or islam or mormons or any other belief system?

The purpose of Scientology, as openly admitted by its founder, was "to make money"... If anyone is allowed to start their own religion with the express intent of making money, then granting tax-exempt status based on an assertion of "religious" status alone makes for some pretty ridiculous and nonsensical policy.

Comment: Re:Article one giant spew of hyperbole (Score 1) 171

NTLMv2 isn't broken, but it definitely isn't as good which is why Windows uses Kerberos by default.

Both NTLMv2 and Kerberos are broken because an attacker is able to conduct offline brute-force attacks against credentials simply by observing the challenge/response communication between client and server.

This constitutes an unacceptable risk because the vast majority of users do not use passwords with sufficient entropy to withstand an offline attack conducted by modern, distributed and specialized hardware. In the end you're looking at an easy >90% success rate against most targets versus a guaranteed 100% rate with NTLMv1.
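The offline attack described above can be sketched generically. All names here are hypothetical, and HMAC-SHA256 stands in for the real NTLMv2 construction (which is HMAC-MD5 keyed from an MD4-based NT hash) so the sketch runs everywhere; the point is only that one passively captured exchange is enough to test guesses forever with no further server contact:

```python
import hashlib
import hmac

def response(password, challenge):
    """Toy challenge/response: key derived from the password, MAC over
    the server's challenge. Same shape as NTLMv2, different primitives."""
    key = hashlib.sha256(password.encode("utf-16-le")).digest()
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

# The attacker passively records one legitimate exchange...
challenge = b"\x01\x02\x03\x04\x05\x06\x07\x08"
captured = response("hunter2", challenge)

# ...then brute-forces offline at whatever rate their hardware allows.
def crack(captured, challenge, wordlist):
    for guess in wordlist:
        if response(guess, challenge) == captured:
            return guess
    return None

print(crack(captured, challenge, ["letmein", "password1", "hunter2"]))
# -> hunter2
```

Because the verifier (the attacker's own CPU/GPU) never talks to the server, lockout policies and rate limits don't apply; only password entropy does.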

I wish MS would finally get off its ass and switch to a zero knowledge key agreement protocol.

Comment: Re:Article one giant spew of hyperbole (Score 1) 171

Only Windows Server 2003 and below will accept LM/NTLMv1 by default, which means as far as supported systems only 2003, and it is EOL July 14, 2015. You'd have to be desperate to still be running any 2003, and if you were you can disable LM/NTLMv1 via GPO. Vista/2008 and above will only accept NTLMv2 responses.

NTLMv2 is broken too.
