Comment Re:why do people consider this hype? (Score 1) 122
Well, for one thing, 1.www.google.com has access to the www.google.com cookie. It's also a really good place to phish from. In some circumstances, document.domain is even set up such that 1.www.google.com has script level access to www.google.com. Not good.
That makes sense. Nonexistent-subdomain host poisoning is also a serious problem.
Taking over existing domains is a superset of that problem, and can be done with the same style of attack, just by adding glue records. Because existing hijackable domains include nameserver domains, you could take over all DNS for google.com, from web and mail servers to SPF and DKIM records.
Anyway, it's all bad. Yes, poisoning is bad.
At this point, BIND, Nominum, Unbound, and Microsoft all suppress colliding queries. The only name server I know of that doesn't is DJBDNS, and it drops its security level noticeably.
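The suppression being described here is query coalescing: when an identical query is already outstanding, the resolver piggybacks on it instead of emitting a second packet, so an attacker never sees multiple guessable TXIDs in flight for the same name. A minimal sketch of the idea (class and method names are illustrative, not any real resolver's API):

```python
# Sketch of colliding-query suppression. Identical outstanding queries
# are merged so only one packet (one TXID + source port) is on the wire
# per (name, type) at any moment.
class Resolver:
    def __init__(self):
        self.in_flight = {}  # (name, qtype) -> list of waiting callbacks

    def resolve(self, name, qtype, callback):
        key = (name, qtype)
        if key in self.in_flight:
            # A query for this name is already outstanding: attach to it
            # instead of sending a second packet with a fresh TXID.
            self.in_flight[key].append(callback)
            return
        self.in_flight[key] = [callback]
        self.send_query(name, qtype)  # exactly one packet per key

    def on_answer(self, name, qtype, answer):
        # Deliver the single answer to every waiter for this key.
        for cb in self.in_flight.pop((name, qtype), []):
            cb(answer)

    def send_query(self, name, qtype):
        pass  # placeholder: a real resolver sends a UDP query here
```

Without this merging, 200 identical queries in flight means 200 valid TXID/port pairs an off-path forger can hit, which is exactly the extra attack surface discussed below.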
DJB was the first to point out, years ago, that Source Port Randomization would help, and he gets no credit? Why not concede any? And how many of the servers you named were open, for the eight years that djbdns was already forcing a combined TXID + SPR guess, to a poisoning attack needing at most ~32,000 packets? And now you're trying to ding djbdns, characterizing it as a less secure outlier, for allowing 200 simultaneous identical queries, which opens the search space by not quite 8 bits? TXID + SPR for djbdns is still 24 bits. TXID + SPR is only about 27 bits for Microsoft (roughly 2,500 usable source ports).
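The bit arithmetic above can be checked directly. The attacker must match both the 16-bit TXID and the source port, and each identical outstanding query gives one more valid target, so (as a rough model) the effective search space is TXID bits plus port bits minus log2 of the simultaneous-query count:

```python
import math

def effective_bits(txid_bits=16, source_ports=65536, simultaneous=1):
    # Entropy an off-path forger must guess: TXID space times source-port
    # space, reduced by the number of identical queries in flight
    # (each one is an extra valid TXID/port pair to hit).
    return txid_bits + math.log2(source_ports) - math.log2(simultaneous)

# djbdns: full 16-bit randomized port range, up to 200 identical queries
print(round(effective_bits(source_ports=65536, simultaneous=200)))  # -> 24
# Microsoft DNS (as described above): ~2,500 usable ports, merged queries
print(round(effective_bits(source_ports=2500, simultaneous=1)))     # -> 27
```

This reproduces the 24-bit and 27-bit figures in the text; a TXID-only resolver (no SPR, one query in flight) sits at just 16 bits, which is where the ~32,000-expected-packet attack comes from.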
The real lesson is that the patch treadmill doesn't work, and it hasn't for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won't prevent every vulnerability, but it's much more secure -- and cheaper -- than the patch treadmill we're all on now.
What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It's not that he discovers all possible attacks before the bad guys do; it's more that he anticipates potential types of attacks, and defends against them even if he doesn't know their details. I see this all the time in good cryptographic designs. It's over-engineering based on intuition, but if the security engineer has good intuition, it generally works.
Kaminsky's vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That's exactly the work-around being rolled out now following Kaminsky's discovery. Bernstein didn't discover Kaminsky's attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn't need to be patched; it's already immune to Kaminsky's attack.
That's what a good design looks like.
...
I'm not a DJB fanboy. I do think the 200 simultaneous identical queries are a real loss of security. But I also recognize that DJB was doing the right thing nearly a decade ago, and warning people, while everyone else waited until now, after disclosure of a specific, very bad vuln, to clean up their acts. I find it distasteful that people are reluctant to acknowledge DJB's right thinking publicly, or even to themselves. That's the other face of fanboyism, just inverted from fan to detractor.