They can replay it within the absolute time of the RRSIG, which can be made relatively small (needs to be long enough to handle time drift).
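To make that concrete, here's a minimal sketch of the window check, with a drift allowance. The field names and the drift value are illustrative, not any particular DNS library's API:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch: an RRSIG is only replayable while the current time
# (allowing for clock drift) falls inside its inception..expiration window.
# Names and the drift constant are made up for the example.

CLOCK_DRIFT = timedelta(minutes=10)  # slack for skewed resolver clocks

def rrsig_is_valid(inception, expiration, now=None):
    """True if 'now' falls inside the signature's validity window."""
    now = now or datetime.now(timezone.utc)
    return (inception - CLOCK_DRIFT) <= now <= (expiration + CLOCK_DRIFT)

# A short-lived signature shrinks the replay window to hours instead of weeks.
sig_start = datetime(2009, 6, 1, 0, 0, tzinfo=timezone.utc)
sig_end = sig_start + timedelta(hours=6)
print(rrsig_is_valid(sig_start, sig_end, now=sig_start + timedelta(hours=3)))  # True
print(rrsig_is_valid(sig_start, sig_end, now=sig_start + timedelta(days=2)))   # False
```

The tradeoff is exactly the one above: the shorter the window, the smaller the replay exposure, but the more tolerance you need for clocks that drift.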
A web site built on flat HTML pages is more likely to be secure than a web site built on PHP. The message is the medium.
So, I'm posting as somebody who has gotten critical fixes pushed into both IE and Firefox. (Technically, Chrome and Opera too, but those were the pure crypto vulns.)
It's genuinely hard to write a secure web browser. Forget plugins -- you have a complex internal object model, subject to all sorts of very fine-grained rules ("the filename on an input type=file form must not be settable from Javascript"), which can be made into a pile of moving parts under the control of an attacker. What's happened somewhat recently is that a lot more people have gotten into bashing Firefox. You know those "many eyes" theories of open source, and how they're usually kind of full of it?
Well, "many eyes" are visiting it now, and Mozilla to their credit is doing a lot of very hard work to deal with the influx. Good on them.
Use the PASCO gear, with their Datastudio app. It's great, and will take all sorts of data wirelessly.
http://store.pasco.com/pascostore/showdetl.cfm?&DID=9&Product_ID=53770&Detail=1
Uh, a few machines have eight cores. Core2Duo is doing OK, but really, the heat problem is not actually going away in any way, shape, or form.
(This is Dan)
It is true that DNSSEC increases aggregate bandwidth.
I'm not sure about the DNSCurve numbers right now, given that the implementation I've seen can only do about 5,000 encrypt/decrypts a second on hardware that'll do 15,000 DNS responses a second.
(This is Dan)
Yes, because browsing securely should look like UAC, with every new site throwing a prompt in your face as if you had enough information to go on.
No. We can, and need to, stop imagining the user is some sort of god who can accurately judge the risk of accepting unknown keys (or worse, keys "recognizable" by some arbitrary sequence of hexadecimal characters). This is a lie we're telling ourselves, and I'm done with it.
You're right that Verisign controls
(This is Dan)
The point is that we can actually share DNSSEC responses across multiple nodes, not just a single node, using the existing framework. Yes, we will need clients that *can* go straight to the root. But they won't *have* to, which is a neat design element of DNSSEC.
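The reason sharing works is that DNSSEC binds the signature to the data, not to the transport, so it doesn't matter which node hands you the answer. A toy sketch of that property -- the HMAC here stands in for real RRSIG verification (which is public-key, done offline by the zone owner) purely to keep the example short; nothing in it is a real resolver API:

```python
import hashlib
import hmac

# Toy model: the signature travels with the record, so it doesn't matter
# which node handed us the answer -- only whether the signature checks out.
# HMAC stands in for real (public-key) RRSIG verification in this sketch.

ZONE_KEY = b"example-zone-key"  # placeholder for the zone's signing key

def sign(rrset: bytes) -> bytes:
    return hmac.new(ZONE_KEY, rrset, hashlib.sha256).digest()

def verify(rrset: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(rrset), sig)

record = b"www.example.com. A 192.0.2.1"
signature = sign(record)  # produced once, ahead of time, by the zone owner

# The same (record, signature) pair validates no matter which cache served it.
for node in ["resolver-a", "resolver-b", "some-cdn-cache"]:
    assert verify(record, signature)
print("validated from every node")
```

Contrast with channel security (TLS, DNSCurve), where trust lives in the connection: there, an intermediate cache *can't* vouch for the data, so every client has to talk to a trusted server.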
Keep hitting me here though, maybe we can find a problem!
(This is Dan)
Excellent, excellent questions. This is the sort of stuff I was asking before I switched sides on the DNSSEC war.
The problem with SSL is it doesn't matter if *you* aren't paying a worthless CA; as long as a worthless CA is out there, he can corrupt every domain, everywhere. That sucks. So SSL becomes a matter of finding the least secure CA possible and compromising that.
Things are different in DNSSEC. Because of delegation, the root is the only entity with absolute power over everyone -- and the root rarely talks to anyone. Verisign is canonical for com, Afilias is canonical for org, and so on. There's no big mess of companies that can all step on each other. There's one big mess, true, but that's it. Everything else is distributed. That is such a better situation than we have today!
Look. When some registrar had microsoft.co.nz stolen from it, it had a choice: Either clean up its act, or watch Microsoft move its registrar activity to someone that wasn't vulnerable. Microsoft had an actual response strategy. We need more systems with response strategies -- and I think DNSSEC has them.
It really is different. I can't emphasize this enough -- I wasn't a believer. Now I am.
Absolutely true. However, the ability to delegate/federate security is such a powerful force for lowering the costs of proper design and management that making this one technical change will facilitate the operational strength you correctly call out.
...not to mention that DNSCurve requires per-query crypto on the server, while DNSSEC does not (by a design that really, really wants to allow offline key signing). Curve25519 is fast but it's not *that* fast.
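Back-of-the-envelope on those numbers -- the 5,000 and 15,000 figures are the ones quoted above for one implementation on one box, not general truths:

```python
# Rough capacity math for a server that must do per-query crypto
# (DNSCurve-style) versus one serving pre-signed answers (DNSSEC-style).
# Figures are the ones quoted above, for one implementation and one box.

plain_responses_per_sec = 15_000   # DNS responses the hardware can push
crypto_ops_per_sec = 5_000         # Curve25519 encrypt/decrypts it can do

# Per-query crypto caps throughput at the slower of the two rates.
online_crypto_qps = min(plain_responses_per_sec, crypto_ops_per_sec)

# Offline signing pays the crypto cost once per zone update, not per query.
offline_signing_qps = plain_responses_per_sec

print(online_crypto_qps)    # 5000 -- a 3x haircut on this hardware
print(offline_signing_qps)  # 15000
```

That's the whole argument for offline signing in one line: the signing cost scales with zone churn, not with query volume.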
(This is Dan)
Estimates on cache hit rates in DNS are about 90% -- meaning for every query that hits a server, nine more got chomped in a cache.
I'm uncomfortable asking the Internet to increase their DNS query capacity by 10x. DNS has a performance curve where once it dies, it dies kind of catastrophically. 10x increases are asking for trouble.
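The 10x figure falls straight out of the hit rate, and it's worth seeing how steeply it climbs:

```python
# Load multiplier on authoritative servers if caching stops working, as a
# function of the cache hit rate. At a 90% hit rate, only 1 in 10 queries
# reaches the server today, so losing the caches means 10x the load.

def authoritative_load_multiplier(hit_rate: float) -> float:
    """Queries hitting authoritative servers without caches vs. with them."""
    return 1.0 / (1.0 - hit_rate)

print(round(authoritative_load_multiplier(0.90)))  # 10
print(round(authoritative_load_multiplier(0.95)))  # 20 -- it gets worse fast
```

Note the nonlinearity: every extra point of hit rate you depend on makes the cliff taller if per-query crypto forces traffic past the caches.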
(This is Dan)
1) Agreed. I'm not very popular in some DNSSEC circles because of it.
2) With the root signed, you always have a trusted path that says if a given domain has DNSSEC or not. If it does, stripping the DNSSEC won't matter, you'll know there's *supposed* to be signatures there.
3) Because DNSSEC delegates, it's not really amenable to the sort of tricks that have cost money in the past. If you get a
4) See 2.
5) Not really sure what you mean here.
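Point 2 above can be sketched as resolver logic: with a signed root, the parent zone's signed DS record is what tells you signatures are *supposed* to exist, so stripping them is detectable. The function below is an illustration of that decision, not a real resolver API:

```python
# Sketch of the downgrade check from point 2: the parent zone's signed DS
# record tells the resolver whether the child is supposed to be signed.
# An attacker who strips a child's signatures can't also strip the parent's
# signed statement that those signatures should exist.

def evaluate_response(parent_has_ds: bool, answer_is_signed: bool) -> str:
    if parent_has_ds and not answer_is_signed:
        # Signatures were promised by the parent but are missing: attack.
        return "BOGUS"
    if parent_has_ds and answer_is_signed:
        return "SECURE"    # proceed to validate the chain as usual
    return "INSECURE"      # zone never opted in; nothing was stripped

print(evaluate_response(parent_has_ds=True, answer_is_signed=False))   # BOGUS
print(evaluate_response(parent_has_ds=True, answer_is_signed=True))    # SECURE
print(evaluate_response(parent_has_ds=False, answer_is_signed=False))  # INSECURE
```

The asymmetry is the point: the "should this be signed?" bit lives one level up, under the parent's signature, out of the attacker's reach.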
HOLY MACRO!