Comment Re:Overhyped? (Score 1) 122

All the DNS vendors except those who were already immune to the attack, you mean: djbdns always randomized port numbers, and never accepted answers to questions it didn't ask (glue).
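The two defenses credited to djbdns here can be sketched in a few lines. This is an illustrative sketch only, not djbdns code: issue the query from a kernel-chosen random source port with an unpredictable transaction ID, then drop any reply that doesn't come from the server we asked, with the ID and question section we actually sent.

```python
# Sketch of the two hardening steps: randomized source port per query, and
# never accepting answers to questions that weren't asked.
import os
import socket
import struct

def build_query(name: str, txid: int) -> bytes:
    """Build a minimal DNS query packet (one A/IN question, RD=1)."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN

def query(resolver_ip: str, name: str) -> bytes:
    txid = struct.unpack(">H", os.urandom(2))[0]  # unpredictable transaction ID
    q = build_query(name, txid)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 0))  # port 0: the kernel picks a random source port
    sock.settimeout(3.0)
    sock.sendto(q, (resolver_ip, 53))
    while True:
        data, addr = sock.recvfrom(4096)
        # Reject anything not from the server we asked, or whose transaction
        # ID / question section doesn't echo the question we actually sent.
        if addr[0] == resolver_ip and data[:2] == q[:2] and data[12:len(q)] == q[12:]:
            return data
```

An off-path attacker now has to guess both the 16-bit transaction ID and the 16-bit source port, and can't inject records for names that were never queried.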

Meanwhile, we're listening (DNSSEC) to the people who made and support broken (affected) nameservers, instead of to the people who made compatible, but unaffected and unbroken nameservers. This never ceases to bewilder me.

Comment Re:What linux ACTUALLY needs (Score 1) 865

This is countered with a simple rule - if it malfunctions or stalls, it no longer executes.

I think you need to work on your reading comprehension. Why do you think this is relevant?

Your network card stops working, but Windows restarts it, right? My network card simply doesn't stop working. How exactly are these the same?

In the current state, Windows Vista can alert the user to a malfunctioning driver or device (in my case, it states nvlddmkm has failed and been restarted.) If a hardware driver is malfunctioning, the kernel can kill it and send a message to the appropriate monitoring program that a driver has crashed.

For some drivers, under very specific circumstances.

Good drivers simply never crash. Certainly that would be ideal.

Comment Re:What linux ACTUALLY needs (Score 1) 865

but then again there seems to also be a problem with the Linux method because there aren't enough motivated people willing to spend their precious time writing device driver code for every new device that comes out.

Well, you're wrong.

Linux kernel developers have promised to write the drivers at very low cost: simply make an engineer and/or documentation available. You already need a knowledgeable engineer, and you already need (internal) documentation, so really all you need to do is make them available.

Lots of companies are taking them up on it, but not everyone. What Linux needs are vocal users who expect to be able to use their hardware.

I still think the solution lies in a better (and perhaps standardized) interface for writing device drivers.

Well, you're wrong about this too.

When you make an API, you can never delete it. The public (userspace) API for Linux carries a remarkable amount of cruft as a result; mmap2 and setarch are great examples of necessary evils. Extend this into the kernel and you'll be moving those unmaintainable, unremovable chunks of cruft into the kernel itself, which would only become larger and slower.

When kernel developers want to change something, they can just do it, and fix any and all drivers along the way that depended on the old behavior. That happens a lot: Linux has been through at least three USB implementations and many different mass-storage systems, partly due to simplifications made possible by newer, more accessible technologies (MTDs come to mind), but sometimes simply because someone comes along and applies the knowledge gained from the older implementation (libata, for example).

Locking the kernel interfaces means that the same level of caution applied to adding userspace interfaces would have to be applied to kernel interfaces. New drivers would require more effort to write, and new technologies would take longer to implement, simply because any new technology would get an ad-hoc implementation at best, or you'd wait a few years for all the "closed source developers" to catch up and stop ruining it for everyone.

This happens everywhere, not just in the Free Software community. Look at Windows: it has four different driver models (WDM, NDIS, Miniport, and the new certified garbage), all of which work completely differently, and there are still wifi adapters that want to replace the Windows wifi system.

No, the solution isn't a technical one (bind the developers' hands), but a social one: open your damn interfaces and stop trying to hurt your customers.

If people like you stop apologizing for them, they'll change. They want you to buy their hardware, and at present you will, even if it means you also have to install ndiswrapper or some other garbage. I won't, and haven't for more than a decade.

Comment Re:What linux ACTUALLY needs (Score 1) 865

Then compare price versus capabilities.

This Gateway FX with Broadcom wireless, sitting two desks down from mine, can't get above 7MB/sec over wireless on Windows; meanwhile my Thinkpad R61E with an Intel 3945 gets over 40MB/sec running Ubuntu, on the same wireless network.

I find that by buying hardware that supports Free Software, I get higher quality, more stability, and better support.

It also seems that most anecdotes about Windows' poor performance blame poor hardware or badly written drivers. I find the same is true of Linux.

Comment Re:What linux ACTUALLY needs (Score 4, Informative) 865

We don't want that support. Those vendors have a tendency to produce low-quality drivers, and reduce the overall stability of the system.

Most Vista and XP (and previously, Windows 98) apologists agree that the biggest reason Windows is perceived as unstable is due to low-quality drivers for low-quality hardware.

By selecting hardware known to work with Free Software, I'm pretty much guaranteed a solid and stable experience.

Comment Re:Stupid, stupid, stupid! (Score 1) 134

There is a time in the roll-over where you have published a new key, but the old key is still cached. How does DNSCurve deal with this?

The same way you deal with any change of your NS records.

About new encryption options: Much like HTTP, DNSSEC can be extended by whatever both endpoints support. ... That's how compatibility works in the real world.

Rubbish. Compatibility has never worked that way. My site sees over 30 million messages every month; less than 50% of the senders are RFC2821/RFC2822 compliant. Once a standard becomes entrenched, you must operate on the assumption that it always will be.

Eventually, it becomes irrelevant (like Windows), but it never ever gets replaced.

That's how compatibility works in the Real world.

DNSSEC deployment can be as easy if you like, ... DNSSEC can be plug-in simple also.

DNSSEC is "replace your infrastructure" simple: it requires replacing hardware and software to deploy. DNSCurve only requires supplementing software: a forwarder can run alongside your existing DNS content server, with no special implementation costs.
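The "forwarder alongside your existing server" model can be sketched as a dumb UDP relay. This is a rough sketch under assumptions, not a real DNSCurve implementation: the Curve-specific decrypt/encrypt steps are marked as comments and omitted. The point is only that the existing content server behind it needs no changes at all.

```python
# A minimal UDP relay sitting in front of an unmodified DNS content server.
import socket

def serve_once(listen_sock: socket.socket, backend_addr: tuple) -> None:
    """Relay one query to the backend content server and return its answer."""
    pkt, client = listen_sock.recvfrom(4096)
    # DNSCurve step (omitted here): recognize and decrypt a Curve-encoded query.
    backend = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    backend.settimeout(3.0)
    backend.sendto(pkt, backend_addr)
    answer, _ = backend.recvfrom(4096)
    backend.close()
    # DNSCurve step (omitted here): encrypt the answer for Curve-aware clients.
    listen_sock.sendto(answer, client)
```

In deployment, the forwarder would bind the public port 53 and relay to the existing server on a loopback address; clients that don't speak DNSCurve pass through untouched.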

DNSSEC isn't designed to fix problems, it's designed to replace DNS.

DNSSEC offers the same level of protection against MITM attacks,

No it doesn't.

The only real wins for DNSCurve are smaller packets today, and router compatibility.

Those are extremely important. DNSSEC doesn't have an Internet-wide migration plan; just a myriad of ideas and best practices. Everyone is assumed to figure it out for themselves.

This has obviously worked very well for IPv6...

Its downsides are poor key management and future upgrade path,

It has exactly the same upgrade path that DNS has because it's 100% compatible with DNS.

I'll get to the key management in a moment.

and even if the tradeoff was worth it (which I don't believe it is), it's not worth throwing away the momentum DNSSEC currently has.

Says you, one guy who doesn't operate a very large site or have very much to change.

You are, in fact, a very small and singular member of the Internet. Replacing DNS is an Internet-wide change, and it requires attention greater than you're willing to give.

That's why your opinion is rejected outright.

We all wish DNSSEC could be simpler, and if DNSCurve was really much easier to deploy for the same security, I'd be all over it. However, I don't believe it is.

Your beliefs don't enter into it. It is simpler to implement, and will be simpler to deploy.

Good deployment tools exist for DNSSEC

No, they don't. As I said, DNSSEC deployment requires replacing hardware and software. You think this is okay because, in your mind, the hardware and the software are broken. DNSCurve does not require replacing either.

and many more will be created,

This, however, is likely, and I agree. I remember old "MX calculators" and "CIDR calculators": simple things need a lot of tools; complicated things, doubly so.

but DNSCurve's security downsides in key management can't be automated away.

I don't think they're serious. Keys in X.509 and DNSSEC represent organizational processes, but this is a layer of overhead that sites apparently don't need: the reality of how SSL and S/MIME ended up being deployed demonstrates this thoroughly.

The existing infrastructure is a single delegation, controlled and mandated by the parent; the DNSSEC key system pretends this control is shared with the delegate.

It isn't true of course: the parent can unilaterally destroy the delegation and the delegate can't do anything about it.

DNSCurve's key system doesn't pretend otherwise, and so appears significantly less capable than DNSSEC. The idea that I could sign specific records (instead of the right to publish) seems very valuable indeed!

Yet, I don't know of any case where DNS servers were hacked in order to surreptitiously publish evil records. I suppose it could happen, but in any event, it seems valuable to protect against this!

Unless, of course, it comes with the cost of being unable to protect against the attacks that actually do happen.

Comment Re:Stupid, stupid, stupid! (Score 1) 134

1. Cheap forgery resilience isn't minor. With DNSSEC, detecting a forgery is just as expensive as verifying a valid signature, so the cost is multiplied by the number of requests. Being able to detect bad packets more cheaply than verifying good packets minimizes the work clients perform.

2. That's absurd. "The hard work of making a protocol" is minimal compared to the hard work of deploying it.

3. I stand corrected about NSD, but reject the premise that "we've already implemented this far" is a good reason to switch protocols.

4. I think you're wrong. Nobody cares about DNSSEC; they care about getting to the right website.

5. Wrong. DNSCurve offers some security as soon as some hosts deploy it, because those hosts can then reject unsigned packets. With DNSSEC, a client may not even know whether its firewall filters DNSSEC, and so would require special configuration.

Tested, secure, with design docs and very good specifications.

Rejected.

Widely implemented in software, though not yet widely deployed.

It took DNSSEC 10 years to reach the point DNSCurve reached in 4 months. This is meaningless.

Allows for very strong key protection for high profile sites, or less security and more convenience for low profile sites.

By maligning the value of the signing key.

Uses time tested RSA, not a 2 year old ECC variant. Can later switch to ECC if desired.

No, it can't. Sites will still need to repeat the deployment operation in order to switch to ECC.

Allows for rolling over keys in ways that do not break the trust relationship.

There is no new trust relationship; the parents simply sign the signing key. Exactly what trust can be inferred from that is independent of DNS.

Allows for easily adding multiple encryption algorithms.

Bullshit.

Overall, more <s>flexible</s> complicated.

Fixed that for you.

In terms of likelihood of deployment, DNSSEC is far ahead:

It is a sad thing that you are probably right about this. That doesn't mean it's a good thing by any stretch of the imagination.

I think it's downright shameful that ICANN and the IETF can be bought so easily: we may soon all have to run software with a known history of security bugs, just as we may all have to compete with companies that can afford to buy their own top-level domain.

In terms of likelihood of actually solving problems, DNSCurve is ahead, and as you point out, there are few implementations (only two to my knowledge) and almost zero deployment.

However, DNSCurve is only a few months old, and already it's almost to the point that DNSSEC is.

Saying "hurry and protect against these security problems by switching to DNSSEC; trust us, we've been working on this for fifteen years" just sounds stupid to me. I think you're probably stupid if you think it sounds like a good idea.

It may be that you think DNSCurve is about as good as DNSSEC, just less mature and less available, but I look at it differently: it's 2008, and the "deployment problem" of critical infrastructure has yet to be solved. We have sites without MX records, sites that break mail that isn't 8-bit clean, sites that still don't route-filter (for no good reason), and sites still running BIND4.

You simply cannot force people to upgrade, and while I'm sure the BIND group would get it right eventually (I'm told current versions of BIND9 suck far less than their predecessors), you simply cannot get away with that shit.

Comment Re:Stupid, stupid, stupid! (Score 1) 134

1. Wrong. Go read the DNSCurve implementation guide to see why.

2. Extensibility is a red herring. Adding a new keying system or cryptodevice still requires rolling out software and hardware changes on a scale similar to rolling out DNSSEC in the first place.

3. Yes it's true. All of those servers you named are based on the BIND source tree.

4. It's more than a tooling problem. Verizon sent out tens of thousands of routers with an embedded DNS server on them. An embedded DNS server that drops DNSSEC information. All of those devices need to be replaced in order to support DNSSEC. None of them need to be replaced in order to support DNSCurve.

5. You say I'm wrong, and then say I'm right. The parents don't need to support DNSCurve because the cryptographic information is stored in the NS record. That means an attack must be focused on the root servers to succeed; meanwhile, ANY MITM attack on ANY content server can break DNSSEC by simply blocking the signing information.

6. "The end-user needs the root key ahead of time" means what I'm saying is true. DNSSEC gains no security unless clients can drop/ignore unsigned packets.
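The point about the cryptographic information living in the NS record (point 5 above) can be made concrete. Per the DNSCurve design, the server's Curve25519 public key is encoded directly in the NS hostname: a first label of "uz5" followed by 51 characters of DNSCurve's base-32 alphabet. The sketch below shows how a resolver could recognize such a delegation; it's illustrative only, and the validation is deliberately shallow.

```python
# Detect a DNSCurve-capable delegation from the NS hostname alone.
# DNSCurve's base-32 digits: 0-9 plus the consonants b-z (no vowels).
BASE32_ALPHABET = set("0123456789bcdfghjklmnpqrstuvwxyz")

def dnscurve_capable(ns_hostname: str) -> bool:
    """Return True if this NS name advertises an embedded DNSCurve key."""
    label = ns_hostname.split(".")[0].lower()
    # "uz5" magic prefix + 51 base-32 digits encoding the 255-bit public key
    return (len(label) == 54
            and label.startswith("uz5")
            and all(c in BASE32_ALPHABET for c in label[3:]))
```

A resolver that sees such a delegation can demand authenticated answers from that server and drop everything else; an ordinary NS name gets ordinary, unauthenticated DNS, which is why the parent zone itself needs no protocol changes.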

Comment Re:Misreported (Score 1) 134

I think you overestimate the value of having the signing key on a different machine.

The hypothetical attack where an attacker secretly changes some records on a content DNS server is unlikely, and there's nothing that says it has to be any less likely than breaking the machine with the signing key on it.

Meanwhile, denial of service attacks occur all the time.

It seems naive to protect against the attacks that don't happen, and ignore the attacks that do.

More importantly, you also seem under the impression that DNSSEC is "extensible". It isn't. Any extensions to DNSSEC will require software and hardware changes on a scale similar to deploying DNSSEC the first time.

No one had simple ways of being immune. Two vendors had port randomization turned on, far from everyone not running BIND. It was a smart hardening step, but it doesn't make them immune to the currently known attacks. If it did, we wouldn't be having this conversation.

DJBDNS ignores answers to questions it didn't ask, which makes the attack practically impossible to launch: you only get to try to load bogus data in when the TTL of the NS record is expiring, which simply doesn't happen that often in the default configuration.

Port randomization is just another example of how the BIND group doesn't "get it". Port randomization would slow this attack and stop many others, yet the BIND group chose increasingly complicated non-solutions. Doing the simple things (ignoring answers to questions not asked, randomizing ports) would have stopped this.

Comment Re:Emphasis on *amateur* (Score 1) 134

Heck, you do all the crypto off-line, so you can pick a big one.

Bzzt. Wrong.

Caches still have to verify the packets.

DNSCurve and DNSSEC have complementary security goals.

DNSSEC was designed by the same people who created the problem. Yet they keep saying "trust us, we've been doing this for a while".

DNSCurve was designed by cryptographers who went out to solve an actual problem that people are experiencing.

You on the other hand, are doing a lot of hand waving: You clearly don't have even a basic background in cryptography, and you're wrong about basic and simple things. Anyone paying you for advice is an idiot.

Seriously, how the fuck could anyone think DNSSEC could protect anyone from anything if it really were "completely off-line"?

Comment Re:Misreported (Score 1) 134

DNSSEC is currently deployed live in multiple countries

No it's not.

DNS security initiatives are about protecting clients. There are zero clients protected by DNSSEC, ergo, DNSSEC has zero deployment.

The big attempt at protecting clients looking up .SE users failed miserably, partly due to a BIND bug, and partly because clients didn't bother checking at all.

Strange that the link you send doesn't mention DOS attacks at all.

I'll highlight it for you:

Availability

  1. RFC 4033: "DNSSEC does not protect against denial of service attacks."
  2. DNSCurve adds some protection against denial-of-service attacks.

If DNSCurve was proposed 5 years ago, it would have had a good chance of becoming the standard. Now, frankly, it's too late.

I disagree. It's 2008 and there are still sites without MX records. It's also impossible for a multihomed site in the US to deploy IPv6. It'll probably be another 10 years before DNSSEC sees any deployment whatsoever.

On the other hand, it has many future-proof features like the ability to upgrade the crypto used in case RSA, DSA, ECC, or any other scheme falls like a house of cards, or simply need to be made longer in order to survive attacks.

You don't get it. Leaving some bits open to replace the protocol isn't the same as future-proofing. You'll just make everyone switch DNS servers again, only this time it'll be to BIND 12.

The deployment problem is everything. That's what MX records, IPv6, and IPSEC have proven. Flag-day deployment is impossible; your best bet is incremental improvement, and because the BIND group is digging in its heels about that, everyone suffers.

Remember: The recent DNS bugs were predicted over a decade ago. The BIND group said don't worry, we'll have DNSSEC done soon, but other DNS server vendors (all those not based on BIND) adopted the very simple method of being immune to them.

DNSSEC has been about "trust us, we've been doing this for a long time", and frankly that's not enough.

Comment Re:Misreported (Score 1) 134

That's the great part about DNSCurve. It's so simple BIND could implement it in an afternoon.

DNSSEC isn't supported by DJBDNS, MaraDNS, LDAPDNS, or any other DNS server not based on BIND's codebase. In order to gain DNSSEC support, everyone effectively would have to switch to bind.

I cannot see why anyone would think that is a good thing.

By the way, .SE's major problem was serious. It was really serious. The fact that it hadn't been noticed this entire time, with numerous people saying DNSSEC is almost ready, is frankly very disturbing.

Comment Re:Stupid, stupid, stupid! (Score 1) 134

1. Yes, it does. Dodging the question by pointing out what content servers can do is irrelevant: caches have to check the packets each and every time. Furthermore, the content server does have to re-sign periodically.

2. There's no reason the protocol had to be changed; DNSCurve proved that. The DNSSEC people have had since 1993 to figure this out, and that's very strong evidence that they're bonkers-wrong.

3. It's very true. MaraDNS doesn't support DNSSEC. LDAPDNS doesn't support DNSSEC. Every codebase not based on BIND lacks DNSSEC support. It doesn't help that DNSSEC is so new, and it doesn't help that DNSSEC is so complicated.

4. It's clearly very easy for very small sites to switch to DNSSEC, especially if you're the only guy operating one. The real Internet isn't like that.

5. No, it's a serious issue. The parents need to support DNSSEC. The parents do not need to support DNSCurve.

6. A MITM attack could prevent a client from ever knowing that a parent was signed. Unless clients reject unsigned data, DNSSEC is useless.

Comment Re:So what powers does the IETF have on this? (Score 1) 134

The namedroppers list has, in the 10 years I've been monitoring it, been a source of misinformation, and it has frequently been mismanaged.

Kaminsky's bug is a rehash of an old bug that non-BIND nameservers were already strong against.

If your sole source of information about DNS comes from the likes of Randy Bush, you sir are an embarrassment to network administrators everywhere.

1. According to the IETF, DNSSEC was started in 1993. That's far longer than a decade.

2. A controlled, toplevel deployment of DNSSEC to .SE knocked out a number of .SE sites. Look at this for more details.

3. If you honestly think there aren't install costs with replacing DNS with something else, you're a fucking idiot and not worth my time.

Argument by vigorous assertion? Please. This is common knowledge. The BIND group says this isn't important, and DNSSEC is almost there.

Comment Re:Misreported (Score 2, Interesting) 134

Yes, but not more than DNSSEC, which is a published, widely implemented, and tested system.

I disagree. DNSSEC isn't widely implemented, and the widest test had numerous problems.

DNSCurve is 100% compatible with DNS. There's nothing a firewall could do that would be compatible with DNS that is incompatible with DNSCurve.

DNSSEC is not.

DNSCurve trades off more compute resources and the need to have the signing key on the public DNS server to get encrypted DNS, while DNSSEC has a lower server compute load and can store the signing keys off the server, but communicates in the clear.

DNSCurve protects against denial of service attacks, and it requires far less compute power than DNSSEC.

It's hard to make a case for the need to protect the DNS traffic from sniffing, the threat is modification, not sniffing.

Rubbish. Even an amateur cryptographer will tell you that the more you know about a message, the easier it is to break it. Confidentiality protections reduce what an attacker knows, and thus protect against attacks that are as yet unknown.

I would like to see elliptic curve crypto standardized and used in DNSSEC as it will significantly save on the traffic needed, but that is something that can be easily changed later. DNSSEC is very extensible and designed with the future in mind.

I don't think you know what you're talking about.
