Comment Re:So you think RSA is broken? (Score 1) 179

Of course. It's just that this is 6-7 orders of magnitude easier than breaking RSA, even against a relatively hard target.

No. It's however hard breaking RSA is, plus extra work on top, because you still need to break RSA.

Signings shouldn't help the attacker unless your hash is broken... it probably takes a worse break than the current ones against MD5 and SHA1, as well.

That's not true. See doi:10.1016/S1007-0214(05)70121-8, for example, on weak-key attacks against digital signature systems.

they [the banks] can upgrade much more easily than DNSSEC if RSA-1024 falls.

Sort-of. SSLv2 has been considered obsolete for a long time, but it took new PCI-compliance procedures to really shake it out of a lot of organizations I've worked with.

Upgrading is hard. Saying upgrading HTTPS's RSA-1024 is "easier" than upgrading DNSSEC is patently meaningless: We're not really talking about upgrading, we're talking about replacement.

There are still sites without MX records, and new FTP clients are still being made. I consider the proponents of DNSSEC and IPv6 similarly incompetent, largely because they have spent so little time exploring how to replace our existing crap.

DNSCurve is primarily an exercise in supplanting the existing system; that's what the entire system is built on, *how do we get security*, not how do we build the most secure system, or the best system by any technical measure.

You probably want to avoid them anyway... I'm a grad student so I don't design very practical stuff

Implementations are uninteresting. Where are these identity schemes published?


Australia To Block BitTorrent 674

Kevin 7Kbps writes "Censorship Minister Stephen Conroy announced today that the Australian Internet Filters will be extended to block peer-to-peer traffic, saying, 'Technology that filters peer-to-peer and BitTorrent traffic does exist and it is anticipated that the effectiveness of this will be tested in the live pilot trial.' This dashes hopes that Conroy's Labor party had realised filtering could be politically costly at the next election and was about to back down. The filters were supposed to begin live trials on Christmas Eve, but two ISPs who volunteered have still not been contacted by Conroy's office, which advised, 'The department is still evaluating applications that were put forward for participation in that pilot.' Three days hardly seems enough time to reconfigure a national network."

Comment Re:So you think RSA is broken? (Score 1) 179

What the hell are you blathering on about?

As is common for crypto protocols, if the RSA key in HTTPS is broken, a man in the middle can mess with the protocol in real time.

No it can't. You still need a way to get the packets to the man in the middle, and a way to get the packets where they don't belong.

DNS, using UDP, offers no such protection.

Secondly, DNSSEC uses the RSA key for a long time, and clients can collect lots of signatures to mount offline attacks. This attack doesn't work on HTTPS, which uses RSA only to sign/encrypt a session key. It doesn't work on DNSCurve either.

All other things being equal, that answers mmell's question: Why is RSA safer for bank transactions than for DNSSEC?

How the hell can anyone be as fucking numb as you are to these two very simple things and still "be a cryptographer"?

I call shenanigans! If you're actually paid to design cryptosystems, let me know which ones so I can avoid them.

Comment Re:So you think RSA is broken? (Score 1) 179

You're missing the important part:

Funny thing - if RSA1024 is more than enough to secure my bank transactions, why wouldn't I trust it with my DNS queries?

In order to break your bank transactions, someone not only needs to break RSA, they also need to break TCP, and quickly, I might add.

Without TCP, RSA becomes significantly weaker, no longer requiring a billion-dollar machine to break very small messages. If you're trying to spoof an IP address, you only need to attack four bytes; a differential attack could special-case keys for even less.

TCP is rarely broken; people rarely have their POP3 passwords sniffed, and rarely have those connections hijacked, and in the absence of a LAN-based attack, the practical probability becomes almost nil.

Breaking TCP is hard because not only do you have to break the sequence numbers, you also need to break route filters, and possibly more. Part of HTTPS's practical security comes from the fact that breaking HTTP's unintentional security is hard as it is: a single short-lived exchange over a dozen messages, spanning a second, is much harder to break than an RSA-signed DNS packet, which might be valid for days, or even weeks.
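A back-of-envelope sketch of the asymmetry (all numbers here are my assumptions, not measurements): a blind DNS forgery must match a 16-bit transaction ID (plus a 16-bit port, with source-port randomization), while a blind TCP injection must additionally land a 32-bit sequence number inside the receive window, within the roughly one second the connection is alive.

```python
def expected_forgeries(space, accepted=1):
    """Expected forged packets before one is accepted, when any of
    `accepted` values in the search space would be accepted."""
    return space // (2 * accepted)

dns_fixed_port = expected_forgeries(2**16)                  # TXID only
dns_randomized = expected_forgeries(2**16 * 2**16)          # TXID + port
tcp_blind      = expected_forgeries(2**32, accepted=2**14)  # 16 KB window

# The raw packet counts are only half the story: an RSA-signed DNS
# answer might be replayable for days, while the TCP race closes when
# the one-second connection does.
print(dns_fixed_port, dns_randomized, tcp_blind)
```

The point is that even where the packet counts look comparable, the attacker's window against a long-lived signed DNS record is days, not a second.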

LAN-based attacks (hacking a router, spoofing ARP, sniffing wireless, splicing cables) are impractical for most attackers; we generally only see them in extremely targeted attacks. It seems reckless and naive to optimize for this case, when DNSSEC only seems able to do so by making the practical and likely attacks easier.

Comment Re:So you think RSA is broken? (Score 1) 179

And I'm not sure what you mean by "breaking TCP"...

Breaking TCP presently requires guessing sequence numbers reliably or a MITM attack. Both are extremely uncommon outside of LANs.

This isn't true... the best known attacks against RSA are just to factor the modulus.

What isn't true? Breaking RSA is easier than breaking RSA and TCP? (note "also" in my original phrasing)

255-bit ECC is probably slower than 1024-bit RSA for verifies, however.

Not just probably, definitely. That's probably why DNSCurve uses Curve25519 (very, very fast Diffie-Hellman), which is significantly faster than RSA at similar key strengths.

They can get new ciphers rolled out to browsers, and degrade to RSA for browsers that haven't implemented them. These problems are considerably worse for DNS servers and routers.

On the other hand, with DNSSEC, we're talking about using RSA in a new standard; its performance and size are already problematic at the current strength, and will get cubically worse at greater strengths.

Agreed. We already have excellent information about how long it takes to roll out a new protocol (and stop supporting the old protocol): A-fallback for MX records, Path-MTU discovery problems, ECN, and SSLv2 are things that we still have to deal with today, and MX records were introduced over twenty years ago.

It's evident that new protocols need to be carefully designed to be compatible with existing systems, and that the existing systems will be around for a long time. DNSSEC simply isn't compatible with DNS.

So saying "These problems are considerably worse for DNS servers and routers", I believe is woefully understated. These problems are the most important factor here, on a live, moving, Internet.

Comment Re:So you think RSA is broken? (Score 1) 179

That seems to be the crux of his argument against DNSSEC - that RSA is broken (or soon to be broken).

You're wrong. The crux of his argument against DNSSEC is that it's stupid, requires everyone to deploy it before anyone can enjoy it, and is incompatible with DNS. It has also wasted valuable space and time on the promise of an "extensible" cryptosystem, completely ignoring the fact that deploying a new cryptosystem would require almost as much work as deploying the first one.

Don't you think we should get it right the first time?

You're right - let's pick the shiniest technology on the shelf, we all know that elliptic curve encryption is faster, smaller and uncrackable, right?

Curve is very well understood, and its security record is twenty years old at this point. Curve is much faster than RSA, and in something like DNS, slowness can turn into denial-of-service attacks. Furthermore, no sub-exponential attacks are known against well-chosen curves, whereas RSA's underlying factoring problem has been broken in sub-exponential time.
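To put rough numbers on that gap, here is a heuristic sketch (standard textbook estimates, not a precise cost model): RSA's best known attack, the general number field sieve, runs in sub-exponential time L_N[1/3, (64/9)^(1/3)], while the best generic attack on a well-chosen curve (Pollard's rho) costs about 2^(n/2).

```python
import math

def rsa_security_bits(modulus_bits):
    """Heuristic GNFS cost L_N[1/3, (64/9)^(1/3)], expressed in bits.
    Ignores constant factors; a rough estimate only."""
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

def ecc_security_bits(curve_bits):
    """Pollard's rho on an n-bit curve group costs ~2^(n/2)."""
    return curve_bits / 2

print(round(rsa_security_bits(1024)))  # roughly 80-90 bits
print(ecc_security_bits(255))          # 127.5 bits
```

So a 255-bit curve sits at a comfortably higher estimated security level than RSA-1024, with far smaller keys.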

DNSSEC is planning to adopt ECC. The question isn't whether Curve is good; it's clearly good. The question is whether it is exhaustively good. The DNSSEC people believe a "pluggable" DNS security system is important, ignoring the fact that deploying a new cryptosystem is almost as expensive as deploying the first one.

Funny thing - if RSA1024 is more than enough to secure my bank transactions, why wouldn't I trust it with my DNS queries?

Excellent question.

Because not only does someone have to break RSA to break your bank transactions, someone also has to break TCP, which is actually much harder, and requires a monstrous amount of computer power and bandwidth available at sub-msec speeds. With DNS, breaking TCP isn't a requirement, because DNS doesn't use TCP.

Comment Re:Slow down there (Score 2, Informative) 179

I did

You need to work on your reading comprehension then.

DJBDNS supports all RR types, by way of generic RR support. See near the bottom of this page for details.

There is a series of patches that produce friendlier syntax for tinydns-data, a single component of DJBDNS. This isn't valuable to large sites that don't author their zones in tinydns-data's native format.
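For reference, the generic RR support works through tinydns-data's `:` line type, which as I recall from its documentation has the form `:fqdn:n:rdata:ttl`, where n is the numeric record type and rdata bytes can be escaped in octal. A sketch (example.com and the values are illustrative): a TXT record (type 16) carrying the 11-byte string "hello world", length-prefixed with octal \013, might look like:

```
:example.com:16:\013hello world:86400
```

Check the tinydns-data documentation for the exact escaping rules before relying on this.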

Comment Re:Slow down there (Score 1) 179

I disagree. His reputation is the single most important motivating factor here. Vixie et al. produced this mess, have been whining about DNSSEC since 1993, and still haven't come up with a deployment plan or a migration plan. DJB started with a system that was 100% compatible with DNS, instead of starting with a pipe dream.

Furthermore, when BIND and friends were vulnerable to these new attacks, DJB's software wasn't. Not because he was lucky, but because he's a pedant who thought of similar attack vectors over a decade ago, and announced solutions on the BIND and namedroppers mailing lists: randomize source port numbers, and don't accept answers to questions you didn't ask.
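Those two defenses are simple enough to sketch. This is a hypothetical illustration (the names and structure are mine, not djbdns code): keep per-query state keyed on everything a forger would have to guess, and drop any response that doesn't match an outstanding question.

```python
import secrets

# Outstanding queries, keyed on everything a blind forger must match.
pending = {}  # (txid, port, qname, qtype) -> True

def send_query(qname, qtype):
    txid = secrets.randbelow(2**16)                # random transaction ID
    port = 1024 + secrets.randbelow(2**16 - 1024)  # random source port
    pending[(txid, port, qname, qtype)] = True
    return txid, port

def accept_response(txid, port, qname, qtype):
    # "Don't accept answers to questions you didn't ask."
    return pending.pop((txid, port, qname, qtype), False)

txid, port = send_query("example.com", "A")
assert accept_response(txid, port, "example.com", "A")      # legitimate
assert not accept_response(txid, port, "example.com", "A")  # replayed
assert not accept_response(0, 53, "attacker.example", "A")  # unsolicited
```

The pop also means each answer is accepted at most once, which kills replays for free.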

Needless to say, the BIND group had their "own" solution to those attack vectors, and I don't have to tell you how well those worked out.

Comment Re:What linux ACTUALLY needs (Score 1) 865

That's not quite the reason behind not implementing the stable interface, as I discovered in another thread. The real issue is that compatibility with binary modules depends on the C compiler version and kernel build options, which kills it.

The Linux Kernel Developers disagree with you.

BTW, size doesn't translate into slowness or bugginess ... those two are related to bad design rather than simply having more code

The problem with a stable kernel API is that you can't fix those bad design decisions anymore because somewhere a driver might exist that depends on that old API.

Even so, you still have to have the driver as a slightly separate component from the kernel, even if it isn't via a publicly stable API

No you don't. That's the whole point.

For everything else, there's already a standard interface through /dev/* for most of the devices.

That's the userspace interface. Over 200 system calls have been added since Linux kernel 0.99 and none of them have been removed. That means programs that ran on Linux in 1992 still run on Linux today. Almost all of the programs that ran on Windows 3.0 simply don't run today on Windows.

That's what a stable API means. Linux has one for userspace; Microsoft does not. However, over 50 of those system calls exist for compatibility purposes. That's 50 functions that have to be maintained even though almost no programs use them. Microsoft's userspace API is so large it was impossible for them to continue supporting the old Win16 API.

Fewer than 20 system calls get added every year. Can you imagine? Fewer than 20 new public functions every year? And you're seriously suggesting such a schism be introduced into the kernel, while clearly not understanding the costs, and being blissfully unaware of the number of conversations that have led to this.

To you, it seems the question should be "why not have a stable kernel API? That would make it easier for hardware manufacturers to make drivers," and I'm saying: since they clearly cannot write drivers anyway, why should we go to the effort of having a stable kernel API?

Comment Re:The fix or DJB's immunity's is still not enough (Score 1) 122

The problem here is that some people don't understand how anyone expects to attack these banking sites without also replicating their SSL certificates for secure login.

Eh, I think it's a big problem that there is a motivating force saying let's trust the guys that caused this problem in the first place. DNSSEC is a replacement for the DNS infrastructure, and the best they've got is "we weren't really trying to make something good when we made DNS." I'd think it pitiful, but look around Slashdot: look at how many people are saying DNSSEC would've fixed this problem.

Anyway. The attack is pretty straightforward. It looks like this:

  1. Register a bogus domain name.
  2. Sign up for a certificate for that domain. Answer the email verifying you own the bogus domain.
  3. Log queries to your nameservers! The resolvers that connect to you are the things you want to attack.
  4. Brute-force poison those caches until you can direct the victim's domain name to your own servers. Congratulations: as far as the SSL vendor is concerned, you control the victim's mail and web site.
  5. Sign up for a certificate for that domain - this will not be questioned.
  6. Congratulations, you have an SSL certificate for the victim's domain! Time to MITM your victims with an SSL-encrypted site. Have a beer, you earned it, big guy!

It's not complicated, and if the SSL vendor uses BIND (a safe bet: many vendors are deploying DNSSEC support so that they can start selling more snake oil), it's fairly easy.

Comment Re:What linux ACTUALLY needs (Score 1) 865

Because you mentioned low-quality unstable drivers.

No, I didn't.

I said vendors can't write good drivers. The Linux kernel developers clearly can. They are also willing to write the drivers for no-charge.

memory segmentation, allows any failure to be contained within that one program instance.

At the cost of making the entire kernel slower, bigger, and buggier; making it harder to adopt new methods and implementations unless their API is compatible; and generally making working on the Linux kernel (as a developer) much harder, so users get fewer features, more slowly.

That's the approach Microsoft took, and we can see just how well it's working for them: less hardware supported, fewer features, and slower releases.

It may be a necessity if you want to support closed source low-quality drivers, but I cannot fathom why anyone would want to do that.

If my network card driver fails, attempting to restart the driver crashes the system (although plugging the network cable into the other port works fine.) While I know this is more of a hardware issue, it seems that the driver is expecting a response from the network card and the delay somehow interferes with other devices on the system.

What does this have to do with me?

Are you running Linux?

Does your network device have published technical documentation?

If the answer to either of these is no, it's not really relevant, now is it?

Comment Re:The fix or DJB's immunity's is still not enough (Score 1) 122

You missed the part where djbdns ignores answers to questions it didn't ask.

That means that in addition to guessing the 16-bit port number and the 16-bit transaction ID, they have to do it after the TTL has expired for the record in question, but before the legitimate content server has sent the new (refreshed) information.

That's ridiculously unlikely.
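How unlikely? A sketch with assumed numbers (the packet budget and expiry rate are illustrative, not measured): the attacker only gets a race each time the TTL expires, and within that instant must beat the real answer with a packet matching both the transaction ID and the source port.

```python
TXID = 2**16
PORT = 2**16
forged_per_race = 10_000  # assumed attacker packet budget per race window

p = forged_per_race / (TXID * PORT)  # success chance in one race
races_needed = 1 / p                 # expected races before a win

# If a race only opens once an hour (one TTL expiry), that's decades:
years = races_needed / (24 * 365)
print(round(races_needed), round(years))
```

Under these assumptions it's hundreds of thousands of races, i.e. on the order of fifty years at one expiry per hour, versus a resolver that answers unsolicited forgeries at any time.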

It's intellectually dishonest to say the only way to resolve it is DNSSEC or long XIDs. Furthermore, XIDs and DNSSEC require a similar amount of work to deploy, since both are incompatible with DNS, with XIDs being slightly cheaper because they are simpler to implement.
