Security

Remote RSA Timing Attacks Practical

David Brumley and Dan Boneh write "Timing attacks are usually used to attack weak computing devices such as smartcards. We show that timing attacks apply to general software systems. Specifically, we devise a timing attack against OpenSSL. Our experiments show that we can extract private keys from an OpenSSL-based server such as Apache with mod_SSL and stunnel running on a machine in the local network. Our results demonstrate that timing attacks against widely deployed network servers are practical. Consequently, software should implement defenses against timing attacks. Our paper can be found at Stanford's Applied Crypto Group."
  • Personal crypto? (Score:5, Interesting)

    by pro-mpd ( 412123 ) on Thursday March 13, 2003 @09:09PM (#5508240) Homepage
    OK, so we know now that SSL servers may be vulnerable. Can this sort of attack be used against personal encryption, a la PGP?
    • by Anonymous Coward on Thursday March 13, 2003 @09:18PM (#5508291)
      Only if you type really, really fast.
    • Re:Personal crypto? (Score:2, Informative)

      by SgtPepper ( 5548 )
      Well, according to the article:

      "Many crypto libraries completely ignore the timing attack and have no defenses implemented to
      prevent it. For example, libgcrypt [6] (used in GNUTLS and GPG) and Cryptlib [7] do not defend
      against timing attacks."


      So I would say yes it is (if you consider GPG the same as PGP, that is).
      • Re:Personal crypto? (Score:5, Informative)

        by SgtPepper ( 5548 ) on Thursday March 13, 2003 @09:34PM (#5508366)
        I should clarify though that it would only be possible if someone was on your system WHILE you were encrypting something or decrypting something that was encrypted with your key. It isn't /possible/ to execute this attack AFTER something is encrypted, only during the encrypting process.

        Someone tell me if I'm wrong...I might be, but I don't think I am...
        • by Anonymous Coward
          I should clarify though that it would only be possible if someone was on your system WHILE you were encrypting something or decrypting something that was encrypted with your key. It isn't /possible/ to execute this attack AFTER something is encrypted, only during the encrypting process.

          Someone tell me if I'm wrong...I might be, but I don't think I am...


          You shouldn't worry. Oh yea, and can you send that last email again please? I had to reboot and missed it.......
        • Re:Personal crypto? (Score:3, Informative)

          by the_quark ( 101253 )
          Theoretically possible. But they'd have to get you to encrypt arbitrary chosen-plaintext, and a lot of it, to check the timings. It's probably easier just to root the box and steal your password with a keyboard sniffer.
          • Re:Personal crypto? (Score:3, Informative)

            by jareds ( 100340 )
            No, they'd have to get you to decrypt chosen ciphertext (but they don't need the result of the decryption, just the timing). PGP uses public-key cryptography, so anyone can encrypt chosen plaintext.
    • Re:Personal crypto? (Score:5, Informative)

      by tjohns ( 657821 ) on Thursday March 13, 2003 @09:25PM (#5508315) Homepage
      No. According to the article, "Timing attacks enable an attacker to extract secrets maintained in a security system by observing the time it takes the system to respond to various queries." Once you choose to encrypt and send a message with GPG/PGP, it is done, and that is that. Aside from the conversation your computer makes with the SMTP server, there is no further communication between hosts, so there is no way somebody could try to measure the time it takes to generate an encrypted reply to something.

      Therefore, as long as your computer isn't compromised, you're fine. However, if your computer is compromised, well, you've got bigger things to worry about than somebody trying to time how long it takes to encrypt a message. ;-)
      • A theoretical possibility is that someone makes, say, a message containing JavaScript code to measure the time it takes for GPG to decrypt the message... I don't know if this would be possible - the method described requires pretty precise timings and I know absolutely nothing about how good timer support JavaScript has, nor if it is possible to embed a script in a message in such a way. It still takes about a million chosen plaintexts, so hopefully it won't work in practise, but if you're this paranoid you p
  • by The Mainframe ( 573877 ) <bennettprescottNO@SPAMgmail.com> on Thursday March 13, 2003 @09:09PM (#5508242) Homepage Journal
    Great, and this after I've been bragging about my 'not-breakable for a billion years' 2048-bit key.
    $mouth . $foot
    • A billion years? (Score:2, Interesting)

      by Iron Fusion ( 591400 )
      Not breakable for a billion years? Assuming that a typical modern computer could check at least 1000 keys per second, and that Moore's law continues and computing power keeps doubling every 18 months, then it would "only" take 3057 years until a typical computer was capable of checking all possible 2048 bit keys in 1 second. Sure, that's a long time, but hardly a billion years :).
  • What a shame. (Score:2, Offtopic)

    by AltGrendel ( 175092 )
    Pity this didn't appear BEFORE the Paul Kocher post.
  • by MisterFancypants ( 615129 ) on Thursday March 13, 2003 @09:13PM (#5508263)
    Microsoft didn't write OpenSSL..How could this be possible????

  • Dan Boneh (Score:3, Funny)

    by blackmail ( 62874 ) on Thursday March 13, 2003 @09:13PM (#5508267)
    Is one smart cookie. He's also the only prof I wouldn't take a class from because it wasn't webcast. In other words you can't pause and rewind his live lectures. He talks real fast. And tilts his head at a 30 degree angle to his left.
    • Re:Dan Boneh (Score:4, Interesting)

      by ChadN ( 21033 ) on Thursday March 13, 2003 @09:35PM (#5508368)
      I took a couple classes from him, and found him to be a very good lecturer. VERY knowledgeable about his subject, and very well organized with his presentation.

      Even reading all the course books intently, I got a great deal of helpful information from the lectures.
  • Umm... (Score:5, Funny)

    by Anonymous Coward on Thursday March 13, 2003 @09:16PM (#5508284)
    That summary is so buzzword-rich I feel compelled to purchase a product, if one were offered.
  • by Phishpin ( 640483 ) on Thursday March 13, 2003 @09:17PM (#5508287)
    The article says that they "can extract private keys from an OpenSSL-based web server". That doesn't sound like a compromise of the RSA algorithm itself. They just got the private key.
  • just randomize the timing in every reply... a few NOOPs?

    or am i missing something?

    it would seem to me that the encryption/decryption process has no self-awareness of what is a short and a long time to reply. therefore, maybe you need another process to watch the time it takes to reply. it could average it over time and provide some sort of super-lengthening of the time it takes to reply, so that long replies can be made to look shorter than an artificially lengthened short reply?

    because if you just lengthen the short replies, doesn't that still reveal something since a really long reply will still show a spike?

    if replies happen on the order of 5-50 nanoseconds naturally (hypothetically speaking), and you lengthen the short replies so that the range becomes something like 35-50 nanoseconds, then that could still reveal something.

    so what is needed is maybe a superlengthening of replies so that 5-50 nanoseconds becomes 100-200 nanoseconds? only that way could you hide the difference between a 5 nanosecond and a 50 nanosecond reply completely, no?

    doesn't this have implications though on overall speed? or are the times still so tiny it really doesn't matter?

    • by selectspec ( 74651 ) on Thursday March 13, 2003 @09:41PM (#5508396)
      It's called RSA blinding: before the private-key exponentiation, the input is multiplied by a random value and the result is un-blinded afterwards, so the timing no longer correlates with the attacker's chosen ciphertext. Implementations vary, but the typical performance overhead is 2-10%.
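
      The gist, as a toy sketch (not OpenSSL's actual code; the tiny hard-coded key and the helper names below are made up for illustration): multiply the ciphertext by r^e before the private exponentiation, then divide the result by r afterwards, so the value actually exponentiated is unknown to the attacker.

      /* Toy RSA blinding sketch -- real code uses a bignum library (e.g.
       * OpenSSL's BN_* routines); a textbook 12-bit key stands in here. */
      #include <stdint.h>
      #include <stdio.h>

      /* (a * b) mod m, safe while m fits in 32 bits */
      static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
          return (a % m) * (b % m) % m;
      }

      /* a^e mod m by square-and-multiply */
      static uint64_t powmod(uint64_t a, uint64_t e, uint64_t m) {
          uint64_t r = 1;
          a %= m;
          while (e) {
              if (e & 1) r = mulmod(r, a, m);
              a = mulmod(a, a, m);
              e >>= 1;
          }
          return r;
      }

      /* modular inverse via the extended Euclidean algorithm */
      static uint64_t invmod(uint64_t a, uint64_t m) {
          int64_t t = 0, newt = 1, r = (int64_t)m, newr = (int64_t)(a % m);
          while (newr != 0) {
              int64_t q = r / newr, tmp;
              tmp = t - q * newt; t = newt; newt = tmp;
              tmp = r - q * newr; r = newr; newr = tmp;
          }
          return (uint64_t)(t < 0 ? t + (int64_t)m : t);
      }

      int main(void) {
          uint64_t n = 3233, e = 17, d = 2753;    /* textbook key: n = 61 * 53  */
          uint64_t c = 855;                       /* attacker-chosen ciphertext */
          uint64_t r = 1234;                      /* fresh random blinding value */

          uint64_t c_blind = mulmod(c, powmod(r, e, n), n);    /* c' = c * r^e */
          uint64_t m_blind = powmod(c_blind, d, n);            /* m' = m * r   */
          uint64_t m       = mulmod(m_blind, invmod(r, n), n); /* unblind      */

          printf("%llu == %llu\n", (unsigned long long)m,
                 (unsigned long long)powmod(c, d, n));
          return 0;
      }

      Because the exponentiation runs on c * r^e rather than on the attacker's chosen c, the per-query timing no longer tells the attacker anything useful about d.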

    • Rather than randomly padding the response time, how about running tests to determine the longest time the calculation can reasonably be expected to take, then adding some fudge factor to arrive at a set constant.

      Where 'constant' need only be constant for a specific length of message. An implementation that allows different-sized packets to be input would be allowed to use a formula to determine the response time, such that any two packets of the same size would have the same response time.

      Set a t

    • by rew ( 6140 ) <r.e.wolff@BitWizard.nl> on Friday March 14, 2003 @04:04AM (#5509862) Homepage
      The idea is that bad code will do something like:

      if (bit[i] != key[i]) return FALSE;

      By timing this accurately, you'll know (approximately) which bit failed. Once fixed (and everyone agrees the fix is trivial), your code will look something like:

      if (bit[i] != key[i]) bad_bit = 1;
      else good_bit = 1;
      [...]
      if (bad_bit) return FALSE;

      As you can see, the computer has to do exactly the same work for a good or a bad bit. Thus the timing should be identical.
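
      Written out as a self-contained routine, the usual idiom (a generic example, not taken from any particular library) accumulates the differences and only tests them once at the end:

      #include <stddef.h>

      /* Constant-time comparison: the work done never depends on where
       * (or whether) the first mismatching byte occurs. */
      int ct_equal(const unsigned char *a, const unsigned char *b, size_t len) {
          unsigned char diff = 0;
          size_t i;
          for (i = 0; i < len; i++)
              diff |= (unsigned char)(a[i] ^ b[i]);  /* no early return */
          return diff == 0;                          /* 1 if equal, 0 otherwise */
      }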

      It's VERY VERY hard to mask these timing issues by "random" delays. Worst case, the attacker guesses your random numbers. Best case, the attacker will have to run his attack a couple more times to get the same results through the noise.

      Cryptographers have known about this for years. In fact, this (kind of) attack was posted on slashdot no more than a week ago.

      Roger.

      • rew wrote:

        It's VERY VERY hard to mask these timing issues by "random" delays. [...] Best case, the attacker will have to run his attack a couple more times to get the same results through the noise.

        This is true only if the random delays are added to the running time of the operation you want to conceal. A simple, alternative method that reveals no information about the operation's running time is to decide upon the total (typically random) running time before performing the operation. Then, after t

      • why can't they fix the timings? i.e., make all computations last exactly 5 seconds. That way, all computations will take the same amount of time; how could the attacker learn anything from the timing?
  • In a nutshell... (Score:5, Informative)

    by Dr. Bareback ( 644802 ) on Thursday March 13, 2003 @09:20PM (#5508302)
    The paper was admittedly somewhat difficult reading for those of us who are not frequently subjected to academic research texts. However, I would like to take some time to shed some light on the topic for those of you who do not have a Master's degree from Harvard as I do. :)

    These timing attacks are very different from those executed against an embedded device, such as a smart chip, in that the attack against the smart chip aims to disrupt the device and cause it to skip one or more instructions in order to breach the security. These attacks instead use timing attacks as an oracle which allows the malicious hacker to make thousands of guesses against the insecure server, knowing that the timing of the response will eventually give away the key. For instance, by sending a specially crafted packet to one of these vulnerable SSL servers, one will be able to deduce from the timing whether a given bit in the private key is a 0 or a 1, simply by looking at how much time it takes to respond (on the average, for that particular crafted input). You can see how this could be a bad thing.

    Although this could be a very nasty threat today for machines within a small, predictable network distance from the attacker, there is hope. In the 2.5 kernel, developers have begun adding features that randomize round trip latency for packet responses. This means that these systems will not serve as good oracles for an active attacker because the timings generated by the randomization feature will not approximate an even (normal) distribution. This means that even by averaging them out, it will not be possible to determine from the timing of a cryptographic response whether (say) the bit is a zero or a one.

    This vulnerability has actually been discussed as a possibility for the past few years (mostly within the CERT "members only" club) but it was not until recently that a practical exploit was published. So if your keys were compromised before this went public, perhaps one of the blackhats figured the trick out first. :(

    • Practical? (Score:2, Interesting)

      by Anonymous Coward
      If this method of attack develops bits of a private key by averaging the response time of lots of "specially crafted packet[s]", how would you conduct an attack that would succeed in the real world?

      It's going to take a lot of time for a 1024-bit key, and it seems to me such an attack should be easily noticed, and even then the timing dependency should make this attack impossible to pull off in the real world unless you're sitting right next to the target box.

      • Re:Practical? (Score:2, Informative)

        by djbrums ( 633961 )
        It takes 2 hours in the real world. If you log connections, you'll see it there. But do you watch your logs that closely? One funny defense to try is to run OpenSSL w/ logging to a full disk. The full disk breaks the timing attack as the OS valiantly and unsuccessfully tries to find free blocks to record the SSL error message.
      • Re:Practical? (Score:3, Informative)

        by ca1v1n ( 135902 )
        I saw Boneh give a talk on this a few days ago. One machine, over a LAN, can do it in two hours. Sure, that can be noticed if you know to look for lots of aborted SSL sessions, but if you don't have this on an IDS or something like that, someone can Nimda your secretary's box and still get your private key before your sysadmin gets back from the weekly staff meeting.
    • by Lux ( 49200 ) on Thursday March 13, 2003 @11:36PM (#5508885)
      I'm pretty sure you're trolling, but I'll bite...

      Your elitism has been commented on thoroughly, so I'll skip to the meat. :)

      It takes 1.5 million queries to break an SSL key using this technique if you have an account on the same machine. That's for a local attack.

      They go on to give evidence that a remote attack is possible, but don't actually give parameters that would indicate the complexity of the attack. 1.5 million queries is already kind of pushing it though, and I would imagine taking the attack to the remote setting, even with a small round trip, will explode the complexity significantly. So the attack may not be feasible at all.

      Furthermore, adding random variance to latency data doesn't theoretically help much. Get enough samples, and that noise drops out. It raises the number of queries, yes, but doesn't stop the latency from forming a statistically usable distribution.
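
      A quick made-up simulation shows why: a 3-microsecond difference survives jitter hundreds of times larger once you average enough samples.

      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          const int SAMPLES = 1000000;
          double avg0 = 0.0, avg1 = 0.0;
          int i;

          srand(42);
          for (i = 0; i < SAMPLES; i++) {
              double jitter0 = rand() % 1000;          /* ~0-999 "us" of noise */
              double jitter1 = rand() % 1000;
              avg0 += (100.0 + jitter0) / SAMPLES;     /* "0 bit" op: 100 us   */
              avg1 += (103.0 + jitter1) / SAMPLES;     /* "1 bit" op: 103 us   */
          }
          /* the 3 us gap reappears in the averages despite the jitter */
          printf("avg 0-bit %.2f us, avg 1-bit %.2f us\n", avg0, avg1);
          return 0;
      }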

      -Lux

      • So, how about semi-random latency? Take the longest expected maximum time of a query, then add a small random time to that just to mix things up. Once you've calculated this, make sure the query takes that long.
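
        A sketch of that idea (POSIX clock calls; the 50 ms worst-case figure and the function name are made up): record an absolute deadline before doing the private-key work, then sleep until it.

        #define _POSIX_C_SOURCE 200112L
        #include <stdlib.h>
        #include <time.h>

        /* Answer only after a pre-chosen deadline, regardless of how long
         * the private-key operation actually took. */
        void answer_with_deadline(void (*do_private_key_op)(void)) {
            struct timespec deadline;
            clock_gettime(CLOCK_MONOTONIC, &deadline);

            /* worst expected time (50 ms) plus 0-4 ms of random slack */
            deadline.tv_nsec += (50 + rand() % 5) * 1000000L;
            if (deadline.tv_nsec >= 1000000000L) {
                deadline.tv_sec  += 1;
                deadline.tv_nsec -= 1000000000L;
            }

            do_private_key_op();   /* the timing-sensitive RSA work */

            /* sleep to the absolute deadline; returns at once if we overran */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);
        }

        The obvious cost is that every reply is as slow as the assumed worst case, and an operation that overruns the deadline still leaks.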
    • I would like to take some time to shed some light on the topic for those of you who do not have a Master's degree from Harvard as I do.

      Harvard should teach a course on how to shut up about Harvard :-P
  • Xbox (Score:5, Interesting)

    by Anonymous Coward on Thursday March 13, 2003 @09:21PM (#5508304)
    Wouldn't this type of attack be feasible on the Xbox instead of trying to factor the key? You even have access to the hardware so very precise timing could be done.

    Just a thought.
    • No. The key they are looking for isn't stored on the Xbox.
  • by myst564 ( 196476 ) on Thursday March 13, 2003 @09:21PM (#5508305)
    It seems that this is severe when the attacker is very "close" in network terms to the server. It relies on the time difference when multiplying numbers in OpenSSL.

    So, if you are "far" away from the SSL server, I suppose this attack isn't as "good".

    All this talk about closeness and goodness really just means we're using smoke and mirrors.

    OpenSSL needs to figure out how to take the same amount of time for any input string from the client.
    • Theoretically, you are right, practically, no.

      Basically, if your connection to the attacked machine is shorter, the noise you'll receive is less.

      Considering that there are thousands of compromised machines on the internet, there'll probably be a machine close by that is vulnerable to some known exploit. That machine can then be used by the cracker as a proxy that does the timings. It could also run the attack automatically, and at the end just relay the resulting key somewhere.

      Remember that most machines are in
  • Followup or what? (Score:5, Informative)

    by Stavr0 ( 35032 ) on Thursday March 13, 2003 @09:22PM (#5508308) Homepage Journal
    Swiss Researchers Find A Hole In SSL [slashdot.org]

    This sounds a lot like the weakness discovered last month by the LASEC folks in Switzerland ...

    • Re:Followup or what? (Score:5, Informative)

      by SiliconEntity ( 448450 ) on Thursday March 13, 2003 @10:07PM (#5508494)
      No, the recent OpenSSL attack was different. That relied on distinguishing messages which failed due to a bad MAC versus bad padding. The timing was slightly different in the two cases. It allowed you to get some portions of a sample message decrypted.

      The new attack just looks at how long the RSA decryption takes for carefully chosen values, and determines from that what the RSA secret exponent is, which means the RSA secret key. So this leaks the server's secret key and the server operator loses all of his cryptographic security. It's a much worse break.

      However, the timing precision needed for the new attack is much tighter than for the pad-vs-mac one, so at this point it can only be mounted across a LAN or on a shared-user system.
  • by Anonymous Coward on Thursday March 13, 2003 @09:26PM (#5508317)
    Why don't all the OpenSSL folks sue these guys under the DMCA? It's good enough for Adobe, it should be good enough for Open Source folks, right?
    • I can see a few reasons:

      1) We want it fixed, not hidden (no security through obscurity)
      2) That which does not root our boxen makes them stronger
      3) Can't afford the legal fees and lawyers
      4) Too busy with pr0n

      Pick whichever floats your boat
  • The bit about MS Palladium being a security risk is very interesting, and also very ironic.
  • my.domain.org daily insecurity output for Fri Feb 21 03:15:01 PST 2003

    Running /etc/security.local:
    Package openssl-0.9.6g has a weak-encryption vulnerability, see http://www.openssl.org/news/secadv_20030219.txt
  • Define Remote.... (Score:5, Informative)

    by SgtPepper ( 5548 ) on Thursday March 13, 2003 @09:29PM (#5508336)
    According to the conclusion at the end of the article:

    "We devised and implemented a timing attack against OpenSSL { a library commonly used in web
    servers. Our experiments show that, counter to current belief, the timing attack is eective when
    carried out between two machines in a local network. Similarly, the timing attack is eective
    between two processes on the same machine and two Virtual Machines on the same computer. We
    hope these results will convince designers of crypto libraries to implement defenses against timing
    attacks as described in the previous section.
    "

    So it looks like it is only useful against machines on the local network, which means you would have to have a compromised machine on the network to launch the attack from. Possible, yes, but it's not as simple as querying a remote system over the internet (I would assume that the unknown latency would render a timing attack useless, but couldn't you use a traceroute to determine the latency and compensate? Just a thought..) Anyway, I don't expect there to be 1,000s of compromised servers by tomorrow...
    • by j.e.hahn ( 1014 ) on Thursday March 13, 2003 @11:12PM (#5508811)
      (I would assume that the unknown latency would render a timing attack useless, but couldn't you use a traceroute to determine the latency and compensate? Just a thought..)

      You'd need to know something about how the latency varies. It's not constant. It's got a probability distribution over some range. (it's bounded below, and probably above too...)

      Problem is, if that probability distribution is random, or "near" random, you're going to have a bitch of a time extracting enough info to perform an attack of this sort. This is why you need to be close; it's not because there's low latency (that helps) but because the latency has low variation, which means you can assume a value for it and subtract that out. With a random distribution, your "signal" will be lost in the noise.

      It's another example of an attack being not quite as practical as advertised to the public. There's a real threat here, but it's not the "ultimate 'sploit".

      • Hmmmm... Not as unlikely as that - lots of people host on virtual-machines (e.g. Verio) or with managed hosting companies (e.g. Rackspace, which is why I'm nervous ;) ). That means that latency is very low, and fairly consistent.

        Just a thought.

      • True, but you could also send a burst of traceroute packets (udp or tcp timeouts) to all the routers in the route at the same time that you perform an iteration of the attack.

        Then you could (somewhat) compensate for latency with each iteration.

        Not necessarily elegant or foolproof, but I think possible... I wonder how much variance/latency (noise) you can reasonably account for before the exploit breaks down (or becomes too time-consuming to be worthwhile...)?

    • Anyway, I don't expect there to be 1,000s of compromised servers by tomorrow...

      No, but does that make this any less valid? A determined cracker will find a way. Maybe he has to compromise an internal box first, to then be able to crack the SSL key -- does that make this any less risky?

      It seems to me, to avoid timing attacks, that the server (SSL or whatever) should be careful to ensure that the response timing is similar whether it is an error condition or success condition -- in other words, give no cl
  • by Anonymous Coward
    I am aware that there is currently an effort to find the private key for the xbox. When I saw this headline I immediately wondered if the xbox is vulnerable. I assume it is not because it is only doing client encryption?
  • by Tailhook ( 98486 ) on Thursday March 13, 2003 @09:34PM (#5508363)
    IANACE (crypto expert)

    The attack works by measuring the time it takes for an SSL server to perform its private-key decryptions. By causing the SSL server to do lots of decryptions of chosen values, you can derive the private key. Apparently, this must be repeated many times and is highly dependent on timing. Thus, it's not fast, and network latency, high server load, etc. will reduce the effectiveness of the attack. Further, subtle environment differences prevent an obvious "script kiddy" level implementation of this attack. A relevant quote from the paper:

    As we will see, the performance of our attack varies with the exact environment in which it is applied. Even the exact compiler optimizations used to compile OpenSSL can make a big difference.

    A solution to this might be to implement a small random delay before the server returns cyphertext to the client, no? A few extra milliseconds here and there would probably be sufficient.
    • by Anonymous Coward on Thursday March 13, 2003 @10:04PM (#5508476)

      Being further from the server will add noise, but they already compensate for noise by averaging 7 samples (5 would have been sufficient for local use). Over a remote network, you take more samples. (perhaps quite a few)

      It currently takes 2 hours to crack a server.
      If you have a week or so of time on your hands, a relatively unused server (not mail.bigisp.com but admin.bigisp.com), and the administrator doesn't notice the traffic spike, quite a few sites could be vulnerable.

      Remember, people get upset when crypto can be cracked in terms of *years* and *dollars*. This can crack things in HOURS on a fast pc.

      • Busy sites will have variable server load for the attacker to cope with. It's more than just network latency you have to factor out.

        I wonder to what degree not knowing the precise parameters of the SSL server would complicate things. If, for instance, you don't know whether the server is a SPARC, Intel or some form of dedicated hardware. Each combination of software and hardware would have wildly different behavior relative to the very precise measurements required.

        Also, by its nature, this attack pro
    • It would be trivial to prevent the attack: add a nanosleep(10) to the end of each operation, and any amount of time under the size of a timeslice disappears, because the SSL server will spend the rest of the timeslice sleeping, making the timing perfectly uniform.

      If you want to be sure (and deal with multi-timeslice operations and kernels that give the process control in a more timely fashion), use setitimer. It's not a hard problem to correct, but people didn't previously think that it was actually a prob
    • A solution to this might be to implement a small random delay before the server returns cyphertext to the client, no? A few extra milliseconds here and there would probably be sufficient.

      No: you don't always want that delay, and (random or otherwise) it can be averaged out and accounted for -- better is to have a desired total time for the operation (random or otherwise) and usleep(desired_total_time-time_taken), and much better than that is just to perform all the possible steps in any case (merely discarding thos
  • ROT-13+ (Score:4, Funny)

    by silvakow ( 91320 ) on Thursday March 13, 2003 @09:52PM (#5508436)
    Is there even a reason to be concerned with this when ROT-13+ [unixjokes.com] is perfectly secure? It was recently expanded from regular ROT-13 so it doesn't only encrypt letters, so it should be good enough for any application.
    • Re:ROT-13+ (Score:3, Funny)

      by iggymanz ( 596061 )
      For added security, I encrypted this post with Rot-13 *twice*
      • I don't mean to sound redundant here or anything, but you know how people use DES 3 times and called it 3DES? Why not use ROT-13+ 3 times and call it 3ROT-13?
        • don't mean to sound redundant here or anything, but you know how people use DES 3 times and called it 3DES? Why not use ROT-13+ 3 times and call it 3ROT-13?

          ROT-13 CUBED sounds more L337. I was gonna do a superscript 3, but /. won't let me. That would be more L337.
        • you know how people use DES 3 times and called it 3DES? Why not use ROT-13+ 3 times and call it 3ROT-13?

          ROT-n is a closed associative binary operation. That is, two composed ROT-i and ROT-j make a ROT-k. This is not true of DES, which is why DES E(i) D(j) E(i) provides 112 bits of effective key length.

      • For added security, I encrypted this post with Rot-13 *twice*

        (Putting on my best Scooby voice) That would be called Rot-Row-13...
  • Easily remedied (Score:4, Insightful)

    by divide overflow ( 599608 ) on Thursday March 13, 2003 @10:06PM (#5508484)
    This attack is highly sensitive to unpredictable delays (network latency, local process interruptions, I/O interrupts, ...) that make accurate timing of the crypto operations difficult to impossible. It seems that most probes would see so much "jitter" as to render the attack ineffective. It would be relatively simple to provide randomized jitter or simply ensure that the crypto operations responded in roughly equal time so that they didn't give away any useful information.
  • Too bad this system can't be used against the Xbox signature key, because signing of games is not done in real time... Myria ^_^
  • by Myria ( 562655 ) on Thursday March 13, 2003 @10:37PM (#5508621)
    The exploit behind this is actually not that complicated, if you know RSA. In RSA, you have these formulas:

    C=M^e mod n (encryption/signature verification)
    M=C^d mod n (decryption/signing)

    When OpenSSL wants to sign a message C, it needs to produce the signature M using the "M=C^d mod n" formula above. Calculating this is simple in idea: calculate C to the d'th power modulo n. However, d is a very big number; on average, it is n/2. There is no way you can calculate C^d mod n by multiplying C with itself many times.

    The answer to this is to use a "repeated squaring" algorithm. Let's say that d=17, and you want to calculate C^d. Multiplying C 17 times works, but is very slow. It is possible to calculate C^17 using the following formula:

    C^17 = C * C^16
    C^17 = C * (((C^2)^2)^2)^2

    Now, there are only 5 multiplies: 4 squares and one multiply with C. There is a pattern to when you square and when you multiply. The number 17 is 10001 in binary. The binary digits tell you when to square and when to multiply.

    You start with the number 1, and go through the bits in 10001 from left to right. For each bit, either 0 or 1, you square. But if a bit is a 1, you also multiply. This means that the first step (a 1 bit), you square, then multiply by C. Since you started out with 1, you get C here. For the next 3 bits, all 0's, you square, getting C^8 as your result. The last bit is a 1. You square, getting C^16, then multiply by C again, since it's a 1 bit. This gets you C^17.

    The problem here is that "1" bits require a different amount of work than "0" bits. When you have a "1" bit, you have to perform an additional multiply than for a "0" bit. If you can somehow time each multiplication/square step, you can determine whether the bit of "d" was a 0 or 1. If you can do this 2048 times, you can calculate all the bits of the private key, which is "d". That is what this attack does, minus all the complicated details.
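
    That loop, written out for toy 64-bit numbers (real implementations use bignums; this sketch only works while n fits in 32 bits):

    #include <stdint.h>

    /* Left-to-right square-and-multiply: square on every exponent bit,
     * and do one extra multiply whenever the bit is 1.  That extra
     * multiply is exactly the data-dependent work a timing attack sees. */
    static uint64_t modexp(uint64_t c, uint64_t d, uint64_t n) {
        uint64_t result = 1;
        int i;
        c %= n;
        for (i = 63; i >= 0; i--) {
            result = (result * result) % n;    /* square: every bit        */
            if ((d >> i) & 1)
                result = (result * c) % n;     /* multiply: 1 bits only    */
        }
        return result;
    }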

    This RSA hole can only possibly be exploited when the attacker has physical access to the device (as in a smart card attack), or the owner of the private key automatically signs/decrypts messages sent from a client (as in OpenSSL). Manual encryption systems, such as those used for email, can't be exploited this way realistically.

    Myria ^-^ *hugs*
    • The problem here is that "1" bits require a different amount of work than "0" bits. When you have a "1" bit, you have to perform an additional multiply than for a "0" bit. If you can somehow time each multiplication/square step, you can determine whether the bit of "d" was a 0 or 1. If you can do this 2048 times, you can calculate all the bits of the private key, which is "d". That is what this attack does, minus all the complicated details.

      So, does that mean a feasible solution to prevent this sort of

      • Yes, that will work ^-^. Note that if you do this, you also will need to do the modulo operation on the multiplication result you're about to discard. Otherwise, the lack of a modulo operation would become a noticeable time difference.
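
        Something along these lines (a sketch of the idea, not OpenSSL code); the multiply and its reduction happen on every bit, and real constant-time code would also avoid the data-dependent branch, e.g. with a masked select:

        #include <stdint.h>

        /* "Square-and-always-multiply": do the multiply and its modular
         * reduction for every bit, keeping the product only on 1 bits. */
        static uint64_t modexp_ct(uint64_t c, uint64_t d, uint64_t n) {
            uint64_t result = 1;
            volatile uint64_t discard;  /* volatile so the dummy work isn't optimized away */
            int i;
            c %= n;
            for (i = 63; i >= 0; i--) {
                uint64_t prod;
                result = (result * result) % n;
                prod = (result * c) % n;       /* always multiply + reduce */
                if ((d >> i) & 1)
                    result = prod;             /* keep it on 1 bits        */
                else
                    discard = prod;            /* throw it away on 0 bits  */
            }
            return result;
        }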

        Myria ^-^ *hugs*
    • I'm sorry, but that's just crap.

      Please read the paper. The key to this attack is the selection of different multiplication algorithms depending on the lengths of the words being multiplied. Then, depending on the algorithm, the timing attack is based on whether the chosen ciphertext is larger than, smaller than, or close to a multiple of one of the secret factors of the RSA modulus. Your explanation is a little oversimplified.

  • by dido ( 9125 ) <dido&imperium,ph> on Thursday March 13, 2003 @11:01PM (#5508765)

    I believe Paul Kocher first proposed [cryptography.com] (this is PDF) this attack way, way back in 1995, and as I recall, he even applied it to networked systems. RSA Labs' BSAFE, since version 3.0 has included a "blinding factor" in its RSA implementation that renders this attack ineffective. Reading the original RSA Labs bulletin [rsasecurity.com] (also PDF) on this attack shows a very simple fix, and I'm surprised that this hasn't made its way into OpenSSL! Ron Rivest proposed this back in early 1996. What's up?

    • Actually, if you look at the Cryptonomicon.Net article about this attack [cryptonomicon.net], you can see that, in fact, blinding has been added to the OpenSSL code. The issue is whether or not it is turned on by developers.
      • Interesting. But why, in the name of all that is holy, did the OpenSSL developers choose to use an insecure-by-default system that requires every developer of an OpenSSL server application to turn on the blinding algorithm explicitly? That will mean patching every application! Apparently, some distributions that bundle OpenSSL don't have the blinding code compiled in either, which points to the distinct possibility that OpenSSL has this option turned off by default for some reason. The

  • by philthechill ( 316949 ) on Thursday March 13, 2003 @11:09PM (#5508799)
    The paper mentions that things like palladium may also have this kind of problem (unless they implement timing protections of some sort), which leads me to believe the 2048-bit X-Box key could also be attacked this way, and probably much faster since you might be able to attack it right on the box without going over a network.

    But I could be wrong.

    Phil
    • Yeah. But they should have waited for Palladium to go gold before they let everyone know the inherent flaw.
    • by jareds ( 100340 ) on Friday March 14, 2003 @04:26AM (#5509907)

      Microsoft's private key isn't on the X-box, but rather their public key is. This is because the public key is what the box needs to verify that a game is signed by Microsoft. It certainly does not need the private key. Hence, there is no private key on the box to extract.

      X-box security is actually extremely different from Palladium. For Palladium, it is inherently necessary to have a private key (that is, a private key inaccessible to the end user) with every computer, so that computers can authenticate the trusted kernel (aka Nexus) that they are running to third parties. If they use an RSA implementation vulnerable to a timing-based attack, a local user could easily perform such an attack, extract the key, and forge any desired authentication.

      • The whole point is that a binary is validated by the Xbox ROM. If this validation is also time-critical then an attack might be possible.
        (If OK takes longer than Not OK)

        Or is the checking of a signed message a completely different method?
        • Think about what you're saying for a second. If it is possible, using any sort of method whatsoever, to extract an RSA private key from a device that has only a public key, RSA is completely broken in general. In this particular example, I just build an X-box, substituting my adversary's public key for Microsoft's public key, perform whatever hocus pocus you're suggesting, and get my adversary's private key.

          The fact is that the timing attack given in the article involves timing decryption, an activity wh

      • So maybe we could attack the X-Box this way:

        Write a game for the X-Box. Drop it in the mail to have it signed by Microsoft, and start your stopwatch. Write down how long it takes to get back. Repeat 1.5 million times.

        They might get a bit suspicious when you send in games like Rouge Ninja Fighters 1432143, though.
  • by freuddot ( 162409 ) on Friday March 14, 2003 @12:37AM (#5509134)
    Everybody's suggestion seems to be to add a random delay to responses.

    Why ?

    Why not make the response time fixed ? Like in the biggest possible response time. If every query was responded in x ms, NO info would be leaking to the attacker. Or am I missing something ?

    • fixed or random delays work fine - as long as you don't leak info about the key. this is mentioned in the paper.

      the attack is a nice proof-of-concept, but almost totally impractical. notice that their "lan" was a completely private switch, with just the attacking and target machines on it. if your lan is anything like mine, there's huge variance that will defeat the attack. not to mention that any other activity on the target machine will make the attack harder, or any kind of traffic shaping (they were d
    • Having a fixed "response time" would be appropriate. But it seems simpler to add a random delay before replying than it is to determine the elapsed time and then compensate with a delay for the difference.

      In other words:

      x = time to process

      random delay:
      response time = x + Rand(y)

      fixed delay:
      response time = x + (FixedTime - x)

      The other problem with a Biggest Possible Fixed Time is performance. Encryption already causes overhead; maximizing overhead on remote network connections is not generally viewed as a Good
    • That's usually what is being done. Timing attacks often work simply by noticing that if a key/message/whatever fails to be accepted at a certain stage in the algorithm, the function returns a little earlier or so. As such, the basic protection scheme just says, never return early when an error/improper packet/etc. is encountered, but take the algorithm all the way through and just return failure at the end. (blatantly simplified for explanatory purposes of course...) But if you just time any encryption to e
  • All the OpenSSL developers need to do is check between each calculation, and wait until a slashdot article is moderated at "insightful"....NOT :-D

    ("funny", now, would operate like a geiger counter somewhere in the western region of the former U.S.S.R...and would probably overload the servers of the world; there would quite simply be no delays in any SSL engine on the planet.)

    Not that I'm trying to add to the noise, mind you...

  • Again, this shows how important it is to use separate key pairs for different jobs.

    Imagine you use the same private key for both vulnerable SSL servers, and for offline protocols, such as PGP or S/MIME. Whoever successfully attacks your interactive SSL server would be capable of reading encrypted mail sent to you in the past.
