A New Vulnerability In RSA Cryptography

romiz writes, "Branch Prediction Analysis is a recent attack vector against RSA public-key cryptography on personal computers that relies on timing measurements to get information about the bits in the private key. However, the method is not very practical because it requires many attempts to obtain meaningful information, and the current OpenSSL implementation now includes protections against those attacks. Now German cryptographer Jean-Pierre Seifert has announced a new method called Simple Branch Prediction Analysis that is at the same time much more efficient than the previous ones, needs only a single attempt, successfully bypasses the OpenSSL protections, and should prove harder to avoid without a very large execution penalty." From the article: "The successful extraction of almost all secret key bits by our SBPA attack against an openSSL RSA implementation proves that the often recommended blinding or so called randomization techniques to protect RSA against side-channel attacks are, in the context of SBPA attacks, totally useless." Le Monde interviewed Seifert (in French, but Babelfish works well) and claims that the details of the SBPA attack are being withheld; however, a PDF of the paper is linked from the ePrint abstract.
  • I got a question... (Score:4, Interesting)

    by sam0vi ( 985269 ) on Saturday November 18, 2006 @05:50PM (#16899318)
    When I see this kind of news, the following question arises: what are we supposed to do now? Throwing away RSA cryptography is not the best answer, I think. What would you, fellow /.ers, do to bypass this problem?
    • "What would you, fellow /.ers, do to bypass this problem?"

      Get rid of the spyware, perhaps?
    • Re: (Score:3, Informative)

      by Anonymous Coward
      This is not a vulnerability in RSA per se, but rather in the implementation of RSA on modern CPUs. It is possible to run a "spy" process alongside a cryptographic application and "sniff" out private keys. It can do this by noting how its own instructions are executed and thus inferring what instructions are being executed by other processes. I think the important thing is that this type of attack requires local execution of code to work.

      I would think this can be circumvented by alternat
    • by Eon78 ( 19599 ) on Saturday November 18, 2006 @06:18PM (#16899558) Homepage
      You just keep on using RSA, of course. As the article says: it is possible for a spy application running on your machine to get vital information about an RSA encryption process with OpenSSL. So, as long as you make sure your machine is secure, there is nothing to worry about.

      Most of the time when you hear an encryption scheme is cracked or successfully attacked they mean that it has gotten easier to crack, not that the encryption is totally worthless. Which of course doesn't mean that countermeasures should not be taken, but it also doesn't mean that you have to throw out RSA.
    • by smallfries ( 601545 ) on Saturday November 18, 2006 @07:04PM (#16899918) Homepage
      RSA isn't the problem. The implementation of RSA on a modern processor is the problem. Moving to another algorithm wouldn't guarantee a lack of side channels. One way around this would be to specialise the algorithm with your own private key. This would unroll all of the loops and decide the branches statically. If you assume that the machine is not compromised, then this executable could be stored as read-only for your account. If the machine is compromised enough for a non-privileged process to read your private data, then you don't need SBPA - you're toast.
      • by fbjon ( 692006 )
        A-ha. Dynamically compile the key into executable code when it's needed?
        • Yeah that's one way to do it. If you can assume the filesystem hasn't been compromised then you could store the executable on disk, otherwise build it at runtime.
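          The specialization idea above can be sketched in a few lines (Python purely for illustration; the exponent, modulus and function names here are invented, not from any real implementation): emit straight-line square-and-multiply code for one fixed exponent, so the key-dependent branch is decided once at code-generation time instead of on every call.

```python
# Illustrative sketch only: "compile" a fixed secret exponent d into
# straight-line code, so no branch taken at run time depends on d's bits.
def compile_fixed_exponent(d: int, n: int):
    lines = ["def fixed_pow(m):", "    r = 1"]
    for bit in bin(d)[2:]:                  # MSB-first square-and-multiply
        lines.append(f"    r = (r * r) % {n}")
        if bit == "1":                      # decided HERE, at generation time
            lines.append(f"    r = (r * m) % {n}")
    lines.append("    return r")
    namespace = {}
    exec("\n".join(lines), namespace)       # build the specialised function
    return namespace["fixed_pow"]

d, n = 0b101101, 2357                       # toy exponent and modulus
fixed_pow = compile_fixed_exponent(d, n)
assert fixed_pow(42) == pow(42, d, n)
```

          A real implementation would emit machine code (or C) rather than Python, and would still have to worry about the memory-access side channels discussed elsewhere in this thread.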
        • by arivanov ( 12034 )
          There are a few more alternatives:

          1. Use one of the most recent Via CPUs (the ones released this year). The C7 has the time-consuming parts of RSA accelerated on the CPU, which makes this attack considerably less feasible. This is possibly the only cost-effective method for limited-budget cases where high speed is required. A full motherboard with CPU, SATA and all bells and whistles is around 150 pounds and IIRC is supported by OpenSSL 0.9.8 (and backport patches).
          2. Use (to the full) TPM on your machine (if present
          • Number 1 seems a bit wrong. The Via Padlock engine does two things - a secure PRNG and a Montgomery multiplier. The secure PRNG doesn't help, as that's not what the spy process is measuring. The multiplier is called either once or twice in the exponentiation loop, depending on a bit in the key. The leakage channel is the branch that decides whether or not to call the second multiplication. Is the Via Padlock so quick that it prevents the branch choice from leaking through the branch table?
            • by arivanov ( 12034 )
              No idea if it is quick enough (I just got 2 spanking new C7 MBs and was going to build some OpenVPN gateways from them while testing this in the process). It is definitely considerably quicker than software-only implementations, and that is about as much as I know about it right now. Looking at what they did with AES and the RNG, it may well be fast enough. No idea until I try.
    • Offboard hardware acceleration would completely sidestep this particular attack.

      Or keep the system so heavily loaded that the spy process can't tell whether it's sharing a BTB with the RSA computation or with one of the other N threads.

      Or use a thread scheduler that assigns separate CPUs to crypto threads and to spyware threads :-)
    • Re: (Score:2, Informative)

      by twenex ( 139462 )
      Run all of your RSA operations in secure, dedicated HW devices (crypto co-processors such as smart-cards, IBM 4758s, nCipher, etc).

    • First, once upon a time, there were no real alternatives to RSA. These days, I know of at least five other major categories of public-key encryption and there are undoubtedly more. This makes the idea of dropping RSA - at least in the really sensitive stuff where any known vulnerability is really bad news - a definite possibility.

      Second, these are timing-based attacks that perform branch prediction. This requires no changes to OpenSSL or any other source to completely mask. You first mix the optimization me

  • by Anonymous Coward
  • Not so bad... (Score:5, Insightful)

    by statusbar ( 314703 ) <jeffk@statusbar.com> on Saturday November 18, 2006 @05:59PM (#16899382) Homepage Journal
    From the Abstract:
    SBPA attacks empower an unprivileged process to successfully attack other processes running in parallel on the same processor

    So it requires a spy process to be running on the same processor as the server....


    • by zolf13 ( 941799 )
      Is it ("analyzing the CPU's Branch Predictor states through spying on a single quasi-parallel computation process") possible on a modern PC with a modern OS? Doesn't switching processes on the CPU also change the branch predictor state?
    • Re: (Score:2, Informative)

      by MK_CSGuy ( 953563 )
      Reminds me of a lecture I attended last year where Adi Shamir talked about one of his latest AES attacks.
      Basically it uses information about the state of the CPU's memory cache, and thus it too attacks processes on the same computer.
      Here's the paper. [weizmann.ac.il]
      • cool!

        You can also attack the algorithm by measuring current draw on the cpu, or you can attack it by measuring RF radiation from the system too.

        In order to avoid these attacks, the cpu's ALU's etc need to be designed with complementary logic gates such that no matter what signal is changing, there is always a paired signal changing the other way - so there are always the same number of data and clock transitions of every type on every clock cycle, giving you constant power usage.

        • There's a *real* problem with this. Constant power output means *maximum* power output, 100% of the time.

          Think about it...

          • Well, that's the cost of security! If the power output isn't constant, you are leaking security bits!

            I wonder if the 'Trusted Computing' chips do this - If they don't then this could be researched as a work-around for them.

    • mod parent up (Score:1, Informative)

      by Anonymous Coward
      For unmasking the sensationalist headline. This attack only works if you are already logged in to the box and able to run processes on it. Yet from reading the article summary, it looked like I would have to fear for my online banking SSL connections and SSH sessions.
    • Re:Not so bad... (Score:5, Interesting)

      by SnowZero ( 92219 ) on Saturday November 18, 2006 @06:56PM (#16899848)
      It gets better. The attack requires that the two processes are running on the same core with hyperthreading enabled (i.e. ALU-poor SMT). The "spy" process will be sucking up 100% CPU pretty much continuously. They also simplified the multiplication routine from OpenSSL. Even if you are running such a setup on a P4 with HT turned on (even though it's often useless), and you need to run secure processes along with insecure ones (generally not a good idea anyway), patches already exist for Linux and the BSDs to address this. The patches modify the scheduler to prevent processes from different users from running on the same physical core. A half-hearted attempt is made in the paper to say that these attacks generalize to something remote, but no details are given as to how their attack would compensate for the 100,000-fold decrease in timing accuracy needed to pull off the attack on even a local LAN.

      Essentially they took a very impractical attack with an unlikely scenario, and created a somewhat practical attack with an unlikely scenario. Avoid the problem scenario which was raised in the prior work last year, and you are still golden.
      • Re: (Score:2, Informative)

        by Anonymous Coward
        That's exactly why "Hyper-Threading Technology" is disabled by default on FreeBSD, and probably other systems.

        It's a known issue:
        http://security.freebsd.org/advisories/FreeBSD-SA-05:09.htt.asc [freebsd.org]
        http://kerneltrap.org/node/5103 [kerneltrap.org]

        • Re: (Score:1, Insightful)

          by Anonymous Coward
          I hope you're being sarcastic. The report discloses how to turn off HTT on FreeBSD systems. Nowhere does it say HTT is off by default. In fact, the fact that they have to tell you how to turn it off means it is probably on by default. Otherwise there would be a report about how to turn it on.
      • Re: (Score:3, Interesting)

        Even if you are running such a setup on a P4 with HT turned on (even though it's often useless), and you need to run secure processes along with insecure ones (generally not a good idea anyway), patches already exist for Linux and the BSDs to address this. The patches modify the scheduler to prevent processes from different users from running on the same physical core.

        The problem is that theoretically the attacker could use javascript or any other locally interpreted language or an ActiveX control under Inter
        • by delt0r ( 999393 )
          If they can run code as the same user, they don't need the attack. They can just read the "private" information more or less. There's only so much you can do; even a simple keylogger attack is going to be easier than this one.
          • If they can run code as the same user, they don't need the attack. They can just read the "private" information more or less. There's only so much you can do; even a simple keylogger attack is going to be easier than this one.

            Not necessarily. Javascript or any other interpreted language could probably perform this attack and would run as the victim user, but since it's sandboxed the attack couldn't get at the keys directly.
            • by delt0r ( 999393 )
              I would think that this would be very difficult, if even possible. You need to know a lot of details of the implementation of the JavaScript engine, and some implementations could make it impossible. Don't get me wrong, the threat should be taken seriously. But for many it's not the weakest link, and there's enough concern over hyperthreading anyway. I wonder if dual cores have some of the same attack vectors....
        • theoretically the attacker could use javascript or any other locally interpreted language or an ActiveX control under Internet Explorer to run the attacking process as the same user. To get the attacking process scheduled on the same core as the RSA process, just spawn lots of attacker processes. Some of them will get scheduled alongside the crypto process, even on a massively parallel machine.

          OK, so obviously the attack can be thwarted by preventing a crypto thread from sharing a core with any untrusted th
    • Re:Not so bad... (Score:4, Insightful)

      by Beryllium Sphere(tm) ( 193358 ) on Saturday November 18, 2006 @07:01PM (#16899890) Homepage Journal
      For example, on a shared server at a colo site?
    • by WetCat ( 558132 )
      So it looks like in modern dual-core (and more CPU) systems you can avoid this problem by just dedicating one core to OpenSSL.
  • Coral Cache (Score:5, Informative)

    by davidwr ( 791652 ) on Saturday November 18, 2006 @06:03PM (#16899424) Homepage Journal
    Just in case it gets Slashdotted.

    PDF file [nyud.net]
  • If you have a Trojan on your computer you are going to lose your secrets anyway, because, surprise, your private key is probably stored on the disk drive, and you use the keyboard to type passwords, etc. etc. Could someone explain how a local attack can be big news ?
    • Re: (Score:3, Informative)

      On a multi-user system, someone may well have the right to run arbitrary code on the same processor, but not to access your data.
      • by iacp ( 955336 )
        But if he has such privileged access to the CPU, can't we simply suppose he's also able to "see" what you type on your keyboard?
        • Re: (Score:1, Insightful)

          by Anonymous Coward
          But if he has such privileged access to the CPU, can't we simply suppose he's also able to "see" what you type on your keyboard?
          RTFA. The researchers claim that it does not require privileged access:

          "Moreover, despite sophisticated hardware-assisted partitioning methods such as memory protection, sandboxing or even virtualization, SBPA attacks empower an unprivileged process to successfully attack other processes running in parallel on the same processor."
        • You don't need privileged access to the CPU. A normal shell account and a compiler suffice. That's usually not enough to read someone else's keyboard typing.
    • by Cid Highwind ( 9258 ) on Saturday November 18, 2006 @06:31PM (#16899660) Homepage
      Think managed web hosting companies that put dozens of virtual hosts on a single physical server. If this really works from an unprivileged account, one malicious user could steal SSL keys from all the rest.
      • You're thinking too small. The real problem moving forward is virtualization; the industry is converging to a state where hosted servers will transparently share underlying hardware; the new threat model is VM-to-VM, which is why the earlier commenter who dismissed "overly complex local-only problem scenarios" (paraphrase) is off base.
    • by RAMMS+EIN ( 578166 ) on Saturday November 18, 2006 @06:47PM (#16899788) Homepage Journal
      ``If you have a Trojan on your computer you are going to lose your secrets anyway,''

      Whose secrets? Multiple people use my computers. If there's a trojan on the system, it can't necessarily access all these people's data.

      ``your private key is probably stored on the disk drive,''

      Password-protected, thank you very much.

      ``and you use the keyboard to type passwords''

      I don't use a keyboard with most computers I use; I communicate with them over SSH. Of course, I use a keyboard on _some_ machine, so if that machine has a keylogger running on my account (or root's), that would be a problem.

      ``Could someone explain how a local attack can be big news ?''

      I haven't RTFA yet, but local attacks are often problematic for systems used by multiple people, especially if not all of them know good security practices (or are completely clueless - you get many such people when you operate shared web hosts).
      • by Alsee ( 515537 ) on Saturday November 18, 2006 @10:34PM (#16901296) Homepage
        problematic for systems used by multiple people

        And perhaps more significantly, it is problematic for idiots who think the definition of "secure/security" is using some DRM scheme in the hope of "securing" a computer against its owner.

        The owner of a computer can use the technique in this article to keep an eye on his own computer and track what his computer is doing for him, and to record the DRM-keys being used to "secure" his own data against him.

      • "I haven't RTFA yet, but local attacks are often problematic for systems used by multiple people, especially if not all people know good security practices"

        For this particular attack, I don't think it would matter much whether the users know good security practices. If you start up SSL, it probably starts off with a sequence that can be easily detected (e.g. by simply watching what's running on a machine). Then the spy application would kick in. It's something to be solved by the software implementors more than
    • This is a local attack by an unprivileged thread, one that could not install a keylogger, read another user's files, or do much of anything except run its own code and measure the timing.
  • VMS (Score:3, Interesting)

    by MichaelSmith ( 789609 ) on Saturday November 18, 2006 @06:14PM (#16899528) Homepage Journal

    I started working on VAX/VMS systems in 1986. I changed jobs to another DEC site after nine months or so. I got an account, put my username in at the appropriate prompt, hit return and then immediately knew that I had entered my old username, not the new one.

    I had to think for a bit before I knew the reason: VMS searched SYSUAF.DAT for my username before I entered the password. If it found the username it would stop searching. Later versions did the search after the password had been entered and one type of attack became less likely.

    I suppose something like this could be done so that timing can't be used to probe the process running the algorithm, but another way of viewing it is that it is like extracting the key from a chip by measuring the amount of power it uses. There may be limits to how far protection can go if the attacker has the hardware or is on the same (virtual) machine.
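    The general countermeasure behind the later VMS fix is to make the work done independent of where the secret match fails. A minimal sketch (illustrative only; production code should use a vetted primitive such as Python's hmac.compare_digest):

```python
# Early-exit comparison: running time reveals how many leading bytes matched.
def leaky_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:              # bails out at the first mismatch -> timing leak
            return False
    return True

# Constant-time comparison: always inspects every byte, whatever the inputs.
def constant_time_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y           # accumulate differences without branching
    return diff == 0

assert constant_time_equal(b"SYSUAF", b"SYSUAF")
assert not constant_time_equal(b"SYSUAF", b"SYSUAX")
```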

    • by dk90406 ( 797452 )
      Not a pretty solution, but changing the timing to be consistent across all logins might help. So if user XXzzz's logon takes 4 seconds while aAAa's only takes 0.1, making aAAa's logon take 4 seconds would help security but annoy most of the users.
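      That padding idea can be sketched as follows (illustrative only; the 0.05-second budget is an arbitrary stand-in for the slowest expected login):

```python
import time

# Run a variable-time operation, then sleep out the rest of a fixed budget,
# so the total wall-clock time no longer depends on the input.
def fixed_duration(op, budget=0.05):
    start = time.monotonic()
    result = op()                                  # e.g. the username lookup
    remaining = budget - (time.monotonic() - start)
    if remaining > 0:
        time.sleep(remaining)                      # pad up to the budget
    return result

assert fixed_duration(lambda: "logged in") == "logged in"
```

      This hides the timing only as long as the real work never exceeds the budget, which is exactly the annoyance the parent comment points out.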
  • by hpa ( 7948 ) on Saturday November 18, 2006 @06:16PM (#16899542) Homepage
    This isn't really a flaw in RSA cryptography, but rather the fairly obvious situation that a branch predictor, shared between processes of different privilege levels, can be used as a covert channel and thus can be used to reveal state. The same is true with the cache, for example, and multithreading makes this problem many times worse by increasing the bandwidth of the channel. On architectures which don't have branch predictors, or don't share them, this is not an issue. ARM processors, for example, tend to rely on predication rather than branches (except when running Thumb), and thus don't suffer the same problem.

    This class of problems is only going to grow as CPUs become less and less deterministic.
    • by Terje Mathisen ( 128806 ) on Sunday November 19, 2006 @02:35PM (#16905446)
      From the linked article:

      R0 = 1; R1 = M
      for i from 0 to n-1 do
          if d[i] then
              R1 = R0 * R1 mod N
              R0 = R0 * R0 mod N
          else
              R0 = R0 * R1 mod N
              R1 = R1 * R1 mod N
      return R0

      The key-dependent if statement is the key here: if we can remove all such branches, then there's no Branch Target Buffer entry that depends on them, and no timing-channel attack either:

      R0 = 1; R1 = M;
      for (i = 0; i < n; i++) {
          mask = 0 - d[i];   // Either 0 or -1
          nmask = mask ^ -1; // -1 or 0
          T0 = R0 & mask;    // Either 0 or R0
          T0 += R1 & nmask;  // At this point T0 holds the value to be squared, R0 or R1!

          T1 = R0 * R1 mod N;
          T0 = T0 * T0 mod N;
          // Now we move the correct values back into R0 & R1
          R1 = T1 & mask;
          R0 = T0 & mask;
          R0 += T1 & nmask;
          R1 += T0 & nmask;
      }
      return R0;

      There are at least three interesting issues here:

      a) Most modern CPUs have hardware support for conditional operations; on x86 this comes in the form of CMOVcc, which is a (constant-time!) conditional move into a register, but as shown above, it really isn't needed here.

      b) The performance impact of the above branch removal can even be negative!
      On a P4 a branch miss costs about 20 clock cycles, and since a key-dependent branch will miss 50% of the time, the average cost is 10 cycles. My replacement code above takes around 5 cycles or less on any current CPU.

      c) A final possible timing-channel attack would be due to the memory alignment of the R0 and R1 values: by allocating them at the same address modulo the CPU page size, i.e. at a 4 KB offset, the cache lines hit will be the same for both.

      When I worked on the asm version of DFC, one of the AES also-rans, I removed a similar timing attack from a core 128-bit modular multiplication operation, using very similar techniques.
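      The masking trick above can be checked mechanically. Here is a sketch in Python (where integers behave like arbitrarily wide two's complement, so -bit really is an all-ones mask); it verifies only that the branch-free version computes exactly what the branchy ladder computes, with toy parameters rather than a real key:

```python
# Branchy ladder, as in the pseudocode quoted from the article.
def ladder_branchy(m, bits, n):
    r0, r1 = 1, m
    for b in bits:
        if b:                       # key-dependent branch: the leak
            r1 = (r0 * r1) % n
            r0 = (r0 * r0) % n
        else:
            r0 = (r0 * r1) % n
            r1 = (r1 * r1) % n
    return r0

# Same computation with the branch replaced by mask arithmetic.
def ladder_branchless(m, bits, n):
    r0, r1 = 1, m
    for b in bits:
        mask = -b                   # 0 or -1 (all ones)
        nmask = ~mask               # -1 or 0
        t0 = (r0 & mask) + (r1 & nmask)   # the value to be squared
        t1 = (r0 * r1) % n                # cross product, computed every round
        t0 = (t0 * t0) % n
        r1 = (t1 & mask) + (t0 & nmask)   # route results back without branching
        r0 = (t0 & mask) + (t1 & nmask)
    return r0

bits = [1, 0, 1, 1, 0, 0, 1, 0]
assert ladder_branchless(42, bits, 2357) == ladder_branchy(42, bits, 2357)
```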

  • by tqbf ( 59350 ) on Saturday November 18, 2006 @07:17PM (#16900020) Homepage

    Aciicmez et al are extending an attack they published a few months ago. It's real, but:

    • It targets RSA implementations, not the algorithm, which is fine

    • Attackers need to be on the same host as the victim

    • This specific attack is tuned to the Pentium 4 architecture

    This paper doesn't break SSL.

    We wrote about the attack two months ago [matasano.com]. A quick, dumbed-down recap:

    The CPU aggressively caches aspects of what programs do. It doesn't make an exception for RSA. You obviously can't just read key bits out of the cache.

    But caches are finite, and way, way too small to accommodate everything every program does. So operations from one program are constantly evicting cached values from other programs. This makes the other program imperceptibly but measurably slower. By writing a program that constantly and carefully measures those time differences, you can watch an RSA operation from another program leave footprints through the cache.
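    A toy model of that prime-and-probe idea (everything here - the cache size, the addresses, the single secret bit - is invented for illustration):

```python
# Simulate a tiny direct-mapped cache shared by a spy and a victim.
NSETS = 8

class DirectMappedCache:
    def __init__(self):
        self.sets = [None] * NSETS
    def access(self, owner, addr):
        s = addr % NSETS                      # which cache set this address maps to
        hit = self.sets[s] == (owner, addr)
        self.sets[s] = (owner, addr)          # the access evicts whatever was there
        return hit

def victim(cache, secret_bit):
    # The victim's memory access depends on its secret, as with table lookups.
    cache.access("victim", 1 if secret_bit else 5)

def spy_recover_bit(cache, secret_bit):
    for a in range(NSETS):                    # prime: fill every set with spy data
        cache.access("spy", a)
    victim(cache, secret_bit)                 # victim runs in between
    misses = [a for a in range(NSETS)         # probe: a miss marks a set the
              if not cache.access("spy", a)]  # victim touched
    return 1 if 1 in misses else 0

cache = DirectMappedCache()
assert spy_recover_bit(cache, 1) == 1
assert spy_recover_bit(cache, 0) == 0
```

    In real attacks the hit-or-miss signal is a load-latency measurement rather than a boolean, and the BTB variant described here applies the same idea to the branch predictor's target buffer instead of a data cache.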

    There are years-old attacks like this against the L1 and L2 caches, and extensions that use hyperthreading to improve the resolution. Some variants, which measure timing differences but don't track cache footprints, are remote attacks. These aren't. You run a "spy" process on the machine; it repeatedly executes a series of operations and measures timing differences. Aciicmez found an overlooked cache which makes Pentium branch prediction work (the BTB). They published back in August.

    From what I can tell, this paper extends the attack; they figured out that the Pentium 4 architecture has two BTB caches, and their original attack wasn't hitting both of them. Their new attack does, and that creates much bigger timing differences, making RSA's footprints much easier to see.

    This is really cool stuff, but from where I stand, they hit game-over back in August with the original BTB attack. This paper reads like a refined exploit for the same vulnerability.

    Since this is localhost-only, and (unlike Bernstein's and Boneh's attacks) can't be extended remotely, it's not going to impact SSL or (single-user) SSH. The classic victims of timing attacks are smart cards. For these attacks, another interesting possibility is DRM; these attacks say you can't trust crypto running on the same Pentium 4+ as an attacker.

    • Yep localhost only... so who is the primary user of localhost public key cryptography techniques?

      DRMs! yep yep!...
      I would love to see Vista DRMs cracked before the OS even makes it to market... :)

      ok.. i'm probably dreaming, but still it feels good.
      • Re: (Score:3, Interesting)

        by Alsee ( 515537 )
        I would love to see Vista DRMs cracked before the OS even makes it to market... :)

        No no no! I hate when researchers do that!

        Not long ago someone published a way to load unsigned kernel drivers in Vista (Vista's DRM mechanisms critically rely on owners being strictly forbidden to do that). So what happened? Microsoft patched Vista to prevent that mechanism from working... before the OS even made it to market. So the discovery and all that work went entirely to waste.

        No, when someone finds an anti-DRM met
    • Re: (Score:1, Offtopic)

      by asuffield ( 111848 )

      This paper reads like a refined exploit for the same vulnerability.

      Virtually every academic paper ever published will match some statement of the form "a refined X for a known Y". That's what academic papers do. Papers which break new ground come around about once every few decades; most significant developments are actually a sequence of very small steps that the press ignores because it doesn't sound very impressive that way.

      Academic papers are almost never newsworthy. They are for academics to read. If y

    • This paper doesn't break SSL.


      Since this is localhost-only, and (unlike Bernstein's and Boneh's attacks) can't be extended remotely, it's not going to impact SSL or (single-user) SSH.

      What's more, they didn't break OpenSSL even on the same machine. To quote from the paper:

      We used the RSA implementation from OpenSSL version 0.9.7e as a template and made some modifications to convert this implementation into the simple one as stated above. To be more precise, we changed the window size from 5 to 1, removed

  • ...the current OpenSSL implementation now includes protections against those attacks.

    This really hits the mark. What if OpenSSL, a very widely used crypto layer, were closed instead of open? According to what I hear on Slashdot, we would have no idea whether "ClosedSSL" had protection against this kind of thing, but because OpenSSL is, uh, open, we can verify that these kinds of attacks are indeed mitigated.
    • by rduke15 ( 721841 )
      but because OpenSSL is, uh, open, we can verify that these kinds of attacks are indeed mitigated.

      In fact, if I understood the article correctly, this particular attack is NOT mitigated by the protections that were implemented in OpenSSL.
      • Right. The attacks that are mitigated in the current OpenSSL are some earlier, simpler ones; not the main one described in the paper.
  • by jackjeff ( 955699 ) on Saturday November 18, 2006 @08:09PM (#16900432)
    Better than BabelFish, I hope... human-made, so prone to errors ;)


    The confidence users have in the Internet and in the system's capacity to secure data has always been relative. And it could collapse if microprocessor manufacturers and cryptography software vendors were unable to cope with a new type of attack, fearsomely efficient, discovered by the team directed by the German cryptographer Jean-Pierre Seifert (universities of Haifa and Innsbruck). Electronic commerce could be threatened, but also, more broadly, everything that enables the dematerialization of exchanges and relies on asymmetric cryptography, whether for ciphers, digital signatures or message integrity checks.

    In the still-confidential article, the researcher and his colleagues describe the procedure they used to recover nearly an entire 512-bit cipher key (a series of as many 0s and 1s) in a single attempt, that is to say in a few milliseconds. For comparison, the largest public key broken so far is 640 bits long, and as announced in November 2005, the process required 80 microprocessors running at 2.2 GHz for 3 months.

    Since the announcement this summer, via the International Association for Cryptologic Research (IACR), that such an attack was theoretically feasible, microprocessor manufacturers have been on edge: the chips of nearly all the world's computers are vulnerable. So much so that the head of security at Intel, the number 1 microprocessor manufacturer, when confronted with the issue, declared that he would be "unavailable for a few weeks". This is because the usual fix against classical attacks on public-key cryptography - increasing the size of the keys - will not work this time.

    Jean-Pierre Seifert was in fact able to attack the systems from the ground up. As most of the security relies on the inability to mathematically deduce the private key, kept secret, from the public one, he chose to study how the microprocessor reads this confidential data.

    He found that the mode of operation of the chip itself, optimized for calculation speed, makes it vulnerable. "Security was sacrificed for the sake of performance," the researcher concluded.

    The attack principle can be summed up as follows: to go ever faster, the microprocessor parallelizes operations and uses a branch prediction system to guess the outcome of the current operation. If the prediction is good, the computation time is greatly decreased. If not, the processor must go back and redo the elementary operation. It is "sufficient" to measure the computation time as the processor works through the string of 0s and 1s that constitutes the cipher key to be able to deduce it.

    This threat, called "Branch Prediction Analysis" (BPA), was already known. It was thought that many attempts were necessary to statistically deduce the cipher key, making the attack impracticable. The technique discovered by Jean-Pierre Seifert makes it possible to break the key in a single attempt. It relies on the fact that the prediction process, essential to increasing the processor's speed, is not protected.

    Spyware could then be made to listen to the chip discreetly and send the key back to hackers, foreign intelligence services or competitors.


    We are not there yet, though. "We have not made a turnkey application that would be available online," says Jean-Pierre Seifert. But he estimates that once the method is made public, in early 2007 at the next RSA conference - RSA being one of the most popular ciphers - the making of such software would be "a matter of weeks".

    Cryptography specialists confirm that the threat is serious. One of the world's best public-key experts anonymously sums up the situation: "The real solution is to revise the design of the microprocessors themselves - a long and difficult process. A short-term solution would be to forbid normal applications to run in para
  • Now I have no idea what this means exactly, so I'll let Steve Jobs explain this RSA attack. I called him up and asked him how it worked. Jobs said, "I dunno what it does. It predicts...branches. It's a good thing!"
  • Run it on the GPU (Score:2, Interesting)

    by creysoft ( 856713 )
    This seems to rely on spy software watching your particular RSA application decrypt things, and thus said spy software would need to be running on the same hardware. Wouldn't it make sense to start writing RSA implementations that use the computer's GPU whenever possible? Using the GPU as a coprocessor has been discussed on Slashdot previously, and the main disadvantage was that the GPU isn't typically optimized for general purpose computing. However, since most GPU's are optimized for massive number crunch
    • Using GPUs for general-purpose computing has been tried for years; it has never really become effective, especially since GPUs vary enough that it's difficult to write robust low-level drivers and compilers, and then to get them accepted.

      No, if you're doing a lot of SSL work and are worried about this, take a look at SSL accelerator cards. They range from $100 to $1000, and seem quite useful for website hosting that will be doing a lot of encrypted traffic.
  • Can the spy process do its job from a different virtual machine? If so, and if virtualization methods can't be tweaked to defeat all such attacks while maintaining reasonable performance, that might spell doom for the virtual server market.
  • From the paper, it sounds as though the attack only relies on the public key. Does this mean that all pgp / gpg secret keys can be compromised this way?
  • The paper seems to describe a process where only the public key has to be on the "spied upon" machine. Everybody can get access to almost everybody's public pgp key. Does this mean that all RSA pgp / gpg keys can be compromised using this method, even if one keeps the secret key on a computer with only a floppy drive (to get the data which will be decrypted onto the machine) and no connection to any kind of network (ideally...)? Is it really this scary, or do they in fact need to spy on the machine with
  • by yppiz ( 574466 ) * on Saturday November 18, 2006 @11:19PM (#16901492) Homepage
    Here's the key part from the paper (pdf) [iacr.org]:

    Also a spy process is executed simultaneously with the
    cipher and it continuously does the following:
    1. continuously executes a number of branches, and
    2. measures the overall execution time of all its branches
    in such a way that all of these branches map to the same BTB set which also stores the
    specific conditional branch determined by the secret key bits of the crypto process. This
    requires that the number of branches in the spy process needs to be equal to the associativity
    of the underlying BTB, i.e., to its number of ways. Recall that it is easy to understand the
    properties of the BTB using simple benchmarks as explained in [MMK].
    Let's analyze what's happening if the adversary starts the spy process before the cipher.
    It simply means that when the cipher starts the encryption (= signing), the CPU cannot find
    the target address of the target branch in the BTB and the prediction must be not-taken, cf.
    [She]. Furthermore, we can distinguish two cases depending on the currently processed secret
    key bit:
    • If the branch turns out to be taken, then a misprediction will occur and the target address
      of the branch needs to be stored in BTB. Then, one of the spy branches has to be evicted
      from the BTB so that the new target address can be stored in. When the spy-process
      re-executes its branches, it will encounter a misprediction on the branch that has just
      been evicted. As the spy-process also measures the execution time of all its branches, it
      can simply detect whenever the cipher modified the BTB, meaning that the execution
      time of these spy branches takes a little longer than usual.
    • If the branch turns out to be not taken, then no misprediction will occur and the BTB
      does not need to be updated. When the spy-process re-executes its branches and measures
      the execution time of all its branches, it can simply infer that the cipher had not modified
      the BTB, and the target branch was not taken by the crypto process.
    Thus, the adversary can simply determine the complete execution flow of the cipher
    process by continuously performing the above very simple spy strategy, i.e., just executing
    spy branches and measuring their overall execution time. Therefore, the spy process will see
    the complete prediction/misprediction trace of the target branch, and is able to infer the
    secret key. Following [OST06], this kind of attack was named an asynchronous attack, as the
    adversary-process needs no synchronization at all with the simultaneous crypto process --
    it is just following his own paradigm: continuously execute spy branches and measure their
    overall execution time.
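The inference step the paper describes - slow spy rounds mean the cipher's branch was taken (a 1-bit), fast rounds mean it was not (a 0-bit) - can be sketched in a few lines of Python. The timing numbers and threshold below are hypothetical illustrations, not real cycle counts:

```python
def recover_bits(spy_round_times, threshold):
    """Infer exponent bits from the spy process's per-round timings.

    Per the paper's argument: if the cipher's key-dependent branch is
    taken (bit = 1), it evicts one of the spy's branches from the BTB,
    so the spy's next measurement round runs measurably slower.
    """
    return [1 if t > threshold else 0 for t in spy_round_times]

# Hypothetical trace: "slow" rounds (~120 cycles) mark 1-bits,
# "fast" rounds (~100 cycles) mark 0-bits.
trace = [121, 119, 98, 122, 99, 97, 120]
bits = recover_bits(trace, threshold=110)  # -> [1, 1, 0, 1, 0, 0, 1]
```

In the real attack the "timings" come from the spy's own branch executions on the shared BTB; the thresholding idea is the same.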
  • Redundant ? (Score:3, Interesting)

    by udippel ( 562132 ) on Saturday November 18, 2006 @11:19PM (#16901500)
    Even though this might be redundant after so many comments, it might be summarized again that the good prof has not broken RSA. Actually, the vulnerability has little to nothing to do with RSA.

    The whole thing - as critical as it is - spies on the processes running on the machine. It is an indirect attack, checking on the resources used while performing calculations whose algorithm itself remains unbroken.
    When you disable pipelining and caching while doing the calculations, there is not much to spy on and nothing to gain. Simply preventing the would-be intruder from observing the cache, the pipelines, and the CPU at work makes you safe.
    The problem is that this isn't very practical.

    I wish the editors used less misleading headlines. There is no vulnerability in RSA cryptography per se. It is rather that you observe Men At Work and, from what you see, you can guess what they're doing.
    And in principle this applies to any other cryptography just as well. That includes DRM (which makes me giggle).
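One way to act on the parent's point, short of disabling the pipeline, is to write the key-dependent step without any branch at all, so the predictor has nothing to observe. A sketch of a mask-based constant-time select in Python (illustrative only; a real implementation would do this in constant-time native code):

```python
def ct_select(bit, a, b):
    """Branch-free select: returns a if bit == 1, b if bit == 0.

    mask is all-ones when bit is 1 and all-zeros when bit is 0, so the
    same instructions execute regardless of the secret bit - there is
    no conditional branch for a branch-predictor side channel to see.
    """
    mask = -bit                      # bit=1 -> -1 (all ones); bit=0 -> 0
    return (a & mask) | (b & ~mask)
```

An exponentiation loop built on such selects does the same work on every iteration, whatever the key bit is.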

  • Hi, This paper is based on the wrong assumption that the algorithm that is used is binary exponentiation. This is false as every single respectable implementation uses N-ary multiplication or sliding windows in the worst case scenario. In both of these cases, the attack as shown in the paper would only be able to predict minimal information. Also, the claimed statement that you can do nothing with this type of attack is completely false (even in the case of binary exponentiation.) Just do this: - If y
  • get rid of those pesky branches...
  • Any thoughts, or did I miss somebody's comments?
  • by Anonymous Coward
    Somehow, my password has been changed by someone even though I always log in using HTTPS. I wonder why...
  • Widely-respected Australian cryptographer Peter Gutmann offered a concise analysis of Seifert's achievement on the Cryptography Mailing List yesterday. It offers both detail and useful perspective.

    Udhay Shankar N had just summarized the scary rumors about Seifert's attack:

    "... German cryptographer Jean-Pierre Seifert has announced [1]a new method called Simple Branch Prediction Analysis that is at the same time much more efficient than the previous ones, only needs a single attempt, successful

Time is an illusion perpetrated by the manufacturers of space.