Encryption Security

Bernstein's NFS analyzed by Lenstra and Shamir

Posted by CmdrTaco
from the my-c0de-is-s3kr37 dept.
kousik writes "The analysis of Bernstein's NFS by Arjen Lenstra, Adi Shamir, Jim Tomlinson, and Eran Tromer has been put up on cryptosavvy. Seems interesting that it comes from Lenstra and Shamir; Lenstra led the 1994 factorisation of RSA-129. From the abstract: ... We also propose an improved circuit design based on a new mesh routing algorithm, and show that for factorization of 1024-bit integers the matrix step can, under an optimistic assumption about the matrix size, be completed within a day by a device that costs a few thousand dollars..."
This discussion has been archived. No new comments can be posted.

  • errrrr NFS? (Score:5, Informative)

    by karrde (853) on Tuesday June 04, 2002 @03:51PM (#3640276) Homepage Journal
    Let's just ignore the fact that we're all a bunch of geeks, and that the acronym NFS usually equals 'Network File System', not 'Number Field Sieve' as it does in this case. Would it have been so difficult to say that in the post? The first link doesn't even give you that information.
    • Sorry - you are a bunch of computer geeks. These guys are Mathematical Nerds. There's a difference. </snooty>
    • Yeah, not to mention the fact that when they said this was Bernstein's NFS, the first thing I thought was, "Okay, what has the author of qmail [cr.yp.to] gotten his fingers into this time?"
    • Yes it does! (Score:3, Insightful)

      by John Harrison (223649)
      The first link doesn't even give you that information.

      From the pdf:

      Introduction

      In [1], a new circuit-based approach is proposed for one of the steps of the number field sieve (NFS) integer factorization method, namely finding a linear relation in a large but sparse matrix.

      This is on the first page of the linked pdf.

      However, I agree that it should have been spelled out in the post.
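For the curious, the "linear relation" the paper refers to is a set of matrix rows that XOR to zero over GF(2). A toy dense sketch of that dependency search (real NFS matrices are enormous and sparse, and use far more efficient methods such as block Lanczos or block Wiedemann):

```python
# Toy illustration of the NFS "matrix step": finding a linear relation,
# i.e. a nonzero set of rows of a 0/1 matrix whose XOR is the zero row.
# Rows are stored as Python ints used as bitmasks.

def left_nullspace_gf2(rows):
    """Return indices of rows that XOR to zero, or None if independent."""
    rows = rows[:]                               # reduced (pivot) rows
    combos = [1 << i for i in range(len(rows))]  # which originals built row i
    pivot_for_col = {}                           # leading-bit column -> pivot row
    for i, row in enumerate(rows):
        while row:
            col = row.bit_length() - 1           # leading 1 bit of current row
            if col in pivot_for_col:
                j = pivot_for_col[col]
                row ^= rows[j]                   # eliminate the leading bit
                combos[i] ^= combos[j]
            else:
                pivot_for_col[col] = i
                rows[i] = row
                break
        else:
            # Row reduced to zero: combos[i] records a dependency.
            return [k for k in range(len(combos)) if combos[i] >> k & 1]
    return None

# The rows 011, 101, 110 XOR to zero, so all three form a relation:
assert left_nullspace_gf2([0b011, 0b101, 0b110]) == [0, 1, 2]
```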

  • hackers... (Score:5, Funny)

    by coronaride (222264) <coronaride@yahoCHICAGOo.com minus city> on Tuesday June 04, 2002 @03:51PM (#3640280)
    still waiting for that level of encryption shown in everyone's favorite hacking movie that displays the giant skull and crossbones in a cheesy GUI to let you know that you don't have access..
    • You could also get around that by shift-clicking on the Pi symbol in the bottom right-hand corner of the page. BTW, I know she wasn't a hacker, but when was the last time you met a computer geek that looked like Sandra Bullock?
    • Yeah, I'm saving all my best virii for that day.
    • You mean Independence Day, where they use an Apple laptop to insert a macro virus into alien hardware and make everything go boom?

      No... wait... That's Steve Jobs' favorite hacking movie... my bad.

    • Excellent reference to "Weird Science"
  • by Anonymous Coward
    There are more efficient, deterministic ways of factorization than NFS. Additionally, in the not too distant future, off-the-shelf quantum computers will be able to make short work of 1024+.
    • There are more efficient, deterministic ways of factorization than NFS.

      True, but not by much, and if the jump was as big as claimed, not for long. But these guys revised it down considerably.

      Additionally, in the not too distant future, off-the-shelf quantum computers will be able to make short work of 1024+.

      Physicists aren't even sure if quantum computers are practical... sure Shor's algo made short work of factoring 15, but what if it turns out that engineering the rather arbitrary entanglements required for Shor's is NP-complete? Then what? That possibility hasn't been ruled out yet, and making those entanglements is already pretty hard...
    • Additionally, in the not too distant future, off-the-shelf quantum computers will be able to make short work of 1024+.
      I don't believe this. When you put a quantum computer on a shelf, you never know what part of the shelf it is liable to move to, or even a different shelf entirely.
      • That's funny. Or, to put it another way, "Either you know where it is with certainty but not its velocity, or its velocity but not where it actually is." :)
    • by billstewart (78916) on Tuesday June 04, 2002 @08:27PM (#3642157) Journal
      Quantum computers aren't deterministic - the techniques that have been discussed have non-trivial chances of getting incorrect answers. With problems like factorization, it's fast and trivial to determine whether an answer was correct or incorrect, but not obvious what to do about an incorrect one. There may be deterministic algorithms a bit faster than NFS (it's got lots of relatives), but they're mostly in the same general range.


      The interesting thing about quantum computing is that it's the one technology that, if it's actually possible to develop usable machines with it, might offer the possibility of getting beyond the exponential-difficulty traps in factoring and other current techniques of public-key math. It's not clear that it will work, but it's the only thing so far that doesn't hit the "well, if you build a keycracking computer the size of the planet and run it for the remaining age of the solar system, I can add three more key bits and make you take over some more planets" wall.

      They're not off the shelf, and won't be any time soon. The biggest quantum computer made so far was able to factor 15 into 5 x 3. The number of bits of answer you can get out of a quantum computer depends on the precision to which you can measure its output - does this hit Heisenberg limits? 10**47 is only ~156 bits. Or do you hit practical limits first? Or are there ways to break up the answer into many parts each of which gets you a few bits of precision? (The latter case is the only way to get it to work...)

      What if quantum computing does work? Maybe it'll crack conventional RSA and Diffie-Hellman, but that doesn't mean it transfers to elliptic-curve cracking, so we may luck out. Alternatively, it's back to conventional techniques like Kerberos and other symmetric-key Key Distribution Center systems.


      But basically, you're trolling :-)

  • by Beryllium Sphere(tm) (193358) on Tuesday June 04, 2002 @03:54PM (#3640304) Homepage Journal
    "In particular, we show that 1024-bit RSA keys are as secure as many
    believed them to be."

    "We thus
    conclude that the practical security of RSA for commonly used modulus
    sizes is not significantly affected"

    Sounds like it only speeds up one step of the factoring process, which is important to keep an eye on but not grounds for alarm.
    • Furthermore the paper goes on to say that the major improvements were done to the matrix half of the procedure rather than the collection side. "Half" probably isn't the most accurate term, as the collection side takes far longer (months) than the matrix side (days).

      Even if you remove the matrix side, it takes a VERY long time to find all of the relations needed to make the matrix in order to solve it.
  • All the mathematical theorems used in public key cryptography have a fine print clause saying:
    "Assuming factoring/[other problem] is hard."
    This makes you think maybe it isn't.
    • Nah, factoring's easy. I can factor any prime number you tell me, in my head, in less than a second.

      • Come on people +1 Funny, that was hilarious.

        Ah, forget it...
      • Oh! you're Bill Gates?

      • But can you prove (not show with arbitrary probability) in your head that a given number is prime? For that matter, can you prove using a computer that a given large number (if you go for any) is prime? For some reason, I don't think so...

        Otherwise - funny.
        • Re:Is factoring hard (Score:2, Informative)

          by akruppa (300316)
          >But can you prove (not show with arbitrary probability)
          >in your head that a given number is prime? For that matter,
          >can you prove using a computer that a given large number
          >(if you go for any) is prime? For some reason, I don't think so...

          It depends on how much time you are willing to spend. A trivial algorithm is to try every (prime) number up to sqrt(N) to see if it divides N; if none does, N is prime. But in practice this is too slow to let you get much beyond N ~ 20 digits.
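The trial-division algorithm described above, as a quick sketch; it is a genuine primality proof, just hopelessly slow for large N:

```python
def is_prime_trial_division(n):
    """Prove n prime by trying every possible divisor up to sqrt(n).

    A genuine proof, not a probabilistic test -- but the running time is
    exponential in the bit-length of n, which is why it stalls around
    20-digit numbers in practice.
    """
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False            # found a divisor: n is composite
        d += 1
    return True

# Fast for small numbers; 2**31 - 1 takes ~46000 trial divisions:
assert is_prime_trial_division(2**31 - 1)
assert not is_prime_trial_division(2**32 + 1)   # = 641 * 6700417
```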

          Elliptic Curve Primality Proving is state of the art for general primality proofs today, try Primo [freesurf.fr] which has been used to prove a 5020 digit number prime.

          Integer factoring is much harder than proving primality, the current record for a GNFS factorization is a 158 digit composite, see here [ercim.org].

          Alex

          • Nice links - thanks for them, though, I'm aware of these methods. What I was trying to say is that for you to factor a prime number, you basically have to show that it's prime, which still is a hard problem (5020 digits is still a limit), not as hard as factoring, but still not-so-easy. Though, all this is just picky details - your post was a joke after all. ;-)
      • I know you said prime number, but according to Roger Penrose in The Emperor's New Mind, the mind is a quantum computer, meaning that you could indeed factor any number (even if it's not prime) in your head...Of course it might take quite a bit of practice.
    • by Jeremiah Blatz (173527) on Tuesday June 04, 2002 @04:07PM (#3640398) Homepage
      I don't know. If somebody knows it isn't, they aren't saying.

      The problem is this, there are certain mathematical problems that are known to be Hard. Traveling Salesman, Knapsack, etc. There are no shortcuts to solving these problems. Many mathematical problems can be proven to be in this class of problems. Nobody has, to date, publicly, shown that factoring numbers is Hard, and nobody has shown that it isn't.

      Of course, nobody has proven the security any of the symmetric cryptosystems (with the exception of one-time pads), so any practical system is already victim to this uncertainty.

      • Re:Is factoring hard (Score:1, Informative)

        by Anonymous Coward
        Nobody has shown that traveling salesman and knapsack are hard unless you've got a proof that P != NP.
        • First, to reiterate the AC's point for those who are browsing at +1, no one has shown traveling salesman and knapsack are hard, only that they are NP-complete. If P=NP then it's all easy.

          Secondly, no one has shown a good cryptosystem based on these.

          • (I might be wrong, this is just my understanding of things at the moment)...

            Isn't factoring NP-Complete, and Traveling Salesman NP-Complete, and thus any cryptosystem that relies on factoring in a certain sense a cryptosystem based on traveling salesman, since a solution to one can be translated to a solution of the other?
            • Factoring, AFAIK, is not NP-Complete. Traveling salesman is. Here's a list [nada.kth.se] of known NP-Complete problems, if you're interested.
            • Factoring is not NP-complete. So a solution to factoring doesn't get you a solution to traveling salesman, but a solution to traveling salesman does get you a solution to factoring.
            • More accurately, factoring is not known to be NP-Complete. And if I remember correctly, if it is found _not_ to be NP-Complete, that would show it's in P, AND show that P != NP.
              • IIRC, it has also been conjectured that breaking RSA may not actually be equal to factoring...this is discussed (but not in great detail) in RSA Laboratories' Frequently Asked Questions About Today's Cryptography which is available on their website. It's a good read for anyone interested in the mathematics of cryptography.
                • Negative. Factoring reduces to RSAP (The RSA Problem). And RSAP reduces to factoring. They are computationally equivalent.

                  • >Negative. Factoring reduces to RSAP (The RSA Problem).
                    >And RSAP reduces to factoring. They are computationally equivalent.

                    Are you sure about that? Decrypting an RSA message is taking a cube root or 65537-th root (or whatever your encryption exponent is) in a finite group modulo N, your public modulus.
                    Taking roots is easy if you know the cardinality of the group and you get the cardinality of the multiplicative group (mod N) easily if you know the factorization of N by Euler's totient function \phi(N).
                    If you know \phi(N) and N, getting the factorization of N is also easy, so these two are equivalent.
                    But, afaik, it has not been proven that you need \phi(N) to do discrete roots (mod N). If you have different information, please post it. That'd be interesting news (to me, anyways).

                    Alex
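The two "easy" directions described above can be sketched with a toy modulus (illustrative numbers only, nothing here is from the paper):

```python
from math import isqrt

# Toy numbers only -- real RSA primes are hundreds of bits each.
p, q, e = 1009, 2003, 65537
N = p * q
phi = (p - 1) * (q - 1)                  # Euler's totient of N

# Direction 1: knowing phi(N) makes e-th roots mod N (RSA decryption) easy.
m = 424242
c = pow(m, e, N)                         # "ciphertext" c = m^e mod N
d = pow(e, -1, phi)                      # modular inverse, Python 3.8+
assert pow(c, d, N) == m                 # the e-th root recovers m

# Direction 2: knowing N and phi(N) makes factoring N easy:
# p and q are the roots of x^2 - (N - phi + 1)*x + N = 0.
s = N - phi + 1                          # = p + q
disc = isqrt(s * s - 4 * N)              # = |p - q|
assert {(s + disc) // 2, (s - disc) // 2} == {p, q}
```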

                  • "RSA was announced in 1978 [RSA78]. The security of the RSA system is based upon the RSA Problem (RSAP). This problem is conjectured (but not proven) to be equivalent to the Integer Factorisation Problem (IFP) [MOV96], [Sti95], [Len96]."

                    "Breaking RSA may not be equivalent to factoring" [BV98]
                    Implications: Provides evidence that certain instances of RSA cannot be equivalent to the IFP. This is contrary to the belief by some that RSA and IFP are equivalent.

                    Both from:
                    http://www.scramdisk.clara.net/pgpfaq.html

                    You're wrong on both counts.
      • by kma (2898)
        The problem is this, there are certain mathematical problems that are known to be Hard. Traveling Salesman, Knapsack, etc. There are no shortcuts to solving these problems. Many mathematical problems can be proven to be in this class of problems. Nobody has, to date, publicly, shown that factoring numbers is Hard, and nobody has shown that it isn't.

        We stand on even sketchier ground than this. Travelling Salesman, Knapsack, and the "etc." referred to above are "NP-complete" problems-- they are the hardest problems in NP, the class of problems whose solutions only expand their inputs polynomially. (For an example of a problem outside of NP: given a number in base 2, represent it in base 1, e.g., in tick marks. The output of this problem is O(2^n) for input of length n.)

        Factoring is in NP; however, we don't know whether it is NP-complete.

        How "hard" are these NP-complete problems? Nobody knows. Most "serious" folks hypothesize that polynomial-time solutions to these problems aren't possible on conventional computers (essentially, because lots of smart people have worked on it and haven't found one). However, nobody has proven this yet; and because of the equivalency of NP-complete problems, a solution to one can be transformed mechanically into solutions for all NP-complete problems.

        So, to summarize: we don't know how hard the NP-complete problems are; and factoring can only be as hard as the NP-complete problems (and just might be easier). Treating NP-complete problems as "hard" is a reasonable working assumption, but it is just that.
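One concrete way to see that factoring sits inside NP, as stated above: a claimed factorization is a certificate that can be verified in polynomial time, even if finding it is hard. A minimal sketch:

```python
def check_factorization_certificate(n, factors):
    """Verify a claimed factorization of n.

    Multiplying the claimed factors and comparing to n takes time
    polynomial in the bit-length of n; that cheap verification is what
    puts factoring inside NP.  (A full NP certificate would also carry
    primality proofs for each factor, likewise checkable in polynomial
    time.)
    """
    product = 1
    for f in factors:
        if f < 2:                        # disallow trivial factors like 1
            return False
        product *= f
    return product == n

# Verification is instant even for numbers far beyond any sieve's reach:
n = (2**521 - 1) * (2**607 - 1)          # product of two Mersenne primes
assert check_factorization_certificate(n, [2**521 - 1, 2**607 - 1])
assert not check_factorization_certificate(n, [2**521 - 1, 2**607 - 3])
```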
      • Well, if it's just a matter of using maths that's Hard, we can use my tax return as the basis of a new method of crypto.

        Forget about all this nth degree polynomial stuff, the REALLY Hard maths is made up by the government!
  • by mridley (571519) on Tuesday June 04, 2002 @03:55PM (#3640308) Homepage
    Well, the /. story excerpt is kind of alarmist, but I think the more relevant quote from the paper is "However, the theoretical analysis shows that the cost of the relation collection step cannot be significantly reduced, regardless of the cost of the matrix step. We thus conclude that the practical security of RSA for commonly used modulus sizes is not significantly affected..." (typos probably mine)
    • by Anonymous Coward
      funny, i think the /. excerpt should be a bit more alarmist, and the paper should be less conservative in its claims. why? because the relation collection step, while the most expensive in terms of cpu time, is also the easiest to parallelize and distribute. any moron in charge of network administration for a relatively large company can put sieving clients on all the desktops and get all the relations he needs in a few weeks without spending any money. the thing that kept this from being a threat was the inability of our hypothetical net admin to perform the matrix step. if the method presented in this paper works for the matrix step, then i consider RSA 1024 to be broken.
      • So now that the matrix step has been made a whole 1.17 times faster (or even 3.01 times faster, if you want to believe Bernstein's numbers), you suddenly believe that factoring is within the reach of the common sysadmin?

        The Slashdot quote also fails to mention that the device would cost $5000 only for "large quantities". The initial cost to get to the stage of being able to produce that circuit is over $1 million. Granted, they also say that the previous initial cost using off-the-shelf hardware was $62 million, but neither are exactly in your average sysadmin's price range. Bearing in mind that these prices *only* cover the matrix step, the authors are right to conclude that 1024-bit keys are just as safe as everyone thought they were.

        If your enemies have a sufficiently large budget to build this kind of thing, they'd almost certainly find it easier just to bribe someone with access to the information to reveal it, to physically attack some unencrypted version of the information, or to retrieve the keys (by, for example, bugging your keyboard).
  • by russotto (537200) on Tuesday June 04, 2002 @03:56PM (#3640315) Journal
    The most important part of the abstract is the finding that "for a given cost, the new method can factor integers that are 1.17 times larger (rather than 3.01)." This means that even if the new factoring method scales to "small" numbers of bits like 1024, a 1024 bit key is only reduced to the security of an 875 bit key, not a 342 bit key. This is a big difference! It goes from "uh oh, better revoke all my keys right now" to "Hmm, might want to think about increasing them in the future".
    • Umm, no. We're talking powers of two here. 1024 bits is a number 2 times bigger than 1023 bits. Not 512 bits.

      Even if it is 3.01 times larger, that's still an effective strength of at least 1022 bits.

      (For what it's worth, I've only read the abstract.)
      • Already taken into account -- it's 1.17 times (or 3.01 times) the LENGTH of the integer, not its value. If it had been 3.01, it would have been a big deal.
      • When these crypto freaks talk about size, they mean the number of bits. It's when they use the word value that they mean the actual value of the number. Same thing when they talk about the running time of an algorithm: for them, it's measured relative to the number of bits in the input, not the value of the input.

        So, while for us an algorithm which takes 1024 steps when fed with the number 1024 and 4096 steps when fed with the number 4096 would probably be considered O(n), they would consider that algorithm O(2^n) because it takes 1024 steps when run with an input of bit-length 10 and 4096 steps when run on input of bit-length 12.

        You can usually tell from context which definition of time and space someone is using, but sometimes it gets confusing.
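Plugging the abstract's numbers into that bit-length convention (a quick sanity check, not a quote from the paper):

```python
# The 1.17x (and Bernstein's claimed 3.01x) factors multiply the *length*
# of the integer, not its value, so a 1024-bit key's effective strength
# under the new method is:
assert round(1024 / 1.17) == 875    # the "875 bit key" figure above
assert round(1024 / 3.01) == 340    # roughly the "342 bit" figure above
```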

  • it's guys like these that drive the security bigwigs at Los Alamos and Sandia crazy. I guess it keeps them employed though, because once they break it, they'll have to build a stronger algorithm and try to break that one.

    Or they could just stop letting people work from home and stop letting their kids play MUDs on their secure terminal :P.
  • Also from the abstract:
    In particular, we show that 1024-bit RSA keys are as secure as many believed them to be.
    And:
    However, the theoretical analysis shows that the cost of the relation collection step cannot be significantly reduced, regardless of the cost of the matrix step. We thus conclude that the practical security of RSA for commonly used modulus sizes is not significantly affected by [1].
    If I recall correctly, the original device was impractical, as the speed increases gained by the parallelism were negated by the cost of collection/sifting through the results. Apparently, this weakness still holds.
  • I don't know how many of you are cryptobuffs, but you might remember the craze surrounding elliptic curves as a efficient-yet-complex method of obfuscating plaintext. There was a bit of controversy as to whether or not it would lead to predictability because of its nature (ellipses are round, keep in mind, and therefore there was some concern that what goes around comes around and an attacker could just walk full circle to arrive at a key).

    Well, I believe that mesh routing might just give us all the pluses without most or all of the minuses. First of all, it involves routing, which if you've paid attention to the formation of the Internet you'll quickly realize is a design that will lead to redundancy and reliability. More importantly, it is a mesh, which means that one end of the key is not necessarily tied to the other end. This should cut off many of the attacks that would have a chance of success on elliptic curves by way of its nature. Meshing also implies redundancy... there may be some size and speed tradeoffs here, but you can be certain you'll get your data back out of the cryptopot.

    Bruce Schneier, a luminary in the field of cryptography and author of the book Applied Cryptography, has a web site here [slashdot.org].

    • Who the heck modded this up? "One end of the key is not necessarily tied to the other end?" Hilarious.

      -- Brian
    • Wow, what a karma whore. You're making more shit up than a battalion of marines with dysentery.
      • And you're getting Karma for your one-line post being labelled +1 Funny. Who's the Karma Whore now, huh? :)
    • by Anonymous Coward
      Here is a handy rule for moderators: never Never NEVER mod a post "Insightful" unless you personally understand exactly what it says. Similarly, never use "Informative" unless you can verify that the post is accurate.

      Trust me, I will be watching for you in metamod.
  • by Anonymous Coward
    They clearly state in the paper that "1024-bit RSA keys are as secure as many believed them to be."

    As they clearly state in the paper, "the security of RSA relies exclusively on the hardness of the relation collection step of the number field sieve." - The speedup that Bernstein proposed just makes the stuff surrounding that step a bit more efficient.
  • Cliff notes version (Score:5, Interesting)

    by Ashurbanipal (578639) on Tuesday June 04, 2002 @03:59PM (#3640345)
    Basically, Dan Bernstein (who has written useable but controversial alternatives to BIND and SENDMAIL) figured out a new method for breaking RSA encryption based on custom hardware. The fellows mentioned in the headline, who are also legit crypto guys, have analysed Dr. Bernstein's work and make the following observations:

    1) it's not quite as fast as Bernstein estimated (about half as fast for cliff notes purposes)
    2) the hardware could be affordable (others have claimed costs that are only feasible for governments)
    3) you don't have to revoke all your RSA keys, because there are steps that precede the application of the Bernstein method that still take absurd amounts of time and horsepower.

    Oh, yeah, and it has nothing to do with Sun's NFS (Network File System, a lame and usually insecure way to share files).

    Bernstein will no doubt reply. He isn't a shy guy from my experience.
    • this is a bit off-topic, but i'm genuinely curious -

      is qmail controversial ?

      perhaps i'm coming into the game a bit late and don't know the entire history. i've used qmail, and thought it was simple and cool. didn't know what the feelings of the community were.
      • is qmail controversial ?

        Associatively. I think Dan Bernstein has a reputation for being outspoken about himself, his software and so on.

        Qmail just inherited it.
      • by btellier (126120) <btellier@[ ]il.com ['gma' in gap]> on Tuesday June 04, 2002 @04:53PM (#3640705)
        >is qmail controversial ?

        Well I can only speak from a security standpoint, not for functionality, though I've heard that it has nearly all the same features as sendmail and is faster. However, I did a free-time security audit of Qmail to get an idea of how secure DJB's work was. I can say that, bar none, this guy is the best secure coder I've seen to date. Probably the best way to see this is in the fact that he uses all of his own routines to do memory management. In these routines his bounds checking is complete and he is extremely careful about signed/unsigned conversion bugs. Quite impressive.

        Granted the guy is known for being a prick when people question his work (this is known as De Raadt Syndrome), such as this response to someone who supposedly found a hole in his mailing list software:

        ----
        Here we go again: This advisory is fraudulent. My ezmlm 0.53 package
        does not include anything called ezmlm-cgi.

        Perhaps someone else's ezmlm-cgi program has a problem. But ezmlm-cgi
        isn't part of ezmlm. I didn't write it. I haven't reviewed it. I don't
        distribute it. I don't use it. I am not responsible for its bugs.

        vort-fu was aware of these facts before he sent his advisory to bugtraq.
        Why did he continue? Can this be adequately explained by stupidity?
        ---

        The problem is that while he may be abrasive, he's always right. Bottom line: if you want secure software, stick with DJB.
      • I haven't used qmail, but I have used postfix, which is supposed to be even simpler and more straightforward than qmail. I've been very happy with it.
      • is qmail controversial?

        Qmail is probably secure. The controversial issues that I can think of are these:

        1. The license doesn't allow one to distribute binaries of modified source. This pretty much keeps it out of Linux distributions, because they need to modify the default layout, etc. There's other annoying bits about his licenses. See http://cr.yp.to/distributors.html [cr.yp.to] for info.
        2. It has (or perhaps used to, maybe this has been fixed) some pretty abusive behaviors when delivering mail to lots of users on the same host.
    • by MAXOMENOS (9802) <maxomai@nOSPAm.gmail.com> on Tuesday June 04, 2002 @04:31PM (#3640558) Homepage
      Bernstein will no doubt reply. He isn't a shy guy from my experience.

      This post should be modded +4 Understated.

    • Sun's NFS (Network File System, a lame and usually insecure way to share files).

      I never used Sun's NFS, but I was planning to in the near future, so what's a nice and secure way to share files in a Unix network? Like if you want to mount home dirs and such.

      Quazion
      • I never used Sun's NFS, but I was planning to in the near future, so what's a nice and secure way to share files in a Unix network? Like if you want to mount home dirs and such.
        Unfortunately, I do not think (Warning! Controversial Opinion!) there is a "nice and secure" way to share files in a heterogeneous *nix network. You can have one or the other, basically. Granted, if you have a monoculture of Sun systems with up-to-date security patches, you can probably do it, but I don't know anybody running a network like that.

        Another post replying to yours said "samba". I'm sure Tridge will forgive me for pointing out that Samba mimics Microsoft's unbelievably kludgy implementation of IBM's NetBIOS protocol - the kludges being there to compensate for the fact that IBM designed NetBIOS for networks of 25 machines or less. Samba is a wonderful way to interconnect VMS and *nix machines with Microsoft clients, but other than that it's an abortion. Do not use it if you don't have to!

        Another poster pointed out the reason for the "usually" in my description of "usually insecure". If all your *nix boxes run the same version of the same vendor's latest implementation of NFS, it can be secured. In a true heterogeneous environment, though, I have never seen anything but pain and suffering result from trying to implement secure NFS. And incidentally, NFS is inherently subject to denial of service attacks (so is NIS/YP), so you certainly can't depend on it if there are any unsecured hosts elsewhere on the network.

        Once you decide to sacrifice "nice" and go for "secure", or vice-versa, your options get broader. Look into Coda [cmu.edu], Andrew, u9fs, or Styx, for example.

    • The fellows mentioned in the headline, who are also legit crypto guys...

      Adi Shamir is the 'S' in RSA.
    • If he'd have written a network file system, their analysis would read:

      "We have determined that it is incompatible with Sun's NFS, and that the license allows Bernstein to remove your ability to use the program merely by changing his web page."
    • You misunderstand what this paper is saying.

      Background. To review the undisputed facts:

      • Conventional NFS implementations (and TWINKLE), as described in (for example) Silverman's cost-analysis paper or the Lenstra-Shamir TWINKLE paper, take time L^(1.901...+o(1)) on a machine of size L^(0.950...+o(1)), exactly as stated in my paper.
      • One can, of course, trade time for hardware cost. For example, one can carry out the same computation in time L^(1.711..+o(1)) on a machine of size L^(1.140...+o(1)).
      • My NFS circuits take time L^(1.185...+o(1)) on a machine of size L^(0.790...+o(1)), exactly as stated in my paper.
      • The exponent ratio 1.443...+o(1) corresponds to multiplying the number of input digits by 3.009...+o(1).
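A quick sanity check on the last bullet, reading "cost" as time multiplied by machine size (so the L-notation exponents add), and using the fact that NFS cost grows roughly like exp(c · digits^(1/3)) once the lower-order ln ln factor is ignored:

```python
# "Cost" here is time x machine size; the L-notation exponents add:
conventional = 1.901 + 0.950    # conventional NFS: L^2.851
circuits = 1.185 + 0.790        # Bernstein's circuits: L^1.975
r = conventional / circuits
assert abs(r - 1.443) < 1e-3    # the "exponent ratio 1.443...+o(1)"

# Since cost grows roughly like exp(c * digits**(1/3)) (ignoring the
# lower-order ln ln factor), an exponent ratio r lets the number of
# input digits grow by about r**3 at equal cost:
assert abs(r**3 - 3.009) < 1e-2
```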

      Carl Pomerance has pointed out that a slightly different conventional NFS implementation takes time L^(1.754...+o(1)) on a machine of size L^(1.006...+o(1)), or time L^(1.656...+o(1)) on a machine of size L^(1.104...+o(1)), but this is still vastly less cost-effective than a circuit for large numbers. The NFS cost ratio between conventional computers and circuits grows as a quite noticeable power of the cost itself.

      There are two basic reasons for this improvement. First, for large numbers, using conventional computers (or TWINKLE) for NFS sieving is a bad idea, because the low-memory smoothness-testing circuits described in my paper are much more cost-effective. Second, for large numbers, using conventional computers for NFS linear algebra is a bad idea, because the linear-algebra circuits described in my paper are much more cost-effective.

      3 versus 1.17: why there is a dispute. This Lenstra-Shamir-Tomlinson-Tromer paper observes, correctly, that reasonably fast low-memory smoothness-testing techniques have been known for decades. (Of course, my paper says the same thing.)

      The Lenstra-Shamir-Tomlinson-Tromer paper then leaps to the incorrect conclusion that the cost-effectiveness of those techniques for large numbers was widely known before my paper.

      If this fact was known before, why didn't it appear in the aforementioned Silverman and Lenstra-Shamir papers two years ago? Both of those papers were discussing the cost of state-of-the-art integer factorization.

      As for the Coppersmith paper cited by Lenstra et al.: As mentioned in my paper, Coppersmith suggested ECM in a context where RAM sieving would have meant looking at many more numbers. He didn't observe that ECM was better than RAM sieving in other contexts. In fact, for the first step of his algorithm, he specifically suggested RAM sieving!

      The simple fact is that papers before mine optimized their machines purely for operation count. The resulting machines are far less cost-effective than the circuits described in my paper.

      The Lenstra-Shamir-Tomlinson-Tromer paper does not claim that circuits are less cost-effective (slower or larger) than stated in my paper. It also does not claim that conventional implementations are competitive. It does not claim that circuits are actually not so scary. What it claims is that we already had fairly scary circuits. This is revisionist history, not a technical dispute.

      Simple versus optimized. My paper is a grant proposal. It explains a very simple circuit, enough to prove the L^(1.185...+o(1)),L^(0.790...+o(1)) result. It then briefly points out several ways that the circuit can be changed and improved. The objective of the grant is to find the most cost-effective circuit. The simplest circuit is certainly not the best one; finding the best one is a huge project.

      Treating the simplest algorithm as if it were my final result, with every trivial improvement worth pointing out, is a serious misrepresentation of my work.

      In particular, the use of mesh routing in this context is one of many obvious possibilities, and is already mentioned in my paper as an alternative to mesh sorting. Lenstra et al. claim that this is an "improvement" over my paper; that claim is false.

      I described a counterclockwise mesh routing algorithm in an email message to one of those authors in mid-April, as part of explaining why conventional implementations aren't cost-effective. (On a 20x20 mesh: "Take a batch of, say, 1000 hits generated by each processor; route the hits 19 steps down, then 19 steps right, then 19 steps up, then 19 steps left, to reach the proper memory locations. ... The point is that the time ratio grows linearly with the 20, while the cost ratio is simply 1 plus overhead. The overhead can be dramatically reduced with special-purpose circuitry, allowing the 20 to be raised by half as many orders of magnitude.")

      I am astonished that anyone would publish this obvious use of mesh routing as if it were an interesting new result.
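
      The four-phase pattern quoted above is easy to simulate. Below is a minimal Python sketch (a toy model of my own, not code from either paper): one packet per cell of an N x N mesh, delivered by dimension-order routing within counterclockwise down/right/up/left phases of N-1 steps each. Batching (the 1000 hits per processor) and wire contention are ignored.

```python
import random

N = 20  # mesh size, as in the 20x20 example above

# One packet per cell; each packet records its current position and a
# random destination cell (a stand-in for a "hit" that must reach a
# particular memory location on the mesh).
packets = [{"pos": (r, c), "dst": (random.randrange(N), random.randrange(N))}
           for r in range(N) for c in range(N)]

def phase(packets, delta, done):
    """Run N-1 synchronous steps; each packet moves by `delta` per step
    until the predicate `done` says its relevant coordinate is settled."""
    dr, dc = delta
    for _ in range(N - 1):
        for p in packets:
            if not done(p):
                r, c = p["pos"]
                p["pos"] = (r + dr, c + dc)

# Counterclockwise: down, right, up, left -- 19 steps each on a 20x20 mesh.
phase(packets, (1, 0),  lambda p: p["pos"][0] >= p["dst"][0])  # down
phase(packets, (0, 1),  lambda p: p["pos"][1] >= p["dst"][1])  # right
phase(packets, (-1, 0), lambda p: p["pos"][0] <= p["dst"][0])  # up
phase(packets, (0, -1), lambda p: p["pos"][1] <= p["dst"][1])  # left

# Every packet has reached its destination after the four phases.
assert all(p["pos"] == p["dst"] for p in packets)
```

      The down phase brings every packet whose destination lies below it onto its destination row; the up phase handles the rest, and the two horizontal phases do the same for columns.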

      The balance between sieving and linear algebra. There are many parameters in NFS: dials that can be adjusted to affect the speed of the algorithm in complicated ways. The most important parameter is the "prime bound," usually called y.

      As y increases from "tiny" to "standard," the cost of sieving drops to its smallest possible value, while the cost of linear algebra rises. As y increases past standard, the cost of sieving starts rising again, and keeps rising, while the cost of linear algebra also rises.

      In the context of TWINKLE, Lenstra and Shamir claimed that sieving speedups had limited value, because linear algebra by itself was a bottleneck. That claim is obviously false: one can always decrease y to reduce the cost of linear algebra until sieving and linear algebra are in balance.

      The Lenstra-Shamir-Tomlinson-Tromer paper now makes the opposite claim, that linear-algebra speedups have limited value, because sieving by itself is a bottleneck. That claim is also false, at least for large numbers: the optimal value of y is substantially smaller than standard, as stated in my paper, so one can increase y to reduce the cost of sieving until sieving and linear algebra are in balance.

      Perhaps someday we'll discover better linear-algebra methods. Perhaps the cost of linear algebra will become unnoticeable for standard values of y; then sieving by itself will be a bottleneck. However, as stated in my paper, the current situation for large numbers is that linear algebra is relatively difficult; as a consequence, when parameters are chosen properly, sieving and linear algebra are in balance.
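
      As a toy illustration of this balancing argument: the cost curves below are invented purely for illustration (they bear no relation to real NFS cost functions), but they have the qualitative shape described above, and one can binary-search for the prime bound at which the two steps cost the same.

```python
# Toy model only: invented cost curves with the qualitative behavior
# described above. sieve_cost(y) falls and then rises as the prime
# bound y grows; linalg_cost(y) rises monotonically (bigger matrix).
def sieve_cost(y):
    return 100.0 / y + y          # convex, minimized around y = 10

def linalg_cost(y):
    return 0.5 * y ** 2           # grows steadily with the prime bound

# Binary-search for the y where the two costs balance. The difference
# sieve_cost(y) - linalg_cost(y) is strictly decreasing on this range,
# so the crossing point is unique.
lo, hi = 0.1, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if linalg_cost(mid) < sieve_cost(mid):
        lo = mid
    else:
        hi = mid

y_bal = (lo + hi) / 2
print(f"balanced prime bound y = {y_bal:.3f}")
print(f"sieve = {sieve_cost(y_bal):.3f}, linalg = {linalg_cost(y_bal):.3f}")
```

      Note that the balanced y sits below the sieving-optimal value, matching the claim that for large numbers the optimal prime bound is substantially smaller than standard.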

      Large numbers versus small numbers, and the cost of sieving. Figuring out that one NFS approach is much better than another for large numbers is much easier than figuring out the situation for small numbers, such as 1024 bits or 1536 bits.

      Analogy: It's easy to prove that merge sorting is much faster than insertion sorting for large arrays. You don't have to worry about all the different variations of the sorting algorithms; these changes can make big differences in speed, but those differences are unnoticeable next to the gigantic speed gap between merge sorting and insertion sorting for large arrays. It's much more difficult to figure out the fastest method for small arrays, such as arrays of size 16.
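
      The size of that gap is easy to demonstrate; here is a quick Python comparison, with the built-in sorted() (which runs in O(n log n), like merge sort) standing in for merge sorting. Exact timings will vary by machine, but the ordering will not.

```python
import random
import time

def insertion_sort(a):
    """Classic O(n^2) insertion sort, in place."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift larger elements right
            j -= 1
        a[j + 1] = key
    return a

data = [random.random() for _ in range(4000)]

t0 = time.perf_counter()
merged = sorted(data)              # O(n log n)
t1 = time.perf_counter()

t2 = time.perf_counter()
inserted = insertion_sort(data[:])  # O(n^2), on a copy
t3 = time.perf_counter()

assert merged == inserted
print(f"sorted():       {t1 - t0:.5f}s")
print(f"insertion sort: {t3 - t2:.5f}s")
```

      Already at 4000 elements the quadratic sort loses by orders of magnitude; at size 16, as the analogy says, the answer is much less clear.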

      The Lenstra-Shamir-Tomlinson-Tromer paper has one paragraph (Section 5.2) on the cost of sieving for a specific input size, namely 1024 bits. That paragraph estimates that RAM sieving would take about a year on 30 million large-memory PCs. It implicitly assumes that conventional RAM sieving is the most cost-effective approach to sieving for 1024-bit numbers. In particular, it implicitly assumes that circuits (mesh sorting, mesh routing, early-abort Pollard+ECM, etc.; see my paper) wouldn't be much less expensive. It concludes that sieving by itself is a bottleneck for 1024-bit numbers.

      There is no justification for these assumptions and conclusions. The authors claim to understand, and to have understood for a decade, that conventional RAM sieving is much less cost-effective than circuits for large numbers; however, when faced with 1024-bit numbers, they don't even consider the possibility of 1024 being large and of conventional RAM sieving being the wrong approach. This is a gaping hole in the Lenstra-Shamir-Tomlinson-Tromer analysis.

      Maybe special-purpose circuits will have a dramatic impact on the difficulty of 1024-bit and 1536-bit and 2048-bit factorizations. Maybe they'll have no impact at all. I don't know yet. I do know that ignoring the question doesn't help us find the answer.

      • Thanks for your correction to my post, Dr. B.

        I suspect most readers of this forum are primarily concerned with appropriate key lengths for use with their SSH and PGP software. Some people think your sieve makes their 768- and 1024-bit keys obsolete.

        I certainly hope some circuits based on your ideas get physically built, rather than merely modeled mathematically, so we can get some empirical data. Best of luck with the grant!
  • by TJ6581 (182825)
    At the very end, when they are doing their calculations, they use DRAM and FPGA components. They then go on to state:


    The bandwidth of the fastest PC memory is 3.2GB/sec


    The GX specs specifically state that they support 4.2 GB per second. They also state that memory latency is about 40ns. I checked pricewatch and found 6ns parts for pretty cheap. There are too many areas where it says "at least", "probably" or "about" for calculations regarding how much time it takes. They might be right, but their "proof" consists of restating mathematics rules and estimations. They probably should have spent more time on actual calculations and proofs.

    • Well, 6ns is the cycle time (1/clock frequency), not the latency (which I take to mean the time it takes to fetch an arbitrary bit of data from the memory).

      Modern DRAM isn't truly random-access at all: you can get data out much faster if you read it sequentially than if you read it in random order.
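
      The sequential-versus-random gap is visible even from an interpreted language, though interpreter overhead blunts it. This sketch (my own illustration, with sizes chosen to exceed typical CPU caches) touches the same set of array elements in two different orders; the sums must agree, and on most machines the random-order pass measurably loses.

```python
import array
import random
import time

N = 1 << 23                       # ~8M 8-byte ints: larger than typical caches
data = array.array('q', range(N))

seq_idx = list(range(0, N, 64))   # stride through the array in order
rand_idx = seq_idx[:]
random.shuffle(rand_idx)          # same indices, random order

t0 = time.perf_counter()
s1 = sum(data[i] for i in seq_idx)    # mostly prefetch-friendly accesses
t1 = time.perf_counter()
s2 = sum(data[i] for i in rand_idx)   # mostly cache misses
t2 = time.perf_counter()

assert s1 == s2  # same elements, so the sums are identical
print(f"sequential: {t1 - t0:.4f}s  random: {t2 - t1:.4f}s")
```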
    • by Eran Tromer (583435) on Tuesday June 04, 2002 @05:12PM (#3640877) Homepage
      I am a coauthor of the paper. Indeed, there are many rough estimates in the paper: the theoretical analysis (and indeed, the NFS method itself) relies on some unproven conjectures and asymptotic analysis. Also, the practical cost estimates rely on technological and economic considerations which are rapidly changing. Nonetheless, we believe the assumptions and their conclusions are quite sound when used as order-of-magnitude estimates.

      As for the two technical points you mentioned:

      > > The bandwidth of the fastest PC memory is 3.2GB/sec
      > The GX specs specifically state that they support 4.2 GB per second.


      Indeed, but both PC3200 (DDR400) and dual PC800 (RDRAM) have a bandwidth of 3.2GB/sec.

      > I checked pricewatch and found at least 6ns for pretty cheap

      These "6ns" parts do not have a 6ns random-access latency. For instance, check these figures [ntsi.com].

    • Oh, come on. This is crypto work. No one is going to care about being off by a factor of .3 when they're talking about factoring time. A factor of 10 probably wouldn't bother them that much.
  • Huh? (Score:2, Funny)

    by Grip3n (470031)
    "...under an optimistic assumption about the matrix size, be completed within a day by a device that costs a few thousand dollars."

    Wow, we can make The Matrix in under a day for a couple grand? Better start looking in the paper for real estate in Zion...
  • I haven't been able to find anything anywhere.
    I have some factorization ideas I'd like to test without reinventing the wheel.
    • Somewhat On-topic: 1024+ bit C math library,where?

      GNU MP [swox.com] for multi-precision numbers. They use the fastest known algorithms and hand-optimized asm on most platforms. I prefer to do crypto in Java and use the BigInteger class 'cause I'm lazy.

      FYI, Sun's BigInteger.isProbablePrime(int) function is broken... don't use it. I was rather embarrassed when a colleague and I emailed some people about a possible bug in the Secure Remote Password protocol, only to discover the problem was a known bug in the JVM... which Sun refuses to fix. I don't know why they won't fix it. My personal implementation of the Miller-Rabin primality test runs faster and correctly identifies the SRP modulus as probably prime. (Look in Applied Cryptography by Schneier for how to code it up.)
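
      For reference, here is a Miller-Rabin sketch along the lines described in Schneier's book (plain Python for readability, not the poster's Java implementation): write n-1 as d*2^s with d odd, then check random bases for witnesses of compositeness.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test. Each random base that
    fails to witness compositeness cuts the error chance to <= 1/4, so
    `rounds` bases give an error probability below 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):   # quick trial division
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break                 # a is not a witness; try another base
        else:
            return False              # a witnesses that n is composite
    return True
```

      Usage: is_probable_prime(2**61 - 1) returns True (a Mersenne prime), and for a known-good large modulus the test should likewise report "probably prime", unlike the buggy JVM routine described above.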

      • I used it in my final year project. The only problem I had with it at the time was a documentation issue - they don't actually specify the runtimes of their algorithms.

        However, it was certainly pretty damn fast, and I didn't come across any bugs.

  • the NSA is usually 5 to 8 years ahead in technology with regard to this stuff. Hmm...
