
Distributed.net Starts New Project 86

drydorn writes "Today, distributed.net will officially begin its next distributed computing project. Visit their Optimal Golomb Rulers project page for more details. Their first ruler length will be 24 marks, known in D.net lingo as OGR-24. " And, remember, your mantra: I must sign up for Slashdot Team. I must crack keys. You can grab your client here, which includes documentation on installation, what clients do, etc. etc.
This discussion has been archived. No new comments can be posted.

  • sorry Hemos, OGR has stubs, not keys. And you don't crack them.
    --
  • Hopefully this will offer the needed salve for those who were complaining that cracking RC5 keys was pointless because it took so long.
    Back to hard math!
  • I don't mean to be a pessimist, but how does figuring out OGRs help the net/society/world in any way? From the link, all I could find was "OGR's have many applications including sensor placements for X-ray crystallography and radio astronomy. Golomb rulers can also play a significant role in combinatorics, coding theory and communications. Dr. Golomb was one of the first to analyze them for use in these areas." Without delving into some deep theory, can anybody give an example?
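The defining property, at least, is easy to state in code. A minimal sketch (illustrative only, not d.net's client): a ruler is Golomb when every pairwise distance between its marks is distinct, and it is optimal when no shorter ruler with the same number of marks exists.

```python
from itertools import combinations

def is_golomb(marks):
    """A ruler is Golomb iff all pairwise distances between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb([0, 1, 4, 9, 11]))  # True  (an optimal 5-mark ruler, length 11)
print(is_golomb([0, 1, 2, 4]))      # False (distance 2 occurs twice: 2-0 and 4-2)
```

Verifying a candidate is cheap; the hard part, and the reason for the distributed search, is ruling out every shorter ruler.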
  • I've been getting sick of RC5-64 (over 2 years now, and no end in sight) and SETI just doesn't interest me. Finally, a contest I can sink my teeth into AND has real, usable results.
  • IANAM, so why are Golomb rulers so useful? To me it sounds like a useless amazing fact, like how 23 is a prime number, blah blah blah.

    And why aren't Americans using meters and litres yet? Sounds like they're a bit old-fashioned to me -- a paradox considering they claim to be the most amazing whizbang country ever made since sliced breadsville.... :)

    --

  • Electricity was considered an interesting but useless phenomenon for years. Give it time and a use will be found.

    When this happens, you will be able to say "I was part of that". This will finally give you the respect you deserve. Women will come flocking to you wanting to be yours. Rich businessmen will pay you lots of money for a few seconds of your valuable advice. Governments will set aside days to celebrate your amazing achievements. You will never have to work again. You will be revered as a God by even some of the most advanced civilisations.

  • After the huge publicity surrounding SETI@home and the various arguments for and against it we've now got a project which has *real* astronomical applications.

    OK, OK, I will concede that SETI@home has a small chance of finding alien life, but the chance is still remote, and the amount of data they are processing is minuscule compared to what we really need to be processing to seriously have a chance.

    Large OGRs will of course help improve the sensitivity of long-baseline arrays and other sensors, and therefore improve the quality of data produced. So... who knows - maybe running your CPU on this will help the search for extraterrestrial intelligence more than running SETI@home.

    Personally I'm waiting for distributed.net to help the Spaceguard foundation save the world from cosmic hazards so that other alien races will have someone to talk to in the future.
  • Oops, somebody forgot the comedy factor!

    Damn

    --

  • OGR is effectively infinite, as there is no limit to how many marks a ruler can have (kind of like looking for the largest prime number: there is always another). The point with OGR is that it has real value to science (even if we can't find shorter rulers, because then we at least know for sure that we have the optimal length).
  • Maybe the Slashdot team could use a hand in the ECC2K-108 [inria.fr] cracking effort ...
  • Your computer sits unused most of the time. Do you really use all those 733 million clock cycles your Athlon generates each second?

    So, let's put those computers to work doing something. It may be just some pictures of moving objects on the screen, if you like that.

    But you might as well do something useful. Well, maybe it has just some very obscure use, in a theoretical application somewhere. Maybe it's potentially useful, but it hasn't been proved so yet. Who cares? You aren't losing much, just your unused clock cycles...

    From the /. moderator guidelines: If you can't be deep, be funny

  • thank you for the correction mr. gammatron sir. i feel so bad about this error that i will pour scalding hot grits down my pants.

    -hemos
  • Perhaps another way for companies to generate revenue would be to include sufficiently generic distributed computational software with their clients. This kind of app could also appear in web pages as a Java applet. (I would, of course, advocate an option to turn it off, and "nice"ing of all processing.) Imagine what kind of stock market analysis or data mining one could do with ICQ or yahoo.com so enabled. Companies would pay good money for access to such a powerful processing force.
  • I never understood why distributed.net always wastes its time trying to solve these abstract mathematical problems that really aren't anything more than "my distributed penis is bigger than your non-distributed penis" competitions. We all know roughly how much computing power it takes to crack a key, or do this Golomb ruler thing; it is only a question of whether we have or haven't done it yet (and the world isn't really any better off if we have). Nothing is proved by achieving it; we already know exactly how difficult it is and statistically how long it will take. On the other hand, genetic programming is an ideal application to be tackled in a distributed fashion, and could be used to evolve some really interesting stuff (like sorting algorithms - or even creatures which learn to walk!). These guys [sourceforge.net] have the right idea, but what is really needed is for someone like distributed.net to get in on the act.

    If you are interested in genetic programming take a look here [genetic-programming.org] for more info.

    --

  • Valid points indeed, but none of the other distributed tasks are as "cool" as SETI. By this I mean it sounds and looks neat... given the choice between my spare CPU cycles being used to try and find alien life or being used to do some "numbers and stuff", it's gotta be the aliens.

    Yes, we have about as much chance of finding them as finding a fart in a hurricane, but who cares?! My computer may just be the first to find a little green man. Quite what I'll do if/when that happens is anyone's guess.
  • Hmm... And all this time I thought that 'Hemos' was Jeff Bates, now you claim that 'Hemos.' is Jeff Bates. I, sir have met Jeff Bates, and you are no Jeff Bates. ;o)
  • OGRs have practical applications in designing more efficient antennas and also in X-ray crystallography, among other things.

    See http://members.aol.com/golomb20/intro.htm [aol.com] for more information.

    --
  • "I never understood why distributed.net always wastes their time trying to solve these abstract mathematical problems..." It's not an abstract problem, it's a concrete one. "We already know how difficult it is and statistically how long it will take." And yet we haven't done it. That's why it's useful: granted, genetic algorithms may evolve some interesting results, but we already know that there are uses for OGRs. IMHO, the point isn't how long it will take; the point is that we know how long it will take, and we have a tool for the job, a tool with resources exceeding the computing power previously thrown at it. Of course, proving that some encryption standards are insufficient for today's needs is a nice side benefit.
  • PLUG class=shameless

    For those who care, the personal proxy stats script for the OGR project should be done tonight and ready for public consumption. Check out the new ppstats homepage @ http://ppstats.sourceforge.net/ [sourceforge.net]. The ppstats-ogr ftp directory is the one you will want to look in. The announcement will also be posted on freshmeat.

    /PLUG

    Start cycling those nodes!
  • As an alternative, what about the Gamma Flux [dcypher.net] project over at dcypher.net ?

    This has a useful application to it - ray tracing for making safer containers to hold radioactive waste.

    Stats aren't quite as cool as distributed though ;)

  • I am not saying that distributed computation is not a powerful tool, I am just saying that it is a shame that this tool has not been put to better use. OK, so OGRs may have uses in some obscure field, but surely we should use this resource for something that will be of interest to more people. As for distributed.net proving the insecurity of encryption - this is meaningless. We can tell precisely how much computing power it will require to crack any given encryption algorithm (both worst case and average case). We can even estimate how long a system such as distributed.net would take to do it; actually doing it adds nothing whatsoever to the debate, it is a waste of time and resources, and tells us nothing. Much more interesting to actually create something (such as a new sorting algorithm or A-life).

    --

  • Ah, dammit, now I gotta pull out my script that I used to run OGR-23 12 hours/day and then switch to RC5-64 for 12 hours so I can toggle between OGR-24 and Gamma Flux. Looks pretty cool.
  • Let's agree on a common name then: Hemos can be Sir Bates, and Hemos. can be Master Bates. Sounds about right ;)
  • I for one love these scripts; it's just too bad it takes over 2 hours for each run. We need a database-backed alternative. Kevin, hook me up with some beta stuff to test.
  • We can even estimate how long a system such as distributed.net would take to do it, actually doing it adds nothing whatsoever to the debate

    Actually, it does. The EFF's building [eff.org] (and publishing the plans [oreilly.com] for) Deep Crack did more to show legislators and other non-techies the ridiculous nature of low limits on key lengths than any amount of mathematical discussion or mentions of Moore's Law ever could. Think of it as a great big clue stick.

    ("What? For $250K anyone can build a box that will break bank encryption in a day?" "Well, it'd cost less now, because the design is already done." "How can we fix this?" "Raise the key lengths.")

  • And then you said that dumb thing about "evolving sorting algorithms". Just how are sorting algorithms any less mathematically masturbatory than crypto keys? We already know the fastest possible generalized sorting algorithm--different algorithms are just a practical matter.


    --
  • by grinder ( 825 ) on Monday February 14, 2000 @05:40AM (#1277475) Homepage

    The distributed computing project I'd most like to see get off the ground is The Tierra Project [atr.co.jp].

    This project is exploring digital evolution. Start off with a bunch of organisms and breed them with genetic algorithms. See how they fare.

    Then, and this is where it gets interesting, an organism can migrate from one host to another, possibly taking better advantage of the environment there. What kinds of digital ecologies will appear? What kinds of emergent behaviour will be encountered?

    It's actually much more complex than that. If you're curious, I recommend reading the introduction [atr.co.jp].

  • We can even estimate how long a system such as distributed.net would take to do it, actually doing it adds nothing whatsoever to the debate, it is a waste of time and resources, and tells us nothing.
    Isn't that kind of like saying "I know I can run a marathon in under 3 hours, there's no need to actually DO it, it's a waste of time and resources."

    Imagine Kennedy had said such a thing in 1960-something. "We estimate we can get to the moon before the Soviets do, so we're not actually going to do it; it's a waste of resources, and tells us nothing."

    There is a difference between estimating it and actually doing it, you know...

  • Aperture synthesis radio astronomy works by letting signals from many different receivers interfere. Each distance between two given receivers gives one data point of the auto-correlation function of that signal, and the rotation of the Earth provides the second dimension. This yields the auto-correlation function of the signal from the sky; the Wiener-Khinchine theorem says that the Fourier transform of this function is the intensity distribution of sources in the sky. In other words, you make an indirect photograph using radio radiation, over the course of 12 or 24 hours. Maximizing the number of distinct distances between receivers therefore minimizes the number of receivers needed, which are of course very expensive.

    Now, what is the use of astronomy? I counter this with: what is the use of a Picasso? What is the use for mankind of literature, the arts, etc.? Nothing really, but nobody questions spending money on music, movies, etc. Governments spend roughly $1 per person per year of tax money on astronomy; how much do you spend on beer? Or CDs? Or clothes? Or your car? The amount of money spent on horoscopes is probably twice what is spent per person on the serious stuff: astronomy. Besides, if astronomers had cared to patent any of the inventions they made doing astronomy that were put to use for all of mankind (geostationary comsats, navigation, even the radiative transfer equations used in quality control in milk factories), and had asked a license fee of a mere 0.001 of the profits made from those inventions, governments would be begging the astronomers for money instead of vice versa. That is the use of all this.
    --
    UNIX isn't dead, it just smells funny...
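The receiver-placement argument above can be sketched numerically. A toy illustration (not an actual array layout): with a Golomb arrangement, every one of the n(n-1)/2 receiver pairs measures a distinct baseline, while evenly spaced receivers waste most pairs on repeated spacings.

```python
from itertools import combinations

def distinct_baselines(positions):
    """Count the distinct pairwise distances (baselines) among receiver positions."""
    return len({b - a for a, b in combinations(sorted(positions), 2)})

golomb  = [0, 1, 4, 9, 11]   # an optimal 5-mark Golomb ruler
uniform = [0, 3, 6, 9, 12]   # 5 evenly spaced receivers over a similar span

print(distinct_baselines(golomb))   # 10 -- all 10 pairs give a new distance
print(distinct_baselines(uniform))  # 4  -- only spacings 3, 6, 9, 12 occur
```

Same number of receivers, same rough span, but the Golomb layout samples two and a half times as many points of the auto-correlation function.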
  • I'm glad to see distributed.net finally putting its user base's computing power towards something worthwhile.

    RC5 -- someone already knows the answer! I understand the social implications, but jeez.. I think it was much more convincing when EFF built that DES cracker...

    SETI -- Without source code or results we can verify, who knows what this is doing? The conspiracy theorist in me hints that this might be the NSA's "distributed client" (how would you do it?)

    At least doing some math has verifiable results and is discovering *new facts* about the universe. Projects like this and GIMPS (my favorite; http://entropia.com/ips/) are more worthwhile.
  • PostgreSQL database support is actually in the pipeline! There are going to be trade-offs, though (i.e., the disk space requirement is going to grow pretty big) in exchange for speed. There's lots of stuff to be worked out, so unfortunately it will be a while before it's done.
  • "no reward for OGR"? But you do get the chance to contribute however fractionally to the sum total of human knowledge, and may even get mentioned in the footnote of some obscure maths journal. That's enough for me. Besides, any mathematician with a big white beard [usc.edu] as impressive as Dr Golomb's is deserving of support.

    Of course it's up to you, but I'll be chipping in my free cycles.

  • Another worthy distributed math project is GIMPS [mersenne.org] (Great Internet Mersenne Prime Search).

    More on Mersenne primes [utm.edu] here.

  • Well, considering that I already have 70 MB of logs, and my P166 takes close to 2 hours for each run, I don't really mind that. And there is so much fun you can have with relational databases (although PostgreSQL is object-relational, if I understand correctly). I want to test it!!
    =)
  • At least Joe Public programmer can see why sorting algorithms are useful. I am not suggesting that a GA will come up with a better sorting algorithm overall, although it might come up with a sorting algorithm which is better in specific areas.

    --

  • I just checked the d.net homepage and it now says that they're starting on Tuesday.

    Wonder what happened there; I guess I'll have to go onto IRC and ask.
  • We already know the fastest possible generalized sorting algorithm

    Prove it!

    I mean - until you can prove the minimum amount of work needed, you can't assume that the best algorithm found so far is the best.

    What I think would be really interesting is if we could evolve algorithms for cracking RSA, and simultaneously get a bunch of mathematicians to work out the minimum that a successful algorithm would need to do. Only when these two "values" are the same or nearly the same can we say we have found the best algorithm.

    Also, it is quite likely that for certain problems (we can't rule out RSA cracking) there is NO best algorithm. There are some mathematical questions that CANNOT be answered - EVER.

  • I don't question the value of astronomy, but this is dealing with such an obscure area that I really doubt it will build up much interest. Now, evolving a walking algorithm - that is something where everyone could visibly see the results, and it would likely build up much more interest and support.

    --

  • Intel's Pentium and higher CPUs support an "HLT" instruction, which halts the processor until the next interrupt arrives. It is comparable to a low-power "standby" mode for the CPU and uses less energy (and therefore generates less heat) than if the CPU were actually executing "real" x86 code. The "idle loop" of most modern operating systems just executes HLT instructions (if the processor supports it). Windows 95 and 98 do not do this, but Windows NT and Linux do. I'm not sure about other operating systems.
  • Gamma Flux is a nice project, but my machine kept overheating. :( Has anyone else experienced this problem ?
  • If you read our mission statement [distributed.net], then you will see that distributed.net is all about having a big (if not the biggest) distributed penis. It's not about the projects we're running, but about how we can get these projects done, in other words, "how can we build that largest computer in the world." That's why, besides keycracking contests like RC5, DES and CSC, we now do mathematical projects, and there's still a number of possible (non keycracking) contests on our todo-list.

    Ivo Janssen
    ivo at distributed.net

  • You're right, "proving" that we can crack encryption techniques isn't that interesting to a mathematician. I think the real value of the effort was in demonstrating to the folks who don't understand or believe the argument that DES is insufficient for encryption.
  • This spawned further development, at least; Avida [caltech.edu] appears to be based on Tierra, and was last updated in August 1998.

    But, back to Tierra. Tom Ray was the motivating force, if you want a contact point. The networked version hasn't been released, as far as I know; the other version is released under an open source license, copied below from the original location [santafe.edu]:

    1) License Agreement

    Tierra Simulator V5.0: Copyright (c) 1990 - 1998 Thomas S. Ray

    Tom Ray, ray@udel.edu ray@santafe.edu ray@hip.atr.co.jp (the bulk of the code)
    Joseph F. Hart, jhart@hip.atr.co.jp (general programming, Amiga support)
    Matt Jones, mjones@condor.psych.ucsb.edu (Mac support)
    Agnes Charrel, charrel@int-evry.fr, (tping code for network version)
    Tsukasa Kimezawa, kim@hip.atr.co.jp (socket code for network version)
    Kurt Thearling, kurt@think.com (CM5 adaptation, parallel creatures)
    Dan Pirone, cocteau@life.slhs.udel.edu (frontend, crossover)
    Tom Uffner, tom@genie.slhs.udel.edu (rework of genebanker & assembler)

    If you purchased this program on disk, thank you for your support. If you obtained the source code through the net or friends, we invite you to contribute an amount that represents the program's worth to you. You may make a check in US dollars payable to Virtual Life, and mail the check to one of the two addresses listed below.

    This is license agreement:

    The source code, documentation, and executables can be freely distributed

    The source code and documentation is copyrighted, all rights reserved. The source code, documentation, and the executable files may be freely copied and distributed without fees (contributions welcome), subject to the following restrictions:

    This notice may not be removed or altered.

    You may not try to make money by distributing the package or by using the process that the code creates.

    You may not prevent others from copying it freely.

    You may not distribute modified versions without clearly documenting your changes and notifying the principal author.

    The origin of this software must not be misrepresented, either by explicit claim or by omission. Since few users ever read sources, credits must appear in the documentation.

    Altered versions must be plainly marked as such, and must not be misrepresented as being the original software. Since few users ever read sources, credits must appear in the documentation.

    The following provisions also apply:

    Virtual Life and the authors are not responsible for the consequences of use of this software, no matter how awful, even if they arise from flaws in it.

    Neither the name of Virtual Life, nor the authors of the code may be used to endorse or promote products derived from this software without specific prior written permission.

    The provision of support and software updates is at our discretion.

    Please contact Tom Ray (full address below) if you have questions or would like an exception to any of the above restrictions.

    If you make changes to the code, or have suggestions for changes, let us know! If we use your suggestion, you will receive full credit of course.

    THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

  • We gave ourselves another 6 hours to do last-minute proxy upgrades and what-not, so the new time to 'flip the switch' is 0600 GMT. If you'll be asleep at that time, fear not: your clients should automagically connect at 0615 GMT and grab OGR blocks, assuming that you've enabled the contest.

    Sorry for any 'cowfusion' }:8)
  • I like this kind of distributed work better than plain old cracking keys, because this is real information that could actually end up in a product or just plain bettering our understanding of the universe.

    I like SETI@home for the same reason. (Although, in the end it'll be either a strike-out or a home run).

    Don't get me wrong, I appreciate the community demonstrating the weakness of small-key encryption, but when we are done, what do we have to show for it? I guess leverage against weak-crypto-heads.

    I like this useful science/math stuff better.
  • Tierra is somewhat interesting, but it has been around for ages, and while interesting at first, it seems to have stagnated. Tierra is interesting because it leaves evolution to itself: the organisms are given an environment in which they can exist, and they breed and mutate as a result of that environment. In most genetic programming experiments the breeding and mutation is much more contrived, somewhat akin to breeding horses, where individuals are deliberately selected according to certain criteria (although normally this "breeding" is performed by a computer, which blurs the distinction). The problem is that I don't think experiments like Tierra scale very well; after a while they converge to a few different types of organism and then things stay more or less the same at a macro level (much as has happened with our biosphere!). I think there is a place for the more contrived GP experiments, as they permit a much wider variety of stuff to be evolved.

    --

  • "When this happens, you will be able to say "I was part of that". " Yeah, I know this is a joke, but the prevailing attitude seems to be: "why not, a use will be found." Thank you to the person(s) who gave real examples.
  • Nothing really, but nobody questions spending money on music, movies etc.
    Indeed. I liked the comparison that Waterworld cost more than NASA's failed Mars Polar Lander project.
  • I'm currently taking John Koza's (the inventor of GP) class on genetic programming at Stanford, and his company has built a 1000-node Beowulf cluster to solve GP problems [genetic-programming.com]. Genetic programming is a powerful, generic, and highly parallelizable method of solving difficult problems - GP has already managed to produce some patentable designs, and distributed.net could potentially make a real contribution to humanity if it were to apply itself to GP or GA.
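To give a flavor of why GP/GA parallelizes so well: each individual's fitness can be evaluated independently, which is exactly the shape of work a distributed.net-style client farm wants. A toy genetic algorithm for the classic "OneMax" problem (evolve a bitstring of all 1s) — an illustrative sketch, not Koza's system:

```python
import random

random.seed(0)  # reproducible run

def evolve_onemax(bits=20, pop_size=30, generations=80):
    """Toy GA: maximize the number of 1 bits in a bitstring ('OneMax')."""
    pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)       # fitness = number of 1 bits
        parents = pop[: pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, bits)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(bits)] ^= 1  # single-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)

best = evolve_onemax()
print(sum(best))  # typically reaches or gets very close to 20
```

In a distributed setting, only the fitness evaluations (here trivially `sum`, in real GP the expensive part) would be farmed out to clients; the selection and breeding stay on the keyserver side.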

  • That would be a really cool project. Since it was written to scale to a 1000-node Beowulf cluster, how much harder would it be to write a distributed.net core to spread the GP work across the world? Since you're taking the class, and are obviously interested, have you talked to your prof and the d.net people? Maybe you could bring GP to d.net. It would be a nice project that I wouldn't mind dedicating CPU time to...

  • If all you can do is compare two keys at a time, to tell which one is bigger, then any sorting algorithm has a worst case of O(n lg n). And, since there are already algorithms that achieve this worst case, yes, we've found the best there is.

    Not really. Quicksort has the same worst case and "average" (from an analytical, random-input standpoint) performance as mergesort, but quicksort performs better "in practice". Worst-case or even average-case analysis doesn't tell you everything.

  • Does anyone know if d.net's OGR client uses some sort of elegant mathematical technique, or is it just another brute-force effort?
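For reference, the fully naive approach is brute force over candidate mark placements. Whatever pruning d.net's actual core uses, a minimal sketch of the problem (hopelessly slow beyond a handful of marks, let alone 24) looks like this:

```python
from itertools import combinations

def is_golomb(marks):
    """All pairwise distances between marks must be distinct."""
    diffs = [b - a for a, b in combinations(marks, 2)]
    return len(diffs) == len(set(diffs))

def optimal_golomb(n_marks):
    """Brute force: try each length in increasing order until a ruler fits.
    The n(n-1)/2 distances are distinct positive integers bounded by the
    length, so the length is at least n(n-1)/2. Exponential in n_marks!"""
    length = n_marks * (n_marks - 1) // 2
    while True:
        for inner in combinations(range(1, length), n_marks - 2):
            marks = (0,) + inner + (length,)
            if is_golomb(marks):
                return marks
        length += 1

print(optimal_golomb(4))  # (0, 1, 4, 6) -- the known optimal 4-mark ruler
```

The search space grows combinatorially with the number of marks, which is why OGR-24 is a job for a planet's worth of idle CPUs rather than a for-loop.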
  • Ivo from distributed.net [distributed.net] made a more limited version of the following point: by saying "Nothing is proved by achieving it," you implicitly create an environment where people wander around talking about things rather than adding (albeit in some small way) to the total of human accomplishment. (Sort of like posting to Slashdot all day rather than doing productive tasks. ;) "Hey! Why bother taking out the trash?! '[W]e already know exactly how difficult and statistically how long it will take'!"

    I agree with you that the folks at Project Lightbulb [sourceforge.net] are doing more interesting things. Then again, I'm more interested in Open Source ("Free", whatever) Software than I am in number theory. (Although I think number theory is neat, and lots of fun.)

    But the point needs to be made, and by someone other than our good man Ivo, that no one associated with the OGR project [distributed.net] is wasting their time. Some people like numbers more than they like modular programs. At any rate, writing distributed computing programs is a lot different from running distributed computing projects.

    #include high_horse.h
    {
    Why do people - myself included - think they can blithely dismiss a problem if they know how to classify it?
    }

  • Not really. Quicksort has the same worst case and "average" (from a analytical, random input standpoint) performance as mergesort, but yet quicksort performs better "in practice".

    That's wrong. Mergesort is O (n log n) worst case. Worst case quicksort is Omega (n^2). And to make it worse, common implementations of picking the pivot element (first element, last element, median of the first three elements) have sorted inputs as their worst case (that is, they produce their own worst case input). Even if you pick a pivot at random, there is a non-zero chance you always pick an extreme, leading to quadratic behaviour.

    Now, you *can* find a median of a set in linear time, and using such a method to find the pivot leads to a worst-case O (n log n) sorting algorithm. However, the overhead is so much, the resulting algorithm will be slower, more complicated, and certainly less elegant than either mergesort or heapsort.

    References:
    Knuth, D.E: The Art of Computer Programming, Vol III, Sorting and Searching, 2nd edition, Addison-Wesley, 1998. ISBN 0-201-89685-0.
    Cormen, T. H., Leiserson, C. E. and Rivest, R. L.: Introduction to Algorithms MIT Press, 1990, ISBN 0-262-53091-0.
    Hoare, C.A.R.: "Algorithm 63, Partition; Algorithm 64, Quicksort" Communications of the ACM, Vol 4, 1961, p 321.

    -- Abigail
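Abigail's point is easy to demonstrate empirically. A sketch (not production code) counting pivot comparisons for a naive first-element-pivot quicksort: on shuffled input the count behaves like n log n, but on already-sorted input every partition is maximally lopsided and the count hits the full n(n-1)/2.

```python
import random

def quicksort_comparisons(data):
    """Naive quicksort, first element as pivot; returns the pivot-comparison count."""
    comparisons = 0
    def qs(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        comparisons += len(rest)           # each element is compared to the pivot
        lo = [x for x in rest if x < pivot]
        hi = [x for x in rest if x >= pivot]
        return qs(lo) + [pivot] + qs(hi)
    qs(list(data))
    return comparisons

n = 300
shuffled = random.Random(1).sample(range(n), n)
print(quicksort_comparisons(shuffled))  # on the order of 2n ln n: a few thousand
print(quicksort_comparisons(range(n)))  # sorted input: n(n-1)/2 = 44850, quadratic
```

This is exactly the "produce their own worst case input" behaviour described above: the sorted input sends every element to the same side of every partition.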

  • That's wrong. Mergesort is O (n log n) worst case. Worst case quicksort is Omega (n^2).

    Right, I forgot. But the point remains that worst-case or average-case analysis doesn't tell you everything you need to know about an algorithm's performance - even though (as you point out) quicksort has an inferior worst case, it performs better 'in practice'.
  • I've been running the OGR client on two boxes since last night and I have done 20% of each packet on each machine. Are the packets just enormous, or is the client just really slow? I would have done about 100-200 RC5 packets on each machine in the same time.
  • This is a test.

  • I thought this might be interesting to those who are a little bored with d.net. I found a new distributed processing program called ProcessTree Network. It just started and is not quite up and running yet. You'd be processing video animation, weather models, scientific and corporate research projections, cryptographics, and any other large computing jobs that would benefit from a fast turnaround. ProcessTree is the Internet's first 'for-pay' distributed processing network: you get paid for your CPU cycles! You can find out more at: http://www.processtree.com/?sponsor=505
  • Just wait for the Cambrian explosion!
