Future Of Internet-Based Distributed Computing
miss_america writes: "CNN is running an article about how the Internet has fueled distributed/parallel computing. It talks about the limitations, implications and possibilities of internet-based distributed computing. The article highlights UC Berkeley's SETI@home project, Distributed.net, and the ProcessTree Network."
Donated CPU Time (Score:2)
I think it would be an interesting avenue to pursue for the people running these sites. The only thing better than getting paid is getting paid by the government.
Re:Distributed computing... (Score:1)
Maybe SETI is actually used by the NSA to crack the higher-bit encryption that they're afraid of. Hell, with 30,000 years or whatever of computing power that SETI has racked up so far... we've helped the NSA crack every piece of 1,024-bit encrypted data gleaned from the Echelon project!
I never thought I'd see the day when I was tricked so easily into turning myself in. :(
Rader
Lotto Fever (Score:1)
joel
Security for Participants and Projects (Score:3)
What CNN doesn't talk about is security for the participants' machines. Open source is helpful, because you can see what you're running, and people can find bugs in it, but that works far better for the first few special projects like GIMPS, distributed.net, and SETI than it will for running arbitrary code in a large distributed-processing industry. The worst case would be malicious distributed-processing code (either viruses or simple DDoS applications), but even non-malicious code with buffer-overflow bugs could be a real disaster, both for the PC users and for whoever their machines might be used to target. It's possible to be somewhat safer by using sandboxed computing environments, such as Java, so everybody knows their machine will be safe, but those tend to be much slower than running compiled native applications. This can be improved somewhat by using standard compiled libraries, e.g. for bignum calculations, but it's still a wide-open problem.
Are there any environments you know about that are safer, or safe enough and faster?
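Short of a real sandbox, you can at least cap what a native work unit can consume. A minimal sketch in Python, assuming a POSIX host (binary and file names are placeholders); note this is resource containment only, with no syscall filtering, so it limits damage rather than preventing it:

    import resource
    import subprocess

    def run_work_unit(cmd, cpu_seconds=3600, mem_bytes=256 * 1024 * 1024):
        """Run an untrusted native work unit under hard CPU/memory caps (POSIX)."""
        def cap():  # runs in the child process, just before exec
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
            resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        return subprocess.run(cmd, preexec_fn=cap, capture_output=True)

    # e.g. run_work_unit(["./crunch", "block0042.dat"])  -- placeholder names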
No need for distributed religion (Score:2)
No, people need to be passionately involved to install distributed clients
When a new employee joins our organization, he or she gets a computer with a "corporate image" on it; an approved operating system (NT, Linux, or Solaris) and the associated applications. If we had a corporate need for some sort of distributed computing, the client could be added to the image, so it would be part of every PC on every desk (or in every lap). With distributed administration tools, such clients could even be installed retroactively. It's the company's computer; is it so wrong for the company to direct its use? (Assume they're smart enough to set this up so it doesn't screw up employee productivity, which is more important than "computing.")
I think this model might have been used by the staff of the company that did the graphics for Babylon 5. I wouldn't be surprised if the NSA already does this. --PSRC
Re:What's the price of my CPU time? (Score:2)
Re:The profitability of distributed computing-overrated (Score:1)
Be very cautious about writing off powerful technology because you can't see a need for it.
parallel vs. serial distribution (Score:1)
Distribution can still work where parallel distribution fails. Modularize the program (not the data) into routines (as you would normally do today) and then distribute the routines across the Internet. The benefit is that routines can run on the system most capable of performing the task (better speed or storage), and you don't have to have every possible data processing routine on your local system.
A system for distributed serial computing is currently being developed as "Piper":
http://theopenlab.org/piper [theopenlab.org]
The neat thing about Piper is that it makes use of standard UNIX I/O, or "piping", and allows piping to be done over the Internet. In that sense, Piper networks are like Internet-distributed shell scripts.
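To make the idea concrete, here's a toy single-hop sketch in Python (my own illustration, emphatically not Piper's actual protocol): a host listens on a port, pipes whatever arrives through one command, and streams the result back.

    import socket
    import subprocess

    def serve_stage(port, command):
        """One pipeline stage: pipe each incoming connection through `command`."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                # The sender shuts down its write side to signal end-of-input.
                result = subprocess.run(command, stdin=conn.fileno(),
                                        capture_output=True)
                conn.sendall(result.stdout)

    # serve_stage(7000, ["sort"])  -- this host becomes the "| sort" of the pipe

You can already fake the effect with stock tools (`cat data | ssh hostA sort | ssh hostB uniq -c` is a two-hop Internet pipeline); presumably Piper's value is in managing that plumbing for you.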
Piper is a collaboration between 4 (possibly soon to be 5) GPL'd projects and will be competing directly with M$ .NET in some aspects. Contributions are very much welcome!
Jeff
--
This sort of thing has cropped up before. And it has always been due to human error.
Re:What's the price of my CPU time? (Score:1)
Exactly! I think that this is the #1 reason for participating in a distributed project.
Think about all the geeks who have spent weeks tuning their computers, overclocking and whatnot. Now they finally get a chance to prove that they are Real Men with Real Computers that kick some ass. What else are they going to use those giga-flops for?
Re:list of distributed projects (Score:1)
whoa, slow down a second... (Score:2)
it's unpleasant enough to have to sit through commercials while I'm watching cable TV; let's not add another channel where advertisers can reach us while we're doing a company a favor.
Re:Probably more than you suspect (Score:2)
So turn off your printers at night too.
Re:power cycling is very stressful (Score:1)
Another One (Score:2)
Re:I am not participating currently (Score:1)
I disagree. Recently (over the past month or so), I put several large computers worth many millions of dollars to work on distributed.net. Specifically, we're talking about multiple Sun E10Ks, multiple IBM SP/2 clusters, and a small Beowulf cluster.
While it did bump my individual stats up into the top 30-ish during the days I did this (and made me _Really_ wonder what those other "individuals" above me were running), my input was still massively outdone by the rest of distributed.net as a whole.
Based on this, I don't really think it is possible to buy (for any remotely reasonable amount of money) a general-purpose hardware solution that can match distributed.net.
Re:What's the price of my CPU time? (Score:1)
I might actually save money doing this, because I don't need to switch on my heater very often.
If I want to sell my computer time, I would need to take into account the following costs:
* Power consumption
* The cost of my Internet connection
* Depreciation on computer hardware
* The cost of my labour in setting up and running the operation
I would then work out how much it costs to run my computer for 24 hours and add a 200% markup to calculate a reasonable selling price for my computer cycles. The wholesale cost could work out as high as Au$40 to Au$50 per month, assuming the computer does nothing else.
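In code, the arithmetic is trivial (every figure below is an assumption; plug in your own):

    power_w      = 150          # average draw in watts (assumed)
    tariff       = 0.12         # Au$ per kWh (assumed)
    hours        = 24 * 30      # one month, running flat out
    power_cost   = power_w / 1000 * hours * tariff    # ~Au$13
    net_share    = 10.00        # slice of the Internet bill, Au$ (assumed)
    depreciation = 1200 / 48    # Au$1200 box over 4 years: Au$25/month
    labour       = 0            # add your own rate times setup/admin hours
    monthly_cost = power_cost + net_share + depreciation + labour   # ~Au$48
    asking_price = monthly_cost * 3   # cost plus 200% markup: ~Au$144
    print(f"cost Au${monthly_cost:.2f}/month, ask Au${asking_price:.2f}/month")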
If you want to sell your computer time, remember that your costs could be higher than you think. And when selling anything to a multinational corporation, adopt a Ferengi attitude: always sell for a profit.
--
The Grid (Score:1)
People need to stop looking at the d.net/SETI@home problems as the only model for Internet computing. They aren't that hard as problems go; what makes them neat is that they've got lots of CPUs. (SETI is cool because it's space and aliens and everything, but RC5-64 is just plain stupid -- they're proving that 64-bit RC5 is 256 times harder to crack than 56-bit RC5. Yawn.)
Numerical accuracy is a concern. Latency is a concern -- but not for a huge set of problems. You don't need a T3E for Monte Carlo simulations, and you shouldn't try to put your finite-element simulations all around the world. Networks are getting faster and faster, so code size is really not an issue today for anyone on a real network (i.e., vBNS). Data size can be a problem, but again, networks are getting faster, and you can prestage a lot of the data. If your code is too sensitive to risk distributing, then no amount of technological progress is going to change that. User security is not that difficult a problem -- it's not too hard to sandbox an application on a decent OS. And as for FORTRAN, I don't see what the problem is. Processors don't run C or FORTRAN or Pascal, and the FORTRAN compilers still produce some pretty tight code.
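Monte Carlo really is the poster child here: work units are completely independent, so latency is irrelevant. A toy sketch (estimating pi stands in for a real simulation; the loop over seeds stands in for remote nodes):

    import random

    def work_unit(seed, samples=100_000):
        """One node's job: count random points landing inside the unit circle."""
        rng = random.Random(seed)
        hits = sum(1 for _ in range(samples)
                   if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
        return hits, samples

    # The server just sums whatever trickles back, whenever it arrives:
    results = [work_unit(seed) for seed in range(8)]   # stand-ins for remote nodes
    pi_estimate = 4 * sum(h for h, n in results) / sum(n for h, n in results)
    print(pi_estimate)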
The Internet makes great sense for high-performance computing, for the right problems.
Re:I am not participating currently (Score:2)
If I were an oil company with hundreds of offices and tens of thousands of computers of a wide range of models, and I had a computing problem that could be solved by either buying a $120M supercomputer or developing a distributed protocol, I'd seriously look into the distributed protocol. After all, most computers are idle most of the time. Unfortunately, there aren't many problems for which both a supercomputer and a distributed protocol would be viable solutions.
-- Abigail
Re:Why this is usually useless (not) (Score:1)
I think that a lot of posters are missing the point.
This is not going to be useful for 'traditional' supercomputing stuff -- no one is going to be doing a lot of cosmological simulations, climate modelling, or aerodynamics simulations on a system like this.
But there are applications that are 'embarrassingly parallel' that this will work for. Ray tracing. Image rendering. Certain classes of optimization problems. And these are applications with a lot of industrial use -- so there are quite likely to be people willing to pay money to have them done.
Companies that do a lot of those sorts of computations would be better off getting their own cluster, maybe even by setting something like this up on their own machines. But if they're only going to do a few runs, it would be silly for them to do so. This is another option for them.
Whether it is a useful option or not is going to depend on what sort of turnaround they're going to get on jobs, and what it will cost them to run the jobs. These aren't unrelated. The pay structure will be tricky. Too much, and this option isn't really very attractive (the only conceivable advantage of this setup is that it would be cheap). Too little, and people won't volunteer their computer time.
Re:Wrong! (Score:2)
While the number of floating point operations per second surely has some merit, FLOPS speed is certainly not the strong point of supercomputers. A supercomputer is a device that turns a calculation problem into an IO problem. The ability of moving shitloads of data around in small units of time is what makes a supercomputer a supercomputer. In the foreseeable future, the bandwidth of the Internet isn't going to approach even a tiny fraction of the bandwidth of the IO channels in a supercomputer.
-- Abigail
Re:D.Net is my favorite... (Score:1)
As cetan pointed out, the GUI was removed so win32 users would get their new clients faster. This was because the GUI version was a complete fork of the code and was unmaintainable in the end. By the way, what exactly changed? Only the config; the main screen was text-based all the time.
You can get the moo-ing back with a 3rd-party application -- 3rd-party meaning that it's not part of the official distribution. It is code by BovineOne though, yes, the same guy who helps code the client.
Almost all source is open, go to http://www.distributed.net/source [distributed.net] and write your own wrapper. All code you need to interface is in there.
If you think you can solve our security problems, I invite you to take a look at our operational code authentication document [distributed.net] and help us out!
Ivo Janssen
ivo@distributed.net
distributed.net staffmember
Re:power cycling is very stressful (Score:1)
Re:LIVERMORE LABS DISCOVER NEW ELEMENT (Score:2)
Oblivium
---
Re:Security for Participants and Projects (Score:1)
Re:Latency and distributed religion (Score:2)
How would you know? Nobody has any data to indicate that it is indeed worth only a few pennies a month. You are assuming that ProcessTree would give any packet to anyone, and so everyone will want one? I am sure they keep track of who is reliable/fast and who is not and distribute their load accordingly. A load-balancing scheme is definitely going to be in place.
Regarding motivation, the user is going to see that he has nothing to lose and everything to gain, and just sign up. And people looking for relatively cheap computing might want to consider this, as opposed to running jobs on their local supercomputer cluster (which is frequently overloaded anyway). As long as there is enough demand, there will be supply.
Probably more than you suspect (Score:1)
Here's a "AskSlashDot" question. Should you turn off your PC when not in use? In the olden days it seemed like it was better to leave them on, but maybe that is not true anymore. Maybe it never was true.
Byzantine generals (Score:2)
The Byzantine Generals problem deals with exactly what McNett needs:
The Byzantine generals problem is formulated similarly. One formulation (the closest to this) is: N generals are on a hilltop, about to attack a city. K are traitors, who will interfere with any protocol in the most damaging way possible. They must reliably agree on some piece of data (the time to attack the city). Here is a link [caltech.edu] with some explanations and implementations of the solution.
A commercial "Distributed.com" would have a simpler problem, because it can a) reliably authenticate a computer's identity, so it knows if two messages come from the same computer, and b) assume that the server isn't a traitor. This will severely reduce the level of redundancy necessary. Still, they must deal with truly malicious nodes, whereas Distributed.net has only had to deal with faulty ones.
As for granulating the data so that K traitorous nodes cannot glean something useful from the data, this should be interesting information theory. I would think that adding some garbage data to calculate from, along with the real stuff, might be a decent cost/security trade-off.
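The cheap end of that trade-off is plain majority voting: issue every work unit to 2K+1 nodes and only accept a majority answer. A sketch of the server side (my own illustration, not anyone's deployed protocol):

    from collections import Counter

    def accept_result(answers, k):
        """answers: list of (node_id, result) for one work unit, len >= 2k+1.
        Tolerates up to k malicious nodes by requiring a k+1 majority."""
        assert len(answers) >= 2 * k + 1
        tally = Counter(result for _, result in answers)
        result, votes = tally.most_common(1)[0]
        if votes >= k + 1:
            return result
        raise ValueError("no majority -- reissue the work unit to fresh nodes")

With at most k liars among 2k+1 replies, the k+1 honest answers always agree, so the majority answer is always the correct one; the cost is doing every unit of work 2k+1 times.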
I think they'll be disappointed (Score:2)
As Mr. Old states in the article, these codes just don't lend themselves to this kind of high-latency, low-communication processing. In fact, to the best of my knowledge, all of the "potential users" the article mentions (seismic analysis, structural analysis, fluid dynamics, stress/crash testing) do not scale well AT ALL under this kind of system because the communication needed is far too frequent.
Don't get me wrong, I think internet distributed computing has a future doing certain, very specialized jobs like rendering. I just don't see it becoming the "next big thing" for scientific computing anytime in the near(or even somewhat near) future.
Internet HPC might not be wise. (Score:2)
Parabon Computation (Score:2)
-spc
It's called ILOVEYOU! (Score:2)
Re:What's the price of my CPU time? (Score:1)
No kidding. I'm sick of it myself. It's like the MCI "Friends and Family" deal all over again, which I also hated like mad. I've had at least four geek friends of mine approach me and ask if I'd like to run some alien program on my computer and get paid for it. When I realized I'd have to be a salesman just like them, I said, "No way!" (Or was that, "You suck!"?)
But then I gave in. But I'm sure as heck not soliciting anybody to be a Process Tree "partner" under me, no way no how.
</rant>
(By the way, my Process Tree partner number is 19291.)
A question, Mr. Eleven: (Score:1)
OR
does everyone disagree with your indentation style, including your employer?
Re:D.Net is my favorite... (Score:1)
Ah yes, SETI, the greatest waste of computer power (Score:2)
But the graphics sure are neat (and note, they don't run on NT servers because they need 256 colors DUH!)
Goddamn waste of time, and they certainly don't get MY machine time.
Try out Linux distributed computing today! (Score:1)
My company, Popular Power [popularpower.com], has had commercial distributed computing software out since April. We just put out a Linux version [popularpower.com] in response to a Freshmeat petition [freshmeat.net]; check it out!
Our system is pretty neat; we're doing real work (researching flu vaccines), and our client is truly general purpose in that we can switch the kinds of work we're doing on the fly with no re-install. We're lining up customers now; we'll switch over to paying work as time goes on. We're also planning an open source release of the client software.
I truly think this kind of computing, along with other distributed systems like Gnutella, is the future of the Internet. For a good overview of this field, check out Howard Rheingold's article in the new August Wired, or this Wired news article [wired.com].
Agorics (Score:1)
http://www.agorics.com/library.html [agorics.com]
Re:power cycling is very stressful (Score:1)
Of course, all these tubes are kept hot any time they are not unplugged, right? Or is that just televisions?
Mersenne Primes (Score:2)
I think the best distributed processing project I've been involved with is GIMPS [mersenne.org], the Great Internet Mersenne Prime Search.
Mersenne numbers are numbers of the form 2^p-1 (2 to the pth power, minus 1). Generally, when Mersenne numbers are mentioned, Mersenne numbers with prime exponents are what is actually meant. The Mersenne number 2^p-1 is abbreviated Mp.
A Mersenne prime is a Mersenne number Mp which is prime. p must itself be prime, otherwise Mp has a trivial factorization: namely, if p is divisible by a and b, then 2^p-1 is divisible by 2^a-1 and 2^b-1. More generally, gcd(c^a-1, c^b-1) = c^gcd(a,b)-1.
So basically, what it boils down to is that you can test the primality of a Mersenne number a lot faster (using a Lucas-Lehmer test) with a computer, and find REALLY big prime numbers. For example, the biggest prime number found to date is the Mersenne prime with p=6972593, which has 2,098,960 digits in it.
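The test itself is short enough to show whole: Mp is prime iff s(p-2) = 0 mod Mp, where s(0) = 4 and s(i) = s(i-1)^2 - 2. In Python (fine for small p; GIMPS uses FFT-based squaring to make this feasible for p in the millions):

    def lucas_lehmer(p):
        """True iff the Mersenne number 2^p - 1 is prime (p an odd prime)."""
        m = (1 << p) - 1            # Mp
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # Sanity check against the known Mersenne prime exponents below 32:
    print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 29, 31) if lucas_lehmer(p)])
    # -> [3, 5, 7, 13, 17, 19, 31]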
The EFF is offering a $100k award [eff.org] to the first person to get a 10M digit prime number.
I highly suggest you switch from boring old D.Net or SETI@Home and go for finding big prime numbers.
Re:Security for Participants and Projects (Score:1)
Yes. EROS [eros-os.org] can run untrusted native code at full speed in a confined sandbox. Unfortunately it's still at the prototype stage IMO.
Re:Condor (Score:2)
We also have been used (using loads and loads of Linux machines, I might add) to solve some extremely massive [wired.com] optimization problems (using over 1000 non-dedicated -- i.e., desktop -- machines at one time). The problem in question had been around for 32 years, and was solved using Condor in 7 days!
So anyway, on all of those platforms we support checkpointing (restarting a job on another machine) and remote procedure calls (having a job on a remote machine think it's on your machine).
Plus you can download [wisc.edu] Condor right away and get it up and running! It's cool stuff, but then again I might be biased :)
Re:I signed up (Score:1)
Re:A question, Mr. Eleven: (Score:1)
OR
does everyone disagree with your indentation style, including your employer?
Although English is not my first language -- in the last case, wouldn't it be "Disclaimer: Even my employer doesn't agree with me about C indentation style"?
And then I guess you missed: "My employer doesn't do a lot of things with me regarding the C indentation style. Agreeing is one of the things he doesn't do with me."
Moderators: -1, Off topic
Re:The profitability of distributed computing-overrated (Score:1)
Ross Perot started EDS on borrowed time on mainframes... Businesses have tons of needs, but just haven't tapped into commodity computing (i.e., lots of desktop machines).
Besides, you don't necessarily need to have a specialized format like SETI or RC5 to do distributed computing... like I said earlier [slashdot.org], Condor [wisc.edu] works on lots of platforms -- including Linux (and even Alpha Linux too).
Re:D.Net is my favorite... (Score:1)
Fawking Trolls! [slashdot.org]
Process Tree (Score:1)
vmware, anyone? (Score:1)
PRIME (Score:1)
I have a distributed-java server. (Score:1)
SETI is unlikely to succeed (Score:1)
I don't think SETI will find alien radio communications.
If a civilization discovers analog radio, it will eventually discover digital radio, and then compressed digital streams. Compressed digital streams are indistinguishable from random noise. Our own planet will cease analog broadcasting within 5-30 years. So there is only a very short window available for eavesdropping on analog civilizations.
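You can see the effect with a quick Shannon-entropy measurement (a rough sketch; small samples make the estimates noisy): compressed bytes head toward the 8 bits/byte of pure noise.

    import math, os, zlib
    from collections import Counter

    def entropy(data):
        """Shannon entropy of a byte string, in bits per byte."""
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    text = b"the quick brown fox jumps over the lazy dog " * 200
    print(f"plain text : {entropy(text):.2f} bits/byte")
    print(f"compressed : {entropy(zlib.compress(text, 9)):.2f} bits/byte")
    print(f"random     : {entropy(os.urandom(len(text))):.2f} bits/byte")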
Re:Donated CPU Time (Score:1)
SETI could give you an audited receipt stating that you donated, say, 10k MIPS-hours to the project. You would then have the right to write off your 500-MIPS P2 at 20 hours of use.
You could claim it, and the revenue agency would have to accept it, the same way that if you make free T-shirts for a charity, you can write off direct costs (not what you could have sold them for).
Or SETI@home could build an auditor into the client that logs time spent on the project and prints out a receipt locally.
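The local auditor is only a few lines (a hypothetical sketch, not an actual SETI@home feature) -- though for a receipt the tax office would trust, the project's server would have to countersign the log, since a local file is trivial to forge:

    import time

    def timed_work_unit(fn, log_path="donation.log"):
        """Run one work unit, appending its CPU time to a local receipt log."""
        start = time.process_time()           # CPU time, not wall-clock
        result = fn()
        spent = time.process_time() - start
        with open(log_path, "a") as log:
            log.write(f"{time.strftime('%Y-%m-%d')}  {spent:.1f} cpu-seconds\n")
        return result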
Re:What's the price of my CPU time? (Score:1)
--LP
Forum 2000 (Score:2)
--
Re:Wrong! (Score:1)
However, this is true only given roughly equivalent computing power. If you think of distributed.net as a supercomputer, and compare it to a modern one (say, the Cray T3E series or something), you find that:
Of course, there's still the issue that distributed.net-style computing only works well for a small subset of the problems that work well for the Cray. But for these problems, I think it is a faster architecture given the level of participation.
I would now rant for pages about the coming networked computing architectures, where all network terminals sell unused slices of computing power to the highest bidder for micropayments... and that those computing slices might be used by the content/service providers who are serving the contents/services to the terminals, which creates an almost liquid computing environment... but that's all pipe dreams for now.
It's those little white mice... (Score:5)
Place: Distributed.net HQ
Time: the end of a 198-year-long search for the meaning of life.
"... and the answer is: ......"
"42."
"42? What the hell!?!."
Ham on rye, hold the mayo please.
Re:I am not participating currently (Score:2)
It doesn't even cost a whole lot of money. Lots of companies could afford a machine that size or larger.
Re:What's the price of my CPU time? (Score:3)
D.Net is my favorite... (Score:1)
Fawking Trolls! [slashdot.org]
Re:I am not participating currently (Score:1)
Re:Latency and distributed religion (Score:1)
The literature of the field (computational physics) is full of such approximations (FFT, Bessel function transformations, tree codes, multipole methods, and dozens of others that I am less familiar with); however, there are always trade-off issues. Reducing the number of operations or the amount of communication always comes down to throwing out some of the information. What information can be safely thrown out without jeopardizing the validity of the solution is always the meat of the question.
So yes, in some cases the latency can be beaten down with multipole expansions. I simply point out that highly coupled problems exist, they are interesting, and they do not all have 'convenient' geometries.
In the end it comes down to something that is really interesting about parallel computing in general. Parallel computers aren't really general-purpose beasts; there is a huge range of architectures, each with different characteristics. Similarly, there is a huge range of problems and algorithms, and for each class of problem a different kind of computer will be most effective.
Specifically wide area distributed computing will probably never be useful for evolving forward highly coupled dynamical systems because of latency. These systems probably need dedicated machines with stripped down network protocols or even a hardware message passing or shared memory architecture.
However, distributed computing would be marvelous for exploring huge regions of parameter space with 'smaller' problems: if the problem can fit on one machine (even if it takes weeks to solve), then you can try out millions of different initial conditions and really map out the behavior of the system. An example of such a problem from my realm is modeling gravitational lenses. The methodology is to solve a simple problem (raytracing with a general relativistic lens) for a range of parameters for the lens galaxy and its surroundings, then find the model which most accurately fits the image. This has of course been done in parallel for a long time, at one point using floppy disks and a lab full of PCs (sneakernet protocol). Of course that wasn't millions of machines.
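That pattern is almost embarrassingly easy to express. A sketch with a local process pool standing in for the distributed machines (`simulate` and its parameters are placeholders, not my actual lens code):

    from itertools import product
    from multiprocessing import Pool

    def simulate(params):
        """Stand-in for one self-contained model run (e.g. raytracing one lens)."""
        mass, ellipticity = params
        fit_error = abs(mass - 1.0) + abs(ellipticity - 0.3)  # dummy fit metric
        return params, fit_error

    if __name__ == "__main__":
        grid = list(product([0.5, 1.0, 1.5], [0.1, 0.3, 0.5]))
        with Pool() as pool:
            best = min(pool.map(simulate, grid), key=lambda r: r[1])
        print("best-fitting parameters:", best[0])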
They're not just looking for communications. (Score:2)
What's the price of my CPU time? (Score:3)
Distributed computing... (Score:1)
Just one more thing... (Score:1)
Obviously SETI@home is the best case for distributed Internet computing, but I think that they could have done a little more. Beyond the novelty of what it does, there is some real science and engineering behind projects like this. Why not leverage it? I wouldn't be offended to see an ad or two in the client program if I knew that the proceeds were going to support the program that I was supporting by running the software.
Oh, and while I'm thinking about it, I'd also like to point out that this is one case where closed-source development makes very good sense. I know that we'd all like to live in a utopian society where everyone is honest about what they do, but when things like SETI@home and RC5 turn into contests, a few bad people can screw the data up in their misguided quest to "win". By keeping the source closed, it becomes harder to hack the program and ruin the data.
My 2 cents' worth.
=h=
Re:I am not participating currently (Score:1)
"One of the big unsolved problems of mankind is the final storage of radioactive waste and substances that the latter half of the century produced in weapons of mass destruction, power plants and research labs. These highly hazardous materials will be a liability for generations to come, even if we decided to abandon fission here and now....
you can find more info on it at http://www.dcypher.net/projects.shtml [dcypher.net].
BTW, there is actually some benefit from the RC5 project: there have been very few publicly documented projects brute-forcing large keyspaces, so this one might help us understand brute-force cracking a bit more. I agree with you that the OGR project is more beneficial for human society.
Latency and distributed religion (Score:3)
Distributed computing is currently only effective for things like SETI or Distributed.net, where blocks can disappear into distributed space for hours before returning a result. For this reason, I can't see the current level of distributed technology taking off.
The second item, and possibly the most important, is getting people to run a distributed client at all. Think about it: people run SETI@Home because of an almost religious conviction that they might be able to help find extraterrestrial life. With distributed.net, it's all about the geek-romance of brute-forcing huge keys. I can't see people getting passionate about speeding up financial forecasts, or bragging to their friends about how they helped render part of a frame of some undergrad's multimedia project.
People need to be passionately involved to run distributed clients. If you paid people for their distributed time, the total would probably come to a few pennies a month. Most people would spend more than that in their own time simply downloading and installing the program!
Distributed computing on this scale can't be effective unless the users who offer their CPU ticks are passionately involved. Business models based on selling ticks are doomed to fail if they can't capitalize on emotional involvement in distributed projects. Money, as shocking as this may sound, just ain't enough for this application.
Re:It's those little white mice... (Score:1)
Hijacking CPU Time (Score:1)
With this setup, you could just hide the applet as a 1x1 frame on a website and hijack a bunch of cycles.
Hmmm, perhaps I should patent that.
Re:A dangerous precedent (Score:1)
get a clue!!
power cycling is very stressful (Score:2)
There are a lot of factors, but thermal expansion/contraction is probably the most obvious.
Mithral CS-SDK (Score:2)
If you want to start your own project right now, today, go get the Mithral CS-SDK [mithral.com]. It was pre-released a few days ago, and came out of the Cosm project.
It will let you put together a d.net/SETI style project in a few days (I would know). Finding something worth doing is up to you :)
Re:Hijacking CPU Time (Score:1)
Re:binge cringe on the fringe (Score:1)
Re:Latency and distributed religion (Score:2)
What sort of real supercomputing problem requires low latency?
Linpack needs low latency (finding each pivot requires a vertical broadcast) -- but there are other ways of solving the same problem without requiring low latency. Similarly, a naive physical simulation where each CPU has to transfer boundary data each timestep requires low latency, but with a less naive approach the latency issue can be avoided here as well.
What you can't generally do by tricks like this is reduce the need for bandwidth... but given Gilder's law (bandwidth increases by a factor of three each year), bandwidth is soon going to be of negligible cost compared with CPU cycles.
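The "less naive approach" is exactly that bandwidth-for-latency trade: exchange ghost zones of width k, then take k timesteps locally before talking again, so you send one bigger message per k steps instead of one small one per step. A sketch for a 1D diffusion stencil (numpy; the actual communication is left as a stub):

    import numpy as np

    def k_local_steps(interior, left_ghost, right_ghost, k):
        """Advance a width-1 diffusion stencil k steps using width-k ghost zones."""
        u = np.concatenate([left_ghost, interior, right_ghost])
        for _ in range(k):
            u = 0.5 * u[1:-1] + 0.25 * (u[:-2] + u[2:])  # array shrinks by 2/step
        return u  # the new interior; neighbors must now swap fresh ghost zones

    # More bytes per message, far fewer round trips -- which is exactly the
    # trade a high-latency network wants you to make.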
Great Internet Mersenne Prime Search mersenne.org (Score:2)
Why this is usually useless (Score:2)
Just for the reasons described in the article. To rehash them briefly:
So what's left? 3D rendering with procedural textures, genetic algorithms, and proofs of obscure mathematical problems which require a large amount of trial and error. If there is such a thing, anyway... IANAM (Mathematician.)
You might also be able to do some sorts of 3D rendering with bitmapped textures, bumpmaps, and so on, as long as you are sending the same person a sequence of scenes which all use the same textures. The problem is that you want to make very, very sure that any time a user needs new code to solve your problems, they are able to veto it, or at least that it is sent by the most secure method possible. Further, the ONLY THING that any outside user should be able to send you is your datasets -- never new code. While this somewhat limits your ability to work, since you can't really implement a whole VM on the remote systems (due to space and memory constraints), it doesn't hurt you much.
The problem is that as you make a system more flexible, you also make it more insecure. (Does this comment make my code look fat? Ha ha.) And of course, flexibility is what will enable you to actually sell this CPU time to a variety of people -- not just enhance that ability. Without a great deal of flexibility, you lose your ability to adapt to a wide variety of customer scenarios.
list of distributed projects (Score:1)
Check out bottomquark [bottomquark.com] to discuss the latest science news.
GrnArrow
Re:Donated CPU Time (Score:2)
Re:No need for distributed religion (Score:1)
I would be! Can you imagine the shit-hot security they'd need between the client & server? And in the NSA!! That's like letting the chicken lay eggs in the fox's den - those guys wouldn't get any work done, they'd just try to decode sigint all day...
Re:D.Net is my favorite... (Score:1)
Fawking Trolls! [slashdot.org]
Re:Probably more than you suspect (Score:1)
If you use your computer regularly, leave it on, but switch the monitor off. (If you don't use your computer regularly, what are you doing on /.? :-) ) Somebody else mentioned thermal cycling; that's a possible source of damage in the computer, but a more likely problem is that the hard drive will eventually conk out from being spun up/spun down all the time. (Make sure your power-management settings aren't set to spin the HD down, too.)
_/_
/ v \
(IIGS( Scott Alfter (remove Voyager's hull # to send mail)
\_^_/
Condor (Score:2)
Using screensavers is a cool idea and all - but you can only have one screensaver set to run at a time, no? Can I run SETI@home and distributed.net simultaneously? (Not that I'd want to - but I might want to schedule some priorities so each would get equal time while I'm gone for a weekend).
Maybe if Condor shipped with Linux distributions, it'd be easier for this technology to take off?
Re:D.Net is my favorite... (Score:1)
Hey, wait a minute...
Fawking Trolls! [slashdot.org]
Re:D.Net is my favorite... (Score:1)
Fawking Trolls! [slashdot.org]
Re:What's the price of my CPU time? (Score:1)
SETI@Home Security (Score:1)
I think SETI@Home is great. The search for ET life has got to be one of THE coolest things going right now... just think about it quietly by yourself for a few minutes... And the search will go on forever - whether we ever make contact or not.
Re:D.Net is my favorite... (Score:2)
Re:I signed up (Score:2)
Get to the front of the line for paying jobs by building a reputation now. By joining Popular Power during our preview period, you become a charter member, giving you prime positioning for paying jobs when they become available
If that isn't a paragraph ripped right out of "Schemes and Scams for Dummies" I don't know what is.
Floating point is actually very standard (Score:1)
Also, while SETI is highly float-oriented, and nuclear engineering and oil-company problems may also be, crypto and big primes and similar problems are purely integers - you can do just fine on a Celeron.
Re:D.Net is my favorite... (Score:1)
Fawking Trolls! [slashdot.org]
I've already got a ProcessTree account... (Score:1)
They're beta-testing on a voluntary project right now... but soon they might have paying work... we shall see.
slashdot distributed computing site (Score:1)
Something like Anandtech's homepage on different distributed projects. I would be happy to donate my free time to such a project.
I am not participating currently (Score:2)
I lost my interest because the scientific and humanitarian benefit wasn't great enough. distributed.net dangled the carrot that breaking large keys would help force Congress' hand regarding pathetically small key lengths. Now that the current project has been running for an extremely long time, I think the value of that has run out. I just can't think of a good reason for wasting cycles and electricity on a problem that no longer has any scientific or political value.
SETI@home doesn't interest me either -- not because aliens aren't cool; first contact would be an amazing thing, and that's an understatement. But they already have more power than they can use right now, and running a memory-hungry client just isn't worth it for a pathetically small contribution to the project.
The Golomb ruler project is interesting, and it has real-world value.
The new massively parallel computers are even faster than distributed.net, and they have the possibility of even greater future scaling. I think it's easier to build and coordinate a large Beowulf than it is to coordinate a few tens of thousands of hobbyists. Throw in hacking and the occasional/inevitable corruption of projects with bad data, and it becomes apparent that scaling these distributed.net projects is very difficult. I'm not saying that it can't be done, but for a few million dollars you can build yourself a computer faster than distributed.net. If you were an oil company or a scientist working on a meaningful problem, which direction would you take?
The profitability of distributed computing-overrated. (Score:2)
Also, where the *heck* do businesses have massively parallel problems in everyday life? This is a *very* specialized thing. I just don't see it coming.
I signed up (Score:2)
Re:D.Net is my favorite... (Score:2)
Then, they turned around and encouraged others to apply 3rd party applications to restore "eye and ear candy" to the client.
Sounds like it was just a terrible thing to do.
The GUI client wasn't just the CLI with a GUI wrapper. It was a whole 'nother fork to the client mix. It was complicated, it was slow(er) and it caused many delays in the rollout of Win32 clients.
Re:D.Net is my favorite... (Score:2)
There are very good reasons to not open source the code and they are all outlined on the website (at least they were last time I checked).
Re:Latency and distributed religion (Score:2)
The local forces are dealt with easily by using overlapping blocks; the solution to the latency problem for long-distance forces falls out of the multipole method (you were using the multipole method, right?).
What's the problem?
Re:What's the price of my CPU time? (Score:4)
Tell all your friends and family - annoy them with incessant pleas to install some mysterious software on their computer because it will make them money. And what's in it for them? They've got to become salespeople in turn if they want to make enough money to cover their electricity.
No thanks. Pay me fairly by the hour and I'll decide for myself if it's a good deal or not. If you want more people to join the project, then pay more. Simple. Don't make me annoy everyone around me. We're already sick of the make money fast schemes.
Sorry, I was ranting...