Another Distributed Computing Effort: CSC
Armin Lenz writes "DCypher.Net, a newly formed distributed computing effort, takes up French encryption specialist CS Group on their challenge to break their 56-bit CS-Cipher key.
After successfully completing beta testing, the project officially launched Monday, November 8. During the first days only basic stats will be available, but contributors are invited to download the final client and start work ASAP.
"
Is distributed processing our future? (Score:1)
Only supports Windows clients, though. :( (Score:2)
If only a Linux or BeOS client existed, I'd be glad to lend a few extra CPU cycles. As demonstrated by both SETI@Home and Distributed.net, non-Windows clients tend to run faster and with fewer problems - meaning the key would probably be cracked faster than a purely Windows user base could manage.
Oh well. I suppose we have to enlighten the world one step at a time.
Yes! (Score:1)
--
Re:Is distributed processing our future? (Score:2)
Other tasks, such as word processing or web browsing, aren't nearly as discrete as the above types of data analysis. Those sorts of functions lend themselves to a singular solution, namely running as a single process.
Given that we don't push our computers to the limit, I fully expect that in the future we will have software that allows our spare cycles and spare computational power to be harnessed for arbitrary distributed processing (a basic first step would be to develop a generic Java client that could download new classes from a central server). But there will always be tasks that are easier, and more efficient, to run on a single central processor.
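To make that "generic Java client" idea a bit more concrete, here's a rough sketch of my own (the server URL, class name, and the assumption that tasks implement Runnable are all made up; a real system would also need sandboxing and authentication):

    import java.net.URL;
    import java.net.URLClassLoader;

    // Hypothetical generic worker: fetch task code from a central server and run it.
    public class GenericWorker {
        public static void main(String[] args) throws Exception {
            // The code base URL is purely illustrative.
            URL codeBase = new URL("http://example.org/tasks/");
            URLClassLoader loader = new URLClassLoader(new URL[] { codeBase });

            // Assume the server publishes a class implementing Runnable.
            Class taskClass = loader.loadClass("WorkUnitTask");
            Runnable task = (Runnable) taskClass.newInstance();

            // Burn the spare cycles.
            task.run();
        }
    }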
(Disclaimer: I am not a distributed processing expert, only a layman with unsubstantiated opinions.)
Re:Only supports Windows clients, though. :( (Score:1)
Another distributed effort to do the same cracking game? I think that the PrimeNet effort is more fun and probably more "pure".
Oh, well. At least it's not meaningless like the SETI@home joke.
Answer: Nope. (Score:1)
In the same fashion that a GUI can't do everything a CLI can do (and vice versa), you must choose the best tool for the job - not necessarily the 'trendiest' one.
--
Counter productive (Score:1)
Our spare cpu cycles are already spread thin enough between seti@home and RC5, why spread them even thinner?
dox
Oh, just what the world needs (Score:1)
Oh, just what the world needs. Another Distributed Computing effort.
Wouldn't our time be much better spent trying to co-ordinate all of the current efforts, rather than simply reduce the computing power available to each one by throwing another into the pot? There really is a limited amount of computing power available. There are a limited number of people who would want their extra CPU cycles, extra as they may be, to be used that way. Adding another "let's brute-force a crypto key" effort into the pot seems to have no point other than to slow down the work of all of the other efforts out there.
- Drew
Windows Client (Score:1)
We are cracking codes here, not drawing pretty pictures.
Also, with the rise of Linux and other OSes in Europe, I am surprised they took this initial route.
Oh well. It will be their loss of CPU power.
Re:Is distributed processing our future? (Score:1)
For example, say Boeing wanted to design the Next Generation spaceship using genetic algorithms. They certainly could distribute the work for that type of application. And maybe for every data unit done, the home user would get a few pennies...
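As a toy picture of how that might look (entirely made up - the candidate encoding and the scoring function here are stand-ins for a real aerodynamic simulation):

    // Toy sketch of one genetic-algorithm "work unit": score a single candidate design.
    public class FitnessWorkUnit {
        static double score(double[] candidateDesign) {
            double drag = 0.0;                      // stand-in for an expensive simulation
            for (double parameter : candidateDesign) drag += parameter * parameter;
            return -drag;                           // lower drag = higher fitness
        }

        public static void main(String[] args) {
            // In a real system this candidate would arrive from the master server.
            double[] candidate = { 0.3, 1.7, 0.05, 2.4 };
            System.out.println("fitness = " + score(candidate));
            // ...and the result (plus a credit for the contributor) would be sent back.
        }
    }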
Too much distribution? (Score:1)
What's the point? (Score:1)
This is all just a publicity stunt by CS. Their description of the "CS-Cipher Challenge" states that the purpose of this exercise is to demonstrate that brute force is the only possible attack against their algorithm, which is absurd. Either the message will be decrypted by brute force, which just proves the obvious fact that brute force attacks are possible, or it will be decrypted by finding a clever attack on the algorithm. If no attacks are found, that only means that no attacks were found, not that there are no possible attacks against this algorithm.
Sheesh. Maybe I'll start a contest to prove some other negative.
It's not distributed computing... (Score:1)
Dcypher is not distributed computing, it's client-server, as are SETI and d.net. There is a HUGE difference between client-server and distributed computing. Perhaps slashdot should label it cooperative computing instead.
Another on the way (Score:1)
The idea is to design a nanotech computer. All things considered, it might be worthwhile since molecular electronics could provide us with the computing power we need to make future distributed computing projects unnecessary.
How long is it going to take? (Score:1)
How long is it going to take? (Score:1)
Re:Oh, just what the world needs (Score:1)
Yes, it would be much better to work on a unified system. Unfortunately, organic life as we know it (especially male life) does not work that way.
Things are driven by conflict and competition. You can bet had dcypher not appeared, distributed.net wouldn't have beta CSC clients out. Nor would we be able to nuke the planet 200 times over without the Cold War - we'd still be stuck wiping out one species at a time the old fashioned way.
The idea of Cosm [mithral.com] is to unify all Distributed Computing efforts into a common framework. ["common" doesn't really apply to client/server systems] Computers work together wonderfully, but humans have a lot of trouble doing that.
a little friendly competition (Score:1)
Competition is great, but with the right challenge (Score:3)
Moo!
Although some competition would be great among the distributed computing projects, dcypher.net seems to have picked a bad contest to try and get off the ground with. Perhaps more of a "marathon" challenge would be optimal, instead of the "sprint" that CSC provides.
We [distributed.net] had already announced our intent to do CSC, and have an enormous amount of computing power in comparison to the newly-formed dcypher. Dcypher really can't expect to beat us to the CSC key, and after one unsuccessful challenge, their users will likely be unmotivated to stay around.
At this point, our CSC/OGR clients [distributed.net] are only in a beta testing phase; however, based on the few hours that we've been running this public beta, our key-checking rate is at least twice that of dcypher. We'll probably be releasing the final clients in the next week or two, and at that point, our rate will be large enough that we should be able to exhaust the entire keyspace in a few weeks.
Daniel
--
Daniel Baker - dbaker@cuckoo.com - dbaker@distributed.net
Prize distribution? (Score:1)
Keyrate? (Score:1)
Re:How long is it going to take? (Score:1)
Well, if you're going to brute-force an algorithm in the simplest sense, that pretty well means you're playing guess the number. Finding a faster way to do it would mean that there's some sort of weakness in the algorithm, but from the sounds of this distributed computing event, it's probably going to look just like the following:
Client:Is the password '0'?
Server:No
Client:Is it '1'?
Server:No
Client:Is it '2'?
Server:No
.
.
.
Client:Is it '19823745938715903857390857382957'?
Server:Yes! The secret message is:
"Drink Your Ovaltine",
except in parallel.
So anyway, if something's a 64-bit key, that means that you've got 2^64 possible secret numbers. If you've got a 56-bit key, that means you've got 2^56. On average, you're probably going to have to guess about half of those keys before you find the right one. You need some statistics, which I don't have, to figure out more about the chances. If we want to compare the two, we can just say that since the first has 2^64 keys, and the second has 2^56, (2^64)/(2^56)=2^(64-56)=2^8=256. In summary, it takes on average 1/256 of the time to break a 56-bit key that it takes to break a 64-bit key.
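If it helps, here's a toy version of that parallel guess-the-number game (tryKey() is just a stand-in for a real CS-Cipher trial decryption, and the "secret" and block size are made up):

    // Toy brute-force work unit: each client searches one contiguous block of keys.
    public class KeyBlock {
        static final long SECRET_KEY = 19823745L;         // pretend secret

        static boolean tryKey(long key) { return key == SECRET_KEY; }

        static long searchBlock(long start, long count) {
            for (long key = start; key < start + count; key++)
                if (tryKey(key)) return key;
            return -1;                                    // "nothing interesting, master"
        }

        public static void main(String[] args) {
            long blockSize = 1L << 24;                    // ~16 million keys per work unit
            long found = searchBlock(19000000L, blockSize);
            System.out.println(found >= 0 ? "key is " + found : "not in this block");
        }
    }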
I can't imagine either algorithm being better than the other to the point where it makes a big difference, so we can probably assume that the systems were designed by equally competent programmers.
Again, apologies if this is too simple, and please, please don't moderate me to the dungeon if *you* can do the math yourself!
Re:Keyrate? (Score:1)
Re:Only supports Windows clients, though. :( (Score:1)
it lists Linux glibc2, FreeBSD, and Solaris clients.
Dunno about this one... (Score:2)
Now, perhaps I'm just being paranoid, but this sounds too much like a great opportunity for data mining to me. Especially when you consider that a) you will have to register some user information so they can track your computational contribution, and that information will of course be attached to all data the client sends back, and b) you're never going to see the source code for the client, and since you're probably going to be sending back blocks of funky not-really-decrypted text, sniffing the datastream isn't guaranteed to root out any other information they might have coded up and embedded in that data.
I don't know who these people are, but the fact that they're offering for-pay advertising at the very beginning of the project just doesn't bode well. They might have good intentions, but how long will those last when some advertiser offers a check with lots of zeros in exchange not just for banner space but also for the list of usernames/emails of the people running their client?
If a project is going to ask for information about me, as any distributed computation project is almost certainly going to want to, then they just need to stay out of the whole Advertising for Dollars game, especially in the digital world where it's so hard to see what exactly they're doing with your data.
-=-=-=-=-
Re: Submit to Distributed (Score:2)
- Michael T. Babcock <homepage [linuxsupportline.com]>
Open source distributed computing (Score:1)
So, are there any projects which do have full open source clients? I'm not necessarily so worried about their freeness or otherwise, I'd just like to know exactly what they're doing.
Re:Windows Client (Score:2)
yikes (Score:1)
Personally, I spend my computer's idle time finding Mersenne primes [mersenne.org]. Seems a bit more worthy than beating a 2-year-old dead horse.
Re:How long is it going to take? (Score:1)
Only if the message/whatever is encrypted using the same algorithm. Different algorithms have different speeds, meaning that although keyspace is the most important factor, it isn't everything. The question remains: how fast is the CSC decrypt algo compared to RC5?
Re:It's not distributed computing... (Score:3)
Err... You're wrong, and yet right at the same time.. wow, good job!
SETI and d.net and in fact the entire internet are "client-server". The Web is client-server. Telnet is client-server. Nearly every single piece of software on the internet is client-server. It really doesn't say a lot about what the software does, though..
Seti@home and d.net are distributed computing.
Let's define distributed computing, shall we? According to PC Webopedia here [pcwebopedia.com], distributed computing is:
In the specific cases of seti@home and d.net, they are taking a large project, splitting it up into small pieces, and running it all over the place. Now, there may be a problem, as our definition above implies that each "object" running on each system is different. We can define our object as being our code, but we can also, more intuitively, define our object as being our code running on our data. This conforms more to the object-oriented methodology. All the objects inherit the same source code, but carry different data. Each bit of code running on each person's computer is running a different bit of data. This is the whole point, in fact. So therefore, all the objects are, in fact, different instances. There we go. Good enough for me.
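Or, to put the "same code, different data" point in (made-up) code - every participant runs the same class, but each instance gets its own key range:

    // Contrived illustration: identical code, different data per instance.
    public class WorkUnit implements Runnable {
        private final long firstKey;
        private final long keyCount;

        WorkUnit(long firstKey, long keyCount) {
            this.firstKey = firstKey;
            this.keyCount = keyCount;
        }

        public void run() {
            // ...check firstKey through firstKey + keyCount - 1 and report back...
            System.out.println("searching " + keyCount + " keys from " + firstKey);
        }

        public static void main(String[] args) {
            new WorkUnit(0L, 1L << 24).run();           // one "instance" per volunteer machine
            new WorkUnit(1L << 24, 1L << 24).run();
        }
    }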
Damn, I must be pretty bored to respond to that post.. Hmm.. Guess I need a beer.
---
Re:Open source distributed computing (Score:2)
---
Quit bitching D.net lovers (Score:1)
Finally, months after the CSC was launched, *someone* has a client out to work on it. That someone isn't distributed.net. And that's why you all are so mad. d.net isn't the guru of distributed computing.
If d.net hadn't piddled all this time away on rc5, they could have had the thing almost done.
Instead *they* spread themselves too thin over CSC, OGR, RC5, etc...
The heck with d.net. The heck with waiting and waiting and waiting for non-functional beta clients. Finally, the real deal.
TEAM????? (Score:1)
Re:It's not distributed computing... (Score:1)
While I don't fully agree with the definition from Webopedia (a questionable source anyway), I would definitely disagree that d.net or SETI fit even that definition. Not to mention that distributed computing came well before the OO craze.
There is no (or only limited) fault recovery, no interprocess communication, no lookup services, and no process migration.
This of course doesn't make them bad; they just aren't distributed computing. They don't need to be to solve their tasks. Client-server is simple, and solves the crypto or alien-hunting jobs very well. If anything they vaguely resemble parallel computing, not distributed.
Prize Money? (Score:1)
Honestly.... (Score:1)
Re:Prize distribution? - Answer (Score:1)
We'd be happy to win you over as a participant and give you the chance to win the full prize money of 10,000 Euros (roughly $10,500) for finding the correct key!
Re:Honestly.... (Score:1)
Actually, rc5-56 has been dead a long time (Score:1)
dB!
decibel@distributed.net
Re:Competition is great, but with the right challe (Score:1)
Are those weeks real time, or DNET time? Because if they're DNET time, we're looking at months. While I will continue to support DNET, I can understand dissatisfaction with how things work.
On the other hand, it looks as if dcypher.net was made out of complaints about SETI's client. In which case, why didn't they just code cores for distributed and offer them up? It may have taken some time to get the core into the distributed.net client, but it would've been a better thing to do, IMHO.
And the website is eerily similar. It's kinda spooky. My main objection to the whole thing was their not saying how much of the prize money would go to the finder of the key. Now that they've done that, hmm.
Re:Open source distributed computing (Score:1)
re: D.net and Open Source clients (Score:1)
I'd agree that it'd be nice to see what the client is actually doing, but I was content just to watch my outgoing network traffic for a day or two.
Also, why would a non-profit organization want to or need to collect info from people?
Just a few thoughts to ponder
NIVRAM
Re:Honestly.... (Score:1)
I'm not going to say "Join Distributed, we're the best group cracking" or anything like that. I won't push my view on others, but I will say that I have tried other groups (PIhex and SETI@home) and I was not happy with them.
NIVRAM
Re:Is distributed processing our future? (Score:1)
No. Sidestepping the fact that distributed processing doesn't form a limitless computer system (take the current RC5-64 contest: there are about 40,000 participants. Let's be pessimistic and say every participant only uses one computer. Let's also assume there are 1 billion computers in the world, 1 computer for every 6 people. That would mean you could run about 25,000 such contests simultaneously. At the current speed, it takes about 6 years to crack a single message, so even throwing in far more computers than currently participate, you're looking at cracking roughly 4,000 messages a year. That's far away from limitless...), sharing processing power isn't that easy.
RC5-64 and other brute-force cracking attempts are easy to do distributed. It takes the most trivial distributed setup (one master, and a bunch of slaves that all work independently - all the master needs to do is bookkeeping; there's no other communication or synchronization needed). Many other problems aren't easily turned into a distributed equivalent. Other problems only work well in specialized distributed environments, parallel computers, where processors are synchronized and you know in advance when which processor is going to write where in memory. Simulations of fluid flows, for instance.
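For what it's worth, the "all the master needs to do is bookkeeping" part really is about this simple (a bare-bones sketch of my own; the block states and sizes are simplifications, not anyone's actual server):

    import java.util.HashMap;
    import java.util.Map;

    // Bare-bones master bookkeeping: hand out blocks, remember who has what.
    public class Master {
        enum State { UNASSIGNED, ASSIGNED, DONE }

        private final Map<Long, State> blocks = new HashMap<Long, State>();

        Master(long blockCount) {
            for (long b = 0; b < blockCount; b++) blocks.put(b, State.UNASSIGNED);
        }

        // A slave asks for work; no crypto happens here, only record keeping.
        synchronized Long assignBlock() {
            for (Map.Entry<Long, State> e : blocks.entrySet())
                if (e.getValue() == State.UNASSIGNED) {
                    e.setValue(State.ASSIGNED);
                    return e.getKey();
                }
            return null;                               // keyspace exhausted
        }

        synchronized void reportDone(long block) { blocks.put(block, State.DONE); }

        public static void main(String[] args) {
            Master m = new Master(4);
            Long block = m.assignBlock();
            m.reportDone(block);
            System.out.println("assigned and completed block " + block);
        }
    }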
Some problems are impossible to solve. Take for instance the classical election problem. You have two identical processors, with identical software. Elect a leader.
Even harder is the problem of how to split up arbitrary tasks in a distributed environment - and deciding which problems can be solved distributed, and which can't.
Then of course, there are things like nodes and links going down, unknown latency, nodes that cannot be trusted, not knowing your topology, etc, etc. Distributed computing in general is a science - and not an easy one.
-- Abigail
Re:Is distributed processing our future? (Score:1)
They could, but I highly doubt it will be worthwhile for them. The big problem is trust. With RC5 and other cracking contests, trust isn't a major issue. A client saying "I found the key" is easily checked. A client saying "Nope, the key isn't in this block" while the key is there is a problem, but you'll find out eventually; you "just" have to retest the blocks. And with the assumption that the number of people not playing fair is low, the chances that the winning key is given to an unfair person are low.
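The usual defence (a rough sketch of my own, nothing specific to RC5 or dcypher, and the 10% rate is arbitrary) is to quietly re-issue a fraction of the "nothing here" blocks to a second, independent client and compare answers:

    import java.util.Random;

    // Spot-check sketch: re-verify a random fraction of negative reports.
    public class SpotCheck {
        private static final Random rng = new Random();

        static boolean shouldRecheck() {
            return rng.nextDouble() < 0.10;            // recheck ~10% of reported blocks
        }

        static void handleReport(long block, boolean keyFound) {
            if (keyFound) {
                // A "found it" claim is cheap to verify directly: just try the key.
                System.out.println("verifying claimed key in block " + block);
            } else if (shouldRecheck()) {
                System.out.println("re-queueing block " + block + " for an independent client");
            }
        }

        public static void main(String[] args) {
            handleReport(42L, false);
            handleReport(99L, true);
        }
    }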
But it is more of a problem where all the results are actually used. What if a malicious person starts feeding false results? Sometimes it doesn't matter. The first large-scale cooperative Internet crack of a code was RSA-129 by Lenstra et al. That wasn't brute force, but used computing power to initially populate a huge, sparse matrix (it's been years, so I might misremember some details). The final algorithm was robust enough to cope with some percentage of bogus data.
But do you think Boeing will take that risk? Would *you* want to fly an airplane if it was designed using numbers returned by some random script kiddie?
-- Abigail
Re:How long is it going to take? (Score:1)
Server:No
Client:Is it '1'?
Server:No
Client:Is it '2'?
Server:No
Well, if it was like that, the server could do it all by itself. It's more like this:
Slave: Master, oh, master, give me something to do.
Master: hands over box with stuff Here, go through this, and when you are done, report if there's something interesting.
Slave shuffles off to a corner, goes over the box, comes back to the Master sometime later.
Slave: Master, oh, master, I didn't find anything interesting!
Master: oh, you didn't? Well, well, well, what a surprise. Master turns around, fetches a new box. Here, try this box instead.
Slave shuffles off to his corner.
Etc, etc,...
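In code, that conversation is basically the client's whole main loop (fetchBlock(), search() and report() here are placeholders for whatever protocol the real clients actually speak):

    // Sketch of the slave's main loop; the three helpers are placeholders.
    public class SlaveLoop {
        static long[] fetchBlock()         { return new long[] { 0L, 1L << 20 }; } // {start, count}
        static long search(long s, long c) { return -1; }       // -1 = nothing interesting
        static void report(long result)    { System.out.println("reporting " + result); }

        public static void main(String[] args) {
            for (int i = 0; i < 3; i++) {             // a real client would loop forever
                long[] block = fetchBlock();          // "Master, oh, master, give me something to do"
                long result = search(block[0], block[1]);
                report(result);                       // "I didn't find anything interesting!"
            }
        }
    }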
-- Abigail
It should be an auction. (Score:1)
Then there'd be long-term contracts with lower rates of pay, or short-term jobs with higher rates of pay, and of course seasonal work (when for some reason or another there is a cyclic demand, e.g. the weather service needs more power for hurricane season), and shortages or over-supply as the availability waxes and wanes, or as computing power increases... Don't forget the higher rates of pay for those of us with much better net connectivity, or more available disk space, which should broaden the type of tasks we can take.
What would be really interesting is what the pay would end up being, assuming we have the best 'programming' infrastructure available for 'clients'. Would it only be a tidbit, or could/would big business and other concerns really take advantage of this cheap distributed power?
It would be neat if it would both allow companies to get cheaper-than-normal computing power and at the same time completely pay off the entire cost of my computer. I don't see why it's not possible. Score!!!
Re:Actually, rc5-56 has been dead a long time (Score:1)
And I had also forgotten that distributed.net took on the other challenge.
Re:Gnome vs. KDE! Competition is the American Way (Score:2)
I don't claim that competition is necessary. In fact, if Microsoft actually valued its customers and technology as much as it does its money, it would be quite plausible for me to "like" Microsoft because they would be making better software. As it is, I think competition is good in that circumstance because they aren't attempting to innovate and move quickly with technology but are falling behind (often) what hardware can do and aiming for the lowest common denominator.
Distributed.net [distributed.net] does a very good job of what they do, and if they released their source code at all to the public (maybe not the part that does the network interaction), it would be very easy to add to it. Modularity would be even better. But why not communicate with them about it in the first place? On a more personal note, I help program GICQ. There are about a dozen Linux ICQ clients, all based on each other's source code to some degree. Sure, lots is good
Just my $0.02 worth.
- Michael T. Babcock <homepage [linuxsupportline.com]>