The Internet

Another Distributed Computing Effort: CSC

Armin Lenz writes "DCypher.Net, a newly formed distributed computing effort, takes up French encryption specialist CS Group on their challenge to break their 56-bit CS-Cipher key. After successfully completing beta testing, the project officially launched Monday, November 8. During the first days only basic stats will be available, but contributors are invited to download the final client and start work ASAP."
  • Since distributed processing forms a virtually limitless computer system, will all our personal computers of the future be simply devices that share processing power?

  • Unfortunately, the only clients listed are all Windows clients.

    If only a Linux or BeOS client existed, I'd be glad to lend a few extra CPU cycles. As demonstrated by both SETI@Home and Distributed.net, non-Windows clients tend to run faster and with fewer problems - meaning the key would probably be cracked faster than a pure-Windows user base could manage.

    Oh well. I suppose we have to enlighten the world one step at a time.

  • What an excellent prize! 10,000 gyros. I could live off of those for years! Oh.. wait.. they said 'Euros'. Pfft. Never mind. *sniffle* =)

  • In some way, yes. But distributed processing is generally done for things that can be broken down into discrete pieces - analysis of data, data decryption, and so on, because those tasks readily lend themselves to being analyzed piecemeal.

    Other tasks, word processing or web browsing for example, aren't nearly as divisible as the above types of data analysis. Those sorts of functions lend themselves to a singular solution, namely running as a single process.

    Given how rarely we push our computers to the limit, I fully expect that in the future we will have software that allows our spare cycles and spare computational power to be harnessed for arbitrary distributed processing (a basic first step would be to develop a generic Java client that could download new classes from a central server). But there will always be tasks that are easier, and more efficient, for a single central processor to handle.
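    A rough sketch of that kind of generic pull-style worker (in Python rather than Java, for brevity; the coordinator here is a local stub and the task format is invented - a real client would fetch work over the network and would need code signing and sandboxing):

    # Sketch of a generic distributed-computing worker: repeatedly ask a
    # coordinator for a work unit, dispatch it to a registered handler,
    # and report the result. The coordinator below is an in-process
    # stand-in for a real network service; no such public API is implied.

    def count_primes(lo, hi):
        """Example task: count primes in [lo, hi)."""
        def is_prime(n):
            if n < 2:
                return False
            return all(n % d for d in range(2, int(n ** 0.5) + 1))
        return sum(1 for n in range(lo, hi) if is_prime(n))

    HANDLERS = {"count_primes": count_primes}  # the "downloadable" task code

    def stub_coordinator():
        """Stand-in for the central server: yields work units."""
        for lo in range(0, 10_000, 1_000):
            yield {"task": "count_primes", "args": (lo, lo + 1_000)}

    def worker():
        for unit in stub_coordinator():                    # real client: network fetch
            result = HANDLERS[unit["task"]](*unit["args"])
            print(f"unit {unit['args']}: {result}")        # real client: report back

    worker()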

    (Disclaimer: I am not a distributed processing expert, only a layman with unsubstantiated opinions.)

  • Another distributed effort playing the same cracking game? I think the PrimeNet effort is more fun and probably more "pure".

    Oh, well. At least it's not meaningless like the SETI@home joke.

  • The answer to your question is a resounding 'no'. Parallelism is a neat way to program, but a lot of processing simply can't be done remotely due to the latencies involved. A Beowulf cluster is largely useless if you're playing Quake, though it might be useful if you were running a Quake server: there's too much latency to have the graphics processed remotely instead of locally. I know this isn't the best example, but it works.

    In the same fashion that a GUI can't do everything a CLI can do (and vice versa), you must choose the best tool for the job - not necessarily the 'trendiest' one.



  • This does nothing that hasn't been done before. Distributed.net has already put huge amounts of effort into brute-forcing these ciphers: it completed the 56-bit RC5 challenge and has checked 15% of the RC5-64 keyspace. I fail to see any benefit coming from this.

    Our spare cpu cycles are already spread thin enough between seti@home and RC5, why spread them even thinner?

    dox
  • Oh, just what the world needs. Another Distributed Computing effort.

    Wouldn't our time be much better spent trying to co-ordinate all of the current efforts, rather than simply reducing the computing power available to each one by throwing another into the pot? There really is a limited amount of computing power available, and a limited number of people who would want their extra CPU cycles, extra as they may be, to be used that way. Adding another "let's brute-force a crypto key" effort into the pot seems to have no point other than to slow down the work of all the other efforts out there.


    - Drew

  • I am surprised that they started beta with a Windows client. I figured a quick and simple Unix client that could be ported to Windows would be best. Drop the GUI, add the functionality.
    We are cracking codes here, not drawing pretty pictures.
    Also, with the rise of Linux and other OSes in Europe, I am surprised they took this initial route.
    Oh well. It will be their loss of CPU power.
  • Suppose a commercial company wanted to do some major number crunching, and they said "Hey, let's distribute it and get the internet folks to work for us." Do you think asking for some level of compensation would be appropriate?

    For example, say Boeing wanted to design the Next Generation spaceship using genetic algorithms. They certainly could distribute the work for that type of application. And maybe for every data unit done, the home user would get a few pennies...
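    The naturally distributable piece of that is the fitness evaluation, which is embarrassingly parallel. A toy sketch (the fitness function and the per-unit payment are invented for illustration; nothing Boeing-specific):

    # Toy sketch: farm out genetic-algorithm fitness evaluations, one
    # candidate per work unit, crediting a (hypothetical) micropayment
    # per unit completed. The objective function is a stand-in; a real
    # aerospace fitness test would be a large simulation.

    import random

    def fitness(design):
        return -sum((x - 0.5) ** 2 for x in design)   # stand-in objective

    population = [[random.random() for _ in range(4)] for _ in range(100)]

    # Each (index, candidate) pair is one work unit a home user could score.
    results = {i: fitness(c) for i, c in enumerate(population)}  # done remotely, in parallel

    best = max(results, key=results.get)
    print(f"best candidate: {best}, payout: ${len(results) * 0.02:.2f}")  # a few pennies per unit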
  • These distributed projects are all very well, but can one have too much of a good thing? How many people will take part in each one, I wonder, if they become commonplace?
  • We already know that a distributed computing project can exhaust the keyspace for 56-bit keys in a reasonably short time. Regardless of the encryption algorithm, a 56 bit key can be broken. So why bother with this particular algorithm?

    This is all just a publicity stunt by CS. Their description of the "CS-Cipher Challenge" states that the purpose of this exercise is to demonstrate that brute force is the only possible attack against their algorithm, which is absurd. Either the message will be decrypted by brute force, which just proves the obvious fact that brute-force attacks are possible, or it will be decrypted by finding a clever attack on the algorithm. And if no attacks are found, that only means that no attacks were found, not that there are no possible attacks against this algorithm.

    Sheesh. Maybe I'll start a contest to prove some other negative.

  • by Anonymous Coward

    Dcypher is not distributed computing, it's client-server, as are SETI and d.net. There is a HUGE difference between client-server and distributed computing. Perhaps slashdot should label it cooperative computing instead.

  • Actually, there is another distributed computing effort under discussion on the mailing list nanodream@egroups.com.

    The idea is to design a nanotech computer. All things considered, it might be worthwhile since molecular electronics could provide us with the computing power we need to make future distributed computing projects unnecessary.
  • It's impossible to say for sure, but can someone give me an estimate as to how long this is going to take? RC5-64 has been going on for two years and has covered about 15% of the keyspace; please tell me this is going to be faster. The keyspace is smaller, but how fast is the decrypt algorithm?
  • Yes, it would be much better to work on a unified system. Unfortunately, organic life as we know it (especially male life) does not work that way.

    Things are driven by conflict and competition. You can bet that had dcypher not appeared, distributed.net wouldn't have beta CSC clients out. Nor would we be able to nuke the planet 200 times over without the Cold War - we'd still be stuck wiping out one species at a time the old-fashioned way.

    The idea of Cosm [mithral.com] is to unify all distributed computing efforts into a common framework. ["Common" doesn't really apply to client/server systems.] Computers work together wonderfully, but humans have a lot of trouble doing that.

  • It's good to see another project starting up in the distributed field. I hope that this won't turn into a huge dcypher.net vs. distributed.net flame war (although hints of this are already surfacing on EFNet). I think two contests pushing each other forward, trying to create the most optimized clients and so on, will be a good thing in the end. Hopefully it won't get pulled down by a lot of territorial "we're better than you" fanaticism...


  • Moo!

    Although some competition would be great among the distributed computing projects, dcypher.net seems to have picked a bad contest to try and get off the ground with. Perhaps more of a "marathon" challenge would be optimal, instead of the "sprint" that CSC provides.

    We [distributed.net] had already announced our intent to do CSC, and have an enormous amount of computing power in comparison to the newly-formed dcypher. Dcypher really can't expect to beat us to the CSC key, and after one unsuccessful challenge, their users will likely be unmotivated to stay around.

    At this point, our CSC/OGR clients [distributed.net] are only in a beta testing phase; however, based on the few hours that we've been running this public beta, our key-checking rate is at least twice that of dcypher. We'll probably be releasing the final clients in the next week or two, and at that point, our rate will be large enough that we should be able to exhaust the entire keyspace in a few weeks.

    Daniel


    --
    Daniel Baker - dbaker@cuckoo.com - dbaker@distributed.net
  • I don't mean to sound like a money grubber, but what is the prize distribution going to be like (i.e., how much goes to the lucky key finder and how much to dcypher.net)? Also, how much in US greenbacks is 10,000 Euros? I couldn't find any mention of either on the dcypher.net website.
  • Distributed.net is running through CSC at about 120 Mkeys/sec. Does anyone know where to get a total keyrate from Dcypher.net, and how to convert their "Mbytes/sec" to a keyrate?
  • I'm totally sorry if this is too simple an explanation, but maybe somebody would benefit from an elementary analysis.

    Well, if you're going to brute-force an algorithm in the simplest sense, that pretty well means you're playing guess the number. Finding a faster way to do it would mean that there's some sort of weakness in the algorithm, but from the sounds of this distributed computing event, it's probably going to look just like the following:

    Client:Is the password '0'?
    Server:No
    Client:Is it '1'?
    Server:No
    Client:Is it '2'?
    Server:No
    .
    .
    .

    Client:Is it '19823745938715903857390857382957'?
    Server:Yes! The secret message is:

    "Drink Your Ovaltine",

    except in parallel.

    So anyway, if something's a 64-bit key, you've got 2^64 possible secret numbers; with a 56-bit key you've got 2^56. On average you'll have to guess about half of those keys before you find the right one (for a uniformly random key, the expected number of trials is half the keyspace - 2^55 for a 56-bit key). To compare the two: since the first has 2^64 keys and the second has 2^56, (2^64)/(2^56) = 2^(64-56) = 2^8 = 256. In summary, it takes on average 1/256 the time to break a 56-bit key as a 64-bit key.
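    Here's that guessing loop as a toy Python sketch. The "cipher" is a single-byte XOR stand-in, not the real CS-Cipher, and the keyspace is 2^8 rather than 2^56 so it finishes instantly; only the structure (try every key, test for a recognizable plaintext) matches the contest:

    # Brute force = exhaustively playing guess-the-number over the keyspace.
    # Toy XOR "cipher" and an 8-bit key for illustration only.

    KEY_BITS = 8  # the contest uses 56

    def toy_decrypt(ciphertext, key):
        return bytes(b ^ key for b in ciphertext)

    def brute_force(ciphertext, known_plaintext):
        for key in range(2 ** KEY_BITS):              # "Is it '0'? Is it '1'? ..."
            if toy_decrypt(ciphertext, key) == known_plaintext:
                return key                            # "Yes! The secret message is..."
        return None

    secret_key = 0xA5
    message = b"Drink Your Ovaltine"
    ciphertext = toy_decrypt(message, secret_key)     # XOR is its own inverse
    print(brute_force(ciphertext, message))           # 165; ~2^(KEY_BITS-1) tries on average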

    Assuming the two ciphers were implemented by equally competent programmers, I can't imagine one algorithm being enough slower per key tested than the other to make a big difference.

    Again, apologies if this is too simple, and please, please don't moderate me to the dungeon if *you* can do the math yourself!
  • by Anonymous Coward
    Divide by 64. This is how the CSC docs read. My Athlon 600 gets about 122 Mbytes/sec, so this translates to about 1.9 Mkeys/sec.
  • Check out: http://nodezero.distributed.net/beta/ [distributed.net]

    It lists Linux glibc2, FreeBSD, and Solaris clients.

  • If you read down on their "what we are" page, they talk about accepting advertising on their pages and giving teams the opportunity to win prizes acquired through advertising revenue.

    Now, perhaps I'm just being paranoid, but this sounds too much like a great opportunity for data mining to me. Especially when you consider that a) you will have to register some user information so they can track your computational contribution, and that information will of course be attached to all data the client sends back, and b) you're never going to see the source code for the client, and since you're probably going to be sending back blocks of funky not-really-decrypted text, sniffing the datastream isn't guaranteed to root out any other information they might have encoded and embedded in that data.

    I don't know who these people are, but the fact that they're offering for-pay advertising at the very beginning of the project just doesn't bode well. They might have good intentions, but how far will those last when some advertiser offers a check with lots of zeros in exchange not just for banner space but also for the list of usernames/emails of the people running their client?

    If a project is going to ask for information about me, as any distributed computation project is almost certainly going to want to, then they just need to stay out of the whole Advertising for Dollars game, especially in the digital world where it's so hard to see what exactly they're doing with your data.


  • Why didn't the authors instead submit their code to distributed.net so the distributed.net client could process this new project? We all have distributed.net's clients (in the sense that one exists for just about anyone). Another group trying to make a name for themselves while not being inter-compatible... it would be nice if people joined existing projects instead of creating new ones, for a change.

    - Michael T. Babcock <homepage [linuxsupportline.com]>
  • As far as I know, none of these distributed computing efforts release their source code. I understand that they have reasons for this, but I still have no intention of running code on my machine whose method of operation I cannot inspect; especially given the recent scare over Real's software sending 'interesting' data back to their servers.
    So, are there any projects which do have fully open-source clients? I'm not so worried about their freeness or otherwise; I'd just like to know exactly what they're doing.
  • Sure the porting process would be faster, but it would scare off a lot of potential processing power. As you well know, there are several organizations doing distributed computing, so a new client is competing with all of them. To be successful it will have to be friendly to its users, and Windows users like programs with a GUI. Personally I would prefer it to minimize into the system tray where it doesn't bother me; alternatively I would like to run it as a service, so it wouldn't bother me at all.

  • by aphr0 ( 7423 )
    2 years? What is d.net hoping to prove? Are they just interested in spending furious amounts of energy processing needless keys?

    Personally, I spend my computer's idle time finding mersenne primes [mersenne.org]. Seems a bit more worthy than beating a 2 year old dead horse.
    In summary, it takes on average 1/256 the time to break a 56-bit key as a 64-bit key.

    Only if the message is encrypted using the same algorithm. Different algorithms test keys at different speeds, so although keyspace is the most important factor, it isn't everything. The question remains: how fast is the CSC decrypt algorithm compared to RC5?
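    For a rough sense of scale, here is the arithmetic with the only rate quoted in this thread (d.net's ~120 Mkeys/sec beta figure; the rate will presumably grow as clients improve, and CSC's per-key cost versus RC5's is exactly the open question above):

    # Expected time to crack = half the keyspace / key-test rate.
    # 120 Mkeys/sec is the beta rate quoted in this thread; the output
    # shows what that fixed rate would imply, not a prediction.

    def expected_days(key_bits, keys_per_sec):
        expected_trials = 2 ** (key_bits - 1)    # on average, half the keyspace
        return expected_trials / keys_per_sec / 86_400

    rate = 120e6
    print(f"56-bit: {expected_days(56, rate):,.0f} days expected")   # ~3,475 days
    print(f"64-bit: {expected_days(64, rate):,.0f} days expected")   # 256 times longer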
  • by Otto ( 17870 ) on Wednesday November 10, 1999 @10:31AM (#1545763) Homepage Journal
    Dcypher is not distributed computing, it's client-server, as are SETI and d.net. There is a HUGE difference between client-server and distributed computing. Perhaps slashdot should label it cooperative computing instead.

    Err... You're wrong, and yet right at the same time.. wow, good job!

    SETI and d.net and in fact the entire internet are "client-server". The Web is client-server. Telnet is client-server. Nearly every single piece of software on the internet is client-server. It really doesn't say a lot about what the software does, though..

    Seti@home and d.net are distributed computing.

    Let's define distributed computing, shall we? According to PC Webopedia here [pcwebopedia.com], distributed computing is:
    A type of computing in which different components and objects comprising an application can be located on different computers connected to a network. So, for example, a word processing application might consist of an editor component on one computer, a spell-checker object on a second computer, and a thesaurus on a third computer. In some distributed computing systems, each of the three computers could even be running a different operating system.

    Distributed computing is a natural outgrowth of object-oriented programming. Once programmers began creating objects that could be combined to form applications, it was a natural extension to develop systems that allowed these objects to be physically located on different computers.

    In the specific cases of seti@home and d.net, they are taking a large project, splitting it up into small pieces, and running it all over the place. Now, there may be a problem, as our definition above implies that each "object" running on each system is different. We can define our object as being our code, but we can also, more intuitively, define our object as being our code running on our data. This conforms more towards the object-oriented methodology. All the objects are inheriting the same source code, but different data. Each bit of code running on each person's computer is running a different bit of data. This is the whole point, in fact. So therefore, all the objects are, in fact, different instances. There we go. Good enough for me.

    Damn, I must be pretty bored to respond to that post.. Hmm.. Guess I need a beer.

  • d.net used to open-source all their clients, at one time. Naturally, someone mucked about with the code and started faking keys back to the system. It was a bit of a shambles when they finally caught on. The reason they now release the source only to a small group is to prevent anyone from circumventing their security measures. However, if you're mainly interested in the algorithms, they'll give you those - just not the entire source, because of the security issues.


  • Why the heck is everyone so up in arms about this?

    Finally, months after the CSC was launched, *someone* has a client out to work on it. That someone isn't distributed.net. And that's why you all are so mad. d.net isn't the guru of distributed computing.

    If d.net hadn't piddled all this time away on rc5, they could have had the thing almost done.

    Instead *they* spread themselves too thin over CSC, OGR, RC5, etc...

    The heck with d.net. The heck with waiting and waiting and waiting for non-functional beta clients. Finally, the real deal.
  • OK, so no Linux client? Come on! I'm sure we all have DOS machines at work, right??? Come on!! I want to see a team happen! It's a /. team or nothing... well, certainly not AnandTech (nothing against you guys).
  • While I don't fully agree with the definition from Webopedia (a questionable source anyway), I would definitely disagree that d.net or SETI fit even that definition. Not to mention that distributed computing came well before the OO craze.

    There is no (or limited) fault-recovery, no interprocess communication, no lookup services, and no process migration.

    This of course doesn't make them bad; they just aren't distributed computing, and they don't need to be to solve their tasks. Client-server is simple, and it solves the crypto and alien-hunting jobs very well. If anything they vaguely resemble parallel computing, not distributed.

  • I've looked through the dcypher website twice, and haven't seen any mention of the 10,000 Euro prize, or how it is divided. I'm not in this thing for the money, but before I'd consider running a dcypher client, I want to know how the money is being distributed.
  • by Anonymous Coward
    Obviously the main motivation to switch projects in this case is greed. While that may be morally questionable, I have to admit it tempts me. Now, I'm not totally against giving d.net part of the prize, to cover their costs and time, and maybe a small portion to a worthwhile charity, but do I want to give away 90% of the prize money? Not really... d.net has other sources of income (iGive, to name one relatively big one) and they ARE a non-profit org... I think being able to keep at least 50% of the prize money is fair, or having some sort of option. I like d.net a lot, and probably will end up with them in the end, but if another group ends up attracting a large user base AND they're giving away a larger share of the prize... my morals may end up being compromised a bit. Sadly, but honestly, Anonymous Coward
  • They've updated their website with news on the prize disposition. Here's the quote:

    We'd be happy to win you over as a participant and give you the chance to win the full prize money of 10,000 Euros (roughly $10,500) for finding the correct key!
  • Who wants to be a ten-thousand-dollaire?
  • We successfully completed the RSA RC5-56 challenge on October 22, 1997 (see our full history at http://www.distributed.net/history.html [distributed.net]), over two years ago. We are currently working on RC5-64, which is 256 times harder than RC5-56. Were we to tackle RC5-56 again, we could crack it in a matter of days.

    dB!
    decibel@distributed.net
  • We'll probably be releasing the final clients in the next week or two, and at that point, our rate will be large enough that we should be able to exhaust the entire keyspace in a few weeks.

    Are those weeks real time, or DNET time? Because if they're DNET time, we're looking at months. While I will continue to support DNET, I can understand dissatisfaction with how things work.

    On the other hand, it looks as if dcypher.net was born out of complaints about SETI's client. In which case, why didn't they just code cores for distributed.net and offer them up? It may have taken some time to get the core into the distributed.net client, but it would've been a better thing to do, IMHO.

    And the website is eerily similar. It's kinda spooky. My main objection to the whole thing was their not saying how much of the prize money would go to the finder of the key. Now that they've done that, hmm.

  • The cores of the clients (e.g. the parts that decipher or do the main work) are open source, and you're encouraged to modify them. That's how the MMX cores came about, and how the G4 cores look to be coming about. That's also how there should already frigging be AMD-specific cores for 3DNow! and Athlons, instead of not...
  • From what I can remember, there was something either in one of the d.net heads' .plan files or in a discussion in #distributed (EFNet) about open source. The problem with sending fake keys back is one that was quickly resolved, as good logs are kept of all keys going out and coming in. There was consideration of making the next generation of clients open source; I don't know where that went.
    I'd agree that it'd be nice to see what the client is actually doing, but I was content just to watch my outgoing network traffic for a day or two.
    Also, why would a non-profit organization want to or need to collect info from people?
    Just a few thoughts to ponder

    NIVRAM
  • As I said when I responded to my own article on Monday, I'm not in it for the money. If there was a huge user base that switched to another contest because it was much more interesting or provided better all-around fun, maybe I'd switch, but as it stands... d.net is organized, has been around for a while, their clients don't crash my computer (even with Windows), and I like talking to them.
    I'm not going to say "Join Distributed, we're the best cracking group" or anything like that. I won't push my view on others, but I will say that I have tried other groups (PiHex and SETI@home) and I was not happy with them.

    NIVRAM
  • Since distributed processing forms a virtually limitless computer system, will all our personal computers of the future be simply devices that share processing power?

    No. Sidestepping the fact that distributed processing doesn't form a limitless computer system (take the current RC5-64 contest: there are about 40,000 participants. Be pessimistic and say every participant uses only one computer. Assume there are 1 billion computers in the world, one for every 6 people. That would mean you could run about 25,000 contests simultaneously. At the current speed it takes about 6 years to crack a single message, so even throwing together far more computers than exist, you're looking at cracking roughly 4,000 messages a year. That's far from limitless...), sharing processing power isn't that easy.
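    Spelled out (all inputs are this comment's own assumptions, not measured data):

    # Back-of-the-envelope: how many RC5-64-sized contests the world's
    # computers could run at once, and how many messages that cracks per year.

    participants = 40_000           # current RC5-64 participants, one box each
    computers = 1_000_000_000       # assumed: roughly one computer per six people
    years_per_message = 6           # RC5-64 at today's speed

    contests_at_once = computers // participants
    print(contests_at_once, round(contests_at_once / years_per_message))   # 25000 4167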

    RC5-64 and other brute-force cracking attempts are easy to do distributed. They take the most trivial distributed setup (one master and a bunch of slaves that all work independently - all the master needs to do is bookkeeping; there's no other communication or synchronization needed). Many other problems aren't easily turned into a distributed equivalent. Others only work well in specialized distributed environments - parallel computers, where processors are synchronized and you know in advance when each processor is going to write where in memory. Simulations of fluid flows, for instance.

    Some problems are provably impossible. Take for instance the classic election problem: you have two identical processors, running identical software, and they must elect a leader. With no source of asymmetry between them - no unique IDs, no randomness - no deterministic algorithm can break the tie.

    Even harder is the problem of how to split up arbitrary tasks in a distributed environment - and of deciding which problems can be solved distributed, and which can't.

    Then of course, there are things like nodes and links going down, unknown latency, nodes that cannot be trusted, not knowing your topology, etc, etc. Distributed computing in general is a science - and not an easy one.

    -- Abigail

  • For example, say Boeing wanted to design the Next Generation spaceship using genetic algorithms. They certainly could distribute the work for that type of application.

    They could, but I highly doubt it would be worthwhile for them. The big problem is trust. With RC5 and other cracking contests, trust isn't a major issue: a client claiming "I found the key" is easy to verify. A client saying "nope, the key isn't in this block" when it is, is a problem, but you'll find out eventually, and you "just" have to retest the blocks. And on the assumption that the number of people not playing fair is low, the chance that the winning key was assigned to an unfair person is low.

    It is more of a problem where all the results are actually used. What if a malicious person starts feeding back false results? Sometimes it doesn't matter. The first large-scale internet-coordinated crack was RSA-129 by Lenstra et al. That wasn't brute force; it used the donated computing power to populate a huge, sparse matrix (it was years ago, I might misremember some details). The final algorithm was robust enough to cope with some percentage of bogus data.
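    One standard mitigation (my suggestion here, not anything d.net or the RSA-129 team is documented as doing): hand each work unit to several independently chosen clients and accept a result only when a quorum agrees. A sketch:

    # Result verification by redundant assignment: each unit goes to
    # several workers, and a result counts only if a quorum agrees.
    # Generic illustration, not any actual project's protocol.

    import random
    from collections import Counter

    def honest_worker(unit):
        return unit * unit                # stand-in for the real computation

    def malicious_worker(unit):
        return random.randint(0, 10)      # feeds back garbage

    WORKERS = [honest_worker] * 9 + [malicious_worker]   # cheaters assumed rare

    def verified_result(unit, redundancy=3):
        votes = Counter(w(unit) for w in random.sample(WORKERS, redundancy))
        answer, count = votes.most_common(1)[0]
        return answer if count >= 2 else None            # require agreement

    print([verified_result(u) for u in range(2, 7)])
    # [4, 9, 16, 25, 36]: with one cheater in the pool, at most one bad
    # vote can land in any sample of three, so the quorum always holds.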

    But do you think Boeing will take that risk? Would *you* want to fly an airplane if it was designed using numbers returned by some random script kiddie?

    -- Abigail

  • Client:Is the password '0'?
    Server:No
    Client:Is it '1'?
    Server:No
    Client:Is it '2'?
    Server:No

    Well, if it was like that, the server could do it all by itself. It's more like this:

    Slave: Master, oh, master, give me something to do.
    Master: *hands over box with stuff* Here, go through this, and when you are done, report if there's something interesting.
    *Slave shuffles off to a corner, goes over the box, comes back to the Master sometime later.*
    Slave: Master, oh, master, I didn't find anything interesting!
    Master: Oh, you didn't? Well, well, well, what a surprise. *turns around, fetches a new box* Here, try this box instead.
    *Slave shuffles off to his corner.*

    Etc, etc,...
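    In code, that exchange is roughly the following loop (the block size and the "interesting" test are made up):

    # The master's only job is bookkeeping: hand out the next unchecked
    # block, record reports. Each slave grinds through its block alone.

    BLOCK_SIZE = 2 ** 12
    INTERESTING = 123_456                      # made-up key the master wants

    def next_block(state={"next": 0}):         # mutable default as cheap state
        lo = state["next"]
        state["next"] += BLOCK_SIZE
        return range(lo, lo + BLOCK_SIZE)

    def slave(block):
        return next((k for k in block if k == INTERESTING), None)

    while True:
        block = next_block()                   # "Master, give me something to do."
        found = slave(block)                   # shuffles off to a corner...
        if found is not None:
            print(f"Master, I found something interesting: {found}")
            break
        # "I didn't find anything interesting!" ... master fetches a new box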

    -- Abigail

  • Your effort goes part way there. The 'clients' should be bidding for computing power, and the 'suppliers' (us) would choose to whom their computing power is allocated.

    Then there'd be long-term contracts with lower rates of pay, short-term jobs with higher rates of pay, and of course seasonal work (when for some reason or another there is a cyclic demand - say the weather service needing more power for hurricane season), with shortages or over-supply as availability waxes and wanes, or as computing power increases... Don't forget the higher rates of pay for those of us with better net connectivity or more available disk space, which would expand the types of tasks we could take.

    What would be really interesting is what the pay would end up being, assuming the best 'programming' infrastructure were available for 'clients'. Would it only be a pittance, or could big business and other concerns really take advantage of this cheap distributed power?

    It would be neat if it both allowed companies to get cheaper-than-normal computing power and at the same time completely paid off the cost of my computer. I don't see why it's not possible. Score!!!

  • My bad, I meant RC5-64.

    And I had also forgotten that distributed.net took on the other challenge.
  • Don't automatically call me "people," as though I somehow represented all of Slashdot. I see this happen a lot. Read my User Info [slashdot.org] page and see what I've posted.

    I don't claim that competition is necessary. In fact, if Microsoft actually valued its customers and technology as much as it does its money, it would be quite plausible for me to "like" Microsoft because they would be making better software. As it is, I think competition is good in that circumstance because they aren't attempting to innovate and move quickly with technology but are falling behind (often) what hardware can do and aiming for the lowest common denominator.

    Distributed.net [distributed.net] does a very good job of what they do and if they released their source code at all to the public (maybe not the part that does the network interaction), it would be very easy to add to it. Modularity would be even better. But why not communicate with them about it in the first place? On a more personal note, I help program GICQ. There are about a dozen Linux ICQ clients, all based on each others' source code to some degree. Sure, lots is good ... to some extent ... but when everyone just starts their own project instead of helping someone else's, they all move slowly.

    Just my $0.02 worth.

    - Michael T. Babcock <homepage [linuxsupportline.com]>
