Spy der Mann's Journal: Okopipi anti-spam project coming soon

Today I applied for a Sourceforge.net project called "Black Frog" (edit: the new name will be "Okopipi"). It will be a completely new, distributed version of Blue Frog.

The project is still in its planning stages; we need people who will take over the project and continue to work on it.

Update:

The project is now official. The discussion pages are the following Google groups: okopipi-dev and okopipi-discuss.

Comments:
  • I'm thinking that the Black Frog will include a PHP script that will add the opt-out requests to a local file, like "blackfrog_optout_ddmmyy.txt".

    All the spammers have to do is include the blackfrog_optout.php in their main directory, and Black Frog will try to access the blackfrog_optout page first. If the page is accessed, it will try to opt out there first. If not, it will do the usual.

    The point is to make it EASY for spammers to opt us out. The problem with Blue Frog is that spammers had to download a GLOBAL
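
    A rough, untested sketch of what such a spammer-side blackfrog_optout.php could look like (the "email" POST field name and the validation are my own guesses, not part of the proposal):

    <?php
    // blackfrog_optout.php - hypothetical sketch of the opt-out receiver
    // described above. Assumes the Black Frog client POSTs the address to
    // opt out in an "email" field; that field name is an assumption.

    $email = isset($_POST['email']) ? trim($_POST['email']) : '';

    if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
        http_response_code(400);
        echo "Invalid opt-out request.";
        exit;
    }

    // One file per day, e.g. blackfrog_optout_170506.txt (ddmmyy, as proposed).
    $file = 'blackfrog_optout_' . date('dmy') . '.txt';

    // Append the address; LOCK_EX avoids interleaved writes under load.
    file_put_contents($file, $email . "\n", FILE_APPEND | LOCK_EX);

    echo "OK";
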
    • Uhm, if it's easy for the spammers to 'set aside' (ignore), why would they listen?

      They'll just tarpit all requests sent to your special complaint receiver script, and nothing will really reach them.
      • My point is that it should be easy for them to OPT YOU OUT, not IGNORE your request. The effectiveness depends on the NUMBER of opt-outs they do. Since an opt-out request invariably generates bandwidth, they shouldn't get much traffic if they opt out people regularly.

        So it becomes a reward-punishment system. Are you ignoring requests instead of processing them? You'll get a bunch of traffic and your server will be "frogged".

        • The bluefrog opt-out requests were effective because they interfered with their normal operations. The messages came in via their normal webforms, so they had to sift through them manually, which is tedious.

          If you make it easy for them by dumping it in a special wastebasket, well... then it's not much of a burden and they'll just have to pay some more for bandwidth (or simply drop those HTTP requests). They don't care about the bandwidth bill. They make enough money to DDoS the largest blogging company off the internet.
          • How about this. First offense: use the specialized Black Frog form (if they have one).

            Second offense: find the first "normal customer" form and fill it.
          • If you make it easy for them by dumping it in a special wastebasket, well... then it's not much of a burden and they'll just have to pay some more for bandwidth (or simply drop those HTTP requests). They don't care about the bandwidth bill. They make enough money to DDoS the largest blogging company off the internet. So even having to pay a few hundred dollars a month won't make much of a dent.

            Keep in mind that the Frog attack is targeting not only the spammer, but also the CLIENTS of the spa

    • Thinking like this is the right way to go!

      Not *all* spammers are beyond reason. Offering compromises for those willing to play by the rules sets a standard of MUTUAL RESPECT. I think this is a great idea.

      I can also code PHP backwards and forwards, and am happy to offer any GPL code I can to your BlackFrog project.

      Josh-ribbit-ibot.tv

    • Man, there are a lot of interesting ideas flying around out there. I know ideas are coming out of the woodwork, but I can't resist contributing. I'm just posting to Spy der Mann's initial entry due to this being general in nature.

      Having written an essay on the BF forum on how BF was NOT a DDoS attack (it got copied around a bit), it's probably obvious to those who've seen me lurking the forums that I considered Blue Security to be taking the high road. Therefore I think it's imperative that everyone try to
    • I understand the reasons for the only-hit-spammers-that-spammed-you approach, but I dislike it. It's simply reverse extortion. "Stop spamming me and I'll stop spamming you. But you can keep spamming other people all you want; as long as you don't spam me, I won't spam you." If the spammers do opt-out all the blackfrogs, you've only reduced spam by 1% (if that much). Everyone else on the Net keeps getting spammed.

      One should not have to become a blackfrog to get one's received spam to stop. Spam should
      • As long as spam is legal -- and I believe anything with a functioning unsub method *IS* still legal -- the main avenue you want is congress, not internet vigilantism.

        If you want to complain to spammers for emails that you didn't receive... that's crossing the line from a legal complaint process (one complaint per spam you received) to illegal extortion, or... what do you call it when you force another company out of business through illegal means? I forget, but it's not a line we want to cross.

        Remember, th
        • Of course spammers should be stopped. But unlike just unsubscribing yourself from the spammer (useless at best), or overreachingly telling the spammer that [some gullible customer] doesn't want to hear from you, Blue Frog had half a million or so users, each of whom didn't want spam, and if somebody spammed any of them, they'd get half a million unsubscribe requests in a hurry, including from the users who hadn't received that spam yet.

          I don't remember how aggressive it was about filling out forms - imag

          • We're going to implement throttling to avoid DoS situations. And yes, I think captchas will be unavoidable at some point. But here we're trying to establish the concept of a do-not-spam list so that people will begin to accept it as normal, or at least wonder "why hadn't they thought about it before?"

            Ultimately, e-mail rules will have to be changed and the protocols modernized. Spam is based on various internet flaws that exist by design; the problem is that companies haven't agreed on fixing those fla
          • if somebody spammed any of them, they'd get half a million unsubscribe requests in a hurry, including from the users who hadn't received that spam yet.

            Not true, actually -- that would be a real DoS attack (and, um, illegal). Blue Security only submitted one complaint for each spam received (first including honeypots, then based purely on spam reports from the userbase). AND they didn't submit any complaints if the spammer was CAN-SPAM compliant, or if the spammer responded within a week or two to direct c
  • If Black Frog is going to be P2P, there are some issues to deal with. How are we going to keep the volunteers (those who create opt-out scripts, and handle asking spammers to comply, etc.) from setting up attacks for their own personal gain? It seems to me that there should be two classes of P2P node. Users (black frogs), and volunteers, and I think everything the volunteers do or decide should be subject to peer review. Any thoughts?
    • Yes, there needs to be some sort of check and balance. Once a script has been reviewed by X number of people and given the go-ahead, it could be pushed out into the P2P network and signed/encrypted with PGP. Of course this brings back the issue of having a central point to attack. The difficult thing is figuring out how many people need to approve a script before it gets accepted, and how to prevent people from abusing the system and generating fake approvals to push out malicious code.
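
      Something like the following (untested) could be the accept/reject step on each client. PGP is what's suggested above; this sketch uses an OpenSSL signature against a pinned reviewer key just to illustrate the idea, and the file names are made up:

      <?php
      // Hypothetical sketch: verify that a script pulled from the P2P network
      // carries a valid signature from the review group before it is used.
      // File names and the OpenSSL (rather than PGP) signature are assumptions.

      $publicKey = openssl_pkey_get_public(file_get_contents('reviewers_public_key.pem'));
      if ($publicKey === false) {
          exit("Could not load the pinned reviewer key.\n");
      }

      $script    = file_get_contents('optout_script.lua');      // the reviewed script
      $signature = file_get_contents('optout_script.lua.sig');  // detached signature

      if (openssl_verify($script, $signature, $publicKey, OPENSSL_ALGO_SHA256) === 1) {
          echo "Signature valid: script accepted for local use.\n";
      } else {
          // 0 = bad signature, -1 = error; either way, refuse to run it.
          echo "Signature check failed: script rejected.\n";
      }
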
      • Perhaps two different groups with the same function, made to monitor each other. Scripts have to be approved by both groups, and require a waiting period of, say, 2 days. During this time, anyone in the two groups can call foul on a proposed script, with good reasons.

        As for picking who gets to be in the groups... I suppose they'd be picked out of the long-time black frog members, and elected/selected by their peers? It's difficult to make sure you're not infiltrated by spammers :\

        If worse comes to wor

    • I'm not sure about the P2P approach. My idea was that scripts could be uploaded to various webpages and downloaded personally. (There would be no P2P or centralized server).

      Maybe the scripts could be downloaded via shareaza or torrent, to keep full distribution of the system.
      • Maybe a system similar to ClamAV. They have to make sure no one flags legit programs as viruses (I don't know how they do it, but it probably relies on a central repo).
      • Well, there's no way to guarantee someone isn't DDoSing a spammer with the script. Which means that Black Frog could be attacked in court? I really liked Blue Security's "one spam, one complaint" policy, and that they were able to enforce it. Plus, the less work the user has to do, the more people we're going to get signed on to the thing. Just my .02
    • OK I've thought of some points.

      There would be Black Frog nodes, and Black Frog "supernodes". Supernodes are the ones forming the network, just like in Gnutella or Kazaa.

      Now a node would have two functions: Client, or Reviewer. A Reviewer would review a spam and website, and give his "SPAM probability" from 0 to 5.

      The client would "roll a die", and if the roll comes in under the reviewer's "SPAM probability", it would send a complaint / opt-out.

      Now the trick is this: Use only reviewers of nodes lo
      • (just in case you didn't get the review system, the final "probability" would be the weighted average of the reviews)

        Reviewers should be given a weight depending on what country they live in. Countries like Malaysia, Russia or China would be given zero credibility ;-) Also, if a reviewer comes from a spammer site, his weight would be zero, too. But it'd be difficult to implement that unless the Black Frog client has a memory of spam websites and IPs.
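
        A toy sketch of how the country weighting could combine with the dice-roll idea from the parent comment (the countries, weights and random range here are purely illustrative):

        <?php
        // Hypothetical sketch: combine weighted 0-5 reviewer scores into a
        // "SPAM probability", then decide probabilistically whether this
        // client sends an opt-out. All numbers are illustrative.

        $countryWeight = ['US' => 1.0, 'DE' => 1.0, 'MY' => 0.0, 'RU' => 0.0, 'CN' => 0.0];

        // Reviews collected from the network: reviewer country + 0-5 score.
        $reviews = [
            ['country' => 'US', 'score' => 5],
            ['country' => 'DE', 'score' => 4],
            ['country' => 'RU', 'score' => 0],   // zero weight, ignored
        ];

        $weightedSum = 0.0;
        $totalWeight = 0.0;
        foreach ($reviews as $r) {
            $w = $countryWeight[$r['country']] ?? 0.0;
            $weightedSum += $w * $r['score'];
            $totalWeight += $w;
        }

        // Weighted average on the 0-5 scale; 0 if no trusted reviewer looked at it.
        $spamScore = $totalWeight > 0 ? $weightedSum / $totalWeight : 0.0;

        // "Roll a die" from 0 to 5; opt out only if the roll falls under the score.
        $roll = mt_rand(0, 5000) / 1000.0;
        if ($roll < $spamScore) {
            echo "Sending opt-out complaint (score {$spamScore}, roll {$roll}).\n";
        } else {
            echo "Skipping this one (score {$spamScore}, roll {$roll}).\n";
        }
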
      • I had forgotten about the scripts. The scripts would be done by different people than the reviewers (so this would make us have three different kinds of frogs: client, reviewer and scripter).

        This way, we can know HOW to opt-out at a website, and at the same time we can decide by ourselves whether to opt-out or not.
    • A solution I was thinking of was asking each Black Frog user to spend some time validating the spam that's submitted. I know I'd happily click away on a "yup, kill 'em" or "nope, someone's screwing with us" button to keep the service working, and I bet others would too.

      I.e., distributed voting/review in addition to distributed unsubscribing.
      • A solution I was thinking of was asking each Black Frog user to spend some time validating the spam that's submitted. I know I'd happily click away on a "yup, kill 'em" or "nope, someone's screwing with us" button to keep the service working, and I bet others would too.

        I disagree with this. I think that the users should be able to nominate a spammer for attack, but the decision should be entirely out of their hands. A voting system like this can lead to serious abuse. Each vote is a potential black hat

        • You are, of course, right. The worst-case scenario is that someone has their entire botnet download the program and abuse it.

          I just can't see the right balance between centralization and avoiding a single point of failure.
    • We have to start with 1.0; I don't think we're ready for P2P yet :P

      Anyway, I have an idea to prevent frog-jobs. Along with complaints, send the e-mail headers (except the To: and CC: fields), so the affected party can complain to the ISPs.

      In other words, the opt-out requests will be the basis for ISP requests in case of abuse.
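
      A small sketch of the header-stripping step, assuming we already have the raw header block as a string (the function name is made up):

      <?php
      // Hypothetical sketch: keep the spam's raw headers as evidence for ISP
      // complaints, but drop the To: and CC: fields before forwarding them.

      function stripRecipientHeaders(string $rawHeaders): string
      {
          // Unfold continuation lines first, so a folded To: header is removed whole.
          $rawHeaders = preg_replace("/\r?\n[ \t]+/", ' ', $rawHeaders);

          $kept = [];
          foreach (preg_split("/\r?\n/", $rawHeaders) as $line) {
              if ($line === '') {
                  continue;
              }
              if (preg_match('/^(To|Cc):/i', $line)) {
                  continue;   // drop recipient information
              }
              $kept[] = $line;
          }
          return implode("\r\n", $kept);
      }

      // Example with a tiny fabricated header block.
      $headers = "Received: from mail.example.net\r\n"
               . "From: spammer@example.com\r\n"
               . "To: victim@example.org\r\n"
               . "Cc: other@example.org\r\n"
               . "Subject: Cheap pills\r\n";

      echo stripRecipientHeaders($headers) . "\n";
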
      • Anyway, I have an idea to prevent frog-jobs. Along with complaints, send the e-mail headers (except the To: and CC: fields), so the affected party can complain to the ISPs.

        This is actually a really good idea, and similar to one that I was thinking of making a while ago. It's basically a plug-in for Thunderbird/Firefox. It works in conjunction with the "This Is Spam" button. Each time the user marks a message as spam (or the program automatically marks something as spam), it automatically forward

    • OK, I read your post. Yes, we need a P2P system, but the client should have the option to not depend on it.
  • What programming language?
    • C++ / Lua. PHP for the server side (the spammers').
      • i.e., the same as the existing Blue Frog code. Are you planning (or legally allowed) to reuse any of that? It'd be a leg up, I think.

        About the language decision, I have to agree... It means I won't be as much of a help as I might have been otherwise (as you can possibly guess from my username) but Java is still just too much of a pain on the client-side, and we will want to maximize the userbase.
  • Quote:
    3. An opt-out processor, which will receive opt-out complaints at your server and remove the offending addresses from your spam list.

    Huh? Spamvertized web sites outsource their site promotion to several "bulk-mailer" affiliates. Each affiliate has his own address list.

    How do you envisage step 3 handling that?
  • You should check out this thread

    http://castlecops.com/postitle156112-15-0-.html [castlecops.com]

    They've got a bunch of people on board and are waiting for SF approval.
  • Hi there. Saw your reply in the Blue Security thread. I'd be glad to lend a hand or two.

    Some thoughts right off the top, regarding abuse of the system:

    1) How do you authenticate requests for attacks? These need to come from legitimate, unique users. And a single request shouldn't launch an attack.

    2) Who launches the attack? It's too dangerous a weapon to leave up to an automated system. There should be a threshold of attack requests that will trigger an event. And that event should alert human
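
    For point 2, something like the following (the threshold and names are illustrative only, not a recommendation) could collect requests and only alert a human once enough distinct users have asked about the same target:

    <?php
    // Hypothetical sketch: attack requests are only collected; a human
    // reviewer is alerted once enough *distinct* users name the same target.

    const ALERT_THRESHOLD = 50;   // example value only

    // Requests as (user, target) pairs; in practice these would arrive over
    // the network and the user identity would need to be authenticated.
    $requests = [
        ['user' => 'alice', 'target' => 'spamvertized.example'],
        ['user' => 'bob',   'target' => 'spamvertized.example'],
        ['user' => 'alice', 'target' => 'spamvertized.example'],  // duplicate, ignored
        ['user' => 'carol', 'target' => 'another-spammer.example'],
    ];

    $uniqueRequesters = [];
    foreach ($requests as $req) {
        $uniqueRequesters[$req['target']][$req['user']] = true;
    }

    foreach ($uniqueRequesters as $target => $users) {
        if (count($users) >= ALERT_THRESHOLD) {
            // A real system would notify the review council here, not echo.
            echo "ALERT: {$target} has " . count($users) . " distinct requests, needs human review.\n";
        }
    }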

    • AH! I have an idea. To make sure the site is human-reviewed, let's use CAPTCHAs! :D
      • Who do you want to host the CAPTCHAs? A centralised server? ;-)

        Somewhere you need the trust factor. And it's far easier to trust a single source, than it is to trust a cloud of sources.
    • No, the attack should be client initiated. And it's NOT a DDOS. It's an opt-out request, remember that!

      The client should only ask the server / network to request authorization for opt-out.
      • No, the attack should be client initiated. And it's NOT a DDOS. It's an opt-out request, remember that!

        The client should only ask the server / network to request authorization for opt-out.

        Well, the attack is client initiated. A user puts a request into their Black Frog client asking that a site be attacked. ("Not" a DDoS. A perfectly legitimate opt-out... sent 1000 times a second. I know, I know. ;) ). BUT one client shouldn't have the authority to get ALL clients to participate in the attack. And

        • Well, the attack is client initiated. A user puts a request into their Black Frog client asking that a site be attacked. ("Not" a DDoS. A perfectly legitimate opt-out... sent 1000 times a second. I know, I know. ;) ). BUT one client shouldn't have the authority to get ALL clients to participate in the attack.

          No, a client *MUST NOT* attack websites that it hasn't already requested authorization for. A client *MUST NOT* be able to tell other clients to attack *ANY* website. This is how Blue Frog originally worked. The Blue Security servers only told the client *WHEN* and *HOW* to attack.
          • No, a client *MUST NOT* attack websites that it hasn't already requested authorization for. A client *MUST NOT* be able to tell other clients to attack *ANY* website. This is how Blue Frog originally worked. The Blue Security servers only told the client *WHEN* and *HOW* to attack.

            My question then is why would I (taking on the role of an anti-spam activist) sign up for this? Right now, I already have the power to attack a spammer myself. If I get an unwanted email, I can write a script to do s

    • Very interesting proposal. I'm still worried that this has the potential to open a lot of users up to abuse. They are, in essence, opening their machines up as restricted zombies.

      For every N redundant servers it is possible that a spammer could have N+1 trojan servers (they control thousands of zombies already). In a worst-case scenario where a small group of users were persuaded to download server lists containing those N+1 trojan servers, would that mean that the real authorized servers would lose the vot

      • Very interesting proposal. I'm still worried that this has the potential to open a lot of users up to abuse. They are, in essence, opening their machines up as restricted zombies.

        Yup, that's the idea.

        For every N redundant servers it is possible that a spammer could have N+1 trojan servers (they control thousands of zombies already). In a worst-case scenario where a small group of users were persuaded to download server lists containing those N+1 trojan servers, would that mean that the real authorized

    • If the spammers hack one of these servers, they will have the council's email addresses and cell phone numbers.
    • Yes, we would need a central server there, but who is going to maintain it? How do we authenticate?

      I'm thinking that this would require the use of public keys to identify "trustable servers". Volunteers could post their public keys in forums so we could attach to them.

      OR - we could review peers to see how many fake reports (false positives or the reverse) they've posted. Then we could blacklist them, and the network wouldn't listen to them. But HOW? And how do we prevent them from gaining a majority?

      Man, this i
      • I don't think this is particularly complicated! The only way to make it work is by running it over the P2P network; that way there is no single source to be attacked - which appears to have been BS's downfall. As for 'attack lists', these could be digitally signed to ensure they've come from whoever they're supposed to - however that is decided, probably by a peer group of some description. But it might just be better to have the individual client programs attacking their own spammers, so if a spammer sends
        • Yes, but the problem is that the moment you make it automatic, someone could send spam that points at an innocent webpage to disrupt the system (this is called a joe job).

          So we need an authority to determine whether a spam is a joe job or not. I'm thinking that a hierarchical authority (with some initial credentials at the top) would be the way to go. This way if a spammer is disrupting the system, his superior can shut him down.
    • About the moderation system:

      First thought: Webpages with more than N (1,000? 10,000?) different reports are flagged as "possible spam". This would be a first measure, but then we'd have to be careful about joe jobs. So we need reviewers.

      The "reviewers" are the ones to determine whether a mail/page combination is spam AND authentic. But the spammers could set up their own client and start flagging every joe job as true, or every spam as fake?

      Then we need someone to "metamod" the reviewers. But then again, ho
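
      One possible shape for that "metamod" step (a toy sketch; the history, numbers and formula are purely illustrative): adjust each reviewer's weight by how often their past verdicts matched the final consensus.

      <?php
      // Hypothetical sketch: derive reviewer trust weights from how often
      // each reviewer's verdicts agreed with the final consensus.

      // Verdict history: reviewer => list of [their verdict, final consensus].
      $history = [
          'reviewer_a' => [[true, true], [true, true], [false, false]],
          'reviewer_b' => [[true, false], [true, false], [true, true]],   // often wrong
      ];

      $weights = [];
      foreach ($history as $reviewer => $calls) {
          $agreed = 0;
          foreach ($calls as [$verdict, $consensus]) {
              if ($verdict === $consensus) {
                  $agreed++;
              }
          }
          // Simple agreement ratio as the new weight; a real system would also
          // need minimum volumes, decay, and protection against collusion.
          $weights[$reviewer] = count($calls) > 0 ? $agreed / count($calls) : 0.0;
      }

      print_r($weights);
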
      • One idea which does immediately jump out is the idea of large numbers of small, closed groups, comprised of members who all trust each other. Since a user can belong to more than one group, messages will follow a directed-graph style path through the network, and you will never receive a message from somebody who you don't trust (this is similar to a classic 'web of trust' model.)

        The only problem here, of course, is that the barrier to entry is relatively high, as you (or your closed user group) needs to t
        • The only problem here, of course, is that the barrier to entry is relatively high, as you (or your closed user group) needs to trust (and be trusted) by at least one member of the larger network. However, it would appear to prevent subversion of the network (forgery of messages, etc.) by untrusted third parties (who's ever going to trust a spammer?), and subverted accounts can easily be removed from the various closed user groups which they belong to (as the trust revocation problem is 'easy' due to the sma

  • FYI: I have the latest Blue Frog source release.
    I am trying to figure out what its legal status is.
    I'll keep you posted.
  • The thing in bluefrog is to identify the URL of the spammer's "sponsor" and to generate a script to automatically opt out of (fill the order forms on) this site. This will always need some human interaction, since the "sponsors" will change their sites frequently.

    The main interest should be to make those scripts public.

    Imagine a website full of such scripts (PHP, anyone?) divided into viagra / penis enlargement, etc. sections. Now if the spam makes you angry, you go to this site and fire off some opt-outs in your prefe
  • png file : http://firefang.net/info/blackfrog/blackfrog.png [firefang.net]
    gimp file : http://firefang.net/info/blackfrog/blackfrog.xcf [firefang.net]
    A black frog, with some phoenix-like properties :).
  • I've thought about this a little and I think a peer-to-peer solution would have to be very careful. One main problem I've thought of is that you are opening it up and allowing the spammers to be part of the community. How would it stop them from manipulating the system?
    • I thought about it; we need a hierarchical authority system. The top authority (with a public key for authentication) would designate other authorities. This way we can have central control even in a distributed P2P network.
      • The key here is to remember that Blue Frog, at its heart, was just automating the complaint procedure for each individual user.

        If you have a client that, no matter what good/bad info it manages to get from the P2P network, is only capable of submitting one complaint per unique email, and ONLY to a domain listed in that email (though the actual reporting scripts may vary), there's very little room for malevolent 3rd parties to screw it up. The spam analysis will need to happen on the client (instead of remo
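
        A sketch of that constraint - at most one complaint per unique message, and only to domains that actually appear in it (the function and variable names are made up):

        <?php
        // Hypothetical sketch: dedupe by message hash, and only ever return
        // complaint targets whose domains appear in the spam itself.

        function complaintTargets(string $rawMessage, array &$seenHashes): array
        {
            // One complaint per unique message body.
            $hash = sha1($rawMessage);
            if (isset($seenHashes[$hash])) {
                return [];
            }
            $seenHashes[$hash] = true;

            // Only domains that appear in the message are eligible targets.
            preg_match_all('#https?://([a-z0-9.-]+)#i', $rawMessage, $m);
            return array_unique(array_map('strtolower', $m[1]));
        }

        // Example usage.
        $seen = [];
        $spam = "Buy now at http://spamvertized.example/buy and http://spamvertized.example/order";

        print_r(complaintTargets($spam, $seen));   // one eligible domain
        print_r(complaintTargets($spam, $seen));   // empty: same message already handled
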
  • I probably won't be able to actively participate in the development. But I pledge my support in spreading the word. (Starting with my slashdot sig.)
    I want to sign up for War.
  • Here's a post I made on the article thread: Exactly. And it seems to me like the "correct" solution to this is an OpenPGP-based web of trust. Or maybe a web is not really what is needed; all you'd really need is for the "go after this site" instructions to be signed by the trusted party, like BlueSecurity. Then it wouldn't even matter how the instructions were received, as long as they could be verified as authentic. You could use anything from a web site, to e-mail, to IRC, to IM, to a P2P network to
