Spam Science

Response to Gordon Cormack's Study of Spam Detection 229

Nuclear Elephant writes "In light of Gordon Cormack's Study of Spam Detection recently posted on Slashdot, I felt compelled to architect an appropriate response to Cormack's technical errors in testing which ultimately explain why one of the world's most accurate spam filters (CRM114) could possibly end up at the bottom of the list, underneath SpamAssassin. I spend some time explaining what is a correct test process and keep my grievances simplified about the shortcomings of Cormack's research."
This discussion has been archived. No new comments can be posted.

  • How I do (Score:5, Interesting)

    by mirko ( 198274 ) on Thursday June 24, 2004 @11:32AM (#9518700) Journal
    I set up many aliases on my official email address, and those aliases (and only those) are what I hand out to likely spammers.
    So, whenever I get a mail that is more than 95% similar to a mail I know is spam, I dump it.
    Combined with Apple's Mail.app Bayesian filter, this leaves only a few spams.
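    One way such a "95% similar" check could be implemented -- purely illustrative, using simple word-set (Jaccard) overlap rather than whatever the poster actually runs:

    # Illustrative only: treat "more than 95% similar" as Jaccard overlap
    # between the sets of words in two message bodies.
    def jaccard(a: str, b: str) -> float:
        """Word-set overlap between two message bodies, from 0.0 to 1.0."""
        words_a, words_b = set(a.lower().split()), set(b.lower().split())
        if not words_a and not words_b:
            return 1.0
        return len(words_a & words_b) / len(words_a | words_b)

    def looks_like_known_spam(body: str, known_spam: list[str]) -> bool:
        """Dump a message that is near-identical to anything already marked as spam."""
        return any(jaccard(body, spam) >= 0.95 for spam in known_spam)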
    • My mail provider has some filtering software which lets me customize the threshold that I want filtered. On my side, Thunderbird has filters. Finally, I use a wildcard email that forwards to my actual address. When I use my email somewhere that might spam me, I simply describe the potential spammer like this:

      slashdotspam@mydomain.com

      If I get mail from there, I know how it came in. The combination of all these keeps my total spam to probably three or four a week.
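      A tiny sketch of that tagged-address trick (illustrative only; it assumes the "NAMEspam@mydomain.com" convention from the example above):

      def leak_source(to_address: str, domain: str = "mydomain.com") -> str | None:
          # "slashdotspam@mydomain.com" -> "slashdot", i.e. who leaked or sold the address
          local, _, dom = to_address.partition("@")
          if dom.lower() == domain and local.lower().endswith("spam"):
              return local[: -len("spam")]
          return None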
    • Re:How I do (Score:4, Informative)

      by julesh ( 229690 ) on Thursday June 24, 2004 @12:35PM (#9519448)
      Mail.app's filter isn't Bayesian. Please see previous slashdot article on how it works (I'm too lazy to find the reference right now).
  • Excellent review (Score:5, Informative)

    by XMichael ( 563651 ) on Thursday June 24, 2004 @11:33AM (#9518706) Homepage Journal
    On the original forum, I said something similar (except not nearly as well written!! hehe)

    DSPAM, IMHO, provides far better results than this report suggests. A properly trained Bayes filter, run by a somewhat intelligent person, provides simply amazing results. I swear I can go weeks on end without a single spam getting through, no false positives -- and between 20 and 100 SPAM in my "spam" box per day!

    DSPAM's Bayes algorithm is by far the best filtering method I've used. And I've used a lot! (From SpamAssassin to SpamProbe and all the in-betweens.) The only setback is that DSPAM takes a couple of weeks to train...


    Priceless Photos [pricelessphotos.org]
    • I swear I can go weeks on end without a single spam getting through, no false positives -- and between 20 and 100 SPAM in my "spam" box per day!

      This is what I don't get - in order to be sure you have no false positives, you have to comb through all of the spam by hand, which for the most part defeats the purpose of a spam filter. If you don't do so, then you can't claim zero false positives - you can only claim that you haven't _noticed_ any false positives.

      I have a whitelist at work, and it works quite
      • Exactly - what's the point if you have to re-check it anyway?
        That is the main reason I don't use any spam filters.

        Without a filter I can check emails as they come, rather than creating "homework" for myself of having to check 50 messages at once...
        • Checking emails as they come takes more time than quickly scanning over 50 messages at the end of day.
      • I've used a Bayesian plugin before that let you set thresholds - so a certain score would be marked as "probably good" and be left in your inbox, a range would be set as "probably spam" and put in a possible junk folder, and beyond that was "definitely" spam and went in a spam or trash folder.

        It defaulted to like 10/90 (I don't remember which score was more spamlike, so imagine less than 10 was almost certainly OK, and greater than 90 was almost certainly spam) - I set it much lower for a while (50) until I
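        A minimal sketch of that two-threshold routing (the folder names and default cutoffs here are made up for illustration; the plugin itself isn't named above):

        def route(score: float, low: float = 10.0, high: float = 90.0) -> str:
            """Route a message by spam score: 0 = certainly good, 100 = certainly spam."""
            if score < low:
                return "INBOX"          # probably good, leave it alone
            if score < high:
                return "Possible-Junk"  # uncertain range, worth a human glance
            return "Spam"               # almost certainly spam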
        • It seems to me that with the way spam is nowadays, we are taking the wrong approach. Why don't they make a filter that assumes everything is spam, and then just filters out the good email? I realize that a whitelist does just this with the email address, but to take it a step further, look at what the content is and filter according to words or phrases you want to see, like 'the kids are doing great.' I can't see this getting any more false positives than what we are using now :)
          • Re:False positives. (Score:3, Informative)

            by Xentax ( 201517 )
            This *is* already done - statistical filters are trained on both words that are 'spamlike' (words that show up only, or mostly, in lots of email marked by the user as spam), and words that are NOT (words that show up only, or mostly, in email marked not spam).

            This is (AFAIK) done against tokens in both the mail body and the headers, which pays dividends if the delivery paths are clustered (for example, if your whole family has accounts with MyISP.com, you'll probably get good filtering provided the spam is
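            As a rough illustration of that training idea (a toy sketch, not the code of any real filter), a Bayes-style scorer that learns both spammy and hammy tokens from user-marked mail:

            import math
            from collections import Counter

            spam_tokens, ham_tokens = Counter(), Counter()
            n_spam = n_ham = 0

            def tokenize(message: str) -> set[str]:
                # Headers and body are simply split into lowercase tokens here.
                return set(message.lower().split())

            def train(message: str, is_spam: bool) -> None:
                global n_spam, n_ham
                (spam_tokens if is_spam else ham_tokens).update(tokenize(message))
                n_spam, n_ham = n_spam + is_spam, n_ham + (not is_spam)

            def spam_probability(message: str) -> float:
                # Sum per-token log odds: tokens seen mostly in spam push the score
                # up, tokens seen mostly in good mail push it down.
                log_odds = 0.0
                for tok in tokenize(message):
                    p_spam = (spam_tokens[tok] + 1) / (n_spam + 2)  # Laplace smoothing
                    p_ham = (ham_tokens[tok] + 1) / (n_ham + 2)
                    log_odds += math.log(p_spam / p_ham)
                log_odds = max(-700.0, min(700.0, log_odds))  # avoid overflow in exp()
                return 1.0 / (1.0 + math.exp(-log_odds))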
      • This is what I don't get - in order to be sure you have no false positives, you have to comb through all of the spam by hand, which for the most part defeats the purpose of a spam filter. If you don't do so, then you can't claim zero false positives - you can only claim that you haven't _noticed_ any false positives.

        I file spam in a spam box as I can easily scan across the contents in 10 seconds and hit delete before I go to bed, as opposed to the distraction when an email arrives and you go to check it i
      • This is what I don't get - in order to be sure you have no false positives, you have to comb through all of the spam by hand,

        True. But you can check if your false positive rate is low enough by statistical sampling.

        So once every few days I scan through a thousand or so items marked as spam by procmail. As long as I continue to find 0 or 1 false positives (which I add to my whitelists), I consider my filters 99.9% good. That error rate is probably better than my own human error rate for misfiling and/or
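        For the curious, the arithmetic behind that sampling claim (an illustration, not the poster's actual script):

        def fp_rate_upper_bound(false_positives: int, sample_size: int) -> float:
            """Rough 95% upper confidence bound on the false-positive rate."""
            if false_positives == 0:
                return 3.0 / sample_size            # the "rule of three"
            p = false_positives / sample_size       # crude normal approximation
            return p + 1.96 * (p * (1 - p) / sample_size) ** 0.5

        # 0 false positives found in ~1000 quarantined mails -> rate likely under 0.3%
        print(fp_rate_upper_bound(0, 1000))   # 0.003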

    • by mev ( 36558 )
      Unfortunately it seems like the author is too intent on slamming Cormack for his review to fit my description of an "Excellent Review". I wish he had toned this down as he could still have delivered the same technical message in a more credible fashion.

      "Excellent counterattack" might be more fitting.
  • by Timesprout ( 579035 ) on Thursday June 24, 2004 @11:39AM (#9518780)
    I usually frown when I see many of these so-called studies offering conclusions, several of which differ radically from my own experience. The recent Java/C++ performance one was a classic example. It gets annoying when a pro-MS result is immediately decried as marketing FUD because it just can't be better, while a pro-Linux result is taken as gospel truth here on /. Usually I tend to take all results with a grain of salt or just plain ignore them and focus on the debate around them.

    The benefit of these studies, though, is that, fanatical crap aside, informed people will usually take the time to interpret results or suggest corrections/improvements that actually benefit developers and improve their knowledge base more than any information provided by the actual study.
    • But the purpose of studies is to offer insight into the best tools for a specific set of dependencies. Decry the dependencies, and you're essentially eliminating the purpose of the study.

      For example, I am working on a GIS application. I looked at offerings from ArcView and MapInfo and found that while they do what I need to do out of the box, they are quite expensive and required a license for every seat of my application. So I looked to Open Source. There I found hundreds of tools, none of which did w
      • by killjoe ( 766577 ) on Thursday June 24, 2004 @01:59PM (#9520438)
        "This is defeatest bullshit. Ignoring your problems doesn't make them go away. "

        You miss an important point. This is not "our" problem, it's YOUR problem. I don't need a GIS program and neither do millions of other people. YOU need one, and too bad for you they cost tens of thousands of dollars. You have no right to complain that somebody else hasn't taken the time and effort required to give you a free equivalent.

        What you need to understand is that open source is nothing but scratching an itch. This is your itch and you need to scratch it.

        OPEN SOURCE ONLY WORKS IF PEOPLE CONTRIBUTE. This very simple and obvious point seems to be lost on most people. You are not supposed to sit around till somebody else does the work and gives you something for nothing. You need to contribute.

        You need to start an organization and start raising money to fund an open source development effort or to accelerate an existing one. You need to get involved and contribute. BTW, bitching on Slashdot does not count as contributing.

        "This is like blaming McDonalds for your big, fat ass, or blaming Microsoft because you got a virus when you didn't run the patch they released to prevent it."

        Or blaming the open source community because they didn't give you something for free.
        • You are not supposed to sit around till somebody else does the work and gives you something for nothing. You need to contribute.

          Which is why Open Source will probably always be for developers by developers. Unless of course the non-developing users decide to contribute cash...

          It's sort of like public television. You can sit around and watch it for free, or you can donate and help other people watch it for free. "Your generous donations will make this software free-beer for everyone!"
        • He said he wasn't an expert. So of course he'd be forced to make that conclusion. He cannot scratch his itch because he cannot reach it.

          This is the kind of response he was talking about that does no good. Rather, you should acknowledge that the area is weak and that more focus needs to be given there in the future.

          (Incidentally, I'm interested in OSS in the GIS field. Any ideas/good pointers? Anyone?)
          • Re:Hello? (Score:3, Insightful)

            by killjoe ( 766577 )
            "He cannot scratch his itch because he cannot reach it."

            You don't have to be a developer. As I said you can start a campaign to ask for donations, you can write letters to companies asking for sponsorship, you can donate some of your own money, you can try to get like minded individuals together to solve the problem.

            OPEN SOURCE DOES NOT WORK UNLESS YOU CONTRIBUTE.

            " Rather, you should acknowledge that the area is weak and that more focus needs to be given there in the future."

            More focus needs to be give
  • This guy seems a little harsh and just a bit jealous of the success of Gordon Cormack's article. I'd like to know what makes his opinion any more valid than Gordon's.

    Information on his professional career was very hard to find on the site.

    This just seems like a flame because his software (DSPAM) didn't perform well in the test.
    • I have to agree that the article has a very put-out and almost bitter feel to it, which makes me less inclined to take it seriously. That said, there are perfectly valid criticisms in it. For example, not releasing the configuration data is clearly improper. Testing the accuracy of the filters against SpamAssassin is totally incorrect methodology! It looks good to apply the filter to such a huge body of email, but a smaller set would have made it much easier to validate the results. Misconfiguration of the
    • > This guy seems a little harsh and just a bit jealous of the success of Gordon
      > Cormack's article.

      Articles aren't "successful" - they're either useful, or they're just fun to read. Perhaps his is the latter.

      From the response:
      ---
      It turned out that Cormack was using the wrong flags, didn't understand how to train correctly, and seemed very reluctant to fully read the documentation. I don't mean to ride on Cormack, but proper testing requires a significant amount of research, and research seems to be
    • by Otter ( 3800 ) on Thursday June 24, 2004 @11:52AM (#9518934) Journal
      There are some technical objections in there (old versions of software, the fact that SpamAssassin was tested with a spam collection generated by SpamAssassin). But honestly, after wading through all the whining and sneering, I didn't have the energy to pick the points out of the overall flow.

      Jonathan, next time:

      • Start by summarizing your technical objections.
      • Continue by detailing your technical objections.
      • Leave the nasty rants to the end, or better yet, leave them out entirely.
      • Stop talking about "geeks" in every paragraph.
      • Please stop referring to spam filter comparisons as "science".
      • Please stop referring to spam filter comparisons as "science".

        I believe the author of the article would have two issues with that assertion.

        First off, you can have science about how fast grass grows. You have science about how many sexual partners a person has. You have science about how to manipulate people with irrational arguments. Science can be applied to anything that you apply scientific principles to. Science, in a lot of ways, is merely a matter of measuring in a controlled manner and then

    • by pclminion ( 145572 ) on Thursday June 24, 2004 @11:56AM (#9518967)
      This guy seems a little harsh and just a bit jealous of the success of Gordon Cormack's article.

      Let me explain why he's irritated, as somebody who has conducted spam filter statistical tests and made publications on the topic.

      Yes, it is irritating when somebody demonstrates that his method is better than yours. However, most researchers are able to accept this, and continue improving their own work.

      However, what is far more irritating (by an order of magnitude at least) is when somebody "demonstrates" the inferiority of your work, and they do so in a completely scientifically bogus way.

      Let me give a concrete example. Suppose you were Galileo. You have just put forth the postulate that all objects fall at the same speed regardless of mass. A "debunker" attempts to demonstrate that this isn't true by dropping an iron ball and a feather. Obviously, the feather falls much more slowly.

      "Ha ha, neener, neener!" cries the debunker. Of course, Galileo knows his method is flawed. If people actually listen to this supposed debunker, Galileo might become very, very irritated indeed.

    • by julesh ( 229690 ) on Thursday June 24, 2004 @12:37PM (#9519474)
      He made a few very good points, but the overall tone was a little too ranty.

      This was the most important point, I think, and was buried 2/3rds of the way down:

      The emails being 8 months old, heuristic rules were clearly updated during this time to detect spams from the past eight months. The tests perform no analysis of how well SpamAssassin would do up against emails received the next day, or the next eight months. Essentially, by the time the tests were performed, SpamAssassin had already been told (by a programmer) to watch for these spams. [...] What good is a test to detect spam filter accuracy when the filter has clearly been programmed to detect its test set?
  • by Shoeler ( 180797 ) on Thursday June 24, 2004 @11:41AM (#9518818)
    For any users of SpamAssassin's 2.x branch (2.63 is current as of this writing), we all know how dated its signatures are right now. When the 2.6 branch was first released, I got zero spam and 100% ham for the first few weeks. Now that 3.x is being integrated as an ASF project and apache-ized, updates have been slow and 3.x is still awaiting deployment.

    Point being - I was darn surprised to see SA at the top of his charts.

    Now - if only mimedefang would easily use another spam-checker....
    • Well, of course it was. As stated in the article, he was using the latest version of SA to classify mail that was up to 8 months old. I'd expect it to be pretty close to perfect on that. It's just current stuff it ain't so hot on.
      • SA gets a bad rap because it still runs even when the Bayesian filter isn't activated, and that leads to horrible results.

        We deployed SA on our own internal MX and we have over 99% accuracy over the past 3 months. Although the bayes filter is primitive compared to what other advanced filters are doing, with enough training and a bigger token DB, SA works very very well. Couple that with network checks (ie, Razor2, Pyzor, DCC) and the system is comparable to the best statistical filters.
  • Just read it - (Score:2, Informative)

    by calebb ( 685461 ) *
    I just read the whole article - it does repeat itself a few times, but the author provides additional evidence each time his theses are reiterated:

    1. Cormack is very inexperienced in the area of statistical filtering. Agreed!!!
    2. Cormack went into the testing with many presuppositions. Also Agreed!!

    And in case you're not familiar with the word presupposition:
    1. To believe or suppose in advance.
    2. To require or involve necessarily as an antecedent condition.


    Overall, this is a very good articl
    • Re:Just read it - (Score:4, Informative)

      by Henry Stern ( 30869 ) <henry@stern.ca> on Thursday June 24, 2004 @02:03PM (#9520473) Homepage
      1. Cormack is very inexperienced in the area of statistical filtering.

      Disagreed. Gordon Cormack has been doing information retrieval for 20 years. He is fairly well known in the area. See his publication history at DBLP [uni-trier.de].

      A far more likely conclusion about what's going on here is that Zdiarski's ego has been hurt. Both he and Dr. Yerazunis engage in some very sketchy statistics in their papers and I think that it has caught up to them.

      1. Yerazunis' study of "human classification performance" is fundamentally flawed. He did a "user study" where he sat down and re-classified a few thousand of his personal e-mails and wrote down how many mistakes he made. He repeats this experiment once and calls his results "conclusive." There are several reasons why this is not a sound methodology:

      a) He has only one test subject (himself). You cannot infer much about the population from a sample size of 1.

      b) He has already seen the messages before. We have very good associative memory. You will also notice that he makes fewer mistakes on the second run, which indicates that a human's classification accuracy (on the same messages) increases with experience. For this very reason, it is of the utmost importance to test classification performance on unseen data. After all, the problem tends towards "duplicate detection" when you've seen the data beforehand.

      c) He evaluates his own performance. When someone's own ego is on the line, you would expect that it would be very difficult to remain objective.

      2. Both Yerazunis and Zdziarski make use of "chained tokens" in their software. This is referred to in other circles as an "n-gram" model. As with many nonlinear models (the complexity of an n-gram model is exponential with n), it is very easy to over-fit the n-gram model to the training data. Natural language tends to follow the Pareto law (sometimes called the 80/20 rule), where the ranking of a term is inversely proportional to the frequency of occurrence of that term. The exponential complexity of the n-gram model contributes to the sparse distribution of text, leading to a database with noisy probability estimates.

      3. Zdziarski uses a "noise reduction algorithm" called Dobly to smooth out probability estimates in the messages. Aside from his unsubstantiated claim of increased accuracy, I have never seen anything to suggest that it actually works as advertised.

      Considering these points, I was not surprised at all by the results of Dr. Cormack's study. While one may argue that his experimental configuration can use some improvement, his evaluation methods are logically and statistically sound. What I personally saw in the results of this paper was that two classifiers that use unproven technology did not perform as advertised. After all, every other Bayes-based spam filter performed acceptably well.

      Lastly, I won't really touch his flawed arguments about how using domain knowledge about spam (i.e. SpamAssassin's heuristic) somehow hinders the classifier over time when you are also using a personalised classifier. You'll notice that SpamAssassin still did acceptably well when all of the rules were disabled.

      Go read some more of Zdziarski's work and draw your own conclusions about his work. Pay careful attention to his use of personal attacks when comparing his filter to that of others.
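      For reference on point 2 above: "chained tokens" are simply adjacent-word pairs treated as extra features alongside the single words. A toy illustration of the tokenisation step only -- not DSPAM's or CRM114's actual code:

      def chained_tokens(message: str) -> list[str]:
          words = message.lower().split()
          pairs = [f"{a} {b}" for a, b in zip(words, words[1:])]
          return words + pairs

      print(chained_tokens("buy cheap pills now"))
      # ['buy', 'cheap', 'pills', 'now', 'buy cheap', 'cheap pills', 'pills now']
      # With V distinct words there are up to V**2 possible pairs, so most pairs are
      # rarely seen and their probability estimates are noisy -- the sparsity and
      # over-fitting concern raised in point 2.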
  • by VAXcat ( 674775 ) on Thursday June 24, 2004 @12:04PM (#9519043)
    I prefer using the original CRM114 discriminator and its host platform on spammers. If you're not familiar with the original CRM114 and its delivery platform, it was featured in the following movie... http://www.imdb.com/title/tt0057012/combined
  • by EsbenMoseHansen ( 731150 ) on Thursday June 24, 2004 @12:12PM (#9519179) Homepage

    There are several warning signs in this article.

    1. The author spends a lot of time trying to discredit Cormack on such terms as impartiality and experience. While such can lead credence to a strong case, it bodes when mentioned as the very first points. Also note the beginning of the article: "Many misled CS student...".
    2. The author has no statistical or published backing for his claims.
    3. Most of the arguments are flawed, in my opinion. Yes, the corpus was trained on SpamAssassin, but the other filters' mistakes were, as far as I recall, examined for errors individually. Thus, any mistakes would either be spotted or would credit each filter equally.
    4. I also always find it suspect when someone claims: "Yes, the program did not perform, but with a different configuration it might / in the latest version it might". While it could be true, such claims need backing.
    5. He claims that X's email was atypical, even for geeks. I would like to state here that I have 3 email accounts, of which none lie near his "typical" spam quotient (60%): 2 with >90% spams and 1 with <1% spam.

    That said, he does raise a few valid points, such as the timeline:

    1. If filters expunge old data based on time, this would not work in the test. That gives SpamAssassin's static rules an edge.
    2. Configurations should really have been published. I see no reason why not.
    • by int2str ( 619733 ) on Thursday June 24, 2004 @12:49PM (#9519621)
      Yes, I agree with your points. The author spends way too much time discrediting the study.

      I also have to say that my experience was much more along the lines of Cormack's. I've tried DSPAM for a while on my server, starting from scratch, training on error with only new emails, on a small mail server with about 10 users of different types (geeks, businesses, moms, etc.).
      - DSPAM took way too long to produce any kind of results
      - 2500 emails before advanced features kick in is *a lot* for the average soccer mom
      - DSPAM produced way too many false positives early on
      - The spam filtering accuracy leveled off at about 80% (number from DSPAM's web interface)

      So this is not another overzealous CS student here, but real-world testing.

      The DSPAM author does not address any of the real points and just rags on Cormack.

      Not much of a "rebuttal" in my book.
    • While such can lead credence to a strong case, it bodes when mentioned as the very first points.

      But does it bode well or ill?
    • I think you mean bodes ill. Bodes means something similar to predicts or foretells.

      Thank you, that is all.

    • I disagree entirely with 3. You can NOT test a device's accuracy by comparing its previous output to future output, even if you also backcheck possible errors using third machines. It is just BAD science and you should be graded F- for even attempting to do it.

      You ignore the change in relative accuracy.

      Assume for example that SpamAssassin is in fact the best around, but it has a 10% false spam rate. Every other program is slightly worse with an 11% false spam rate, always making the same mistake that Spa

  • What is typical (Score:4, Insightful)

    by Anonymous Coward on Thursday June 24, 2004 @12:13PM (#9519185)
    Due to X's extremely high volume of traffic and the fact that X's email addresses were available to harvest bots on the Web and in newsgroups for 20 years, it is no surprise that X has an abnormally high spam ratio, 81.6%.


    I'm not happy about this; first he says that this account has an abnormally high spam ratio, and then says that a normal user can have 60%. Where do these figures come from, I would like to know, as my average is pushing up against 100%. I don't think that there is such a thing as an average user; some people seem to get nearly no spam and the rest of us get almost complete spam.

    Reviewing today's inbox reveals around 200 emails, of which 8 were legit. You do the maths; I would be making progress if it were only 81%.
  • by NigelJohnstone ( 242811 ) on Thursday June 24, 2004 @12:13PM (#9519188)
    Oh boy he goes on and on, if ever you wanted to cut out the spam in an article...

    His main points (at least the ones I agreed with):

    1. No training period, many features only turn on after lots of real emails have been processed. Fair enough.

    2. No purge window: stale emails get purged over time (e.g. 4 months), but in a test everything is shoved through at once (in minutes) and so nothing gets purged. Again fair.

    The rest of it complains about the tester, or complains that it was less than ideal conditions & settings for the particular filter.
    We call that 'the real world' here.

    Sys admins are not experts in configuring filters.

    Also he should realise that any new filter gets a better rating than the dominant filter. Spammers try to defeat the most popular filter of the day. So sure, a new filter might perform better than an existing one *initially*, simply because the spammers are targeting the existing, dominant one. Until it becomes dominant, and then the spammers adjust the spam to defeat the new dominant filter.

    So in the real world the data set will always be unusual because the spammers make it that way.

  • Zdziarski claims Cormack mainly used Spamassassin to classify the corpus into the ham and spam groups.

    If this is true then to me this is a critical flaw in Cormack's methodology.

    Not saying there are, or aren't other flaws. But this to me is the main one to consider. Zdziarski should have just put this at the top of his response, instead of putting a lot of waffle about stuff that does "not appear to have been a problem with Cormack's tests".
    • To repeat a comment I made just above. From his original test paper:

      "The test sequence contained 49,086 messages. Our gold standard classified 9,038
      (18.4%) as ham and 40,048 (81.6%) as spam.
      The gold standard was derived from
      X's initial judgements, amended to correct errors that were observed as the result
      of disagreements between these judgements and the various runs."

      From this I got that:

      1. He had an initial set of Spam judged by person X. (e.g. 99.84% accurate).
      2. That he ran it through each test filter
  • by cynicalmoose ( 720691 ) <giles.robertson@westminster.org.uk> on Thursday June 24, 2004 @12:40PM (#9519507) Homepage
    As far as I understand, Cormack accepted that he was testing only on one person's corpus, and qualified his findings as such.

    This is something that is featured throughout the rebuttal - an argument that runs:
    a) Such and such was done incorrectly
    b) Therefore the system was inaccurate
    c) Therefore CRM-114 is better than stated

    The ultimate point where I lost patience was where he claimed that the results were invalid because they didn't conform to accepted, real world knowledge. The study was empirical; it shows something, based on how it was set up; and what it shows is valuable. If you discarded results each time they contradicted accepted wisdom, we would still believe in a geocentric universe.
    • The ultimate point where I lost patience was where he claimed that the results were invalid because they didn't conform to accepted, real world knowledge. The study was empirical; it shows something, based on how it was set up; and what it shows is valuable.

      But without knowing how the test was set up, how can you trust the test's so-called empirical results?

      In medicine, research results aren't generally trusted unless 1) the study was sound, e.g., double-blind and 2) a separate team has recreated equi

  • I propose a little test of my own...
  • POPFile OTOH (Score:4, Informative)

    by JohnGrahamCumming ( 684871 ) * <slashdot@jgc.oERDOSrg minus math_god> on Thursday June 24, 2004 @12:47PM (#9519594) Homepage Journal
    Actually publishes statistics from real users. If the user is willing, POPFile sends back accuracy information to a central server, and then a nightly cron job analyzes it and publishes the information on the web for all to see.

    No need to read a study, or even the author's opinion. No wild claims made, just real data.

    Here it is:

    http://www.usethesource.com/popfile_stats.html

    Shows that POPFile has an _average accuracy_ over all users, including the training period, of 95%. After it's seen 500 emails it has an accuracy of 97%. And the average POPFile user has 5 categories of classification.

    John.
  • DSPAM (Score:2, Interesting)

    by Big Boss ( 7354 )
    I don't claim to have done any scientific studies on the subject, but I have tried a number of different anti-spam solutions over the past few years. In my experience, the best solution is a multi-pronged approach that takes advantage of the strong points of a few setups.

    If you want to talk about the results from a single filter in my current arsenal, I would give DSPAM the highest marks. I found it to catch more spams than a trained and customized SpamAssassin with no false positives. It's also very fast,
  • The author 'architected an appropriate response'. Presumably this is a lot better than simply replying?

    I'd advise the author not to use the word "percept", because he doesn't know what it means.

    I'd advise the author not to use the word "someodd", because dictionary.com doesn't know what it means.

    As for "very unique"...
  • The problem w/ Bayes (Score:3, Informative)

    by king_ramen ( 537239 ) on Thursday June 24, 2004 @01:12PM (#9519915)
    As the author of this article states OVER and OVER, it is REALLY EASY to mess up your filters, and it is very tedious (with lots of permutations) to properly build your corpus. For a centralized spam filtering solution, the goals are:
    1. Insulate the users from spam
    2. Insulate the users from "administration"
    3. Do no harm (no false positives)
    For these goals, I would take a "dumb" filter, set it conservatively, and hope for an 80% catch rate and zero false positives. DSPAM has a complicated workflow that requires EACH AND EVERY end user to complete a feedback loop. This is WAY too much to expect from people who are barely capable of finding Google. Unless the ONLY access to the mail is web-based, with a VERY clear "This is Spam" button, Bayes is a sysadmin's nightmare. My only gripe w/ SpamAssassin is performance. If I could get SPAMD to analyze headers in 25ms instead of 2000ms I'd never look back. As it is, DSPAM's performance has me very jealous.
  • by telstar ( 236404 ) on Thursday June 24, 2004 @01:24PM (#9520043)
    He launches rockets ... He develops 3D game engines ... He analyzes spam trends ... Is there anything this Carmack guy can't do?

    What'd you say?
    Cormack?

    Nevermind...
  • I find the most telling point is that he used SpamAssassin to decide whether the various spam detectors had made an error or were correct.

    OBVIOUSLY, SpamAssassin is going to agree with SpamAssassin being the best.

    What the test really did was determine how close to SpamAssassin the other spam detectors were, not how good they were at detecting spam.

  • by dougmc ( 70836 ) <dougmc+slashdot@frenzied.us> on Thursday June 24, 2004 @01:36PM (#9520181) Homepage
    This seems very atypical. The test subject does not represent typical email behavior, except among the most hardcore geeks. Even still, typical hardcore geeks will adjust this behavior in an attempt to curve spam. The typical technical user (someone who makes his living online) will have the same email address for perhaps five or more years, and the typical non-technical user (a majority of the users on the Internet, lest we forget) will change email addresses every couple of years. In either case, most sane users use one or two variants at the most.
    Who is Jonathan to decide what constitutes sanity?

    Maybe I'm a hardcore geek, but I do do exactly what Gordon does -- have several accounts feeding a `master' mail account, using addresses I've owned for over a decade. I also post to Usenet and mailing lists with my unobfuscated mailing address -- I want people to be able to reach me, and I refuse to let the spammers take that away from me.

    And I think I'm very sane, thank you.

    49,000 emails in eight months is also absurd.
    I agree. That's an absurdly *small* amount. I personally receive over 1500 spams/day -- so I'd have 49,000 in under a month. Obviously the amount of spam I receive is because I set myself up as a target, but I'm hardly the only one. Even Jonathan's email address is clearly listed on his page, unobfuscated, so he's doing it too, at least to some degree.

    (As a piece of anecdotal evidence, Spamassassin catches all but about 4/day of the spams I get, and false positives are extremely rare. Of course, I have spent a good deal of time tweaking SA to work best with my email, and it now works very well.)

    A good test should have included independent tests with corpora from 10-15 different test subject, of all walks of life - geek, doctor, etc.
    That sounds fine in theory, but in practice it's hard to do. How many people from all non-geek walks of life save *all* their email, including spam, and are willing to give it to you so you can analyze it?

    And merely capturing all their email won't do it -- they need to categorize it for you, because they're the only ones who can reliably decide what's spam *for them* and what's not.

    I do agree that the study had more than its share of issues, but this critique goes way over the top.

  • Crap writing (Score:3, Insightful)

    by fuzzy12345 ( 745891 ) on Thursday June 24, 2004 @01:40PM (#9520227)
    I was turned off as soon as I hit that word "architect" being used as a verb. After our hero "architected" his response, did he assign the task of actually writing it to someone else? Nooo.

    English does evolve, and good writers sometimes repurpose words to great effect. Alas, judging by the rest of the reviews here, our hero is NOT a good writer -- having built a shoddy and ramshackle outhouse, he proudly crowns himself the architect of it.

    As for all those people who shout "prescriptive grammarian!", I often suspect they're just too lazy to learn to write well, and have decided that claiming that rules are passe is an effective workaround.

  • When self-proclaimed pundits do these studies, they should also take into account the exponential increase in resources needed to accept and filter the mail's content. This means more memory and faster machines, slower mail service, more deferred mail, and reduced overall performance for everything else that might be done on that server.

    Contrast this with the effectiveness of RBLs, which block spam based on the source and immediately cut off the huge resource requirement needed by these "filters".

    By
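    For comparison, an RBL check is roughly this much work (a sketch; the blocklist zone name here is a placeholder for whichever DNSBL you actually use):

    import socket

    def listed_on_dnsbl(ip: str, zone: str = "dnsbl.example.net") -> bool:
        # Reverse the octets and look the name up inside the blocklist zone;
        # any answer at all means the sending IP is listed.
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:
            return False   # NXDOMAIN: not listed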
    • I have noticed that black lists are indeed effective. Many spammers now use "bullet proof" spam hosts, so they use static domain names. However, there has been a marked rise in zombie systems sending spam. These are systems that are infected by viruses and then used as spam hosts. Since these systems come online rapidly (when they are infected) and then drop out (when they are cleared of the virus or booted off their ISP), it seems unlikely that black lists will help.

      At least in the spam stream I s

  • by gvc ( 167165 ) on Thursday June 24, 2004 @02:48PM (#9521026)
    We shall not respond to Mr. Zdziarski's attacks, except to identify the most outstanding factual errors and to note that ad hominem arguments are irrelevant in assessing the validity of our work.

    We encourage interested parties to read our paper [uwaterloo.ca] and our points of fact re Zdziarski [uwaterloo.ca].

    Thomas Lynam
    Gordon Cormack
    June 24, 2004

    • It would be so much easier to believe you if you would just show us the code you used to perform the tests.

    • While obviously Cormack and Lynam are central to this discussion, it's depressing that this is +4, Informative when instead they obviously resent any serious questioning of their work. Is there a '-1, Wussy' moderation?

      "We shall not respond" -- huh? Pull the log out of your ass guys. Like it or not, he's got legitimate beefs with your study. What's more, he's got cred: dude puts SERIOUS effort into GPL'd software that helps people, so his input is relevant and valid. Get over it.

      Besides, his questioning o
  • I am always confused by the omission from these tests of collaborative filters like Cloudmark's SpamNet [cloudmark.com], which I have used at work for a long time with a very high "catch" rate, no real processing time, and no false positives. Essentially, it hashes every email you get and checks the hash with the server. If you get a spam, you right-click and report it as such. Then it pulls from your inbox any messages which enough credible people have marked before you. (A gross oversimplification, but close enough.)

    I feel li
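    A bare-bones sketch of that collaborative idea (deliberately oversimplified, as the comment itself says -- real systems use fuzzier fingerprints and reporter reputation rather than a plain hash):

    import hashlib

    reported_spam: set[str] = set()   # stands in for the central server's database

    def fingerprint(body: str) -> str:
        # Normalise case/whitespace so trivially tweaked copies hash identically.
        return hashlib.sha1(" ".join(body.lower().split()).encode()).hexdigest()

    def report_as_spam(body: str) -> None:
        reported_spam.add(fingerprint(body))

    def is_reported_spam(body: str) -> bool:
        return fingerprint(body) in reported_spam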

  • by Anonymous Coward on Thursday June 24, 2004 @03:08PM (#9521263)
    I remember going through the CRM114 installation docs, and vividly remember the 20 or so steps that I had to go through; after about 3 or 4 hours of trying to get it installed, I finally gave up. I think part of the goal of software design is to make your software so that people will be able to quickly install and use it. The author of this program lost sight of this important point. I'm not going to sit there and reverse engineer some esoteric codebase just to get it working, and I'm sure a lot of other people feel the same way. Therefore, I use SpamAssassin among other things, and it works really well and was quick and relatively painless to get working. I didn't have to go through their source code to figure out how to get it installed.
  • by jmason ( 16123 ) on Thursday June 24, 2004 @04:09PM (#9521919) Homepage

    My $.02. disclaimer: I'm one of the SA developers.

    • "The Corpus was Classified by SpamAssassin, for SpamAssassin", and "The Accuracy of the Test Subject's Corpus is Questionable":

      No, this is incorrect. Firstly, he states that he used user feedback to reclassify FNs and FPs (p. 4).

      The misunderstanding probably comes from p. 6, where he notes that he also ran SpamAssassin 2.63 over the "gold standard" corpus once it was complete, to verify his original classifications.

      However, in addition to that, he states 'all subsequent disagreements between the gold standard and later runs were also manually adjudicated, and all runs were repeated with the updated gold standard. The results presented here are based on this revised standard, in which all cases of disagreement have been vetted manually.' So in other words, the "gold standard" should be as near as possible to 100% accurate, since all the tested filters and the human classification have "had a shot" at classifying every mail, and the human has had final say on every misclassification.

      In other words, if any misclassifications remain in the "gold standard" corpus, every one of the tested filters agreed on that misclassification.

      IMO, that's as good as a hand-classified corpus can get.

    • "old versions of software were used":

      It's unrealistic to expect the author to use the most up-to-date versions of filters available by the time the paper is made available to the public. That's the difference between results and a paper -- it takes time to analyze results, write it up and come to valid conclusions, once the testing results are obtained. IMO, the author can't be faulted for spending some time on that end of things.

      Given that, using 6-month old release versions of the software under test seems reasonable.

      SpamAssassin 2.60, when new SpamAssassin rules were last added to a released ruleset, is 9 months old (released 2003-09-22); so logically, in testing against DSPAM 2.8 (released 2003-11-26), DSPAM should therefore have had the edge. ;)

    • "test started with untrained filters":

      IMO, that's the real world. People don't start with fully-trained filters.

      In addition, the graphs on pp. 15-20 show accuracy over the course of the entire 8 month period, so "post-training" accuracy can be viewed there.

    • "spam in the test is as old as 14 months":

      Nope, he states (p. 4) that the corpus uses mail between August 2003 and March 2004.

    • "it should purge old data":

      SpamAssassin purges its Bayes databases automatically, based on the age of messages in the corpus. We call it "expiry".

      In that test, the "SA-Standard" dataset would be using this, so stating "Cormack did not perform any purge simulation at all" is not accurate. However, that would not have increased SpamAssassin's accuracy figures, since we have generally found that while it keeps the overhead of Bayes database sizes and memory down, it marginally reduces accuracy, instead of increasing it (at the default settings).

      (Also worth noting that it can deal with being run from an en-masse check over a static corpus, as it uses the timestamp information in the Received headers rather than the current system time. So even if this test was run in the course of 4 hours, it'd still be an accurate simulation of what would happen in "real world" use over the course of 8 months.)
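      As a sketch of what "uses the timestamp information in the Received headers" means in practice -- illustrative only, not SpamAssassin's actual code:

      from email import message_from_string
      from email.utils import parsedate_to_datetime

      def apparent_delivery_time(raw_message: str):
          """Date the message was delivered, taken from its most recent Received header."""
          received = message_from_string(raw_message).get_all("Received") or []
          if received:
              # The timestamp follows the final ';' of a Received header.
              return parsedate_to_datetime(received[0].rsplit(";", 1)[-1].strip())
          return None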

    And finally, what Henry said in comment 9520473 [slashdot.org].

    --j.

  • Honestly, the first time I read Cormack's paper I stopped partway through because his findings didn't jibe with my own experience. I've applied no scientific method to debunk his findings, and I don't care to -- I have other demands for my time.

    I use and recommend DSPAM. Many of the accounts that are aggregated in my inbox have been exposed on the web and in Usenet for several years, so my spam load is probably about as high as anyone else's. No comparison testing analysis can change the fact that my inbox

"Ninety percent of baseball is half mental." -- Yogi Berra

Working...