
Reporting Vulnerabilities Is For The Brave 245

Posted by ScuttleMonkey
from the cya-first-and-always dept.
An anonymous reader writes "A recent post on the CERIAS weblogs examines the risks associated with reporting vulnerabilities. In the end, he advises that the risks (in one situation, at least) were almost not worth the trouble, and gives advice on how to stay out of trouble. Is it worth it to report vulnerabilities despite the risks, or is the chilling effect demonstrated here too much?"
This discussion has been archived. No new comments can be posted.

Reporting Vulnerabilities Is For The Brave

  • by Anonymous Coward on Monday May 22, 2006 @04:18PM (#15383729)
    I agree with the article for the most part - the advice he gives students is probably the correct advice from a teacher. However, I cannot agree with the conclusion he reaches:
    I agree with HD Moore, as far as production web sites are concerned: "There is no way to report a vulnerability safely" [securityfocus.com].

    I think a vulnerability can be reported anonymously quite safely (for a good deal of people anyway). Try the following:

    1) Get a laptop with wireless.
    2) Boot with Knoppix, change MAC address.
    3) Walk around until you find unsecured AP.
    4) Post said vuln everywhere (including /.)

    -wmf
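Step 2's MAC change just needs a fresh random address. A minimal sketch of generating a valid locally administered MAC in Python (hypothetical helper, not tied to Knoppix or any particular tool - you would then apply it with something like `macchanger` or `ifconfig`):

```python
import random

def random_mac():
    """Generate a random locally administered, unicast MAC address.

    In the first octet, the locally-administered bit (0x02) is set and
    the multicast bit (0x01) is cleared, so the address is valid for a
    NIC and never collides with a vendor-assigned (globally unique) one.
    """
    first = (random.randint(0, 255) | 0x02) & 0xFE
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

print(random_mac())
```

Note this only randomizes the link-layer identity; the AP still sees your traffic, which is why the parent combines it with someone else's unsecured AP.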
  • by overshoot (39700) on Monday May 22, 2006 @04:18PM (#15383731)
    All things considered, it's a whole lot safer (not to mention more profitable) to notify the black hats about vulnerabilities rather than the vendors or the public.
    • by Anonymous Coward
      Every time I've reported a bug of any nature to a F/OSS project it's been quite well received - and the one that was (arguably) a security bug saw the patch issued for the benefit of all users that very afternoon.


      If reporting a security bug to one of your vendors (OS or other software) or suppliers (ISP / hosted software) is a problem, change your vendor.


      If reporting a security bug to one of your employers is a problem, change your employer.


      • That's fine for application software, where the code is running on your machine. However, this article is talking about security testing on 3rd-party web pages. In this case, I think the article's opinion is correct. Unless there's a signed statement explicitly allowing you to do penetration testing, you shouldn't go prying into other people's web sites even if you do think there is a vulnerability. And, should you (inadvertently) find a vulnerability, you ought to keep it to yourself and delete all evid

      • Same here. When I reported a bug I stumbled on with QtParted I got an email the next day. The project manager told me to update because he thought he had it fixed, but added to be sure to email him if I ran into it again. But after seeing people sued for reporting vulnerabilities in proprietary software I wouldn't even try. Hey, it's closed source, so I can't help but feel like, why bother to help them anyway? They're making money off of it and they're not going to pay you for helping them and may, in fact, attempt
  • by disasm (973689) on Monday May 22, 2006 @04:19PM (#15383733)
    Open Source projects don't interrogate and try to prosecute you if you find a security problem and report it.
  • weird (Score:2, Insightful)

    by drfrog (145882)
    I'm not proposing one do this... but it makes one think

    'if im gonna get jailed anyways...might as well make some money off of it'

    • by st1d (218383)
      Apply this to the "war on drugs" and other situations, and it's easy to see why this mindset is doomed to failure. You're virtually begging the kinds of folks that might commit a crime to take their crimes as far as they can. At the same time, you're discouraging normal citizens, who might otherwise intervene to help protect their communities, from doing so.

      On the other hand, I can feel for the security admin who's tired of chasing down dead ends created by random people actively trying to punch holes in
  • by booch (4157) <slashdot2010@craigb u c h e k . c om> on Monday May 22, 2006 @04:23PM (#15383763) Homepage
    Maybe there should be a site to allow anonymous reporting of vulnerabilities. This way people could do the right thing without having to worry about the repercussions.

    You could have some sort of secret key to verify that you were the original submitter, if you later wanted recognition for the report. (I imagine a PGP signature of a secret text would be sufficient to allow validation, without any chance of determining who posted until they came forward.)
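The parent's idea can be sketched without PGP as a simple hash commitment: publish a digest alongside the anonymous report, and reveal the secret later to prove authorship. A sketch in Python (the PGP-signature variant the parent suggests would work the same way; the report text here is made up):

```python
import hashlib
import secrets

def commit(report: str):
    """Create a commitment to publish with the anonymous report.

    Returns (secret, digest). Publish only the digest; keep the
    secret private until you want to claim authorship.
    """
    secret = secrets.token_hex(32)
    digest = hashlib.sha256((secret + report).encode()).hexdigest()
    return secret, digest

def verify(report: str, secret: str, digest: str) -> bool:
    """Anyone can later check that the revealed secret matches the digest."""
    return hashlib.sha256((secret + report).encode()).hexdigest() == digest

secret, digest = commit("auth bypass in example.com login handler")
print(verify("auth bypass in example.com login handler", secret, digest))
```

The random secret also prevents anyone from brute-forcing the commitment from the (public) report text alone.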
  • /. effect (Score:3, Insightful)

    by joe 155 (937621) on Monday May 22, 2006 @04:24PM (#15383772) Journal
    Well, the website has already gone down. One thing which I find with all this, though, is that you should just put it up anonymously on some often-checked BBS or newsgroup or something. It is really stupid that companies think that the danger of hacking comes from people who publicly state security holes and not the people who stay very quiet and use them... some mistake?
    • Re:/. effect (Score:2, Interesting)

      by coj (20757)
      We should be back up now. Here's a tip: unless you have a huge amount of RAM so you can up your MaxClients, Apache is much happier with persistent connections "Off" when dealing with Slashdot visits.
  • Anonymous Email (Score:3, Insightful)

    by Anonymous Coward on Monday May 22, 2006 @04:26PM (#15383780)
    You see, it's simple. Even if Bob's Software knows about the flaw in Program, they can at least say with a straight face that they had no idea it existed. Once you announce it publicly, they have been officially notified that the flaw exists. At that point, anything serious that happens - say, Program causes some other company to lose lots of money - makes Bob's Software a responsible party for allowing this known flaw to exist.

    What you did was open the door to litigation against Bob's Software for negligence. Bob's Software doesn't want the flaw to become public. When you stand up and point the finger at Bob's Software, they will be looking for someone to pass the litigation fees on to, so you get sued. Not only that, someone needs to be made an example of so others don't try it in the future.

    Anonymous email accounts are easy to come by. Send an anonymous announcement to the Full Disclosure mailing list and be done with it. Otherwise you're risking the legal bills of fighting whatever company decides to sue you.

    • by dereference (875531) on Monday May 22, 2006 @04:48PM (#15383908)
      Even if Bob's Software knows about the flaw in Program, they can at least say with a straight face that they had no idea it existed. Once you announce it publicly, they have been officially notified that the flaw exists.

      That's all quite true.

      At that point, anything serious that happens, say Program causes some other company to lose lots of money, puts Bob's Software as a responsible party for allowing this known flaw to exist.

      And, if software were like any other tangible (and most intangible) products/services in the world, you would be correct here as well. Unfortunately it's not, so you're not. Why? Those lovely click-wrap EULA licenses explicitly and specifically disclaim all liability, including even fitness for purpose. Look at almost any EULA out there and you'll see that usually the most you could possibly recover, even if this software somehow manages to kill you, through gross negligence or otherwise, is the price you paid for it.

      Of course, Bob's Software doesn't want to part with your money, so your point is still partially valid. However, I think we shouldn't overlook the fact that we're not talking about huge product liability lawsuits, and yet they're treating disclosures as if we were. Basically they're trying to have their cake (EULA disclaimers) and eat it (prevent disclosures) too.

      They would, it seems, be doing fairly well at both right now.

      • Just because the EULA says it, doesn't make it true. Otherwise by reading this sentence you'd be agreeing to sign over your life savings and hand-deliver a package of Skittles to the Dalai Lama within 3 working days.
        Or even better, I can't put a punji pit in my front yard and then put signs on my property that simply say "Not responsible for injuries incurred on this land" and be totally immune from retribution when Little Billy becomes Little Spike.
        • But it's amazing how many people would think you could. I once saw a dump truck with a sign on the back that said "Not responsible for damage from falling rocks". I guess this is only to intimidate people into not filing a claim, because only an idiot would think they could get away with that. But the American public is so ignorant when it comes to their rights.
  • by Stanistani (808333) on Monday May 22, 2006 @04:26PM (#15383788) Homepage Journal
    Coincidentally the quote on the bottom of the page when this was posted:
    I stick my neck out for nobody. -- Humphrey Bogart, "Casablanca"

    Ah well, at least we'll always have Paris.
  • by buck-yar (164658)
    This raises a good point. There are many circumstances that exist where "doing the right thing" has potentially negative consequences.

    * Picking up a hitchhiker

    * Reporting evidence of theft from a company (retaliation, backlash if the employee is exonerated)

    There's more than my limited mind can produce.
  • Hope not, 'cause right now it's... hold on, there's someone knocking on my door...
  • I don't get it (Score:2, Interesting)

    by gr8_phk (621180)
    Why do people think trying to hack web sites without asking the owners first is somehow acceptable?

    No really. Why should that be OK? Is it OK for someone to walk around the neighborhood and try turning all the doorknobs? How about pushing the doors open to see if they're bolted? Should they take a picture from inside and send it to the homeowner as proof that someone could get in? Should you be surprised when someone tries to prosecute such a person? Sorry for the analogy, let's just try to answer the firs

    • Re:I don't get it (Score:5, Interesting)

      by Mr. Hankey (95668) on Monday May 22, 2006 @04:57PM (#15383970) Homepage
      You're assuming someone tried to hack it. It's not impossible to stumble into a bug. I was using a "training" site at work a few years ago (we're required to do the same training/test every year) and hit the wrong button accidentally. I then hit the back button so I could click on the button to print a "certificate". As it turns out, I was then logged in as another user.

      Do you think I should have reported this? Should I have ignored the issue? I had access to another person's training records without authorization. No doubt someone could have gained access to mine as well. On the other hand, I'm not interested in being prosecuted for something this silly.
      • Re:I don't get it (Score:2, Interesting)

        by Jerim (872022)
        I don't trust the legal system to understand technology.

        Their logic is that you accessed someone else's account. Whether you intentionally did it or not, the fact remains that you did it. Therefore, 9 out of 10 courts are going to assume you are guilty.

        Just like if they saw you carrying a bag of cash right after someone robbed the 7-11. Nevermind the fact that you just cashed your paycheck at the local bank. You were found carrying money in a bag right after a store was robbed. No one is going to listen to
    • Re:I don't get it (Score:2, Insightful)

      by Anonymous Coward
      Why do people think trying to hack web sites without asking the owners first is somehow acceptable?

      I fail to see what any of your comments have to do with TFA. The author explicitly does not condone hacking. Your metaphor is wrongheaded, too. Public web sites are not the equivalent of a random private house on the street. If I walk into a store to buy something, go to the checkout, and discover that if I lean against the checkout counter cash streams out of the register, does the store want me to le

    • Re:I don't get it (Score:4, Informative)

      by Chandon Seldon (43083) on Monday May 22, 2006 @05:01PM (#15383997) Homepage
      The analogy is your problem.

      The article is talking about students noticing security issues in web applications that they are using. If you accept the physical property analogy at all, this is more like "seeing that a door that should be secured was left open".


    • Here we go again with the Doorknob Analogy. I see your "try turning all the doorknobs" and raise you a "don't leave your door open with a big neon sign that says WIDE OPEN DOOR HERE".
    • Re:I don't get it (Score:3, Insightful)

      by finkployd (12902)
      Is it OK for someone to walk around the neighborhood and try turning all the doorknobs? How about pushing the doors open to see if they're bolted?

      Because that is EXACTLY like finding a vulnerability on a website. Once again, real-life analogies serve only to confuse the issue, having little to no relevance to the subject at hand.

      There are many ways to find bugs in web applications, often just from regular use. A vulnerability is nothing more than a bug that happens to have more serious repercussions. I've
    • I remember someone explaining that it's perfectly legal to walk up to a house and walk through an open door, so long as you leave when you're asked to. It's not trespassing until you refuse to leave when they ask. It's not breaking and entering if the door was wide open.
      • Good luck with that.
    • Is it OK for someone to walk around the neighborhood and try turning all the doorknobs?

      Bad analogy.

      House doors don't just magically spring open just when you walk down the street and have an Irish sounding name.

      Some websites, however, do. Especially if they run Microsoft Sequel Server, hehe.

    • Is it OK for someone to walk around the neighborhood and try turning all the doorknobs? How about pushing the doors open to see if they're bolted?

      No, but there are neighborhood watch groups, and it is normal to call the police if a door looks like it is hanging ajar. It's also normal to petition the local government to install or repair streetlights in dark or dangerous areas. Due to the nature of computing (zombies, identity theft) I think it is very much my business to see that my neighbors are secure

  • by jonfr (888673) on Monday May 22, 2006 @04:51PM (#15383928) Homepage
    "where they can become suspects and possibly unjustly accused simply because someone else exploited the web site around the same time that they reported the problem."

    Been there, done that. Got arrested, got lucky, found not guilty on all but one charge, but lost three computers because the court figured it was wrong of me to use a pwd (I did test the flaw, big mistake), even if it was on a public C: drive for everyone to see, in a clear text file. I am never going to report a bug in a computer system in a school, company or anywhere else again. Don't care what the type of the flaw is or whose it is, it is their own problem; they can handle their own infestation.
    • Jonfr,

      Some people might argue that reporting networking vulnerabilities on Windows is like shooting fish in a barrel. But nevertheless, from what you wrote, you seem to have done the right thing. I'd strongly suggest that next time you find a flaw in that institution's network (was it your school?), you just post it anonymously on the Internet. Preferably on a high traffic site.

      If people start doing that, maybe the notion that you shouldn't shoot the messenger will slowly sink into the thick skull of th

  • True story (Score:5, Interesting)

    by celardore (844933) on Monday May 22, 2006 @04:54PM (#15383947)
    This story is true...

    It's easy to spoof email addresses with a very simple PHP script.
    I decided one day to trick one of my colleagues. I sent him an email 'from' one of our very attractive colleagues (in a fairly distant department, so I thought it safe at the time) complimenting him on his physique and machismo. I used her real email address as the 'spoof' address, which, being the dumbass he is, he replied to. In a manner that would not be considered acceptable in a work environment, let's say...

    Well, I got in trouble for this. (Everyone where I work already knew I was the only one capable of something like this... [lame]) So that same afternoon I was called into my boss's office. He was quite frank - and remember that I value my job here - he said "That email... You had something to do with it, didn't you?"

    I said that I was the cause of that little incident by way of one of my scripts. I said I was sorry it went as far as it did, and my boss accepted that.
    After that my boss said, "Do you have any other things you wish to report?" I decided that I'd come clean with everything I'd found out about the work network. I told them that using the Citrix system, I could remotely control anyone's PC on the network. I told them I could spoof emails from anyone... which resulted in my company rejecting email authorisation for crediting invoices, full stop.

    OK, through a prank I caused my company a bit of upset... But I, in turn, improved systems indirectly. And all this because I exposed one weakness, and upon my boss asking me about it, I told all. As I'm sure any loyal employee would do. Through exposing a weakness in my company, I concentrated effort on plugging those holes.
    • Only on slashdot could anyone be surprised that a man responded to an attractive woman flirting with him. Personally, if he's single then he'd have been a dumbass not to (though it might have been more appropriate to keep the physical evidence lower-key).
      • Only on slashdot could anyone be surprised that a man responded to an attractive woman flirting with him.

        Only a tool would email someone at their work a sexually explicit proposal (which I assume is what was alluded to) without already being in a sexual relationship. Even then, decency states that you use some innuendo so they don't get in trouble because of your post-teenage lust.

        Personally if he's single then he'd have been a dumbass not to

        Yeah, by setting up a date, maybe? Once you have her full

    • Re:True story (Score:3, Insightful)

      by merreborn (853723)
      "It's easy to spoof email addresses with a very simple PHP script."

      It's easy to spoof email addresses with a very simple telnet client.

      telnet mail.example.com 25
      HELO local.domain.name
      MAIL FROM:<billg@microsoft.com>
      RCPT TO:<pranked@yourdomain.com>
      DATA
      Subject:

      .

      QUIT

      Hell, you can usually just set an arbitrary 'from' address in your email client. I learned that trick on Netscape 3.0 in grade school.
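The underlying point - that the From header is just client-supplied text that nothing validates at composition time - can be shown with Python's standard email library (a sketch; actually sending the message would go through smtplib and whatever the receiving server's SPF/DKIM policy permits, and the addresses here are just the thread's examples):

```python
from email.message import EmailMessage

# Build a message. The From header is an ordinary text field the
# client fills in; no library or protocol step verifies it here.
msg = EmailMessage()
msg["From"] = "billg@microsoft.com"   # arbitrary, unverified
msg["To"] = "pranked@yourdomain.com"
msg["Subject"] = "Re: your request"
msg.set_content("Any address at all goes in the From header.")

print(msg["From"])  # billg@microsoft.com
```

Modern receivers mitigate exactly this with SPF, DKIM, and DMARC checks on the sending domain, which is why the trick worked far better in 2006 than it does now.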
  • by Todd Knarr (15451) on Monday May 22, 2006 @04:56PM (#15383969) Homepage

    The people running Web sites, or creating software for that matter, might want to consider some of the consequences of their current crack-down on vulnerability reports. Yes, vulnerability reports are bad PR. However, if this keeps up people who find vulnerabilities will have only two feasible alternatives:

    1. Say nothing. This leaves the site or software wide open to exploitation by the unscrupulous. The PR when this comes out will be even worse (and it will come out).
    2. Don't report to the creators. Report only to the general public, anonymously, with full details included so nobody has to trust the reputation of the reporter to verify the validity of the report. Of course this makes it impossible for the creators to fix problems before the world gets told about them.
  • by JeffSh (71237) <[gro.0m0m] [ta] [todhsalsffej]> on Monday May 22, 2006 @05:00PM (#15383986)
    I have two times found and two times reported vulnerabilities I have found in public web based systems.

    Let me tell you, it was not easy. Here's the story of the first time because it's the most interesting.

    I worked for a community college in its tech department. A lot of my time was devoted to answering phones and helping faculty with problems, which did leave me idle a lot (high availability requires high idle time as a consequence). As a tinkerer, my idle time is never spent truly idle, but pursuing things that don't require 100% attention.

    The community college I worked for had many different systems, and as such had many translation layers between them. One of these layers was a transition from a "Portal" type website to another website that handled student information (class registration, transcripts, billing, paying - you know, all that important personal stuff).

    Anyway, I found a flaw in one of the scripts used to authenticate a user session to the second web service. The flaw was that the moron who coded it decided that creating a script that accepted one variable (the username) was enough security to authenticate a login.

    By closely observing the script's actions through my web browser, I noticed there were two very quick redirects. Focusing my efforts there (and logging my URL requests), I found the call to the script that required only the username.

    So, basically, at that point I had access to anyone's student account that I had the username for.
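The flawed pattern he describes - a portal-to-backend handoff script that trusts a bare username - might have looked roughly like this (a hypothetical reconstruction for illustration, not the college's actual code; the function and variable names are invented):

```python
import secrets

SESSIONS = {}  # token -> username, standing in for server-side session state

def handoff_login_vulnerable(username: str) -> str:
    """The bug: issue a session for any username, no credential check.

    Anyone who finds this endpoint's URL can log in as anyone whose
    username they know - exactly the flaw described above.
    """
    token = secrets.token_hex(16)
    SESSIONS[token] = username
    return token

def handoff_login_fixed(username: str, ticket: str, verify) -> str:
    """One safer shape: the portal passes a ticket the backend can verify
    (e.g. a signed, expiring assertion), so a bare username is worthless."""
    if not verify(username, ticket):
        raise PermissionError("invalid handoff ticket")
    token = secrets.token_hex(16)
    SESSIONS[token] = username
    return token
```

The fix's `verify` callback stands in for whatever signature or ticket validation the two systems would share; the point is only that the handoff must carry something an attacker cannot supply.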

    I documented it very well in a long email, and demonstrated the flaw to my coworkers. I thought I would be a real hero for finding it; I mean, after all, if I had found it, who knows who else might have? Surely, disaster averted!

    But... my idealism in the situation was met hard with reality. My inexperience led me to not take into account factors I should have.

    After reporting the vulnerability, a minor investigation was launched, of which I was the subject. I felt more like a criminal than a saint. After I demonstrated how I could log in to their accounts, my coworkers were suspicious, as were my superiors. The thought pattern seemed to go like "Well shit, if he can do that, what else has he done? Why was he even poking around there in the first place?".

    While never actually accused of any wrongdoing, they weren't nearly as impressed with my find as I thought they would be. I was looking for a pat on the back, maybe a bonus, but instead my superiors were troubled and nervous. I'm not sure if I was right in feeling this way, but I never felt quite fully trusted there again after that one.

    The other thing I didn't think about was how the existence of the error then impeached the person who wrote it. Rightfully so, because it was a FOOLISH error, but the guy who wrote it had been employed there far longer than I, and of course having me find it and dismantle it was quite an embarrassment to him.

    I ended up leaving the job 6 months later for a variety of reasons, but reporting the vulnerability was one of the two or three core reasons that I left. I don't regret it at all and would do it the same way again, but going through it taught me a lot about how NOT to be someone's boss (should I ever become one in the future), and not to react in the accusatory manner my superiors did.
    • Similarly, I was recently taking a proctored exam. The exam center used a computer-based testing method, running on a Windows PC. The test was a math test, and the computer was pretty much wide open. Only very minimal measures were taken to lock down access and functionality. Yet, they had a pair of goons frisk me on the way in, and took away my cell phone, my watch, and pen.

      I demonstrated for the proctor the fact that ANYONE could use the Start menu's Run item to open calc.exe, and therefore access the
    • Don't expect to get rewarded for making someone else look bad. Yes, that wasn't your intent. I know you were working to improve operations, but that idea was probably inconceivable to administrators.
    • Surprisingly, I went through an almost identical situation to this, and also left about 6 months afterwards for similar reasons.

      In my case it was a very simple SQL injection bug in the login page. Being the person I am, I do test for these things out of curiosity and an almost compelling need to reassure myself that the systems I'm working with or using are relatively secure.

      I landed up in the middle of an 'investigation' after an e-mail with a couple of screenshots and a quick description of the bug was sent
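The login-page SQL injection he mentions is usually the classic string-concatenation bug. A self-contained sketch with Python's sqlite3 (hypothetical schema and credentials, not his employer's system):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, pw):
    # BUG: user input is spliced straight into the SQL text, so input
    # can rewrite the query's logic.
    q = f"SELECT 1 FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return db.execute(q).fetchone() is not None

def login_fixed(name, pw):
    # Parameterized query: input is bound as data and can never change
    # the query structure.
    q = "SELECT 1 FROM users WHERE name = ? AND pw = ?"
    return db.execute(q, (name, pw)).fetchone() is not None

# The classic bypass payload: ' OR '1'='1
print(login_vulnerable("alice", "' OR '1'='1"))  # True  (logged in!)
print(login_fixed("alice", "' OR '1'='1"))       # False
```

With the payload, the vulnerable query becomes `... AND pw = '' OR '1'='1'`; since AND binds tighter than OR, the always-true clause admits the attacker.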
  • by NicoNet (466227)
    I had worked for the Cuyahoga Falls School District in IT. I had noticed that on NeoNet's (our Internet provider) FTP server, anonymous was able to download, upload, and delete any file on the server. I reported this in October 2000 to NeoNet; they did nothing about it. In March of 2001 I was laid off due to financial issues in the school district. Weeks later, the school's web site was replaced with a porn site using the anonymous login. They immediately assumed it was me. Luckily they were able t
  • by cinnamoninja (958754) on Monday May 22, 2006 @05:01PM (#15384002)
    CERIAS Weblogs Reporting Vulnerabilities is for the Brave

    I was involved in disclosing a vulnerability found by a student to a production web site using custom software (i.e., we didn't have access to the source code or configuration information). As luck would have it, the web site got hacked. I had to talk to a detective in the resulting police investigation. Nothing bad happened to me, but it could have, for two reasons.

    The first reason is that whenever you do something "unnecessary", such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn't have? It's normal for the police to think that way. They have to. Unfortunately, it makes it very uninteresting to report any problems.

    A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble -- the proof is automatically a proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being not as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it...). Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.

    The second reason that bad things could have happened to me is that I'm stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously through an approved person (e.g., a staff member or faculty) and mechanism. Why anonymously? Because student vulnerability reporters are akin to whistleblowers. They are quite vulnerable to retaliation from the administrators of web sites (especially if it's a faculty web site that is used for grading). In addition, student vulnerability reporters need to be protected from the previously described situation, where they can become suspects and possibly unjustly accused simply because someone else exploited the web site around the same time that they reported the problem. Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (several security professionals don't yet either). They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions. Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities.

    So, as a stubborn idealist I clashed with the detective by refusing to identify the student who had originally found the problem. I knew the student enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited. I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student. My superiors also requested that I cooperate with the detective. Was this worth losing my job? Was this worth the hassle of responding to court orders, subpoenas, and possibly having my computers (work and personal) seized? Thankfully, the student bravely decided to step forward and defused the situation.

    As a consequence of that experience, I in
  • by humankind (704050) on Monday May 22, 2006 @05:03PM (#15384015) Journal
    When vulnerabilities are outlawed, only outlaws will use vulnerabilities.

  • by i am kman (972584) on Monday May 22, 2006 @05:10PM (#15384050)
    Hmmmm, of course the article focuses on the big evil website administrators attacking the small defenseless students who tried to (probably) illegally break into their systems. The article carefully avoids any discussion of what these students actually did to 'discover' the vulnerabilities.

    I'd venture to say that most hackers 'smart' enough to hack into a website are probably smart enough to send an anonymous email reporting the hack. If the administrator ignores the emails or warnings, then the burden falls upon them.

    This is similar to a crook breaking into a house and then reporting the secret stash of drugs or child porn they found. Ok, it would be nice if they could report it anonymously, but it certainly doesn't justify the initial illegal behavior. And, like most crooks, they probably break into hundreds of places before they either get caught or find stuff worth reporting (like being able to access student grades or SSN).

    That said, I agree it's in the website's best interest to allow folks to anonymously post vulnerabilities. Duh.
    • While in general I agree with you *if* the student actually did something illegal, there are a lot of circumstances where just a cursory inspection of the user-visible parts of the system will reveal the strong probability of a security vulnerability. If you say "Hmm, I wonder if this site has a SQL injection vulnerability" and then fire off a SQL command to print out all user names to your screen, congratulations, go-to-jail-do-not-pass-go. But if you open up your own student records, hit "View Source", an
  • You know the statement. And it's twice as true with vulnerabilities.

    It IS already very hard for security companies to get 0-day exploits in their hands. Making it illegal to report vulnerabilities is about the DUMBEST thing to do. It means that the info only circulates in the circles that want to exploit them.

    Now, that SURELY raises security. About as much as the snooping of governments raises freedom.
  • by Intron (870560) on Monday May 22, 2006 @05:22PM (#15384119)
    I recently figured out a fairly anonymous method of reporting vulnerabilities for a cost of only $0.39. Send SASE for details.
  • Not so different (Score:3, Insightful)

    by OpenSourced (323149) on Monday May 22, 2006 @05:22PM (#15384120) Journal
    Well, that's not so different as the situation in physical security systems. Go and tell a bank manager that they have an unsecured entry point in the air ducts, and that their alarms can be blocked by a XT42 bypass (or whatever), and the guards always have lunch at the same time leaving the screens unattended for ten minutes.

    You are probably doing them a big favour, but the fact remains that they will be suspicious of you, and may call the police. How do you know about those things? What are your intentions? It's quite a natural reaction. We only perceive the situation to be different because we happen to be experts not in alarms but in computers.
    • Re:Not so different (Score:3, Interesting)

      by alienmole (15522)
      A friend of mine once noticed a mains power anomaly being reported on a regular basis by his APC SmartUPS. He reported it, forwarding the info from the UPS's automated report to the power company. Later that day, he got a call from the police wanting to know why he knew so much about the power system - the power company had "turned him in". The police accepted his explanation, but he (and I) were a bit taken aback by the incident.

      BTW, where is your sig from? I like it. I'm still trying to learn t ...
    • Re:Not so different (Score:3, Informative)

      by x2A (858210)
      ...and what if you're in the bank, and you notice that their "authorised personnel only" door with a secure code lock is catching on the carpet when staff come through it, and not clicking shut?

      Point is, you don't always have to be looking to see something.

    • Re:Not so different (Score:3, Informative)

      by MikeBabcock (65886)
      This happens because the problem is reported to the wrong person. Management knows nothing of the practicalities of security. Explain these problems to a security expert who does work for the bank or who knows those people. If you report something out of the blue to management as a nobody, you'll obviously be regarded with great suspicion.
  • An MMO I play, Nexus TK [nexustk.com], had a serious bug revolving around event wins.

    Specifically, I was able to give myself an unlimited number of Elixir War victories. Now, an Elixir War is an event held maybe twice a day, if that, in which players are split into two teams, play a sort of paintball/freezetag game, and at the end of three rounds, the event hosts summon an NPC near the winning team's base. Clicking him gives the player a choice of prizes, and selecting a prize gives the player a victory and teleports t ...
  • In other words, no good deed goes unpunished. It's so sad that attempting to help people can put you at risk.
  • I wouldn't take his advice about deleting any evidence you found the vulnerability.

    The problem is: you could have stumbled onto a honeypot. Or the system could be vulnerable, but they could be logging your IP anyway (they're only half-incompetent).

    Deleting evidence is a sure-fire way to get indicted for obstruction of justice, lying to investigators, etc.

    I'm not sure what the right answer here is - but it's not "covering your tracks" because you can't always cover ALL of your tracks, and covering some of ...
    • Deleting evidence is a sure-fire way to get indicted for obstruction of justice, lying to investigators, etc.

      Bullshit. It isn't obstruction until there's an investigation (exceptions for legally required document retention). If they find you, tell them you deleted the records 'because you didn't need it anymore'. I suspect that if you tell them it was to avoid being persecuted by some DA looking for a kill, it wouldn't go over too well.

  • by bIOHZRd (196012) on Monday May 22, 2006 @06:06PM (#15384366) Homepage Journal
    ...Basically, I was job hunting and a friend directed me to the website of his company, which was hiring. Now, instead of typing "www.company.com" I typed in "company.com". Boom, I'm presented with a database login. Hmm, I thought this was maybe for the job search, and didn't see a register button, so I just hit login. I was then presented with what I THOUGHT was a fake database... kind of like the example php websites you can "login" to to get a taste for the app. I wasn't 100% sure, but eventually decided to try running a sql command... I changed all the company descriptions (it was a hiring agency) to "Change your admin password!" I then realized (late, I know) that this was a REAL database after more poking around and finding real names/phone #'s/emails. I found the head of the company's email and politely told her there was a SERIOUS hole in her system. She (VERY) quickly responded with her phone number, which I already knew, and asked me to call. So, being the good citizen that I was, I called. Ha! She immediately asked for my personal information, which I was hesitant to give, and I resorted to only giving my first name. Then she connected me with the "IT guy", if you could call him that, and I explained what I had done and how I did it. Throughout this whole conversation I was very nervous and got the feeling that I was being criminalized. After the whole ordeal was over (luckily they had backups), she offered me the job that I was initially seeking, but I politely refused, stating I didn't feel comfortable working for a company as insecure as hers.
  • Anonymous DSL (Score:2, Informative)

    by knifeyspooney (623953)
    Step 1: Get AnonDSL service [bway.net].

    Step 2: Create an anonymous webmail account.

    Step 3: Practical immunity to abusive lawsuits means they can't take you to court for ...

    Step 4: Profit!

  • Then giggle insanely to yourself when it does. Better than letting them shoot the messenger. Fucking vindictive fucktards.
  • by The Wicked Priest (632846) on Monday May 22, 2006 @06:47PM (#15384518)
    In 1988, on the first BBS I ever called, I found a vulnerability one day. It was a configuration error that allowed any user to elevate themselves to sysop status. Thinking I was being helpful, I reported it to the sysop. The next call, I was shocked to find myself locked out. Eventually the co-sysop persuaded the sysop to let me back on, but I was "on probation".

    So of course I learned my lesson, and I never reported any vulnerability to anyone, ever again. Found them, though.

    Here's my favorite: On my first ISP (shell account), files in /var/spool/mail/ were set readable and writable by the "mail" group. Also, "pine" was setgid mail. I could start pine, Compose a new message, and then ^R anybody's inbox right into it. One of the sysadmins had three megs of messages in his inbox, and some of them included credit card numbers. But like I say, I'd learned my lesson; I reported nothing. (Don't worry, that ISP later got assimilated by a bigger one, and that particular email system is long gone.)
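    Misconfigurations like that are easy to spot once you know to look. A rough sketch of the check (the function names are my own, not from any real audit tool): flag spool files that their group can read or write, and test whether a binary carries the setgid bit that made the pine trick work.

```python
import os
import stat

def group_accessible(spool_dir):
    """Return spool files readable or writable by their group, sorted by
    name -- the /var/spool/mail misconfiguration described above."""
    risky = []
    for name in os.listdir(spool_dir):
        mode = os.stat(os.path.join(spool_dir, name)).st_mode
        if mode & (stat.S_IRGRP | stat.S_IWGRP):
            risky.append(name)
    return sorted(risky)

def is_setgid(path):
    # A setgid binary (like the 'pine' in the story) runs with the
    # privileges of its group; combined with group-accessible spools,
    # that's a full mailbox-reading hole.
    return bool(os.stat(path).st_mode & stat.S_ISGID)
```

    Either condition alone is survivable; it was the combination that handed out everyone's inbox.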
  • I can't help but think that with the risk of negative consequences from informing someone incompetent, selfish, or insecure of a vulnerability that there needs to be some sort of safe harbor provision in laws in the case of reporting a vulnerability.

    For example: If you stumble upon (or more proactively find) a vulnerability, if you send details of the vulnerability, the actions you took to find it, the exact steps you took whilst exploiting it; and you only performed reasonably minimal actions whilst in the ...
  • How about a site specially made just for anonymously reporting vulnerabilities in software? How difficult would it be? How difficult is it to guarantee anonymity this way?
    Such a site would make it easy to expose vulnerabilities, but it would also have to be capable of weathering DoS attacks from those who are less than scrupulous.
  • I once reported a little insecurity in my high school's network. It was just an ftp host that allowed anyone to log on without a username or password, which was set up by the network admin's assistant. I revealed its existence by uploading a bit of porn to the directory and mailing the admin a floppy disk with a couple of files I had downloaded from it. The ftp was password protected within a week.
  • If the company in question is likely to sue or prosecute or persecute you for revealing the fact that the emperor has no clothes, then let them stew. I'm sure that someone with less honorable intentions will come along and find it just as easily, and then you can sit by and chuckle as their website/customer database/company is destroyed by a very small shell script.

    Of course, letting a company die when you could have helped isn't the moral thing to do - but apparently help isn't what they want.
  • by Saggi (462624) on Tuesday May 23, 2006 @04:32AM (#15385738) Homepage
    A lot of posts go into how to report a flaw anonymously. But this is curing the symptom. The disease is the fact that you get to be a suspect if you report a bug - and might even be incriminated by it.

    Many years ago some wise men in the air-traffic industry realized this. Often planes got into dangerous situations, but due to the risk of being accused of wrongdoing and the risk of losing their jobs, no pilots would report these situations. The result was that the safety of air traffic was not improved. Sometimes these incidents got people killed.

    So they changed the rules. Today pilots can report all dangerous situations without blame, even if they themselves caused the situation. Airports have briefing rooms where these reports are collected.

    The reason for this is that human error in air traffic does happen. But by getting a clear picture of the situations, you may be able to fix the underlying causes. If pilots miss a sign on the runway, the focus should not be on the pilot, but on the visibility of the sign. It doesn't really matter whether you say pilots should look out for signs or should get fired; next time an unlucky pilot misses the sign... bang.

    Something similar could be done with IT security. Reporting a bug if you encounter it should be with the focus on fixing the bug. Not to blame the one who found it.

    Remember the focus in this case is the flaw or bug, not the one who finds it. Unfortunately the case appears to be focusing on the man rather than the real issue. We do this in our daily life. It's a part of human nature. But the bug never gets fixed... and then the really bad guy comes...
  • by dcam (615646) <david&uberconcept,com> on Tuesday May 23, 2006 @09:28AM (#15386857) Homepage
    I once found an issue on a university network.

    It turned out that for a number of the Windows labs, available to all students, you were always logged in as administrator. When I reported this issue (along with a list of actions I could perform that could cause damage to the University or its students), I got the brush off. At the time I considered exploiting this to demonstrate the problem. I'm glad I didn't.

    This was a few years ago, but it was interesting that there was a total disregard for any security concerns in that particular section of IT support.
  • by pclminion (145572) on Tuesday May 23, 2006 @12:14PM (#15388140)
    I didn't exactly receive any thanks, either, though. Back in the early 90's I had a shell account on a local UNIX system. The system was set up to let people automatically create new accounts, which were then authorized by the administrators. To do this, you logged in as the user called "new."

    Well, the first thing that happened when you did that was that you read their terms of service in a "more" listing. Of course, it was easy to hit Ctrl-Z and drop to a shell at that point. Once in the shell, I did an "ls" of the "new" user's home directory. Lo and behold, in that directory was a file containing all the new users created that day, along with their system-assigned passwords.

    Funny thing -- most users never change their passwords. I had the master list to almost 90% of the accounts on the system! It got better, though. I noticed certain patterns in the assigned passwords. E.g., the last three chars of one password were the same as the first three of some other password. I wrote a program to piece it all together.

    Turns out, the "random" passwords were drawn from a 512-character string, with the beginning point randomly selected. So I busted the string up into each possible password and ran the thing through a crack program. Now I had closer to 99% of the accounts on the system!

    I reported this, and suggested that perhaps the system-assigned password algorithm was weak. The admins grumbled and yelled but didn't threaten any legal actions.

    I pissed them off again later, with an accidental fork bomb. I lost my account that time :-)
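    The weakness described above -- every "random" password being a fixed-length window into one master string -- is easy to model. A toy sketch (with a much shorter master string than the real 512 characters, and names of my own invention):

```python
# Toy model of the flawed generator described above: each "random"
# password is an 8-char window into one fixed master string.
MASTER = "abcdefghijklmnopqrstuvwxyz0123456789"
PW_LEN = 8

def assign_password(offset):
    # Wrap around the end of the string, as a circular buffer would.
    doubled = MASTER + MASTER
    return doubled[offset:offset + PW_LEN]

def all_candidates():
    # The entire password space collapses to len(MASTER) strings --
    # 36 candidates here, 512 in the original story -- instead of the
    # 36**8 a truly random 8-char scheme over this alphabet would give.
    return {assign_password(i) for i in range(len(MASTER))}

def stitch(a, b, min_overlap=3):
    # Piece two observed passwords together when the tail of one
    # overlaps the head of the other -- the "certain patterns" that
    # let the master string be reconstructed from the daily lists.
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None
```

    Once enough overlapping fragments are stitched, the whole master string falls out, and running every window through a crack program is exactly the attack the poster described.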
