Security Through Obscurity A GOOD Thing? 329

twrayinma writes: "In this story Marcus Ranum, CTO for Network Flight Recorder, claims that "Full disclosure is creating armies and armies of script kiddies" and that "grey hat" hackers aren't really interested in better security."
  • I find this story vaguely hypocritical considering Slashdot obscured their source code for about 3 years to maintain security.

    Wouldn't you find it more hypocritical if Slashdot, the NEWS site, decided not to post this story that they knew their readers would be interested in BECAUSE they obscured their source to maintain security, and people would call them hypocrites?
  • When the author speaks of "script kiddies", he implies an attacker who does not fundamentally understand the technology they are exploiting.
    It seems like the issue he _should_ have been focusing on is the problem of a clever minority creating easy-to-use pre-packaged cracking tools, thus empowering masses of dumb angry kids who would otherwise be completely harmless.

    Clearly one motivation for widely distributing a cracking tool that any idiot could use would be to force an unresponsive & incompetent software company to fix obvious dangerous problems that they might otherwise continue to ignore. This could be seen as legitimate activism.

    While a penetration tool that is easy to use facilitates legitimate security auditing, it seems reasonable to question the judgement of lowering the threshold of competence required to wage an effective attack...
  • I, as a clueless sysadmin, would rather see the source code, for numerous reasons:

    1. Source code allows me to compile an executable to test if my current systems are vulnerable. "Just patch," you say. The problems with this are twofold:
      1. Many of the patches that are released are not fully regression-tested against some of the more obscure problems. A choice between "Well, I could install this non-regression-tested patch on an important machine" and "I can't figure out if I'm even vulnerable to this one!" is not much of a choice at all.
      2. Not all of these exploits can be solved with a simple patch; some require reconfiguration, new software, whatever.
    2. Source code, rather than an executable, allows me to make sure I am not installing a Trojan, e.g. "New vulnerability found! Use this binary to test your system!" and having it format c: or alter my /bin/login
    3. Source code allows me to incorporate detection of this vulnerability into an automated scanner for later use. As I add machines, I run an automated scan against them (a rough sketch of such a check follows this list).
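
    Something like the following is what I have in mind; a rough sketch only, assuming a POSIX C environment, where the host list, the port, and the "vulnerable" banner string are all placeholders you would swap for the details in the advisory:

        /* Connect to a service, read the greeting banner it announces,
         * and flag versions reported as vulnerable. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <netdb.h>
        #include <sys/socket.h>

        static int banner_vulnerable(const char *host, const char *port,
                                     const char *bad_version)
        {
            struct addrinfo hints = {0}, *res;
            char banner[256] = {0};
            int fd = -1, vulnerable = 0;

            hints.ai_socktype = SOCK_STREAM;
            if (getaddrinfo(host, port, &hints, &res) != 0)
                return -1;  /* host didn't resolve */
            fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
            if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
                /* Many daemons (SMTP, SSH, FTP) announce a version on connect. */
                if (read(fd, banner, sizeof(banner) - 1) > 0 &&
                    strstr(banner, bad_version) != NULL)
                    vulnerable = 1;
            }
            if (fd >= 0)
                close(fd);
            freeaddrinfo(res);
            return vulnerable;
        }

        int main(void)
        {
            /* Placeholder host list; in practice, read it from a file. */
            const char *hosts[] = { "host1.example.com", "host2.example.com" };
            size_t i;

            for (i = 0; i < sizeof(hosts) / sizeof(hosts[0]); i++)
                if (banner_vulnerable(hosts[i], "25", "Sendmail 8.8.") == 1)
                    printf("%s: banner matches a vulnerable version\n", hosts[i]);
            return 0;
        }
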
    I simply cannot count on vendors getting around to fixing things. I will give a practical, if Microsoft, example, namely the RFPoison [microsoft.com] DoS attack on the IPC$ share for services.exe under WinNT 4.0. I was nailed by this one almost two years ago, quite casually (mind you, by someone who was not a mere script kiddie). When did Microsoft correct this? Service Pack 6. Of course, if you search on their site, they also claim on a different page that it is a not-fully-tested post-Service Pack 6a hotfix. Which is it? Apparently, not even they know.

    It was a security issue (DoS). It was obscure (MS sure as hell didn't tell me). I got nailed. Security through obscurity failed in this particular instance. It would be interesting to do a comparison of various exploits to see how they work out, rather than us all shouting opinions, ambiguous logic, and, in my case, lousy anecdotal evidence.

  • Then some 5kr1p7 k1dd13 stumbles on an exploit and suddenly the bulk of the world's computer users is vulnerable.

    I think you're rather overestimating the typical level of skill and competence possessed by the average script kiddie.

    Most anecdotal evidence (and certainly my server logs) point to the fact that most attacks consist of running whatever tools they have over whatever hosts they 'like the look of'. If nothing cracks, they move on.

    I don't honestly think that, if the tools were to dry up, these same kiddies would actually bother to learn about the theory and practice of security. I'd bet that most of them don't even understand how TCP/IP works, or know how to program beyond a trivial level in C.

    Cheers, Nick.

  • by Malc ( 1751 )
    How do you know how many people are actually looking at the source code? I've been using Linux (on and off) since 1994. I'm a professional software engineer. The only time that I've even bothered looking at any open source code was when the readme file instructed me to change something so that it would build for my machine. I don't have time or the urge to look through the code for most things. Most of my friends or co-workers who have the ability to find these problems seem even less motivated to look at source code to the software they use than me!

    Just looking at the code isn't good enough either: you have to understand it before you can start seeing security problems. Only a very small percentage of the users of most pieces of software are going to bother examining the code in depth. Imagine how many users a product would need before 1 million people (2 million eyes) were actually looking at the source and understanding it sufficiently to spot problems.
  • Nader didn't do it by running cars off the road and hurting people.

    I also doubt your socially conscious kiddies do many public education activities or letter writing campaigns before causing damage.
  • As a sysadmin, the thought of a security hole being found in software and NOT getting full disclosure gives me shivers.

    One of the first things I do every morning is check the security sites to see what bugs may have popped up. Then I check the versions against the versions we have installed. Then I take action: replace, patch, whatever it takes.

    Yes, script kiddies give me headaches, but I would rather put up with them than have my systems cracked without even knowing it, or without being able to track the problem down.

    We were hit a while back by the DNS DoS attack; somehow I missed that report. But I was very happy to find the fix for the problem when I finally traced down what the symptoms were. Without full disclosure, I would still be getting hit with it. Duh!

    Full disclosure is a two-edged sword; it can cut either way, but I would rather have it.

    ********

  • by Signal 11 ( 7608 ) on Thursday July 27, 2000 @06:47AM (#900624)
    The flip side is that without full disclosure, we're creating an army of script kiddies and crackers whom we cannot track.
  • I worked on the COAST (now CERIAS [purdue.edu]) Vulnerability Database as an academic for about a year. COAST was probably the best known academic security lab in the world and even we had trouble getting good information on vulnerabilities.

    Frankly, partial or non-disclosure keeps the information from the people who really need it. Academics need the information to keep up with and understand what a vulnerability really is. Things like CERT [cert.org] advisories are useless for this. They don't have the information needed to figure out what the vulnerability really is and how to classify it. Another group hurt by partial or non-disclosure is sysadmins. If a sysadmin scans bugtraq even weekly, he can often have a patch or workaround for a vulnerability in his systems long before the vendor releases anything. Open source really rules here where there are usually alternatives such as fixing the code or getting a different free package put up instead.

    Even if there exists some cabal of fully informed individuals, they are always going to leave out many of the folks that need the info. Face it, most vulnerability information is useless without enough info to exploit it.

  • Why the hell is this article labelled "Security Through Obscurity is a Good Thing (tm)"? Nothing I read in that article talked about security through obscurity.

    What I read is that he thinks it is a bad plan for people who find vulnerabilities in software to release no-brains tools to exploit them and to do it because it is profitable to them.

    He didn't say, "Don't tell everyone about the security problem"; he said tell the appropriate people first, don't do it for your own gain, and finally don't put up a website with a set of tools to exploit the vulnerability that script kiddies can use.

    The bigger question is why Slashdot didn't label this right. Is Slashdot being run by script kiddies? ;-)
  • by Whip ( 4737 ) on Thursday July 27, 2000 @08:08AM (#900628)
    This issue isn't quite as simple as the author of this article gives it credit for, I don't think. While I do agree that there's a problem here, I don't think the problem is quite what the author suggests.

    I am a subscriber to bugtraq (isn't everyone?), and typically, when a vulnerability is found, one of three things happens:

    1. Someone posts a working exploit, having not notified the vendor about the problem at all, or not having given them enough time to actually fix it.
    2. Someone posts a working exploit, having notified the vendor 6 months ago, and never having gotten a fix.
    3. Someone posts a working exploit well after a vendor has posted a fix to the problem.
    Unfortunately, #3 is the rarest of them all. Very seldom do I see "SUN/RedHat/whoever released a fix for this last month, here's the actual bug.." More often I see "I found this bug" or "I notified them yesterday and haven't gotten a response back yet." Half the exploit-producers seem to be in the game so that they can be, as someone else here mentioned, "first to market" with their clever security exploit.

    You'll notice a common element in my list: All of them contain the phrase "working exploit". Many, many of the "I found this bug" postings to bugtraq contain a fully functional script to demonstrate the problem -- A remote root exploit includes a script to (yes, that's right!) give you root on a box, remotely. All a cracker really needs to do is subscribe to bugtraq and wait, and the tools he needs to do his job show up in his lap. Sometimes, these are tools and exploits already found "in the wild," but just as often, they are not.

    This, in particular, I have a problem with. In the vast majority of cases, it is possible to explain and demonstrate a security bug without ever making an exploit that actually works. One author recently posted a "proof of concept" exploit that required, among other things, a good working knowledge of PPC assembly to actually turn into an exploit. He demonstrated the security problem quite well, without giving "script kiddies" a tool they could use to break systems.
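
    For illustration, a "proof of concept" in that spirit can be as simple as the hedged C sketch below (all names hypothetical): it demonstrates that an overflow exists by crashing the vulnerable routine with a harmless pattern, without shipping anything a script kiddie could point at someone else's box:

        #include <string.h>

        /* Hypothetical vulnerable routine: fixed buffer, unbounded copy. */
        void parse_request(const char *req)
        {
            char buf[64];
            strcpy(buf, req);  /* overflow: no length check on 'req' */
        }

        int main(void)
        {
            /* A long, non-executable pattern: enough to crash the process
             * and prove the overflow, but not a working exploit. */
            char pattern[256];
            memset(pattern, 'A', sizeof(pattern) - 1);
            pattern[sizeof(pattern) - 1] = '\0';
            parse_request(pattern);
            return 0;
        }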

    Now, granted, there are plenty of people who can take information about a vulnerability, and turn it into working code, and distribute it. These are the real hackers amongst the cracker crowd. But I don't think we need to be making the script kiddies' jobs easier by handing them working exploits on a silver platter.

    Then again, these same "real hackers" are perfectly capable of finding these bugs on their own, so hiding an exploit from them (working or non) doesn't really gain you all that much.

    I think that, overall, full disclosure is a very important thing -- that's "full disclosure" as in "give everyone the information they need to identify, demonstrate (if feasible), and fix security problems", not full disclosure as in "give away the farm by posting perfectly functional exploit code before you even tell the vendor". Disclosure of their dirty laundry to the world has goaded a number of vendors into fixing long-standing problems with their software. Without forums like Bugtraq, these problems would persist, with only the bad guys knowing anything about them.

    The other advantage that full disclosure gives is the ability to discuss and learn from the mistakes of others. For example, there is currently a discussion happening on Bugtraq regarding user-supplied (or otherwise variable) format strings for *printf-style functions and how they can be abused to give visibility into a (privileged or otherwise) process. Though a true solution may never be reached, I've seen more discussion on the topic in the past few days than in the entirety of the rest of my life, and that can't be bad. Discussions of this type pop up from time to time on bugtraq, and I'd dare say that anyone who cares to listen to them can find themselves writing more secure code very quickly.
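
    For anyone who hasn't followed that thread, the bug class is easy to show in a few lines of C; a minimal sketch (function names are made up):

        #include <stdio.h>

        /* User-controlled data used directly as a *printf format string. */
        void log_bad(const char *user)  { printf(user); }        /* vulnerable */
        void log_good(const char *user) { printf("%s", user); }  /* safe */

        int main(void)
        {
            const char *input = "%x %x %x %x\n";  /* attacker-supplied */
            log_good(input);  /* prints the specifiers literally */
            log_bad(input);   /* walks up the stack; %n can even write to it */
            return 0;
        }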

    Of course, there's also the downer: Most of the issues I see discussed on bugtraq nowadays are the same types of problems ... that I saw discussed on bugtraq 5 years ago ... Which are the same issues as those brought up by the Morris worm more than 10 years ago. Pity that we'll never learn. *sigh*

  • by BilldaCat ( 19181 ) on Thursday July 27, 2000 @06:48AM (#900631) Homepage
    ""A lot of the vulnerabilities that are being disclosed are researched for the sole purpose of disclosing them," he said. "Someone who releases a harmful program through a press release has a different agenda than to help you."

    And then you have companies like Microsoft who, when notified of an exploit on June 11th by, say, USSR Labs, don't get a fix out, and instead wait until it goes public and then say "we'll have a fix out this afternoon!"

    The only way to get some things fixed is to kick companies in the ass, and making holes public is a great way of doing it.
  • by laetus ( 45131 ) on Thursday July 27, 2000 @06:49AM (#900635)
    It's the very army of script kiddies and hackers out there that is FORCING major corporations to tighten up their code. Script kiddies and hackers are like the Ralph Nader of the auto industry (remember his book, "Unsafe at Any Speed"?). The analogy holds: Nader pointed out that the auto industry was producing unsafe cars; hackers are pointing out that software companies are producing software that leaves your corporate and home networks vulnerable to attack. Except that rather than publishing a book like Nader did, they're publishing the weaknesses and potential methods of attack.

    Nader had to wait years for Congress to pass laws forcing the auto industry to tighten up. I think hackers are a bit more effective. They're forcing companies to tighten up at "Internet speed".

    ---------------------------------
  • Well, if there is no manufacturer (Linux, for example) you really have to post your code to the kernel mailing list, which is publicly available. I agree that ready-to-go exploit code isn't very ethical to provide, though. There's a difference between a five-line code snippet that demonstrates the problem, and a nice GUI client that anyone could use to unleash an attack. Of course, some admins would use the nice GUI tool to test their own networks, but there's a limit to how far you can extend that justification.

  • So now, faced with Dick's signs that point out their insecure security, Dick's neighbors will stop putting their house keys in easily located places, and the net security level goes up.

    They might not be too happy with Dick, but their house is now secure.

    Better REAL security than false illusions of security.

  • Whether security geeks like it or not, the fact is that at any point in time---10 years ago, today, or 10 years in the future---90% of the systems out there will not be secured well enough to avoid "new" exploits. Sadly, many users see OS upgrades as their security patch mechanism of choice.

    I believe the solution here is to create a standard methodology for this kind of stuff, which would go something like this:
    1) Exploit is discovered and announced in very generic terms. No tools or detailed exploit instructions are released. This could be an "announcement" on bugtraq.
    2) 30-day clock starts ticking. Release the tool to the vendor but no one else.
    3) If at end of 30 days the vendor has not provided an effective patch, release the tool and detailed exploit info.
    4) If the vendor has provided a patch, don't release the tool. At all. Ever.
  • by tqbf ( 59350 ) on Thursday July 27, 2000 @07:33AM (#900645) Homepage
    MJR is biased because he is (to my knowledge) the first vendor of a shrink-wrap intrusion detection product to ship/publish a product with a disclosed remote root hole in it. NFR, his network analysis tool, is/was accompanied by a stripped-down web server (ironically, his team wrote this because they thought Apache, the *open source* web server, was insecure!) which had a *stack overflow* in its HTTP GET handler.

    No wonder he's not fond of "gray hat arms dealers".

    Of course, nothing he is saying is backed up by any real researchers. In cryptography, cryptanalysis is a foundation upon which theory is built. Analyzing and breaking algorithms is the respected, hard task. People like Bruce Schneier repeatedly publish papers disclosing flaws not only in cryptographic algorithms, but in protocols that use them!

    MJR's nonsensical position is even more amusing given the people he consorts with and praises. NFR went through much effort to publicly associate themselves with the L0pht --- probably the most well-known active source of full-disclosure security information. He also sticks up for people like Dan Farmer and Wietse Venema, both of whom have published information and tools about new security flaws.

    The message here is not that "full disclosure is evil". What Marcus longs for are the olden days of private security mailing lists, where only his friends got information about security flaws. Those were also the days in which literally every piece of software was riddled with stack overflows and the most common way of breaking into remote computers was by mounting public NFS shares.

    I understand why MJR doesn't like people outside of his insular little clique publishing and discussing security information. But it would be silly to pretend that anything he says is motivated by a desire to secure the Internet.

  • I think your analogy is helpful.

    The ideal response would be for the company to publicly announce a recall due to security problems with the lock (just as car manufacturers do with recalls). They would repair/replace free of charge.

    However, they wouldn't give out explicit details on how to exploit this problem. That would be silly, and would obviously encourage the less scrupulous types to take advantage of it.

    Thus, you have public exposure, a fix, and no unnecessary information given out to the baddies.

    The real issue is what to do regarding companies that ignore security problems, even when brought to their attention.
  • by ucblockhead ( 63650 ) on Thursday July 27, 2000 @08:13AM (#900650) Homepage Journal
    Yes, it is unfair that many adolescents (probably the majority) get tarnished with this image, but you have to understand where it comes from. While the majority of young people are not crackers, or vandals, the majority of vandals, digital and physical, are under twenty-two. It is the nature of humans and maturity. These young punks (and they are almost always young) screw it up for the rest of us. And a very big part of "the rest of us" is young kids like you, who are, like the rest of us, mature and responsible.

    If you really want to know where "script-kiddy" comes from, just look at this line from your own post: "But I'm sick of all this childish behavior...". That's exactly it. We call them "kiddies" because their behavior is childish. Immaturity below their years.

    You, being responsible, are not a kid. You are a young adult. And yeah, it sucks that you're treated like crap by know-nothing adults because of idiots in your age group. But unfortunately, what we call those idiots isn't going to change that. The only thing that will is to educate those know-nothings who are, in too many places, unfortunately in charge of the stuff they know nothing about.

    Now if only I knew how to do that...

  • I wholeheartedly disagree. ALL crime, whether it's cyber or meat, should be measured against how much actual harm was caused to actual human beings. Corporate, government and idealistic interests should be secondary to the benefit and well-being of actual thinking, feeling human beings.
  • by Baldrson ( 78598 ) on Thursday July 27, 2000 @09:34AM (#900655) Homepage Journal
    I know for a fact that grey hats have been treated foolishly by the corporate establishment types. All they would have to do to get the bugs discovered and fixed, and patches released before publication, is pay the grey hats what they are worth.

    In other words, be businessmen.

    It appears the corporate establishment types are so concerned about real money going into the hands of young guys with an attitude that they would rather subject the Internet community to unnecessary risks, and their stockholders to violations of their fiduciary trust than pay the grey hats what they are worth.

    For example, Dan Brumleve, the developer of DBarter [dbarter.com] (which won the Hackers Conference prize for "best work in progress" last year [dbarter.com]), was quite young when he discovered his first Netscape exploit, Tracker [netscape.com]. Netscape subsequently gave credit for finding the "Tracker" hole to a guy from Bell Labs. Their excuse for doing this was that they already knew about the Tracker exploit, having been told of it by Bell Labs -- an act that might have been rational if the Bell Labs exploit had been the one posted to Dan's web site. The problem was, Dan's exploit still functioned under Netscape's fix to the Bell Labs exploit.

    Dan has documented the behavior of corporate establishment types in this fiasco [shout.net].

    Inspired by such corporate establishment wisdom, Dan went on to discover and publish other exploits [shout.net].

    At no time was Dan offered more money by Netscape than he was making as an independent contractor hacking Perl scripts for e-commerce web sites, although Dan did ask for such compensation.

    Each time Dan published one of his exploits, Netscape stock went down 5%, and some of Dan's friends made money shorting Netscape on advance knowledge of these exploits, before Netscape was finally bought out by AOL.

    OK, Dan's exploits may not have caused the Netscape stock price drops (though try telling that to the guys who made money assuming they did). But even so, this attitude toward grey hats, that they should be controlled by legislating against them, is going to drive them underground. Society has "punkified" a lot of these young men already, so threatening them with prisoner gang rape isn't going to twist their heads around that much -- aside from being a morally reprehensible, not to mention unconstitutional [geocities.com], way of dealing with any problem.

  • The good old tried and tested security-through-killing-people-who-find-out that the mafia employs has always worked well. Both deterrent and remedy.
  • I don't think that causing fatal auto accidents is the real world equivalent of crashing some corporation's webserver. It's more like keying a car or something. Yes, script kiddies are annoying and they can cause quite a bit of damage, but let's try to keep a little perspective on it instead of equating them with murderers, ok?

  • The real problem with script kiddies is not that they menace large, well-funded sites like Yahoo. Yahoo and their ilk have lots of resources they can use to close all the security holes they find.

    No, the problem is sites like my amazing.com or Kuro5hin, which are run by an amateur or amateurs and simply don't have the resources to track down every security patch and close every hole.

    I was blasted off the net for over a month because of one security hole in the Cobalt Raq server I bought. I had the money to buy the server, but not the time to keep it safe.

    Kuro5hin had a staff, but when confronted with the type of attack Yahoo's sysadmins get paid big money to guard against, they weren't able to help. Part of the problem was that they were unpaid volunteers, and I'm sure they - like me - had a boss yelling at them for not doing the work they were hired to do while they feverishly tried to deal with the problem.

    This is why the script kiddie is such a big problem in my mind: they threaten what's left of the non-profit, more or less communal/genteel part of the net. I'm as much of a capitalist as anyone, but I still have a special spot in my mind for this kind of thing.

    I am all for a policy that avoids any disclosure of security holes. I don't even care if my machine is secure; everything on it is public anyway. I don't want to have to care if my machine is secure, and neither should anyone else who sets up a volunteer or individually-run site for the joy of sharing interesting stuff with the world.

    Unfortunately, revenge on the script kiddie, sweet as it might be, takes resources. Yahoo has 'em and can catch 'em if it wants to; I simply don't have the time and don't want to take the effort. So my machine is vulnerable, and I don't know what to do about it other than shutting the whole thing down.

    A very depressing choice, let me tell you.

    D
    ----
  • "Immaturity below their years."

    I wouldn't exactly say that. Teenagers spray paint and blow up mailboxes. By definition, they are immature. It just so happens that technology has empowered a lot of them to think that by performing mischief on the internet they are kewl haxxors. I'd say the above poster was mature *beyond* his age, as many smart and often techno-savvy kids are. But most his age are still tipping cows and giving wedgies.
  • by Kaa ( 21510 )
    "Someone who releases a harmful program through a press release has a different agenda than to help you."

    And why in hell should he be interested in helping you? And what do you care about his agenda?

    He is doing you a service: he is telling you that a program you have is vulnerable. It's up to you to decide what to do with this information (standard reaction: ignore), but the guy who published the exploit owes you nothing and gives you useful information.

    Kaa
  • <rant>

    Bah!!

    Kids today have it easy because we late-20-something GenXers suffered for the cause. And hacking is tres chic these days.

    <voice="old-man">

    Why, when I was a kid, from 6th to 11th grade (that would be circa 1983-1988), I was routinely beat up and harassed because I spent my recess periods--and countless hours after school--programming the school's Commodore PET. OK, I guess I deserved it because I was skinny and wore argyle sox all of the time, and maybe I should have done book reports on things besides Ada Lovelace, ENIAC and the transistor... but hey...

    </voice>

    Now it's ultrahip to be a hacker. Look at the exponential growth at DefCon (had to get a hotel in Barstow!!!)

    Posers vs. Punks vs. Script Kiddies

    Script kiddies are the equivalent of some poser who has 50 bucks and a few hours to kill, and thus becomes instantly hardcore by getting a tribal tattoo around his/her bicep... or maybe even a tongue piercing and bleached hair. Ooo. Bad-ass. If punks hadn't invented this culture in '75-'85, and pop culture hadn't then made punk culture a commodity, nerdy suburban kids would still be wearing Izods and ProKeds.

    I hate being introduced by my friends as a hacker. And it's my fault, I should just lie when friends call me to go out on weekends and I'm too busy optimizing assembly code. I think the best thing real hackers can do is to help devolve the image of hackers back to being booger-eating social-skill-lacking losers so that we can have the quiet solitude of our underground handed back to us.

    </rant>

    ---
  • Actually, any sysadmin or security engineer who knows what he's doing does read those lists/sites. I'm not sure what you mean by "normal folks"; sure, my grandfather who does all his genealogy research online doesn't, but then, I'm not sure he needs to. If you mean "normal" system and network administrators, then I'd reply, that's part of their job. If they're not doing it, then their employer should get someone in there who will do the job correctly.
  • Yeah, but why does it have to be released to the general public? Send any source code that verifies/debugs an exploit to the manufacturer, and just release a description to the public. If the manufacturer doesn't respond after a period, release a snippet to show the problem.

    My problem is with the pre-written, ready to make/execute "demo" code. And if people won't believe that an exploit exists, send a copy to CERT or something, don't post it to USENET...

    Yes, the information has to get out, but don't hand a gun to everyone to show them that your bulletproof vest has a hole in it...

  • Full disclosure helps, but in some cases is too extreme: does source code for a particular exploit really need to be published?

    Exactly. Enough information should be given so that a security expert can find and protect/fix the hole, but code to exploit it should not be handed out. Without someone handing them the code, most script kiddies would be dead in the water. If they're smart enough to figure out how to write an exploit, then they're probably smart enough not to use it for evil.

    Will this stop script kiddies? No, someone will make them a script at some point, but hopefully, it will slow them down and give us a little more time to patch the holes.

    All this IMHO, of course.

    On the other hand, how bad is it that the scripts are out there? As far as I can tell, a lot of sites don't start locking their doors until someone has come in through them. If all script kiddies dropped off the face of the earth today, security would probably go to hell. Would you feel more secure if the sites that kept your credit card info (as an example) didn't have to constantly worry about plugging every little hole that a script kiddie is going to use?

    The internet is a hostile place. We are just going to have to adjust to that. Script kiddies and black hats are not going to go away. Ever. All the whining and finger pointing in the world is not going to increase security.

    If you can't take the heat, stay off the net.

  • I like to have source code to test a new exploit on my box. I'd much rather verify that I am vulnerable, patch, and verify that I'm no longer vulnerable than just blindly patch my system and hope that RedHat fixed the problem for me.
  • I agree with a lot of your points, but your Ralph Nader analogy needs some clarification. Nader did what he did because of a genuine concern for safety. His entire purpose was to get these things fixed. Script kiddies do not do it for this reason... they do it because they can, because they can impress their friends with their cut-and-paste coding abilities (see this site [enteract.com] for what I mean).

    But the reason they CAN do it is shoddy design or implementation of software by designers and sysadmins. I would rather have everyone know about a buffer overflow problem in sendmail or a DNS exploit than only the black hats. Sometimes sysadmins and designers aren't aware of problems, and a "grey hat" who creates a cut-and-paste exploit program makes them aware rather quickly. This gives impetus to fix it.
    For instance, if I found out that every 3rd key in my town could open my back door, I would be concerned. I might have to fix that someday, in case anyone finds out. If I found out that someone published this information in the local paper and was giving away a machine to cut those keys, I would have my locks changed NOW.

    And I would test my lock better.

    And I would demand a new lock design so this could not happen again.

    And I would make sure, possibly by lawsuit, that the lock maker doesn't continue selling that particular lock.

    How long do you think MS, or some engineer who worked for them, knew something like Melissa or ILOVEYOU was possible but didn't bother to fix it until it happened? How long were the other holes around and used by black hats before they were uncovered and "published" by the grey hats?

    Makes you wonder...

  • Was Taco selling the Slash code? Was he claiming that it was for public release? No. He didn't release it. It was his code, and therefore his right. Do you get annoyed that Sun hasn't released source to [insert product name here]?

    Now, repeat after me, slowly, until you hear a loud 'ding' (the sound of CLUE) -- Source code is not a right. If I don't like it, I will write my own and GPL it.


    The Slashdot Sig Virus
    Hey, baby, infect me [3rdrockwarez.com]!

  • Security is expensive. The threat of massive "script kiddie" attacks and the threat of public embarrassment is the economic incentive that can convince vendors to actually address security problems.

    Without disclosure and actual attacks, security is not an economic incentive. It can't be demonstrated to customers, and it doesn't become a product differentiator. The occasional problem that might happen in a world without disclosure is not sufficient to affect the bottom line, in particular since software vendors are usually not liable.

    We tried hushing up security holes for decades and it didn't work. In fact, it gave us the Morris worm--the exploits it used had all been well known for years without any vendor bothering to fix them--and VMS pseudo-security. Arguably, the current sorry state of computer security is still the aftermath of that approach.

    I see nothing problematic about disclosure of security problems, whether by competitors or anybody else: it's the only policy that objectively makes sense from the point of view of the customers in order to create a market that supports secure products. "Script kiddies", far from being an annoying side effect, are the very mechanism that makes disclosure effective: without the economic threat from their activities, vendors would still have no incentive to fix their security problems.

    In the long run, I hope that these kinds of economic pressures will get rid of the snake oil and tinkering around the edges (and that includes Ranum's own intrusion detection systems) and will force companies and developers to adopt tools, methodologies, languages, and cryptographic approaches that address security problems correctly. (Yes, this also means Linux needs to change.)

  • It wasn't until someone actually wrote some code that the Great Beast was forced to roll-over and grumble. Corporate entities do not respond to warnings. Corporate entities only respond to crisis. There is no crisis until someone codes the bitch.

    A very amusing example of this is buried amidst the Jargon File pages:

    http://www.tuxedo.org/~esr/jargon/html/The-Meaning-of-Hack.html [tuxedo.org]

    Regrettably, "killall" would probably stop this hack in its tracks now, but it's still very amusing reading.
  • If the cost of forcing the corporate world to Get A Clue, and to employ some righteously clueful admins, is inflicting them with a plague of Script Kiddiez, then that's the price we're going to have to pay until we get it sorted. Nothing else seems to have woken them up.

    And I guess shooting you is OK so it forces you to Get A Clue and wear a bullet proof jacket?

    Crime is crime, and there's no excuse in the form of "it's only done to show that you haven't spent countless hours and gazillions of bucks securing yourself". It doesn't hold in court for theft, rape or murder. Why on earth should computer crime be an exception?

    -- Abigail

  • I would rather have everyone know about a buffer overflow problem in sendmail or a DNS exploit than only the black hats.

    Let's see. You know about a buffer overflow problem in sendmail. You have two options: public disclosure, or no public disclosure. According to you, there are two outcomes: everyone knows, or only the black hats know. Since with public disclosure everyone knows, no public disclosure must lead to only the black hats knowing. Ergo, you must be a black hat, and even more, anyone you tell has to be a black hat too. Including (gasp) the people maintaining sendmail and CERT. (Or perhaps you wouldn't tell them, and willingly restrict yourself to black hats.)

    Of course, if you aren't a black hat, your reasoning must be flawed. (And I think it is. There *is* something between everyone and only the black hats. But I leave that to you to figure out.)

    How long do you think MS, or some engineer who worked for them, knew something like Melissa or ILOVEYOU was possible but didn't bother to fix it until it happened?

    Ah, I guess you believe that sendmail is maintained by people using the same policies as MS. Just like everyone else...

    -- Abigail

  • I found my first new exploits in 1994, when I had the opportunity to research AIX 3.2.5 as part of a tiger team. We found a list of about 10 ways to get root on the system (actually more, but these were the ten worst) through straightforward systematic research of a stock configuration installed directly from the current installation tape. We called the vendor and waited. Nothing happened. For months.

    I had to write an article in a (German) computer magazine under a pseudonym, then take that article to the local vendor's office and say "Look, now it is even in the papers" in order to get a reaction from them. IBM didn't care a shit about security back then, unless they were forced to by publicity.

    This has thoroughly changed now, but only due to full disclosure.

    And even now you need disclosure and publicity to get people to get their act together. A large German online bookshop had their server wide open for nine months after I informed them that I was able to connect to their Oracle on their webserver using my own Oracle installation, and get all their credit card data. Only after they ended up in the same German computer magazine did they decide to firewall themselves shut.

    With open source the situation is better, but only slightly. I was able to break out of safe_mode in PHP 3.0.13 and below using a bug in their popen() implementation, and fixed it in CVS. I then posted the bug on bugtraq, forcing the PHP team to release 3.0.14 with the fix immediately. Nice reaction, but the core team didn't like me publicizing on bugtraq.

    When I found a similar bug to break out of safe_mode using the mail() function, I did not create a fix and did not post on bugtraq, but informed them privately of my findings. The fix went into CVS in under three weeks, but 3.0.15 was not released until three weeks after that.

    I find this disappointing: even in open source you get an appropriate reaction to security issues only by forcing updates through full disclosure. Well, I for my part have learned my lesson: if I find a security-related bug, it goes to bugtraq - no delay, no mercy. The waiting ain't worth it.


    © Copyright 2000 Kristian Köhntopp
  • All I read was an "expert" whining about people who have more skills than he/his company does. I'm sorry, but it is easy to lock down a box. On the same note... if your web server is the same as your accounting database server... you deserve to lose everything when it gets nailed. Your internet equipment has to be outside the protected zone... if it isn't, then your company is just plain stupid.

    I can see problems like kuro5hin happening, but no script kiddie is gonna take down ibm.com.

    We are currently at a crux of digital civilization... we have technology in abundance and very few who actually know how to administer it. And we have a large number of people masquerading as those who do (MCSEs, 60% of all the sysadmins out there, etc.).

    In 10 years things will be different... I see more chaos coming, but more effective filters or private "internets" will rise to meet the demands.

    If you want to gauge the chaotic levels of the internet in general... just spend 1 night as an op in any IRC channel.
  • Security by obscurity vs. security by disclosure...

    With security by obscurity, usually a cracker discovers the defect and releases an exploit. We discover the defect after the cracker's efforts have resulted in many victims.

    With security by disclosure, the defect is discovered and made public. Now it becomes a race: who will win, the patch or the cracker? In the case of Linux, the patch. The cracker won't bother; his efforts are for the long term, not the short term. A sysadmin will patch his system and release the patch in short order. The sysadmin is motivated not only to fix the defect but to make sure it stays fixed (so he doesn't need to fix it again on a future update). In the case of Microsoft it WAS the cracker, as Microsoft didn't take it seriously, but now they do, so it's an even race.

    Any time a company doesn't take the issue seriously, the cracker will win. The public also wins, as this provides a hot poker for anyone who would not take security seriously. If a company perpetually fails to patch defects, it becomes known for defects, and professionals are discouraged from using the defective products.

    It also helps in monopoly cases... to prove a lack of concern for the customer.

    Security by disclosure wins out...

    After all... the consumer can't know if a defect is being ignored if the defect isn't disclosed publicly. If a hacker doesn't expose it, sooner or later a cracker will exploit it... the exposure just gives the good guys a better chance, and the customer a heads-up.
  • A large portion of security experts go home and write tools at night for script kiddies. -- Ranum

    Yeah, right.

    The problem remains that systems are still being shipped with security holes you could drive an 18-wheeler through. This is unacceptable behavior on the part of manufacturers.

    What we need is product liability for this sort of thing. A few billion-dollar lawsuits will make Microsoft, Red Hat, and VA Linux get their act together.

  • Maybe I'm stereotyping here, but right now, I'd say most Linux users read slashdot/Kuro5hin/Freshmeat or something similar, so when someone discovers that you can destroy a Linux box just by connecting on port 7, everybody finds out right away, and can fix it quickly.

    If your assertion is right, one of the biggest strengths of the open-source operating systems will cease to exist as their market share increases. The fact is, a huge proportion of computer users don't, and never will, keep track of security issues.

    So, if/when Linux has an 80% market share, any bugs that are discovered are going to remain unpatched unless there is some sort of automated system (which, as you pointed out, is not necessarily very effective).

    The problem seems to be with having lots of non-tech savvy users, not necessarily with open/closed source development.


  • http://www.counterpane.com/crypto-gram-0002.html#PublicizingVulnerabilities
  • Just dont code the bitch. Your neighborhood harassed admin will thank you.

    Amen to that. Script kiddies are just that -- ethically impaired children who might know just enough to install RedHat and launch a script from the commandline after having it explained to them ten or twelve times on some IRC channel. Very, very few of them have the knowledge necessary to build their own tools.

    I'm all for full disclosure in much the same way that I am all for well-regulated gun ownership. Keeping the info flowing is one thing, but it's quite another to mass-distribute cracking software to script kiddies, just as there is a difference between licensing adults to own guns and just leaving a case of handguns in a high school locker room.
  • Ultrix was BSD.

    That may not have been true for the MIPS release, but the Ultrix at the time was BSD 4.x, I think 4.2...

  • by Jon Erikson ( 198204 ) on Thursday July 27, 2000 @06:51AM (#900725)

    It's about time somebody stood up to the legions of open source zealots and told them that their cherished view that "many eyes make bugs shallow" is little more than McCarthy-like jingoism rather than a solid foundation for security.

    I'm not saying that obscurity is good for security either, mind you, but the fact is that when you have the source code to a product at hand, it becomes a hell of a lot easier to find exploits with a debugger and a bit of patience than it would be with a raw binary. And thanks to the "efforts" of system administrators who would rather spend their time playing Quake than downloading the latest patches and bug-fixes, these exploits put thousands of sites that rely on open source software at risk.

    The many eyes mantra only applies when many eyes are actually looking at the code. In most cases there are about two people (the programmers) who actually look through the code to fix it, and everyone else looking is a hacker searching for their latest backdoor penetration.

    This is an area in which there is so much FUD, from both sides, that a reasoned debate is next to impossible. Until the zealots stop and think, security is going to be something that is argued about rather than realised.



    ---
    Jon E. Erikson
  • Full disclosure helps, but in some cases is too extreme: does source code for a particular exploit really need to be published? In reality, when an exploit surfaces, it should be publicised, but not in detail. This would give reputable companies time to fix it (presuming the finder gave details to the company and perhaps a handful of reputable security experts who might be able to create a workaround plus IDS fingerprints).

    The big question for me is: who are the reputable handful? When you limit it to some arbitrary number, decided by whoever finds the bug, you have different gradients of information in the field. That is, some know, and others don't. You leave it to be a judgement call, and everyone gets screwed over in the process.

    Then you said...

    DoS'ing, cracking, exploiting, rooting, sniffing should all be classified as illegal, and penalties must be established. Although the cost of tracking down perpetrators is high, the increasing number of these l337 scr1p7 k1dd13s is only going to cause more and more financial loss, especially as the Internet becomes more ingrained in society.

    Fine, all well and good as long as you can adequately measure 'malicious'. Rooting, sniffing and exploiting are not always malicious. Hell, security folks who find vulnerabilities would be out of work. The boys and girls who find the bugs in the first place would be incarcerated. (That would at least solve your security-holes-for-the-script-kiddies problem.) Malicious is all dependent on the act and whose view you're looking from. I may scan someone's box without malicious intent, but they may think it's terribly intrusive and serving only a sinister plan.

  • We already tried security through obscurity; it roughly translates as "trust the vendors to do the right thing". I'm sure no one is surprised to find that it doesn't work.

    The first full-disclosure lists were founded by sysadmins who were frustrated at the lack of response by vendors, as a "self-help" resource. Some of them even started out as "partial disclosure" lists, thinking that vendors would wise up and fix their bugs. Did it work? Nope.

    Heck, even in this day and age, vendors are still stupid. Every so often, a bug gets posted to BugTraq without an exploit, and the vendor gets all pissed, calls the submitter a liar, and threatens a slander lawsuit. The only way to shut them up? Publish the exploit.

    That's when people like Ranum get all pissed because the poor submitter defended his honor after being slammed in a public forum. Well, cry me a river.

    The current situation isn't ideal by any means; I think exploits shouldn't be posted except in cases of flagrant irresponsibility or hostility, just as one example. But let's not pretend that it's irresponsible, or even immoral. It's the lesser of two evils.

    If Marcus Ranum wants us to stop publicizing cracks, then he had damn well better be ready to deliver a guarantee that vendor response will be better in an age of secrecy. Can I sue him if a vendor sits on a report and doesn't fix it for six months, and a cracker uses it to trash my system?

    Until that happens, get used to full disclosure. It isn't going away as long as the USA has a First Amendment.
  • by BridgeBum ( 11413 ) on Thursday July 27, 2000 @06:52AM (#900735)
    Security through obscurity really only works if a vulnerability you have discovered remains hidden from the net in general, which means assuming that no one else will discover it, a highly unlikely assumption as more and more people probe for such weaknesses. Which scenario would *you* want: a vulnerability discovered by some cracker which he shares with his friends to break into sites, or a notice up on SecurityFocus explaining the vulnerability, setting in motion the code writers' ability to close the hole? Personally, I'd rather have more eyes looking at the problem, and trying to fix it.
  • Except that Ralph didn't cause intentional damage and/or inconvenience to users of the cars in publishing his book. Ralph may have been out to make a buck by publishing his book under the pretense of public safety. Script kiddies are out to make a name for themselves among their peers and to get a rush of fleeting self-esteem by 'owning' some poor slob sysadmin's box.

    I kind of agree that script kiddies are providing something of a benefit to consumers of software and information services by giving a compelling incentive to software makers and online businesses to pay much more attention to security.

    Difference in the analogy is that Nader was intentionally focused on exposing the safety problems of the auto industry. Script kiddies probably mostly don't give a rat's hindquarters about the side effect that occurs from their activity.
  • It was on the basis that I was disrupting and potentially damaging their computer system.

    Wait till they find out I upgraded the RAM in a few other ones :))

    A lot of these people understand very little. A friend of mine ran into problems at university because she copied a program used for her course from the university network. They found out, and despite large letters on the about box saying "Freeware - Please Distribute Freely", they told her that "you can't just copy a computer program" and asked her to return the CD.
  • Security through obscurity has been proven through practice not to work.

    We aren't disclosing holes just to the kiddies, we're disclosing them to everybody willing to listen. That means people can know when there's a problem, to which they have every right. Otherwise, if one person finds the hole, that information will get passed on and on until you get cracked and have no idea why. Security through obscurity is only appealing to lazy sysadmins who don't want to bother with actively keeping their systems secure (by visiting bugtraq, etc.) and instead want it done passively (Microsoft releases a new SP). This is no way to maintain a server and thus no way to look at security.
  • hypocrisy - The practice of professing beliefs, feelings, or virtues that one does not hold or possess.

    No.

    --
  • Think like the old cowboy movies. The good guy wore the white hat, the bad guy was the wanker in the black hat. And grey hats are people somewhere in between, they do some good things, but they also do some bad things.
  • There is a conflict of interests at work here:

    • Disclosure causes security holes to get fixed.
    • Disclosure causes security holes to get taken advantage of.

    Full disclosure is suboptimal because people have better things to do with their time than upgrade and patch software. No disclosure is suboptimal because there is no pressure on software vendors to fix holes.

    Full disclosure works well with stable software. Eventually bugs get fixed and the continuous public review will have created a secure product which can be used for years and years.

    It is with rapidly changing software that there is a problem. And in these days where "internet time" is a valid excuse for anything, rapidly changing software is what we have lots of. For this, we need a good idea of how to strike a balance between the two extremes, full disclosure and full secrecy.

    One idea is to have a "waiting period". When someone discovers a security problem, they inform the vendor but wait some amount of time, say one month, before informing the public. That way not only will a fix be ready when the problem is publicized, but frequent upgraders may already have the patched software running. The software vendor, knowing that the problem will be publicized at the end of the waiting period, still has an incentive to get it fixed.

    Of course this doesn't help the masses with years-old software. Someone please come up with an even better idea!

    /A

  • man, all you slashdot people are completely nuts.

    open source this, free software that, I want to see the source or I refuse to use your software.

    but then the minute someone starts distributing some dangerous source, holy shit, this is a terrible disservice to the community.

    did you ever think that if you read bugtraq and you see a brand spanking new bug that someone discovered and exploited last night - if this bugtraq post contains complete working source for an exploit and complete instructions on how to use it - then you can turn on your machine, look at the bug, look at the source, and make your own fucking patch? if you're running a machine on the internet and you have the capability to defend it from attacks by fixing source, and someone is GIVING you, for FREE, all the knowledge necessary to fix this bug and assure yourself you're no longer susceptible, it's your responsibility to fix it.

    don't bitch that vendors aren't fast to respond, don't bitch that these exploits are dangerous and should not be distributed. the fact is, they ARE distributed, and instead of complaining, you should be really, really obscenely happy that whoever writes these things is nice enough to tell you about them instead of just rooting you over and over.
  • by linuxonceleron ( 87032 ) on Thursday July 27, 2000 @06:54AM (#900765) Homepage
    The term "script kiddies" creates a negative image of young people using Unix/Linux as being only vandals. I'm 15, and I was almost suspended for sshing into my own computer from the school library, as they assumed I was breaking security on some system. While some people may classify young people as immature and reckless, I've been using my knowledge of computers for good since I was young. These people should be called what they are: digital graffiti artists with nothing better to do. Security disclosure is necessary for sysadmins to be able to secure their machines; by eliminating it, only the people on the other side will have the knowledge, as it will eventually leak anyway. But I'm sick of all this childish behavior. I'm getting port scanned a few times a week by random hosts, and at least I have inetpaged to let me know about it. I'm just tired of being lumped in with all the 15-year-old AOLers with no morals.

  • What are you comparing this to?
    Obviously, if *you* know you have a problem, you don't advertise it.

    But what if the hacker knows you have a problem, and you don't know about it, and can't find out? *NOW* you have a real problem. At least when vulnerabilities are published, you can *expect* that people will try to haxx0r you.

    a) Your lock is defective, and
    b) only the criminal organization knows that the lock is defective, because someone from the lock company sold them the information.

    You are far safer when the defect is public knowledge.
  • Documenting the exploit isn't the same as posting ready-to-build-and-run source code. Describe the exploit on a packet-by-packet level, and forward it to Cisco, CERT, and other appropriate security fora.

    Once you've given good, solid information that anyone with good knowledge of TCP/IP can understand and verify with a moderate amount of effort, you've done your part. It isn't your job to "prove" anything; if you get told "put-up-or-shut-up" then just move on.

    -Ed
  • the fact is, however, that if you're on the internet, you have a responsibility to protect yourself. if your locks suck, that's your problem, not mine, not the hackers writing warez, not the retards running these warez against every machine on the net. don't like it? get off the internet.
  • by Calmacil ( 31127 ) on Thursday July 27, 2000 @06:55AM (#900778)
    While script kiddies are bad, and lots of them are very bad, they are relatively easy to discover, and are (usually) not bright or skillful enough to cover their tracks. Publishing holes probably does create more of them, but it also forces you to patch those holes.
    If the holes weren't published, you wouldn't be alerted to them, and then the only people who knew about them would be people who /are/ bright and skillful enough to hide themselves.
    Would you rather have a giant (but pitifully unskilled) army in front of you, or one very skilled assassin behind you?
  • by konstant ( 63560 ) on Thursday July 27, 2000 @06:57AM (#900784)
    Sometimes I feel that certain people in security view the products and the admins using those products as the enemy, and not the crackers at all!

    Who was cracking Microsoft's LAN Manager password scheme - included in Win9x - before l0phtcrack was released? How many DDoS attacks had you heard of before the release of trinoo, etc.? What about fragmented IP packets before teardrop?

    The real problem with full disclosure is not that holes aren't patched - publicly announced bugs usually do get fixed sooner rather than later. The problem is that users don't always deploy the patches. Meanwhile, well-meaning (or otherwise) "grey hats" who have coded exploits for holes they discovered - usually to enhance their media shebang and sell more of their own security "solutions" - have handed a tool to skript kidz who simply hunt the net until they find a box whose harassed admin hasn't installed the latest patch. Alone, many of these "crackers" couldn't crack a paper bag. With the utilities in their arsenal, it's trivial.

    See this related article written by the l0pht:
    http://www.l0pht.com/~oblivion/soapbox/index.html [l0pht.com]

    I'm all for disclosure of security holes - it keeps vendors honest, and it allows for creative security community solutions. But it may not be in the best interests of the world (and info security does have a global impact these days) to code actual *demos* in order to pressure vendors into implementing fixes. Just explain the hole, explain the danger, heck, even explain a step-by-step exploit. Just don't code the bitch. Your neighborhood harassed admin will thank you.

    -konstant
    Yes! We are all individuals! I'm not!
  • This isn't quite the same... personally, I would assume that anyone who stuck that sign on their front lawn either a) owned a large mean dog, b) owned a large firearm, or c) didn't have anything worth stealing in the first place.

    Instead, I'd go a few blocks over, find the house with a good, expensive lock, and break through the window.

  • An individual discovers that, if he jacks the steering hard and to the left, the power steering fails, and endangers the vehicle, and everyone around him.

    Does the car industry bewail him finding that problem with the car? Well.. correction.. do they bewail him /openly/, telling him to grow up, get a real job, and stop making trouble?

    Now.. Let's take that one step further.. An /extremely/ expensive car is claimed by the manufacturer to be unstealable, because it has fashioned impenetrable door locks. Our enterprising car aficionado notices that, if he wiggles a dummy key just right, the 'impenetrable' lock opens, after which he can do whatever he wants with the car.

    Does the automotive industry scream? Yes, for a little bit.. But they issue a retrofit pretty damn quick. Would they scream if he hadn't told everyone about it? Would they hurry with the refit? Would people trust them, in the future, by default?

    In the I/T world, the best approach, with so many faulty packages, is a belt-and-suspenders approach. Layer several 'impenetrable' and 'infallible' packages in such a way that possible weaknesses should be isolated and shielded, then apply careful monitoring. And the /moment/ you discover a new vulnerability, scream your head off about it, and try to protect the soft spot until you can get a fix.

    For all these companies, complaining about how a grey hat's article on such-and-such bug ruins the safety of their entire site, I have ZERO pity, because they have obviously made the mistake of placing all their eggs in one basket.
    Please explain to me how running any local/remote exploit-of-the-week and getting yourself a root prompt on the exploited system helps you discover flaws in your code or otherwise allows you to fix the problem.

    A clear, concise description of the flaw and what should be done to work around it or fix it should be given, but in many cases we do not need root exploits released like this.

    If it's ever necessary to distribute a tool to determine if you are vulnerable to some bug (in case for whatever reason it's not immediately obvious), the only thing that should be written is a tool that says "yes" or "no". Sure, somebody will be able to look at the code and figure out the nature of the bug, but the point is that the "exploit" itself cannot be instantly used by thousands of script kiddies. If it's necessary to distribute detailed information/code about a vulnerability, at least do so without providing an out-of-the-box exploit to any kid on the 'Net.
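
    For instance, a minimal sketch of what such a yes/no checker might look like. Everything specific here is a hypothetical placeholder - the port, the banner, and the "vulnerable" version string - since the point is the shape of the tool, not any real bug:

        /* vulncheck.c - prints only "VULNERABLE" or "not vulnerable" and
         * contains no exploit logic. The port number and the version
         * banner below are made-up placeholders. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        int main(int argc, char **argv)
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <ip>\n", argv[0]);
                return 2;
            }

            int s = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in sa;
            memset(&sa, 0, sizeof sa);
            sa.sin_family = AF_INET;
            sa.sin_port = htons(12345);              /* placeholder port */
            sa.sin_addr.s_addr = inet_addr(argv[1]);

            if (s < 0 || connect(s, (struct sockaddr *)&sa, sizeof sa) < 0) {
                perror("connect");
                return 2;
            }

            char banner[256] = {0};
            read(s, banner, sizeof banner - 1);      /* assume the daemon greets first */
            close(s);

            /* "ExampleD/1.0" stands in for the known-vulnerable version */
            puts(strstr(banner, "ExampleD/1.0") ? "VULNERABLE" : "not vulnerable");
            return 0;
        }

    The kid who downloads that gets a diagnostic; the admin gets an answer; anyone who wants a working exploit still has to understand the bug.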
  • by Evangelion ( 2145 ) on Thursday July 27, 2000 @06:58AM (#900796) Homepage

    Wow, that was a bad analogy.

    The point is more like this -

    "How is my house more likely to get broken into?

    I have a door with Brand X lock.

    1. It's discovered that Brand X locks suck ass. This makes the front page of the paper for some reason. You now have the information to get better locks (if you choose to).

    2. It's discovered that Brand X locks suck ass. No one says a word about it, but those doing B&E's soon discover this, and go around casing all the houses with Brand X locks."

    (disregarding for the moment that what kind of lock you have doesn't matter much with respect to whether your house is going to get broken into or not...)

    That's more analogous to the situation here. The 'obscurity' doesn't refer to specific information (passwords, etc - in the lock's case, the specific makeup of your key), but to the information pertaining to the general workings of the security system (i.e. in the lock, how the tumblers work - how easy it would be to pick, etc).

    Blah.

    Here's my two cents: security through obscurity is horrible for the 5-10% of us Linux users who update our machines obsessively to get the latest fixes and patches. For the other 90-95% (and virtually all Windows users) it's a failure: the kiddies who want to know about a hole go out and find out about it, while that vast number of users sit there deaf and dumb and get hacked. If you want to argue for security through obscurity, you have to justify screwing those who are knowledgeable enough to care (like me). But if you want to argue for security through openness, you have to justify screwing the 90% who wouldn't know what a security update was if you hit them in the face with one. Of course, with things like apt and Windows Update, this balance may be changing (numerically speaking), but who can really say for sure?
    ~luge
  • It's called free speech and it's called diversity.

    Different people have different opinions about disclosing security holes. Clearly, if you know about a hole, disclosing it will warn people of a problem that bad guys might already know about, but it will also tell the bad guys about something they might not know about. There is no right answer to what the best policy is. In one circumstance it might help you; in another, it might harm.

    But, if you believe in free speech, and freedom to explore, and in preserving diversity of opinion, live with it. I will note that the folks who complain most bitterly about disclosure are the companies who sell the buggy software, not their customers who are at risk.

  • Yeah, but it's nice to have something to test against.

    Suppose a security flaw is found, but no code is made publicly available. So the vendor has the developers create a patch. The developers write a simple test case, make sure the patch works against it, and ship it. Suppose now that the test case they used was inadequate. Maybe it didn't fully exercise the exploit, or maybe a bug in it caused it to use the exploit in only one fashion.

    After a few months, all of a sudden, some sysadmin finds that his "patched" system has just been cracked by a black hat who wrote a program that properly implemented the exploit.

    Coding demonstrations of how the exploit works can be very beneficial anyway, because presumably these have been tested and debugged to fully demonstrate the flaw. They allow the developers to be completely certain that the patch actually and fully closes the hole. Besides, if most of the script kiddies are going to use that version anyway, it cuts down on the chances of a black hat writing a new one that gets around the fix. (After all, why bother reinventing the wheel?) It also means that there are fewer potential methods of using an exploit, so even if the patch is inadequate, it'll work against the most common script.

    It's also nice to be able to look at code and see exactly how the exploit works. Fixing a problem when you can test it against a known working exploit is much easier than needing to first write a test program to do the exploit and then fixing the machine against that test.
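
    That workflow is easy to sketch: the known working exploit becomes the vendor's regression test. Purely illustrative - try_exploit() below is a hypothetical stand-in for whatever the published demo actually does:

        /* patch_regress.c - illustrative only: wrap a published exploit
         * demo as a pass/fail regression test for the vendor's patch. */
        #include <stdio.h>

        /* Hypothetical stand-in: replace the body with the published demo.
         * Returns 1 if the exploit succeeds against the target, 0 if not. */
        static int try_exploit(const char *target)
        {
            (void)target;
            return 0;
        }

        int main(void)
        {
            const char *target = "127.0.0.1";   /* the freshly patched test box */

            if (try_exploit(target)) {
                fprintf(stderr, "FAIL: patched system is still exploitable\n");
                return 1;
            }
            printf("PASS: known exploit no longer works against %s\n", target);
            return 0;
        }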

    The real problem with having a test case is when a company takes two months to release the patch while countless sys-admins are finding their servers are getting broken into. Then again, maybe if there was no demo, the sys-admins would get some time off before some black-hat coded a version anyway.

    Trust me, there are people out there who would program an exploit they discovered off a security advisory just to see if it works. They want to show off those new m4d h4x0r 5k177z they learned in their CS class, and creating and releasing a program to exploit a known (but demoless) security flaw is one way to do that. All no public demo would do is buy some time. And I don't think it would even be much time.

  • If a sysadmin scans bugtraq even weekly, he can often have a patch or workaround for a vulnerability in his systems long before the vendor releases anything.

    A good advisory should include workaround information. If the person reporting the vulnerability can't do this, then perhaps he needs to pass his information on to a qualified security firm who can.

    Anyone capable of writing a script-kiddie-compatible exploit should be quite capable of providing detailed information and a workaround/fix without necessarily releasing an out-of-the-box root exploit to every kid on the Internet.

    As you are probably aware, poorly written "advisories" on BugTraq are typically followed up in short order with something with significantly more information (in quality and/or quantity).

    If nothing else, if the author of the advisory feels a code example is required to accurately describe the nature of the bug, at least make the reader work to get a working exploit out of it. You can publish example code without publishing a functioning exploit.
    I, for one, think full disclosure is the only way holes will be fixed and systems secured. Where does he think he will get new attack sigs from?
    Report your new vulnerabilities to NFR only, so they can keep them a secret and roll them into the next signature file version? Hmm, strike NFR off my IDS solution list.
    Like those guys at NFR aren't all subscribed to bugtraq.
  • To add to my post:

    Of course, if an exploit already exists "in the wild", you're not hurting anybody much more by posting it to an appropriate forum like BugTraq. At this point worrying about full disclosure is rather moot.

    And regardless, every possible bit of code, example and exploit should always be sent to the vendor first (even if it's just a few hours, if it's urgent that the information be publicized as quickly as possible). It's damn inconsiderate to post something publicly without giving the vendor any time at all to prepare a response, fix or workaround (and this goes back to my point about sending information to a qualified security firm before blindly posting incomplete/inaccurate information.. The vendor could easily be considered "qualified" in this respect, IMO).
    The guy quoted in this story seems to advocate against full disclosure. Malda seems to think this implies the absence of any disclosure.

    Can't there be a middle ground? Can't we disclose enough information to accurately describe the problem, workarounds and/or fixes (preferably from the vendor itself, in the case of vulnerabilities not yet "in the wild") without publishing script-kiddie-compatible exploits that run right out of the box?
  • by jd ( 1658 )
    If the security is good enough, then it shouldn't matter how much potential attacks know.

    If your system is B1-compliant, who CARES if a script-kiddie has a PERL script capable of stress-testing a Linux box?

    If you're worried that the locks can't tell the difference between a key and a can of coke, then sure, obscurity is effective. But if you KNOW that your systems have bullet-proof authentication and that holes simply don't exist, then WHO CARES WHO KNOWS WHAT????

    The fact is, a lot of "armchair experts" -don't- bother giving their code even a cursory audit, never mind anything stringent. If it compiles, run it and worry about the holes when Swiss Cheese companies start suing.

    Even if there ARE holes, though, the environment SHOULD ensure that everything is in water-tight, sealed compartments. You mean, it doesn't? Then stop whinging about skript-kiddies and start coding! Side-effects, over-runs, undocumented pathways, etc, should NEVER be capable of accessing data or code that is not explicitly associated with that application.

    Last, but not least, does this "expert" think that OB1 is any less secure for being available? Or is it MORE secure, for being checkable and usable by people other than SGI?

    I agree with the article insofar as it says that the tools to exploit holes should not be distributed. The tools may demonstrate the vulnerability effectively and prompt a quicker fix for the hole, but:
    • They are too tempting for script kiddies who want to show off for their friends,
    • It is too hard to get the word and patches out to all users quickly enough even if a fix is produced quickly.
    This puts everyone using the software at a disadvantage and causes a lot of wasted time and energy defending yourself against script kiddies' latest toys.

    Security holes should be published though because it is the only way to prompt vendors and software authors to fix the holes. It also alerts users to potential security risks so that they can choose another product, defend themselves some other way, or look for the patch.

    So the tools to exploit holes should probably only be distributed to a select few who are capable of fixing the problem and the problem should be published to prompt them to do something about it and to inform the public. Unfortunately, many people producing these tools are often doing it for their own egos.

  • by LizardKing ( 5245 ) on Thursday July 27, 2000 @07:00AM (#900820)
    Script kiddies and hackers are like the Ralph Nader of the auto industry

    Hmmm ... Nader was clearly concerned at the lack of safety in contemporary vehicles. This motivated him to write Unsafe At Any Speed to highlight that concern. Script kiddies aren't bothered about the damage they cause, in fact they generally do what they do just for kicks. Don't mistakenly attribute any goodwill to the little fsckers.

    Chris
  • by theonetruekeebler ( 60888 ) on Thursday July 27, 2000 @07:00AM (#900823) Homepage Journal
    Once a problem has been discovered, how do you keep it obscure?

    All you can do is go back to the Bad Old Days of closed-source cathedral systems and hope to ghod the vendors get around to fixing their systems some day, because the social structures that surround crackers and kiddiez give you higher status if you are among the first to propagate a new crack. When one of them knows, they all know. It's the same with any other group now--crackerz, Tori Amos fans, whoever. If you have the info, you share it ASAFP and bask in the glory of being the first to break the story.

    --

  • by null_session ( 137073 ) <ben&houseofwebb,com> on Thursday July 27, 2000 @12:09PM (#900826) Homepage
    Marcus has the exact wrong idea.

    Now that I've said something that everyone will agree with, let me explain why everyone else's comments are also wrong (or at least all of the ones not moderated down under 3).

    I'm saying this as a data security consultant, and yes, it's my real job. I need, as soon as possible, to see the exact technical details of every new exploit. If someone has written an attack script, I need that too. Why? Any IDS that's worth the HD space it takes up allows you to write custom rules. If I know exactly what a given attack is going to look like, I can write very efficient rules to report/stop it. If I don't, I may have to guess what this attack looks like, or leave myself unprotected. Full disclosure reporting is the ONLY thing that provides this type of response for me, the guy who's really doing the work.
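
    For example: with the exploit in hand, a custom rule reduces to a cheap byte-pattern match. A toy sketch with a made-up signature (no real exploit's bytes are implied):

        /* sig_match.c - toy illustration of a custom IDS rule: knowing
         * the exact exploit makes detection a simple byte-pattern match.
         * The signature below is an invented placeholder. */
        #include <stddef.h>
        #include <string.h>

        static const unsigned char SIG[] = { 0x90, 0x90, 0x90, 0xEB, 0x1F };

        /* Return 1 if the packet payload contains the known exploit bytes. */
        int matches_exploit(const unsigned char *payload, size_t len)
        {
            for (size_t i = 0; i + sizeof SIG <= len; i++)
                if (memcmp(payload + i, SIG, sizeof SIG) == 0)
                    return 1;
            return 0;
        }

    Without the exploit I'd be guessing at the pattern, and a rule that guesses either misses the attack or false-alarms on legitimate traffic.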

  • Right.

    Lack of full disclosure means the security holes live much longer and propagate until they're very widespread. Then some 5kr1p7 k1dd13 stumbles on an exploit and suddenly the bulk of the world's computer users are vulnerable.

    Full disclosure may reveal vulnerabilities earlier. But that's the time to plug them, and it gets done much more quickly.

  • I don't believe the source code to MS Outlook has been widely distributed, yet so many security flaws have been discovered that the geek community has created a new acronym: YAOE (Yet Another Outlook Exploit).
  • by generic-man ( 33649 ) on Thursday July 27, 2000 @07:02AM (#900838) Homepage Journal
    Say what you will about Ralph Nader, but I think he's just a little bit above the equivalent functional level of a script kiddy. Ralph Nader exposed the failings of the auto industry, like these "grey hats" -- that is, he actually did the research and fact-finding himself.

    The equivalent of a "script kiddy" as applied to the auto industry of days past would be a driver who deliberately caused fatal auto accidents to expl0it the safety problems. Script kiddies don't actually find security problems; they just use crax0rz provided by grey hat sources (or by more knowledgeable black hats) to exploit the weaknesses. No thinking required.
    What makes him think that with 'white hats' out of the picture there are going to be fewer script kiddies? The system-cracking activity will just shift underground, the proper authorities will have a tougher time figuring out how they did it, and customers will be lulled by marketers into a false sense of security. I guess it is easier to blame something you can see rather than something you can't.
  • You're right. Nobody actually reads the source code. Ever. Somebody writes it, then it's never read again, by anybody else.

    Certainly aspiring young software engineers would never read the code to find out just exactly how an OS works. Certainly I never did that. And I'm also quite positive that I never read the source to a driver because I needed to find out why our custom version of the same network card wasn't working. Nor did I read the source to various system utilities when they didn't behave as expected.

    In conclusion, open source is doomed to a buggy abyss, only a commercial closed source release system, with a QA department can provide us with good, quality software, like Windows ME.


    ----------------------------
  • by laborit ( 90558 ) on Thursday July 27, 2000 @07:03AM (#900847) Homepage
    The article does not refute the argument that those who empower script kiddies are helping potential victims... it just shows they aren't being very nice about it. We might call the disclosure of vulnerabilities via ready-made kiddie scripts "security through threat". The idea, as I understand it, is that these holes will eventually be found out and exploited. No matter how quiet we try to be, eventually someone malicious will find them, exploit them, and if possible script that exploit. The longer this is put off, the more entrenched and widespread the hole will be, and the greater the potential damage.
    Okay, but what about the idea that they could be kept quiet for just a little while, while the good guys get them fixed? I think the STT people have decided that things don't work that way. Remember how effective it was when all the programmers quietly went to management and told them that there might be some problems coming up if they didn't start converting to four-digit dates? It took publicity and widespread fear before most businesses started putting serious resources into Y2K conversion, and it's not unreasonable that the same is true of security holes. Tell them that there's a potential problem, and get the runaround while the money goes to more immediately profitable things. But if the populace is whipped up over the prospect of another Melissa, there will be action.

    I don't think that these grey-hat types are unaware that they're responsible for a lot of kiddie attacks. But perhaps if the kiddies are a force of nature, unstoppable by law or society, software companies will have no choice but to write good products, with competent security audits and up-to-date patches. That's a goal I can see someone willingly enduring a bunch of 1337 bullshit for.

    - Michael Cohn
  • by darkith ( 183433 ) on Thursday July 27, 2000 @07:04AM (#900849)
    Full disclosure helps, but in some cases goes too far. Does source code for a particular exploit really need to be published? In reality, when an exploit surfaces, it should be publicised, but not in detail. This would give reputable companies time to fix it (presuming the finder gave details to the company and perhaps a handful of reputable security experts who might be able to create a workaround plus IDS fingerprints).

    Egress filtering. Yep, it was argued earlier in the iTrace story... but it is a good idea. Perhaps a mandatory requirement that no ISP passes traffic with source addresses outside its own IP allocation. (There is *no* good reason for routing somebody else's IPs, right?) Yeah, there might be an issue with the speed of filtering, but it really is the only way to prevent havoc. (Oh, and iTrace is a step in the right direction too... at least a temporary one.)
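
    The check itself is trivial. A toy sketch, assuming a made-up /24 allocation (192.0.2.0/24 is a documentation prefix; real filters belong in the router config, not in C):

        /* egress_ok.c - RFC 2827-style filtering in miniature: only
         * forward packets whose source address is inside our own
         * allocation. The prefix is a placeholder. */
        #include <stdint.h>

        #define OUR_NET  0xC0000200u   /* 192.0.2.0 */
        #define OUR_MASK 0xFFFFFF00u   /* /24 */

        /* src: the packet's source IPv4 address, host byte order.
         * Returns 1 to forward the packet, 0 to drop it as spoofed. */
        int egress_ok(uint32_t src)
        {
            return (src & OUR_MASK) == OUR_NET;
        }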

    Malicious activity should be viewed as just that. DoS'ing, cracking, exploiting, rooting, and sniffing should all be classified as illegal, and penalties must be established. Although the cost of tracking down perpetrators is high, the increasing number of these l337 scr1p7 k1dd13s is only going to cause more and more financial loss, especially as the Internet becomes more ingrained in society. Cracking a system (even if there is no financial loss) should still be viewed as the intrusive crime that it is, and should be prosecuted. (Of course, that's very difficult across borders, but something *must* be done...)

    Relying on obscurity to provide any level of security is a bad idea. There are talented people who can find flaws in any closed system, given enough time and effort. But this is no excuse to start handing out information that doesn't need to become public. A source code example isn't required to demonstrate a flaw to the public, so it doesn't need to be distributed.

  • by WNight ( 23683 ) on Thursday July 27, 2000 @02:01PM (#900852) Homepage
    90% of the (exploitable) bugs in Windows are in the networking code and the scripting code. It doesn't matter if there's a bug in an installer because a malicious attacker can't run the installer without having already gained control of the machine.

    Sure, Windows will *never* be bug free, and it's silly to expect that it will. Especially when you consider all the things that people think of as parts of windows, from the essential like ScanDisk and the Networking to the mundane like Solitaire...

    But, if the OS is well designed, a program like Solitaire could never take out the whole OS when it crashed, so it could be dealt with separately. Only the core system would need to be rock stable; the rest could be restarted easily. (BeOS's networking dies on me every now and then - I'm breaking the rules by using two identical network cards - which is annoying, but I can restart it with the click of a button, unlike in MS where I have to reboot.)

    Once the system is stable and can't be crashed by a badly written solitaire game, you go on to bug-fix the important parts, the external programs, those that deal with the outside world.

    Your HTML renderer, your network stack, your scripting, those need to be locked down.

    A smart designer can tell what parts of the system need to be secure and which don't. If the attacker could only get to one bug by already having exploited a larger bug (crash solitaire by using a buffer overflow in networking to execute arbitrary local commands) then the one bug is fairly minor.

    Microsoft could secure Windows, at least as much so as BeOS or any other non-multiuser OS, with a little work but they refuse, because it's easier to only fix what has to be fixed.

    I agree with the person who said that bugs in products from companies like Microsoft, who don't fix bugs until they make the news, should be made public without warning them... That way they take the biggest credibility hit.
  • In my mind, the term "Script 'Kiddies'" is more a reference to maturity than age.

    Until recently, most Script Kiddies were in college - simply because that was when people received net access. Recently, the 'Kiddies' tend to be younger - but we also have 40 year old Script Kiddies. They're still 'Kiddies', however - because they're acting like immature 8 year olds.

    At least when I use the term, please don't take the term "Script Kiddie" as a reference to all young computer techs... instead take it as a reference to those techs who shouldn't be allowed to go to the grocery store alone, much less use a computer.
    ComputerWorld [computerworld.com] also covered this story, with a slightly different slant than Excite's coverage.
  • whoa, hang on a second.

    why are you looking at every possible source of external data? you know exactly where your code is breaking; you have the exploit right here.

    if your code is horribly broken and relying on other horribly broken code, then sure, it takes you a long time to fix it, but would you rather have even LESS information available to you when you start fixing this bug?
    The author had the point right in front of him, but missed it:
    Web vandals tend to use only a handful of exploits to compromise vulnerable sites just enough to post digital graffiti.
    Those "handful of exploits" are analogous to a handful of unlocked doors in a building that's supposed to be secure. The question nobody seems to be asking is, "Why are those doors unlocked?"

    Why did Microsoft ever have a scripting facility with no security checks? Why do products still have buffer-overflow issues? Sloppy design and coding. Until the bar is raised for the production of software (ala OpenBSD), this will continue.
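
    The sloppiness in question is often just a couple of lines of C. A generic sketch, not taken from any particular product:

        /* overflow.c - the canonical sloppy-coding hole, and its fix. */
        #include <stdio.h>
        #include <string.h>

        void sloppy(const char *input)
        {
            char buf[64];
            strcpy(buf, input);       /* no length check: anything past
                                         64 bytes smashes the stack */
            printf("%s\n", buf);
        }

        void careful(const char *input)
        {
            char buf[64];
            strncpy(buf, input, sizeof buf - 1);   /* bounded copy */
            buf[sizeof buf - 1] = '\0';            /* always terminate */
            printf("%s\n", buf);
        }

    The fix costs two lines; "raising the bar" mostly means refusing to ship the first version.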

    The REAL problem is that people have no understanding of the origin of these problems. Once it is common knowledge that sloppiness in design is responsible for the Love Bug virus and web-site hacks, people will demand better software and be willing to trade some convenience for security. Current design holes are the equivalent of buttons all over a car which will unlock the doors, un-latch the steering column and start the engine. Nobody would tolerate a car that is so open to theft, and nobody will tolerate software that is equally bad (as so much software is today) once the public is sophisticated enough to know the difference.
    --

    Better that an army of script kiddies get ahold of exploits and use them to DDoS each other off the net for a week or so than that some motivated and really knowledgeable types have years to plan their attacks using unfixed holes.

    Script kiddies, simply because of their numbers, can't keep secrets. Once a program to root some webserver becomes known, they'll use it to put up a brag sheet, or to run an IRC bot, or something stupid. This means the problem is going to get the attention of the admin, or someone calling the admin to report a DDoS...

    I'd much rather that Amazon.com be rooted by a script kiddy posting a brag sheet, or perhaps DoSing EBay for refusing to list his Ultima Online character, than for it to be subtly cracked by someone stealing credit information.

    Similarly, we say that in the old days, nobody had to lock their houses because there weren't thieves everywhere. But that means that if someone wanted to get into your house at night for a more devious purpose, they could.

    Like it or not, bugs won't be fixed till they're exploited in a public way. I'd rather that way be a bunch of stupid kids playing with scripts than millions of dollars being stolen. Similarly, if there was a problem with a certain type of door lock, I'd rather hear about them being recalled after thousands of minor B&Es instead of just a couple rapes or other serious crimes.

    It'd be nice if Microsoft and other big companies would try to fix the bugs as they were reported, but that's unlikely. As long as they can just ignore them, hoping they don't cause problems, and maybe fix it in the next release, they will. We *need* to cause a fuss about each and every exploit so that 'they' have to do something.

    As long as it's cheaper to hide behind EULAs (and bribe politicians to pass fucking stupid laws like the UCITA) there's no reason for them to actually fix bugs unless there's enough of a fuss that people might stop buying the product.
  • And why in hell should he be interested in helping you? And what do you care about his agenda?

    You care about his agenda because you're trying to rip him off as much as you possibly can. And you're also trying to rip off everyone else as much as you can. Now he's telling everyone that your product sucks... From your perspective it's not useful information. Under the terms of most EULAs you aren't responsible for any defects in your product. So the information just persuades people to buy someone else's crap, or pay less for yours.

    --locust

    The usual assumption in cryptography is that using an obscure cipher probably means it will be fundamentally weak, and that it is preferable to "flow with the herd" and use Blowfish, Triple DES, or whatever comes out of the AES effort.

    Another view is taken by Terry Ritter, of Ciphers By Ritter. [io.com]

    His article Cryptography: Is Staying with the Herd Really Best? [io.com] questions that; his view is that there should be a framework for there to be a rich set of ciphers in use, and that systems should readily, and dynamically, be able to shift to new ones should an older one be broken.

    There are, painting with a broad brush, two major approaches to security:

    • Create "heavily armoured elephants," with comprehensive, well-understood sets of defenses.

      It is fairly well guaranteed that the armour will prove challenging to would-be attackers, whether we're talking about a crypto system, or a B1-certified version of Unix.

      Unfortunately, since such systems are big, heavy, and complex to assemble, if they do have weaknesses, they will prove extremely vulnerable to attack at that weak point.

    • The other approach might be described as a "herd of gazelles."

      Gazelles are not heavily armoured; they depend on moving quickly to avoid capture by those that would eat them.

      More importantly, they are "physically independent." If a lion is busy chasing one gazelle, he can't catch any of the others.

    The history of major Internet security breaches demonstrates that putting all the eggs in one "pot" is dangerous:

    • The Morris "worm" only affected systems running Ultrix and SunOS
    • The Melissa "virus" affected only those running Microsoft apps
    • Ditto for ILOVEYOU
    If people are running different systems, they will have different vulnerabilities, and so long as the systems do not broadcast the evidence of those vulnerabilities, there is value in obscuring them.
    Marcus Ranum is great, and he's a great speaker, but he's wrong. It is true that the mass distribution of hacking tools has created a mass of script kiddies. This is a side effect of a lot of kids, possibly alienated and marginalized, with excellent basic computer skills, too much time, and not enough legitimate purpose. They do it as a method of asserting themselves. A lot of hacks are a bit like "tagging". You can't drive up 101 in Silicon Valley without seeing tags all over the overpasses.

    Full disclosure allows people responsible for security to verify vulnerabilities, patch holes, etc. The no-disclosure alternative leads to an unknown mass of hackers, out there trading amongst themselves. It will not stop distribution, even to kiddies, who will spend endless hours on #supah_hot_shells on irc pining away for a new tool. Meanwhile, with no public disclosure, who will protect us?

    You guessed it, Network Flight Recorder. It, and a cadre of other companies like it, will share their secrets with each other under the blanket of draconian NDAs.

    Part of the problem is just that we've recently had a lot of distributed DoS attack "exploits". The problem being, you can prevent yourself from being part of one, but you can't prevent yourself from being a victim of one. There's nothing worse than running a tight ship, tuning your box(es) to be safe, and then eating 200 megs of smurf because some user with a shell on your machine kicked some flooding fool off #stay_away_flooders.

    Still, the smurf problem (and those like it) is not insurmountable, and people are now aware the problem must be dealt with in an automated way, and they're working on it. Meanwhile, law enforcement will grow more adept at tracking this sort of thing. As many people have pointed out, few connections to the net are truly anonymous. Meanwhile, cooperative logging will grow more likely. Logs will stream offsite immediately to a super-safe host, so even if you break into a system, your tracks are set in stone, etc. Meanwhile, those of us who just want safe boxes can keep them safe.
  • by Anonymous Coward on Thursday July 27, 2000 @07:15AM (#900888)
    Remember always, that the goal is to patch the holes.

    For instance:

    (1) I write code that makes many calls to the linux kernel.
    (2) I am unfamiliar with the intricacies of the kernel.
    (3) I find a call with some messed up parameters that inadvertently gets me root access.
    (4) I do not know how to fix it in the kernel, but other people do. Maybe I just don't have time to spend fixing the kernel.
    (5) So, I **MUST** release the code showing what I did so others who do know the kernel well can fix it.

    And vague descriptions by me of what I did only slow the creation of a patch, because I may articulate incorrectly or leave out vital info about stuff I'm unfamiliar with. The source code is always true.
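
    Step (5) doesn't have to mean a weaponized exploit, either; a proof-of-concept can be as small as the one misbehaving call plus a printout. A harmless sketch of the shape (the ioctl request here is deliberately bogus, not a real bug):

        /* poc.c - the shape of a minimal proof-of-concept: make the
         * exact call that misbehaves and report the result, nothing
         * more. 0xDEADBEEF is a deliberately bogus request number. */
        #include <stdio.h>
        #include <string.h>
        #include <errno.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>

        int main(void)
        {
            int fd = open("/dev/null", O_RDWR);
            if (fd < 0) { perror("open"); return 1; }

            /* The one suspicious call, isolated so kernel folks can see
             * exactly which parameters matter. */
            int rc = ioctl(fd, 0xDEADBEEF, NULL);
            printf("ioctl returned %d (errno: %s)\n",
                   rc, rc < 0 ? strerror(errno) : "none");

            close(fd);
            return 0;
        }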

  • by anticypher ( 48312 ) <anticypher.gmail@com> on Thursday July 27, 2000 @07:09AM (#900892) Homepage
    As a homeowner with a Brand-X lock, you feel secure. The Brand-X lock has the most complicated key you have ever seen, and the lock is hardened steel with dozens of anti-tamper functions. You feel really secure.

    Brand-X locks have a defect which means they can be opened by anyone inserting a screwdriver and turning it. The manufacturer knows this, but doesn't say anything. This is security through obscurity.

    Thieves (script kiddies) have discovered through experimentation the screwdriver trick. They occasionally wander down your street, looking for Brand-X locks.

    If the manufacturer of Brand-X locks were responsible, they would put out a recall notice and replace all the defective locks. But they aren't, so thousands of homes are broken into by thieves who have learned this exploit. Many homeowners learn of the defect from the police after the fact.

    Once a news report is published showing the fault with Brand-X, many of the consumers clamor for replacements. Eventually the manufacturer gives in and provides new locks to those who ask for them, putting a positive spin on the whole sordid affair. Draw your own parallels with this action :-)

    the AC
  • by Chris Burke ( 6130 ) on Thursday July 27, 2000 @07:10AM (#900893) Homepage
    Yeah, well, I'm no zealot, but I can clearly see the problem isn't the release of source code, but the lazy sysadmins. Closing the source as a method of solving the problem of lazy admins is not a solution at all. It will give you a slightly greater amount of time before the exploits are found, but then there won't be a bugtraq post about the exploit, and the boxes will still be cracked. You've only delayed your doom, at the cost of making it worse when it inevitably comes.

    The 'many eyes' mantra isn't a mantra... it's sound reasoning that has been demonstrated to be effective. Yes, it does actually require many eyes, and more importantly for people to pay attention when the mouths associated with those eyes speak up about a problem -- which is the point. Get rid of the lazy sysadmins, and the article wouldn't have been written.

    What I can only hope is that the spate of low-effort cracking and script kiddies will serve as a wake-up call to those people who think these problems will just fix themselves. To an extent, it seems to have, but I hope they don't react by trying to close the source to protect their own laziness.

    To a limited extent it has worked -- I'm paying more attention to security now myself, and I'm definitely a lazy admin (but I don't make any money off my administration efforts, and it's just my box ^_^)

  • by Superb0wl ( 205355 ) on Thursday July 27, 2000 @07:12AM (#900905) Homepage
    • Point the First:
      People are afraid of things they don't understand. This is very evident when dealing with computers, especially after Nightline runs bits like "How to Protect Yourself from Hackers."
    • Point the Second:
    We call them kiddies because they show the knowledge level of a child. They don't understand how things work, and they don't care. It's like an 8 year old with a bazooka. "Hey, Billy, check this out, I can get inside anybody's house on the street! Weeee!"
    • Point the Last:
    You are obviously not a 5kr1p7 k1dd13, so don't worry about it. No one on the 'net will EVER know you are 15 unless you tell them (or unless you act like it). I've been reading your posts for a while, and you sound like a reasonably intelligent human being. Also, the term script kiddie isn't known outside the nerd half of the 'net. You will probably never find yourself hanging out after school when a bunch of bullies come up and yell "Hey, Script Kiddie! You suck!" To sum up: if you know you don't fit the stereotype, that should be enough to convince others that you're not in the stereotype (I think I got that right). So don't worry about it. And keep "using your computer knowledge for good." Don't join the dark side :)


    -Superb0wl
  • by gotan ( 60103 ) on Thursday July 27, 2000 @07:50AM (#900919) Homepage
    While I prefer a system of posting security holes in the open to the alternatives (namely, that the security holes are spread in obscure cracker forums and thus have a far longer lifetime), I find it debatable to provide readily usable scripts for even the dumbest to use freely. In most cases a simple "at this point a vulnerability exists which can be used for such and such a form of attack by people with such and such privileges" should give the maker of the software enough hints to fix the hole, while it would take at least a little work for a cracker to make use of that information, thus greatly reducing the number of potential crackers.

    The only argument for giving away such scripts is to exert pressure on a company that would otherwise totally ignore announcements of bugs and will only react when critical comments start to affect their product sales. I think the fairest way would be to give the company some head start to fix the hole, so they can provide the fix with the report (which should honor the finder of the bug for his efforts), and then publish the hole on some open forum after a few days. If the company chooses to ignore the bug, it will only make them look worse later. There is no need to add a script to the exploit, as these will sprout up anyway as soon as the hole is known.
  • by gavinhall ( 33 ) on Thursday July 27, 2000 @07:21AM (#900921)
    Posted by BSD-Pat:

    Security itself is mostly common sense; you need to know when certain actions are good.

    Now, we published our security setup on Slashdot, mostly because it's nothing special in and of itself. However, I don't publish our *policy* and implementation.

    Why? It's just never a good idea; it's common sense.

    I hate the words "Security through Obscurity", mostly because sometimes it's "Security Through Prudence".

    There are certain times I should never disclose the actual implementation of a security plan. It would, in essence, make things easier for those who are not even "grey hats" (not that I don't trust you slashdotters :p). And sometimes all it takes is *time* looking at a problem to fix it. Sometimes I *know* there's an issue, and I don't want to invite trouble (like in the case of several DoS attacks we've had).

    Network Engineers, Software Engineers, Security Engineers: they are nothing if not human. Now, if you privately notify them about a hole and they refuse to do anything about it, or even acknowledge it, that's a *different* story.

    The other thing that's important in this is peer review. A team of people should be in on implementation; it's the same way with code. There should always be someone reviewing your work, or else you could FUBAR everything.

    So let's forget security through obscurity: if being obscure is your only protection, then you are stupid. If prudence is the reason - keeping quiet simply until you can fix something known to you - then maybe that's a good idea.

    -Pat
