SecurityFocus Responds To ESR Column On OSS Security

RabbitInTheMoon writes: "Elias Levy, moderator of BUGTRAQ and Chief Technology Officer of SecurityFocus.com, has written an article about Open Source Software security in response to the recent message from ESR published here. He makes some very interesting points. Maybe this will clear up some of the misconceptions about open source security."
  • by Anonymous Coward
    Read the article. Clicked on the link.
    Guess what ad the page displayed?

    "You may want a security package.
    You don't know how it works.
    (In fact, nobody does.)
    Maybe you would prefer a trustable one.
    It's...
    Nessus.
    Free.
    Open-sourced.
    ..."

    The words are not exact, but the idea is there.

    Roland.
  • by Anonymous Coward
    Even if we didn't find it, it doesn't mean it isn't there :)

    Gotta love paranoia. But I agree, it is very easy to hide a backdoor in closed source, but certainly not impossible to hide one in open source. I think the easiest way would be to get a little careless about the bounds on a buffer; then you have a nice little buffer overflow that no one knows about yet, and when it's found, it gets attributed to mistake rather than malice :)
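
    A minimal sketch of what that kind of "honest mistake" might look like (hypothetical code, not from any real package):

        #include <stdio.h>
        #include <string.h>

        /* The bounds check uses <= instead of <, so a long enough input
         * writes one byte past the end of name[] -- easy to miss in
         * review, and easy to pass off as a mistake if caught. */
        void save_name(const char *input)
        {
            char name[64];
            size_t i;

            for (i = 0; i <= sizeof(name); i++) {  /* off by one: a careful
                                                    * version stops at
                                                    * sizeof(name) - 1 and
                                                    * forces a '\0' */
                name[i] = input[i];
                if (input[i] == '\0')
                    break;
            }
            printf("saved: %s\n", name);  /* may also print garbage if
                                           * input never terminated */
        }

        int main(void)
        {
            char long_name[100];
            memset(long_name, 'A', sizeof(long_name) - 1);
            long_name[sizeof(long_name) - 1] = '\0';
            save_name(long_name);   /* overruns name[] by one byte */
            return 0;
        }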

  • by Anonymous Coward
    Elias Levy seems to be saying that security through obscurity does work, because he thinks that everyone is lazy/dumb.

    I don't see where he's saying this. He's instead saying that open source doesn't guarantee that bugs will be seen, or that being seen they will be fixed. Both very important caveats to ESR's open source propaganda, IMO.

    Score:5???

  • by Anonymous Coward

    It is certainly true that open source projects seem to have fewer bugs than closed source ones, but that doesn't matter in the computing industry. As a consultant researching the freeware phenomenon started by Linus Torvalds and his operating system Linux, I read Slashdot to get "inside info" on the open source community. However, the level of naivety about the computing industry displayed here does sometimes drive me to post, as I am doing now.

    The fact of the matter is, CTOs at most companies don't care about issues of ideology. They have no idea about who Richard Stallman is, and they don't care either. What they want from software is someone to blame when things go wrong. This is why they choose Windows as a platform for their mission-critical enterprises, because it has the backing of a major corporation who can provide the illusion of safety. They don't care about how many bugs there are, they just want that support from a trusted source.

    This is why open source will not truly flourish in the computing market. Your average CTO will not want to use a piece of software for their mission-critical systems that has the image of being supported by a group of long-haired, bearded hippy hackers who are engaged in all kinds of illegal penetrations into other people's data. This may not be true, but it's what they will think.

    The solution - open source projects need to gain respectability by incorporating under the law. This would get them the trust of tech-savvy computer people everywhere, and allow them to become respectable solutions for the industry.

  • by Anonymous Coward
    All this talk about many eyes making bugs shallow or whatever is just so much hype.

    Unless you read and understand the source code at the level of a security genius, there is no difference between downloading a package and typing ./configure --prefix=/usr/unsecure/bin; make, and downloading a binary image from slashdot's secret warez trading forum [slashdot.org].

    Let's face it, none of us sysadmins ever read the code that deeply. I remember once finding a security bug in the md5getty program, where it actually rendered the system less secure than the standard getty. This was after we had been running it for weeks. It only showed up after I noticed weird log messages, pointing to a root exploit based on the use of the system() call.

    My point is, security is not free, it is something you need as a matter of policy. You also need some very good legal backup for when things get nasty, and if you use open source software for mission critical apps, believe me, things WILL get nasty.

    thank you

  • by Anonymous Coward on Monday April 17, 2000 @02:23AM (#1128376)
    ESR forgot to mention one of the strongest sides of OSS: when there's a flaw in the security, you can fix it. And even when you can't, there are always others who will do it for you. Even if you read in every damn newspaper on Earth that a Microsoft product you're using has a backdoor, you can't do anything about it except upgrade to Apache.
  • LinuxToday [linuxtoday.com] has a little piece by ESR where he acknowledges that it's not really a backdoor as ZD and the experts who found it said. But the point of his original article still stands, security through obscurity doesn't work.
  • You missed the point about Thompson's trojaned C compiler. It was designed not only to insert a backdoor into /bin/login whenever it detected that login was being compiled, but also to insert this backdoor-producing code into cc itself, if it detected that cc was being recompiled.

    Thus Thompson could distribute clean source, but still guarantee that the trojaned binaries would propagate, since you had to use his trojaned cc binary to compile a (trojaned) cc binary from (clean) source.

    Now if someone had compiled Thompson's cc source with a *different* C compiler, the resulting binary would have been clean. But at the time, his compiler was the only one available.
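
    A toy illustration of the self-reproducing idea (my own sketch, nothing like Thompson's actual code): a "compiler" that passes its input through unchanged except when it recognizes one of its two victims. The real trick also had to carry its own insertion logic as a quine so that it survived recompilation.

        #include <stdio.h>
        #include <string.h>

        /* trusting-trust toy compiler */
        int main(void)
        {
            char line[1024];

            while (fgets(line, sizeof(line), stdin)) {
                /* victim 1: the login program grows a master password */
                if (strstr(line, "int check_password("))
                    puts("/* injected: if (!strcmp(pw, \"backdoor\")) return 1; */");
                /* victim 2: the compiler itself gets this same logic
                 * re-inserted, so clean compiler source still produces
                 * a trojaned binary */
                if (strstr(line, "trusting-trust toy compiler"))
                    puts("/* injected: re-insert both of these checks */");
                fputs(line, stdout);
            }
            return 0;
        }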
    --
  • Now I DO know how to code. I'm taking my super-happy C++ coding classes in high school, so I know my way around a compiler and the like. So, after reading this article, I thought "hey, let's see if I can understand this NOW!" Guess what? It's still spaghetti code. I still can't understand a lick of it, other than the PRINTFs and SCANFs. That's it. And I got a 98% in the class.

    Being able to read other people's code is not a skill that I think is taught well, or indeed taught much at all.

    I've been working as a programmer for 5 years now, and have developed a pretty good knack for reading and figuring out code. Which is good, because where I'm working now, we have code (ugly code) lying around that's 18 years old. I don't even know if the authors are still alive - they sure aren't still working here, though!

    But this skill is purely something I've developed through experience. I did a CompSci degree at uni - we wrote lots of code, and we learnt a lot of interesting and useful things. But we never did anything along the lines of learning how to read code. So don't feel too bad if you haven't learnt that in your high school C++ classes.

    It's a shame..

  • This article presents nothing new; neither did ESR's. Both articles simply rehash the same old arguments we've been hearing for years. I suppose these things do have to be occasionally restated, but I just feel the urge to say "bleaagh."

    Elias Levy seems to be saying that security through obscurity does work, because he thinks that everyone is lazy/dumb.

    Personally, looking at the evidence of the last few years, I'm not so sure I agree with him. Yes, *theoretically* a hole in open source software could go unfixed for a long time, but what we have seen is that it seems to be much more common for closed source software to have holes in it for years.

    Regarding his point that black-hats will find the holes in open source: who do you think is finding the holes in closed source stuff? I'd still rather have the "window of opportunity" be of shorter duration, and I think I get that with open source.

    Open source is not a panacea, but it's still safer in the long run, IMO.

    New XFMail home page [slappy.org]

    /bin/tcsh: Try it; you'll like it.

  • I work on a closed source project. Part of our process is to have a code review. I've been part of them, and I've had them done to my code.

    Now, while this isn't as extensive as the theoretical review that could be done on closed source, it is more than the simple evaluation would indicate.

    Of course we are an exception. One of our old alpha testers went elsewhere, and she described her experiences as follows: "They came to me and said 'Two weeks in test! That's the longest anything has been in test. Ship it.' 'Ship it? There are still bugs.' 'Yeah, but we know where they are.'" The project I'm working on was just shipped last week, and it spent 8 months in test. Even to get out of test that quickly, we had to ship with a long list of bugs. We are not any worse than anyone else at coding.

    Our test process is made much easier in that this is release 1.0, and we designed the hardware from the ground up. If we had to support as many different PCs as, say, Linux or NT does, our test process would be much longer.

  • The anecdote about the backdoor in the C compiler is interesting, but Elias wastes too much time in his article on the observation that black-hat hackers can find security vulnerabilities in open source software. He claims that, by having made such discoveries, they are somehow undermining the claim that peer review leads to better (and more secure) software. Elias misses the point that it doesn't matter who finds the vulnerability. With the source available, anyone can fix it once the word spreads that one has been discovered.

    The claim is not that peer review sets up a magic world where only the kind-hearted will discover security holes. Rather, it is that one is not at the mercy of the vendor once the vulnerability has been discovered. Nor is anyone at the mercy of the vendor when the product's architecture is found to be its greatest vulnerability. Moreover, with the source out in the open, the vendor can't deny the existence of the vulnerability.

    To me, all of the white-hat peer review is just one feature of many that leads to greater security in the free software universe.

  • This is actually something I've thought about a lot. I'm involved in an Open Source security company (www.Protectix.com) and we produce network security appliances. It turns out that in this business, the licenses and warranties specifically say that the company is not liable for basically *anything*. Obviously, that's suboptimal for the consumer, but in today's litigious society it seems that everyone specifically disclaims all "fitness for purpose", etc. implied warranties. There are insurance companies that are getting into the intrusion and loss insuring business, but their number crunchers are having a hard time coming up with a reasonable model for loss probabilities and amounts due to the lack of actuarial data on this sort of thing. I think that some day you may see security companies offering an insurance policy along with their service, underwritten by the big insurance houses. That is likely some years away and will initially be quite pricey, IMHO.

    This is not to say that companies don't care about the customer's security! The security business is reputation-dependent and companies will do everything they can to make sure the customer (and their reputation) is defended.

    Mark

  • by bhurt ( 1081 ) on Monday April 17, 2000 @06:32AM (#1128384) Homepage
    1) Is anyone reading it?

    The evidence says yes. There was an attempt to post a trojan in open source recently (I lost the URL - help, anyone?). It lasted _ten_ _hours_ before it was discovered. And this question applies doubly to closed source - not just anyone can read it, only employees can. And most companies that I know of don't implement any sort of code review, so often a piece of code is only ever read by one person.

    2) Are they qualified to review the code?

    Some yes, some no. Once again, the exact same question can be applied to closed source. Oops - most closed source isn't reviewed, even by unqualified programmers!

    3) It's easy to hide vulnerabilities in complex, poorly documented source code.

    This I'd agree with- especially _unintentional_ vulnerabilities. On the other hand, it's also hard to _maintain_ such code- and in the open source world, over time it tends to get reimplemented. Sendmail worries you? Try using smail or qmail instead. And, if anything, complex, poorly documented source code is more likely in closed source projects, where the assumption is that only the original writer will ever see the code, and since they already understand it, comments and simplicity are optional. Besides, writing comments and refactoring code takes time, and I have a deadline this week...

    4) There is no strong guarantee that source code and binaries have any relationship.

    Ah yes, Ken Thompson's paper. Thompson's paper assumes only one compiler, ever. You always compile gcc with only gcc- never Sun's cc, or IBM's xlc. It also assumes that any version of the compiler will recognize any other version of the compiler- so gcc 1.0 would recognize the source code to gcc 2.4, AND be able to insert the back door correctly into the produced binary! It'd also have to recognize the source code to tools like objdump and gdb, and insert the proper back doors into _them_ as well. I knew Ken Thompson was legendary- but this defies the imagination.

    And, once again, there's no evidence of this in the closed source world either. Prove to me that Visual C++ isn't inserting back doors into Windows...

    5) Open source makes it easy for the bad guys to find the vulnerabilities.

    I was wondering when this chestnut would show up. The implicit assumption here is that the bad guys can't disassemble code- an assumption provably false. There's a famous line an IRA terrorist reportedly once said to Prime Minister Thatcher: "We only have to be lucky once. You have to be lucky all the time." Security analysts face the same problem- the crackers only have to find _one_ vulnerability, while the security people have to find _all_ of them. You're not making the crackers' lives much more difficult, but you're making the white hats' lives all but impossible.

    Open source is not a security silver bullet- and I don't know of anyone outside a few anonymous cowards who claims that. Open source _is_ better than closed source for security. No ifs, ands, or buts.
  • by rlk ( 1089 ) on Monday April 17, 2000 @03:07AM (#1128385)
    What the article omitted is the issue of how quickly bugs get fixed. Certainly patching things on the fly is much easier with open source, and there are alternate ways to get fixes. This, of course, can be a weakness, if you're not careful who you get your fixes from. But overall I think it's a strength.

    Re sendmail and the DEBUG (and WIZ) commands: those commands were quite well known in the 1980's, but one has to look at the times to understand the issue in context. Until the Morris worm, people never really paid too much attention to back doors; the Arpanet was much more of a friendly club than the Internet of today. Also, sendmail wasn't really open source back then anyway; the vendor supplied it as a binary (often with proprietary extensions). You could plug in your own version of sendmail if you were motivated to, but you'd often lose something in the process, such as YP username resolution or some such. I still remember binary patching sendmail to turn off the DEBUG and WIZ commands because they were so obviously braindead (no particular kudos for that; just find the DEBUG and WIZ strings and overwrite them with nulls).
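
    For the curious, that patch amounted to something like this (a from-memory sketch, not the original; poking the bytes with adb or emacs did the same job):

        #include <stdio.h>
        #include <string.h>

        /* Zero out every occurrence of a string (e.g. "DEBUG" or "WIZ")
         * in a binary, in place, so sendmail's command lookup can no
         * longer match it. Crude: no backup is made, and a match that
         * straddles a buffer boundary is missed. */
        int nul_out(const char *path, const char *needle)
        {
            FILE *f = fopen(path, "r+b");
            char buf[4096];
            size_t n, i, j, len = strlen(needle);
            long base = 0;
            int hits = 0;

            if (!f)
                return -1;
            while ((n = fread(buf, 1, sizeof(buf), f)) >= len) {
                for (i = 0; i + len <= n; i++) {
                    if (memcmp(buf + i, needle, len) == 0) {
                        fseek(f, base + (long)i, SEEK_SET);
                        for (j = 0; j < len; j++)
                            fputc('\0', f);
                        fseek(f, base + (long)n, SEEK_SET);
                        hits++;
                    }
                }
                base += (long)n;
            }
            fclose(f);
            return hits;
        }

        int main(int argc, char **argv)
        {
            if (argc != 3) {
                fprintf(stderr, "usage: %s file string\n", argv[0]);
                return 1;
            }
            printf("%d occurrence(s) zeroed\n", nul_out(argv[1], argv[2]));
            return 0;
        }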

    I don't think that "more people checking source" = "more out there introducing bugs". Most people who hack the source are doing it for their local site. They may well be introducing bugs, but they're not spreading very far. Certainly they're not likely to propagate back to the master source tree, assuming that the people controlling the master source are anywhere on the ball. Most people also don't need to get open source packages via back channels; it's just too easy to get it officially (most of the major Linux distributions include a ridiculous number of packages). One can argue about where they get their source -- with some validity, at least in principle -- but for players such as Red Hat, SuSE, and the like they employ enough known clued people so that there's a good chance that they're getting their software from reasonably trustworthy sources. Along those lines, I'd be more worried about warez; that always comes from back channels, and there are too many places where it's easy to stick nasties on (just do a rogue installer that installs a virus along with the real package).

    Aside from moral issues, the problem with vetting access to open source software is the usual chicken and egg problem. How do you verify someone's bona fides if they don't get a chance to do something in the first place? For the maintainer of a package deciding whether to accept a change it's reasonable that that maintainer use whatever standards he or she wants. For simply inspecting the code, or making local modifications, why go through all this?
  • Mister AC was referring to the option value of open source (which certainly exists). Certainly there are free rider and easy rider issues with open source, as with all public goods- but that option value is real, and is something that is not always available with closed source software.

    Evangelism on either side of the security debate is of little use to people who actually have to implement security solutions. And the more value that is placed on that security, the higher the option value of open source is.

  • by Malc ( 1751 ) on Monday April 17, 2000 @04:35AM (#1128387)
    Don't worry, I have many years of experience but I still can't just glance at code and know what's going on. Firstly, it's time-consuming. Secondly, unless you have direction (say, you want to find out how a small piece works), it's easy to be overwhelmed, especially on bigger projects - you need to apply the "divide and conquer" approach. The smaller your ulterior motive, the harder it is to get into and understand somebody else's code, let alone start finding security bugs. In some ways, the best approach is to review the code with another person... you can discuss it as you go and motivate each other.

    We do code reviews at work and they can take hours. That's just for small sections of code - no more than a few hundred lines. Looking at somebody else's code takes some time just to get up to speed.

    I got really annoyed with Mozilla crashing on my SMP machine a couple of months ago. I downloaded the source, compiled it and then started running it through Visual C++. Guess what: after spending a whole Saturday doing that I gave up (don't give me a hard time, I know I didn't stick at it long enough to get anywhere useful). It takes a long time, and I don't have the time. I already work 8 to 12 hours a day, after which I want to get away from the computer and spend time with my girlfriend, etc.

    So, my point. Although open source means that there is the potential for more eyes looking at the code, does this really happen? How many people just extract and compile, or just install a package? That's effectively no different to closed-source binary distributions. About as close as I've got to the source is when I've had to set a pre-processor define in a kernel header, or when I've got unresolved linker errors building the kernel. The rest of the time I can't be bothered... it's too much effort and I just want it to install and work.

  • By reading this comment (or clicking this box, or opening this package, etc) you give me the right to kill you. Now, when I'm holding the smoking gun, I'll have a hard time staying out of jail.

    You bring up a valid point. The MS EULA, as it currently stands, is probably not legal. However, the fact that Microsoft would print such a EULA, legal or no, should give some sort of idea as to how much they are likely to support you. After all, despite the fact that Microsoft's EULA is probably not legal, and despite the fact that Microsoft installations have been known to melt down and lose companies millions of dollars, there has never been a case that has even tried the legality of the EULA.

    Why is that, you ask? It's quite simple. If you did sue Microsoft the only thing that you could be sure of is that you would be in court for the rest of your natural life with the most capitalized company on the planet. Microsoft has the resources to completely bury all but the best-funded of companies. You could spend millions of dollars, waste hundreds of man-hours and then lose the case in the end. The only way that a court case like this would be worth your company's time and money would be if they happened to be able to prove, in a court of law, that they lost even more money than the court battle would cost due to Microsoft's negligence.

    Dr. Kevorkian's case, on the other hand, was a criminal case. That means several very important things. First of all, it meant that the US Government (possibly the only group in the world better funded than Microsoft) would be paying the lawyer fees. In other words, the money in the equation was on the other side. If Dr. Kevorkian had had unlimited funds, the case might well have ended up like the O.J. Simpson trial. First-rate lawyers clearly can make a big difference (Dr. Kevorkian actually waived his right to a lawyer and defended himself. Yikes!).

    The second major difference between Dr. Kevorkian's trial and the trial that you would get were you to take Microsoft to court regarding the EULA is that there is very little legal precedent for responding to issues of software failure. Dr. Kevorkian had a video of himself killing one of his patients. He even admitted that this was the case. Given that evidence, the state laws were quite clear. As a plaintiff against Microsoft you would have to prove (beyond a reasonable doubt) that Microsoft knowingly created a defective software product, and that the defective software product was responsible for your lost income.

    Good Luck trying to prove that against Microsoft's lawyers.

    Not to mention that with UCITA becoming law in many states across the nation soon the MS EULA might become completely legal. The fact that Microsoft is lobbying hard to make the UCITA law in all 50 states should give you an idea as to how much Microsoft is concerned about its customers...

  • I'd say ESR's argument is very strong and not at all refuted by the opposing piece. I find it personally satisfying that ESR is able to get so much mileage out of a FUD tactic invented by Microsoft.
  • by Pseudonymus Bosch ( 3479 ) on Monday April 17, 2000 @03:34AM (#1128390) Homepage
    Depending on how active the development is the code may be found in a day, or a year or even more. No-one knows as this has never been done before.

    How do you know?
    Maybe it's part of Microsoft's policy to introduce such an innocent backdoor with every program. And maybe some open source superhacker is grinning to himself knowing that nobody noticed his supercleverly-disguised harmless backdoor in the Linux kernel.

    Even if we didn't find it, it doesn't mean it isn't there :) .
    __
  • The fact that your graduate students could not find that problem unfortunately doesn't say much for them as Unix/Linux security auditors. That system() sticks out like a sore thumb, and execve() would as well. Checking for ways in which a setuid-root program executes another program is a very obvious thing to do.
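
    For illustration, a hypothetical fragment of the pattern in question (not from any real getty):

        #include <stdio.h>
        #include <stdlib.h>

        /* Imagine this inside a setuid-root getty replacement. To an
         * auditor the system() call is an immediate red flag: it runs
         * /bin/sh, which honors the attacker's environment, and here it
         * interpolates an attacker-influenced string. A "user" of
         * "x; /bin/sh" turns the logger invocation into a root shell.
         * Grepping a setuid program for system( and exec is step one. */
        void log_failure(const char *user)
        {
            char cmd[256];

            snprintf(cmd, sizeof(cmd), "logger bad login by %s", user);
            system(cmd);
        }

        int main(void)
        {
            log_failure("x; id");   /* demo: the "; id" runs as a command */
            return 0;
        }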

    Bruce

  • Hi Elias,

    I'd like to point out a few problems with your comment.

    The Gauntlet firewall published by Trusted Information Systems was not an Open Source program. It's what we call "disclosed source-code", and that's very important because that difference means that nobody had much reason to read it or work on it. The software license didn't provide them any incentive to do so - you would have only been fixing bugs in a program that somebody else has an exclusive right to sell. Who wants to be the unpaid employee of another company? With real Open Source, you have the same right to sell the program as anyone else, or to distribute it for free, for that matter, and thus you aren't some company's unpaid dupe. For an explanation of what Open Source is, see The Open Source Definition [perens.com] .

    At the time of the Morris Internet worm, the BSD software distribution of which Sendmail is a part was under a restrictive license and required an expensive AT&T Unix license before you could get the system. This is also not what we today know as Open Source. Besides, you are writing about the epochal Internet worm, and few people even considered Internet security before that event.

    Yes, all compilers have a bootstrap problem. One can avoid it by compiling the compiler with another compiler, once in a while, and then compiling the result with itself. This method can also be used to detect the Trojan: compare the generated executable with one that doesn't have another compiler in its heritage - if there's a significant difference, look for a Trojan there.
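
    Mechanically, the comparison step is just cmp(1). A toy equivalent, to make the procedure concrete (assuming the compiler's output is deterministic):

        #include <stdio.h>

        /* Compare two self-compiled compiler binaries byte for byte.
         * If both were built from the same source, but via different
         * ancestor compilers, a difference is where to start looking
         * for a Trojan. */
        int main(int argc, char **argv)
        {
            FILE *a, *b;
            int ca, cb;
            long off = 0;

            if (argc != 3) {
                fprintf(stderr, "usage: %s bin1 bin2\n", argv[0]);
                return 2;
            }
            a = fopen(argv[1], "rb");
            b = fopen(argv[2], "rb");
            if (!a || !b) {
                perror("fopen");
                return 2;
            }
            do {
                ca = fgetc(a);
                cb = fgetc(b);
                if (ca != cb) {
                    printf("binaries differ at byte %ld\n", off);
                    return 1;
                }
                off++;
            } while (ca != EOF);
            puts("binaries are identical");
            return 0;
        }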

    Most users do not compile their own applications; they get them from a trusted source who has compiled them and cryptographically signed them. You might not be aware that in all Linux distributions of any import, the packager does compile all programs. If a trojan is slipped in, you can trace it to the person who compiled the program and bring charges if necessary.

    And what good would it do anyone to grep through source code for strcpy()? We've already done that ourselves, and have fixed obvious problems.
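
    The typical find-and-fix from such an audit looks something like this (an illustrative sketch, not code from any particular package):

        #include <stdio.h>
        #include <string.h>

        void greet_bad(const char *name)
        {
            char buf[32];

            strcpy(buf, name);   /* what the grep finds: overflows
                                  * if name is 32 bytes or longer */
            printf("hello %s\n", buf);
        }

        void greet_fixed(const char *name)
        {
            char buf[32];

            strncpy(buf, name, sizeof(buf) - 1);   /* bounded copy... */
            buf[sizeof(buf) - 1] = '\0';           /* ...and forced termination */
            printf("hello %s\n", buf);
        }

        int main(void)
        {
            greet_fixed("world");
            return 0;
        }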

    Sure, it's no guarantee, but it's much better than the alternative, which lets Microsoft embed snide comments in their software (and if those really aren't trap-doors, embedding a trap-door would be just as easy) and have them go undiscovered for years.

    Thanks

    Bruce Perens

  • How would you determine whether anybody is an expert at security, then, short of being one yourself? It's all about reputation. And to my knowledge, the developers of e.g. OpenSSL and OpenSSH have a reputation that at least approaches that of developers at RSA and, say, F-Secure. With the big difference that RSA and F-Secure can point fingers at flaws in OpenSSL/SSH design, and not always the other way round. Although a lot of security companies, probably not without reason, tend to be openish themselves; this at least partly goes for RSA and F-Secure also.

    The point here is that open source is auto-hardening. 'Normal' closed commercial software which is being attacked is hopefully fixed, but nobody, expert or not, can review the resulting design changes, if there are any. What remains is name calling: 'Novell had this flaw', 'Microsoft had that', 'SCO such and such'. With open source, name calling doesn't really help, because everybody (including experts) can see whether you are actually adapting.

    I think that incompetent open code would be rather quickly dismissed by people /of reputation/, looking over their shoulders. If it doesn't get dismissed then chances are the 'core team' of that piece of code are probably quite expert.

    About peer review: of course most people potentially having a look at the code don't know what they're looking at. But there are always experts out there, who tend to be interested in their field. You bet they are watching!

    In a broader view, of course there are no guarantees that open source software is 'secure'. But security/encryption related OSS packages probably are, more or less. If other packages use these few firmly audited crypto libs for encryption related stuff, and adopt a 'secure' coding style with regards to buffer overflows/memory leaks and such (which is reviewable by a much larger group, although I admit not yet by everybody), then what results is very nicely secured software. I don't see this advantage in the closed source world.

    Apache as an example: everybody can check and see that Apache uses mod_ssl, which uses OpenSSL. Also, everybody can check whether Apache has been audited. So everybody can make a rough estimate of the security-mindedness of Apache.
    IIS: not so (apart from experience)

    I apologize for my less than compact writing here, this could be better I'm sure. The point is that while there is no guarantee, there can hardly be a disadvantage. Overall people will be better off with open source.
  • by bert ( 4321 ) on Monday April 17, 2000 @02:44AM (#1128394) Homepage
    In the end they don't really disagree: ESR says Open Source security would be better almost by definition, whereas Elias Levy notes that the 'open source way' is potentially better, but it all depends on the many eyes actually watching/being able to watch for harmful code (the cc malicious chicken-and-egg problem aside; but proprietary software doesn't seem to offer any advantage here).

    Then again, there's mostly a small team of hard core developers for any open source project, and especially the (mostly technical) security related stuff. Any nasty stuff would probably have to be done by someone inside such a team. An outsider trying to submit something 'bad' would very probably be noticed by one of the core members.

    So if, as a user, you don't want to code everything yourself, it all comes down, I think, to trust. The only question in the open source vs proprietary case is: who do you trust more in the end, a proprietary developer team or an open one.

    Security by Obscurity could maybe temporarily help the Proprietary case, but the 'Exploit-Found' scenario would always turn out better for Open Source: more fixers, hence quicker fixes. And the fix-it-yourself option.
  • Tell him about the NSA backdoors and the "Netscape engineers are weenies" one... even if it's 100% bollocks, it worked for me! Hey, Microsoft doesn't have to have a monopoly on false advertising...
  • Try reading some of the drivers in the kernel source - I've found that much of the V4L stuff (bttv mostly) is fairly easy to read (I was even able to write a new audio chip driver based on existing code), and Donald Becker's ethernet drivers seem to be extremely well documented and commented...

    Of course when you're reading device drivers, you should have the hardware datasheet in hand, so that you'll have some idea what's going on...

    ---

  • Either back-door bugs are obvious as hell, in which case they could be picked up by review by ten people, max. Or they're not all that obvious after all.

    Good God man, you have already stated that you do not know how to program. I would imagine that you have no idea who the Apache Group are, and do not know the large companies that pay to have Apache developed. Then again, you could be a troll.

    Furthermore, the more people there are checking the source, the more there are out there introducing bugs, for the hell of it. Even if "Apache" is bug free, how can you tell that the disk you got passed by your third cousin who got it from his ex-boss who got it from "some guy" is?

    Oh come on, do you not know that you can verify the source with a digital signature from a trusted source? Yep, you're a troll.

    It would seem far more useful to me if source was "open" in the sense that you could get a copy of the code on production of a reasonable description of what you planned to do with it, what improvement you wanted to make, plus at least two references from people who were prepared to establish your bona fides. Kind of like the criteria for getting a reader's ticket to a law library.

    This is not Law, troll. If the package maintainer is a good manager, he will know what is good and allow it in. I guess you want to spread the LIE that these packages just get moved around without a single person looking at the code. SOMEONE MUST LOOK AT THE CODE BEFORE IT IS PLACED IN THE MAIN DISTRIBUTION TREE. ok, ok, so my karma is going to get tanked for this flame, but I don't care. I am DONE with these trolls.
  • If I download, say, a GPL'd firewall, who is legally responsible when it lets through an attacker?

    This whole idea of "blame" doesn't mean much anyway. If you pay a few thousand bucks to Microsoft for your office network, and someone breaks in and trashes everything, Microsoft is not held responsible. You can't sue them. You agreed to that in your license. Tough luck! :-)

    So I really don't see why people run around scared of OSS because they don't have anyone to sue...

    Cheers,
    Vic
  • From the article: "simply being open source is no guarantee of security." ... Most everybody can agree with that, I think. He does an excellent job of making that point, as expected.

    For those of us who suffer from the horrible HTML at securityfocus.com: the unframed article [securityfocus.com] and just for good measure the unframed BugTraq Archives [securityfocus.com]. Really, guys, a banner ad is fine but you've got 3/4 of the browser filled with useless flashing crap. Stop it.
  • Yes, some of us are "stuck" using more than one system. Until Linux comes up with the tools (read: UML modeling tools and better MS Word compatibility), I'll be hopping back and forth between NT and Linux.

    (On the other hand, I finally got Linux working on an old laptop, so I'm getting closer to what I need for a useful environment...)

  • Since I think ESR and other open-source zealots cannot bring themselves to read all the way to the end, I'll copy the conclusion here:

    Open Source Software certainly does have the potential to be more secure than its closed source counterpart.
    But make no mistake, simply being open source is no guarantee of security.

    So, in the end, what he wants to state is that it's not a given that open-source software is more secure, but the potential is there....

  • In practice it does. You seem to think that security faults can only be introduced on purpose, but of course 99% are errors made by the programmers.
  • exactly. Elias Levy isn't arguing against open source, and certainly not proposing that closed source is more secure. he's just reminding us not to get *too* carried away, and stressing that actually reading code is a good thing to do. your reply just proves that you were expecting an "argument", i.e. either for or against "us", the OSS guys. reality isn't black and white, and I thank Elias Levy for reminding us.
  • d00d, you're smoking crack. YOU don't have to read the code (as long as you get it from a trusted source). Since the code is open, however, people in general can read the code and make improvements, which you in turn benefit from. How many vendors exactly have been sued into writing bug-free programs?
  • ESR is trying to single-handedly re-create the "man month" - the idea being that the more eyes you throw at the problem, the faster/easier/better things happen.
    This is demonstrably untrue.

    Ha ha, very funny. The "man month" myth only applies where there is lots of design and communication overhead. Luckily, looking for bugs requires neither. I think there are now officially more trolls on slashdot than normal posters. You smack one down, two more pop up.
  • So what we have is source with bugs, but a situation where any blackhat hacker can run grep/sed/awk/perl/etc on it to look for trivial bugs. If this same source were closed, it _would_ raise the bar for creating a viable exploit significantly.

    Searching for basic exploits can be hilariously easy. Read the cDc's Tao of Buffer Overflows [cultdeadcow.com]. All you have to do is input a bunch of text into a field and see if the program breaks. Not much harder than grepping for sprintf or scanf.
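
    A toy harness in the same spirit (hypothetical code; the deliberately buggy parser stands in for the program under test):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        /* Deliberately buggy "field" parser: unbounded %s is the
         * classic overflow. */
        static void parse_field(const char *s)
        {
            char field[64];

            sscanf(s, "%s", field);
        }

        /* Feed ever-longer strings to the parser in a child process and
         * report the length that gets it killed (if the platform
         * catches the smash at all). */
        int main(void)
        {
            size_t len;

            for (len = 16; len <= 4096; len *= 2) {
                char *input = malloc(len + 1);
                pid_t pid;
                int status;

                memset(input, 'A', len);
                input[len] = '\0';

                pid = fork();
                if (pid == 0) {
                    parse_field(input);
                    _exit(0);
                }
                waitpid(pid, &status, 0);
                printf("len %4lu: %s\n", (unsigned long)len,
                       WIFSIGNALED(status) ? "CRASH - possible overflow" : "ok");
                free(input);
            }
            return 0;
        }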
  • You can argue that OSS isn't more secure. However, there is the possibility that the code was reviewed and it is secure. If it wasn't reviewed and there is an exploit, you have the source and can fix it immediately. You don't need to wait for the "vendor" to "fix it".
  • I thought "hey, let's see if I can understand this NOW!" Guess what? It's still spaghetti code. I still can't understand a lick of it, other than the PRINTFs and SCANFs. That's it. And I got a 98% in the class.

    Hang in there. In programming as in any other highly developed speciality it takes a while to get from where you have to start as a beginner to where the pros are working. And also, don't give up. Just because you don't understand the first function in a listing (that's actually pretty normal) doesn't mean you can't go somewhere else in the listing and start with something you *do* understand. Hint: start at the bottom and read up. Start with "main" :-)

    Another factor is that you often have to have a pretty good grounding in the application area the code is written for before you can have a clue what it's doing, so if you're reading crypto code, go check out some crypto sites and maybe get a crypto book. Same goes for security. Read the HOWTOs and get an idea what the apps are trying to do. Good documentation helps a *lot* too, and while it may not always be included in the source code ;-) you can often find it in other places, for example, the web site the code came from, or newsgroup archives, or the linuxdoc project, etc.

    And then there's just outright bad coding style... it's often not your fault that you can't read something right away. Poorly chosen variable names, unnecessarily redundant code ;-) badly structured code and use of obscure library functions all contribute to code being difficult to read.

    But the main thing is just experience. It takes a minimum of 3 years to get to a professional level of knowledge in C++, and until you get there, you'll have to pick and choose the code you work on and you'll have to work hard to understand typical pieces of production code. Also consider that you'll get there faster if you start with a language other than C++ - say, Java or Python - because the concepts you need to learn are much more clearly expressed in those languages, with fewer obscure and distracting features.

    In the meantime, don't worry, there are still plenty of eyeballs out here that can read and understand this stuff :-)
    --
  • What might that be exactly? Was it the Linux kernel itself, or something tacked onto one of the distributions? I can't think of any terribly significant bugs in the kernel that were exploited by "black hats" before the vulnerability was published (and normally patched). In any case, the latency between a working exploit circulating amongst black hats and the knowledge of that bug is about nil. Yes, I realize there have been some DoS vulnerabilities in the kernel and some relatively minor security issues (relatively recently, say the past 2-3 years), but all of these to my knowledge have been published by white hats (those who publish their work, not exploit others). I fully realize that this is not "effective security", insofar as many admins can't secure their machines before the so-called script kiddies can. In terms of "ultimate security" (as I described in my previous post), though, I still believe Linux to be well ahead of NT.

    The problem is that this is hard to prove empirically. If we know about the exploit shortly after blackhats do, then it's hardly "ultimate." And if we do not know... well, we just don't know. We can, however, make some inferences. We all know the abysmal record of Microsoft when it comes to bugs. For example, we know this recent "Netscape engineers are weenies!" backdoor remained unobserved by the general population for years. Yet it is hard to deny that any sufficiently motivated individual could have discovered it. Thus we can reasonably infer that an organization such as the KGB could have taken advantage of this bug (not that it was all it was cracked up to be) and exploited a great many machines before the public ever saw a fix. While I realize that I'm comparing apples to oranges here, I've yet to see an analogous situation with the Linux kernel, and I've seen enough with NT (in and of itself) to connect the dots....
  • by FallLine ( 12211 ) on Monday April 17, 2000 @07:35AM (#1128410)
    I disagree though. It does raise the bar for the average "black hat" to create a _viable_ exploit, while it is mind-numbingly dull for most "white hats." Sure, you can look for certain key strings with a hex editor, but it is not comparable to looking at the entire code and seeing it in context. Certainly, if you look at the actual number of published exploits for NT, they are relatively few. So we must either conclude that your average black hat has a number of NT exploits which the informed public is unaware of, or they simply don't have any. In either case, though: a) neither the admins nor Microsoft is doing much about it, b) Microsoft gets away with it because the hacking incidence of NT isn't much worse than Unix (and some would argue better), c) a relatively small group of intelligent and motivated individuals can punch holes in almost every NT box on the internet. In other words, the possibility of a highly successful systematic attack against supposedly secure NT installations is not exactly outlandish.
  • by FallLine ( 12211 ) on Monday April 17, 2000 @08:48AM (#1128411)
    The recent DOS attacks involved large numbers of infected Linux systems being controlled remotely to help with the attack. Look in the recent news for articles.

    I am differentiating between the Linux kernel and the distributions. I fully realize that the actual implementation of Linux in the various distributions is less than secure. I'm asking you to enumerate what bugs specifically in the kernel were discovered by blackhats before the whitehats. The mainstream press is useless when it comes to technical details such as this.
    There is no such backdoor, and the continued insistence to refer to it as such simply points out the FUD factor in the Linux world.

    Umm, no. The backdoor exists; it just was not all it was cracked up to be. I believe I alluded to this in my previous comment too. Nonetheless, it is not terribly relevant to my point. Microsoft screwed up. The public did not notice it for years. The degree of severity of the bug is essentially irrelevant; it was certainly significant enough to be worth noticing. It certainly casts doubt on Microsoft's auditing practices.

    As for being a member of the Linux FUD community, my record speaks otherwise.

  • by FallLine ( 12211 ) on Monday April 17, 2000 @09:32AM (#1128412)
    FYI: http://www.techweb.com/wire/story/TWB20000417S0001
    Hardly irrelevant, or non-existent.

  • by FallLine ( 12211 ) on Monday April 17, 2000 @04:18AM (#1128413)
    As far as the Open Source advocates go, I generally find ESR the most levelheaded. ESR is probably right insofar as a blatant backdoor with "Netscape engineers are weenies!" would never escape scrutiny in something such as the Linux kernel. However, his claims were a bit too broad to be digested meaningfully by the masses. Levy addressed ESR's claims. Levy was not claiming "security through obscurity" in and of itself is sufficient. Quite the contrary, he said that many black hats can operate a hex editor and find bugs that way. What he did say was that closed source can offer a significant obstacle to discovery of trivial bugs by black hats. Although it might be obvious when you think about it, many Open Source people hold it as an article of faith that if you take the same source, any source, and Open Source it, it automatically becomes effectively more secure in, say, 6 months. This is simply not the case when you look at the empirical evidence. In other words, if you own some source code to an application, "opening" your code may hurt or it may help your effective security. The change in security is contingent on the specific situation.

    For example, do you really believe that Mozilla is any more secure than Netscape? It obviously contains hundreds of thousands of bugs still. Open source has yet to resolve even more obvious stability bugs, so I think it is reasonable to assume there are significant security issues there as well. Not enough qualified people are truly spending the time to examine and fix it. So what we have is source with bugs, but a situation where any blackhat hacker can run grep/sed/awk/perl/etc on it to look for trivial bugs. If this same source were closed, it _would_ raise the bar for creating a viable exploit significantly.

    On the other end of the scale, we have something like Linux's kernel. Thousands of qualified people really do work and look at the code. The size is manageable. The code is easy to understand. The code is modular. All this works in Linux's favor. I sincerely believe the Linux kernel in and of itself (e.g., not the thousands of binaries that come with Linux distros) to be more secure than NT's.

    To make a long story short, the change in security is contingent on the situation. That being said, I do think Open source affords significantly improved security against highly systematic attacks by dedicated attackers. The more reviewed Open source code (e.g., Linux) is at the very least a moving target. The odds of a single blackhat exploiting a bug en masse before the thousands of white hats can close it is quite slim. In other words, although Linux and NT may appear equally secure today, this is just against your average black hat. Your average black hat really isn't all that intelligent or motivated. So Microsoft can afford much less secure code due to their closed source nature and still maintain apparently equal security. Some organization, let's say the KGB, could throw enough brains at Microsoft binaries to create a program to silently scan and backdoor every Microsoft server on the internet, using this as a gateway to more sensitive internal company data (e.g., many companies have worthless firewalls)... A few admins may notice something foul, but many don't fully understand the security model. Fewer yet have the skill to reverse engineer such an attack even partway. And virtually no one other than the big bad evil blackhat group would have the resources or the time to create a working exploit. Consequently, Microsoft can never be made to look sufficiently foolish to force them to do anything. Operations cannot shut down just because of suspected bugs. It would continue getting exploited.

    The bottom line is that apparent security (e.g., the number of known NT exploits vs. known Linux exploits) and ultimate security (in scenarios such as the one described above) are different....

    ...gotta run. bye
  • So I have to trust it to someone else. And who do I trust?

    With open source, you can trust whoever you choose to. It can be a "random" consultant, or it can be one who you have known for 20 years, or it can be an employee, or it can be your brother, or it can be yourself, or it can be the maintainer (of either individual packages, or of a whole distribution).

    That last choice -- trusting the maintainer -- is exactly equivalent to trusting the vendor of a closed source product. (How is downloading a fix from Red Hat any different than downloading a fix from Microsoft?) So, in the worst case, open source is identical to closed source. The cool thing is, you don't have to take the worst case if you don't want to. But for convenience's sake, you can, and many people do.

    this is the "hobbyist mentality"

    No, it's the "freedom mentality", where the whole point is that you get to assign your trust to whoever you think best merits it.

    You see having choices as being a burden, rather than empowering.


    ---
  • by Sloppy ( 14984 ) on Monday April 17, 2000 @06:37AM (#1128419) Homepage Journal

    whereas it's actually "review by any boob that can manage to do ftp:"

    Yes, any boob can review it. So what?

    Furthermore, the more people there are checking the source, the more there are out there introducing bugs, for the hell of it.

    Ah, I see your problem. You are assuming that since "any boob" can review it, that means "any boob's" modifications will be fed back into the main source tree. Fortunately, you are mistaken. The decision to accept a change is made by the project's maintainer. It's not, as you seem to imagine, some kind of free-for-all where anyone can change anything.

    It would seem far more useful to me if source was "open" in the sense that you could get a copy of the code on production of a reasonable description of what you planned to do with it, what improvement you wanted to make, plus at least two references from people who were prepared to establish your bona fides. Kind of like the criteria for getting a reader's ticket to a law library.

    How could that possibly be, as you say, "more useful"? Why would you ever want to place any restrictions -- at all -- on who is allowed to know and understand things? If all of those things are needed just to read about law (not just to legislate and judge it), then it sounds like the software profession is far more advanced and reasonable than the law profession. You should be copying us, not vice-versa.

    Restrictions should only be placed upon those who create and modify software (and that's the case, with both open source and closed source). With open source or custom software, the end user has the final decision on how strict or lax those restrictions are. With shrink-wrapped closed source software, the end user has no control of those restrictions, or even knowledge of what those restrictions are or if they even exist.


    ---
  • by robinjo ( 15698 ) on Monday April 17, 2000 @03:35AM (#1128421)

    Peer review of software is not as crucial as in physics. Everyone wants to check the theory of cold fusion or the proof of Fermat's last theorem because they are used as building blocks for further research. When Fermat's last theorem was proved, it opened up more possibilities. Software can also be used as building blocks, but new source code almost never causes a revolution. People also prefer to reinvent the wheel instead of reusing source code.

    However, open source software can be audited and that's what some white hats do. The Linux Security Audit Project is actively searching for holes by reading the source code. This includes lots of gifted programmers who can smell a hole from far away.

    I'm sure that commercial software is also audited inside the companies, but closed software gives you a false sense of security. It's easier for you to write sloppier code and leave temporary holes because "nobody knows about them anyway." But if you know that bad guys are going to read your source code and exploit it, you really concentrate more on security. And even if bad guys are not going to read your code, you don't want to be laughed at for leaving holes the size of the Titanic.

    But while peer review is not as common as many think, that doesn't mean it's useless or nonexistent. How many of you have actually checked Andrew Wiles's proof of Fermat's last theorem? Only a handful of extremely intelligent and gifted mathematicians can do that and have done it, and that's enough for the whole "community" to trust the theorem. So even if only a handful of programmers check important source code, it's enough if those guys are as gifted as Linus Torvalds or Alan Cox.

  • What he says is somewhat OK, but none of it is an argument which relates open source security to closed source security; they are only arguments as to why open source may not be as secure as it might be.

    He's right in that it's more complex and nuanced than the simple "everyone will review this" model; unfortunately I think he's emphasised the wrong and less important nuances.

    There are two reasons why TIS wouldn't get feedback on their code: no one is reading it, or no one found anything bad to say about it. My own impression of TIS code was that it is pretty high quality, and there wasn't anything bad to say about it. I don't know of any serious holes reported in Gauntlet.

    People do read source, and the point of open source is that you at least have the option. Most people don't sit down and read slabs of code before installing it, they wait until they have a reason to do so. For security software, one of those reasons is that someone has found a breach.

    Sure, open source means the bad guys can pick through it and find a hole, but they can do that with standard reverse engineering tools on binary-only releases too. But as soon as someone sees a break-in involving an open-source program, you can both audit it and *fix* it. And a piece of software which has shown one flaw is sure to get a lot more attention. If there were holes in Gauntlet, TIS would be deluged in email after the first compromise.

    There's another nuance going on here which Levy completely ignores. Because a developer knows their code is going to be visible for all to see, they're much more likely to keep their code clean (and if they don't, someone else will). A programmer in a commercial environment writing code which will only ever be released as a binary is more likely to hack something now to hit the ship date, with a solemn promise to fix it in the next release (and hoping nobody will find the flaw before then).

    Code is complex, and reviewing it is also hard; security makes it even more difficult because security isn't a functional property (see http://www.counterpane.com/whycrypto.html [counterpane.com]). In commercial environments, code reviews are often skipped in order to keep to schedule, with the rationale of "well, the tests pass". You simply can't test the security of a system - it has to be designed in, and it has to be there from the start.

    In other words, he's right that a program like ssh is large and complex, and may well have subtle flaws. But there's absolutely no reason to think that a similarly sized closed-source program won't have similar problems; my feeling is that it is more likely, because the closed source commercial model precludes the possibility of code review at several levels ("we don't have time", "no one else will see the code anyway"). The open source model encourages code review by

    • publishing the code
    • often not having commercial time pressures to release it
    • putting the reputations of the developers on the line, and
    • making it easier to respond to any attacks in a timely and decentralized manner.

    J
  • from the article: "Whatever potential Open Source has to make it easy for the good guys to proactively find security vulnerabilities, also goes to the bad guys."

    yes, that's true
    more, it should frighten people
    but fear is a good thing

    Some time ago, experiments in England demonstrated that when people feel in danger while driving, they are safer. The experiment was to remove the central line of a dramatically lethal road. The result was near-zero crashes.
    Other studies showed that since cars became safer (airbags and other safety equipment), accidents have increased in number and even in severity: drivers feel safer and drive faster, and when they crash, it hits much harder.

    Mankind is such that we prefer to forget danger.
    When we believe someone else will review an open source code, we do not review it.
    However, it stands the same for closed source: "we" believe corporations do a "good" job, and we buy the software with eyes wide closed.
    In either case we forget the danger.

    So what ?

    Open source has in this matter one advantage compared to closed source: the bad guys can look for security holes in the source, and we know it. That must keep us awake, fasten our belts and drive slower.

    We must use open source software and realise that it does not protect us more than closed source. However, it will keep us alert and enable us to have a better understanding of why we choose such security parameters.

    when will we need a security licence test ?
    ;-)
  • The C compiler backdoor mentioned affected only people who installed the binary, instead of recompiling the compiler themselves

    That still doesn't stop this from happening: I have an "un-broken" compiler, I download the new source containing the back door, I of course don't read the whole freaking source, I just compile it. Someday down the road I compile a new login with my now-broken compiler, and I am busted.

    I agree that my ability to fix this is greatly enhanced in an open source environment, but my point is that it isn't just the binary: you need to trust (or audit yourself) the entire source. This is something that no one can do on their own. Fortunately, since we all have the source, we can all watch each other's backs, if you will.

    If, for example, you're using Red Hat Linux and you don't trust us for whatever reason, all it takes is "rpm --rebuild /mnt/cdrom/SRPMS/* ; rpm -U --force /usr/src/redhat/RPMS/yourarch/* /usr/src/redhat/RPMS/noarch/*".

    Once again, how do I know you haven't fucked with the source code? In fact, if there is a back door, one might just leave it in the CD sources to defeat just this work around.

    My point is, make or rpm --rebuild does not necessarily fix this. They need to be executed on reviewed source. This isn't a slam against open source. Like I said, an open source environment is the only place you have a chance. I just don't want people to get a false sense of security: "I can just rebuild everything from the source and I am safe." (A sketch of the compiler trick follows below.)
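    For readers who haven't seen Ken Thompson's "Reflections on Trusting Trust", here is a minimal sketch in C of the trick under discussion. Every name and pattern in it is illustrative (my assumption of how such a trojan might look), not Thompson's actual code:

        /* A Thompson-style compiler trojan: the "compiler"
         * recognizes two kinds of input and miscompiles both. */
        #include <stdio.h>
        #include <string.h>

        static void emit(const char *code) {
            puts(code);  /* stand-in for real code generation */
        }

        void compile(const char *source) {
            if (strstr(source, "int check_password(")) {
                /* Case 1: compiling login - splice in a master password. */
                emit("if (strcmp(pw, \"backdoor\") == 0) return 1;");
            }
            if (strstr(source, "void compile(")) {
                /* Case 2: compiling the compiler itself - regenerate
                 * both of these checks, so the trojan survives a
                 * rebuild from perfectly clean compiler source. */
                emit("/* ...re-insert this very trojan... */");
            }
            emit(source);  /* then compile the source as written */
        }

        int main(void) {
            /* Feeding it a fragment of login triggers case 1. */
            compile("int check_password(const char *pw) { /* ... */ }");
            return 0;
        }

    The point of the thread stands: the source of both login and the compiler can be spotless, and the binary still ends up dirty.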

  • Seems to me that there's a market for various sorts of warranties as a value-added service to open source. But I doubt that anyone is going to give a blanket warranty that a firewall or anything else is hacker-proof, whether it is open or closed source. In both cases the downside liability is just too great.

    The folks who built your home didn't give you a warranty that it was burglar-proof. Even my alarm company only promises intrusion detection and rapid reaction...


    A. Michael Froomkin [mailto],
    U. Miami School of Law, POB 248087
    Coral Gables, FL 33124, USA
  • is here [securityfocus.com]
    He basically points out that OSS is not perfect, but can be considered better than closed-source.
  • by FascDot Killed My Pr ( 24021 ) on Monday April 17, 2000 @02:33AM (#1128429)
    If I download, say, a GPL'd firewall, who is legally responsible when it lets through an attacker?

    My common-sense approach would be to blame the network admin unless it was a bug of such an egregious nature that it constituted negligence on the part of the company. Many products (see: Microsoft) have just these kinds of bugs in them. So a smart network admin gets something Open Source.

    Could a sufficiently crafty company claim that, since you have the source, certifying that it is secure and bug-free is up to you? And, if so, will we see companies moving to Open Source releases for liability protection?
    --
  • What is with "legally responsible" though? Are you interested in screwing money out of the blighter who cracks your machine, or in running a machine that's more secure next time round?
    While it might not be explicit, I have a feeling that the open-source approach lends itself more towards the latter - some poor geek is going to have to fix the silly box while the company (per)sues the cracker... yippee.

    While I'm here - how is this article anything but a reflection on the fact that the Linux user base has become more user than coder?

    The thing is, not all exploits are backdoors, which the article seems to neglect. Anyone can write code with *bugs* in it, and the most obvious will be ironed out by the world-sized community, all to the good. But then you've got minor bugs left, and there's no easy way to guarantee that a large lump of software is without those, some of which could combine to constitute an exploit. (Or a performance slow-down or resource hog, of course. Let's not over-focus on the security side.)

    Consider:
    "Security through obscurity is not something you should depend on, but it can be an effective deterrent if the attacker can find an easier target".
    The logic behind this is also only half-complete: if something is closed then you can still throw yourself at it AND you get "clever" folks trying to reverse-engineer it and everything. The real problem bites when you have to wait 2 days for the company to supply a fix that doesn't work, when if you have the source you can *fix it yourself*!
    ~Tim
    --
    .|` Clouds cross the black moonlight,
  • It's me, "Hi!"

    It's a hundred thousand other me's, all of whom have vested personal interest in seeing the software run properly for some reason or another, each of whom is not only willing, but eager to fix those bugs which annoy him/her.

    *You* don't have to fix shit. *You* never have to see a single line of code if you don't want to. Herein lies the beauty of the whole thing. You can simply sit on the sidelines and reap the collective good of the effort exerted on the Open Source software you use.

    Installing the latest bugfix is simply different terminology for what Micros~1 calls "upgrading," and it doesn't (usually) cost a damn thing.

    Your ignorance borders on FUD. The notion that you must edit Open Source code personally line by line is so blatantly wrong that you do us harm by spreading it.

    Anthony

  • MS's hotfixes and service packs are also free.

    Point well taken. But Micros~1's entire revenue model centers around phrases like "upgrade path." I should know this, I'm unfortunate enough to administrate 8 production NT servers.

    When something needs to be fixed on one of these servers, or (more appropriately) when one of Microsoft's "hotfixes" or "service packs" breaks something on one of these *production* servers, my employer has to expend huge amounts of time (time == money) and money to figure out what caused the problem, and then (usually) purchase new software from Microsoft to "fix" it again.

    Neither I (who administrate the servers and care) nor my employer (who owns them and doesn't) gets to see a single line of code in either case. We have no idea what the "fix" will do, and no way of finding out other than installing it.

    I prefer Open Source because it's free as in speech anyway :) I'll take the beer, though...

    Anthony

  • I need a solution that does not require an expert for daily use

    Not to rag on you, guran (I very much agree with your point), but to rag on a different meaning of that statement that I see intended daily:

    Get a fucking calculator.

    Middle management seems to live under the perpetual delusion that these incomprehensibly complex conglomerations of silicon, plastic, and electrons are somehow going to magically "work" all by themselves some day, and yet still can't seem to figure out how to work Outlook when their system is running fine.

    "I need a solution that doesn't require an expert."
    "I need a solution that doesn't require me to learn or think."
    "How do I forward this chain letter?"
    "What do you mean don't open letters that say 'Good Times'."
    "I think I just deleted The Internet."

    Anthony
  • by adimarco ( 30853 )
    I don't see why they didn't release a fixed dll

    I do :) You seen their source? Neither have I. You seen how amazingly shoddy, unreliable, and *fundamentally* unstable (unstable in that "there are still a couple of serious pointer errors in the pre-1990 code" way) their software is? What do you imagine their code looks like? How easy do you think it is for a bunch of people motivated only by salary to fix something obscure in a couple million lines of badly maintained code? How long do you think that takes?

    I see why they don't just release a fixed .dll

    Anthony
  • Marcus J. Ranum, author of the Firewall Toolkit, which is one of the pieces of TIS code under discussion, has said that there are still a few glaring bugs in there that no one has pointed out yet.
  • Okay, so they argue that Open Source Software isn't perfectly secure. Very little is. I see no argument that Closed Source Software is any more secure. And this implies...?
  • The fact of the matter is, CTOs at most companies don't care about issues of ideology. They have no idea about who Richard Strawlman is,

    I had just written a long reply to this, complete with links to informative sites, examples of case studies, Apache market share figures and witty prose about the brain capacity of CTOs implied by this when I looked again at the spelling of Richard Stallman's name.

    Strawlman? You almost got me :)

  • by Paul Johnson ( 33553 ) on Monday April 17, 2000 @03:38AM (#1128444) Homepage
    The article does not mention FreeBSD [freebsd.org] or TrustedBSD [trustedbsd.org]. Both of these make a big thing of security, including reviews of software. TrustedBSD is even going for Orange Book B1 certification.

    Paul.

  • by / ( 33804 )
    MS's hotfixes and service packs are also free.

    Sometimes. Other times, they're given names like "Windows 98" and are charged for.
  • Obviously, if you have a computer, and you don't wish to develop the expertise to administer it yourself, you have to find someone you trust. My lawyer thought he had someone to trust with his Microsoft box. Guess what? That person installed a back door. Who found it? The Linux expert at my lawyer's ISP.
    -russ
  • Right. You're saying that open source can fail. Sure. But in practice it doesn't. There's never been a trojan in Apache, or the Linux kernel, or Perl, or qmail, or even the open source versions of sendmail. But remember the closed-source versions, which still had Eric's DEBUG botch?
    -russ
  • Nobody accepts that liability now. Read the fine print. Everyone disclaims any liability for anything their program does. The only thing you get with your Microsoft program is a guarantee that you have an exact and reliable copy of their copy of the bits.
    -russ
  • Remember how the closed-source sendmails had Eric's DEBUG botch for years after the open source version had diked it out?
    -russ
  • Depends on what you call a trojan. Eric Allman inserted the DEBUG command into sendmail because he needed shell access on one of the machines running sendmail at Berkeley, and the sysadmins wouldn't give him an account. I'd call that a trojan (it looks like one thing, but has a secret inner surprise you'd refuse if you knew it was there).
    -russ
  • The context of this Slashdot posting is security faults introduced on purpose. Of course programmers make errors, but I'm not addressing that topic.
    -russ
  • There are basically two kinds of software users out there: the ones who really care about security and the ones who don't. The ones who do can be recognized by trappings like: no software gets installed without the software security staff reading the results of an independent security audit based on access to the source; no users are allowed to install anything, only security admins can authorize that; it's someone's full-time job to stay on top of security notices and updates/patches; etc.

    The type that does not care will often protest that they do. However, they then turn around and say things like "we'll just use XYZ vendor's product because they're a large stable company that, it would seem, would have no reason to wish us ill."

    We can ignore the second type of company (what the article in question is really talking about) because they will be insecure no matter WHAT they are running. Running MacOS just makes them slightly more secure because that's not what the script kiddies are looking for. If they're going to install Red Hat 6.0 and walk away from it for 2 years, assuming that it's "safe", they'll be wrong. There are numerous security problems with any OS release, and if you don't stay on top of them (and/or find them up front) you're screwed.

    In the case of the company that really cares, OSS is much more attractive. For starters, you can re-compile every single binary from source. Second, you have the ability to fix the bugs that your internal security audits find while you wait for an official patch. For mission-critical software, this can be immeasurably important.

    I remember dealing with a market-data vendor when I was working for a medium-sized financial firm. They wanted to pump their proprietary protocol over the Internet and through our firewall, so I said "no problem, just a) give us your source so that we can perform a security review, or b) provide us with the results of an independent security review in writing." They literally laughed in my face. Needless to say, their data feed did not happen.

    Not everyone cares this much, but when you do, Open Source solves a lot of problems and makes many things much easier.
  • Presumably you're referring to Microsoft Security Bulletin MS00-025, which, though it had already been outed by a couple of groups, was not released until 6:00 pm Pacific on Friday. Or are you referring to the three security bulletins that Microsoft held under wraps until after 6 pm Eastern on Friday, Mar. 31? Microsoft does not have a history of timely reporting and fixing of security problems. Moreover, they have a tendency to hold onto advisories until after business hours on a Friday.

    The author was right about one thing. Open Source is not a panacea. However, where we are vigilant, it does work to improve security. Where there are enough qualified developers (like in the Linux kernel) looking for security related issues, Open Source provides us with an excellent opportunity to track down and fix bugs. Moreover, it means that we do not have to rely on a single source for fixes.

    Nobody appears to be mentioning one very important advantage of Open Source when it comes to security. Even if Open Source software were more insecure than Closed Source software, it provides an advantage that Closed Source will never be able to provide: developers can learn from other developers' mistakes. Developers can learn how to recognize and avoid common security problems in code by looking at advisories and at the code before and after the fix. A new developer certainly cannot look at the IIS code and see how the latest buffer overflow bug caused problems, or how Microsoft fixed it. Do I learn _anything_ by applying Post-SP6 Hotfix xxx? No. Do I learn anything scanning through a patch in source form for Apache? You bet.

    The author does present an important point. Many Linux users now do not build from source even when it's available. Many don't even have the source to most of their tools/applications, even though it's available. Remember, if you don't exercise your freedom, you don't get the full advantage of it. Build from source. Read the source. Learn from other people's mistakes wrt security, so you don't make the same mistakes yourself.
  • Sorry, but I don't really get your point here. So you don't trust the guy who installs OSS software on your computer, but you do trust the guy who installs MS software on your computer? Or do you think that because the OSS guy has the source of the program, he (m/f) will put in backdoors so he can get back at you? Don't you think the MS guy has several ways to misconfigure the system such that he'll be able to get back in without you ever knowing it, and that it'll be even easier for him to do? It looks pretty obvious you're on the law side of IT, not on the technical side! *ducks and runs*

    Thimo


    --
  • I have contributed to open source projects, and I have had other people contribute to my projects. And I can tell you this. Both of them have made me a better programmer.

    The project that I contributed to was a graphical email client. I had been toying with writing my own, when I found one that looked like it had good potential. Unfortunately it would not work with my mail server. I checked out the SMTP code in it and it was completely hacked together--not really following the RFC. I sent the author a complete rewrite of that section, and if he used it he will have learned a lot. (The experience also made me realize that I am not about to ever use that mail client since the rest of it looked equally hacked together and the author was reluctant to consider advice.)

    In another case, I published one of my own programs as open source. In less than a day, I had a reply from someone telling me that I had used an "older" style of signal handling, and he
    sent me a rewrite of some of it. Naturally I was following textbook examples from somewhere and it was out of date. I now have a modern example of good signal-handling to work from when I write new programs.

    You may agree or disagree that OS projects are better, but I can tell you this: YOUR projects will be better if you open the source.
  • Yeah, this article says nothing we don't already know.

    • "Just because the source is available, doesn't mean anyone is reading it."

    Yeah, no kidding. But when the source is closed, I guarantee no one is reading it. Just because the author and many of the developers he knows are not over-conscientious and don't read the source doesn't mean that there aren't many of us out there who do. I personally review the code of almost every piece of software I use regularly, to the best of my ability. Yeah, I may not be "qualified" to "judge" something like Sendmail, but at least I can have the peace of mind that its developers are not trying to pull a fast one on me, as Microsoft did. If I don't feel "qualified" to judge some code, I reread it until I am. Maybe that's just me. No wait, that's not just me -- that's a lot of people out there, and that's why open source works.

    • Open Source makes it easy for the bad guys to find vulnerabilities.

    And leaving your car parked in a parking lot makes it easy for car thieves to find. What is that supposed to mean? The issue is not, and never was, the "bad guys" finding vulnerabilities. Last thing I heard, the bad guys find vulnerabilities in closed source stuff all the time. The issues are prevention and the ability to fix bugs as they are noticed. I can fix the bugs myself if I find them, I can apply patches myself, I don't have to wait for a new version or a binary patch to replace the compromised DLL's or shared library.

    darren


    Cthulhu for President! [cthulhu.org]
  • by thogard ( 43403 ) on Monday April 17, 2000 @04:05AM (#1128459) Homepage
    This happened with wu-ftpd several years ago. A trojaned version was released as a "new" version, and it was even distributed from the wu-archive server itself.
  • This is a well-made point.

    -Who- makes the big -painful- decisions with OSource code? Does anyone have the strength and power to say "This section of code is lousy, let's accept the 'hit' of re-writing it"?

    This must be an area where OSource can score over closed models, but to be honest I have never heard of it happening in the OSource community (there must be some examples? I have a vague memory of Mr Carmack saying he wanted to rewrite some Linux network code). Is it a case of doing the rewrite yourself and then hoping that your changes are accepted by the wider community? Does that encourage you to start?

    OTOH, I have seen it twice in a closed context (admittedly in a 'quality over quantity' embedded-controller market). Legacy code was looked at and a decision made that "we spend man-years dealing with all the problems from this, it is a bottomless pit of bugs, and we pay a fortune to the one geek who understands it; if he ever leaves, we're stuffed!" Suddenly a corporation will invest in fixing it up-front because they can see a payback in the mid-term (obviously we're not talking M$oft here...).

    EZ
    -'Press Ctrl + Alt + Delete to log on..'
  • The problem is more subtle than a casual reading of Thompson's classic paper suggests. He explained how to create a trojan horse in the compiler with nothing showing in the source code, but the compiler is not the only tool you need to worry about. Every step between source code file and program image loaded and running is a potential place where a trojan horse could be inserted.

    The linker could do some subtle patching of the object files as it links.

    A shared library loader would be a neat place to splice in some extra behaviour; more fun than just subverting the basic program loading system. (A sketch of this follows at the end of this comment.)

    It would be fun to subvert the virtual memory system to spot where certain code is loaded, and add some interesting side effects.

    The truly paranoid will wonder if the microcode in the processor has anything strange in it as they insert the hand-assembled binary code into the memory as the first step of bootstrapping their system into a state they can trust. (They will, of course, have built the tool that is inserting the code, and be worrying about any non-trivial components it contains.)

    Any tools - diff, debuggers, etc. - that you use to inspect the system will, of course, hide the exploit code and show the 'clean' version, and the necessary features will propagate by the same mechanism as everything else.
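    To make the shared-library point above concrete, here is a minimal sketch, assuming a Linux/glibc system, of interposing on a program through the dynamic loader without touching its source or its binary. The choice of strcmp and the file name are illustrative only; a hostile version would of course hide itself far better:

        /* interpose.c - wrap strcmp() via LD_PRELOAD. */
        #define _GNU_SOURCE
        #include <dlfcn.h>

        int strcmp(const char *a, const char *b) {
            /* Look up the real strcmp further down the link chain. */
            static int (*real_strcmp)(const char *, const char *);
            if (!real_strcmp)
                real_strcmp = (int (*)(const char *, const char *))
                              dlsym(RTLD_NEXT, "strcmp");

            /* A trojan would special-case a password check here;
             * this sketch just passes everything through. */
            return real_strcmp(a, b);
        }

    Build with "gcc -shared -fPIC -o interpose.so interpose.c -ldl", then run any dynamically linked program as "LD_PRELOAD=./interpose.so program": every strcmp() call it makes now passes through code its author never saw.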

  • The gentleman who wrote this article is brilliant -- there is no doubt in my mind, and he definitely raises some interesting issues.

    When I first installed Linux, I was... not good at programming. Words like "compile", "./configure", and "make" were as foreign to me as "Je parle français comme une vache espagnole." ("I speak French like a Spanish cow.") Once, I tried looking at the source code for BitchX. ONCE. The code itself was spread across a gazillion (I forget the exact number) files.

    Now I DO know how to code. I'm taking my super-happy C++ coding classes in high school, so I know my way around a compiler and the like. So, after reading this article, I thought "hey, let's see if I can understand this NOW!" Guess what? It's still spaghetti code. I still can't understand a stick of it, other than the PRINTFs and SCANFs. That's it. And I got a 98% in the class.

    So I submit that it is not that nobody is reading the source for programs. Rather, TONS of people are reading it, they just don't know what they are looking at.

    ,-----.----...---..--..-....-
    ' CitizenC
    ' WebMaster, PlanetQ3F [q3f.net]
    `-----.----...---..--..-....-
  • I am reminded of the American Red Cross' philosophy toward swimming: "Every person a swimmer, every swimmer a lifesaver" or words to that effect. They wish to promote safety in the water by having everyone properly trained to a) swim and b) rescue swimmers in trouble. <MODE="oldfart">In the (perhaps never existed) golden age of computing, everyone knew enough about code to write things and understand at least some of other people's code.</MODE> Some serendipitous results of the OSS movement may be to increase the pool of potential peer reviewers who are qualified to critique code, improve code readability and documentation, (save the whales, halt global warming, and put a chicken in every pot and shoes on all the world's children). Okay, I know. But I think it's possible at least that things will improve in this regard if not to the utopian "Every computer user a coder, and every coder a debugger" state.
  • First of all, I am not a Linux zealot (though I use it daily), and I'm not a programmer. So, while I am not qualified to say anything, I hope I am seeing it objectively.

    Everything in that article seems to be true, but I don't think it tells the whole story.

    I remember reading something that Theo de Raadt said about the inherent security of OpenBSD. He said that while you can find and exploit security bugs in any OS, a bug only takes about an hour of work to fix. With OpenBSD you get the patch as quickly as possible, but with commercial software you have to wait and wait for an official patch to be released.

    Moving on, I don't understand how that Thompson compiler (which inserts malicious code into the login program, and into itself when recompiled) is a serious problem. I'm running Red Hat 6.0, and my version of gcc came straight off of the install CD. If, for whatever reason, I needed a new one, I'd see if I could get another precompiled version from Red Hat or the FSF. THEY aren't going to screw me over, and if they tried they WOULD be caught very quickly.

    It is true that having the source available allows crackers to discover potential targets, but this vulnerability doesn't seem to add up to much when compared to the advantages (security-related and otherwise) of having an open-source system. When crackers learn something new, it seems to spread very quickly. So quickly, I think, that it balances everything out in the end. In fact, it might actually tip the balance the other way, since you get a bug fix much more quickly with open-source systems.

    These are just my opinions, and I'm really not qualified to say much, but I thought I'd share them in case other people want to comment on them and correct any misunderstandings that I have.

    take care,

    Steve


    ========
    Stephen C. VanDahm
  • Why has this been moderated up so high? This person is stating a blatant falsehood, as the bug s/he is referring to had a workaround the same day. Microsoft releases hot-fixes all the time, if you pay attention to lists like ntbugtraq. Moderators, stop encouraging flame wars.
  • Yes, I've heard that statement abused a coupla times too, but as I said (and you understood) "I" was computer savvy, meaning "I" have a clue, but don't check bugtraq every day, or know every damned setting by heart.

    I've helped people who are incredibly clueless when it comes to computers in general. They know how to use the tools of their daily work (at that, they may be excellent), but they lack the deeper understanding that helps you and me when something breaks or otherwise changes in their environment.

    They don't mind having an "expert" setting up their system. They *do* mind having to call support because their system crashes.

    They want to learn and think about their *real* job and leave the backstage computer business to those who know it better.

    My car doesn't magically "just work" either. However, I don't need to know every detail to drive it. If I'm completely clueless, repairs will be expensive. If I spent all my time under the hood making it run perfectly, I would have to find another job (or another girlfriend).

    The same goes for computers. For most people there is a balance, where they know enough to avoid "deleting the internet" but spend most of their time doing "real" work.

  • Oh I wasn't speaking of Linux in particular. Linux has a maintainer and an army of experts. So does Windows and a whole bunch of free or closed software.

    Unmaintained OSS and (even worse) unsupported closed source SW is a terrible risk.
    Poorly maintained/supported code is only too common, and that goes for both the free and closed variety.

    Was Streetlawyer trolling? a troll savant? Don't know. My opinions are (as usual) my own.

  • by guran ( 98325 )
    I usually recognize trolls. (C'mon, I may only have been posting on /. a coupla months, but I'm not a net newbie by any standard)

    The original post in this thread could be a slashdot troll (no, not a "slashdot troll", those deal with a certain lady and hot grits; a *real* troll on slashdot), since it expressed a non-/.-kosher opinion. I didn't care, since I've heard the same argument so many times from very sane people outside slashdot.

    When you can state such mainstream opinions for trolling purposes, it says more about the forum than anything else, I'm afraid.

    So if it was a troll it was a troll savant. (a term I hereby claim instantly copylefted)

  • by guran ( 98325 ) on Monday April 17, 2000 @03:58AM (#1128492)
    MS's hotfixes and service packs are also free.

    Unless they are sold as upgrades like Windows98/2000 :-)

    Sorry couldn't resist. Actually I think Streetlawyer has a point.
    "Open source" does not help you if you can't (or dont have time to) understand the source.

    Most of the time, "Open source" is not worth a shit without a maintainer. (Nor is closed source SW, BTW.)

    I must know where to get the *latest* version, I must know where to find *reliable* developers. I must know how to handle "Fix it *now*, cost is not important" situations.

    I need a solution that does not require an expert for daily use

    And "I" in this case is anyone who is computer savvy, but makes a living with running computers, not by making them run.

    Some OSS fulfills those needs; some commercial SW does too. Neither open nor closed source rubbish does.

  • The C compiler backdoor mentioned affected only people who installed the binary,
    instead of recompiling the compiler themselves - binaries that have been tampered with
    can exist both in open source and closed source software. With open source, however, you can make sure you're actually using what the source generates.
    If, for example, you're using Red Hat Linux and you don't trust us for whatever reason, all it takes is
    "rpm --rebuild /mnt/cdrom/SRPMS/* ; rpm -U --force /usr/src/redhat/RPMS/yourarch/* /usr/src/redhat/RPMS/noarch/*". You can't do something like this with closed-source software, so this is actually another argument for open source.

    While it is right that a lot of people just use open source applications rather than actually reading the code, the fact
    that everyone can read it is still a big advantage - if someone breaks into my system, at least I can immediately figure out how and FIX IT instead of waiting for Microsoft or whoever to issue an updated binary.
    Alternatively, if I don't have the knowledge to fix it, I can just hire someone to do it (probably cheaper than taking the server offline until Microsoft releases a fix).

    Also, many people working on the same source has the definite
    advantage that everyone can use other people's fixes - if someone finds a bug in, say, sendmail, and a Linux person finds a quick fix, chances are BSD users can just use the same fix immediately (and vice versa).
    (Try applying an OS/2 fixpack to Windows NT as a counter-example in the proprietary world ;) ).
  • Actually the example you've cited shows that you SHOULD be using Linux (or *BSD or whatever).
    You won't find any "Microsoft engineers are weenies!" backdoors in open source code, at least not a couple of days
    after it has been released.
    At Red Hat (the same is probably true of most other distributors), we do check
    source for backdoors before putting it in the distribution. (Of course we can't guarantee we find all bugs that can lead to security problems though).
    Also, experience shows open-source is usually FASTER at providing fixes, and you have the advantage of
    "if your distributor doesn't provide a fix, just get it from another".
  • It would seem far more useful to me if source was "open" in the sense
    that you could get a copy of the code on production of a reasonable
    description of what you planned to do with it, what improvement you
    wanted to make, plus at least two references from people who were
    prepared to establish your bona fides. Kind of like the criteria for
    getting a reader's ticket to a law library.


    For what purpose? I think you're misunderstanding the concept of open source.
    It's not that everybody can just put in changes that will affect anyone but himself. No maintainer would permit a patch introducing a backdoor into the base release.
    Even if a maintainer did it, someone else would notice, bugtraq, slashdot and others would inform the public, and that maintainer wouldn't keep the package for long (open source licenses generally permit forks), and a fixed version would be made available immediately.

    Requiring people to describe what they want to do is an unnecessary obstacle for those who just want to read the code to learn, or to see whether or not something can be optimized. Not every reader will (or can, or should) become a major contributor.
    Requiring references is even worse - how would anyone get started?
    If open source required references, I'd probably be cleaning toilets or selling shoes instead of working for Red Hat - when I got started, I didn't know anyone else doing open source stuff, so by your terms, I wouldn't have seen a single line of code, much less started extending and fixing things...
  • :) I must admit I can't imagine what their code looks like - I've never worked in a closed-source environment or any other environment motivated only by salaries.
    However, this particular thing should be VERY easy to track down even in totally messed-up code.
    We all know the password is stored backwards in the binary, so it all comes down to searching for the string
    "!seineew era sreenigne epacsteN" in the binary, then looking at where it's accessed and
    removing that code (a sketch of such a search follows at the end of this comment).

    Anyone @microsoft.com: Here's an offer. Give me the source code (to all of Windoze and this dll) and I'll fix this backdoor in less than 5 minutes. Only condition: I want to be able and allowed to
    submit the code to wine [winehq.com] for whatever use they see fit. ;)
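    For the curious, here is a minimal sketch in C of that search. The file name dvwssr.dll is my assumption from the public reports of this backdoor; the scanning code itself is illustrative, not anything Microsoft or Red Hat ships:

        /* findweenie.c - scan a binary for the reversed "weenies"
         * string. Binaries contain NUL bytes, so we compare raw
         * bytes instead of using strstr(). */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char **argv) {
            const char *needle = "!seineew era sreenigne epacsteN";
            const char *path = argc > 1 ? argv[1] : "dvwssr.dll";
            size_t nlen = strlen(needle);

            FILE *f = fopen(path, "rb");
            if (!f) { perror(path); return 1; }

            fseek(f, 0, SEEK_END);          /* slurp the whole file */
            long size = ftell(f);
            rewind(f);
            unsigned char *data = malloc(size);
            if (!data || fread(data, 1, size, f) != (size_t)size) return 1;
            fclose(f);

            for (long i = 0; i + (long)nlen <= size; i++)
                if (memcmp(data + i, needle, nlen) == 0)
                    printf("found at offset %ld\n", i);

            free(data);
            return 0;
        }

    Finding the string is the five-minute part; as noted above, the real work is looking at the code that reads it.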
  • by bero-rh ( 98815 ) <bero AT redhat DOT com> on Monday April 17, 2000 @04:12AM (#1128498) Homepage
    With the same day's workaround being "delete the file and the functionality it comes with".
    That would be much like "yes, we're aware of the latest sendmail root-shell exploit. The fix is to rpm -e sendmail and lose your mail".
    Fixing a security leak caused by bugs can take a while (needs to be tracked down etc.) - fixing an INTENTIONAL BACKDOOR, especially if you know what to look for (such as the weenies string) is a matter of a few minutes, and I don't see why they didn't release a fixed dll.
    Now I DO know how to code. I'm taking my super-happy C++ coding classes in high school, so I know my way around a compiler and the like. So, after reading this article, I thought "hey, let's see if I can understand this NOW!" Guess what? It's still spaghetti code. I still can't understand a stick of it, other than the PRINTFs and SCANFs. That's it. And I got a 98% in the class.

    Sorry, but you *don't* know how to code, no matter what your teacher says. If all you can identify is PRINTFs and SCANFs, I can only imagine the great programs you create. If you don't know how to program, source code isn't for you. Think of it as a feature. :) Go get some books.

    --MSM
  • by MSM ( 100939 ) on Monday April 17, 2000 @03:24AM (#1128501)
    Sure, the source code is available. But is anyone reading it?
    Maybe they are, maybe they aren't. But if you don't build it, they won't come. Today, how many programmers work with open source? It's kinda like saying, in the early years of the car, that it's better to have a horse, since with a car you didn't have roads to go everywhere. True, but that doesn't mean the concept of a car is wrong. We may be at a point where there is more OSS than there are programmers to review it, but as more and more companies adopt OSS and put their programmers to reviewing the software they will use, this problem will be solved.

    Even if people are reviewing the code, that doesn't mean they're qualified to do so.
    True, and how is this different from closed source? But then you're comparing the knowledge of the security review team (if there is one) at the company to the knowledge of the entire world. Where do you think you'll find more people prone to finding errors?

    It is easy to hide vulnerabilities in complex, little understood and undocumented source code.
    Yep, right again. But remember Netscape. How many years did they spend with a codebase that was hard to work with and extend? Slow and insecure? Why did this change when they opened the source? Because new people, excited to contribute, needed something simpler. Have you ever worked in a big software company, with 1st-layer and 2nd-layer managers? It's "do this, fix this and make it work". Building a house on sand.
    People joke that Microsoft doesn't want to open Windows because people would laugh at its code. The truth is, OSS is different: you don't have such tight control as in a company. Some people say not to mix code and politics, but we must, to understand OSS's potential. Understand how different things are in a company and on a mailing list. What would have happened if Netscape had decided not to use Gecko? Think about that. I was reading about a table bug in Netscape 4 that had first been reported against Netscape 1!

    There is no strong guarantee that source code and binaries of an application have any real relationship.
    Bullshit. Trusted sources... The example given here was an extreme one (it is, for me, the greatest hack I know of). If you don't trust the binary, just compile the source. Truth is, we need better tools for software deployment. OK, if a bug is discovered, a patch is issued in days. And then what? How many of the users will apply the patch? Most of them will wait till the next version. We need something down at the OS level that could automatically update software and libraries from a trusted source. But then again, this is a problem with most programs and OSes.

    Open Source makes it easy for the bad guys to find vulnerabilities.
    Bullshit. Most vulnerabilities are related to input, where the software communicates with the world: when it reads a file, accepts a data packet, etc. That's true of any software. You just focus on the 1% of the code that deals with external data: validation to avoid buffer overflows, invalid commands or characters, and so on. This holds for open and closed software alike (see the sketch after this comment).

    But make no mistake, simply being open source is no guarantee of security.
    Did anyone say that you just have to open the source to be safe? You mostly touched on points that apply to both systems, but are more easily addressable with OSS. It doesn't mean that all the open software out there is inherently better than a similar closed program, but it does use a better method of development and a better debugging process.

    --MSM
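    Since the point above about input handling is where most real holes live, here is a minimal sketch in C (the function names are mine, purely illustrative) of the classic bug and its bounded fix:

        #include <stdio.h>
        #include <string.h>

        /* The classic hole: attacker-controlled input copied into
         * a fixed-size buffer with no length check. */
        void vulnerable(const char *input) {
            char buf[64];
            strcpy(buf, input);   /* BUG: overflows for input >= 64 bytes */
            printf("%s\n", buf);
        }

        /* The fix: bound the copy and always terminate. */
        void fixed(const char *input) {
            char buf[64];
            strncpy(buf, input, sizeof(buf) - 1);
            buf[sizeof(buf) - 1] = '\0';
            printf("%s\n", buf);
        }

        int main(void) {
            fixed("hello, world");
            return 0;
        }

    Whether the surrounding program is open or closed source, this is the 1% of the code an auditor (or an attacker) reads first.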
  • by retep ( 108840 ) on Monday April 17, 2000 @02:35AM (#1128508)

    One interesting, and useful, thing to do would be to intentionally put a harmless piece of "malicious" code (say, deleting a specific file that has almost no chance of existing, like /usr/adfasdf.txt) in one of the large open-source software packages such as Apache or Samba. Depending on how active the development is, the code may be found in a day, or a year, or even more. No one knows, as this has never been done before. But someone should try, if only to test whether the usual security-through-peer-review works at all.

    After that, a similar test (preferably a whole bunch, to be statistically valid) on some closed source software would be in order. Any MS programmers here?

  • Ok, I'll bite. A troll is someone who lurks around and makes inflammatory posts to generate a reaction, or to cause the moderators to waste points on them. ROTFLMAO means "rolling on the floor, laughing my ass off". In regards to your original post: you can get support for Linux, and as far as blaming MS goes, read the EULA. They could make your computer blow up, destroying a city in the process and taking out millions of lives, and not be responsible for a thing.
    Molog

    So Linus, what are we doing tonight?

  • Yeah, I went a little overboard with that comment, but my point was that MS takes absolutely no responsibility for anything that goes wrong with their programs. There is no warranty, and you don't own the software. In this regard, a company going with MS just so that they can have someone to blame is very foolish. Even if something does go wrong, MS isn't at fault, so tough. I must remember not to exaggerate so much. My apologies.
    Molog

    So Linus, what are we doing tonight?

  • "The C compiler backdoor mentioned affected only people who installed the binary, instead of recompiling the compiler themselves"

    No! That was the point: the only way you could guarantee a clean compile from clean source was to have a clean compiler to begin with. And the ONLY way to have a clean compiler to begin with is to hand-code it in machine code. If you're using a compiler that you obtained somewhere to compile a compiler, there's no guarantee (ever!) that it's going to be clean.

    Actually, that was only half of his point. He also pointed out that the C compiler example was only one of a nearly infinite number of places one could plant a trojan horse in a computer. Once you get above the level of bare logic gates, you're forced to trust people when dealing with computers.

  • Now I DO know how to code. I'm taking my super-happy C++ coding classes in high school, so I know my way around a compiler and the like. So, after reading this article, I thought "hey, let's see if I can understand this NOW!" Guess what? It's still spaghetti code. I still can't understand a stick of it, other than the PRINTFs and SCANFs. That's it. And I got a 98% in the class.

    For any project, serious or recreational, if it's spaghetti code it will never reach its potential. Period. One person, i.e. the original author, may have a fairly comprehensive understanding of what the code does, but if nobody else can come close to understanding its functions without committing major time to digging through the mess, it will never get the examination and development that having many people work on the code brings. Of my own personal projects, the early ones (before I discovered that copious comments not only made life easier but sped up development, because writing them helped me clarify my own ideas) are useless now, either as starting points for new projects or just for code reuse. My point is that for a project to achieve its intended goal, especially if security is the main focus, the source code has to be clear, well documented, and as fully modularized as possible. Failing that (random global variables floating all over the place, sections of code treated as a black box because they are near-impossible to decipher) will leave a project falling short.

    Cheers,

    Toby Haynes

  • Nah, he's just saying that having a 98% in a programming course is not equal to the ability to read code. I fully have to agree with that. Being a programmer does not help you in these matters. Maybe being a gypsy or something would. I don't know.
  • Okay, so they argue that Open Source Software isn't perfectly secure. Very little is. I see no argument that Closed Source Software is any more secure. And this implies...?

    That both sides of the argument can carry on with the argument as before, whilst SecurityFocus gets the hits and banner-ad sales and makes lots of money. Yay.
  • The article quite astutely points out something that's been bugging me for a while -- that open source software likes to pretend it's analogous to the concept of "peer review", whereas it's actually "review by any boob that can manage to do ftp:", which has no equivalent in the scientific community. Either back-door bugs are obvious as hell, in which case they could be picked up by review by ten people, max. Or they're not all that obvious after all.

    Furthermore, the more people there are checking the source, the more there are out there introducing bugs, for the hell of it. Even if "Apache" is bug free, how can you tell that the disk you got passed by your third cousin who got it from his ex-boss who got it from "some guy" is?

    It would seem far more useful to me if source was "open" in the sense that you could get a copy of the code on production of a reasonable description of what you planned to do with it, what improvement you wanted to make, plus at least two references from people who were prepared to establish your bona fides. Kind of like the criteria for getting a reader's ticket to a law library.
  • I'm running my own little IT law practice here, and I sure as hell can't wade into all those lines of code and start deleting things. So I have to trust it to someone else. And who do I trust? Some random "consultant", who'll probably put in a new back door of his own, plus a trojan to blackmail me with later? Or Microsoft, who, for all their faults, know that they have to produce fixes for these things on a timely basis.

    See, this is the "hobbyist mentality" which presumably explains why nobody I know uses Linux, et al. It's all fine for one guy running his personal homepage, and for a homepage, it may be the best you can get. But for someone who actually wants to do something with a computer, they don't have the option of spending valuable billable hours grovelling through code because someone wanted to call their competitors "weenies".
