
The Myth of Open Source Security Revisited v2.0

Dare Obasanjo contributed this followup to an article entitled The Myth of Open Source Security Revisited that appeared on the website kuro5hin. He writes: "The original article tackled the common misconception amongst users of Open Source Software (OSS) that OSS is a panacea when it comes to creating secure software. The article presented anecdotal evidence taken from an article written by John Viega, the original author of GNU Mailman, to illustrate its point. This article follows up the anecdotal evidence presented in the original paper by providing an analysis of similar software applications, their development methodology and the frequency of the discovery of security vulnerabilities." Read on below for his detailed analysis, especially relevant given the current security initiatives in both the open source and closed source worlds.

The Myth of Open Source Security Revisited v2.0

The purpose of this article is to expose the fallacy of the belief in the "inherent security" of Open Source software and instead point to a truer means of ensuring that the security of a piece of software is high.

Apples, Oranges, Penguins and Daemons

When performing experiments to confirm a hypothesis on the effect of a particular variable on an event or observable occurrence, it is common practice to utilize control groups. In an attempt to establish cause and effect in such experiments, one tries to hold all variables that may affect the outcome constant except for the variable the experiment is interested in. Comparisons of the security of software created by Open Source processes and software produced in a proprietary manner have typically involved several variables besides development methodology.

A number of articles have been written that compare the security of Open Source development to proprietary development by comparing security vulnerabilities in Microsoft products to those in Open Source products. Noted Open Source pundit Eric Raymond wrote an article on NewsForge where he compares Microsoft Windows and IIS to Linux, BSD and Apache. In the article, Eric Raymond states that Open Source development implies that "security holes will be infrequent, the compromises they cause will be relatively minor, and fixes will be rapidly developed and deployed." However, upon investigation it is disputable that Linux distributions have less frequent or more minor security vulnerabilities when compared to recent versions of Windows. In fact, the belief in the inherent security of Open Source software over proprietary software seems to be the product of a single comparison: Apache versus Microsoft IIS.

There are a number of variables involved when one compares the security of software such as Microsoft Windows operating systems to Open Source UNIX-like operating systems, including the disparity in their market share, the requirements and dispensations of their user base, and the differences in system design. To better compare the impact of source code licensing on the security of the software, it is wise to reduce the number of variables that will skew the conclusion. To this effect it is best to compare software with similar system design and user base rather than software applications that are significantly distinct. The following section analyzes the frequency of the discovery of security vulnerabilities in UNIX-like operating systems including HP-UX, FreeBSD, Red Hat Linux, OpenBSD, Solaris, Mandrake Linux, AIX and Debian GNU/Linux.

Security Vulnerability Face-Off

Below is a listing of UNIX and UNIX-like operating systems with the number of security vulnerabilities that were discovered in them in 2001 according to the Security Focus Vulnerability Archive.

AIX: 10 vulnerabilities [6 remote, 3 local, 1 both]
Debian GNU/Linux: 13 vulnerabilities [1 remote, 12 local] + 1 Linux kernel vulnerability [1 local]
FreeBSD: 24 vulnerabilities [12 remote, 9 local, 3 both]
HP-UX: 25 vulnerabilities [12 remote, 12 local, 1 both]
Mandrake Linux: 17 vulnerabilities [5 remote, 12 local] + 12 Linux kernel vulnerabilities [5 remote, 7 local]
OpenBSD: 13 vulnerabilities [7 remote, 5 local, 1 both]
Red Hat Linux: 28 vulnerabilities [5 remote, 22 local, 1 unknown] + 12 Linux kernel vulnerabilities [6 remote, 6 local]
Solaris: 38 vulnerabilities [14 remote, 22 local, 2 both]
From the above listing one can infer that source licensing is not a primary factor in determining how prone to security flaws a software application will be: proprietary and Open Source UNIX-family operating systems are represented at both the high and low ends of the frequency distribution.

Factors that have been known to influence the security and quality of a software application are practices such as code auditing (peer review), security-minded architecture design, strict software development practices that restrict certain dangerous programming constructs (e.g. the str* or scanf* families of functions in C), and validation & verification of the design and implementation of the software. Reducing the focus on deadlines and shipping only when the system is in a satisfactory state is also important.

Both the Debian and OpenBSD projects exhibit many of the aforementioned characteristics, which help explain why they are the Open Source UNIX operating systems with the best security record. Debian's track record is particularly impressive when one realizes that the Debian Potato consists of over 55 million lines of code (compared to Red Hat's 30 million lines of code).

The Road To Secure Software

Exploitable security vulnerabilities in a software application are typically evidence of bugs in the design or implementation of the application. Thus the process of writing secure software is an extension of the process behind writing robust, high quality software. Over the years a number of methodologies have been developed to tackle the problem of producing high quality software in a repeatable manner within time and budgetary constraints. The most successful methodologies have typically involved using the following software quality assurance, validation and verification techniques: formal methods, code audits, design reviews, extensive testing and codified best practices.
  1. Formal Methods: One can use formal proofs based on mathematical methods and rigor to verify the correctness of software algorithms. Tools for specifying software using formal techniques exist, such as VDM and Z. Z (pronounced 'zed') is a formal specification notation based on set theory and first order predicate logic. VDM stands for "The Vienna Development Method", which consists of a specification language called VDM-SL, rules for data and operation refinement which allow one to establish links between abstract requirements specifications and detailed design specifications down to the level of code, and a proof theory in which rigorous arguments can be conducted about the properties of specified systems and the correctness of design decisions. The previous descriptions were taken from the Z FAQ and the VDM FAQ respectively. A comparison of both specification languages is available in the paper, Understanding the differences between VDM and Z by I.J. Hayes et al.
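
    Neither a VDM nor a Z specification is reproduced here, but as a loose illustration of the underlying idea (the function and its contract below are hypothetical, and runtime assertions are a far weaker guarantee than the machine-checked proofs formal methods aim for), the preconditions and postconditions a specification would state can at least be mirrored as checks in C:

        #include <assert.h>

        /* Hypothetical contract for an integer square root routine:
           precondition:  n >= 0
           postcondition: r*r <= n and (r+1)*(r+1) > n
           A formal specification would allow these properties to be proven
           for all n; the asserts below only check the inputs actually run. */
        static int isqrt(int n)
        {
            int r = 0;
            assert(n >= 0);                               /* precondition */
            while ((r + 1) * (r + 1) <= n)
                r++;
            assert(r * r <= n && (r + 1) * (r + 1) > n);  /* postcondition */
            return r;
        }

        int main(void)
        {
            assert(isqrt(0) == 0);
            assert(isqrt(15) == 3);
            assert(isqrt(16) == 4);
            return 0;
        }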

  2. Code Audits: Reviews of source code by developers other than the author of the code are good ways to catch errors that may have been overlooked by the original developer. Source code audits can vary from informal reviews with little structure to formal code inspections or walkthroughs. Informal reviews typically involve the developer sending the reviewers source code or descriptions of the software for feedback on any bugs or design issues. A walkthrough involves the detailed examination of the source code of the software in question by one or more reviewers. An inspection is a formal process where a detailed examination of the source code is directed by reviewers who act in certain roles. A code inspection is directed by a "moderator", the source code is read by a "reader" and issues are documented by a "scribe".

  3. Testing: The purpose of testing is to find failures. Unfortunately, no known software testing method can discover all possible failures that may occur in a faulty application, and metrics to establish such details have not been forthcoming. Thus a correlation between the quality of a software application and the amount of testing it has endured is practically non-existent.

    There are various categories of tests including unit, component, system, integration, regression, black-box, and white-box tests. There is some overlap in the aforementioned testing categories.

    Unit testing involves testing small pieces of functionality of the application such as methods, functions or subroutines. In unit testing it is usual for other components that the software unit interacts with to be replaced with stubs or dummy methods. Component tests are similar to unit tests with the exception that dummy and stub methods are replaced with the actual working versions. Integration testing involves testing related components that communicate with each other, while system tests involve testing the entire system after it has been built. System testing is necessary even if extensive unit or component testing has occurred because it is possible for separate subroutines to work individually but fail when invoked sequentially due to side effects or some error in programmer logic. Regression testing involves the process of ensuring that modifications to a software module, component or system have not introduced errors into the software. A lack of sufficient regression testing is one of the reasons why certain software patches break components that worked prior to installation of the patch.

    Black-box testing, also called functional testing or specification testing, tests the behavior of the component or system without requiring knowledge of the internal structure of the software. Black-box testing is typically used to test that software meets its functional requirements. White-box testing, also called structural or clear-box testing, involves tests that utilize knowledge of the internal structure of the software. White-box testing is useful in ensuring that certain statements in the program are exercised and errors discovered. Code coverage tools aid in discovering what percentage of a system is being exercised by the tests.

    More information on testing can be found in the comp.software.testing FAQ. A minimal sketch of a unit test with a stub follows below.
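
    As a minimal sketch of unit testing with a stub (the parser, the stub and the test inputs are hypothetical, not drawn from any project discussed here), the unit under test normally obtains its input from some external reader; the test replaces that dependency with a canned stub and asserts on the results. Compiling with gcc's -fprofile-arcs -ftest-coverage options and running gcov afterwards would additionally show which statements the tests exercised:

        #include <assert.h>
        #include <stdio.h>
        #include <string.h>

        /* Unit under test: parses a "KEY=VALUE" line into key and value. */
        static int parse_setting(const char *line, char *key, size_t klen,
                                 char *val, size_t vlen)
        {
            const char *eq = strchr(line, '=');
            if (eq == NULL || eq == line)
                return -1;                  /* malformed input */
            size_t kn = (size_t)(eq - line);
            if (kn >= klen || strlen(eq + 1) >= vlen)
                return -1;                  /* too long: reject rather than overflow */
            memcpy(key, line, kn);
            key[kn] = '\0';
            strcpy(val, eq + 1);            /* length checked above */
            return 0;
        }

        /* Stub standing in for the real network/config reader the unit depends on. */
        static const char *stub_next_line(void)
        {
            return "timeout=30";
        }

        int main(void)
        {
            char key[16], val[16];

            /* unit tests: known-good and known-bad inputs */
            assert(parse_setting(stub_next_line(), key, sizeof key, val, sizeof val) == 0);
            assert(strcmp(key, "timeout") == 0 && strcmp(val, "30") == 0);
            assert(parse_setting("no separator", key, sizeof key, val, sizeof val) == -1);
            assert(parse_setting("=empty key", key, sizeof key, val, sizeof val) == -1);

            puts("all tests passed");
            return 0;
        }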

  4. Design Reviews: The architecture of a software application can be reviewed in a formal process called a design review. In design reviews the developers, domain experts and users verify that the design of the system meets the requirements and that it contains no significant flaws of omission or commission before implementation occurs.

  5. Codified Best Practices: Some programming languages have libraries or language features that are prone to abuse and are thus prohibited in certain disciplined software projects. Functions like strcpy, gets, and scanf in C are examples of library functions that are poorly designed and allow malicious individuals to use buffer overflows or format string attacks to exploit the security vulnerabilities exposed by using these functions. A number of platforms explicitly disallow gets, especially since alternatives exist. Programming guidelines, such as those written by Peter Galvin in a Unix Insider article on designing secure software, are used by development teams to reduce the likelihood of security vulnerabilities in software applications (a small illustrative sketch follows this list).
Projects such as the OpenBSD project that utilize most of the aforementioned techniques in developing software typically have a low incidence of security vulnerabilities.
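
To make the point about dangerous library functions concrete, below is a small illustrative sketch (the function and buffer size are hypothetical): strcpy copies with no bound and printf treats untrusted data as a format string, inviting buffer overflows and format string attacks, whereas bounding the copy with snprintf (or strlcpy where available) and passing user data only as an argument avoids both problems.

    #include <stdio.h>

    void greet(const char *name_from_user)
    {
        char buf[32];

        /* Risky versions, shown commented out: strcpy copies until the
           terminating NUL with no bound, so a long name overflows buf, and
           printf(buf) interprets attacker-supplied %s/%x/%n sequences as
           format directives (a format string attack).
               strcpy(buf, name_from_user);
               printf(buf);
           Safer: bound the copy to the destination size and never pass
           user data as the format string itself. */
        snprintf(buf, sizeof buf, "%s", name_from_user);
        printf("Hello, %s\n", buf);
    }

    int main(void)
    {
        greet("world");
        greet("a name long enough that an unbounded strcpy would have overflowed buf");
        return 0;
    }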

Issues Preventing Development of Secure Open Source Software

One of the assumptions that is typically made about Open Source software is that the availability of source code translates to "peer review" of the software application. However, the anecdotal experience of a number of Open Source developers, including John Viega, belies this assumption.

The term "peer review" implies an extensive review of the source code of an application by competent parties. Many Open Source projects do not get peer reviewed for a number of reasons including
  • complexity of code in addition to a lack of documentation makes it difficult for casual users to understand the code enough to give a proper review

  • developers making improvements to the application typically focus only on the parts of the application that will affect the feature to be added instead of the whole system.

  • developers' ignorance of security concerns.

  • complacency in the belief that since the source is available, it is being reviewed by others.

Also, the lack of interest in unglamorous tasks like documentation and testing amongst Open Source contributors adversely affects the quality of the software. However, all of these issues can be, and are, solved in projects with a disciplined software development process, clearly defined roles for the contributors and a semi-structured leadership hierarchy.

Benefits of Open Source to Security-Conscious Users

Despite the fact that source licensing and source code availability are not indicators of the security of a software application, there is still a significant benefit of Open Source to some users concerned about security. Open Source allows experts to audit their software options before making a choice and also in some cases to make improvements without waiting for fixes from the vendor or source code maintainer.

One should note that there are constraints on the feasibility of users auditing the software based on the complexity and size of the code base. For instance, it is unlikely that a user who wants to make a choice of using Linux as a web server for a personal homepage will scrutinize the TCP/IP stack code.

References
  1. Frankl, Phylis et al. Choosing a Testing Method to Deliver Reliability. Proceedings of the 19th International Conference on Software Engineering, pp. 68--78, ACM Press, May 1997. < http://citeseer.nj.nec.com/frankl97choosing.html >

  2. Hamlet, Dick. Software Quality, Software Process, and Software Testing. 1994. < http://citeseer.nj.nec.com/hamlet94software.html >

  3. Hayes, I.J., C.B. Jones and J.E. Nicholls. Understanding the differences between VDM and Z. Technical Report UMCS-93-8-1, University of Manchester, Computer Science Dept., 1993. < http://citeseer.nj.nec.com/hayes93understanding.html >

  4. Miller, Todd C. and Theo De Raadt. strlcpy and strlcat - consistent, safe, string copy and concatenation. Proceedings of the 1999 USENIX Annual Technical Conference, FREENIX Track, June 1999. < http://www.usenix.org/events/usenix99/full_papers/millert/millert_html/ >

  5. Viega, John. The Myth of Open Source Security. Earthweb.com. < http://www.earthweb.com/article/0,,10455_626641,00.html >

  6. Gonzalez-Barahona, Jesus M. et al. Counting Potatoes: The Size of Debian 2.2. < http://people.debian.org/~jgb/debian-counting/counting-potatoes/ >

  7. Wheeler, David A. More Than A Gigabuck: Estimating GNU/Linux's Size. < http://www.counterpane.com/crypto-gram-0003.html >



Acknowledgements

The following people helped in proofreading this article and/or offering suggestions about content: Jon Beckham, Graham Keith Coleman, Chris Bradfield, and David Dagon.

© 2002 Dare Obasanjo

Comments

  • Just because you put your source behind lock and key doesn't mean it's any more secure. I hope more companies realize that competent programming and fast security patching is more effective than cloak and dagger secrecy.
    • Ignorance is always the first line of defense, whether in war or creating operating systems and applications.

      Probing the defenses: looking for where the code doesn't anticipate a certain condition, isn't very efficient, but has been pretty much the way vulnerabilities are found.

      Intelligence: lack of source availability is depriving yourself of 1,000 eyes to find the vulnerability, thus it remains. If their closed code is stolen, without the benefit of freelance auditors, the problem compounds: exploits are found and can be executed when and where they can do the most damage. Open source is inviting those 1,000 eyes of freelance auditors to report a vulnerability. There still remains the chance some unethical person will spot it and not report it, choosing to exploit it later, but they play roulette in that someone still may find the hole and close it.

      • by eam ( 192101 ) on Friday February 15, 2002 @03:55PM (#3014823)

        I'm always bothered by the articles which conclude that one OS is less secure because more vulnerabilities are discovered in it than in other OS's. I think it would be better to also consider how the vulnerabilities are discovered.

        If we know that RedHat Linux had 54 vulnerabilities last year & Win2K had 42, do we really know anything about the relative security of the two OS's? I would be curious to see the vulnerabilities broken down by how they were discovered. Were they discovered prior to being exploited or as a result of an exploit? It would also be important to know how soon patches were available.

        • I would be curious to see the vulnerabilities broken down by how they were discovered. Were they discovered prior to being exploited or as a result of an exploit?

          While it would be interesting to know, does it actually matter? Once a vulnerability has been discovered, and until it's fixed, it is a liability waiting to be exploited. The more independent liabilities there are, the less secure your software is.

  • To be fair... (Score:4, Insightful)

    by Brian Knotts ( 855 ) <.moc.sseccaedacsac. .ta. .sttonkb.> on Friday February 15, 2002 @03:05PM (#3014583)
    While he undoubtedly has a point, Red Hat, and other Linux distributors, bundle a lot more software than do proprietary UNIX vendors.

    That should be kept in mind when trying to draw conclusions from raw numbers of vulnerabilities.

    • The author (in-part) acknowledged that by mentioning that Debian has more lines of code than Red Hat and fewer exploits.
      • Re:To be fair... (Score:2, Interesting)

        by Brian Knotts ( 855 )
        It also really makes you wonder about those Red Hat custom kernels. I reckon I'll stick with Debian.

        I suppose you could make an argument, though, that Debian sort of cheats. These stats are no doubt from debian stable. But, what percentage of debian users are actually running stable, I wonder?

    • Re:To be fair... (Score:4, Insightful)

      by Surak ( 18578 ) <surakNO@SPAMmailblocks.com> on Friday February 15, 2002 @03:48PM (#3014799) Homepage Journal
      While he undoubtedly has a point, Red Hat, and other Linux distributors, bundle a lot more software than do proprietary UNIX vendors.

      There's actually even more to it than just that. :) In Open Source software, because the source code is available, more vulnerabilities are found to begin with.

      This is the point Microsoft and others try to make when they say that their closed source model is more secure...but it's only marginally better at best, of course, because whether the vulnerabilities are found or not, they're STILL THERE.

      So in comparing raw numbers, no, it's not a fair contest. There may be 20 exploits found in Debian, and only 12 found in AIX (or whatever the numbers are), but the question is: how many more are in AIX that have yet to be discovered vs. how many are in Debian that haven't been discovered yet? I'll bet Debian's number is closer to 0 than AIX's.

      Another thing to bear in mind: statistics can be manipulated to say anything you want. :)

      • Nothing wrong with vulnerabilities being found. If they aren't found, that occasional "crash" is dismissed as an annoyance. Vulnerabilities are bugs that are picked apart for all they are worth including the means to pry open a back door.

        Finding these weaknesses, or "sploits", is a win in the long term for people who enjoy a reliable, bulletproof system. It has to be hacked and torn apart to the point of perfection before one can be proud of reliability.
      • Yeah Right! (Score:2, Insightful)

        by FatSean ( 18753 )
        You assume that exploits are found by combing the source code. The majority of discovered exploits are found by actually exploiting/being exploited by them!

  • by Anonymous Coward
    I just realized they chose the shortest month of the calendar year to do this. Do you suppose they have a 31 day plan and things will fall short and it will be rushed, thus leaving holes? ;)
    • The Road Ahead: M$ Addresses Security:

      Formal Methods: "Here, code this"

      Code Audits: "Did it compile?"

      Testing: "Put it in the final distribution."

      Design Reviews: "Are these coffee stains on the spec sheets?"

      Codified Best Practices: "Profits are up and we've extended our monopoly, here's your salary bonus."

      I wonder what they've planned for the other 27 days...

  • by linuxrunner ( 225041 ) on Friday February 15, 2002 @03:15PM (#3014629)
    ...of open source: the fact that when a vulnerability is found, it is then patched / fixed / hacked / whatever / and then distributed.

    I mean, let's be honest, how many of you programmed some code and it worked perfectly the first time? Maybe sometimes, but even in the small programs we forget a " or a ; here and there....
    This is putting the works of many, many people together to compile a "program" that is larger than anything I could even dream of accomplishing. I.e., there are bound to be flaws we didn't see in the multi-millions of lines of code.

    Back on topic.... A security hole is found, we can patch it because we can see the code, we can make it BETTER.
    Microsoft....
    well, you just wait and hope they eventually make a patch, and half the time the patches suck and are re-exploited in a matter of days.

    I'm not claiming that opensource is non-vulnerable or exploit-free.... So this article seems somewhat pointless. Anyone who writes code, knows that an exploit free program of this size is just dreaming. What should really be looked at is the amount of time taken to fix and patch a problem.

    Just my .02 of rambling

    • by mshomphe ( 106567 ) on Friday February 15, 2002 @03:33PM (#3014719) Homepage Journal
      I mean, let's be honest, how many of you programmed some code and it worked perfectly the first time?

      In fact, I've learned that if code works perfectly the first time, something went terribly, horribly wrong....
    • ...when a vulnerability is found, it is then patched / fixed / hacked / whatever / and then distributed.

      That is, of course, if the vulnerable Open Source software is still maintained. Too many projects fall apart due to insufficient momentum, too small a user-base, or changes in the lives of project leaders.

      Sometimes, like in the case of the GIMP, an abandoned project will get picked up, brushed off, extended and enhanced. But this is usually not the case.

      • But if you yourself are using it, you can still fix it yourself.

        If I find a problem in (insert 10 year old closed source app whose company was bought out and then the next one bought out then went out of business here), how am I any better off?

        If anything, what happens in the case of abandonment is another feather in the cap of OSS. It's not the endgame.
    • You have a very good point, but if the closed source vendor is serious about fixing security bugs you can report the bug to them and they can fix it. The creators also should have a better idea of how the code works, so you have less of a chance of the fix causing other bugs. This becomes more of an issue of the proper resources getting applied to the task, rather than how software is licensed. It also comes down to who do you trust. Do you trust an open source hacker to fix a security exploit without accidentally or intentionally adding another exploit? With open source you can always check the code yourself, but most people don't have the skill or time to make sure there isn't an exploit hidden there. If you can't make sure yourself, you have to trust someone to do it right. Who do you trust?
      • Who do you trust?

        Big win for open source. With open source (or better, Free software) you get to choose an auditor you personally trust. Even in the worst case, the auditor is working for you, not them. With proprietary, you may choose the vendor or the vendor.

        • With proprietary, you may choose the vendor or the vendor.

          Exactly. With proprietary software you choose which vendor you want to trust. You can also make agreements where you can audit the software itself under NDA, but that's usually not very helpful in reality.

          With open source, you can choose an auditor that you trust, but how many people or companies have someone they trust that has the technical ability to audit SOMEONE ELSE'S software for security? It usually comes back to having to trust the people writing the software.

          I'm not sure larger companies are willing to trust hobbyists to take the time and use the rigorous procedures in developing their software that are required to create good secure software. I'm not saying there isn't excellently written, relatively secure, Open Source software. If you're selecting software on which the stability and good name of your billion dollar company relies, why would you choose Open Source over closed source? The company using the software isn't expecting their internal people to be finding security bugs. They'll likely audit the software, but they want a solution, not problems. They don't expect to find anything in their audits, and they definitely don't expect to be fixing it themselves. When it comes down to it they want someone they feel they can trust. That means a reasonable, proven track record of few security problems, and a quick response on fixing issues. They also look for someone who's going to be around for a long time to support the software. That usually means a large, stable company. Large, stable companies require stable income. They usually get that income from selling both software and services. Selling services alone is a much more risky business plan because it's hard to support someone else's software, especially when response time is critical, and if the source is open, anyone can take it and compete for the support contract. That competition drives down support costs, but often reduces the quality of the support in the process. Open source does have advantages, but I don't think those advantages are going to be the major determining factors in companies choosing security software.

          It all comes back to who do you trust. Most people don't trust people who don't have much to lose. Companies are also going to have trouble trusting people who have strong anti-business attitudes. There are some vocal people in open source with those views. That adds to the doubts. A lot of small doubts quickly add up to going with the option from the big company.
      • The creators also should have a better idea of how the code works, so you have less of a chance of the fix causing other bugs.

        chucklesnortSPEW

        Damn. There goes another perfectly good keyboard.

        Are you a programmer? How many different shops have you worked at where the guy who wrote the code is still guaranteed to be around a year later? Two years? Or, I suppose that detailed, thorough and 100% accurate software documentation has always been available everywhere you've worked?

        No, this is not an area where closed source software has any advantage. In a system of any size there are zillions of dusty corners that no one still working at the company understands, are not documented adequately (or appear to be, but the documentation is *wrong*) and generally require someone to dig in and figure them out again each time they're touched.

        IME, there is a much better chance with OS projects that the original author or else someone else who understands the relevant bit is still hanging around the mailing list (or someone who knows the relevant person and knows of their work), if for no other reason than developers who contribute significantly to a project tend to stay "in touch". This is even more true of the original designers.

        There are no guarantees in either environment, and the current reduced mobility of software engineers may have improved the situation somewhat at closed source shops recently, but my experience is that OS is better in this regard.

        • Right.
          Dusty corners where the current "old-timers" know to stay far away from. The dusty corners may be where the problems are, but anyone trying to mess with them quickly learns they have opened a can of worms. Very hard to repackage worms.
    • and half the time the patches suck and are re-exploited in a matter of days.

      Forget half the time, please name one time.

      • Oooh oooh, can I PLEEEEEAAASSSEEEE jump in here with a microsoft bash? I mean this is /so/ easy! ^_^

        (seriously, check TheRegister archives for plenty of "oops it was patched, patch is easily bypassed" type of security warnings for Microsoft.)

        In all fairness though, I do see a lot of release notes across all genres of software that read "patch 1.23b, fixed problem in patch 1.23 designed to fix problem introduced by patch 1.22"

    • Just because a piece of code is open source does not mean that it has been audited by competent security auditors. Of course the same is true of proprietary software. I think that open source software can be more secure, however, for the following reasons (though these could apply to proprietary software as well).

      1: Access to source: Just because it has not been audited does not mean that it cannot be audited. Software can be considered more secure if the code is at least available to be audited. For this reason, I congratulate Microsoft on the shared source initiative.

      2: Independent audit: When in doubt for a mission-critical scenario, hire someone to audit the code or part of it. This is possible with proprietary software under some licenses and with permission from the vendor, but it is always true of open source.

      3: Compartmentalized design-- application runs under minimal permissions. This is a problem with proprietary (IIS) and OS (Sendmail) software alike.

      Open Source is no guarantee for security but it helps.
  • hummmm not quite (Score:4, Insightful)

    by TCaptain ( 115352 ) <slashdot.20.tcap ... m ['spa' in gap]> on Friday February 15, 2002 @03:15PM (#3014632)

    The author wants to "expose the fallacy of the belief in the "inherent security" of Open Source software" (many eyes make safer code) and gives the REAL way to make software more secure of which these 3 caught my eye:

    Code audits

    Testing

    Design reviews

    Correct me if I'm wrong, but isn't that exactly the "many eyes make safer code" theory? That open source, having the code available, can have more people do code audits, testing and design reviews than a company with closed source can.

    In the real world, he's right, those extra eyes aren't necessarily qualified...but still, on AVERAGE wouldn't there be MORE qualified eyes to do this stuff along with the unqualified?

    • by jasonu ( 101628 ) on Friday February 15, 2002 @03:48PM (#3014797)
      It may be true that "many eyes make safer code", but only if the many eyes actually review the code, which is what the author said. In OSS, there are more qualified eyes with the ability to audit, test and review the code, but that doesn't mean that any of those eyes are actually doing it.

      This means that OSS has the *potential* to be more secure, but as shown by the article, that potential is not fully realized.
      • This means that OSS has the *potential* to be more secure, but as shown by the article, that potential is not fully realized.

        Yeah, yeah, yeah, that's what he says because his Mailman program from Red Hat 6. had holes. What a rotten extrapolation! Let's not FUD ourselves into a stupid panic.

        The point of free software is to develop a community of users and gain mutual benefit by sharing code and development effort. Mailman? Pardon my ignorance of a bell on the "professional" Red Hat distro I never owned. How widely deployed was this package? If it was never that widely used, of course the bugs would remain. Thousands of downloads does not translate into thousands of users really and we might assume that a large portion of those users have upgraded their machines. It is much more correct to extrapolate free software security from Apache, sendmail, exim, openssh, xfree86, the list is very long indeed, where there is a real community of users. If a bell or whistle is broke, it can be replaced.

        Red Hat, by coming closer to the bad old days of software distribution, has left their user base open to some of the bad old day problems. Difficulty in getting updates makes problems. Who would put 6.2 on a machine? No one expects a CD to heal itself yet I'm tempted. I've heard good things about up2date but it's not as easy or dependable as apt-get update and upgrade by a long shot. That cozy old 6.2 environment... nah. Shifting focus from service and equipment sales to software vending is a bad bad idea. Should we let some small problems Red Hat has had run us back into the arms of MicroShit and the like? Nope.

        The good news is that low usage also translates into low vulnerability for the rest of us. It's not like Mailman is "the standard" forced on everyone, and I doubt any of its bugs are as bad as say Outlook's. Think about it. Did we suffer Code-RedMan a few months ago? No we did not. Nor did we suffer network instability over BIND problems and or any other Linux/BSD holes.

        The free distribution methods are showing themselves to be best. While I know it's possible for my poor little Debian boxes to be cracked, I also know that the chances are far less than any windoze compooter. The most common applications ARE well reviewed, the rest are so variable as to make life hard for the would be Linux cracker. What potential is ever fully realized in nature?

    • you are correct that there are MORE qualified eyes to look at open source software, but there is no guarantee that they will.

      rather than picking apart minor quibbles in the article, and thus trying to disclaim it entirely, we should look at the big picture and learn from it.

      just my 2 cents.

    • Who performs systematic code audits on Free Software? Who is competent to do so for, say, an operating system kernel, and does not spend his or her time tracking down and fixing actual bugs?

      Code audits are rather boring, and the usual incentives surrounding work on Free Software do not seem to apply. In addition, a lot of code is poorly commented and incomprehensible, works only by accident (but is correct nevertheless, in the mathematical sense of the word), and so on.
  • 1. Formal Methods

    Yeah, good luck getting Johnny Hack-job that has an associates degree in C programming to use formal methods. I can imagine the interview process now:

    Interviewer: Are you familiar with formal methods?
    H4X0R: Huh?
    Interviewer: Are you comfortable with set theory and first order predicate logic?
    H4X0R: I know how to code. I learned how to program in C. I am an 3l33t h4x0r.
    Interviewer: *sigh*
  • by Tonetheman ( 173530 ) on Friday February 15, 2002 @03:15PM (#3014634)
    It is true that you can have just as many security problems in Open Source, but those security problems are not hidden behind clever marketing. The code is out in the open and can be fixed by anyone who takes the time to do so.

    Microsoft could even have a better track record than some Open Source systems and I think that I would still choose the open source way.

    If you rely on obscurity to be your software security then you will lose every time. In the end it is the freedom to choose and to change in the open that makes a system secure.

    Tony
  • No, Dare (Score:3, Interesting)

    by ekrout ( 139379 ) on Friday February 15, 2002 @03:17PM (#3014642) Journal
    No, Dare, you're wrong my friend.

    In fact the belief in the inherent security of Open Source software over proprietary software seems to be the product of a single comparison, Apache versus Microsoft IIS.

    No, I'd venture to say that although you are correct in citing IIS' tendency to destroy the Internet every few months when another virus comes out targeted at the Microsoft web server, there are most definitely other pieces of Microsoft proprietary crap that look pretty lame when compared to their open source or free software counterparts.

    Ever hear of Microsoft Outlook?

  • by bourne ( 539955 ) on Friday February 15, 2002 @03:17PM (#3014646)

    The author's point is correct - while any Open Source package may have been audited, it isn't necessarily audited well or at all.

    But flash-back to the recent announcement of the Sardonix Security Portal [sardonix.org], which aims to be a central clearinghouse for tracking audits and auditors. The goal is to have a list of 1) what's been audited, 2) who audited it, and 3) what that particular auditor's track record is on other software - were holes found after they said it was clean?

    Obviously this is a new project, and it's founded on the ashes of an earlier effort that didn't get much involvement, but it's a big step in the right direction and it's got DARPA funding. And it probably will do a much better job with Open Source software than with Closed Source.

  • by gandalf_grey ( 93942 ) on Friday February 15, 2002 @03:17PM (#3014648) Homepage
    Open source is inherently more secure, if all users follow proper practices, and if all users are programmers. But all users do not follow proper practices. Do all users check the source, install it properly, make themselves aware of security concerns and keep their systems/software up to date?

    Of course not.

    But, better that it's more securable in theory (due to the open nature of the source) than not securable at all.

  • Splint: Secure Lint (Score:2, Informative)

    by Anonymous Coward
    FYI, Dave Evans has released Splint, the successor to his popular LCLint program. This GPL program will automatically find security problems in your source code.

    Splint is a GPL'd extended-lint type code analysis program which not only checks syntax (and semantics!) but now includes checks for security vulnerabilities. Essentially you run your code through Splint and it will spit out a detailed list of problems. As with LCLint you can "decorate" your code with stylized comments to provide semantic information to the parser which allows even more thorough checking. Click here for more details and downloads of Splint [splint.org].
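
    As a rough sketch of what such decorated code looks like (a made-up function; the exact diagnostics will vary by Splint version), an annotation in a stylized comment tells the checker that a pointer parameter may be NULL, so dereferencing it without a guard should be flagged:

        #include <stdio.h>

        /* The annotation in the parameter list below tells Splint that
           name may legitimately be NULL, so the checker expects a guard
           before the pointer is used. */
        static void print_greeting(/*@null@*/ const char *name)
        {
            if (name == NULL)
                name = "world";      /* guard that satisfies the checker */
            printf("Hello, %s\n", name);
        }

        int main(void)
        {
            print_greeting(NULL);
            print_greeting("Splint");
            return 0;
        }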

  • Ya but.... (Score:4, Insightful)

    by nadie ( 536363 ) on Friday February 15, 2002 @03:18PM (#3014650) Homepage

    The term "peer review" implies an extensive review of the source code of an application by competent parties. Many Open Source projects do not get peer reviewed for a number of reasons including

    * complexity of code in addition to a lack of documentation makes it difficult for casual users to understand the code enough to give a proper review

    "Casual Users" are not peers. The term "Peer Review" means that, in this case, the code would be reviewed by other hackers (software engineers), not by the general public.

    I am not a hacker, I don't have the skills or knowledge to find security holes in software libre by reviewing the source code. All I can do is use the software, and if I come across a symptom of a problem, I can email the developer to ask what is going on, which often results in a patch within a short period of time.

    • Re:Ya but.... (Score:4, Insightful)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Friday February 15, 2002 @07:19PM (#3015779) Journal

      "Casual Users" are not peers. The term "Peer Review" means that, in this case, the code would be reviewed by other hackers (software engineers), not by the general public.

      Hackers/Software Engineers are still "casual users" in most cases. The issue isn't the presence or lack of the general technical knowledge to examine the code, but the effort and focus.

      As an example, I'm a programmer with close to 14 years of experience, and a good deal of it focused on engineering of large systems and on security work. I won't go through my CV, but by most anyone's standards I'm eminently qualified to audit code for security defects.

      However, when it comes to, for example, Mozilla, I'm unquestionably a "casual user". Why? Because in spite of my extensive experience and knowledge with software systems in *general* I have little knowledge of the inner workings of Mozilla in *particular*. And it matters. A lot. Even though Mozilla is implemented with my most familiar toolset (heavily abstracted C++), it would take me days if not weeks of focused effort to understand the software enough to be able to begin a security audit.

      I know this because a few months ago I attempted to fix a bug that had been bothering me for some time. Although I found the Mozilla code to be well-written, nicely structured and generally easy to work with, it still took me almost two full days to understand it enough to correct that one small defect. I probably spent too much time in random curiosity-driven wandering, but even factoring that out it was a *lot* of work.

      Doing a security audit is an undertaking on a completely different scale from adding a feature; it requires a fairly detailed understanding of large chunks of the code, and a thorough high-level understanding of how the modules fit together.

      This is not to say that closed source is better, because all of this analysis is, for practical purposes, impossible with closed source. However, don't confuse possession of deep technical skills with deep understanding of a particular piece of software. A hacker who isn't a casual user of most of his software is a hacker who doesn't use much software :-)

  • by kindbud ( 90044 ) on Friday February 15, 2002 @03:19PM (#3014654) Homepage
    I use vsftpd (vs stands for "very secure" and is a goal, not a declaration of its status). Included with the source are a number of explanatory files you are familiar with: README, INSTALL, Changelog, LICENSE. There is another you probably haven't seen: AUDIT. AUDIT lists each source code file and a rating of 1-5 to indicate how much scrutiny it has received from other competent parties. 1 indicates no scrutiny and 5 means many competent programmers have reviewed it. Most of the files are rated 2 or 3.

    I think this is an excellent idea, one that should be expanded upon by other developers.

    Oh, and vsftpd 1.0.1 can be obtained from this ftp site at Oxford [ox.ac.uk]. It's written on Linux but I run it on Solaris with just a tweak to a #define.
    • you should send the authors a patch or explanation so others less familiar with programming can run it on solaris too.
      • Oh, I have done so, several times. I like this software a lot better than the other ftp servers. The Makefile.sun included with vsftpd was written by me (for the Sunpro compiler, rather than gcc). The need to tweak a #define came in the latest 1.0.1 version. He hasn't released 1.0.2 so my changes haven't made it into the tarball yet.

        Chris develops on Linux, and though he's pretty good at writing portable code, he doesn't have a Solaris system to test on. The #define in question is one of those "feature defines" one often finds. Linux distros come with libpcap, Solaris doesn't. That's all it is.
  • by Tony ( 765 ) on Friday February 15, 2002 @03:19PM (#3014659) Journal
    It's dangerous to judge the security of an operating system simply on published vulnerabilities. First, discovering vulnerabilities is a non-trivial task; secondly, some operating systems receive more frequent audits, resulting in a higher number of discovered vulnerabilities; and thirdly, some operating systems are more transparent, resulting in a higher number of discovered vulnerabilities.

    Take, for example, Solaris. Solaris is the most-used Unix in the world; it is under more external scrutiny than any other Unix, and so you can expect more discovered vulnerabilities than for HPUX or AIX. This doesn't mean AIX or HPUX are intrinsically more-secure; it just means more discovered vulnerabilities on Solaris.

    (I don't claim AIX or HPUX is as insecure as Solaris; I'm just saying it's impossible to judge based on number of discovered vulnerabilities.)

    (And Solaris is pretty secure.)

    Then, the BSD and Linux variants are more transparent; anyone can look at the source code, and so possible vulnerabilities are easier to identify.

    Nice article, and excellent analysis. My quibbles don't undermine your conclusions; I just *hate* it when people simplify security to number of discovered vulnerabilities.

    Security is much more complex than that.
    • Not to mention the idiots who are adding up the number for the different distros and saying Linux has that many vulnerabilities, when in fact, many of those holes are probably redundant between the distros...

      Security is difficult at best to measure, because if you knew your system had holes, then you would know it was insecure. You might say to believe your system is secure is to live in blissful ignorance .. :-D

      Ben
    • by Anonymous Coward
      AIX, Solaris, HPUX, *all* use BSD code (which was open source then closed for a proprietary implementation), so the comparison isn't open source vs closed source so much as it is closed source + formerly open source vs open source.

      The argument in favor of open source is that "anyone can fix a bug". Can, not will. There's a big difference, and a lot of .bomb failures littering the information superhighway because they didn't realize that difference.
    • Which of course, brings up the question of who counts the unpublished vulnerabilities, that may or may not get fixed in the next "cumulative security patch".
  • What a load of crap. (Score:2, Interesting)

    by 7-Vodka ( 195504 )
    I'm sorry, but I don't believe anyone can make such sweeping statements such as 'OSS is more/less secure than closed source'.

    It just doesn't jive. Some closed source software is more secure, some OSS is more secure. It depends on the talent, hardwork and organizational skills of those involved in the individual projects.

    Even if one found a methodical way to compare the mean security level of OSS and closed source software, it would be of no use!

    What use is it telling someone Closed source software is in general more secure than OSS when they're only interested in a web server? What they need to know is how secure their potential solutions are.

    Also, knowing which method in general produces more secure code won't influence a development team. They have more important things to worry about, ie. how they intend to profit from their work.

    • I'm sorry, but I don't believe anyone can make such sweeping statements such as 'OSS is more/less secure than closed source'.

      The article didn't say anything even close to that; what it said is that the widely-held belief (particularly among the Slashdot-crowd) that Open Source Software is somehow inherently more secure than closed source software is largely a myth. Try understanding what you read before you call it a 'load of crap'...

      It looks pretty silly when you insult the article and then make a post where all your points are pretty much the same as in the article (ie. OSS and closed source software can both be secure or insecure).

      • Try understanding what you read before you call it a 'load of crap'...

        Maybe I didn't make myself clear, what I intended to say was that the idea of 'sweeping statements' like the one the article debunks is a load of crap, not the article itself.

        It looks pretty silly when you insult the article and then make a post where all your points are pretty much the same as in the article (ie. OSS and closed source software can both be secure or insecure).

        As I said, it wasn't my intention to insult the article but the idea which it calls a myth ( and it's opposite). In fact, I was insulting the notion of one side being more secure, as well as the notion that finding such an answer is useful. That's what I called a load of crap, the whole debate in the first place :)

  • I'm curious as to the relative market share of the various operating systems they listed. Wouldn't it be expected that more popular systems would have a greater percentage of their security holes found? If only ten people used some os, you'd expect very few vulnerabilities to be found...
    • I agree. For example, how often would anyone use a MacOS X Server? If you believe that the stats are an absolute benchmark of security, then it'd be one of the best... Plus I think they put zeros at times the system didn't exist or people weren't checking them for security. (Did BeOS really have no security holes 1997-1999?)

      I would have liked to see the stats calculated by how many times those computers have been actually compromised. Not to mention, how many of those vulnerabilities were potential security flaws, and not ones that are actually exploitable? That makes the more paranoid/open systems appear less secure.

      However, Win NT/2000 and Redhat scored as the worst on the "Number of OS Vulnerabilities by Year" table. Just as I expected... (Win9x is an OS for users--is it really fair to compare it with server OSs (or systems used for both) in this table?)

  • Gene Spafford gave a very interesting talk on Why Open Source software only seems more secure [linuxforum.dk] at LinuxForum 2000.

    It was a real eye opener for all of us who had read The Cathedral and the Bazaar [tuxedo.org]

    For instance this from one of the slides from the talk:

    Linux compromises dominate - nearly 4 to 1 over Windows
    Commercial Unix compromises usually rare
    Windows/Unix compromises are 2 to 1
    MacOS compromises do not occur (before OS X)

    The slides are still interesting even after two years
    • Ok so I read the pdf all the way through...

      He only cites one source that Linux has more reported flaws than windows does. I couldn't find the data online. I can't believe his source is trustworthy unless I can see how the flaws were counted, because people tend to over count because of the distributions. (His data is different from other data I have seen).

      The rest of the proofs that Linux is less secure is that there are so many executables installed by default and the documentation is in different formats. The other "proof" that Linux is insecure is the number of lines of source in the kernel.

      Forgive me if I'm not convinced.

      (Note: I'm not claiming that open source is more secure than closed source although I think it is generally fairly secure.)

  • I think what people are losing sight of here is the options you are provided with in Open Source. On a Windows platform there are relatively few companies that make server software (i.e. FTPd, HTTP), while on the Open Source platforms there are many more choices.

    How many people would run WuFTPD on a production box while there are other options around like Pure-FTPD [sf.net] or ProFTPD [proftpd.com]?

    But for Windows, for example, there are relatively few closed source HTTP servers, namely IIS, while on the open source side there is everything from Apache [apache.org] to Abyss [sf.net].

    So this brings me to another point of Open Source Software: because there are many *options* in a production environment for the choice in software, the only cost of changing to a product that is more secure is the time to install it. While in closed source, to get Microsoft's newest and most secure IIS 6+++ bundled with Windows ZP 2003, you will have to shell out a few grand. That's where security matters in the end: how much money does it cost you in a production environment. We are a bunch of capitalists at heart you know :-)
  • by maddman75 ( 193326 ) on Friday February 15, 2002 @03:31PM (#3014710) Homepage

    AIX
    [6 remote, 3 local, 1 both]
    Debian GNU/Linux
    [1 remote, 12 local] + 1 Linux kernel vulnerability[1 local]
    FreeBSD
    [12 remote, 9 local, 3 both]
    HP-UX
    [12 remote, 12 local, 1 both]
    Mandrake Linux
    [5 remote, 12 local] + 12 Linux kernel vulnerabilities[5 remote, 7 local]
    OpenBSD
    [7 remote, 5 local, 1 both]
    Red Hat Linux
    [5 remote, 22 local, 1 unknown] + 12 Linux kernel vulnerabilities[6 remote, 6 local]
    Solaris
    [14 remote, 22 local, 2 both]


    Personally, I find remote vulnerabilities to be a MUCH greater concern than local ones. Looked at this way, we can see Linux clearly coming out ahead, with the champ Debian having only one vulnerability.

    The author does make a good point about open source giving a false sense of security. Just because the source is available doesn't mean that it has been thoroughly audited. Still, the freedom to do so is there.
    • But even this comparison is not sufficient to get any meaning. What about the severity of compromise? If I have a remote compromise on a Solaris system that gives me access as an unprivileged user or is just a DoS, I'm less worried about it than a local compromise on OpenBSD that gives root, even though in the general sense I might trust OpenBSD security more than Solaris. Giving a number of vulnerabilities without a severity indicator is worthless. Even with some indicator of severity, you really can't get much meaning from the numbers. Any security expert knows that the security of the system depends more on the skill of the administrator than the OS of the system. In my opinion, articles like this are a waste of time.

      RagManX
  • "One deterrent to the mass review of certain Open Source projects is a high level of complexity in the code, which can be compounded by a lack of documentation."

    Please, closed source is no different. Just because a company produces code for money does not mean they have tons of documents and all the code is easy to read. Bad argument: just because a code base is open or closed does not automatically brand it overly complex or neat and clean.
  • This was a fairly reasonable (if unexceptional, being a rehash of a rehash) article, until the author got to his recommendations:

    However, all of these issues can be, and are, solved in projects with a disciplined software development process, clearly defined roles for the contributors and a semi-structured leadership hierarchy.

    This is almost certainly not the path to better free software. Mass movements in the free software community develop bottom-up, not top-down. If the community rises to the challenge of creating secure software, it will happen for the same reason as any other of our successes: because individual contributors see it as worthwhile.

    So if you want it to happen, don't focus on rules and leadership. Focus on ways to increase the visibility of good security work and to credit its practitioners. Make people care.

  • The Sardonix [sardonix.org] project is intended to address some of this problem. "Many eyes make bugs shallow" but only if many eyes are actually looking. Sardonix seeks to encourage source code review [sardonix.org] with an auditor rating system based on performance. Programs will also be rated, according to who has audited them. Naturally, we provide a set of resources [sardonix.org] for people to use in their auditing.

    Wanna make security better? Come do something about it. [sardonix.org]

    Crispin
    ----
    Crispin Cowan, Ph.D.
    Chief Scientist, WireX Communications, Inc. [wirex.com]
    Immunix: [immunix.org] Security Hardened Linux Distribution
    Available for purchase [wirex.com]

  • It's funny that this story immediately follows the one where Bruce Schneier says it best:

    "Publication does not ensure security, but it's an unavoidable step in the process." [counterpane.com]

  • by Jordy ( 440 ) <jordan.snocap@com> on Friday February 15, 2002 @03:34PM (#3014726) Homepage
    There is quite a big difference between security problems in commercial operating systems such as Solaris and security problems in open source operating systems such as Debian.

    The Debian team did not write most of the software that comes with the Debian distribution. Sure they make patches and try to keep things up-to-date, but the software that is in their distribution is included for completeness more than anything else.

    Sun on the other hand did write most of the software that comes with Solaris (or at least obfuscated where it came from.) They are directly responsible for security problems with the software they distribute.

    When a security problem occurs in Apache, surely it's an Apache security problem that just happens to affect everyone who has Apache installed. If they have Apache installed on Windows, one can't claim it as a Microsoft security bug and blame Microsoft for not auditing every piece of code that happens to compile for their OS.

    No one forces the end-administrator to install 99% of the software included with an open source distribution. It is up to the administrator to only install software which they are comfortable with. If the authors of Emacs don't do frequent code audits, don't install it (that's not to say they don't.)

    Now... one thing distributions can do to make the end-administrator's job a bit easier is to include statistics along with each application for things like past security vulnerabilities, time since last vulnerability, last code audit, etc. to help them make better decisions about what to install and not to install (a rough sketch of such a record follows at the end of this comment).

    Of course, going the route of only including fully audited code in a distribution just doesn't work. If people need inn, they need inn, code review or not. Granted, they might take a look at the source while they are compiling it, but the chances of them finding a massive security hole with a cursory glance are pretty slim.

    That's not to say that distribution vendors are free from blame, especially fully commercial vendors, who should at least do some form of audit and mark which packages haven't been audited as 'unsafe.' But come on now... the real blame belongs with the administrator and the developers.
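
    A minimal sketch (in Python) of the kind of per-package security record described above; the field names and the example package are invented purely for illustration:

    from dataclasses import dataclass
    from datetime import date

    # Hypothetical metadata a distribution could ship alongside each package
    # to help an administrator decide whether to install it.
    @dataclass
    class PackageSecurityRecord:
        name: str
        past_vulnerabilities: int     # advisories issued against this package
        last_vulnerability: date      # date of the most recent advisory
        last_code_audit: date         # date of the most recent known audit

        def days_since_last_vulnerability(self) -> int:
            return (date.today() - self.last_vulnerability).days

    # Example record for a made-up package.
    record = PackageSecurityRecord("exampled", 3, date(2001, 11, 2), date(2001, 6, 15))
    print(record.name, record.days_since_last_vulnerability(), "days since last advisory")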
  • Bug Severity? (Score:3, Insightful)

    by MattRog ( 527508 ) on Friday February 15, 2002 @03:37PM (#3014739)
    Why is it that people say X holes here, or Y bugs there?

    Bugs are given ratings on their priority; I assume security holes are as well.

    I looked through some of those security listings and noticed that some are for applications that are bundled with the OS (so I'm not sure that they should be counted as an OS issue) and that don't result in actually compromising the system (perhaps crashing an application, or corrupting a file, yes). Not that I'm saying that is a 'good' thing, but certainly crashing a little-used application which may not even be running on the default install isn't the same as gaining root access, nor should they be treated as such; some form of 'validation' of the numbers is needed, e.g.:
    Easily Exploited (278):
    -- Root Access: 234
    -- Crashes programs: 44
    etc.
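
    A minimal sketch (in Python) of the kind of severity breakdown suggested above; the advisory records and severity labels are made up purely for illustration:

    from collections import Counter

    # Hypothetical advisories, each tagged with how easily it is exploited and
    # what an attacker gains.  Real data would come from the vendor advisories.
    advisories = [
        {"id": "ADV-001", "ease": "easy", "impact": "root access"},
        {"id": "ADV-002", "ease": "easy", "impact": "crashes program"},
        {"id": "ADV-003", "ease": "hard", "impact": "root access"},
    ]

    easy = [a for a in advisories if a["ease"] == "easy"]
    by_impact = Counter(a["impact"] for a in easy)

    print(f"Easily exploited ({len(easy)}):")
    for impact, count in by_impact.most_common():
        print(f"-- {impact}: {count}")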
  • Does anyone find formal methods to be of any practical use?

    I studied them at uni, and found them dreadful things to use at the time. The main benefit to using them seemed to be that they took about 5 mins per line of source code - anything you do that makes you spend that amount of time looking at your code is going to help you find problems. But I might have been biased, because I wasn't an experienced programmer back then.

    Does anyone have any different opinions on this?
    • I've used them IRL for Real Work going into the Symbian OS [symbian.com] and YES they're invaluable and YES you do find a lot of bugs that would otherwise have gone into the final product and YES closed source is more often inspected (formal meaning here now!) than open source is.


      Now don't get me wrong - I'm all for open source and it would be great if the community could focus more on methods and not just code & features... but we're not there yet :)

  • I dare you to put an AIX box external to the internet! Sheesh. M$ doesn't get it... this guy doesn't get it either. The best way to improve quality is with everyone testing things out in the open. If you try to hush up things, nothing gets fixed (as is the case with most of the commercial *ix platforms) and holes abound. The reason you don't hear too much about the other *ix platforms is because not as many people are interested in hacking them. A piece of software with no reported bugs is likely a BAD piece of software... not a good one. So report your problems... report them all... or be like M$ and many others who try to hide them as best they can and fix them when convenient.
  • by jeff13 ( 255285 )
    OK, these technical issues are beyond me, but my perception of this is... in a big picture way...
    Private companies claim security features in their software. They tell their customers that with this, security is assured.

    Has any free Open Source software EVER claimed this?

    As far as I know, every Sys Admin I've ever talked to tells me that nothing is secure on the Internet... it's simply not designed to be! Never was! Hence, singling out Open Source software as insecure is meaningless - nothing is secure. What I mean is, claiming security is a lie; you're only as secure as the Admin can make it.
  • by EXTomar ( 78739 ) on Friday February 15, 2002 @03:42PM (#3014768)
    Open Source was and should never be billed as a "magic bullet" that will fix anything. This includes security.

    However, an important feature of Open Source projects over proprietary ones is that "openness" breeds honesty and trust.

    If a bunch of people say Apache is secure (pulling an OSP out of the air), it is not only because people use Apache and found the claims to be reasonable but because people have looked at its open internals and believe that its design is secure. If a bunch of people say IIS is secure (pulling a related closed product out of the air), there seems to be less credibility. Although people do use IIS, no one really knows much about the internals of IIS except Microsoft.

    Especially with MS's recent performance, are you going to trust the vendor's claims that their closed product is safe and secure? At least with the source you can hire people to do security audits on Open Source programs.

    Keep in mind that Microsoft and the Apache Project both have the same "no warranty" on their programs. If you use them and something goes bad (i.e. you get hacked), it isn't their fault. It turns out that Microsoft's scheme isn't better and it costs more (you have to buy the product, you have to buy support from Microsoft, you have to pay for them to look at your problems). So why do people continue to believe MS over Apache?

    And lastly, Open Source doesn't fix user stupidity. Apache for instance can be very easy to break if you configure it very poorly and IIS can be very secure if you take the time to tighten it.
  • OK, Microsoft's bug policy is a little shifty, but... Linux still isn't bug-free either. Basically we're talking millions of lines of code, so the number of bugs is going to be in the thousands. Now with open source the thought is that anyone can review the source and find and fix bugs. In actuality, I would say that the number of people who actually have the ability to understand the code is much, much smaller than the number that use the software. How many people here have written software? And how many times does a bug show up over the littlest mistake, one that took you 3 hours to track down? It's for this reason that bugs will always be around. Microsoft might not have the manpower, being a private company, to track down and fix all of them, but open source (Linux, BSD, etc.) doesn't have the raw numbers of able programmers to track down the bugs either. At least open source is a little bit more, well, open about it.
  • The data for Debian GNU/Linux is completely flawed. The OpenSSH CRC attack compensator bug is not listed, for example, and many remote vulnerabilities for which DSAs were issued aren't counted, either. (Nor are the bugs that other distributors fixed in 2001 but Debian did not fix until 2002.)

    In any case, if you are a Free Software zealot, you should seek better arguments than security. Otherwise your friends will come back to you and ask, "Why have you betrayed me?", when their machines get hacked even though they use Free Software which has been reviewed by thousands of capable programmers.
  • The REAL point (Score:3, Interesting)

    by GSloop ( 165220 ) <networkguru@sloo ... minus physicist> on Friday February 15, 2002 @03:49PM (#3014806) Homepage
    Software shouldn't be viewed any differently than any other tool - cars, VCRs, microwaves, etc.

    Sure, there are some additional problems, but most come from the design and implementation _approach_.

    If you've read "The Software Conspiracy" from Mark Minasi, you'll be enlightened about software design, and our expectations.

    Closed vs. Open isn't the point. Either can be just horrible, or quite wonderful. But the devil is in the details.

    What I think many miss is that BUGS = INSECURITY! Not all bugs will cause an insecure system, but some will.

    To make a more secure system, we need to make a bug-free system, or nearly so. Look at these software design and implementation methods:

    Formal Methods
    Code Audits
    Testing
    Design Reviews
    Codified Best Practices

    These are the very practices that will give good code, even bug-free code, if they are followed carefully.

    Now, as part of the whole solution, you need more than the methods themselves. You need a "push" too. Pull isn't enough by itself.

    It's my opinion that there are a couple of factors that could make this happen.

    User demand. We haven't seen much of this, but it may be growing. We also need to work to change the expectations of users. Most of us, even, feel that "Oh, it just crashes sometimes" is an acceptable answer. In fact, how many of us just add the "just reboot, it'll fix it" to the mix? I'm as guilty as anyone. But this just perpetuates the expectation that software isn't very reliable and that we shouldn't expect it to be. Let's change that.

    Finally, I think the legal route should be available too. [I'll get lots of flames here, but I'm ready...] Like any other DEFECTIVE product, the user should be able to redress damage from a product that wasn't reasonably designed. [Many of you will be howling to burn me at the stake now, but read on if you can] The standard for liability is a reasonable effort. I think those that don't use a strict design and implementation method are not using due care. These methods have been around for some time now. We just don't use them. It's also fairly clear that they can work. How well they can be implemented in real commercial products we can't know, because I don't know of anyone that really uses this type of design method - do you? [And not just in name. In real methodical plodding fashion...]

    Lastly, as in Minasi's book, many of you are now screaming - "It'll cost WAY TOO MUCH!"

    Bah! How much of your time is spent chasing down bugs in commercial products? Sure, it only cost $100 at the store, but you put in 35 hours figuring out how to work around bugs a, b & c. It crashed and lost your document. It took 3 hours of tech time to find and restore the right version of the data file, or worse, it wasn't backed up, and poof! Companies spend way too much on support of bad products. These costs never get allocated to the real source; instead they're just lumped into the general support costs. That just allows the vendor to shift the cost to your company, rather than having an "honest" cost of the product up front.

    If software vendors had the real threat of liability, they would then get serious about coding practices. If they didn't, the corp boards and shareholders would make sure it happened. A few examples, and we'd have better software.

    Finally, I think that legal liability is the only way this will happen. Until everyone is forced to a higher standard, everyone will seek the lowest common denominator. If you produce better software, but you're new, how will you charge more for it? I just don't think the "market" will fix this. [Not that the courts aren't part of the market, but many will argue they're not, incorrectly IMHO.]

    In the end, frankly, OSS might be easier to fix, but who cares? I think the design and implementation before and while the code is written is much more important. From that perspective, I think OSS has a more difficult time imposing that regimented framework on its coders and design people. But it's lots easier to show and embarrass the OSS people, precisely because the code is open - thus a better motivator, perhaps?

    Well, I've said my piece - do your damage.
  • The most common security problems have to do with default services, things that are installed with little or no user intervention to promote ease of use.

    Microsoft typically will give you the kitchen sink: everything runs even if you need very little. RedHat Linux does a similar thing; if you install "Everything", it also starts all the daemons.

    If you don't spend 30-45 minutes turning off unwanted services, portscanning your machine, and looking up patches/updates at CERT/RedHat/SANS etc, forget it.. your system will probably get compromised in a matter of days. This goes for *ANY* operating system, you simply have to test it and make sure you are running the minimum necessary to do the job.

    The main reason you hear more news about Microsoft systems getting infected is simply that there are many more of them, and many more are running the simple default configurations. Linux machines are really just as vulnerable IF YOU DON'T PATCH AND TEST THEM.

    Here's a little guide [beimborn.com] to turning off unwanted services on a RedHat box, and how to audit your systems with a portscanner.
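
    A minimal sketch (in Python) of the kind of local port audit described above; the host, port range, and timeout are arbitrary choices for illustration, and a real audit would also scan from a second machine:

    import socket

    # Try to connect to each well-known port on the local machine; a successful
    # connection means some service is listening there and should be justified.
    HOST = "127.0.0.1"

    open_ports = []
    for port in range(1, 1025):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)                      # keep the scan quick
            if s.connect_ex((HOST, port)) == 0:    # 0 means the connect succeeded
                open_ports.append(port)

    for port in open_ports:
        print(f"port {port} is listening -- is this service really needed?")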

  • The idea behind Open Source is that any old person can pick up the code and start coding. The problem with that is that the average coder isn't qualified to do security coding. Please note that this is NOT TO SAY that the average 'closed source' programmer is any more or less qualified; I dare say most of them aren't. But the 'more eyes' argument can only apply if those eyes know what they're looking for, and I dare say also that the relatively low barriers to entry in the OSS world would make for more 'elementary' coding mistakes, and for software that 'starts wrong' with poor infrastructures, simply because such projects are often learning processes for the creator.
  • In theory, open source has a greater POTENTIAL to be secure than non-open products. I say potential because while it may not happen in practice, there's a lot more opportunity for numerous people to look at it with diverse perspectives. Of course if they don't look at it, it doesn't much matter.

    The other security benefit of open source is that you have the POTENTIAL to audit code before you install it. If security were absolutely critical to you, you could look at the innards of every app you download, skim it for buffer overflows, etc. In practice most people don't bother, but they could if they wanted to.
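
    A crude sketch (in Python) of the sort of "skim it for buffer overflows" pass mentioned above, flagging calls to classically unsafe C string functions; the directory name and the function list are arbitrary choices for illustration:

    import re
    from pathlib import Path

    # Flag lines in C source files that call functions with no bounds checking.
    RISKY_CALLS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

    def skim(tree: str) -> None:
        for path in Path(tree).rglob("*.c"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if RISKY_CALLS.search(line):
                    print(f"{path}:{lineno}: {line.strip()}")

    skim("downloaded-app-1.0")  # hypothetical unpacked source directory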
  • by Anonymous Coward
    1. Who verifies the counts for closed-source software?

    2. I see Red Hat has an "unknown" vulnerability. WTF is that? Is it "I think there might be a vulnerability here but I don't know"?

  • Sendmail source has been available for decades, and security holes have been reported steadily for its entire life. What went wrong?
    • It had to compensate for features which were never originally planned for

      • It had to compensate for features which were never originally planned for

        Gimme a break. So does every piece of software ever written (at least the ones that anyone uses). Users = feature creep.
        • Sendmail hasn't had much feature creep. Its problems come from excessive cruft from the early days. It used to have to deal with Berknet, UUCP, FidoNet, and a few others, all of which had different addressing syntax.

          The open source process isn't good at throwing obsolete features out.

  • is wrong because it lists the glob problem twice, once for ftp and once for libc.
  • code review (Score:1, Troll)

    by Restil ( 31903 )
    Open source security may be a myth or a theory,
    but one fact remains: for better or worse, at least I am 100% capable of looking for the bugs or security holes myself if I need to be assured about them.

    You can say all you like about how little guarantee there is with the code being open, but with the code closed, I can only find problems, I can't assure myself there aren't any more.

    -Restil
  • by strags ( 209606 ) on Friday February 15, 2002 @04:17PM (#3014919)
    ... is the relative speed at which open-source problems are located and repaired.

    Just for fun, here [jscript.dk] is a handy summary of some Windows issues, including an XMLHTTP vulnerability that allows a malicious website to read any file on your hard drive, which has been a known issue since December 15th.
  • by markj02 ( 544487 ) on Friday February 15, 2002 @04:22PM (#3014942)
    It's an aphorism among software engineers that languages don't matter, people matter. They claim that you can get a handle on security with better management, better design, better development methodologies. Too bad software engineers have been saying this for decades, and they have completely failed to deliver.

    The only known and proven way you can get problems like buffer overflows under control is to use high-level languages and tools that make them impossible. Yes, your programs run slower, but a compromise is much more expensive than a couple more machines. Yes, there will still be plenty of other security holes possible, but we can address those through better tools as well.

    Microsoft's management approaches to security are doomed to failure, as are efforts and arguments in the open source community that the open source process magically addresses security problems. Microsoft's real security initiative is their switch to C# and "managed APIs". The open source community should take notice. Unless systems like web servers, file servers, mail servers, and authentication under Linux get rewritten in safe, high-level languages like Java, C#, or others, Linux will be so unreliable relative to Microsoft's and other systems that it will become irrelevant.

    (However, given the choice between buggy Microsoft C++ code and buggy open source C++ code, I'll still take the buggy open source C++ code any day--it's easier to fix and fixes come out more rapidly.)
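
    A toy illustration (in Python, standing in for any bounds-checked language) of the point about safe languages above: an oversized write is stopped by the runtime instead of silently overwriting adjacent memory. The buffer size and payload are made up for illustration:

    # Fixed-size buffer and an attacker-sized input, as in a classic overflow.
    buffer = bytearray(16)
    payload = b"A" * 64

    try:
        for i, byte in enumerate(payload):
            buffer[i] = byte          # raises IndexError once i reaches 16
    except IndexError:
        print("overflow attempt stopped by the runtime, not by programmer diligence")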

  • "We're not suggesting that Microsoft must give up all proprietary rights to its protocols and interfaces, or allow anyone to implement or use its standards. We are saying that they must be public, not secret."

    Why is that? I would finally love to be able to mount (read & write) an NTFS partition should the need arise. Now, they don't have to give up proprietary rights to their protocols or interfaces; that's fine. They can have (c) Microsoft etc.; however, people SHOULD be able to implement and use its standards for interoperability, so I disagree with that statement. The protocols/interfaces/records/structures should be public, and people should be able to interoperate with a Windows machine without having to reverse engineer protocols and structures.
  • by tmoertel ( 38456 ) on Friday February 15, 2002 @05:10PM (#3015194) Homepage Journal

    Let's cut back to the big picture. Pick any desirable characteristic of software -- resource efficiency, robustness, quality, and, yes, even security -- and guess what? The process by which the software was created largely determines how much of that characteristic the software exhibits. Good work, good code. Crappy work, crappy code. Not exactly a news flash.

    Now -- and here's the important part -- take any software, developed by any process, and then consider any desirable characteristic. Do you get more of that characteristic by letting everybody see the source or by keeping it hidden away?

    That's the argument for open source.

    [As I responded to the author's original posting on Kuro5hin.]

  • The time between a bug being discovered and being fixed. That is a kicker. Let's suppose that you had two pieces of software. One had twenty security holes found in a year, but every hole was fixed in one day. The other one had five holes, but they took six months to be fixed. Which piece of software had more known security holes at any given time?

    The big issue is 'how many unresolved security holes are there for software X at any given time'. Even more than the number of bugs, that is a really significant number. Microsoft execs are whining about people discussing bugs out in public. The fact is that people started doing this in order to get companies to correct their code.

    I won't say that OSS is more secure than proprietary software. I will say that OSS on average tends to have a much faster turnaround for getting bugs fixed, and doesn't leave a system with known problems for very long.
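
    A back-of-the-envelope sketch (in Python) of the metric described above, using the two hypothetical products from the comment; the figures are the poster's, and the approximation (open holes roughly equals discovery rate * mean time to fix) is an assumption:

    # Average number of known-but-unfixed holes on a given day, assuming a
    # steady discovery rate and a constant time to fix.
    def average_open_holes(holes_per_year: float, days_to_fix: float) -> float:
        return holes_per_year / 365.0 * days_to_fix

    product_a = average_open_holes(holes_per_year=20, days_to_fix=1)    # ~0.05
    product_b = average_open_holes(holes_per_year=5, days_to_fix=180)   # ~2.5

    print(f"Product A: {product_a:.2f} open holes on an average day")
    print(f"Product B: {product_b:.2f} open holes on an average day")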
  • by Philbert Desenex ( 219355 ) on Friday February 15, 2002 @06:04PM (#3015418) Homepage

    I admit that this comment is going to sound very ad hominem: we need to examine Obasanjo's claims carefully. He's worked for Microsoft [gatech.edu] very recently.

    Ordinarily, I wouldn't call attention to this, but Microsoft as a company has a really bad track record of astroturfing [aaxnet.com] just about any kind of on- or off-line forum.

    Sorry, Dare, but that's the facts: if you lie down with pigs, you wake up smelling a bit like pig excrement.

  • by gnovos ( 447128 ) <gnovos@NoSpAM.chipped.net> on Friday February 15, 2002 @06:39PM (#3015562) Homepage Journal
    Every time I read one of these, I am always astounded that they can't use simple logic in their arguments. They argue that X operating system has had more bugs found than Y operating system. The assumption, the illogical assumption, they make is that X operating system must have more bugs in it than Y.

    Since, logically, there is no way to determine which one has more total bugs (found plus unfound), the only recourse is to assume that both systems have roughly equivalent numbers of bugs.

    From that foundation, whichever system can demonstrate more FIXED bugs is going to be the one that is more stable. All of the bugs listed by the article are not outstanding bugs, they are fixed.
  • Bruce Schneier talks about measuring trustworthiness but then goes into a list about featuritis. While I don't think that enabling/disabling features by default is the answer (someone could just script the annoying click-through enables anyway), that's not what I want to see. There is a way to measure trustworthiness, and it's how I'd like to see Microsoft measured.

    1. Honesty

    When a vulnerability is discovered, Microsoft should freely admit it, admit it was their mistake, and not try to pass the blame or put a spin on it.

    2. Accountability

    Microsoft should be willing to accept responsibility for their products and any problems they cause. No more click-through absolutions. No more blaming it on hardware or third-party applications or user error. If something I bought needs a fix, they should make it freely available, to the point of sending me a disk in the mail. If I shelled out $200 for their cardboard box, they can spend an extra buck to send me a disk and a stamp. If they feel a need to charge $201.50 in order to achieve accountability, so be it.

    3. Responsiveness

    No more brushing things under the table, hoping no one will post an exploit to Bugtraq. No more suppressing information for months until they feel like dealing with it. Microsoft is getting better about posting fixes online, but they have a long way to go.

    4. Openness

    Microsoft should tell us what each product and each fix is doing -- *exactly*. They should describe the problem instead of villainizing those who find it. They should allow people to fix their own problems. I'm not mandating the open sourcing of Windows, but if they were serious, they'd think about it, even if you need to sign a hundred NDAs to get it. A much more reasonable and realistic request is the opening of the .DOC file format.

    5. Cooperation

    Microsoft should be more willing to work with other software companies. No more DOS or browser or multimedia player wars. No more games with SMB or Java or HTML. No more buying out or undercutting the competition. Microsoft should accept that they aren't the only software developers in the world and encourage a more heterogeneous environment. Not only is it good for security, it's good for business.
  • In fact the belief in the inherent security of Open Source software over proprietary software seems to be the product of a single comparison, Apache versus Microsoft IIS

    And Pine/Kmail Vs Outlook, and Netscape/Mozilla Vs IE (ANYTHING Vs IE). Basically everything that connects to the Internet that has an analogue between open and closed source has been less badly cracked on the open side.

    There have been some belters on the open side, of course, and I've had a worm get in through BIND myself. But there is no way that I would ever trust closed source software to connect to the 'Net again.

    I suppose that my experience is just "anecdotal evidence", but my experience matters more to me than any number of useless metrics, and the metrics given are useless; how severe were the bugs, and how long did a fix take to appear? How many of the fixes appeared before an exploit was seen in the wild? The method used punishes the systems which fix bugs before an exploit appears, and rewards those that sit and hope that the bug is never "hit" and so don't spoil their "security score" by issuing a vulnerability report.

    As for the suggested methods of producing secure software, big deal! Apart from formal methods these are all in widespread use by people interested in security. Formal methods, for that matter, do not (and cannot) guarantee correct software; I have met two designers involved in the Airbus 320 project, and one of them refuses to fly in the thing, and no one can forget the pictures of that Airbus doing loops over Italy with a full load of passengers.

    The problem with the "solutions" is not that no one knows what to do but that many (eg MS) don't bother trying. Given a package which has not been properly reviewed before release it's pretty obvious that the open source version is better insofar that it gives the user a chance of doing it for themselves. In an ideal world the source does not matter; in the real world it does.

    I don't think we need ex-MS employees coming round here preaching about security, frankly. Closed source removes power from the user and leaves them helpless in the face of bugs that require even a one-character patch to the code. Open source gives the user a chance - which s/he may or may not be able or willing to take - to fix bugs quickly, or to find them before the black hats do, but at least it gives them the chance. Only an idiot would claim that that does not lead to higher security.

    The only valid point in this article is that programmers should write better code and check it more before release. Well, DUH!

    TWW
