Can Open Source Be Trusted?

Kostya asks: "I attended an infosec seminar put on by Tripwire and eSecurity on Tuesday. They had Dr. Gene Spafford from Purdue University giving a lecture on the changing landscape of infosec; he's the director of CERIAS. In his lecture, he argued that Open Source is not the solution to making trusted systems. Trusted systems are built according to a formal specification and are tested and confirmed against a formal testing and standards process. His assertion is that Open Source systems such as Linux are developed in too chaotic a manner to ever reach a trusted state. But isn't that what the Linux community prides itself on? Do you think he's right? If so, what can we do to develop more trusted systems?"

"I also menitoned OpenBSD to him as an example of a secure system that was open source. I argued that it was exactly because of the OpenBSD/FreeBSD development model (i.e. closely controlled with a top down hierarchy) that it was able to be more secure. Dr. Spafford still felt that OpenBSD did not fit the criteria of a well-trusted system: because it was not designed to a formal spec, and there are no formalized tests or standards being applied to it. Are there some ways in which we can get OpenBSD more trusted by testing it against some infosec standards?

For the rabid reader, I would just like to point out that Dr. Spafford NEVER disagreed with the 'more eyeballs means fewer bugs' tenet of faith that so many open source advocates preach. He just felt this was irrelevant to his point: how do you judge whether one system is more trusted than another when there is no design spec or list of goals against which to test the system?

All in all, it was a challenging lecture. I felt myself start to get irritated, but by the end of his lecture, I was convinced he had a good point. How do you think we can address his criticisms?"

Honestly, I would say the same thing about a lot of commercial software as well. Just because you sell something doesn't mean that it's been designed properly, and likewise just because something is free doesn't mean it's been slapped together with duct tape. Furthermore, I'd trust a program with source more than one without, and many open source developers are always willing to accept a better design. I'd also like to say that just because there aren't many pieces of Open Source that proceed with fully documented design goals today, this won't always be the case for future projects.

  • On an unrelated note, Spaf (Dr. Spafford to you, buddy) also runs the usually-funny Yucks [purdue.edu] mailing list. This list has long seemed among the cream of the crop of net-humor to me.
  • "His assertion is that Open Source systems such as Linux are developed in too chaotic a system to ever reach a trusted state. "

    What does the way something is developed have to do with the final product (or a given release), and the tests performed on it? You are testing the product, not the development environment, surely?
  • Basically, he's right. All he's saying is that with no formal design spec or test process, a system can't be considered secure. Yes, we have some design specs in the form of RFCs, POSIX and so on, but we certainly don't do rigorous compliance testing for them with each new release. For systems that aren't done that way, though (a category which applies to more than 99% of all available software, at a guess) I'd much rather take an open source one than a closed source one.
  • Because of the way Open Source is developed it will have fewer bugs than some commercial software, but it will never be developed to a formal specification using tried and tested Software Engineering principles.
    Commercial software (i.e. closed source) benefits from not being developed in a chaotic way and can be more secure. Techniques such as the Cleanroom approach and software inspection can lead to almost zero-defect software, which is something I don't think Open Source can ever really achieve.
    The use of formal methods for specification and verification does achieve secure systems; the only way Open Source can match commercial systems developed in this way is for them to be developed in a similar fashion. Sure, you can still give the source away, but development does need to be centralized for these techniques IMHO.

  • by hattig ( 47930 ) on Friday June 23, 2000 @02:29AM (#980996) Journal

    Open source software can be trusted more than closed source software when it comes to security, for all the reasons that you all know (quicker bugfixes, code open to scrutiny, etc). Closed source software can have hidden APIs, bad implementations and bugs, and the release cycle is slow.

    OpenBSD is interesting, as they do audits on software to get rid of the security holes. They can only do this because the source code is available.

    Of course, software like Sendmail, various ftpds, POP3 daemons etc, all mess up the security aspect of an OS. The OS can be as secure as it can possibly be whilst still being usable and useful, but if the software being run on it is vulnerable, then backdoors into the system will be found. Having the source code available allows the cracker to find better access methods than having to guess and feel their way into a system.

    You just have to remember that there will never be perfect security, and plan accordingly.

  • by Suydam ( 881 ) on Friday June 23, 2000 @02:30AM (#980998) Homepage
    Dr. So-and-so is defining trusted as being designed to a formal spec. That definition is constructed, whether the intention is there or not, in such a manner that he is right. Under that definition, Open Source cannot be a trusted system.

    My assertion is that open source challenges the notion that you need a formal spec to develop trusted software. Much like the submitter of this story, I would hold up OpenBSD as an example of a system that I consider trusted, yet was not developed under any formal spec. Perhaps it's time to realize that formal specs help to get things done correctly, and they certainly help get things done quickly (by preventing, in theory at least, feature-creep), but they certainly are a requirement.

  • Honestly, I would say the same thing about a lot of commercial software as well.

    I suspect that Dr Spafford would agree with you. Whether or not a piece of software is open source is orthogonal to whether or not it can be trusted.

    Just because you sell something doesn't mean that it's been designed properly, and likewise just because something is free doesn't mean it's been slapped together with duct tape.

    Just because you sell something doesn't mean that it's not open source.

  • I don't see that there is any connection between where your code comes from and the specifications it is built to.

    A benevolent dictator, acting much like Linus, could accept only code that brings the product closer to the design.

    The test suite or testing procedure could be released along with the code. Sure, goals like "ISO 9000 security compliance" are less popular than "a working operating system", but that doesn't mean you have to keep your source closed. And it doesn't mean you can't accept patches that bring you closer to your goal.

  • Trusted, Assured, Safety Critical: these are all areas where IMO Open Source won't work. They require a level of discipline and upfront analysis and design that doesn't mesh well with "release early and often". OSS creates great software for large user bases; however, it tends to create products rather than enterprise applications.

    The key to trusted, assured or Safety Critical is the specification. You must know in advance what you have to prevent. It's no good fixing the bug that let it happen after you've lost all of your data.
  • The article didn't mention any bias against open vs. closed source. Actually (if you'd read instead of shouting out OPEN SOURCE ROOLZ just to be cool), he mentioned that more developers looking at the code usually tends to produce a better system. However, open or closed source, if a system is not designed to rigorous specs and tests, it cannot be trusted. I think he has a good point.

    It doesn't matter whether the source is open or closed.
  • it will never be developed to a formal specification using tried and tested Software Engineering principles
    Can you back this assertion up? I see no reason why Open Source software cannot be developed in a good, proven way. Sure, some software will not be, but that software will not be used in critical systems. A lot of Open Source software originates from the academic community (e.g., Exim was written at Cambridge University), and these programmers will have used good software engineering techniques to write their code.

    I don't know of many big Open Source projects that have just been hacked out from the start; there has always been a lot of planning going on, and surely fewer bugs mean more security?

    Commercial software (i.e. closed source) benefits from not being developed in a chaotic way and can be more secure.
    I bet a lot of the closed source software out there is not developed in any less chaotic a manner than the Open Source way. If you call distribution of programmers "chaotic" then yes, but I believe that you can have more chaos when all the programmers are in the same room and some are doing things they don't really want to be doing, than when all the programmers are spread out around the world (and never even see each other!) but have the desire to write the software.

    The use of formal methods for specification and verification does achieve secure systems
    Formal specification and verification is all very good, but it takes so long that it isn't used in any commercial companies, except in some kind of loose analogy. Commercial companies want to get their software out the door quickly; this does not lead to good, secure, quality software. Open Source can take its time - witness the Linux 2.4 kernel, getting it right before it is released. OTOH, Gnome 1.0 was a mistake; they tried to compete with KDE before the software was ready.

    Tools such as Rational Rose can be all very good when used correctly, but all too often the designers do not have the time to get familiar with the tools they use, so they underutilise the software to the point that it takes them twice as long to do something that could have been expressed on a sheet of paper.

    So you can see how Open Source works, and it looks chaotic, distributed, and messy, but the code isn't, the people have an interest in the code beyond making a fast buck, and this means that bugs and security flaws will be detected, and fixed, and the software will end up being of a high quality, when it finally gets released. Most commercial companies do not have the luxuries known as time, dedicated programmers, etc. They have money to throw at a problem.


  • Eric Allman is the sendmail hacker, ESR wrote fetchmail ...

    And yes, sendmail sucks. run qmail [qmail.org] or postfix [postfix.org] instead.

    (I can't believe that I actually reply to such blunt flamebait ... )

  • Don't know what exactly a system has to be to be considered 'trusted' but OMG tested MICO [mico.org] against their CORBA spec for free in recognition of their efforts. MICO passed and hence can be legally branded as a CORBA implementation. Could MICO be considered a 'trusted system' then? It was tested in a very formal way.
  • Then what makes a trusted system? What systems would be considered trusted?

    Just because something is closed source, that doesn't mean it's developed to a formal specification with formalized testing.

    Furthermore, what constitutes a formal specification? Both OpenBSD and Linux derive their security models from the Unix security model. That model *is* a specification. But is it a *formal* specification, and if not, what exactly constitutes a formal specification?

    Finally, as for formalized testing, I don't know that much about what Linux kernel hackers do, but I'm given to understand it's a very chaotic environment. I agree with the professor here: I don't think Linux has had formalized testing for *anything*, let alone the area where it matters (security). Yes, to a bazillion eyeballs all bugs are shallow, but formalized testing means creating a formal benchmark test that puts the program into virtually every conceivable situation. I'm not saying that closed source developers do that (isn't it obvious? Look at the number of security holes in the Windows operating systems over the years), but I do think that Linux, and the software development community as a whole, needs serious improvement in the area of testing.

    From what I can see, the professor's whole gist is this: software development needs to develop an engineering culture. Software needs to follow a systems development life cycle (SDLC) where formal specifications (requirements documents) are written up in advance, the software is developed within an engineering culture, and there is formalized testing and user acceptance. These seemingly superfluous controls, especially in the area of infosec, are vital to controlling the inevitable bugs that crop up and to making sure that software meets the requirements.

    I think Linux and *BSD at least need to take a hard look at requirements gathering and testing phases and see if there is any room for improvement.

  • by Cuthalion ( 65550 ) on Friday June 23, 2000 @02:45AM (#981019) Homepage
    Open source can be more trusted than closed source

    Formally designed and reviewed software can be more trusted than chaotically assembled software.

    These seem orthogonal to me (and to each other! wakka wakka wakka!). Sure, most all of the software out there is NOT formally designed for security first. A lot of it is open source, a lot of it isn't. Open source obviously doesn't make a programme or suite instantly bulletproof, neither does formal design and review. Nothing is 100% secure, trustworthy, or bug free. Loads of things can help or hurt the process.
  • I'd agree that open or closed source has little to do with trusted or not trusted. Either open or closed source software can follow strict design guidelines or be flaky. With closed source we get what a company wants to sell us and no way to prove whether it is or isn't what we want. With open source it's exactly what we want it to be. If that means trusted, then it'll be trusted, because otherwise what would be the point of coding it? The bottom line is that we are in control of our destiny.
  • by Kha0S ( 5753 ) on Friday June 23, 2000 @02:50AM (#981026) Homepage
    As long as trusted systems are evaluated in a many-month-long process as defined in the Orange Book (NCSC Trusted Computer System Evaluation Criteria), most Open Source OS's will continue to fall short, simply because they cannot afford to have the testing or certification performed.

    What we really have to remember is that Open Source OS's simply don't have the features that the trusted system evaluation criteria dictate -- it has nothing to do with whether or not they're secure in practice, but has *everything* to do with whether they're secure in theory, such that a poor implementation can't break the security model.

    Memory that's segmented in hardware such that even increasing your process privileges doesn't allow access to the memory space of another process? Filesystems that log every transaction (including read/stat operations)? Systems that log every system call reliably, in a tamper-proof state? These are the features of critically evaluated government trusted systems, and until Open Source OS's support them, we shouldn't gripe. =)

  • I think that it could be worthwhile to reconsider Ken Thompson's all-time classic Reflections on Trusting Trust [umsl.edu] in this context.

    Peer review (as one of the strengths of open source) won't alone give you a secure system; there are far too many other factors to be considered. Peer review is, however, one very important factor.

  • by emerson ( 419 ) on Friday June 23, 2000 @02:52AM (#981032)
    >Honestly, I would say the same thing about a lot of commercial software as well.

    Well, yes. But that's not the point. You're playing the ad hominem deflection game: "seeeeee, they're just as bad, toooo!!!"

    Boiling this issue down to open versus commercial is completely orthogonal to the actual point: well-designed, specced-out systems are going to be more trustable than ad-hoc slapped together ones.

    Where the code comes from should be irrelevant -- if the spec is good and the code actually matches, all is well, regardless of origin. Granted, open source makes bugs easier to find, but also adds an element of chaos into the implementation phase that could be detrimental; the bazaar doesn't necessarily follow specifications. It's a tradeoff....

    Trying to cast this issue in the light of open versus closed is really just playing the 'when what you have is a hammer everything looks like a nail' card with the Slashdot open-good-closed-bad mentality.

    Not every software development issue can be solved by a free license. Really. I promise.
    --
  • Careful formal testing is, among other things, dull work. Definitely less sexy than the usual areas hackers like to hack. So it is not impossible, it just might proceed very slowly, if at all.

    Good chance for some company to make some bucks.

  • Without hearing the seminar it sounds as if Dr Spafford is talking about Trusted Systems, not security in general.

    Trusted systems are usually of the kind where every action is auditable and traceable; system administrators do not have access to delete logs or change audit trails, etc. The 'trustiness' and 'security' of these types of systems is instead designed in through system architecture, specification of how, exactly, everything works, and very strict definitions of security access and permission. Bugs and exploits don't even figure into whether or not the system is 'trusted'. It's a measure of how it works; i.e., your sysadmin can't add a $50000 bonus to his salary in the economy system (he probably can't even access that data) and wipe the logs of his doing it, bugs notwithstanding.

    Of course, no commercial or free 'standard' Unix lives up to that, simply because of design (some trusted Unix-like systems do, though).

    Which, I think, would be his point; no open source OS currently exists that implements something like that, and the usual design methods in open source favour functionality and features above things like total fascistic control and auditing of every action taken in the system.

    That is not to say an open source model wouldn't work for a trusted system; it would probably even be better, but bugs are irrelevant to the very idea of whether a system is considered a Trusted System or not.
  • by Paul Johnson ( 33553 ) on Friday June 23, 2000 @03:05AM (#981042) Homepage
    Spafford's point was that before you can trust a system you have to know what you are trusting it to do. Simply saying "the right thing", or even "the right thing according to POSIX", isn't enough.

    To describe this you generally start off with what you don't want it to do. Examples include "kill someone" or "let someone write to a file without authorisation". Then you have to say exactly what you mean by that (e.g. how might the system kill someone, and what constitutes "authorisation"), and before you know where you are you have a document several hundred pages long, much of which should be Z or VDM. Then you need to check that for any holes.
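    To make that concrete, a single negative requirement like the second one might, in a much-simplified and purely illustrative form (this is an invented fragment, not taken from any real Z or VDM document), boil down to a predicate such as:

        \forall u \in Users, \; f \in Files : \; writes(u, f) \Rightarrow authorised(u, f)

    and the real specification would then have to pin down exactly what "writes" and "authorised" mean for every interface that can touch a file.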

    Then you have to prove that the system fulfils these requirements. Now a full formal proof of this is going to be a larger project than the original software by an order of magnitude, even with today's automated support. So the only feasible solution is to write the software very carefully. You have to identify each piece of code that might cause a non-spec event to occur, and then explain how it prevents any execution which might be outside the spec. And since this is important, you have to leave an audit trail behind so that potential clients (who are going to be staking lives and fortunes on your software) can see that you have done this properly. Unless you do all this, your system cannot be trusted.

    (Aside: you also have messy recursive problems with trusting your development tools and hardware)

    Put it this way. We all know Linux is reliable, right? But would you stake your life, or even your house, on keeping your Linux box up continuously for the next 12 months? I sure wouldn't. I wouldn't even do that with BSD. There are a few bits of software I would do that with, but... they were all written to these kinds of standards.

    No matter how you slice it, this stuff requires a lot of hard work and bureaucracy. The question of "who will watch the watchers" is particularly germane to the creation of trusted software.

    Paul.

  • We think of Linux and Open Source projects as being "secure." They are safe from most attacks in general... Once a problem arises, it is quickly fixed by the community -- and it is very likely to be found quickly.

    The problem is that "trusted" within the infosec community means that you can reliably assume in a beyond-mission-critical environment that the system is totally secure from attack.

    This means a careful, meticulous design, and very rigid, formal implementation and testing. The aspects involved are far more comprehensive and cohesive than Open Source generally considers...

    We look at Open Source software in terms of the safety of a package. Sendmail is not secure, or QMail is secure, and imapd is not secure or... You get the idea... Building a trusted system means looking at the big picture -- as low level as how kernel data structures are defined and manipulated, to as high level as individual file permissions and pretty much everything in between...

    IIRC, "trusted" Solaris tends to lag a couple versions behind the general release of Solaris and takes a bit of a performance hit. Why? Because it takes a lot of time to evaluate, and fix an OS to make it "trusted."

    A simpler way of looking at the difference is this: A "trusted" system goes to great lengths to meticulously ensure that no edge cases exist. A "secure" Open Source system takes the shotgun approach: With enough monkeys pounding on enough keyboards for a long enough time you'll get the works of Shakespeare... Translation: Open Source counts on enough talented, skilled developers working on a project to cover all the cases without anyone specifically telling them to do so.

    In the end, Open Source may or may not come out with a product whose security matches that of a "trusted" system -- but you wouldn't be able to recognize it if it did come out. You couldn't *verify* it.

    -JF
  • ...completely adhering to specs, it probably would be trusted. However, no one has ever done that -- even systems that are supposed to be "trusted" contain parts that aren't. Probably some embedded stuff can be "trusted"; however, that's the place where security problems are the least important and only proper behavior with certain hardware is necessary -- which is much less of a challenge compared to, say, an HTTP server plus operating system, a desktop environment or even a router. If people had many decades for the development of each version of an operating system, switching from "chaotic" development to formal specs and proofs that a piece of code adheres to them probably would improve things. However, this is not the case, and even if it were, it's possible that specs would end up being developed more slowly than anyone would be able to follow them, thus slowing the pace of development to a halt.

  • Yes, we have some design specs in the form of RFCs, POSIX and so on, but we certainly don't do rigorous compliance testing for them with each new release.


    Is there anything preventing us from building automated test suites? I would think that there are plenty of excellent tools to build the actual test software: Expect, Perl, etc. The problem is deciding what tests to run, and finding the time for someone to write code that doesn't extend the functionality of the system. It is useful, but until it actually catches some bugs before they are found by other means, there isn't any glory in it.
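    Just as a rough illustration of how small the first step could be, here is a hypothetical compliance-style harness (an invented sketch, not part of any existing project; the two requirements checked are placeholders):

        #!/bin/sh
        # Hypothetical harness: each check names a written requirement and
        # verifies one piece of observable behaviour. Exits non-zero on failure.
        fail=0

        check() {
            desc=$1; shift
            if "$@" >/dev/null 2>&1; then
                echo "PASS: $desc"
            else
                echo "FAIL: $desc"; fail=1
            fi
        }

        # Requirement (placeholder): the root directory is listable.
        check "root directory is listable" ls -d /
        # Requirement (placeholder): world-writable /tmp carries the sticky bit.
        check "/tmp has the sticky bit set" test -k /tmp

        exit $fail

    Run against every release, even something this crude turns a sentence in a spec into a repeatable, recorded check.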
  • Sorry, guy.

    The "but it's better than Microsoft" arguement is wearing thin, and might soon be rendered meaningless.

    You'll have to come up with more meaningful ways of praising Linux and Open Source in the near future.
  • I feel that open Source can be trusted, because we have the opportunity to check the code.

    Check the code against what? Besides looking for obvious bugs, how do you verify that the code meets the requirements if they have never been written down?

    What does the way something is developed have to do with the final product (or a given release), and the tests performed on it? You are testing the product, not the development environment, surely?

    It's a well-established tenet of modern 'quality' theory that it's not enough to 'test quality into a product.' One cannot simply ignore the process by which something is produced and just test it as a final step.

    I think this article, however, goes further than this, and actually hits Linux and Open Source at one of its weakest spots: the lack of a top-down, well-managed design. Linux has no team of central designers determining the basic structure of the OS. It instead relies on knowledgeable developers using 'Unix' in general as a reference design to produce a 'kinda sorta' Unix clone OS. The severe lack of any real architects at the top overseeing the whole effort is an issue that isn't well addressed by the 'Open Source' ideologues.

  • But does Open Source mean that there CAN'T be a design spec or any formal methods? Just because the current leading lights don't use them, doesn't mean it CAN'T be done.

    There could be design specs and use of more rigorous techniques up to the use of formal methods, but would Open Source developers be prepared to submit to that discipline? There is no reason why the code produced should not be free (both senses), but are there enough developers with both the skill and the inclination to work to that model in an Open Source effort? Perhaps the problem is that those inclined to work on Open Source projects like freedom to use whatever techniques and tools they like, and this makes it hard to verify that the results meet the negative (no unexpected behaviours) as well as the positive aspects of the specification.

    There's plenty of closed source stuff that doesn't have any formal methods - does that mean that it can't be done? No. Maybe it's harder to do with open source, because the development cycle tends to be more relaxed, and it's harder to impose methodologies on those working on it, but that doesn't make it impossible.

    I think that only a tiny proportion of closed source software has been developed with a serious input from formal methods. I am not sure that detailed and accurate design specs are all that common either.

    Being unable to impose methodologies on voluntary contributors is probably the main obstacle to having a typical Open Source project meet the view of "trusted" that Spaf was using.

    Attitude to those who write specifications also plays its part. We will never get an Open Source project with a rigorous demonstration that it meets a usefully detailed specification unless the writers of the specification get at least as much respect as those who write the most difficult code.

    I don't know if he said it at that infosec seminar, but Spaf has also pointed out that not enough people are being educated in computer security, and how to build more trustworthy systems. The shortage means Universities can't keep faculty with experience in this area, and this leads to the vicious circle of less teaching and research that means the shortage continues.

  • Trusted systems require that the programmer be paranoid. I think that this is a very simple requirement, but one that is very hard on the psyche of the programmer. Do you really want to spend all of your time worrying if your code will break? Oh... wait... we all do!

    The difference in trying to build a trusted system is the formalization. The team which tries to break a system (or certify it) is called upon to review the design and try to find all weaknesses that might be used to break it. While this could be done with open source, you'd have to find someone to play the role of review team. This could be a serious problem.

    I think that it's certainly possible for an open source project to build an operating system to match "C2" specifications, or better. The problem with meeting the Orange Book requirements is that they require trained people to review and test a system. This requires a large commitment of time, effort, and especially, money. Then when you are done, you have certified one version of the system, and nothing subsequent. The open source model is one of many releases, with bug fixes as we go, which has many strengths, but it makes the rapid obsolescence of what would be the "blessed" version a big weakness. So, it's possible, but I don't think we should try for it.

    I do think that there is a better way: if the testing procedures can be automated, so that the code can be covered from every angle by a set of computers pounding on a system, this could be MORE secure than a "Trusted System", at least at the C2 level. A set of workstations could be set up to pound a system and try to find weaknesses, a la "SATAN", with emphasis on stack overflow weaknesses, etc. If a computer does all the functional tests that a human review team does during C2 certification, this is a good step in the right direction, if nothing else. Add to this a group dedicated to adding test conditions for new flaws, as they are found or hypothesized, and I think you've got the winning combination for a truly secure platform.
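    As a taste of what such a battery of machines might run around the clock, here is a deliberately crude, hypothetical probe (a toy sketch only; it assumes a traditional netcat with the -w and -z flags, and the host and port are placeholders):

        #!/bin/sh
        # Hypothetical long-input probe: feed ever larger lines to a network
        # service and check that it still answers afterwards. Not a certification
        # tool, just an automated way of hammering on one class of weakness.
        host=${1:-localhost}
        port=${2:-25}

        for len in 128 512 2048 8192 32768; do
            payload=$(head -c "$len" /dev/zero | tr '\0' 'A')   # a line of $len 'A's
            printf '%s\r\n' "$payload" | nc -w 3 "$host" "$port" >/dev/null 2>&1
            if ! nc -z -w 3 "$host" "$port" 2>/dev/null; then
                echo "service on $host:$port stopped answering after a ${len}-byte line"
                exit 1
            fi
        done
        echo "service survived all probes (which by itself proves nothing)"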

    What is really needed is a way to make all of the consequences of a line of source code visible. With C, C++, Delphi, Fortran, et al, there is no way to see all of the possible (side) effects of a line of code. That simple iteration could have a bounding condition that causes a major hole in security, or it might be perfect; how can anyone know? I consider this to be the problem of brittle code.

    Object Oriented Programming is a good step in that direction: it does, if properly used, make code testable down to the object level. There need to be better tools for hammering on each object and making sure they work as delivered, but at least you have something which CAN be tested on a unit level. OOP is good, but still not good enough. It does go in the right direction, because cracks in a system can be traced down to a single component, but those components can be brittle. Testing those components to a high degree brings us close to the goal (my goal, actually) of perfect code. Close, but no cigar.

    A platform needs to be built which can make writing a computer program as simple as working with physical objects, such as hammer, nail, etc. When you pound a nail into a 2x4, you know right away if something goes wrong, and what all the effects are. It's easy to test, and visually verify the results. Coding needs to be the same way. This would take a lot of the "magic" out of the process of coding (and uncertainty). What ideas are out there for making this happen? Is a system of individually tested components good enough?

    I have a few ideas, which I posted more than a year ago with my own manifesto at basicsoftware.com [basicsoftware.com], ( and I need to review those ideas myself) but really haven't contributed anything myself. So I'm in the same boat... my contribution is a possible pointer in the right direction, and a keen eye on others, and their response.

    The goal should be to make software better than any possible real world analogy. We need components that don't fail, and work predictably in all conditions. If that's not possible, it would be good for the component to signal exactly what went wrong. Nails holding things together loosen over time; software doesn't have that weakness. Nails don't signal failure, but wouldn't it be nice if the nails in your roof signaled failure before the disaster struck?

    --Mike--

  • First of all, based on this definition of "trusted", the open source issue is irrelevant. Software could be "trusted" whether you can see the source or not, so long as the thing was designed to a precise formal specification and underwent sufficiently rigorous testing. None of the software I use (directly) is "trusted" in this sense.

    That doesn't mean I don't trust my software. I put a certain amount of trust in it -- some more than others. But this is "trust" in the more usual sense. The sense in which it is used in the article above is an unfortunate jargonisation of a word with a well-defined meaning in everyday use. Let us rather refer to these systems as formal systems. That's probably an overloading of the term "formal", but at least there's less chance of getting confused relative to "trusted". Of course, you would expect such a formal system to be highly trustworthy by virtue of the rigorous design and testing, and critical systems should almost certainly be constructed in a formal manner.

    In general, I think more systems should be handled in a more formal manner. Indeed, in my opinion, lack of formality is one of the major reasons we have crap software. Usually we have specifications written by marketroids or by hackers scratching whatever itch is pestering them most at the moment. This doesn't make for good formality. Even when formality is attempted, it's rarely done well.

    What's surprising about open source is that it seems to have a generally higher reliability than many commercial offerings, despite the fact that the development process is unfunded and chaotic. In that sense, it is relatively trustworthy, despite being informal. This isn't an absolute statement: there's a huge spectrum of quality, and it's mostly the top end of that spectrum that's interesting.

    It would be interesting if a formal open source system were to be developed. The code contribution process could still be informal, but the specification and testing would have to be more organised. Infrastructure systems like DNS and TCP/IP stacks would do well from this, I think. Would there be enough interested parties to write the test harnesses? Who knows.

  • What we're seeing here is a big, old, fat dichotomy between the software engineering community, who consider themselves engineers, and the open source community, who think of themselves as programmers or coders. I've traditionally always fallen into the latter group; however, there is no doubt in my mind that the SE tactic of actually coming up with a sensible plan in advance is better. *This concept does NOT threaten open source software.* What it threatens is the culture of the open source community. There's a difference between hundreds of programmers simultaneously fixing bugs within a predesigned and well thought out architecture, and those same hundred programmers randomly going off on their own design tangents. Any successful open source project has some kind of centralized authority - even if it is just a guy who does the builds. What I'd like to see is a vast expansion of the whole TODO file concept. Rather than have a handful of items listed such as "Somebody needs to figure out a way to blah blah", the TODO list should be a detailed spec. The process of coming up with that spec could itself be a distributed process, but a very detailed and heavily reviewed spec should be developed before everyone starts coding.
  • Under that definition, Open Source cannot be a trusted system.

    Untrue. It hasn't been done yet, but it's very possible. However, it wouldn't be a "typical" OSS project:
    • A contributor would be required to have at least some experience with formal techniques for building trusted systems.
    • It should use better, more advanced tools for distributed development than the vim/make/CVS combo by which most Unix hackers nowadays swear.
    • It shouldn't use C/C++. (Ever tried to do closed-world program analysis with C++?) Languages such as Eiffel, which provide better support for formal development techniques, should be used.

    I would hold up OpenBSD as an example of a system that I consider trusted, yet was not developed under any formal spec

    Not to make this seem like an attack, but your personal opinion regarding OpenBSD isn't really relevant to the rest of the world. Right now, having a software system be labeled as "trusted" means something: that the system's developers used the best techniques available to make sure that it could fit the regular definition of "trusted". If we abandon it in favour of "I think it's pretty secure, and so does the general consensus, so why not just call it trusted", the word loses its meaning and purpose entirely.
  • I think that an important part of what you have to understand on this issue is that he's referring to a very formal concept of a trusted system. If you read the government guidelines on building trusted computer systems (e.g. the Orange Book), one of the specific factors that is involved in designing systems at level B3 (IIRC) and above is that they be formally specified and proven to meet that specification.

    While it's easy to gloss over this kind of requirement, there's some reason to think that it's actually a good idea. By the time you get to a class B system, you have to think about things like mandatory access controls, covert channel analysis, and the like. A formal demonstration that A) your system specification succeeds in meeting those goals and B) the system as built successfully implements the specification seems like a reasonable basis for reaching a high state of trust.

    It's not impossible that you could build a free software project that could achieve this kind of goal. It certainly seems to be the case that every time someone says that free software can't do this, that, or the other thing, they've been proven wrong. But it's going to be tough to attract people to a project where you have to do stuff like keeping records of what you've done to meet design specifications, which is actually a requirement of high level trusted systems.

  • Oh, really? I can make my system secure against physical theft using Linux. It's simple. Follow the Loopback-Filesystem-HOWTO and recompile your kernel to support crypto. I prefer the Serpent and IDEA algorithms, but YMMV.

    Next, copy losetup, bash and your kernel to an unencrypted /boot partition. Encrypt everything else. Add an option to lilo specifying that /boot/myscript.sh is init. I don't recall whether you need to specify /boot as the root partition and remount it, so some experimentation may be necessary.

    In myscript.sh there should be the set of commands to run losetup and prompt you for your passphrase(s) to mount your partitions. Enter them; the script should then remount root and exec the real init, and the system boots normally.
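    A very rough sketch of what such a script might look like (hypothetical and untested; the device names, cipher, filesystem type, mount point and the pivot_root step are placeholder assumptions that will vary with your kernel, losetup and util-linux versions):

        #!/bin/sh
        # Hypothetical early-boot script run as init: unlock the encrypted root
        # via the crypto loopback driver, then hand control to the real init.
        # All paths, devices and options are placeholders for illustration only.
        losetup -e serpent /dev/loop0 /dev/hda2    # prompts for the passphrase
        mount -t ext2 /dev/loop0 /newroot          # mount the decrypted root

        cd /newroot
        pivot_root . boot             # assumes a boot/ directory on the new root
        exec chroot . /sbin/init      # hand off to the real init; boot continues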

    Dear sir, I humbly ask that you attempt to bypass the login: prompt and access my data on a system so configured. You may use any tool you like.

    Open Source not secure my arse...

  • When reading that type of article, I am always reminded what a senior IBM network engineer once told me:

    "If it has to secured, DON'T put it on a computer".

    I used to believe he was joking but now, with a little more experience, I totally agree with this declaration. If it's sensitive, for goodness' sake, don't put it on a computer, and especially not on a computer which is connected to ANY form of network.

    With all due respect to a senior 'net citizen such as Dr Spafford (who is certainly more intelligent than I am or ever will be), it is true that Linux (or *BSD) evolves in a chaotic and ramshackle way. But we should always remember a few points:

    • Open source is always better than closed source, whenever security is a concern. That's a given, period.
    • The network effect ensures that, today, any computer system which is not connected to the 'Net sees its value decrease. Therefore, unfortunately, most organizations have to connect part or all of their internal networks, which already represent a security risk, to the 'Net (an even greater security risk) in order to just survive. The result is easy to guess: we are going to see more and more security risks, whether we follow IT security specs or not.
    • Remember: your security is only as good as the implementation of the specifications! Good specs are worthless in the hands of morons, who will just take shortcuts to roll the product out the door faster. Even good specs can be undone by a tiny little bug in the end program, even if said software is crafted by geniuses. Case in point: the recent bug found in the random number generation routines of PGP. Good software, good specs, tiny bug = window of opportunity for attackers.
    • Most security specifications were designed before the growth of the Internet. Most do not take into account this enormous growth or the fact that an "always connected" system or network will always get attacked, sooner or later, just for the heck of it, by script kiddies. We also need to remember that most 'Net sites went from UUCP-dial-and-disconnect to HTTP-always-on-all-the-time in a relatively short while! How can, say, UUCP-era security specs hold up to HTTP? And I am not even talking about distributed systems stuff such as Gnutella or Freenet, which could (will? have?) become a security risk in the future!
    • The Internet itself grows chaotically, at extreme speed. Therefore, it is probably better to have a system grow chaotically and with extreme speed to keep up with the security risks. This gives a huge edge to Open Source OSes, since major security problems can be fixed in a matter of hours, not the weeks or months it takes traditional software vendors... who may be following precise specifications and procedures that are just too slow for 'Net time! Case in point: some major security problems and DoS attacks were solved in a matter of hours after their publication with patches in the Linux kernel. Of course, not having these problems (OpenBSD!) is even better...
    • Finally, except for OpenBSD, no project that I know of has been undertaken with the idea of putting security first, from the ground up, with the emulation and the benefits that Open Source brings.


    Want to keep something a secret? Remember the advice of that engineer: write it down on a piece of paper, use a one-time-pad, lock the paper in a steel box and put the box in a military-grade safe. Burn all other traces and throw the ashes in a vat of acid. But, for goodness sake, DON'T leave it on a computer!

    Of course, this is just my US$0.02...

  • Right. You can test as much as you want but you can't prove the program will always do what is desired (the obvious example is the Windows BSOD).

    The trust has to begin with a trusted algorithm, then the trust has to follow in an unbroken chain through all the coding and testing. And the entire system has to be trusted -- although you might do this process with a cryptographic library, you also have to trust the system library routines, the program loader, the network library and drivers... At least with environments such as MULTICS the trusted items can be well isolated from the untrusted, but it's still a big job making and keeping it clean.

    Fortunately, there is a difference between a provably secure environment and an environment which is secure enough in practice. People have been creating small provably-secure environments at great expense for 30 years, but most people can't use or afford the results.

    There have been efforts to blend mathematical algorithm provers with programming tools. Perhaps someone will succeed with something general enough to be able to review existing code.

  • Bruce Schneier says ...

    ... Security is not something that can be tested for.

    Makes sense if you think about it. And it drives a truck right through the "you need a formal spec to test against" premise.

    I think Schneier makes much more sense from a theoretical point of view.

    From http://www.counterpane.com/crypto-gram-9911.html [counterpane.com]

    The only reasonable way to "test" security is to perform security reviews. This is an expensive, time-consuming, manual process. It's not enough to look at the security protocols and the encryption algorithms. A review must cover specification, design, implementation, source code, operations, and so forth. And just as functional testing cannot prove the absence of bugs, a security review cannot show that the product is in fact secure.

    No mention of a formal spec.

    Go Bruce!

  • by dublin ( 31215 ) on Friday June 23, 2000 @03:52AM (#981091) Homepage
    Open source can be more trusted than closed source. Formally designed and reviewed software can be more trusted than chaotically assembled software. These seem orthogonal to me...

    In fact, the two are orthogonal to one another in most aspects, but they can be aligned. I don't see any dichotomy between open source and the kind of intense process that produces trusted systems, it's just that the open source movement hasn't matured to that level yet.

    Spafford's observation is correct today, but he makes the erroneous assumption that the open source community will continue its current practice of doing inadequate planning and testing to ensure real trusted systems. This is one of those areas where the arrival of the commercial backers of open source, like IBM, can offer substantial contributions.

    The one thing that has marked the open source movement from the beginning is the ability to respond and change quickly to get the job done. We are in the adolescence of the open source movement, and since trusted systems do require more process than the standard open source method provides, and since people will want trusted open source systems, it's only reasonable to assume that the open source process will morph as required to "scratch the itch".
  • "How do you think we can address his criticisms?"

    A few outdated man-pages, HOWTOs and pointers to Web-URLs just won't cut it in this context. In Trusted systems you will need formal proof of ownership, holistic design, white-box testing/analysis and black-box testing, plus other things I can't come up with now. Documentation on the whole process of conceiving and creating the system would be a great benefit. This would have to be Applied To The Whole Shebang(tm), down through every library, including formalization of every function and good comments on most of their lines.

    Now, doing this properly _after_ the actual implementation has always been a bad idea, and would require much harder work than if it had been done in the first place. However, this is close to an impossibility given how the Open Source process really works. It would never be as good as it could have been.

    If more companies started supporting Open Source solutions, they could perhaps fund this kind of work and release it to the public (either for free or for a fee). It could benefit these companies, because now they can Trust and use free software. Actually, I saw a book that documented the whole Linux kernel once, so I know that has been done successfully.

    Of course, "trust" is a word of many meanings. I for one trust many Open Source solutions simply because I know they have stood the test of time again and again. However, I'm always aware that new versions may break things considerably, and the documentation is not always updated. That is why Open Source is not currently a good process for building really Trusted systems. (This has nothing to do wether you release the source or not, which should always be a benefit to trust.)

    So to me personally it is good enough, and currently the Open Source-process has quite a few benefits over closed source in this context (and many more regarding price and freedom):

    1) Peer review (less bloat, great functionality and inter-operability, harder to put in trojan-functionality)
    2) Large techsavvy userbase (quicker bugfind- and fix cycle, easy to get help and gain a community)
    3) Ability to find and fix errors or improve the system yourself (although this should never be necessary in a trusted system. Doing so may also contaminate the system with, e.g., bad binaries. However, it can be done safely if you use the right process.)

    Note that these points are connected to the very fundamentals of how Open Source works, and should be seriously considered by companies that not merely want to ride the "Open Source Wave".

    However, if I were to buy a Trusted system from a company, I would make sure there was a contract that held them accountable. That would be one point in favour of proprietary software, I guess. I think it would be hard to find a company that wants to be held accountable for Open Source code (written by others) that they have merely certified.

    And lastly, always remember this: There's no such thing as 100% security. You cannot prove security, only prove insecurities or specific lacks thereof.

    So don't put your trust arbitrarily.

    - Steeltoe
  • by nhw ( 30623 ) on Friday June 23, 2000 @03:54AM (#981094) Homepage

    My assertion is that open source challenges the notion that you need a formal spec to develop trusted software. Much like the submitter of this story, I would hold up OpenBSD as an example of a system that I consider trusted, yet was not developed under any formal spec. Perhaps it's time to realize that formal specs help to get things done correctly, and they certainly help get things done quickly (by preventing, in theory at least, feature-creep), but they certainly are a requirement.

    I'm assuming you mean 'aren't' a requirement.

    You may trust OpenBSD, but many people won't. The US Government, in particular, won't; nor will any company/agency/whatever that defines 'trusted' in a way that corresponds with TCSEC or ITSEC.

    'Trusted' means a lot more than 'look, historically, it's got a great record' or 'we audit all our code'. Surely, these elements do form part of the equation, but there's a lot more to it than that: configuration management systems, proper specifications, and proofs of code against those specifications, a structured engineering process etc.

    TCSEC and ITSEC do define themselves, at least partly, in terms of formalisms. The same is true of DEF STAN 00-55 in the UK, and presumably of similar standards in the US. Quite often, what is required of a trusted system is a proof of the security of that system.

    Most of the systems that are developed for high levels of assurance under ITSEC/TCSEC are specified in highly mathematical notations that your typical UNIX hacker doesn't really have much interest in. The Certificate Authority for the Mondex smartcard system is designed to ITSEC E6 (which is roughly equivalent to TCSEC A1, for those more familiar with the Rainbow Books): the formal top level specification runs to 500 pages of Z.

    Even once you've got a specification to work to, you still have to implement it. Now, if proof of source code against specification is required, you can throw away your C compiler right now, because proving properties of C programs is a nightmare: you want a programming language with simplified semantics, with dataflow annotations, and an associated toolset. Something like SPARK Ada.

    Some open source Unices may have a good record for security, but I doubt they'll ever meet the higher assurance levels. Most of the people who enjoy working on open source don't have the skill set or enthusiasm for the sort of work required here. How many of you wince when I say 'formal axiomatic semantics'?

    Moreover, the customers for systems like these want to be able to hold someone accountable. I know that in the context of your typical company, this is an 'old hoary chestnut' and a much debunked myth, but the fact is that when the subject matter becomes sufficiently serious, support becomes a real issue, and the only way that companies _can_ sell is by standing behind their products.

    I'm not saying that a trusted system (in the current context) could not be developed open-source, but that there are obstacles:

    • The open-source development model does directly contradict most of the software engineering principles that are called upon in the development of trusted systems.
    • The lack of skills and interest among open source developers to get involved in things like IV&V, code proof, development of formal specifications.
    • The need for an entity to sponsor and support the code (preferably one with deep pockets).

    The unfortunate fact of the matter is that writing trusted code is quite hard, and often requires a different mindset from 'hacking'. OpenBSD may have neat features like encrypted swap, and an audited SSH component, but it doesn't have an FTLS, MAC, or (God forbid) object code MCDC testing.

    Cheers, Nick.

  • ... And you need to remember that his bread is (as I've said before) partially buttered by closed-source development, and that butter may be threatened by open-source code and economics. Also, beyond just his livelihood, open-source does not necessarily obey his notions of secure design and practices. Academics REALLY HATE when real-world practice doesn't fit into their theoretical structures.. So inconvenient!

    We can definitely learn from this man, listen to his experience and knowledge, steal his ideas, and write more secure software, but leave the obsolete preconceptions behind. Then again, he should know that the only reliable crypto is open/peer-reviewed crypto, and security in general needs to be scrutinized by many people of different talent areas to be of quality.

    Your Working Boy,
  • by Spoing ( 152917 ) on Friday June 23, 2000 @04:00AM (#981097) Homepage

    The following is dry, and opinionated, from the POV of an old-timer VV&T/QT/Tester.

    I'm big on specifications, and will argue both sides of a contract when a spec is violated. I've even been in a couple of shouting matches over them, fighting for the correct implementation rather than supposed "flexibility", though specs do need to be bent at times.

    Fortunately, the shouting matches are rare and, as a Contractor Scum(tm), I never take them personally... only as a bargaining point and to help stiffen the backs of those who are easily swayed. It's a shame when good projects go bad, but that's other people's money!

    Good specifications are invaluable in eliminating all sorts of conflicts and allow projects to actually end without different groups wanting to kill each other.

    Unfortunately, specifications are by necessity limited in scope. If it's not in the spec, it can't easily be added. If it's in the spec, it can't be modified easily.

    On a formal contract, adding in goals like "The system shall be fast" doesn't work well, so more detail is usually specified: "The system shall retrieve a query on the client stations within 4 seconds at all times".

    There's always a few details that slip by, and if the people on the project aren't reasonable the details will cause quite a few social and technical problems.

    Even relying on an outside specification is a problem... since APIs/protocols/... are usually vague on some level.

    The people who implement it and the environment have a much greater impact on the results; there will be good and bad free software / open source projects...as there are good and bad commercial projects.

    From what I've seen, I'll trust open source as much or more in most cases...but I'll test it first.

  • by GregWebb ( 26123 ) on Friday June 23, 2000 @04:01AM (#981099)
    I'm sorry, but you seem to be missing the point here.

    OpenBSD is a wonderful, secure system. If I had to trust that an out-of-the-box, off-the-shelf system wouldn't give me a security problem on my hypothetical servers, OpenBSD would be there in a second.

    What this guy is talking about is rather different. We're talking trusted functionality. Trusting that this software controlling your nuclear power station does exactly what it says. Trusting that your rocket will fly into space properly. Trusting that your ABS brakes will stop your car.

    Now, if we're being at all practical, this requires a tight, formal specification which can be effectively tested. You can't ensure that a system works properly if you don't know what working properly means, and you can't practically ensure that it will work properly if you don't have a tight, complete and agreed definition to work from. Anything else means you'll have to spend a long time chasing down problems, which may well turn out to be fundamental to a design technique used in the implementation of the system.

    Current 'open source' development styles simply do not permit this. There isn't any way to get this level of control, or even of proper design. Now, that doesn't mean that it's impossible to implement such software _as_ open source, merely that current methods won't work.

    Frankly, though, I'm not sure there's any real point. Open development works very well with consumer applications marketed at computer nerds such as ourselves. We're prepared to put up with problems to get the bleeding edge in a certain respect. Release early and often is clearly sensible, while there are plenty of people who are demonstrably prepared to use this incomplete, unstable software and help the developers make it complete and stable.

    Now, let's move this across to the field this guy's talking about. Let's imagine we're talking about the hypothetical ABS brakes on your equally hypothetical car. Release early and often becomes, to be perfectly honest, dangerous as it results in brakes whose functionality isn't certain. You can't be sure they'll stop your car. So, do you drive it? No. How do you find the bugs? Well, you can play with it on test cells, tracks, simulators and the like. But, how many people have them?

    Now let's imagine the final release has a bug in it - a major problem, but not totally impossible. Let's suppose you manage to spot what's causing the bug. Should the team running the project take your submission? Well, I can't say I'd recommend it to them. If you're coding a bugfix without total knowledge of the system, its specifications and design parameters - which is inevitable in this environment - the potential for an unseen effect is huge. They're better off getting their own engineers, who know the problem well, to reimplement it. That way, they can know that it won't produce an unforeseen consequence elsewhere due to inadequate domain knowledge.

    Releasing the source code for outside inspection may well help others to trust their code performs as they say it will, but it's not going to usefully do much more than that.

    We are talking about a problem space which few of us here will ever encounter. It's hugely different, and the same models aren't necessarily true. We aren't talking about trusting your WP not to crash or report your every word back to its makers, we're talking about trusting your nuclear power station not to go into meltdown. And for that, the current 'open source' development methods are wholly inadequate.
  • by Kaufmann ( 16976 ) <rnedal AT olimpo DOT com DOT br> on Friday June 23, 2000 @04:01AM (#981100) Homepage
    You've managed to miss the entire point of the discussion. As the original poster pointed out, this isn't about "enough eyes make all bugs shallow"; it isn't even really about bugs. It's not about "open == good && closed == bad", no matter how much faith the Slashdot crowd may have in this doctrine.

    It's about OSS not being developed in a careful, thoroughly planned and controlled fashion. It's about the fact that right now, no OSS system out there can be satisfactorily considered "trusted", not even the high-and-mighty, "look ma I'm secure" OpenBSD.

    I've pointed out elsewhere why a formal definition of the term "trusted" is important. In short, it's not enough to simply say "the general consensus is that it's secure, so let's just say it's trusted".

    (And your absurd overgeneralisation to the effect that all expert reviews are corrupted doesn't help your case one bit. FYI, in the Real World (i.e. outside of Slashdot), ad hominem is frowned upon.)

  • Every function, every procedure has a well-defined set of pre-conditions and post-conditions. What goes in the middle, and how many people are involved in the process, is all irrelevant.

    Your testing is all very well-defined. If the pre-conditions are violated, then your input is in error. Your routine should determine this (how is unimportant) and exit safely. To test this, run the routine in a test harness and see what happens when you enter "normal", extreme and erroneous data. (Ok, hands up! How many of you have heard exactly the same thing from Comp Sci lecturers, throughout school and University? Just cos they're lecturers doesn't make them wrong. At least, not all the time.)

    The next round of testing is the post-conditions. If your routine exits and violates a post-condition, then the logic in the routine is faulty. The case being tested is not being handled correctly, according to the specification. So you fix the routine. Big deal.

    Who defines the pre-conditions and post-conditions? That's simple. For IP-based networking, the IETF RFCs define those very nicely. For device drivers, the specs for the device do the same. Then, there are POSIX and UNIX98 standards docs. Since Linux is a mish-mash of BSD and SysV, the specs for both of these are usable.

    All in all, the problem with Linux is NOT that there aren't enough formal specs (there are plenty!) but that Linux doesn't have any widespread test harnesses, or databases of pre/post-conditions.

    If someone (anyone!) could come up with a way of plugging ANY Linux module into a test harness, and pound it with different cases, and then have the responses checked against the pre/post-condition DB to see if the module violated the specs, you'd have ALL the formal testing you'd ever need and auditing would become a breeze.
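
    A toy sketch of the shape such a harness might take, in Python rather than anything kernel-aware; the example routine, its spec, and its pre/post-conditions are all invented here for illustration, not drawn from any real pre/post-condition DB:

        # Toy sketch of a pre/post-condition test harness (all names below
        # are invented for illustration, not a real Linux tool).

        def run_harness(func, pre, post, cases):
            """Pound `func` with cases and check it against its spec.

            pre(args)          -- True if the inputs meet the pre-conditions.
            post(args, result) -- True if the output meets the post-conditions.
            """
            failures = []
            for args in cases:
                if not pre(args):
                    # Erroneous input: the routine must detect it and exit safely.
                    try:
                        func(*args)
                    except ValueError:
                        continue              # rejected cleanly, spec honoured
                    failures.append(("accepted bad input", args))
                else:
                    result = func(*args)
                    if not post(args, result):
                        failures.append(("post-condition violated", args, result))
            return failures

        # Example routine, specified as "return the integer square root of a
        # non-negative integer" (spec invented purely for the demo).
        def isqrt(n):
            if not isinstance(n, int) or n < 0:
                raise ValueError("pre-condition violated")
            r = int(n ** 0.5)
            while r * r > n:              # correct for floating-point error
                r -= 1
            while (r + 1) ** 2 <= n:
                r += 1
            return r

        print(run_harness(
            isqrt,
            pre=lambda a: isinstance(a[0], int) and a[0] >= 0,
            post=lambda a, r: r * r <= a[0] < (r + 1) ** 2,
            cases=[(0,), (1,), (15,), (2**31 - 1,), (-1,), ("junk",)],
        ))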

    Dibs Linux for B1 before Windows!

  • "Trusted Software" or trusted systems in general are supposed to meet a spcification that was written by the DoD in 1983 (which was an update to docs written in the early 1970's) that outlines appropriate guidelines for access to remote systems and embedded software.

    Specifically, it wants all access to all objects on the system to be fully logged, security to be leveled, and markings to be blatant, plus whatever else is in there that's too dry to read.

    However, remote access to systems is a job not only for the operating system of the host, but also for the network it runs on. Being that networks in 1983 were a little different than they are now, I would hope that a system that was meant to provide access to possibly classified data would rely on more than simply the security of the selected operating system, regardless of the openness of the sourcecode. The system in that case would take into consideration the OS, the firewall, and the network connection from client to server. The possibility probably exists to build a system based on Linux that could be trusted, but it would need to be spec'd out with "system" referring to more than just the OS of a host computer.

    When it comes down to it, would you really want the good name of Linux dragged through the mud by some military stiffs who can't secure the Army web servers?

    --m

  • Gene Spafford's argument seems to proceed simply as a tautology from his definition of "trusted", if I've understood the summary correctly. By this definition, a system is "trusted" if and only if a formal specification and testing process exists, and the system satisfies the spec and passes the tests. Thus it's trivially true (if we accept the definition) that a system that doesn't satisfy these criteria isn't "trusted".

    It is *not* true, however, that an Open Source system could never satisfy this definition. Set your formal specification and get your Open Source coders to work on fulfilling the spec. When they're done, then voila! A "trusted" Open Source system.

    Alternatively, one could reject Spafford's definition of "trusted", as not matching the intuitive notion of trust. One could reasonably argue that "trust", in the common and intuitive sense, can only be realistically achieved by extensive peer review. I would furthermore argue that satisfying a formal spec is not enough for trust, because a spec might fail to define a system that most of us would intuitively view as trustworthy.

    If all he's saying is that all we need is a spec in order for a system to be "trusted", without saying what such a spec has to be like, then I say he's wrong; because a spec can specify any kind of untrustworthy foolhardiness you imagine.

    To demonstrate this, I hereby give a formal specification of "trust": A system is "trusted" if and only if it crashes at least once for every six hours of operation. The formal test is: Run the system for sixty hours; if it crashes ten times or more, then it's "trusted". There you are, a formal spec and testing process. But hardly anyone could describe such a system as "trusted".

    BTW, although I'm disagreeing with him, I remember Gene Spafford with great respect as a major driving force in the early days of Usenet, and the author of an O'Reilly book on security. I'm a bit dismayed that so many Slashdotters don't recognize him.
  • by GregWebb ( 26123 ) on Friday June 23, 2000 @04:08AM (#981107)
    That sentence actually made me wonder if Cliff knows what the guy is talking about or not.

    If we approach this from the viewpoint of trusting that our OS will not crash or be hacked (or whatever) then this argument has some merit. But we're not, and this isn't really about commercial vs. community development. Trusted code, after all, normally means that we're talking a custom job for a particular application, rather than off-the-shelf.
  • Once there was a young monk who saw another young monk from a rival temple on the road.

    "Where are you going!" he said in a rude voice to the rival monk.

    "I am going wherever my feet take me" said the rival in a mysterious voice.

    This flustered the first monk. When the first monk went to his masters and related this, they beat him, and said "You fool! Ask him, 'What if you had no feet?'"

    The next day, the monk saw the same rival walking down the same road.

    "Where are you going!" he challenged.

    "I am going whereever the wind blows," replied the rival solemnly.

    This flummoxed the first monk, who was not expecting this. Again he related this to his masters, and they beat him, and said "Idiot! Ask him, 'What if there were no wind?'"

    Thinking he had this figured out, he lay in wait for his rival, who in due course came down the road.

    "Where are you going?", he challenged.

    "I am going to the market to buy vegetables" said the rival.

    ---
    The point of this is that when somebody is thinking on a deeper level and carefully factoring in your own methods of reasoning, he can defeat you. The Morris worm was a perfect example.

    I agree that open source is not a panacea, but neither is formal specification and testing. Paranoia suggests multiple and orthogonal methods for enhancing security, not relying exclusively on one strategy.

  • What does the way something is developed have to do with the final product (or a given release), and the tests performed on it? You are testing the product, not the developement environment, surely?

    Nope; that's not true actually.

    Probably more than half of the work that goes into the design and implementation of trusted systems has nothing to do with 'code', or how the final product tests.

    When you get to high levels of trust, you cease to regard testing as an adequate assurance. You want to see evidence that the system has been specified carefully and correctly, and that the code meets the specification. Needless to say, if you don't have a specification, the system can't be trusted.

    The most that testing can tell you about a program is that you haven't managed to make it Do The Wrong Thing. For high levels of trust, you want a proof that it can't. And, if Doing The Wrong Thing includes disclosing top secret data, missile launch codes or cooking a patient with gamma rays, then I'd bet you'd prefer the latter too!

    Cheers, Nick.

  • Is there anything preventing building automated test suites?

    Automated testing sucks. Don't think it's going to save you any time or catch defects any more reliably or faster. It's tempting, and can be useful, but it's usually a big waste of time.

    To do it right, you need to basically duplicate everything that the target program does, and verify its output. When the code changes, your scripts often break. Guess how many people on a project of 30 people will code that?

    Now, how many people will want to do automated test scripts on a project with often and early release schedule?

    Having said that, limited automated testing can be very valuable. Usually testing protocols or data manipulation...ideal to catch some security issues.

  • 'Formal testing is boring': Yes. It is.

    "Good chance for some company to make some bucks.": Yes, it can be!
  • An analogous situation:
    1. I release a spec, let's call it "One2000". It consists of only one line, specifically "In order to conform to One2000, a program must print out an endless string of ones." (I'd probably give the ASCII value for '1' too, just for the sake of completeness.) I register One2000 in all the necessary ways.
    2. My friend J. Random Hacker writes a program, let's say in Perl: print '1' while 1;
    3. I (trivially but formally) test JRH's program and conclude that it does, indeed, conform to the One2000 spec. I thus brand this program "One2000-compliant."
    4. Should print '1' while 1; be considered a 'trusted system'?

    My (richly illustrated) point is, it's not just about getting tested. There are specific formal procedures involved. It's not as simple as it's made out to be.
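
    For what it's worth, step 3's "formal test" really is that trivial; a sketch of it in Python (the One2000 spec is invented in the list above, and the check assumes a perl binary is on hand to run JRH's program):

        import subprocess

        # Trivial "formal" One2000 compliance test: the spec says the program
        # must emit an endless string of '1' characters, and "endless" can only
        # ever be sampled, so we check a finite prefix and call it a day.

        def one2000_compliant(command, sample_size=4096):
            proc = subprocess.Popen(command, stdout=subprocess.PIPE)
            try:
                sample = proc.stdout.read(sample_size)
            finally:
                proc.kill()
            return len(sample) == sample_size and set(sample) == {ord("1")}

        # J. Random Hacker's candidate implementation from step 2:
        print(one2000_compliant(["perl", "-e", "print '1' while 1;"]))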

  • I'd argue that OpenBSD =WAS= written to a formal spec. It's clearly written to the BSD4.4-lite spec or it wouldn't be BSD! Also, "formal spec" is a loose term. To me, the phrase "Any user can use only services they are explicitly authorized to use" is a formal spec. After all, you're defining a pre-condition (input: valid(user), valid(service)) and a post-condition (output: true if allowed(user, service), false otherwise).

    There are no other conditions which comply with the phrase, and those pre/post conditions completely define all cases.

    Thus, OpenBSD's aim to be secure out of the box, where secure is defined as preventing unauthorized access to services or data, IS a very formal spec and one that can be tested against.

    The problem comes when managerial types define "formal specification" as something with a number attached to it, ISO 9000 compliant development, a rigidly-defined hierarchy of management, an EULA, Powerpoint slides, and a cheese-and-wine party.

    NONE of these have anything whatsoever to do with formally specifying anything. Least of all the ISO 9000 stuff. ISO 9000 is the least formal doc I've read in a LONG time. It's worse than useless.

    A formal specification is simply that. A specification of what goes in, and what comes out. THAT IS ALL! Specifications say NOTHING about implementation, they are simply definitions of what a given black box does.
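
    To make that concrete, the one-line spec above can be written down as executable pre/post conditions; a toy sketch, where the authorization table and service names are invented and only the shape of the spec matters:

        # The one-line spec above as executable pre/post conditions. The point
        # is that the spec pins down the black box's behaviour without saying
        # anything about how it is implemented.

        AUTHORIZED = {            # explicitly granted (user, service) pairs
            ("alice", "ssh"),
            ("bob", "ftp"),
        }

        def allowed(user, service):
            """Post-condition oracle: True iff access was explicitly authorized."""
            return (user, service) in AUTHORIZED

        def grant_access(user, service):
            # Pre-condition: valid(user) and valid(service).
            assert isinstance(user, str) and isinstance(service, str)
            decision = (user, service) in AUTHORIZED   # any implementation will do...
            assert decision == allowed(user, service)  # ...as long as this holds
            return decision

        print(grant_access("alice", "ssh"))   # True
        print(grant_access("alice", "ftp"))   # False: not explicitly authorized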

    This notion that software must have formal specs and testing in order to be trusted is similar to the notion in the manufacturing industry that you must be ISO 9000 certified in order to be trusted to produce quality products. ISO 9000 is said to be a quality assurance standard, but it is basically a rigid certification process to show that business processes are specified and standardized. In other words, it doesn't mean the manufacturer makes good parts. In fact, it makes no measurements or judgements with regard to actual product quality. It just means that the company uses the same design process, the same production process, and the same testing process for every product, and that all of these are specified and audited. It's an assurance that no one does anything except "by the book". The book may be a recipe for quality or not, but they are certified to be following it regardless. So if the manufacturer makes products of a certain quality, you can trust that future products, designed and built with the same processes, will have similar quality. A competitor may build superior products, but lack the same rigid procedures (maybe they trust the intelligence and experience of their employees), and therefore not be certified.

    ISO certification is a great comfort to purchasing managers who prefer trusting a bureaucratic process to trusting individuals or their own judgment. This is similar to companies that require job candidates to have a certification, rather than independently assessing the candidate's skills and knowledge. It's not meant to be an equal opportunity process. It's meant to be a discriminating process that allows a decision to be made without anyone having to exercise their own judgment and thereby risk their reputation. It means you aren't risking your own reputation, and you don't trust anyone else's.

    The fact is that making judgments of quality or security is difficult, and most people are lazy. Nothing prevents anyone from coming up with a way to measure security and putting OpenBSD to the test. This mentality, however, would make it an imperative that the software be retested with every change. And what assurance is there that the test itself, or the person conducting it, can be trusted? ("We'll need to see some certification").

    Certain types of people will always have greater confidence in the legal recourse to hold someone else accountable than they have in their own judgment. It's just too bad that those people are given so much credit.

  • I like Schneier (note the two 'e's) as much as the next guy.

    But what we're discussing here isn't just whether a system is secure: it's whether it's trusted, which has a very specific definition. That's the entire point. Is OpenBSD, by itself, secure? Very much so. Is it trusted? Nope.

  • Ahhh, I see the New Jersey mentality is at work again. "If it works (however marginally), why bother getting it right?"

    No wonder so much software sucks. I wonder how long a civil engineer would stay out of jail if he went by "if it hasn't fallen down so far, why should we bother making sure that it won't?"

  • by alexhmit01 ( 104757 ) on Friday June 23, 2000 @04:27AM (#981134)
    Trusted Systems refers ONLY to the spec. The spec must match a certain criteria, and then the OS is designed and tested to that criteria.

    Remember the NT C1 (now C2) compliance thing? Because NT's design happened to include some of the elements of the C2 definition, they were able to come up with a configuration that could be trusted. Not bugfree or secure, but trusted. (Note, NT's C2 security used to involve no NIC, but I think they fixed it).

    OpenBSD is really fucked secure, but isn't designed to the spec, doesn't include the ACLs and other stuff needed for DoD compliance, etc.

    Neither does FreeBSD, but remember the Trusted FreeBSD [slashdot.org] project? They are trying to make a B1 compliant (trusted) BSD based off FreeBSD.

    Also, Operating Systems are not inherently trustable. It is the entire system that earns a security rating. It largely involves a fine tuned control of file access, but not a fine tuned fixing of bugs.

    Alex

    Yes, I picked NT as my "trusted OS" mostly because it will generate the stupid /. effect of "waah waah waah, NT is horrible, you must be a moron" the required "you are the stupidest ass in the world" and other stuff like that.

    Get a grip kids! OSes are NOT the end all and be all of life. Further, drop the prejudices. I am an MCSE, that does not make me a moron, clueless, or a lemming. Indeed, just because all the MCSEs that you know are dumb does not mean that they are dumb BECAUSE they are MCSEs. Some of us happen to be very competent administrators, and happen to have a certification. Some of us actually learned our shit to get it, instead of just memorizing study guides.

    Quit hurling insults at people who sometimes disagree.
    Alex

  • First, the big fallacy in his argument: Open Source is not the Bazaar! Specifically, there is nothing to stop a person from getting a group of programmers together, writing up a formal spec, paying them to write code, paying a third party to do a formal audit and testing it against formal standards, and then releasing it under an Open Source license. It wouldn't cost any more than the analogous proprietary effort he's presumably advocating.

    Secondly, you can use the Bazaar to reduce the cost of developing such software. Take a Bazaar-developed program that does pretty much what you want it to, draw up a formal spec, pay some programmers to audit the program and bring it up to spec, and then go through the formal testing. If you pick wisely, this could greatly reduce the cost of such development, and it should comfortably meet his definition of "Trusted".

    Thirdly, I question whether formal specifications improve security; granted, that's no consolation if you're working for an organization that requires it. Formal specification merely means that some thought went into design before the program was written (or modified in the above example). It is very difficult to test a specification for security problems, for obvious reasons. Also, it is easy to write a program that matches a formal specification, yet introduces subtle security holes.

    Fourth, there is no reason why you can't perform formal testing (for what it's worth) on any piece of Open Source software, Bazaar or not. Then you have a formally tested program. You must refrain from upgrading without running through the testing procedure again; just like proprietary security software.

    The bottom line is, if you want formally trusted software, you've got to spend some money. Open Source will not prevent this, Free Software will not prevent this, neither the Cathedral nor the Bazaar will prevent this. That doesn't mean they should be excluded from consideration.

    ----
  • by EnderWiggnz ( 39214 ) on Friday June 23, 2000 @04:30AM (#981136)
    The problem with specifications is exactly their benefits.

    When WYSIWYG interfaces appeared, someone pointed out that not only is What You See What You Get, but What You See Is ALL You Get. While you can see on the surface what the document or page you're creating looks like, that's all you're going to get: nothing else, no more, no less.

    Take a standard graphics program like the GIMP, and compare it to POVRay. The GIMP's WYSIWYG interface is really slick, but with POVRay you can create ray-traced images that would be next to impossible in a WYSIWYG environment - you just don't get to see the exact creation before it's done.

    With programming, a formalized, structured process ensures that the program will give you what you want, but it will never provide more than that.

    True "Innovation" will never occur. No one may spot the flaw in the security model, no one may realize that 40-bit encryption is a bad way to protect DVD's from being copied, no one may predict that a Record-Industry defined "secure" file will only be effective for a couple of .. minutes?

    But setting goals while allowing the programmers large amounts of flexibility lets you deliver not just programs that meet their requirements, but products that truly meet their needs.

    And trust me, clients' requirements and their needs are almost always two completely separate things.

    Use Gnumeric as an example - the author did what? Copied Excel. What did Excel do? Copied Lotus. And Lotus? They copied VisiCalc.

    But the question remains: these products are meeting requirements, but are they really meeting the NEEDS of the people that use them? Couldn't someone have thought of a better way of setting up a spreadsheet? Making the formulas? Hell, why are there formulas at all?

    But I'm sure you get the point.

    Linux may not have the strict methodology that modern business management and Quality Testing require, and that is exactly why it ends up with higher quality products in a shorter amount of time.


  • Open source can be more trusted than closed source.

    Formally designed and reviewed software can be more trusted than chaotically assembled software.

    These seem orthogonal to me (and to each other! wakka wakka wakka!).

    Not really. Remember, most of the internet was built on "formally designed and reviewed" "open source" software. Look at the BSD development cycle, for example. It is much more controlled than, say, the release-early-and-often-new-micro-version-every-five-minutes Linux world (although significant kernel changes have to be funneled through Linus anyway, save the flames). Open Source != chaos. I think you will find that a lot of the major open source software that has really built the community and led us here was built slowly and steadily with lots of peer review, against a standard, by everyday engineers...not some random kids moonlighting on opposite sides of the globe.
  • by YASD ( 199639 )

    Call me chaotic
    Formal specs are safe but dull
    While I live, I hack


    ------
  • Find an automated test environment that _works_.

    The trouble is, you can have something like Lint go through and say 'that looks dodgy'. But you still can't prove that the code is doing what you want.

    Our company has an in-house testing tool. We write long (LONG!) lists of test scenarios in Excel, then use the tool to check that the outputs of each function are what we expect. This has 2 flaws: (1) the tests may themselves be wrong, or (2) the design requirements for the function may be wrong. If the tests are wrong, either a bug can get through or a valid bit of code gets flagged up as being wrong. If the design is wrong (eg. the design asks for us to add 1 instead of subtract 1) then it'll be coded wrong too, and we'll never see it.
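
    Stripped of the spreadsheet plumbing, that kind of scenario-driven check boils down to something like the sketch below (the function, the rows, and the expected values are invented for illustration); flaw (1) is visible immediately: the table is only as good as the person who filled it in.

        # Scenario-table testing in miniature. Each row is (inputs, expected).

        def rpm_correction(target_rpm, actual_rpm):
            """Unit under test: signed correction, per the (hypothetical) design."""
            return target_rpm - actual_rpm

        SCENARIOS = [
            ((1000, 900), 100),
            ((900, 1000), -100),
            ((0, 0), 0),        # if this expected value were wrong, flaw (1) bites
        ]

        for args, expected in SCENARIOS:
            got = rpm_correction(*args)
            status = "ok  " if got == expected else "FAIL"
            print(f"{status} rpm_correction{args} = {got}, expected {expected}")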

    So the layer above this is to put all the functions of a module together, and then do some more higher-level tests on that. And then put all the modules together, and do some more tests on that. And you've got a pretty good chance that your code is right after all that, but you're still not sure.

    As for automatically generating the tests - where from? Do you have an explicit, computer-parsable set of requirements for your project? Have you ever seen, or even heard of such a thing? Ever? I'm afraid it just doesn't exist. The design is done by humans, and the test scenarios are extracted from the design by humans. Humans are fallible. Shit happens. The best you can do is test as best you can. And if you're thinking about getting the test scenarios from the code - well, d'oh! if you're testing the code with tests extracted from the code, nothing's ever going to fail! :-)

    We're writing safety-related and safety-critical software for car engine controllers. We're going on to other stuff like drive-by-wire. We can't afford errors, so there's oodles of testing. But shit does happen, so all we can do is say "well we tested it the best we could". Nothing is ever 100% safe, and 100% testing is provably impossible. So the problem is getting folks to make sure the designs are right, then that the code meets the designs. Open-source seems to fulfill that requirement pretty well. The problem is for older things like 'sendmail', where a design doesn't actually exist - newer projects like Apache would (I imagine) have proper design documentation.

    Grab.
  • by Tower ( 37395 ) on Friday June 23, 2000 @05:07AM (#981160)
    Hmmm, well - you can easily test if the networking components of open source software live up to the RFCs that they are designed to meet. Sounds like a bunch of specs to me. A description showing overall protocols and specific situations, with specified function and response to stimulus, sounds like a pretty good control document to me.

    The HTTP RFCs are a good example.
    This is what you MUST do:
    This is what you SHOULD do:
    This is what you CAN do (if you feel like it):
    This is what you MUST NOT do:

    Pretty cut and dry, and rather effective (the web works, don't it?).
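
    As a tiny concrete example of testing one of those MUSTs: RFC 2616 requires a server to answer an HTTP/1.1 request that lacks a Host header with 400 (Bad Request). A sketch of a check for that single clause - www.example.com is just a placeholder target, and a real compliance suite would cover hundreds of such clauses:

        import socket

        # One MUST from RFC 2616, section 14.23: a server MUST answer an
        # HTTP/1.1 request that lacks a Host header with 400 (Bad Request).

        def must_reject_missing_host(server, port=80):
            with socket.create_connection((server, port), timeout=5) as s:
                s.sendall(b"GET / HTTP/1.1\r\n\r\n")      # deliberately no Host:
                first_packet = s.recv(1024)               # enough for the status line
            status_line = first_packet.split(b"\r\n", 1)[0]
            return b" 400 " in status_line

        print(must_reject_missing_host("www.example.com"))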

    I'd say that most Java implementations (from most companies) would fail full compliance to the Java spec (well, at least some of the Java specs... there's so many these days).
  • (Note, NT's C2 security used to involve no NIC, but I think they fixed it)
    Just as an aside, there is significant evidence to the effect that Microsoft paid for their C2. Specifically, the people originally hired to do the C2 evaluation later quit and went public with the information that Microsoft attempted to bribe them into certifying NT. Why did you not hear about this? The same reason you probably didn't know that www.microsoft.com.br was defaced over Memorial Day weekend (that story was only available in Portuguese): Microsoft has considerable weight with the media in this country.


    Now on to more important topics... If open source is by definition an untrusted system, then why did the NSA contract a secure version of Linux? I can't think of the name of the company off hand (it's a firewall company out of California) but I know that the NSA contracted them to build an NSA certified secure system, for use by the NSA (and later everyone else) based on Linux.

    And BTW: Alex, before you start whining that I'm just an anti-NT zealot, I am also an MCSE and while I agree that being an MCSE does not make you a moron, I would hope you realize that Microsoft does NOT test for the administration skills you really need.

  • There are two ways to get a trusted system, really.
    1) You can have a spec and test it exhaustively *yourself*, or *trust* that the coder did so.
    2) You can have no spec and trust the hundreds of people who are more paranoid than you and actually read the code.
    1) = MS-type products. 2) = OS-type products. Neither model is perfect. Neither model is infallible. I know which one I *trust* more though...
  • A trusted system does not necessarily mean that it is secure.

    I can specify a (software) system in a formal way using formal languages like Z or TROLL (TROLL really is a formal specification language, honestly). But this does not mean that it has to be more secure, it only means that I am able to define the system's behavior *exactly*, so that the resulting software system behaves in a predictable way.

    This is working fine in theory but to my knowledge there are no working (as in usable) tools to automate the derivation of the software system from the formal specification. So you are still left with several gaps in the design/implementation process.

    Proving your specification is simple: transform your spec into predicate logic clauses and do a resolution :-) (the transformation produces an awful lot of clauses so your complexity grows *very* fast).

    Until this process is fully automated, formal specification and *verification* (!) is too difficult and cumbersome for widespread use.

    So this talking about trusted/untrusted systems is completely unrelated to any Open/Closed Source security debate.

    +++ after all, it's just my thoughts +++

    bye,
    blurred
  • by frost22 ( 115958 ) on Friday June 23, 2000 @05:39AM (#981173) Homepage
    Dr. So-and-so
    Oh man ... you just had to make yourself an idiot, didn't you? Gene Spafford has been around (on the net, and specifically in the computer security field) longer than most of you have known computers, let alone heard the word "Internet". He is about the last person someone with a grasp of net.history might call "Dr. So-and-so".
    is defining trusted as being designed to a formal spec. That definition is constructed, whether the intention is there or not, in such a manner that he is right
    Well, Spaf is one of the foremost experts in his field. So, you redefine "Trusted Systems" - thereby disagreeing with him - and your credentials are ... what ?
    Under that definition, Open Source cannot be a trusted system
    Others in this thread have already shown this to be false.
    My assertion is that open source challenges the notion that you need a formal spec to develop trusted software.
    My assertion is that you don't have the faintest idea what you are talking about

    f.
  • Trust has nothing to do with whether you trust the software to work correctly or not. Trust has a very specific meaning when it comes to secure systems. Most likely he has the DoD Orange Book in mind. As far as I know, none of the free OSes have an Orange Book security rating (that's what MS is talking about when they claim NT is C2 certified). The reason for my thinking this is that it costs a considerable amount of money to get a system certified in this manner. Up to C2, the amount of documentation needed is not so burdensome that Open Source couldn't pull it off. Above that (B1, B2, B3, and A1) you need a mathematically provable design and specification. You need huge amounts of documentation and checks at all points of development. It is possible, but highly unlikely, that Open Source could achieve B1 or B2. IBM managed to achieve B1 back in the 80's with a particular version of MVS and RACF. Only one system (at least as of the early 90's, when I was writing a paper on the subject) has ever achieved an A1 level. The joke was that the system got its rating by never remaining up long enough for anyone to do anything with it. Here are a couple of links for more information regarding the Orange Book:
    http://www.multics.demon.co.uk/orange/index.html
    http://jcs.mil/htdocs/teinfo/directives/soft/ds5200.281.html
  • 3) The system spec is designed, with clear interfaces, I/O limits, and the like. The product is developed open-source style to that spec, with some folks building the test suite and tools.

    Source + Spec + Test
    I think that might be even better.

  • But if you notice, the NSA contracted someone to build a secure Linux. That secure Linux MIGHT be Free Software (or internal only), but it wouldn't be developed as open source.

    Free Software - Software available to all, respecting the rights RMS believes are important

    Open Source - Source available, can modify source, etc., not as many guarantees as free software

    Developed as Open Source - Bazaar approach, many eyes, many contributors, anyone can submit code, roll their own patches, pressured to accept all patches, chaos, etc. Produces some really neat stuff, but it is totally chaotic.

    There is a distinction there. The "Open Source Model" of millions of programmers cannot produce something to a strict spec, as that requires careful management, coordination, etc.; the Mythical Man Month still holds.

    Anything can be made Open Source/Free Software. There is a difference. Being Open/Free merely involves licensing; Open Source Development is the chaotic model that Mozilla, Linux, and a few other projects function with (Apache?). Others have been more controlled (the BSDs, XFree86, GNU, etc.) while still being made available as Free.

    On the NT thing: I'm not whining about it, I'm getting sick of the nonsense I see here. Microsoft's tests don't test much, they test a bare minimum. However, I'm sick of being called a moron because I know systems other than Linux. I routinely use Linux and NT, some Solaris, and am learning FreeBSD and OpenBSD; unfortunately I'm just not smart enough to realize that Linux is all you need. :)
  • I think you responded to something that wasn't in my original message. Take a look below and feel free to hit me with a clue stick if you have a legitimate gripe not voiced in your reply.

    Yes, things should be properly designed prior to coding. Yes, testing should be an early consideration prior to coding. Manual -- and implementation independent -- test scripts will be created before any code on good projects.

    1. Sidebar: Are you talking about mixing testing and design with coding? That's a bad idea.
    Theoretically an automated test could be created prior to coding. In practice, nobody does that unless they have a very limited scope for the tests they want to perform...or if they want to waste an amazing amount of time!

    The automated test scripts I've created, using over a half dozen different tools, all have the same problem:

    1. Scripts are highly dependent on the actual code -- the implementation. Change anything, and the test scripts I've created in the past have almost always required modification just to get them to push data through the system. Checking the results at each step is an entirely different level of complexity.

    I haven't been able to create scripts prior to code being delivered unless _I_ wrote the code. Mixing coders and testers in the same group is just a bad idea on multiple levels. If you don't know the objections, I'm not going to tell you...ask a few people you respect.

    Remember, you're validating the spec, not how the spec is implemented. How do you, in an automated fashion, track deviations unless you limit yourself to IO?

    The only exceptions are when you focus on data and protocols; does the input x always result in the specified output y? Works for files and data regardless of volume, though how long a test takes can result in different results.

  • Well, if you want a software system to be secure, then there's a lot
    of components you have to trust, like OS kernels and compilers. No
    one designs and implements these on a per contract basis, so everyone
    depends on properties of generic tools.

    Spafford surely is right: security can't come just from being open
    source. But I think being able to look at the relevant source code is
    a very powerful advantage when trying to design secure systems.

  • So far as I know, no one has ever proven a usable compiler correct. So a fully formal proof is just not feasible from an engineering standpoint.
  • Nice post, but I'm not sure what you mean when you say the `open
    source development model does directly contradict most of the software
    engineering principles that are called upon in the development of
    trusted systems'. Do you have a specific contradiction in mind, or
    are you just making an assertion about hacking culture? The latter,
    I think, is as irrelevant as an analogous generalisation about most
    commercial software development would be.

    PS. I note your email address is in Oxford: are you a member of
    Roscoe's group?

  • To me, the phrase "Any user can use only services they are
    explicitly authorized to use" is a formal spec.


    Does that mean you see no ambiguity in the phrase `explicitly
    authorised', or that you think that any way of disambiguating it is
    equally good?

  • I think this man is more concerned with developing a secure world than with profiteering off a particular economic model.

    I agree, my issues are largely related to the academic vs. real-world differences. OSS works pretty well in the real world, but it doesn't fit into a traditional academic perception of security design (though peer-review of code and algorithms _is_ a standard tenet of secure design). Perhaps it was unfair to put the caveat at the top of the note, it should not have conveyed that much emphasis, but I still stand by my belief that Dr. Spafford may have his objectivity clouded somewhat by institutional oldthink and, possibly, by conflicting interest.

    However, as an earlier post pointed out, Spafford's lecture seems to be aimed at a controlled, replicable, process (gee...that sounds like software engineering).

    Yes, that's why I said (in that earlier post [slashdot.org] I referred to) that while we can agree that software designed by small teams of competent designers and coders makes for stronger, more secure software, that isn't the actual point. The point is, in the real world, we have to deal with all kinds of software from different companies and groups, and often the decision process for selecting software isn't the most logical or objective process. So how do we live in that world? Dr. Spafford puts forth a view of a software world in which we'd like to live, my thing is we don't live in that world, it's not coming anytime soon, so how the hell do we deal with what we have to deal with, while doing our best to improve matters when we can? Again, it's academic vs. real-world. Both can appreciate the Right Thing, but you can't always get it in the Real World, for a variety of reasons. It's a respectful difference of opinion, not a flame.

    Aside from the fact that he gives his information out for free (it's paid for by research money but donated to the community at large), it's illegal, immoral, and, from a socio-economic point of view, inappropriate to steal.

    Good artists borrow, great artists steal. And smart artists properly attribute.


    Your Working Boy,
  • It would be nice to know what the requirements are of this spec. If the spec were or is an open spec it would then be possible to make Linux, OpenBSD, or any other OS abide by this spec.

    I think that it is important to understand, as some have pointed out, that trusted does not mean secure. This is one of those English-language semantic things. Trusted means that they have a set of requirements that this OS / program is supposed to do, and it does them. Basically they can trust that if this happens, the software will do what they wanted it to do. Secure means that if someone tries to hack in or exploit a buffer overrun then the system won't let them. OpenBSD is secure, NT is trusted. This does not mean that NT is not secure, it is in its own right, but I won't get into that. It also does not mean that OpenBSD cannot be made to be trusted either.

    The advantage to them of Open Source would be that if the system is not trusted, they have the source and can make or have the system made into a trusted system.
    Wouldn't it make more sense for them to take a secure system and make it trusted as well?

    Just my $.02, but I think that the government needs to start rethinking some of its policies for doing things.

    send flames > /dev/null

  • the man is talking about formally-specified and verified systems, where everything is specified in some mathematically-based specification language, and then the implementation is demonstrated to have desirable properties in relation to the specification. Chaotic, bazaar-style development doesn't match with this style of development at all.

    However, this kind of formally verified system is extremely costly to develop, extremely difficult to adapt to changing circumstances (and retain the verified properties), and still doesn't guarantee that it does what you want it to do - mistakes in the specification or mistakes in the verification process are just as likely as mistakes in coding.

    Frankly, for 99.9% of the software written in the world, this kind of thing is utterly impractical and will remain so. I don't mind consigning the remaining 0.1% to cathedral-style approaches (though open source can still help spot bugs that the verification doesn't catch).

  • Neither open source nor closed source by itself makes a system secure. There are way too many systems, both open ones and closed ones, that are swiss cheese, to even think that security can derive from either mechanism.

    With regard to security, what open source would allow you to do is verify whether or not a given system truly is designed to a rigid trusted specification. None are now, and OpenBSD seems to have the greatest potential. But I'm absolutely NOT going to trust the security of a closed system just because the marketing folks, and their hired security consultant, says it's perfectly secure.

    The question I think is this: If you do have a system you believe is truly secure, does making it open source compromise that security? I believe that if it did, it is flawed in design and can't possibly meet spec.

    Where closed systems get an advantage with regard to security is when they are really not secure. That advantage gained is a delay between market introduction and discovery of its insecurity (perhaps by reverse engineering).

  • The problems:
    • Security cannot be achieved by debugging.
      Read the decade of CERT advisories [cert.org] for Sendmail and BIND to convince yourself of this.
    • Linux, and UNIX, contain some terrible basic security-related design decisions.
      The notion of "root" is bad enough, but "set-UID to root" is worse. This results in far too much code being trusted. In particular, it should be impossible to run non-trusted code as root. This means no root log-ins, for example. In a secure system, as your privileges go up, the amount of software you're allowed to run goes down. In a sandbox, you can run anything. As administrator, you can only run a few tools that do very specific things with lots of checking. This is completely alien to UNIX.
    • Fixing known holes only protects against the inept.
      A serious attacker will find their own holes, and will keep quiet about them until they break in and steal something. Fixing known holes protects against script kiddies.
    • If you don't have a security model you can write down on a small piece of paper, you can't enforce it.
      DoD has a simple security model, which is reasonably enforceable. See the Orange Book. [ncsc.mil] There's Linux support for it. [rsbac.de] You take a performance hit and can't run some popular software. But in that direction lies real security.
    • Discretionary security doesn't work.
      With discretionary security, users can turn off security. "chmod 777" is the usual way. With mandatory security, if you're processing SECRET information, nothing you or your programs can do makes it non-SECRET. The problem with discretionary security is that it's extremely difficult to tell if the system is in a secure state, and it's very easy to make a change that opens a security hole. (A toy sketch of the mandatory style follows after this list.)
    • Secure systems are no fun
      OK, you've got a secure system. Want to run Napster? No way; it's acting as a server and transmits your files. Want to download a game? If it will run in a sandbox, OK, but it can't talk to other players. Running a web browser may be OK, but the browser will be shot down if it tries to launch MS Word to read a .doc file. And the browser will need some work to live within the restrictions of a secure system.

    A secure open-source system is quite possible. But it won't be Linux as we know it.
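
    For anyone who hasn't met mandatory access control, here is a toy sketch of the "no read up, no write down" label check that the Orange Book family is getting at; the levels are standard, but everything else is invented, and real implementations (e.g. the RSBAC patches linked above) are vastly more involved:

        # Toy mandatory access control check, Bell-LaPadula style: subjects may
        # not read objects above their level ("no read up") and may not write
        # objects below it ("no write down"). Unlike chmod, nothing the user
        # does can relax these rules.

        LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

        def mac_permits(subject_level, object_level, operation):
            s, o = LEVELS[subject_level], LEVELS[object_level]
            if operation == "read":
                return s >= o       # no read up
            if operation == "write":
                return s <= o       # no write down
            return False            # anything else is denied by default

        print(mac_permits("SECRET", "TOP SECRET", "read"))     # False: read up
        print(mac_permits("SECRET", "UNCLASSIFIED", "write"))  # False: write down
        print(mac_permits("SECRET", "SECRET", "read"))         # True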

  • I see no ambiguity in it, except "contrived" ambiguity. (You can argue that I didn't say WHO was doing the authorising, but I don't believe that formal specs have any business moving into lawyerese, in a bid to meet every possible context that a statement can be interpreted in. To me, it is clear that the authorising agent is the person or organisation responsible for that service on that system, and no other. To me, that's straight logic. Remember, computers think in logic, not double-speak. Computer programmers are there to tell computers what to do, clearly, concisely, and logically. Thus, logic makes much more sense than looking for hidden meanings or coded interpretations.)

    When a statement genuinely IS ambiguous, but has been given as the pre/post condition pair, then ALL interpretations which comply EXACTLY with that condition (both in the positive and negative) are equally valid.

    Say you have the statement "data must be kept on disk 1". Then the converse of that is "!data must be kept on !(disk 1)". By applying both these, you can conclude EITHER that data is restricted to disk 1, and everything else is restricted to disks other than 1, OR that data is restricted to disk 1 and that everything else can go on any disk. But since your pre/post-condition says nothing about anything other than the data, then as far as that condition is concerned, the "everything else" could be on Mars. In this case, logic would argue that there's no implicit interpretation, so it really doesn't matter.

    The way I would always go, though, in ALL cases, is take the most restrictive interpretation of the pre- and post-conditions. You can always loosen up a bit, but it's a royal pain to tighten things afterwards.

    Thus, using this axiom, the first conclusion would be the one to use.

  • Are you using automated testing across an entire project? If so, and you find it beneficial, tell me how. I've only found limited application of it, and it required quite a bit of maintenance to get those results.
    1. Scripts do not "break" as they are updated with the new changes before testing the next release.

    This doesn't sound like the earlier posts. Script changes after the specifications are created isn't the same as scripts that are made once and run throughout the project. Making updates can suck up an amazing amount of time.

    Are you talking about testing at the end of a cycle, ongoing, or both? The projects I tend to do usually have a GUI-intensive part that covers a few hundred forms plus related specialty screens. That part doesn't work well with automated tests. The backend parts do, though, since the interface to them tends to change very little.

    For me, testing starts with a formal test plan (from the spec), occurs constantly, and the remaining time is used to plan for the milestone releases and do documentation. Automated testing is time consuming and isn't worth setting up for most rapidly changing projects. In limited parts, yes, across the whole project noooo.

    1. However, if you're talking about coders testing their own project, then I agree.

    Yes, definitely. Each group talking as early as possible and hashing out the details is definitely beneficial.

    I wouldn't dare create a script for something that's changing on a regular basis unless it were small or I had a big staff (about ~1/2 size of development group).

    If you get away with this kind of thing and don't drive yourself mad, I'd like to know how!

  • If you cannot see the source code, then all the formal design or testing amounts to nothing. At best, you have someone's guarantee that the system is secure.

    Also, empirical tests are insufficiently strong to prove anything. You can test the binaries ad nauseam and not find every security flaw.

    So basically, this is just another bullshit attack on free software.
  • In the ABS example, simply run rigorous tests of the system to ensure that it behaves properly in all conditions. If it fails and the car smashes into a wall at 50 mph or the brakes lock up, no, of course you can't trust the system. If everything operates correctly, what the hell is the problem?

    The problem is that you aren't sure that your tests cover all the cases and that you haven't left anything out. More importantly, you aren't sure exactly how the system should react in all situations. For instance, what should the maximum response time of a component of the system be? If you don't specify this then you can't test it to make sure the system responds correctly. BTW, you need specifications like this in order to make sure the system can handle the environment it's in and doesn't, say, start applying the brakes too late.

    Basically what the formal spec does is let you determine how much testing you need to make sure that you've covered all the cases and that the program will respond the way the spec says it will. Personally I would prefer software that went through this sort of procedure rather than the first public release of some open source program controlling the brakes on my car or the controls on the plane I'm flying in.
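
    As an illustration of what "specifications like this" buy you, here is a sketch of a test against one quantified requirement; the controller stub and the 5 ms figure are invented, and real safety work would use worst-case execution-time analysis rather than wall-clock sampling:

        import time

        # Sketch of testing one quantified requirement: "the controller shall
        # produce a braking decision within 5 ms of receiving a wheel-speed
        # sample."

        MAX_RESPONSE_S = 0.005

        def braking_decision(wheel_speeds):
            # Stand-in for the real control code: flag likely wheel lock-up.
            return min(wheel_speeds) < 0.9 * max(wheel_speeds)

        def meets_timing_spec(samples, trials=1000):
            worst = 0.0
            for _ in range(trials):
                start = time.perf_counter()
                braking_decision(samples)
                worst = max(worst, time.perf_counter() - start)
            return worst <= MAX_RESPONSE_S, worst

        ok, worst = meets_timing_spec([20.0, 20.1, 19.8, 14.0])
        print("within spec:", ok, "worst observed (s):", worst)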

  • Is trust built on experience any less valuable than trust built on formal methods

    An excellent point. It seems as though many of the other threads grasp at this concept without saying it explicitly. In the case of OSS we have people constantly using viewable code. With so many coders making use of said code day in and day out, if something is wrong, it is eventually found and fixed. The proof of reliability can be found in the number of people who have used the code without problems.

  • by deno ( 814 )
    His assertion is that Open Source systems such as Linux are developed in too chaotic a system to ever reach a trusted state.

    It may be true that a lot of free software starts its life chaotically. However, claiming that big, successful projects are developed chaotically is complete nonsense. The process of specifying the direction in which free software moves is different from what people working in traditional software development may be used to. Maybe these new organisational schemes are difficult for Dr. Spafford to understand, but claiming that the development of "Apache", "Gnome", "KDE"... is chaotic is complete nonsense.

    As for the question of whether an Open Source project can become "trusted" or not, this depends on only two factors:

    • Goals of the project
    • Your personal definition of "trusted system"

    If these two are in synch, chances are that an Open Source project will reach the state where you can trust it much faster than a project coded in the traditional way.

  • Because that's implied by getting the C2? the way I understand it, no networked machine can get C2.
  • Trust, defined: the level of reliability of a system. ie, you can trust that system X will not crash often.

    WOOOOOSH!

    That's the sound of my point going over your head. Open Source problems are fixed, fast. They tend to be less severe than other "secured" products on the whole. I use products that fix their bugs fast, and don't produce too many of them in shipping releases. Another "trusted" system - NT 3.51, I wouldn't trust to hold my porn, let alone classified government secrets. Yet, strangely enough, Microsoft managed to get it C2 certified.

    Certification != trustworthiness. That's my point that Gene forgot, along with empirical evidence.

  • It is ambiguous, and non-trivially so: the implementor will have to
    make decisions about how to channel bureaucratic authorisation into a
    permissions model, and these kind of matters can involve subtle
    security issues.

    The kind of disambiguation you describe is very simple-minded: it
    is simply schematic ambiguity, of which `explicitly authorised' is
    not an instance. Even so, I don't think that `most restrictive
    disambiguation' is an effectively applicable criterion.

    To put it bluntly, the kind of informal specification you advocate I
    think is likely to reduce the visibility of potential security
    vulnerabilities.

  • I think there are a mix of advantages and disadvantages to open source
    development from the point of view of secure development. I don't
    think there are any `contradictions', however.

    I'm not sure what to make about the red book criteria: does `in a
    trusted facility' mean that if I have some ideas about design of the
    code while at home in the shower that the criteria is invalidated? I
    can't comment since I am not familiar with its trust model, but it
    smacks of `security through obscurity' to me. I doubt that it could
    be made to work outside of an organisation like NSA or GCHQ, which is
    interesting, but not really the topic under consideration.

    I think that the TCSEC criteria are likely consistent with open
    source development. What standards happen to prevail in `many' open
    source projects really is irrelevant: of course all of what you
    describe must take place, and with proper tools. I think I could
    imagine a plausible such group of open-source developers.

    I have a dim idea that we may have met: did you apply for the MSc a
    few years back (1994/5/6?) and then switch to a law course?

  • Since, according to an article in Ars Technica [arstechnica.com], modern processors will effectively rewrite code at execution time, can any program or OS running on a recent processor be considered a "trusted system" by the definitions used by Dr. Spafford and others?
  • They are mixing up the development model with the final product.

    Can "Linux" be trusted? What do you mean by "Linux"?

    If you mean a particular kernel version, as released by Linus... there you have it. Can you trust it? depends on your criteria. Do you *need* to trust it, or can you simply take a certain version and stick with it?

    Linux is more about a process than a technology. Open source is about lots of developers working together in a scientific (as opposed to market-driven) way to produce better code and better software.
  • Right, but this has nothing to do with open vs. closed.
    Nothing is stopping a company from making a 'Secure Linux' and rigorously applying compliance tests to each new release.

    It has absolutely nothing to do with whether something is 'open' or not; it simply has to do with packaging.
  • Well...not really. Smart cards are kind of an unfortunate example,
    because there have been so many `out of the box' attacks launched
    against them. E.g., it is perfectly practicable to extract secret
    keys from most RSA-based smart cards by analysing their power
    consumption.

    That's not really the point you are making, but it does show a
    problem with purely hardware-based systems. It may well be that the
    highly modular designs of PC systems are harder to hit with this kind
    of attack because of their complexity, but saying so seems to be
    anathema to many in the security industry...

  • We use some limited automated testing at work. Where I find it most useful is when I intend to rewrite a piece of code (for efficiency, to extract a common piece into a library, whatever) without changing its behavior or interface. I construct a few tests that treat it as a black box, make them work on the original version and then make sure I don't break them in making my changes (a minimal sketch of the idea appears at the end of this comment).

    In general, any piece of code that has to test some output is relying on the programmer who wrote it knowing the answer in advance. I know from experience that unless there is a spec that spells it out, there is a decent chance that somebody will mess up. Of course, there's a decent chance of bugs in the spec too.
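
    Roughly what I mean, as a hedged sketch: word_count() below is purely hypothetical and stands in for whatever routine is being rewritten; the asserts pin down the observed behaviour of the current version, and the rewrite has to keep them passing.

        /* Black-box regression sketch: word_count() is a stand-in for the
         * routine being rewritten; the asserts record its current
         * behaviour and must still pass after the rewrite. */
        #include <assert.h>
        #include <ctype.h>
        #include <stdio.h>

        /* Current implementation: counts whitespace-separated words.
         * This is the part that would later be replaced without changing
         * its observable behaviour. */
        static int word_count(const char *s)
        {
            int count = 0, in_word = 0;
            for (; *s != '\0'; s++) {
                if (isspace((unsigned char)*s)) {
                    in_word = 0;
                } else if (!in_word) {
                    in_word = 1;
                    count++;
                }
            }
            return count;
        }

        int main(void)
        {
            /* Only inputs and expected outputs; no peeking inside. */
            assert(word_count("") == 0);
            assert(word_count("one") == 1);
            assert(word_count("two  words") == 2);
            assert(word_count("  leading and trailing  ") == 3);
            puts("all regression cases pass");
            return 0;
        }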
  • I'm sorry, but you miss the point. As long as people keep thinking this is about security, we'll never properly understand the requirements for a trusted system.

    The whole point here is a system where you can trust that its functionality will conform precisely to the specifications. It's not security or trusting that there aren't back doors, it's trusting that it will do what it's supposed to. That may include security features, but that's not the point.

    I'm sorry if this sounds harsh, but a lot of this discussion is showing a basic ignorance of some aspects of software engineering on the part of 'open source' fans and/or developers.
  • Er, one thing. Z isn't a programming language; it's a specification system. I'm sure my Z lecturers would agree that I'm no expert on the subject, but from all I was ever taught about it, there simply isn't enough information in there to produce code directly from the spec (see the toy schema at the end of this comment).

    Don Knuth's comment is worth bearing in mind, but it's a distraction in many ways. He's saying - or at least appears to be saying - that he has proved that the fundamental design and architecture are correct, but isn't certain that, in the translation of specification to code, he hasn't made a typo or two. That's still possible, but it's a very different source of bugs.

    The problem with the standard 'open source' development model is that it's too chaotic to tightly control adherence to specs, IMO. That's what our original source here seemed to be talking about when he doubted whether 'open source' development could ever produce trusted code.

    Your last two paragraphs seem to miss the point somewhat, though. A trusted system needs to have its specification tightly drawn up before a single line of code is written, if we are to have any serious prospect of trustworthiness. If you try to draw up a 'de facto' specification after the event, it's inevitably going to contain problems, as you're merely documenting behaviour that isn't necessarily trustworthy in the first place. Trusted reimplementation may be possible, but I'd still want a new codebase.

    BTW, before anyone gets annoyed, the only reason I've been referring to 'open source' development as opposed to open source development is that, in this context, it implies something about the structure of the development team and that's mostly what's relevant. I could produce a project with a tightly controlled, exclusively internal team which decided to publish its source as it was going along. We'd still have source which was open but it wouldn't be conforming to what's normally accepted as the open source development model.
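
    To illustrate the Z point at the top of this comment, here is a toy, entirely made-up schema (written in plain LaTeX rather than the real zed macros, so take the notation as an approximation): it states what must hold, not how to achieve it.

        \documentclass{article}
        \usepackage{amsmath,amssymb}
        \begin{document}
        % Toy Z-style schema: a set of known names and a map from names
        % to values; the invariant says every known name has a value.
        \[
          \begin{array}{l}
            \textbf{SymbolTable} \\
            \hline
            \mathit{names} : \mathbb{P}\,\mathit{NAME} \\
            \mathit{value} : \mathit{NAME} \to \mathit{VALUE}
              \quad \text{(a partial function in real Z notation)} \\
            \hline
            \mathit{names} = \operatorname{dom} \mathit{value}
          \end{array}
        \]
        \end{document}

    Nothing in the schema chooses between a hash table, a tree or a sorted list; that choice, and its bugs, still belong entirely to the implementor, which is why you can't produce code directly from the spec.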
  • Maybe we are talking about my religion... =)

    Seriously, though, if you are interested in a *web server*, my remarks still stand. I don't know if Apache has been ported to OS/400 or Multics.

    Besides, how many companies, these days, can afford a Multics-based computer (does one still exist?) or an AS/400?

    My dad (who was a security officer on a big-iron system somewhere) used to mention that, even with OS/400, it usually took a good security consultant, admittedly much better than the average script kiddie, less than 30 minutes to gain complete access to a machine.

    Compare & contrast with what OpenBSD claims on their website: "Two years without a local root exploit".

    I am ready to admit that "Big Iron" means much better security than a PC+your choice of OS. That does not mean *good* security, though. Simply better security.

Understanding is always the understanding of a smaller problem in relation to a bigger problem. -- P.D. Ouspensky
