Security

Computer Security Criteria

Rolf Marvin Bøe Lindgren writes: "For most human endeavors that involve some sort of risk, there are powerful, recognized public interest groups or even government-appointed organizations that investigate and analyze dangers, prescribe guidelines, determine criteria for acceptable risk, etc. This does not seem to be the case for software! I work for a ship classification company. The purpose of such companies is, very simply put, to determine how safe seagoing vessels are, for instance so that insurance companies can decide insurance premiums. There are, needless to say, numerous conventions and special interest groups to determine safety at sea. That is, as far as I know (and I would very much like to be proven wrong), except for the computer systems that the ships use. There are restrictions, laws and regulations involved in just about any object that goes into a ship except the computer system. Everybody seems to know, for instance, that UNIX is safer than Windows, but there are no safety, reliability or security criteria established by any recognized authority that can be used to defend one computer system over another."

"Now, I could ask Slashdot how to go about forming a recognized body, but I have access to competence in that particular matter. What I would rather like to know is this:

  • What might a set of safety criteria look like (right now I am most interested in criteria for computer systems that would address such issues as vulnerability to worms, viruses and crackers)?
  • How should one go about finding competent and interested people who would like to be part of a body like the one I describe, or consultants to one?
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Human Life (Score:3, Insightful)

    by spookysuicide ( 560912 ) on Monday March 11, 2002 @02:30PM (#3143794) Homepage
    I would venture to guess that the reason there are so many regulatory bodies involved in overseeing the safety of such things as highways, seagoing vessels, planes, food, etc. and not software is that in the former case human life is directly at risk, while in the latter human life is, at best, indirectly at risk and usually not at risk at all.
    • Standards in Coding (Score:2, Interesting)

      by longwinded ( 445283 )
      I would venture to guess myself that software is very hard to regulate in the normal sense. In a complex piece of code (KISS standard aside) it is nigh impossible to prevent bugs 100% of the time. That's the ideal, but that's why we call it ideal.

      If a bug occurred in a very unique situation, and it took 20 years for that situation to come about and it caused just one death, people would still ask, "Why wasn't anything done to prevent this?" Something was done (hopefully), and standards and regulations help, but in the end that's pretty much all you can do.

      I don't think he's asking as much about viruses or hackers in this case, but those are also valid questions. I tend to believe what is stated downstairs in this argument -- I've seen Unix systems that were wide open and Windows Boxes sealed tighter than a drum, and it's up to the admin. I'm not really sure if there's a test or anything that sysadmins must pass to become licensed sysadmins, but if there isn't (and I'm not talking about certification) there should be one, at least as far as sensitive data like this is concerned.

      Also, a real-life case in which computer software caused multiple deaths was the London Ambulance Service, which used a very poorly constructed computer system that was directly linked to 20 or 30 deaths (due to late ambulances) in the few days it was active.
    • Re:Human Life (Score:4, Insightful)

      by Squeamish Ossifrage ( 3451 ) on Monday March 11, 2002 @04:01PM (#3144403) Homepage Journal
      Software can't kill people directly, but it controls hardware that can. Also, people frequently depend on systems which include software for life-critical purposes.

      Think:

      1. 911 call centers
      2. Industrial robotics
      3. Air Traffic Control
      4. Engines with embedded software controls
      5. The telephone network
      6. The power grid
      7. Medical equipment

      I'd like to point out that there are documented deaths from software failures in most of these categories.
    • Considering how many enraged users of MS software there are out there I'm not sure that Billy G's life isn't at risk :-)
  • hm! (Score:3, Funny)

    by prizzznecious ( 551920 ) <hwky@@@freeshell...org> on Monday March 11, 2002 @02:32PM (#3143806) Homepage
    "How do you find people willing to pontificate about what makes one system more secure than another," he naively asked Slashdot. Then came the deluge.
  • Criteria (Score:5, Informative)

    by DecoDragon ( 161394 ) on Monday March 11, 2002 @02:33PM (#3143813)
    Have you looked at any of the work done by SANS (http://www.sans.org) or NIST (which is not necessarily what you're looking for, but in the area of providing guidance, http://www.nist.gov)?

    SANS has been publishing a series of "consensus" documents, asking for feedback from people on topics such as securing Windows and Unix versions. They've also put together a working group (pay to join).

    If you have looked at these sources, I would be interested to hear how they do or do not fit in to what the author of the original question is looking for.
    • In particular SANS has their SCORE initiative, which seems as though it might be somewhat applicable.

      http://www.sans.org/SCORE/

  • by onion2k ( 203094 ) on Monday March 11, 2002 @02:33PM (#3143814) Homepage
    I work for a ship classification company.

    Big ship..

    Little ship..

    Big ship..

    Medium size ship..

  • by mattsouthworth ( 24953 ) on Monday March 11, 2002 @02:33PM (#3143819) Journal
    Well, have you checked out these things?

    http://www.commoncriteria.org/

    http://csrc.nist.gov/cc/pp/pplist.htm

    • Yes, google knows about:

      CCITA
      ISO/IEC 15408
      NSA Rainbow

      Which might be of note.
    • Yes! Exactly. There are several standards for the evaluation of computer security. The most widely accepted today is the Common Criteria for Information Technology Security Evaluation (Common Criteria for short), along with the good old Rainbow Series from the US Gov't -- particularly the Orange Book for the evaluation of trusted computer systems and the Red Book for the evaluation of trusted networks. There are many more, but the problem is not so much that we need these standards as that many companies are not willing to go to the expense of implementing them. This leads to shoddy software, because no organization or company is paying to check out all of the possible flaws in their systems.
  • Most secure (Score:5, Insightful)

    by Geekboy(Wizard) ( 87906 ) <[spambox] [at] [theapt.org]> on Monday March 11, 2002 @02:34PM (#3143824) Homepage Journal
    The most secure method is to apply the KISS method (keep it simple, stupid). The fewer lines of code, the fewer places an attacker can gain access. Use lots of encryption (checking mostly for theoretical attacks), and use physical safeguards for the system. You may want to use OpenBSD because of its history (four years with no remote exploits in the default installation), but choose your base carefully. Encrypt all communications (e.g. IPsec ESP) and make sure you have double and triple safeguards. Better paranoid than exploited.
    • by Glorat ( 414139 ) on Monday March 11, 2002 @03:14PM (#3144104)
      Here is another clue I got today from my uni lecturer. If you wanted to run a secure web server, would you run it on NT, Linux, Solaris or the Mac?

      *Up go hands of Linux advocates*

      Answer: Mac because it is the least available operating system and as such fewer attacks have been created for it, even if there are hypothetically more bugs. As such, you would be less likely to suffer a problem, all else being equal

      Back to the article: would a measurement take this type of situation into account? Does the Mac get a high rating for its low rate of incidents, or a low rating because it (probably) has more bugs than Linux? Open question.
      • Your lecturer is advocating security through obscurity. By that measure, the most secure possible web server is one you've written yourself, regardless of your competence level, because it's one of a kind.

        I would like to see that point of view competently defended in the public court of security experts.
        • The court of security experts has a built-in incentive to push whatever it is they are pimping, because it keeps them employed. I have dealt with 'security' groups at my employers who seem to universally specialize in pushing paranoia to management and holding the sysadmins and system programmers who enforce security practices hostage in tedious policy meetings.

          I would estimate that 95% of successful hacking attempts are either internal compromises or moderately-skilled users using pre-programmed exploits.

          Security through obscurity, combined with good user policies and applications is quite effective. You cannot hack what you don't know about.

          • You cannot hack what you don't know, but at the same time, you cannot depend on other people's ignorance to protect you.

            I think that it's more important to look at factors like how often the software you are using is updated in response to security flaws and how easy it is for you to replace your software given an update.

            Basically, if the software is never updated, or if the server cannot go down under any circumstances, then the more obscure platforms/software may be an answer.
            • I think that it's more important to look at factors like how often the software you are using is updated in response to security flaws and how easy it is for you to replace your software given an update.

              Right. And respect KISS. If you want to securely serve static pages, don't use a full web server with CGI and other stuff! Use the simplest one possible! (See e.g. D. J. Bernstein's publicfile [cr.yp.to] for such a solution.)

              A lot of today's insecurity arises from feature creep and reinventing the wheel.
          • You cannot hack what you don't know about.

            Yes, you can. Your Windows box can randomly throw 'sploits at a box you don't know exists until it finds one the admin didn't patch or didn't know about. Often, you don't know your Windows box is doing this, because you don't know that it's been thoroughly zombied.

            One-of-a-kinds generally don't help as much as you might think, because what you gain in obscurity you lose in maturity (i.e., some script-kiddie stuff will still work because the author made the same mistakes that were found and removed from Apache years ago).
      • Answer: Mac because it is the least available operating system and as such fewer attacks have been created for it.


        Ah yes, good old security-through-obscurity. Shouldn't you say, "there are fewer publicized attacks created for it"?

      • Answer: Mac because it is the least available operating system and as such fewer attacks have been created for it, even if there are hypothetically more bugs. As such, you would be less likely to suffer a problem, all else being equal

        Let us suppose that someone discovers a bug in the Mac OS that is only relevant to online control. Almost no Macs are used for online control, so what is the probability your bug will ever get fixed?

        The original story in this case is posted with the intention of obtaining a particular answer. The poster is not really interested in what system would be secure, he wants to have his original prejudice reinforced.

        Design of ship control systems is a real-time control problem. As such it is not an application for which 'Linux' is a solution; you have to be much more precise and specify exactly which real-time-enhanced Linux you are considering. It would also kinda help to actually specify the problems to be addressed.

        As for 'security', one would hope that you would not be hooking your control systems up to the Internet or running any sort of user application other than controls for the ship. The references to worms viruses etc suggest to me that the poster does not understand the problem or is trolling for anti-Microsoft stories to tell his manager.

      • by gnovos ( 447128 ) <gnovos@ c h i p p e d . net> on Monday March 11, 2002 @04:12PM (#3144465) Homepage Journal
        If you wanted to run a secure web server, would you run it on NT, Linux, Solaris or the Mac?

        *Up go hands of Linux advocates*

        Answer: Mac because it is the least available operating system and as such fewer attacks have been created for it, even if there are hypothetically more bugs. As such, you would be less likely to suffer a problem, all else being equal.


        This is short-sighted, because it does not take into account what you are securing AGAINST. If you are securing against random, non-targeted attacks from script kiddies, you might be right, because said script kiddies aren't going to spend the time to figure the system out... but if you are trying to secure against a real, concerted attack by agents of a competitor trying to steal your ideas or ruin your business, then you have made a very grave mistake.

        When you say "all things being equal", then you are saying that 1 defaced web page is exactly equal to 1 stolen top secret formula, which is preposterous. A hypothetical question can not consider all types of attacks to be equal and still produce a valid and meaningful result.

        If you use that logic, then using a completely open and unsecured network would be ok if you sealed the computer in a locked metal box, since it would deter physical attacks by baseball bats (ALL attacks are of equal value, right?). Or you could say that adding the line "WWJD" to the telnet login prompt would be a valid defense since it would lower the instance of attacks by Christians by 80%.

        Go set him straight.
      • Wrong. Use Linux with Solaris as fallback or the other way round. These systems are compatible with each other and you can use one as a backup for the other.

        Maybe use both with a hard-coded failover or a combined system where both OSes have to be successfully hacked in order to compromise the system. Depends on what kind of security you need.
      • Answer: Mac because it is the least available operating system and as such fewer attacks have been created for it, even if there are hypothetically more bugs.

        There are at least two huge flaws in this; firstly, a generic attack (or a manual followup to a generic probe) is more likely to work, and secondly the hack-attack numbers reflect a smaller population, not necessarily a smaller proportion of a population. It's a great comfort to know that you're unique as you sit there looking at your Mac server full of zeroes.

        If I wanted to take advantage of the features advocated by the lecturer, I'd use something like Roxen on Linux on a MIPS box, chrooted and as far as possible readonly (chown/chmod then chattr +i then remove the chattr binary, and if possible also mount -o ro).
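        The lockdown sequence sketched above (strip write bits, set the immutable flag, remove chattr, mount read-only) might look roughly like this in shell; the web root here is a throwaway stand-in, and the root-only steps are left as comments rather than executed:

```shell
#!/bin/sh
# Sketch of the read-only lockdown steps from the comment above.
# WEBROOT is a hypothetical document root; the steps that need
# root privileges (chown, chattr, remount) are shown as comments.
set -e
WEBROOT=$(mktemp -d)                   # stand-in for the real web root
echo "<html>static page</html>" > "$WEBROOT/index.html"
chmod -R a-w "$WEBROOT"                # strip every write bit
# chown -R root:root "$WEBROOT"        # root-owned content
# chattr -R +i "$WEBROOT"              # immutable flag (ext2/3, root only)
# rm /usr/bin/chattr                   # ...then remove the chattr binary
# mount -o remount,ro /srv/www         # and remount the filesystem read-only
ls -l "$WEBROOT"
```

        After `chmod -R a-w`, even the server's own user can no longer modify the files; the commented-out immutable flag and read-only remount add further layers that survive a compromised non-root account.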
    • Re:Most secure (Score:2, Insightful)

      by catsidhe ( 454589 )
      At what point does 'simple' become stupid?

      Trivial but emblematic example: assuming that the string you are being sent will stay below a certain size means you don't need boundary checking. Congratulations! You have just made the main loop faster by orders of magnitude and much more human-readable!

      Oops ... buffer overflow attack!

      But, but, but the code was simple! It's not my fault!
  • Risks (Score:4, Informative)

    by xphase ( 56482 ) on Monday March 11, 2002 @02:36PM (#3143843)
    Sorry for not making a huge long rambling post, but you really should check out the Risks Digest [ncl.ac.uk]

    --xPhase
  • What might a set of safety criteria look like (right now I am most interested in criteria for computer systems that would address such issues as vulnerability to worms, viruses and crackers)?

    While there may not be a body of standards regarding security, there are some de facto standards regarding redundancy of data, the different methods of communication (connection-oriented versus connectionless protocols, which are quite well defined as standards), and the general structure of professional applications. Taking these as a starting point, one could build a list of vulnerabilities for each of these standards. For example, in a connectionless environment, one would be worried about DDoS attacks and methods for identifying the assailant. In a connection-based environment, physical issues such as allowing someone to get access to a LAN port with a laptop inside the company building would require at least some preventive measures (ID cards at the door, social policies about bringing in computers, etc.).

    How should one go about finding competent and interested people who would like to be part of a body like the one I describe, or consultants to one?

    Be very careful. You will need to find people who are trustworthy AND brilliant. Good luck.
  • Air Gap... (Score:3, Insightful)

    by warpSpeed ( 67927 ) <slashdot@fredcom.com> on Monday March 11, 2002 @02:38PM (#3143852) Homepage Journal

    First and foremost, keep the computer system closed. Do not hook it up to any outside networks. No networks, no phone lines, no serial connections. That will eliminate quite a bit of the risk of attack.

    If that is not an option, then run the outside network connection through a very tight firewall.

    ~Sean
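    A "very tight firewall" in this sense usually means default-deny: drop everything, then open only the one sanctioned service. A minimal sketch in iptables-restore format (the interface name and TCP port 22 are placeholders, not a recommendation):

```
# Default-deny ruleset sketch (iptables-restore format).
# "eth0" and TCP port 22 are placeholders for whatever the one
# permitted outside connection actually is.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
# loopback is always allowed
-A INPUT -i lo -j ACCEPT
-A OUTPUT -o lo -j ACCEPT
# replies to already-established sessions only
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# the single permitted inbound service
-A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -j ACCEPT
COMMIT
```

    Loaded with `iptables-restore` (root required), anything not explicitly matched above is dropped by the chain policies, including outbound connections the box tries to initiate itself.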
    • It's not just susceptibility to exploits, though: you're also concerned about the stability of the system. What's the uptime like? What are the chances of the system crashing at a crucial moment? How long is the troubleshooting/resolution cycle? Which components need to be redundant? &c. For most shipboard systems, I'd imagine stability and reliability are much more important than security. Look at it another way: the Air Force probably isn't too worried about someone rooting the navigation system on their stealth fighter, but they damn well care about the stability of that component!
      • I agree with you 100%. I did not talk about the stability aspect of the systems. Who would run critical systems on NT or Windows? I was just assuming that we were talking about Linux, so you know... :-)

        ~Sean
    • ... then put the system in a special room which contains a thermal sensor, a sound sensor, and touch sensors on the floor. Oh, and don't forget to put a laser on the air vent in the ceiling...
    • It can only be safe if it is not connected to the outside, powered off, and unplugged from the wall.
    • Air Gap (Score:2, Informative)

      by slugfro ( 533652 )
      Implementing a system with an air gap is definitely a good security measure. However, it is really only practical for certain systems. On a ship, an air gap might be applicable for the systems that run the ship's controls (i.e. engines, environmental controls, etc.). These systems may be very important for ship safety and have no need to be in contact with the outside world.

      Then there are the navigation and communication systems. These are very important for a ship but may require limited access to the outside (GPS, etc.). These should be completely separate from the air-gapped systems above and of course implement all other possible security measures (firewall, etc.).

      On a modern ship there will likely be a third level of systems used for personal communication. Web browsing, email and the like are not vital to the safety of the human beings onboard the ship and thus do not require such stringent security.

      Using a multiple-system, multi-tiered security model like this may offer the best combination of security, price, and convenience, since not everything has to be secured to the highest order.
  • Security (Score:5, Insightful)

    by AlaskanUnderachiever ( 561294 ) on Monday March 11, 2002 @02:38PM (#3143853) Homepage
    Well I know everyone's going to shoot this one down but I personally see a huge amount of time, effort and expense wasted on my own company's systems to protect them from the "scourge of the internet" when, upon detailed inspection, there is no good reason that 95% of these boxes NEED connectivity. Before you go about inspecting the various methods of combating the madness (firewalls, routers, off the wall OS, tying up the PHB, etc.) ask yourself "do our critical systems need connectivity and if so, to what degree?"
    • Protection "from the Internet" is only one part of the issue. Analyzing the security issues should include an analysis of the local issues. Let's look at the ship scenario, and come up with some potential non-Internet dangers:

      1. How well protected is the local terminal? Does it run critical guidance software on a Windows 9x box that anyone can hit Escape to log into?
      2. Does the ship have a LAN? Perhaps it is a cruise ship with an 802.11 (whatever) network to keep computers, registers, etc. around the ship connected. How easily can a laptop-ed cracker get in?
      3. Are the ship's systems set up (via satellite, obviously) on a VPN back to the mainland home office? How secure is the satellite link? The VPN?

      These are just a few potential worries off the top of my head that do not, intrinsically, have anything to do with Internet connectivity, or even necessarily with connectivity at all.

  • by Anonymous Coward on Monday March 11, 2002 @02:39PM (#3143857)
    Closest is the international Common Criteria [commoncriteria.org]. It's the indirect descendant of the old military Orange Book (you know, C2 certified, etc.). The attempt is to come up with multiple standards, one for each security-critical component. The components are evaluated against the standard; a higher rating means they meet the standard to a stricter engineering criterion.

    Some sample standards (or "Protection Profiles") include proxy and packet filtering firewalls.

    My sense is the folks overseeing the Common Criteria would like industry groups to sponsor Protection Profile development. For example, banks could come up with profiles for wire transfer components, ATMs, etc. The shipping industry could be another.

    BTW, if you visit the Website, there is an interesting line of Common Criteria-branded clothing, for the geek who has everything!
  • ... in a ships context:

    Backup systems have to be in place, which is why captains have to be able to navigate manually -- just as yachts have to have motors in case the sails break, and to be able to navigate safely in port.

    The threat of viruses could be minimal, because the physical security of the ship's navigation systems should be locked down: no Internet access, no floppy disk drives, closed systems, etc.

    However, there have been failures. I remember a Navy ship running Windows NT or something, and it crashed (the OS, not the ship). They had backup systems, of course, but they looked pretty stupid. Windows NT Crash on Navy ship [info-sec.com]

    The key point here is that you can test systems anyway: running for long periods of time, checking for memory leakage, hardware failure periods, etc., and bugs that come up are usually corrected for free when you're talking about expensive navigation systems.

    Sure, you can lose money for being out of action for a few hours, but that could happen due to any number of other mechanical failures too, so you just calculate some kind of percentage chance of failure based on past history of the navigation system?

  • by tshoppa ( 513863 ) on Monday March 11, 2002 @02:40PM (#3143868)
    I work for a ship classification company.

    And I work for a railroad that moves a half-million people a day. I like to think they're not too dissimilar industries - when my computers shut down, the railroad stops running. I'm guessing that when your computer stops, the ship stops moving. That it doesn't sink or explode (i.e. there are hardware items that relieve excess pressure, etc.)

    There are some differences. My trains have low-level hardware (based around gobs of vital relays) that will stop them from running into each other. I doubt ships have anything like this.

    The standards for what you or I do are drastically different from what someone writing software for an airplane's fly-by-wire system has to do. There, if the computer stops or starts doing the wrong thing, it falls out of the sky. Scary stuff.

    So, it depends on what the computer controls, but you haven't given us this information.

  • Sounds to me like the shipping industry is behind the times -- there are lots of other industries that have standards for computer systems. The FDA is becoming much more strict about computer validation and there is a great deal of documentation and testing required to implement a validated computer system. There are also many, many recognized Quality Management Systems in existence that apply equally well to a computing environment.

    >Everybody seems to know, for instance, that UNIX is safer than Windows

    Sorry, I couldn't ignore this... Validation of a computer system is about proving something is fit for purpose. Documenting requirements, design, performance, data integrity etc. It ain't about what OS you run. There's not a sane business person in the world who will rally behind someone masking anti-Microsoft sentiment as "computer security".

  • by Alcimedes ( 398213 ) on Monday March 11, 2002 @02:42PM (#3143878)
    Um, hate to break it to you, but how the hell do you hack a system that's on a ship and self-contained? Everyone's talking about virus this and worm that -- who gives a crap? My guess is that the ship's navigation systems are secluded from anything that would have outside access.

    What I'm guessing he wants to know is something more along the lines of this: Windows NT cripples US Navy Cruiser [info-sec.com]

    In which case, he's really asking which software/OS is the least likely to puke and leave you up a creek without a paddle.
    • by bluebomber ( 155733 ) on Monday March 11, 2002 @03:31PM (#3144233) Homepage
      It sounded more like he's asking about general classifications of software systems in terms of security. Maybe he's looking for a scale like the following. (I'm pulling this out of my ass; a real classification committee would have much better rules, and they would spend longer than five minutes putting such a list together.)

      1 - Non Secure

      This describes a public terminal (e.g. what you might see in a shopping mall or your local university computer cluster) that is running MSDOS. The keyboard and mouse aren't even locked down.

      2 - Half-Assed Security

      This describes a public terminal that is securely bolted to the desktop and is locked shut. A log-on prompt appears, but is easily bypassed (e.g. Windows 95, or a Linux box that is bootable via an accessible CDROM or floppy drive). [Alternative: the logon prompt appears but passwords are available by shoulder-surfing, e.g. "employee only" terminals in retail stores.]

      Levels 1 and 2 are a black hat's paradise.

      3 - Almost Secure(tm)

      This describes probably 95% of the unwashed masses connected to the Internet. This machine has a firewall and virus scanning installed, but the virus definitions might not be up to date, and the firewall isn't what you'd describe as industrial strength. Some security patches may have been applied, but are probably not completely up to date. This machine might present a challenge for your ordinary script kiddie, but an experienced cracker can probably find a way in. Configurations in this category include most Windows installations, default Linux installations that start up every service under the sun (older Red Hat; I don't think the newer ones start everything up), and public web servers that are "sort of" secure but have holes in CGI scripts or are missing security patches. This also describes a lot of corporate wireless networks.

      The black hats enjoy level 3 probably more than 1 and 2, just because of the (slight) extra challenge.

      4 - Pretty Good Security(tm)

      This describes a machine that is physically locked down, but still connected to the network (generally behind an external firewall). Security patches are applied within hours of announcement. Logs are human monitored, and are written either on another machine, or on permanent media (e.g. printer or CDROM). There are no more services running on this machine than absolutely necessary (in other words, a mail server ONLY has ports 25 and 110 open).

      In practice, these don't generally get cracked. When it does happen, it is usually a failure of physical or human security -- telling someone your password, sending your password via email, etc. A break-in might also be caused by a yet-unpublished remote exploit in one of the major services (sendmail, bind, apache, etc.). These machines are often susceptible to certain types of DoS attacks (when such attacks can't be stopped at the router/firewall).

      5 - Unbreakable security

      This describes a machine that is physically secure (i.e. the hdd is locked down inside a secure chassis) and has no external network connections. It is also shielded from van Eck and other eavesdropping.

      You won't get into this machine without weapons, "truth serum", or monetary inducements to certain privileged individuals. Also worth noting is that this machine isn't really practical for everyday use...

    • Perhaps there is a LAN on the ship?

      Perhaps someone dials in via satellite, gets some virus, and later plugs into the LAN to see what is for dinner and it spreads.

      Like you said, the nav systems should be secluded, but you never know. Perhaps the Captain likes to look at the info in his cabin?
    • by Sinus0idal ( 546109 ) on Monday March 11, 2002 @03:54PM (#3144372)
      This is no longer the case.

      My father is a marine consultant, and I have been to several ships with him, which rely much more heavily than this on computer systems these days.

      One specific example-

      The charts used to navigate the ship were running on an NT workstation on the bridge of the vessel. It is no longer a requirement for up-to-date backup charts to be kept on board: a CD is sent to the ship each week updating the charts to the latest version, but the backup paper charts that are kept are no longer updated at these regular intervals, because of the increased reliance on the NT charting software. The GPS onboard the ship updates the ship's current position in the charting software running on the NT workstation, so the master can see where they are with respect to the course that has been plotted previously.

      This same ship contains a small network of only 4-5 computers (it's only a coastal tanker): one for charting on the bridge, one controlling and monitoring the amount of oil flowing on/off the ship in dock, etc. But...

      The ship also has access to email (and consequently attachments) at sea via Inmarsat satellite software plus (uh-oh) Microsoft Outlook. If a member of the ship's crew were to open an email attachment apparently from the office which was in fact a virus, and the network security was not up to scratch, it might have the capacity not only to shut down the ship's main course-plotting software (sending them to backup paper charts), but to disturb the monitoring of oil/ballast on and off the ship in the dock.

      There are also proposed improvements which would in effect link the course plotting software with the autopilot, thus controlling the ship's movements from the PC's course plotting software (unless of course, any evasive action were needed to be taken - the master would switch to manual).

      This is only a small example of the problems that could genuinely be caused if a virus infected some of the more modern ships in today's world.
      • The ship also has access to email (and consequently attachments) at sea via Inmarsat satellite software + (uhh-ohh) Microsoft Outlook. If a member of the ship's crew were to open an email attachment apparently from the office, which was in fact a virus, and the network security was not up to scratch, it may have the capacity to shut down not only the ship's main course plotting software (sending them to backup paper charts), but to disturb the monitoring of oil/ballast on & off the ship in the dock.

        Don't worry! I'm sure that Crash Override, Acid Burn and Cereal Killer will save us all by hacking into a Gibson with their iBooks!
      • The charts used to navigate by a ship were running on an NT workstation on the bridge of the vessel. It is no longer a requirement for up-to-date backup charts to be kept on board. A CD is sent to the ship each week updating the charts to the latest version, but the backup paper charts that are kept are no longer updated at these regular intervals because of the increased reliance on the NT charting software. The GPS onboard the ship updates the ship's current position on the charting software running on the NT workstation so the master can see where they are with respect to the course that has been plotted previously.

        Well this doesn't sound too horribly dangerous, although it's a little sloppy IMO. Presumably (correct me if I'm wrong) it's acceptable in this situation if the navigation system is subject to short periods of unavailability? Just how big a problem is it, however, if that NT box is totally destroyed in mid-voyage?


        This same ship contains a small network, only consisting of 4-5 computers (it's only a coastal tanker). One for charting on the bridge, one controlling & monitoring the amount of oil flowing on/off the ship in dock, etc... but..

        The ship also has access to email (and consequently attachments) at sea via Inmarsat satellite software + (uhh-ohh) Microsoft Outlook. If a member of the ship's crew were to open an email attachment apparently from the office, which was in fact a virus, and the network security was not up to scratch, it may have the capacity to shut down not only the ship's main course plotting software (sending them to backup paper charts), but to disturb the monitoring of oil/ballast on & off the ship in the dock.


        Well obviously that's a huge problem just waiting to happen. I certainly would never sign off on such a system. But the question remains just how much better would be good enough? Just how catastrophic, for instance, would it be to lose that ballast monitoring system?


        If this system can be taken offline safely for, say, an hour at a time, then I would not say changing OS is necessary - a sensible program of security and reliability enhancement can easily make a Windows-based network perform at a level that's acceptable in that case. Given how much these vessels cost it would seem horribly short-sighted to scrimp, so I would recommend:

        • Strategic network firewalling that blocks any communication not needed for the functioning of the systems as intended, as a prophylactic.
        • A thorough software scrubbing. Obviously Outhouse has to go. MSIE can and should be completely eradicated (yes, Virginia, you really can do that, despite what MS claims.) This list could get pretty lengthy, but it boils down to removing risky software, and replacing it with less risky equivalents when that is needed.
        • Each machine should be torn down to exactly what is needed on it, then imaged. There are several ways you could go from there, depending on the exact circumstances, but one good option is simply to have a couple of cloned replacements for each station ready and locked in the ship's safe. Alternatively, cloned hard drives only could be kept, along with plenty of spare parts, if the ship will always have a qualified tech on board to make repairs.

        Switching Operating Systems might eliminate the need for some of that work, but much of it needs to be done regardless. Hardware failures need to be planned for, in particular.


    • Go back, reread the articles, put on your critical thinking cap and try to explain to yourself what must have happened.

      The article talks about a software problem, not an OS problem.
  • Rainbow Books (Score:3, Interesting)

    by Slashamatic ( 553801 ) on Monday March 11, 2002 @02:42PM (#3143882)
    One of the oldest sources was the rainbow books, namely the Red and Orange books that were produced by the NCSC. The Orange book addressed standalone systems and the Red book addressed networked computers. Regrettably some systems managed to be passed even though the criteria must have been 'nudged' to allow them to do so. The criteria addressed security but sort of left other aspects out. It was a standing joke that you could switch a computer off, bury it under concrete, and it would pass the A criteria of the Orange book.

    Later the EU produced their Green book which looked at availability as well, this is kind of good for information systems but it doesn't really cover real-time control systems.

    A long time ago, I worked on real-time control systems. We divided our systems into control/measurement, supervisory and at the top, information systems. At the lowest level, we are talking hard real-time and simple enough to be very reliable. They had to be, as they were typically sitting by a man-sized chemistry set. The supervisory systems gave the pretty interfaces; they could crash, but generally they didn't. These were for control rooms, and whilst bypassing them was possible, it wasn't easy. The top level system ran all kinds of complicated software applications that could and would occasionally crash. Apart from the crudest electrical standards for the stuff in the plant and the control room, there were no evaluation criteria.

  • by spaten-optimator ( 560694 ) <arich.arich@net> on Monday March 11, 2002 @02:42PM (#3143890) Homepage
    I worked for a famous defense contractor located in Fort Worth, TX. My department was responsible for writing requirements for software that was installed on fighter aircraft.

    When using a requirements-based system (where you write requirements for software and then the software is written from the requirements), there are multiple checkpoints. First, the requirements document for the software must meet or pass certain criteria. Second, the software must meet or pass the criteria put forth by the requirements document. Third, the software is rigorously tested.

    Now, in fighter planes, the software must be incredibly robust - you don't want planes falling out of the sky - and in defense projects, bureaucracy tends to inflate the whole process.

    That being said, requirements are an excellent way to control the quality of software, or an installed computer system.

    And this is important! We all remember the movie Hackers, in which the Da Vinci virus was going to cause a bunch of oil tankers to tip over into the ocean. And we all know how closely that movie parallels reality.
  • Talk to the FAA (Score:4, Informative)

    by blair1q ( 305137 ) on Monday March 11, 2002 @02:44PM (#3143897) Journal
    The FAA has well-known procedures in place for certifying HW and SW for safety. Look up DO-178B, for instance.

    It'd be almost trivial for the shipbuilding industry to adapt them to their somewhat lower-risk environment.

    --Blair
  • Sure Windoze apps have buffer overflow holes like [insert good analogy here], but when was the last time your WinVERSION came installed with a plaintext remote login server? Or have you seen Windoze setup with directory services exporting crackable password hashes? .. Unix is safer in many respects (especially to the scheduled-to-be-obsolete Win95/98/ME series), but I don't know that it's a cold statement of fact to say everyone knows that UNIX is safer than Windows. Depends on the attack vector.
  • UL (Underwriters Laboratories, Inc) [ul.com] started as a way for insurance companies to make sure that electrical devices wouldn't cause fires and burn down the buildings they were insuring. Policies wouldn't pay for a fire if it was caused by a non-UL-certified device.

    Unfortunately insurance doesn't pay if software is defective. There has been some talk about insurance companies writing policies to cover web site break-ins; if this happened I'm sure UL (or some other company) would quickly step up to certifying software and configurations.

    The real problem is maintaining the certification. What is sound at one point may have undiscovered problems that, once found, cause a once-certified piece of software to become delisted.

  • by mmcgreal ( 259944 ) on Monday March 11, 2002 @02:49PM (#3143936)
    "Everybody seems to know, for instance, that UNIX is safer that Windows, "

    This is a poorly worded, and completely unsupported, opinion. I despise the Evil Empire's crappy software as much as the next Slashdotter, but Windows can be just as secure as anything out there; it's just that it's so poorly administered most of the time, it's often left unsecured, and therefore gets abused constantly.

    Ultimately both operating systems have superusers, so both OS's are inherently dangerous. How is Unix any safer if you only have to exploit one vulnerability to take the whole system?

  • by twitter ( 104583 ) on Monday March 11, 2002 @02:51PM (#3143959) Homepage Journal
    Nuclear is the most regulated place in the world, right? Well, even there you have to have people who can think and exercise judgement. Check out 10CFR50-2 [nrc.gov] for this very important definition:

    Design bases means that information which identifies the specific functions to be performed by a structure, system, or component of a facility, and the specific values or ranges of values chosen for controlling parameters as reference bounds for design. These values may be (1) restraints derived from generally accepted "state of the art" practices for achieving functional goals, or (2) requirements derived from analysis (based on calculation and/or experiments) of the effects of a postulated accident for which a structure, system, or component must meet its functional goals.

    The same logic underlies all design. At some point you have to have engineers you trust and they should be versed in the "state of the art" and all applicable studies.

    In the nuclear industry we can and do rely on vendor studies. Who else but GE is going to know the maximum power levels that are safe with their reactors? They built a full scale model and proved it.

    In the software industry, as you have noticed, things are a little less clear. First, Microsoft is an unethical company. (gotta go before finishing!) You and I both know that Windows is an unstable system. It changes all the time and those changes break programs. Some would even say that Windows is unstable without any changes, and indeed sites that use it typically see 30 day uptimes and no better. Anyone who would rely on such a thing for something that is in any way needed to protect the public safety is incompetent. How that might be worked into a ship is a matter of judgement. I would not use it except as a game platform in the rec room or to look after some system that is superfluous.

  • Solution (Score:2, Funny)

    by madmagic ( 318186 )
    The answer is obvious if you're looking for the best way to secure an onboard system: hide the ship.

    -mm
    obscurity mon ami
  • First, let me point out I work with both *nix systems and Windows. Both have problems. I'm not going to address these problems.

    My thoughts on this are, what levels of security are required? I've never heard of someone hacking an oil tanker, but just because I've never heard of it doesn't mean it hasn't happened, or is impossible.

    My opinion is that the most important thing you would need software for is navigation software, in order to determine location, and software for weather reports, so you can plan ahead for adverse weather conditions. Can you get both for either OS? Sure (but I don't know names). Do they work? Well, if they didn't we'd have a few more ships crashing into reefs.

    It gets away from secure systems, in my opinion, and more towards robust systems. Maybe it's just words, but I view secure and robust being different.

    EFGearman
    --
  • You wanna get everyone looking over our shoulders all day long!

    M@
  • The real problem has very little to do with software and very much to do with the people running the software.

    I don't care how secure your unix system is, if your root password is "password" or you let root telnet into it, your system is insecure. Selecting "unix" over "NT" should not save you money on insurance if it's the same moron running either machine.

    Not to mention that there is some inherent risk in change. If you declare that "Unix is secure" and give a break to anyone using it, you're going to end up with a former NT administrator forced to admin a system he knows nothing about. (The same would be true in reverse.)
  • by NateTG ( 93930 ) on Monday March 11, 2002 @03:03PM (#3144021)

    I recall that a while ago some navy ships were stopped dead in the water due to computer failure, so there are legitimate concerns. Most ships have a large number of fallback systems - notably crew - that can recover from most problems.


    Large ships also benefit from a reasonable physical security structure - limited bridge and engine room access for crew - that helps computer security


    In light of a natural physical isolation, limiting the net access of the navigation computers is a natural and effective security boost.


    Most of the 'essential' computer systems that are currently used are not OS based, but embedded. It would be silly to worry about the electronic fuel pump in your car getting a worm. These embedded systems are often virus proof because they use ROM program space. Any bugs are the result of programmer error and insufficient testing



    So, I suspect that only high-level systems like navigation are vulnerable to worms. Now, let's take a look at possible damage


    Massive failures can be caused by hardware, so there must be a backup system regardless of the software that you choose


    The same redundant systems can also be used to keep the master system honest



    In general good policy and management is more important than what software is used.

  • by JoeShmoe ( 90109 ) <askjoeshmoe@hotmail.com> on Monday March 11, 2002 @03:03PM (#3144026)
    Maybe now that companies are offering hacker insurance [ecommercetimes.com] some standards and guidelines will develop?

    On the other hand...when has the computer industry ever mirrored any real-world industry? We still don't have the equivalent of the Consumer Product Safety Commission, nor is there product liability, recalls, or defect-related lawsuits.

    If there were, Microsoft would make the Ford/Firestone fiasco look like nothing.

    - JoeShmoe

    .
    • Lloyd's of London started the idea of classifying items based on their survival rating. This is where the idea of ship classification came from, IIRC, and is where the phrase "brass bottom boat" came from. Boats were rated on their likelihood to survive, and those with brass bottoms were rated with the highest survivability, and therefore received the best insurance classification (talking days of the West India Trading Company here). Ironic that the poster works for such a modern company.

      Saying that the admin makes a difference (which it does) is not much in the eyes of an insurance underwriter. You could say the same about the driver of a car, or even the captain of a ship (what is the captain of the Exxon Valdez doing these days?). You could be the safest driver on the road, but insurance just sees an 18-year-old male with no prior accidents.

      An NT box with a good admin can be made safer than a *nix box with a poor admin, but insurance looks at classifications.
  • by cplcap ( 110242 ) on Monday March 11, 2002 @03:09PM (#3144066)
    There is one answer... the US government has published a civilian version of a process that the DoD has been using for a while. It's called the NIACAP (NSTISSC 1000), here. [nstissc.gov]
    Simply put: It defines a complete, scalable, tailorable and relevant process to design, test, certify and maintain a system for use.
    IF: 1. Good, well informed individuals identify vulnerabilities during system design and testing,
    2. The upper management commits to following the maintenance plan, and
    3. The principles of good system design are followed (i.e. KISS, enforcement of least privilege), then many security issues are non-issues.
    IMHO, one of the most important things in certifying a system for a critical app is to get the underlying SW from a reputable vendor, one who identifies "Day 0" exploits immediately, preferably one on the Common Criteria List, and offers a modularized package to limit the amount of unused but potentially vulnerable code in the system. No system is going to be immediately perfect now and for its entire lifespan, but follow a good maintenance plan and you may even be able to make a M$ system secure!
  • by Arandir ( 19206 ) on Monday March 11, 2002 @03:09PM (#3144070) Homepage Journal
    It all depends on the industry in question. Take as an example, light bulbs. When you buy a light bulb for your bathroom light, no one really cares. But when you buy a light bulb for your car headlight, you start running into safety regulations. And when you buy a light bulb for your left airplane wing, the FAA is going to be breathing down your neck.

    I help build software for invasive diagnostic medical devices. The FDA (and similar organizations for other nations) is very concerned about the software we use. They don't have a checklist of brands, makes and models of software, since that's not the nature of software. But they do audit our development process. ISO compliance is easy. FDA compliance is hard.

    For our next project, some boneheads decided on Win2K and "embedded" Win2K. I personally think the decision is stupid. But it probably won't affect the final quality of the device. Why? Because it won't be a stock Win2K, it will be the embedded version, stripped of everything we don't need. We will be in charge of the hardware it runs on. It will be tested under rigorous protocols. Etc.

    The FDA doesn't care that it will have Windows on it. But they will care that it operates safely. That means it can't crash while diagnosing a live patient.
    • That means it can't crash while diagnosing a live patient.
      Not true; it only has to fail safe. The FDA wouldn't care if it crashed, so long as:
      1. The machine could not malfunction in a way which would harm the patient, and
      2. The machine would not report erroneous data which could lead to harm from subsequent mis-treatment of the patient.
      How you'd demonstrate such things given the legendary instability of Windows, I have no idea.
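      The fail-safe idea in criteria 1 and 2 can be sketched as a wrapper that refuses to report any reading it cannot validate (the function name and plausibility bounds below are hypothetical, not from any FDA guidance):

      ```python
      def read_heart_rate(raw_sample):
          """Return ("OK", value) for a validated reading, or fail safe.

          Failing safe means surfacing an explicit fault state instead of
          a plausible-looking wrong number that could drive mis-treatment.
          """
          PLAUSIBLE_BPM = range(20, 301)  # hypothetical plausibility bounds
          if raw_sample is None or raw_sample not in PLAUSIBLE_BPM:
              return ("FAULT", None)  # no data shown; operator must re-check
          return ("OK", raw_sample)
      ```

      Under this pattern a crashed sensor thread that hands back None shows up as an explicit fault, not as a zero reading - which is the distinction between crashing and failing unsafe.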
  • Networked ships? (Score:2, Interesting)

    To be perfectly honest, is the computer on a ship going to be networked externally? Maybe the control systems on board are linked by a network, but surely there is no need for vital systems to be connected to the outside world?

    If the ship needs an internal network connected to an open network, then it should be entirely physically separate from the control systems. No firewalls, no fancy security measures. Just no route between the two.

    More of an issue is software reliability and stability. I won't get into the linux/windows argument, but generally, a more stable, stripped down system can be easily achieved with linux. In windows, you run the whole OS, no two ways about it, even if it just adds to instability and problems.

    Generally, essential computer systems, such as those on planes, radar, life support systems, and satellites, are as simple as possible, and undergo rigorous testing. The development is often frozen early on because of this, resulting in reduced features, but better overall performance. It can take several years for changes to propagate through the system... this can be annoying if it is as simple as a GUI change (say, one display needs to be frequently accessed, but requires several button presses, where another, rarely used display has instant access off the yoke).

    Hardware reliability could be a problem as well - though I should imagine these systems are ready built by people who know what they are doing. I wouldn't trust off the shelf boxes and bog standard cat 5 linking them.

    Redundant systems are probably a very good idea - as is some form of power conditioning and UPS system, as a ship's power may not be the best.

    There is a lot to consider, but I think you may just as well turn to someone who has experience with aviation computers as well as someone who knows a lot about closed network security.

    And imagine.... maybe the dodgy oil tanker plot in Hackers could come true...
  • Risks of www.dnv.com (Score:3, Interesting)

    by mosch ( 204 ) on Monday March 11, 2002 @03:22PM (#3144175) Homepage
    Your webmaster, for instance, does not understand how to properly create a website, therefore their website creation software should be listed as high-risk.

    Click on 'classifications', then try to use any of the links on the left, register of vessels and such. The link for that is file:///registerofvessels. Needless to say, that link doesn't work too well on the public internet.

  • by ahde ( 95143 ) on Monday March 11, 2002 @03:33PM (#3144247) Homepage
    "Captain -- the minesweeper program's crashed again!"
  • Yes, [everyone knows] UNIX is safer than Windows. BUT, that is in general, not specific.

    I can write my own Unix, make it fully POSIX, even pay for legal use of the Unix name. (I don't have the money, but I could in theory.) I'm a fairly good programmer, and I've done some OS-level work. However I know next to nothing about writing a secure system, and apart from the backdoors that I intentionally put in my code, there will be many accidental security holes. However it would still meet the standards to be called Unix by all measures.

    The point is your standards need to mandate a solution that works. Require code audits by qualified external parties if it is net-connected. Make sure your external parties are well chosen (for example Bruce S. of Applied Cryptography fame, or his company), but make sure you have several different experts represented. Make sure the requirements are reviewed. Actually, you probably have processes for reviewing the mechanical areas of the ship; extend those processes to the software. Remember, anything you can do in software I can do with gears (though in some cases I don't know if there is enough metal on earth to actually make all those gears, not to mention the reliability), so your mechanical review process should extend to software.

    Do you let your suppliers buy an engine (eg from Cummins) off the shelf and put it in, or do you require that your mechanical engineers examine the engine design first? If they can buy any engine, then they can put in any software. If you need to see all the engine design, then you need all the software design.

  • FDA Examples (Score:2, Informative)

    If you want examples on a governmental body checking computer software, look no further than the FDA. The Good Manufacturing Practices for 21 CFR Parts 210, 211 and 810 are the bane of anyone trying to get FDA validation for their company. It covers everything from system setup, networks, vendor experience, change control, electronic signatures and testing. It will make IT sysadmins cringe in fear.

    Simply do any Google search on "FDA 21 CFR" and you'll find hordes of information that you can use.

  • Medical hardware (Score:2, Insightful)

    by Fopster ( 565757 )
    Many medical diagnostic machines must be validated by the FDA for use. A friend of mine works for a medical instrument company, and the hardware/software checks are quite involved. You might see how the FDA and the various hardware manufacturers handle this issue.
  • Ahh..

    Another day, another "Tell Me Exactly How To Do My Job" post masquerading as an Ask Slashdot question.

    :)
  • In Microsoft's anti-monopoly case, Microsoft's lawyers had to use WordPerfect to prepare their case because MS Word didn't meet the relevant bar association standards. If I remember correctly Word didn't count words reliably, so both sides couldn't be certain that they were looking at complete documents.

    Also I believe there is a similar set of standards for accountants using spreadsheets.

    Most of us just assume that our software is going to work and tell horror stories when it doesn't, but for those whose very careers depend on the accuracy of their programs, software is indeed very closely monitored.
  • so Internet security isn't an issue. For a shipboard computer, you only need two things:

    * No network connections to non-trusted systems (i.e., onboard crew and passenger personal systems)

    * Solid stability and reliability in operation.

    Given those, your ship computers should be secure.

    • In the future, if not the present, internet security for shipboard computers WILL be an issue.

      You can expect that navigation systems will at some point receive updated charts or Notices to Mariners via the Internet.

      You can expect that navigators will receive up-to-the-minute, detailed reports about harbors they are about to enter.

      You can expect that shipboard control systems will interface with shipboard navigation systems, which by reason of the aforementioned scenarios, will effectively have a traceable data connection to the PC whose monitor you are staring at right now.

      What is necessary are firewalls: 1) between the satellite-uplink internet connection and the onboard LAN (duh, of course they have this, they'd be stupid if they didn't); 2) a packet-inspecting firewall between the LAN that has full internet access and the navigation system, allowing only those packets pertaining to navigation to pass; and 3) a packet-inspecting firewall between navigation and control systems.

      The navigation system may be allowed limited access to the internet, perhaps only to certain sites. The control system should have NO access to the internet; rather, it should only be able to communicate with the navigation system.
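      The allow-list stance behind rules 2 and 3 can be sketched as simple predicates (the port numbers and host names below are invented for illustration, not real shipboard services):

      ```python
      # Toy allow-list model of firewalls 2 and 3 above.
      NAV_PORTS = {2000, 2001}      # hypothetical chart-update / Notices services
      NAV_HOSTS = {"nav-station"}   # the only peer the control system trusts

      def lan_to_nav_permits(dst_port):
          """Firewall 2: pass only packets pertaining to navigation."""
          return dst_port in NAV_PORTS

      def to_control_permits(src_host):
          """Firewall 3: control system talks only to the navigation system."""
          return src_host in NAV_HOSTS
      ```

      The key design choice is default-deny: anything not explicitly on the allow list is dropped, so crew email traffic can never reach the control network by accident.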

      Of course, I say all this with NO expertise and NO experience in shipboard IT infrastructure.
  • by Lish ( 95509 ) on Monday March 11, 2002 @04:29PM (#3144570)
    The Common Criteria:
    here [commoncriteria.org] and here [nist.gov].

    Which supersedes the Orange Book:
    here [ncsc.mil] and here [iastate.edu].
  • You might find my Secure Programming for Linux and Unix HOWTO [dwheeler.com] useful. It's a set of guidelines for writing secure programs, including writing web applications, clients, viewers (including word processors), setuid/setgid programs, and so on. It's focused on Linux and Unix, but most of the general principles apply to all systems.
  • OK, first off, if you're looking primarily from an insurance standpoint any number of criteria can be used.

    Since the computers are in a marine environment are they resistant to (salt)water?

    Is there a knowledgeable computer tech on board, and what are his additional duties? Is he/she there to make sure the computer system stays online or is he/she also cleaning out shitters?

    One computer system is much like another, just as one OS is much like another. Both Linux and Windows (pick your version) have their bugs, and the question is whether those particular bugs have an effect on the operation.

    In any operation there ideally should be enough spare parts around that you could build another complete unit if needed, but there's never an ideal situation.

    The list could go on and on and on, but there are a few major points...

    1) environment
    2) support personnel
    3) inventory
    4) access

    Most here will be talking from an electronic security aspect, but on a ship the major focus should be physical security.
  • It's been pointed out that ships whose absolute-position navigation systems (GPS, LORAN, radar, etc.) conk out depend on dead reckoning: determining position based on speed and initial course.
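    Dead reckoning itself is simple arithmetic - a rough sketch, treating one nautical mile as one arcminute of latitude and ignoring current, leeway and chart datum:

    ```python
    import math

    def dead_reckon(lat, lon, course_deg, speed_knots, hours):
        """Advance a (lat, lon) position, in degrees, by course and speed.

        Small-area approximation: 1 nm = 1 arcminute of latitude, with
        longitude scaled by cos(latitude). Real navigation also corrects
        for current, leeway, and accumulated instrument error.
        """
        distance_nm = speed_knots * hours
        dlat = distance_nm * math.cos(math.radians(course_deg)) / 60.0
        dlon = (distance_nm * math.sin(math.radians(course_deg))
                / (60.0 * math.cos(math.radians(lat))))
        return lat + dlat, lon + dlon
    ```

    Steaming due north at 6 knots for 10 hours from 60°N 5°E puts you near 61°N 5°E - and every hour without a fix, any error in the assumed course and speed compounds.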

    It occurred to me that this is the way software purchases are too often made: rather than determining exactly what is needed, purchases are based on what's already there and how fast development has proceeded. It seems like people buy the newest version not because they need it, but because it's available. Most users I know would be doing just fine with Word 97 (heck, most of them would do great with WordPerfect 6 for DOS), but they have upgraded to Word 2000 and then Word XP because it's there. (I used to use WP6/DOS extensively, and it NEVER crashed on me.)

    If Microsoft spent more effort making Word 2000 and Windows 98 more stable instead of succumbing to feature creep, the world would be a better place.

    If people wouldn't upgrade for the sake of upgrading, they could demand that future software versions be compatible with older versions: a document in Word XP should be openable in Word 1.0.
  • by Webmoth ( 75878 ) on Monday March 11, 2002 @05:53PM (#3145135) Homepage
    Many people have brought up the SECURITY question here, myself included. But the issue is SAFETY.

    SECURITY asks, will the lock keep out intruders?
    SAFETY asks, will the lock allow personnel to pass quickly in the event of an emergency?

    SECURITY asks, will the window resist breaking in an intrusion attempt?
    SAFETY asks, will the window resist breaking if accidentally impacted? Can the window be used as an egress in an emergency? If the window breaks, will the fractured glass cause injury?

    SECURITY asks, can intruders compromise the ship's navigation or control systems?
    SAFETY asks, will failure or compromise of the navigation or control systems have a negative impact on life or property?

    SECURITY asks, does the system have permission to perform task A while being restricted from performing task B?
    SAFETY asks, are the navigation or control systems able to do the specified job in the specified manner?

    SECURITY asks, how will access be controlled in the event of a system failure or compromise?
    SAFETY asks, how will catastrophic failure be prevented in the event of a single system failure or compromise?

    Hopefully, these questions will give you an idea of the kinds of questions a computer systems safety panel would be responsible for answering. Security is concerned with authority, which is NOT the question here. Safety is concerned with protecting the life and health of personnel and the physical integrity of assets.

    That being said, Michael should go back and revise the headline to read "Computer SAFETY Criteria."

  • There has been a lot of work on establishing standards for safety-critical systems. Search Google or try http://www.afm.sbu.ac.uk/safety/ as a start
  • Everyone's saying "it should be designed like such and so" and "keep it out of the water" (duh) and so on. That's all well and good, but the question here is about measurement. You've got your theories, you've implemented them, now how do you decide whether they hold up?

    The only way I can think of is to do some good old fashioned actuarial analysis. It's a lot of work and a lot of time, but basically answering this question involves (1) collecting gobs of data and (2) analysing it. As well-intentioned, well-researched, and sensible as the rest of the front-end design advice might be, it's basically a lot of handwaving. It's about where the rubber hits the road, not a theoretical discussion about what chemicals should be used to make tires.
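    To make the actuarial idea concrete, here is a toy sketch (the system names, operating hours, and failure counts are entirely made up): given enough incident data per system type, you can estimate a failure rate with a rough confidence interval, which is the kind of number an insurer or classification society could actually act on.

```python
import math

# Hypothetical incident data: (system type, operating hours, failures observed)
records = [
    ("system_A", 1_200_000, 3),
    ("system_B", 1_200_000, 27),
]

for name, hours, failures in records:
    rate = failures / hours                       # failures per hour
    # Crude 95% interval, assuming failures follow a Poisson process
    half_width = 1.96 * math.sqrt(failures) / hours
    print(f"{name}: {rate:.2e} +/- {half_width:.2e} failures/hour")
```

    With real fleets the hard part is the data collection, not the arithmetic: incidents are under-reported and operating conditions vary, which is exactly why a recognized body would be needed to gather them.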

  • ... is that in ordinary engineering, testing works better and most mechanical engineering practices are well established. Furthermore, the components ships are made of (except the software) are well understood and have often been tested and verified for hundreds of years.

    Software is different in three regards.
    1. It is one of the hardest disciplines known to man, together with creating mathematical theory and probably genetic engineering.
    2. In physical engineering you have tolerances. Systems often fail only when some component is close to a tolerance border or a substandard component was used. This does two things: you get a slightly different set of test parameters for every system deployed, and redundancy is easy and trivially added, as the tolerances are usually not all at the lower limit. With software you always get exactly the same system: no inherent redundancy and no slightly different test environment for every system.
    3. Software engineering is a young discipline. I have serious doubts that it is advanced enough at the moment to really have quality criteria that can serve as a solid basis for risk management. So the only thing you get is the gut feeling of knowledgeable people. That is far better than nothing, but in my opinion a lot of the practical use of computers in critical systems is just one gigantic experiment. The pace of adoption is far too fast from any sane engineering viewpoint.
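    Point 2 can be made concrete with a toy simulation (all numbers invented): physical parts vary within manufacturing tolerance, so each deployed instance behaves slightly differently and fleet experience explores the whole spread; every copy of a piece of software, by contrast, is bit-for-bit identical, so one latent flaw fails the same way on all of them at once.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Physical part: nominal strength 100 units, +/-5% manufacturing spread.
# Under a marginal load, only some of the 10,000 deployed instances fail.
load = 97.0
failing = sum(1 for _ in range(10_000)
              if random.uniform(95.0, 105.0) < load)
print(f"physical: {failing} of 10000 instances fail under load {load}")

# Software "part": the exact same comparison runs in every deployed copy,
# so if it is wrong, it is wrong everywhere simultaneously.
def strength_check(x):
    return x < 100.0

print(f"software: every copy gives {strength_check(load)}")
```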


    I think this will change drastically as soon as software makers bear real liability for the products they sell (free software is a separate issue), like other engineers do. Then it might just happen that you will not find anybody willing to write software where their feeling tells them the art is not advanced far enough. Software production will be slower and more careful. And, even more important, those that fail repeatedly will have to leave the business!
  • by zlooj ( 565827 ) on Monday March 11, 2002 @09:22PM (#3146228)
    IEC 61508: "Functional safety of electrical/electronic/programmable electronic safety-related systems".
    This standard, which also applies to software (see 61508-3: Software requirements), defines some very stringent requirements for systems that have anything to do with safety, i.e. where a failure of the system could endanger life.
    See the IEC's website [www.iec.ch] for more...
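    For a flavour of how quantitative 61508 gets: each Safety Integrity Level corresponds to a target band of failure probability. The sketch below encodes the low-demand-mode bands (average probability of failure on demand) as commonly quoted; treat the standard itself as the authoritative source for the tables.

```python
def sil_low_demand(pfd_avg):
    """Map average probability of failure on demand (PFDavg) to an
    IEC 61508 SIL band for low-demand mode; 0 means no SIL achieved."""
    bands = [
        (1e-5, 1e-4, 4),  # SIL 4: the most demanding band
        (1e-4, 1e-3, 3),
        (1e-3, 1e-2, 2),
        (1e-2, 1e-1, 1),
    ]
    for lo, hi, sil in bands:
        if lo <= pfd_avg < hi:
            return sil
    return 0

print(sil_low_demand(5e-4))  # -> 3: fails on demand about 1 time in 2000
```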
  • by bul ( 565904 ) on Tuesday March 12, 2002 @09:03AM (#3148205)
    Computers for main functions (propulsion, steering, cargo) in a ship have been in use since the mid seventies, and although Rule coverage lagged somewhat behind in the beginning, all major Shipping Classification Societies today have Rules which cover the above use of computers onboard ships. This relates both to hardware and software. E.g., for DNV (Det Norske Veritas), see Rules Pt.4 Ch.9 (Instrumentation and Automation) Sec.4. This is 2.5 pages of what experience has taught us are the most important aspects concerning computers onboard. However, everything else in Pt.4 Ch.9 concerns computers as well as other technology platforms; the Rules are written to be as technology independent as possible.

    The gradual increase, driven by cost considerations, in the use of PCs as workstations is something we haven't taken lightly. The hardware needs to prove itself by going through environmental/EMC testing (see Rules Pt.4 Ch.9 Sec.5 and Standards for Certification 2.4), and the software is tested by Approval Test of Application Software, where normal operation as well as reaction to the most probable system failures are tested. Admittedly the first Windows versions were not secure, but today's versions are mostly acceptable, that is, if you know which precautions to take. Of great concern are young, eager software designers who haven't learned their lessons and read the necessary safety documentation before diving into the design phase. It seems DNV as a Classification Society has a similar problem. We would not object if you do some more homework and then revert with your findings! By the way, DNV does have a group working with software analysis as well; as far as I know they are mostly used in a consulting role, for manufacturers developing extremely safety critical systems.

    One last piece of information: DNV consists of 5,400 individuals spread all around the world, all trying their best to fulfil our intentions of keeping our customers on the right track with regard to safety matters.
  • Read this first ... (Score:3, Informative)

    by Zero__Kelvin ( 151819 ) on Tuesday March 12, 2002 @10:58AM (#3148888) Homepage


    Bruce Schneier's Secrets and Lies: Digital Security in a Networked World [amazon.com]. Many of your questions will be answered, and you will walk away from the reading with much better questions.
