Programming IT Technology

Security: Mean Time Between Rootshell?

darthtuttle asks: "Hardware has the concept of mean time between failures, so how about applying a similar concept to software? Here's how it works. Cracks can be described by the level of access gained; some examples are: remote root, remote user (root if run by the root user), remote group, local root, local user, local group, and so forth. Applications and services have their own measurements and descriptions as well. Almost all types of cracks can be listed in order, and a higher-level crack counts as each of the lesser-level cracks. For example, a remote root is also a remote user and remote group crack. Now measure the mean time between incidents! Do people find ways to break into your system every day? Every week? Every month? Every year?"

"Rating a complex system would mean combining the ratings in some meaningful way. for example if you are measuring a RedHat install you might need to consider the name server, sendmail, and all other services running on the system on top of the kernel. Given a method to do this you could rate an entire infrastructure. I'm sure the insurance companies would love this. It would give them a way to measure the chance of you spilling the beans on your customers data.

I'm curious, do you think this would be useful if it could be done reasonably? What kind of mean times do you think you'd see for the various products out there?"
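
A minimal sketch of the bookkeeping darthtuttle describes, assuming an ordered severity scale and a list of dated incidents (the class, names and dates below are hypothetical, purely for illustration):

    # Hypothetical sketch of "mean time between rootshell" bookkeeping.
    # Severity levels are ordered so that a higher-level crack counts as
    # every lesser-level crack too (remote root >= remote user >= ...).
    from datetime import datetime
    from enum import IntEnum
    from statistics import mean

    class Crack(IntEnum):
        LOCAL_GROUP = 1
        LOCAL_USER = 2
        LOCAL_ROOT = 3
        REMOTE_GROUP = 4
        REMOTE_USER = 5
        REMOTE_ROOT = 6

    def mean_time_between(incidents, at_least=Crack.LOCAL_GROUP):
        """Mean days between incidents at or above a given severity."""
        times = sorted(t for t, level in incidents if level >= at_least)
        if len(times) < 2:
            return None  # not enough data to compute an interval
        gaps = [(b - a).days for a, b in zip(times, times[1:])]
        return mean(gaps)

    # Made-up incident log: two remote-root cracks and one local-user crack.
    incidents = [
        (datetime(2000, 1, 10), Crack.REMOTE_ROOT),
        (datetime(2000, 6, 2), Crack.LOCAL_USER),
        (datetime(2001, 2, 20), Crack.REMOTE_ROOT),
    ]
    print(mean_time_between(incidents, Crack.REMOTE_USER))  # 407 (days)

The lower the number for a given level, the more often that level of access is being gained; a product-wide rating would pool incident logs from many installations before taking the mean.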

  • You missed the most important variable: "what is the system used for?". Obviously, a Pentagon webserver receives more attacks than a non-networked desktop.
  • by Anonymous Coward
    The current trend in the Linux community is to consider break-ins the way Windows users consider virus infections - something that just happens and that you can't really prevent. You won't get any break-ins if you merely half-secure your system - that is, if you read the security advisories for your distro and apply the correct patches within a decent period of time. Administering a system securely takes less time than doing silly computations about the average time between two break-ins.
  • by Anonymous Coward
    Okay, let's say you're in a class of 30 people (say a uni lecture or suchlike) and each person is 30 years old. Thus:

    30 people x 30 years of life each == 900 man-years (literally, the number of 'men' multiplied by the average lifespan _so_far_; I'm just saying that they're all precisely 30 for simplicity. As long as the individual lifespans _add_up_ to 900 it's no big deal).

    Now, 900 man years -- someone in that class should be dead soon... :(
  • by Anonymous Coward
    Mean Time Between Important Slashdot Articles.

    Currently, 3 years.

  • by Anonymous Coward
    I agree with both sides of the *BSD argument. I use OpenBSD and Linux every day. I play with FreeBSD sometimes. I use Linux for my desktop (need SMP, multimedia, and VMware) and OpenBSD for my home firewall.

    I love the nice tight feel of the BSDs. I like the FreeBSD install much better than the overcomplicated or install-too-much Linux installs.

    Even though I really like the BSDs, Linux has got a lot of momentum behind it at this point. Some of the coolest (for me anyway) open-source projects are all too often Linux-only.

    True, a default Linux install can be less secure than, say, an OpenBSD install. Part of the problem is in the install and default configurations in Linux. I fully believe Linux can be as secure as OpenBSD if set up correctly. And there are some really neat security projects going on (SELinux, LOMAC, etc.) that will really tighten up security for Linux (and offer more options and control than OpenBSD can at this time).

    Linux also seems to have more choices for encrypted filesystems. I like the loopback single file/device approach vs. the CFS per-file encryption that the BSDs use. ppdd was/is cool too; I'm waiting for a version that works with a recent kernel, because encrypting your root filesystem is awesome.
  • While this is a fine plan, similar indeed to the measurements for hardware, for human violence, and many other areas, it isn't really practical for software. Not because the idea - longer time between more severe events, and that initial conditions influence the times - is invalid, but because there are too many variations.

    It would not be difficult to measure the period of cracks in default installations of $OS with no added software, exposed directly to the internet at a low-profile location. Such numbers would be useless. Almost nobody actually has true default installs of anything, and virtually every system has configuration changes, software added or removed, various local administration practices enforced, etc. It also tells you nothing about the effect of firewalls and intrusion detection, or the impact of merely *being* a higher-profile target or a site which provides certain services to the world.

    In short, there are conservatively a million different inputs to this function, among the least important of which is what the system looked like after the initial OS installation. Until such magical time as OS vendors find a way to make every possible user happy with the set of software and configurations installed out of the box, customizations will remain the rule, and at least in the Unix world, so many customizations are possible, exercising so much different code from so many different sources, that no reasonable analysis of this type is possible.

    In short, neat idea, but not possible to implement in any meaningful way.

  • I know quite a lot of math, both number theory and complexity and computability theory, and I read Gödel, Escher, Bach when it came out. To me, this sounds like a fair summary of the problem with this approach. There's no need to be so rude when you're wrong.

    With hardware, you can imagine a scenario where you ask it to do exactly the same thing a million times (say, eject a disk); it can do it right 999,999 times and fail on the millionth occasion. But because of the problem the original poster outlined, in order to measure the time between failures of software you have to assess the frequency of the events which tickle the bugs; in the case of the behaviour of script-kiddies, this means that what's supposed to be a very simple statistical measure of reliability incorporates a complex and controversial social model of the behaviour of an unpredictable group of people.
    --
  • by Cato ( 8296 ) on Saturday May 19, 2001 @11:10PM (#210964)
    The security world has a concept of 'security assurance' - this is the confidence that you can have that a given set of security features (e.g. granular privileges, ACLs, audit logging, label-based security, etc.) actually work. This is taken from the old Orange Book used to rate computer system security, which has its problems but remains a useful model.

    Bruce Schneier has been talking a lot about the need for monitoring and response to intrusions, based on the reasonable premise that you can't prevent 100% of all intrusions for all time (even OpenBSD had remote root holes a few years ago, and most holes are actually due to specific applications). If you accept this, it seems sensible to define goals for monitoring and response times.

    Defining such assurance levels is not easy - on one project, I wrote requirements for security assurance that defined quantified goals for such things as 'minimum time to detect break-in' and 'minimum time to respond to and stop break-in'.

    If you are interested in quantifying requirements for security (and other 'soft' requirements such as performance, availability, reliability, usability and flexibility), have a look at Tom Gilb's work - his site is at http://www.result-planning.com/ and his most useful book is at http://www1.fatbrain.com/asp/bookinfo/bookinfo.asp?theisbn=0201192462&vm=
  • Speaking as a part-time sysadmin in a Solaris environment, I can only hope that referring to Solaris as one of "the most secure OSes" was a grotesque joke. I would not categorically say that it is worse than Linux or Windows, but it is definitely no better - new exploits for Solaris come out all the time - currently I know of one rootshell exploit which is just hanging over our heads because it's in a subsystem which can't be turned off, and which Sun has so far failed to patch. I can't speak to the other OSes you mention, but "how many exploits do you hear of for such-and-such" is totally irrelevant to the actual security of the system, and the fact that you include Solaris in the list makes me rather skeptical of the rest of it.

  • I can make the example of the company where I work (no, I won't mention the name).

    They buy mostly dual-proc machines since, given processor obsolescence, they will last longer: it's the same reason why I bought a dual-proc at my home, and after three years it's not yet ready to be dumped.
    Back to the matter at hand. Those computers (most running NT) sit idle 95% of the time, because the limitations are not CPU power but ADMINISTRATIVE (what belongs to whom), ADMINISTRATION-related (what kind of setup is needed, whether it's to be high-availability), assorted problems with the OS (load a host more than X and NT - or Win2k - will go BANG), and general reliability problems (if you listened to Microsoft's specifications, you'd have one site hosted on each server, no more).
    Still, the double CPU thing somewhat limits obsolescence, and so it persists.
  • Two comments:

    A better metric might be the ratio of intrusions/attempts.

    For the rest of the text, one has to assume that intrusion attempts are common enough to measure. If there is less than one script-kiddie attempt per month or so, chances are very small that anything but a random attack would be directed at the site.

    But I would also suggest adding an Average Intrusion Age, i.e., the number of days/weeks/years the methods used in the intrusion attempts have been known to the general public. In my opinion, that gives a metric for how interesting a specific site is to hack.

    The theory is that only people who are interested in hacking that specific site will bother using the newest methods. Script kiddies are forced to use older methods while they wait for public tools to become available, and should the attempt fail, they are likely to focus on other sites instead.

    By combining that metric with the average time you take to patch a security hole from the moment the hole becomes publicly known, one gets an indication of the probability that an intrusion attempt will succeed.

    That knowledge, together with the number of attempts per month, can indicate whether the system's security is good enough (a rough sketch of these metrics follows at the end of this comment).

    Second comment:

    Hardware can be meaningfully rated with a MTBF value because the errors are random, physical (often mechanical) defects. It is a measure of how likely a given operation is to fail.

    With software, usually the same operation always fails. Software errors are design errors, not random failure.

    I would like to add some nuances to your statement. Yes, hardware failures due to malfunctioning or breaking components are random - provided the design itself is flawless - but design flaws exist in hardware too.

    The same is true of software. Some failures are caused by design flaws, others are introduced by human error (typos that make it through the compiler, etc.). The latter are by definition random, and can happen anywhere in the code.

    When a software fault occurs, it produces the same error every time that piece of code is executed with the same data. But the same applies to a mechanical component; if it can break in one way, that's the way it breaks when it breaks.

    However, mechanical objects can break in an unpredictable number of ways, since mechanical (and electrical) items are affected by a huge number of unknown and apparently random outside parameters. But that is true for software as well: software very seldom executes in exactly the same way, due to circumstances not controlled by the actual application (such as other applications, the actual sequence of user or I/O input, swapping, availability of resources, timing, interrupts, etc.).

    Thus, the failure might appear random to an outside viewer in exactly the same way mechanical failures can appear random.

    /Flu
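
    A rough sketch of the metrics above, with made-up field names and data (just an illustration, not any real tool):

        # Hypothetical bookkeeping for the intrusion/attempt ratio, the
        # Average Intrusion Age, and the average patch latency.
        from datetime import date

        # Each attempt: (date seen, date the method became public, succeeded?)
        attempts = [
            (date(2001, 5, 1), date(2001, 1, 15), False),
            (date(2001, 5, 3), date(2000, 11, 2), False),
            (date(2001, 5, 9), date(2001, 4, 30), True),
        ]

        ratio = sum(ok for _, _, ok in attempts) / len(attempts)  # intrusions / attempts
        ages = [(seen - published).days for seen, published, _ in attempts]
        avg_intrusion_age = sum(ages) / len(ages)                 # days the methods were public

        # Average time taken to patch, from disclosure to fix (made-up figures).
        patch_latency_days = [3, 10, 1, 21]
        avg_patch_latency = sum(patch_latency_days) / len(patch_latency_days)

        print(ratio, avg_intrusion_age, avg_patch_latency)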

  • The average number of deaths per 1,000 people per year is 9.3, so the MTBF for humans is ~108 man-years.
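
    That figure is just the reciprocal of the death rate, treating each death as a "failure" over the pooled population:

    $\text{MTBF} \approx \frac{1000\ \text{person-years}}{9.3\ \text{deaths}} \approx 108\ \text{person-years}$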
  • I know for a fact my mean time between rootshells is "forever".. Nobody has ever hacked me...
    I also know for a fact nobody has ever TRIED, or even cares about trying. My computer could quite possibly have more security holes than Swiss cheese.. I'm a non-target..

    I try to secure my box because random script kiddies don't care if the victim is a major bank or some random user's game box. They just want to prove they are cool by destroying something.

    I also avoid script kiddies.

    So what would it count for?
    Nobody ever hacks my computer because they don't want to. But ultra-secure computers get hit daily. The difference is the importance of the box, not the quality of the security.
  • Dual- or quad-CPU systems tend not to be much good for web serving, as this tends to bottleneck on the network card. We've recently abandoned E250s in favour of Netra t1s, but there are still a few E450s serving very heavy CGI loads. I will admit you'd hate to see our database servers too; I'm going to turn one into a minibar when we finally get rid of the damn things.

    One of ten Netra t1s...
    Apache Server Status for (restricted)

    Server Version: Apache/1.3.12 (Unix) mod_perl/1.24

    Current Time: Monday, 21-May-2001 11:35:36 BST
    Restart Time: Saturday, 19-May-2001 19:14:10 BST
    Parent Server Generation: 0
    Server uptime: 1 day 16 hours 21 minutes 26 seconds
    Total accesses: 1650360 - Total Traffic: 3.8 GB
    CPU Usage: u250.07 s78.74 cu7.47 cs2.02 - .233% CPU load
    11.4 requests/sec - 27.1 kB/second - 2444 B/request
    25 requests currently being processed, 7 idle servers

    One of two E450s....
    Apache Server Status for (restricted)

    Server Version: Apache/1.3.6 (Unix)

    Current Time: Monday, 21-May-2001 11:43:40 BST
    Restart Time: Monday, 21-May-2001 00:00:00 BST
    Parent Server Generation: 234
    Server uptime: 11 hours 43 minutes 40 seconds
    Total accesses: 62688 - Total Traffic: 396.9 MB
    CPU Usage: u1 s1.11 cu.09 cs.04 - .00531% CPU load
    1.48 requests/sec - 9.6 kB/second - 6.5 kB/request
    12 requests currently being processed, 6 idle servers
  • That is why for web work I use Zope. I can assign ACLs on a per-function basis in my Python code. It would slow things down if I did it on everything; however, used judiciously, it lets you make solutions nearly impossible to crack.

    I would love to have Zope-style ACL capabilities in Linux, and I'm happy that a group is working on it to replace this user/group/world thing.

    Check out the kind of security problems Zope has had. Almost every one is actually an admin error that got "fixed" so others could not make that mistake - i.e., making things essentially suid root. I think if Linux had full ACLs with very fine-grained control and large list sizes, the security problems in Linux would drop like a rock.
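
    A generic sketch of what per-function access control looks like in Python (an illustration with made-up names, not Zope's actual security API):

        # Hypothetical per-function permission check, in the spirit of Zope-style ACLs.
        from functools import wraps

        def requires(permission):
            def decorator(func):
                @wraps(func)
                def wrapper(user, *args, **kwargs):
                    if permission not in user.get("permissions", set()):
                        raise PermissionError(f"{user['name']} lacks {permission!r}")
                    return func(user, *args, **kwargs)
                return wrapper
            return decorator

        @requires("manage_orders")
        def delete_order(user, order_id):
            print(f"order {order_id} deleted by {user['name']}")

        admin = {"name": "alice", "permissions": {"manage_orders"}}
        delete_order(admin, 42)             # allowed
        # delete_order({"name": "bob"}, 1)  # would raise PermissionError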
  • Crispin - Where have you guys been? I was wondering when you would re-release the 7.0 version.
    Takin' care of business:
    Does this release take care of the compilation problems of RH7?
    That's a matter of perspective :-) Immunix OS 7.0 [immunix.org] ships with StackGuard 2.0 [immunix.org] (which is a modified GCC 2.91) as the standard compiler, and glibc 2.2. It also ships with FormatGuard [immunix.org] protection throughout.

    Can I build a 2.4 kernel with this?
    We're not shipping 2.4 kernels yet, but we are working on forward porting. Note: You should not try to compile kernels with StackGuard. You either need to patch the kernel make files [wirex.com] to turn StackGuard off, or use RPM to switch to the non-StackGuard compiler [wirex.com] while building kernels.

    I would really like to use XF86 4.03
    We are a server company, so we focus on server support, and not really desktop stuff. However, our engineers like to run Immunix on their desktops too, so we share what we use in our contrib directory [immunix.org].

    Crispin
    ----
    Crispin Cowan, Ph.D.
    Chief Scientist, WireX Communications, Inc. [wirex.com]
    Immunix: [immunix.org] Security Hardened Linux Distribution
    Now available for purchase [wirex.com]

  • Here's some actual research in this area:
    • At last week's IEEE Symposium on Security and Privacy [ieee-security.org] Bill Arbaugh [upenn.edu] presented a very interesting paper on trend analysis of exploitation, as represented by CERT incident reports. Summary: most attacks exploit known security vulnerabilities that a site admin did not patch.
    • Jim Reavis at Securityportal.com [securityportal.com] did this great study [securityportal.com] examining the "days of recess" for each of Red Hat, Solaris, and Windows NT. "Days of recess" is the total number of days that an exploit was known but no patch available, summed over all vulnerabilities for that platform.
    • At WireX, we are working on a related concept that we call "Relative Invulnerability". Here, the idea is to consider the number of vulnerabilities for a "base" system (e.g. unpatched Red Hat 7.0) that appear over a period of months, and then consider how many of those unpatched vulnerabilities are successfully mediated by some protective technology such as SELinux [nsa.gov] or Immunix [immunix.org]. The fraction of vulnerabilities stopped is the "relative invulnerability" of the defensive technology. This is written up in a paper that is currently being reviewed.
    Crispin
    ----
    Crispin Cowan, Ph.D.
    Chief Scientist, WireX Communications, Inc. [wirex.com]
    Immunix: [immunix.org] Security Hardened Linux Distribution
    Now available for purchase [wirex.com]
  • by PigleT ( 28894 ) on Sunday May 20, 2001 @02:18AM (#210974) Homepage
    Just a friendly word or two, then: beware of complacency.

    I've been on a site where I specced out the firewall rules with the native sysadmin; between us, none of *our* boxes ever got cracked in the last 18 months. However, that didn't stop some schmuck sticking a RH6.x box with rpc.statd on a public IP# - give it a week, *boom*.

    3 hrs' turnaround including a forensics copy, a custom build of RH and restoring data - I was quite proud of myself. And it's never been cracked again, either. And then I finally found a writeup of a similar incident, read `check for kernel modules', took another look at the forensics copy, and felt very small...
    Maybe another statistic to add to `expected time to (between) cracks' would be `expected turnaround time' as well - part of your security strategy has to be having a spare box to replace anything with.
    ~Tim
    --
    .|` Clouds cross the black moonlight,
  • Well, I for one use OpenBSD for all daily needs at work, and that's what really counts. It doesn't matter money-wise that I use the free Linux ISO-image distros at home while my company buys every single OpenBSD release and I put it on n+1 servers running everything from firewalls to DNS to desktop systems.

    I still use a home-rolled Linux system at home because of the much larger number of multimedia applications and the support for SB Live! and SMP in Linux, which OpenBSD currently lacks.

    Other than that I have no reason whatsoever to run Linux on production systems at work; I can't imagine putting a Linux system on the line to be cracked in next to no time once exposed to the Internet.

    ++ Ray

  • "Put simply, oBSD is the single most secure OS in existance, and fBSD the highest performing."

    I agree that OpenBSD is probably the most secure OS out of the box, maybe even the most secure OS period. But the idea that FreeBSD is the highest-performing OS is just plain silly. Pick any commercial enterprise-level Unix variant at random and it will outperform any open source OS, including FreeBSD. When FreeBSD can scale efficiently to 64+ processors like Solaris and AIX, then maybe it will be in the same ballpark as those operating systems. Until then it's small potatoes.

    Of course, you could have been trying to say that it was the highest-performing *BSD variant, in which case I'd say you were right.

    Lee Reynolds
  • if you haven't seen the average large company's systems then you might be in for a surprise. Check out http://www.rackspace.com/complex/complex_configurations.php under the advanced configuration -- that type is routinely used for low-end company websites with some database backends.
    For webservers check out: http://www.rackspace.com/dedicated/recommended/server_sun.php
    under the enterprise section. This is the average sort of website I deal with routinely. Note that Rackspace tends to skimp on redundancy... usually you see more redundant servers on most large company websites.
  • by Zurk ( 37028 ) <zurktech AT gmail DOT com> on Saturday May 19, 2001 @04:38PM (#210978) Journal
    *sigh* I agree about the root exploits part, but you're simply trolling for the BSDs, which is silly IMHO.
    Any normal (read: not yours) company will have at least dual- or quad-CPU hardware running in a cluster for their webservers.. in most cases this may be outsourced or hosted at Exodus or other NOCs. OpenBSD can support only 1 CPU at this time, so that blows it out of the water. FreeBSD is in a niche (not many admins can use it -- companies hire from the mass market, so the skillset is definitely limiting FreeBSD's existence) and doesn't scale properly as far as CPUs go (yes, I know about the FreeBSD 5 improvements -- it ain't here yet and still has the giant kernel lock).
    Linux is gaining more and more since the availability of admins is there, it's easy to set up (even if crackability exists) and familiar enough with Apache. It also scales well now with the 2.4 kernel and supports a lot of rackmount hardware at datacenters.
    Solaris is usually what you find with Netscape iPlanet or Apache at most companies.. it scales well but costs an arm and a leg.
    NT/IIS is another alternative for those cheapo firms who can't afford to hire admins to run UNIX.
  • While physical security is an absolute must, I find that most compromises are done remotely. Few people have the balls to walk into an office and take over a network drop. Few would be gutsy enough to jimmy the lock on a wiring closet door or cabinet. Any teenager can run a script on you from the outside and feel relatively safe because he's using a dialup account he registered in his English teacher's name. Now if we're talking about a site that has something to really protect - sensitive data, not the boss's pron - then yes, physical security is a must. When you really have something to hide, the worst attacks come from within. Social engineering comes at you from all sides. Joe Blow in the mail room is really short $$ one month and someone offers him big $$ if he can just slip into person XYZ's office after work and nab a copy of their inbox. Things like that are more common than you think. Basically what all this means is that both types of security are a must. Physical security is of much less importance to the average Joe, though. Do I really worry about who can walk up to my Linux firewall in my house and boot it into single? Not really. Do I care who can query BIND on my box? Of course. Do I care who can query Apache on my box? Oh hell yes. I don't want my provider turning me off for violating their anti-server policy.

    --

  • The setup of the machine (as in the combination of software) is effectively irrelevant. Reported exploits most commonly happen in individual pieces of software. It _is_ possible to rate software based on the frequency of exploits reported in one piece of software. Even the most complex exploits hit enumerable combinations of software, not thousands of variations[1].

    Widespread exploits depend on out-of-the-box insecurity. Similarly, security ratings of locks depend on their out-of-the-box characteristics, not on how you've 'customized' them with a hacksaw.

    However, the uncertainty of security ratings is almost certainly dependent on the install base of the software. So, e.g., the certainty (not the value) of the security level of Windows variants is much higher than anyone else's, while that of, e.g., MVS should be fairly low, as there are far fewer folk with access to mainframes.

    -Baz

    [1] This situation is different where there is a widely deployed insecure protocol, such that almost every implementation can be compromised by exploiting the same flaw in the protocol. However even this boils down for the most part to knowing the OS patch level.
  • by nakaduct ( 43954 ) on Saturday May 19, 2001 @04:23PM (#210981)
    Would you work for a company that boasts about its 'mean time to bankruptcy', or hire someone who's improving their 'mean time between felonious drunken assaults'?

    Hardware failure is inevitable and (generally) unpredictable. Gross statistical measures are one of the few meaningful ways to plan and budget for failures. Security is not the same way -- breaches can be avoided through vigilance and good management. Talking about 'mean time to exploit' is a cop-out -- it's surrendering responsibility to the whim of fate.

    cheers,
    mike
  • I think the concept is neat, but I don't think it involves a complex calculus to ascertain a system's security rating. You don't need to add, multiply, or average the package ratings on a machine....

    Just read off the lowest one. The security on a machine is only as good as the least secure package.

    Of course, you run into the problem of who will be the authority issuing these ratings. No software developer would be honest or forthright about their number (or the measurements would be entirely incomparable), so a trusted external body would be responsible for the ratings... and they'd be liable for all manner of litigation, from trademark infringement to libel to spoiling a market for a product by telling the truth about it.

    So I predict it'll never happen. It's hard enough to get vague information about security breaches. Nobody'd dare quantify the risks.
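
    In other words, the weakest-link rule makes the combination step trivial; a one-line sketch with made-up ratings (say, average days between known exploits per package):

        # The system rating is simply that of its least secure package.
        package_ratings = {"bind": 30, "sendmail": 45, "sshd": 400}  # hypothetical scores
        system_rating = min(package_ratings.values())
        print(system_rating)  # 30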
  • I guess I am just lucky, but I have never had any of my systems compromised. I take care of about 200 systems and have never had any problems. I know of people who do experience this type of thing, but I find that if you pay even a little bit of attention to security, you generally won't have any problems.
  • First, as the poster above said, that is a hardware error, and exactly the kind of thing that affects MTBF numbers for various hardware devices.

    Second, cosmic rays do not actually cause memory errors. What can cause errors are alpha particles, usually given off by decay of trace radioactive elements in the ceramic casing of ICs. This is a problem for any IC, and they have to be designed to withstand it. I don't know if that is actually a significant source of errors in modern memory or not, though.

    Of course, the #1 cause of memory errors is defective memory, but that is another issue entirely :)
  • by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Saturday May 19, 2001 @04:24PM (#210985)
    Hardware can be meaningfully rated with a MTBF value because the errors are random, physical (often mechanical) defects. It is a measure of how likely a given operation is to fail.

    With software, usually the same operation always fails. Software errors are design errors, not random failure.

    While measuring the frequency of break-ins is perhaps a useful metric, it shouldn't be confused with something like an MTBF for hardware. Also, the frequency of break-ins by script kiddies - who scan and more or less target systems at random and just want a shell, or to deface your webpage - versus a deliberate, directed attack against you to steal or corrupt data are completely unrelated. The latter may have access to more sophisticated tools, better knowledge of your network and software, etc. Trying to apply numbers gained from random attacks to indicate your defensibility against directed attacks is severely misguided.

    Also, attempting an MTBF rating doesn't take visibility into account. If I drop most incoming connections through my cable modem and run a port scan detector, most people scanning my whole ISP will not even notice I am there. This doesn't work for a public website that many people know exists, even if it does drop their traffic to port 31337. Hardware MTBF is usually given in "operating hours" or some other well-specified metric. I don't see how to do that for software. A better metric might be the ratio of intrusions/attempts, but since I would wager the majority of intrusion attempts, and even many successful ones, are never discovered, that isn't a really good metric either.
  • by Nailer ( 69468 ) on Saturday May 19, 2001 @05:25PM (#210986)
    Who gives a damn about root compromises? Surely no applications on your system ever run as root? Since when does my FTP server need to be able to create files in /dev?

    No wait, I forgot - most Unixes (apart from TrustedBSD, Solaris, Trix, etc.) are still using a completely non-secure, non-granular permission system - i.e., in terms of security, a broken one.

    Every service a machine provides should run under an account with the same name, with permissions to do what the program does and no more.

    Capabilities can provide some of this function, but they still can't fix some aspects of this fundamentally broken system. For example, I have some word processor documents stored on a server. Some users need to read and write the files, another group needs to read the files, and all other users should have no access at all. There are ways around this, but they're hacky and make the system much harder to administer, compared to a four-line ACL.

    The Linux ACL and Extended Attributes project [google :) ] is trying to fix this, and is already being used in production systems. But until it's in the kernel nothing is being written for it, and there are still vast quantities of broken applications.

    And while we're on the topic: please don't ever assume UID 0 belongs to an account called root (apps, not documentation). Drone about STO all you want, but obscurity as a layer on top of real security simply does slow crackers down. Haven't you ever used a honeypot?

  • Have you considered the fact that your box at work is behind a firewall?

    That might explain the lack of scans against that particular box. (Your co-workers could always begin scanning your box, if it would make you feel better about all of those ports/services that you closed)
    ---
    Interested in the Colorado Lottery?
  • [Safes and locks] are rated by number, that being the amount of time it takes to break or cut through them.
  • by astrophysics ( 85561 ) on Saturday May 19, 2001 @06:32PM (#210989)
    Your proposed standard levels of exploit are very Unix-centric. While such a set of "levels" might be appropriate (or even best) for comparing current Unix-like operating systems, I think you'd want to make it more general. Things like: unauthorized permanent-storage read, unauthorized memory write, unauthorized code execution, unauthorized network write, etc.

    If you applied such a system to today's Linux, you might think it kind of silly, since there would be a fair bit of redundancy (anyone who got root could do anything). However, I hope that in a couple of years there will be several security systems that plug into Linux and make your current concept of root, user, and group privileges inadequate.

  • My Linux box at work has been up for 3 weeks with not a single portscan or connect on telnet, rlogin or ssh.. My Linux box at home on cable.. 5-10 portscans per day, at least 10 connections on telnet, ftp and other services per day. (None of them actually work, of course; they're locked down.) It's all about where the box is sometimes..
  • Mean time between failures typically means the average time between two failures given a large/infinite pool of machines/widgets.

    Humans have a mean time between failures of something like 900 years. That means for every 900 man-years lived on Earth, someone dies (somebody correct me if I'm wrong on that number). Humans, however, have a Mean Time to Failure of about 72 years. That's the average time a human lives before failing.
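
    In symbols (just restating the two definitions above):

    $\text{MTBF}_{\text{pooled}} = \frac{\text{total person-years lived}}{\text{number of deaths}}, \qquad \text{MTTF} = \text{mean lifetime of one person} \approx 72\ \text{years}$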

  • Like this: Gödel
  • I don't understand this. Does that mean you take 900 and divide it by the number of humans, 6 billion, and that gives you the number of years between someone dying somewhere on Earth? (probably a few seconds)

    Wouldn't the number work out to be very close to 72? I don't understand why 900 is so large, unless I'm doing something wrong and comparing apples to oranges.
  • We need a mean time between reboots. Just so people in the Linux vs. Windows camps can have something that's not subjective to say the next time a flamewar fires up.

    Of course there are Netcraft's ratings, but those aren't really that accurate.

    -Jon
  • No one runs webservers on MacOS, therefore no one will try to exploit them.

    No-one except for the army that is.

    -Jon
  • Warning- This is offtopic relative to the rest of this thread.

    Crispin - Where have you guys been? I was wondering when you would re-release the 7.0 version.

    Does this release take care of the compilation problems of RH7? Can I build a 2.4 kernel with this? These questions and many more... are not answered on your web site! I used Immunix 6.2 for a while, and I liked it a lot, but I would really like to use XF86 4.03, the 2.4.x kernel, and the latest Gnome.

    Please respond in this forum or to alewis@knightsbridge.com. I have a lot of interest in seeing what my company and my customers would think of this product.

    Thanks!!!
  • Not all software. It would be possible to write a VERY simple program that runs on a computer, possibly without an OS, that could never fail except for hardware reasons. Imagine a simple loop that does nothing more than print "*" to the screen.
    =\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\= \=\=\=\=\
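
    Something like this (in Python purely for illustration; the poster has in mind something even more bare-metal, with no OS underneath):

        # A trivially simple program: if this ever fails, blame the hardware.
        while True:
            print("*", end="", flush=True)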
  • GEB = Godel, Escher and Bach by Douglas Hofstadter.

    Hmm, how do you do O-umlaut? (I know Goedel is acceptable in German, but won't be in a bookshop database)
    ----

  • Bruce's argument, in this Crypto-Gram newsletter [counterpane.com], is that the possibility of an exploit puts all known security holes into the script-kiddie category.
  • I'm curious, do you think this would be useful if it could be done reasonably? What kind of mean times do you think you'd see for the various products out there?"

    No, because all you can do is measure the past of a piece of software, and that doesn't tell you about the current version or a modified version. The function for calculating the score for a piece of software would be unfair, because it probably couldn't take into consideration the age and popularity of the software, or whether it is open or closed.

    It would be better to rate admins by their history.
  • It wouldn't make it as secure, because OpenBSD isn't only the kernel, it's the whole distribution. All of it is pretty closely screened for security holes, so it's naturally more secure than a Linux distro that just takes the most recent version of everything. It's also naturally not as up-to-date, which is one of the reasons why it can't compete with Linux (or even the other BSDs) in a general comparison - it lacks too many features.
  • Sorry, but security breaches happen. Security is never absolute, and "vigilance and good management" only go so far. Managers may be naive enough to believe otherwise, but the fact is, the true cop-out is the statement "we can never be hacked".

    Of course, from a marketing point of view, that cop-out is preferable to being realistic, and I suppose that's the main reason why this is never going to work: companies don't admit security breaches, they cover them up.

  • Put simply, oBSD is the single most secure OS in existence
    Sweet mother of God, if your head were any further in the sand, the liquid magma would be burning your scalp. Would you like to know more? [oreilly.com]
  • I find that most compromises are remotely done
    Actually, most compromises are done by an employee. Besides, the only box you have public should be your firewall/gateway, and it should be proxying connections to anything you actually need to be public, right?
  • System security covers just that: systems. Not software. Do your users know what to do if somebody claiming to be 'Bob from IT' calls up asking for a password? Could a nice-looking person in a suit, with a laptop, stroll into your office, sit at an unused cubicle claiming to be a consultant, and plug into a live network drop? Do developers or other non-IT people have the ability to put up live systems - say, for development work, testing, or just because they happen to know where they can find a network drop with external access? Do you have a disaster recovery plan? A hacker or an earthquake will destroy your data all the same. In a similar vein, what sort of locks are on your electrical closet? Can some idiot with a Linux boot floppy and a whack of filesystem drivers get to any machine with data on it? Go read up on the "Orange Book" computer security rating system. Honestly, software is the least of your worries.
  • Okay, you have:
    6 billion people
    900 people-years MTBF

    Now just divide to find the answer:
    900 people-years / 6 billion people
    = 0.00000015 years between failures (deaths)
    = about 4.73 seconds between failures

    Make sense now?
  • That's great... 'in the future, linux will be more secure than OpenBSD is now'

    Well, whether you are right or wrong, the fact is that the 'trusted' features everyone is raving about are available for OpenBSD right now (even if not part of the main system); go read deadly.org to hear about the patches... Not to mention the 'TrustedBSD' project that aims to implement ACLs on FreeBSD, which will no doubt spread to oBSD and NetBSD along with security improvements in the oBSD code...

    ---=-=-=-=-=-=---

  • How about the best-performing OS on a single processor? Would that description make you feel better? If not, too bad, because it is certainly true.

    ---=-=-=-=-=-=---

  • Then isn't this more a measure of the sysadmin than of the software or infrastructure? Mean time between failing to fend off script kiddies.
  • Drone about STO all you want, but obscurity as a layer on top of real security simply does slow crackers down. [Emphasis mine.]

    Absolutely. But this isn't Security Through Obscurity.

    The complaints about STO from the security community came from the days when fewer people understood security. Some people who thought they did (but didn't) reasoned that hiding information was the same as removing it.

    This led to all sorts of strange things, including "proprietary" protocols and algorithms. Many of these secret encryption algorithms were easily broken because the algorithm was flawed. One of the first things you learn about crypto is that keeping an algorithm secret does not increase its strength. Similar things have happened to many proprietary protocols.

    The concern about obscurity in any form has been that it's often used as a crutch, and that's led to a community backlash.

  • I would be inclined to agree with you. I can't talk for OpenBSD because I've not tried it, but I definitely appreciate the craftsmanship that seems to have gone into FreeBSD (a nice summary of which is presented here [cons.org]).

    The rag-tag "throw a zillion monkeys at the problem" chaotic nature of the Linux evolution doesn't help for things like consistancy - something that I appreciate in FreeBSD. Then again, I'm a Win32 coder at heart... :op

  • ..."How am I supposed to tell how long it will take to fix bugs we don't even know about yet?"

    The problem is that you don't know the bug is there until it is exploited. So the question becomes: how do you estimate the number of bugs in a program? There are several rules of thumb based on statistics, but those aren't reliable enough to list as a spec.

    The basic issue is that hardware manufacturers can test a statistical sample of their product and use those results to estimate MTBF for that product. With software, each program is unique, so it's difficult to say with any certainty that tests done on past software will extrapolate to new software, even if the statistical analysis is sound.

    And when a program "breaks" (ie. a flaw is discovered), all copies of that software (or configuration) are affected. If a company buys 1000 copies of the same hardware, they can be confident that, on average, only a certain percentage of that HW will fail before the MTBF point.

    With software, does it matter if the average time-to-exploit is high, if *your* current software package is hacked two weeks after installation? All one thousand software copies in the organization are now vulnerable; all have to be fixed/replaced. So that pretty MTTF spec isn't really very useful anymore.
  • As a software environment gets more and more sophisticated, the interplay of the components get more and more chaotic... it's not like anyone designs an environment from the ground up these days... you put parts together... you really can't foresee every possible combination of variables, especially when you didn't code it all yourself.
  • thousand [K] [L]ines [O]f [C]ode
  • I wasn't downplaying the need for redundancy. Load balance a stack of single-CPU oBSD boxes and you're good to go. Notice that the (webserver) UltraSPARCs on the Rackspace page are all single-processor machines, too. And as far as a database, I wouldn't even consider using *BSD or Linux for a critical database. I'd go Solaris + VxFS or IRIX + XFS. (Perhaps Linux + XFS or Reiser after they've proven themselves).
  • The site www.army.mil is running WebSTAR/4.2 ID/70636 on MacOS.

    The site 140.183.234.14 is running Phantom/2.2.1 on MacOS.

    The site www.goarmy.com is running Netscape-Enterprise/4.0 on Solaris.

    The site www.cia.gov is running Netscape-Enterprise/4.1 on Solaris.
  • Mac OS is used by at least a few million folks, most of whom don't know what they're doing, and many of whom have nice equipment and lots of bandwidth (artists, DTP folks, etc). Can you think of a better group to hax0r and use their resources?

    The reason Mac OS (classic) doesn't get hax0red has to do with the OS's architecture. A circa-1982 design with no command-line interface, no Unix or DOS roots, and no real non-GUI way to control the beast. The closest I've seen was a background-only daemon that listens on a certain TCP port for AppleScript commands which it will then execute. Not too useful.
  • by green pizza ( 159161 ) on Saturday May 19, 2001 @06:04PM (#211018) Homepage
    any normal (read not yours) company will have at least dual or quad CPU hardware running in a cluster for their webservers

    Jippity! If "any normal company" has clusters of dual and quad CPU machines to run their websites, I hate to see the hardware that runs their databases! And on the same token, I guess I haven't experienced these websites from "any normal company".

    I agree that it's a bit of a shame that oBSD isn't an SMP monster, but that fact alone really isn't much of a problem these days, especially with 1.0+ GHz processors being the norm. Of the websites I help maintain, one handles an average of 1.2 million requests per day (average of about 14 requests per second, and about 8 GB/day). Granted over 95% of that is static content, but it's all hosted through a Pentium 233 running a heavily-patched version of Red Hat 5.2 and the load average rarely goes above 0.15. Another website handles the registration and accounts of a regional academic competition program and gets an average of 5 CGI hits per second. Using MySQL and Apache+ModPerl on a PII-266 atop Red Hat, the whole works chugs along fine with a load average around 0.10.

    oBSD on just one modern CPU may have its limitations, but it could easily saturate a pair of 45mbit DS3/T3 links with dynamic (PHP/perl/etc) content without much cpu load at all.
  • Mitnick is/was nothing more than a social engineer. Sure, he did some nice cracks, but he was using information that someone else had found out. He just decided to use that information.
  • Of course "social cracks" are the largest security risk, but the original comment of "some exploits are so fucking complex it would take fucking Mitnick to do it" makes it seem like Mitnick was the best hacker that ever existed, which is a false statement.
  • but measuring how likely a specific version of Netscape is to segfault at any given second might be useful to those thinking of chancing an upgrade. Actually this is a silly idea. There is already a system in place in most companies that should suffice. It goes by many names, but we call it the "Decent, Obtuse, Hellish!" system, or DOH! for short.
  • software fails also... ever get a blue screen?

    Yeah, but the point is that hardware is a physical thing that suffers from the weaknesses of being a physical thing. It has certain tolerances you can't exceed, otherwise it'll break, and it's going to wear out eventually anyway.

    Software, OTOH, is an idea, and doesn't suffer from any of the "weaknesses of the flesh"; you can't wear out a concept, can you? The old adage saying you can't build bug-free software isn't saying anything about the nature of software, it's a statement about the weaknesses of the humans who write software. Finding a way to write bug-free software is the Holy Grail of software verification research.

    And who says you can't write bug-free software, anyway? I've done a few decent "Hello, World" implementations in my time...

  • I always get confused by these arguments about which operating system is the most secure. Sure, out of the box most Linux distros are a lot less secure than OpenBSD, but if you spend the whole 10 minutes it takes to disable all the inetd services except the needed ones, disable telnetd, disable ftpd if it's not needed, and check which binaries are running suid root, would this not make it as secure as, say, OpenBSD? I rarely hear of any exploit code which attacks the Linux kernel itself, therefore it's reasonable to assume the lack of security of a distribution is simply the way it was put together, not Linux in general.
  • ASFRecorder doesn't actually decrypt the content. All it does is capture the packets sent by the server and reconstruct them into a file. If the stream is protected and encrypted with DRM, you can't play it without the license, even after using ASFRecorder. The thing is ... nobody uses the DRM protection options. That's why the author wrote the program, to send a wake-up call to content providers. Who slept right through it.
  • How do you do this red screen of death?

    BSOD Properties and Other Customizations [pla-netx.com]. This page has a little VB3 app to easily let you make the changes. Or, if you don't want to bother with that, it tells you what to add to SYSTEM.INI.

    I made my BSOD red for a while too. But I found it induced too much anger in me, so I switched it back.

  • As far as hardware failure goes (MTBF, MCBF, et cetera), things like heat, dust, corrosion, and general wear-and-tear can be predicted accurately. However, when it comes to the mean time before crack, it's almost unpredictable. The utility for unlocking secured Microsoft WMA files was released the day after the format's launch. DeCSS took a while longer, but it was eventually done. ASFRecorder effectively circumvented Microsoft's intent to prevent users from storing copyrighted video content (I forget when the first version of ASFRecorder was released, but the final one is dated late June 2000). There are plenty of other examples out there, which show that the time it takes to crack something can vary dramatically. These examples prove that predicting the "mean time before crack" is like trying to predict the weather in New England down to the square mile: it's nearly impossible, you're almost always wrong, and it's not worth the effort.
  • I'd imagine rating hardware would be easier, as the main failure points are physical components breaking due to heat, etc.. Software has a lot more variables, including the hardware it's run on, the experience of the administrator setting it up, and such...
  • Interestingly, Lockheed Martin Corp. (UK division) uses CMM to assess all of its software developers. I know a member of a team that is flying out to Washington to do a CMM assessment of a company out there for a whole week.

    They actually have people employed who specialise in CMM, and a great deal of planning goes into it all. But then, when it has a military application I guess you can never be too careful.

  • Nope, my Linux box doesn't "do" blue screens, Win2k doesn't seem to either, and my one remaining 98 box has had some registry editing done and now gives lovely Red Screens Of Death every 5 minutes. They scare everyone else more when they're red, 'cos they haven't seen them before and we all know red = danger.
  • None of which invalidates anything he's done. The risk of effective "social cracks" is probably the _largest_ security concern out there..
  • I'd guess, though, that most, if not all software is "broken" in some way. And, as such, it pretty much follows that without a decent sysadmin to keep things ticking over, to keep reading logs and applying updates, all servers are "bound to fail sooner or later". It's "not a question of 'whether' - it's a question of 'when'".

    In that respect, then, your argument winds up supporting a "mean time between rootshell" calculation; for reasons discussed elsewhere, though, such an idea is pretty daft..

  • There seem to be rather obvious flaws in this entire concept. First off, how do you determine the extent of a security exploit? In the worst-case scenario, you're not going to even be *aware* of having been cracked, and it's precisely those attacks that you don't know about that can be the most dangerous.

    Secondly, I don't think there's any way of coming up with an objective standard, a base line by which such a meantime could be judged. No two servers are alike, and even within broadly comparable applications it's more complex than just measuring traffic to the server, average load, etc. Some servers are more visible, or more desirable (from a cracker's perspective) -- either from a kudos point of view, or in terms of the potential use that can be obtained from a cracked server. It's pretty meaningless to say "my server's been up for months without being cracked" if it's some no-name machine that nobody knows or cares about.

    Security's also not just to do with the software -- the computing power of the machine also needs to be taken into account. You're not going to be able to crack a password list as quickly on a 486 as on the latest pentium machine, for example.

    So: no, it can't be done, reasonably. And asking what kinds of mean times we'd expect to see is, frankly, trolling for a platform flamewar..

  • Study a little more math, read GEB, and then you'll know.


    Art At Home [artathome.org]

  • Sorry my tone came off more impolite than intended. The subject line is the slogan of the "Mtv Cribs" program.

    Software, on the other hand, is a digital entity, so its function doesn't change with time. If it was broken on the 10,000th time around, it was broken all along. Whether anyone noticed it was broken is completely another issue.
    Extrapolating the future from the current situation will get you in trouble. Reality changes. No software can take into account all future input. Look up "misfeature" in the jargon file.

    On a somewhat related note, GEB is indeed Gödel, Escher, Bach.
    Check it: HTML Character Entities [http]


    Art At Home [artathome.org]

  • I think you understood the opposite of what I was trying to say - exactly because you cannot predict all the future uses a software unit will have, you cannot perform a statistical analysis of when, if at all, it'll break.

    It'll definitely break, you can't know when or how until it happens.

    Art At Home [artathome.org]

  • my point was merely that mean time between failure is meaningless in both hardware and software.
    Art At Home [artathome.org]
  • This is not a new idea to security in general. Safes and vaults are rated in terms of "hours," meaning how many hours, at a minimum, that container will resist breach against the current state-of-the-art cracking methods. (Yes, it's called cracking there too.) I suggested applying a similar system to the report given for security assessments by my company, but quickly found out how impractical it would be.

    The reason why you can do this with a safe is because there are so many known quantities. You know what the safe is made of, and how it is built, and the properties of both, with no hidden surprises. The vulnerabilities of both stay constant over time as well; you don't ever hear about a previously undiscovered buffer overflow in carbon steel :) And finally, the methods of attacking safes and vaults do not change quite so quickly as hacking methods evolve, so you can know authoritatively what would be done by an attacker, and account for all of it.

  • Please forgive me, O clueful one, for offending your knowledgeable sensibilities, and enlighten this poor soul - what is lacking in my mathematical education? And who/what/why is/are GEB?
  • Actually, you could get a random software error. If I recall correctly, on average every 3 months a cosmic ray from space will actually toggle a bit in your system memory. So, technically, you could have a random software error. Not likely =) but possible!

    But if it is the RAM that was affected directly by the cosmic particle, wasn't it the hardware that was at fault here? You can have software measures to counter it, such as checksumming the memory, but the failure itself did not originate in the software.

  • GEB = Godel, Escher and Bach by Douglas Hofstadter

    Thanks.

    BTW, the way to type a letter that does not exist on your OS: in DOS, you could type <Alt>-ddd, where ddd is the decimal entry from the current code page. In Windows there is the "Character Map" utility for that. I guess other OSs have their own means to that end.

  • Sorry my tone came off more impolite than intended. The subject line is the slogan of the "Mtv Cribs" program.

    Apology accepted :-) I guess my lack of familiarity with American culture causes me to sometimes accept things at face value when they're supposed to be subtle references.

    Extrapolating the future from the current situation will get you in trouble. Reality changes. No software can take into account all future input. Look up "misfeature" in the jargon file.

    I think you understood the opposite of what I was trying to say - exactly because you cannot predict all the future uses a software unit will have, you cannot perform a statistical analysis of when, if at all, it'll break.

  • by Betrayal ( 263036 ) on Saturday May 19, 2001 @03:56PM (#211042)
    The reason statistical terms such as "mean time between failures" are commonly applied to hardware is that hardware is bound to fail sooner or later. Accumulation of damage from friction, shaking, etc. means that sooner or later all physical things will break. This is just the second law of thermodynamics - it's not a question of "whether", it's a question of "when".

    Software, on the other hand, is a digital entity, so its function doesn't change with time. If it was broken on the 10,000th time around, it was broken all along. Whether anyone noticed it was broken is another issue entirely.

  • Software does have a similar concept. I remember it from my software engineering course last fall. I believe it is "Mean Time to Failure" ;) go figure...
  • Average time between failures is a mean value.
    Mean time = (Time1 + Time2 + Time3) / 3
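
    Or, in general, over n observed intervals:

    $\overline{T} = \frac{1}{n}\sum_{i=1}^{n} T_i$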
  • software fails also... ever get a blue screen?
  • I just looked it up in "Fundamentals of Software Engineering" by Carlo Ghezzi, Mehdi Jazayeri, and Dino Mandrioli.
    It is "Mean Time to Failure" (MTTF), and it's defined as "the average time interval between two failures".
    Sorry, no good links, but MTTF is what you're looking for.
  • Put simply, oBSD is the single most secure OS in existence

    While a little fanatical, if you qualify it as "oBSD is the single most secure general-purpose OS in existence" it rings closer to the truth. Of course, even the most secure OS in the world is worthless in the hands of an inept administrator.


    She feels me I can taste her breath when she speaks.
  • FreeBSD is in a niche (not many admins can use it -- companies hire from the mass market so the skillset is definitely limiting FreeBSD's existence)

    In five years of running an ISP I never had any problems finding highly qualified FreeBSD admins.

    Linux is gaining more and more since the availability of admins is there

    If that's your metric then perhaps Windows is the way to go. There's a ton of MCSEs out there!

  • by Lover's Arrival, The ( 267435 ) on Saturday May 19, 2001 @03:55PM (#211049) Homepage
    4 years with no remote exploit in the default install. That's what I call a good 'mean time to root'.

    Our company changed over to OpenBSD from Red Hat because we were fed up with all the root exploits, all the patches all the time, and the incoherent way in which Linux as a whole tends to be organised - i.e., Linux is the kernel, not the OS. OpenBSD is the entire OS, and is much more sane, IMHO.

    The problem with Linux is the general chaos. Great for hackers, and definitely much better for the desktop user, but oBSD and the *BSDs generally are much better in production environments.

    Put simply, oBSD is the single most secure OS in existence, and fBSD the highest performing. Take your pick; it's no contest as far as BSD vs. Linux is concerned for my company.

  • Alt+### works on Windows as well. All versions.
  • How do you do this red screen of death?
  • oops i meant bind not sendmail =P
  • The difference is, of course, that hardware fails, statistically, in a more or less random fashion. Exploitable software fails once, and then near *constantly* from that one mistake.

    During my first 5 minutes of looking at BlackICE output, I got hit from about 4 different directions as people scanned through the net...

  • See the last few cryptograms. He talks about insurance for security. Readers respond in either supportive or disagreeing tones.

    Cryptogram March 2001 [counterpane.com] has an article about it, for example.

    --

  • Let me clarify something here. I'm not looking for the mean time between rootshells for an individual system, but for a type of system. For example, what's the mean time between exploits in BIND 8 in general, not in your specific BIND 8 installation? The more general question really is: why don't we measure how often a product has flaws discovered? If I need to choose between Red Hat and OpenBSD and security is my biggest concern, I'm going to look at 4 years without a remote root exploit and choose OpenBSD - but what if I'm deciding between Solaris and HP-UX? What's the difference?

    The reason I find this meaningful is that it helps measure workload. Am I going to take the machine down once a week to apply a patch, or once a year? The shorter the time between new exploits, the more work I have to do to maintain it (a rough sketch of what I mean follows at the end of this comment).
    --
    Darthtuttle
    Thought Architect
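
    A back-of-the-envelope sketch of the product-level measurement described above, using made-up advisory dates (not real data):

        # Hypothetical: mean time between published exploits, per product.
        from datetime import date

        advisories = {
            "Product A": [date(2000, 1, 29), date(2000, 11, 10), date(2001, 1, 29)],
            "Product B (default install)": [date(1997, 6, 1)],  # only one incident in this made-up log
        }

        for product, dates in advisories.items():
            dates = sorted(dates)
            gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
            mtbe = sum(gaps) / len(gaps) if gaps else None
            print(product, "- mean days between exploits:", mtbe)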
  • The problems with metrics on software has always revolved around the abstract nature of software itself. Almost any sort of metric can be misrepresented one way or another.

    Long ago, companies measured programmer productivity using KLOCs, 1,000-line blocks of code. The more KLOCs you could kick out in a given time period on a task, the greater the perception that you were working harder. We all now know how easy it is to manipulate that perception: 500 lines to add two integers, sprinkling thousands of lines of useless looping code documented to look like it's crucial to the system.

    Proper measurement of failure is further compounded by the complex nature of most products written with OOP. Underlying components, physical devices, and operating system issues can be mistaken for problems with the software application, when in fact the application itself requires no modification to fix the problem. Metrics also rely on fixed points in time as references, which makes matters worse, as some problems are beyond the scope of the project (i.e., the product works fine, but the customer later upgrades video drivers that cause the app to break).

    Carnegie Mellon University [cmu.edu] has pioneered the software maturity analysis area with its Capability Maturity Model [cmu.edu] for software shops (think ISO9000). If I was a large customer (say Boeing), I would probably make my purchase decision more along the lines of the CMM rating of the software team that created the product rather than some silly arbitrary metric that most suits probably wouldn't comprehend anyway.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...