Microsoft

Windows NT 4.0 C2 Evaluation finished 155

DevNu11 writes "Windows NT 4.0 SP6a + Hotfixes + Trusted configuration has finished evaluation under the TCSEC program. This page has a configuration guide for deploying a system in the C2-evaluated configuration. A note at the bottom of the page points out the difference between NT being secure and someone being able to configure NT to be secure."
This discussion has been archived. No new comments can be posted.

  • Give me a break-- the whole NSAKEY thing is most likely benign. And even if you believe differently, you can change the second damn key.

    However, I'd say this adds some light to the subject.

    Part of the C2 evaluation process is "Fix bugs. Repeat." Perhaps the testers found some sort of minor bug in the source code that could only be corrected by the addition of a new key (or that could be fixed *most easily* by the addition of a new key). Microsoft adds the key to appease the testers (who happen to work for a branch of the NSA). What's the logical variable name for the key? NSAKEY.

    Is this a plausible explanation?
  • If Linux had gotten C2 certification, everybody would be happy and say how good it was. Now that NT got it, everybody is trying to shoot it down. How about talking about how Microsoft could improve it? I know most readers of Slashdot are pro-Linux, anti-Microsoft people, but just because one OS got something you don't have to start bagging on it.
  • etc. ;)

    -------------
    The following sentence is true.
  • Nope, I don't put Linux in the same trash can as NT, because NT has the advantage in the following categories: 1. Most blue-screens when doing something non-standard. 2. Most confusing, unnecessary, and poorly-documented OS feature: the Registry! 3. Biggest disconnect between what the GUI and sales copy say is required for configuration and what is really required.
  • Remember that a C2 cert doesn't apply only to the OS - it covers the specific system, configuration and all, that the OS is installed on as well. Even if they managed to get it C2 certified on a system with no removable media and no network connection, that doesn't mean much to the average NT admincritter - Microsoft's not gonna certify their install...
  • Well, if you have a good understanding of these certifications, you'd realize they have a strong point - since this certification only applies to a single configuration running an install of NT 4 with SP6a, with no removable media or network devices, it doesn't mean much to the average NT admin schmuck. Even if a Linux install was C2 certified (hey, maybe VA should build a Linux system, configure it, and get it C2 certified - I bet it could make Red Book), it wouldn't mean much to ME - my Linux install isn't certified. The only thing it proves is that the OS has the potential for certification.
  • NTFS is actually originally based on HPFS. NT used to include an IFS (installable filesystem) driver to support OS/2's HPFS partitions, IIRC. Doesn't it anymore?
  • In order to be C2 certified, administration habits, physical security of the server, and other things have to be evaluated. So NT (or Linux) by itself could never be certified. Certification is on a per-site basis.
  • When is M$ gonna figure out the computer users of the world are slowly being de-sheep-erised? I mean, already thousands of former M$ fans who swallowed everything Dollar Bill said because he is supposedly the smartest guy around (eeeh, the Guinness Book of Records says the smartest guy around is a 12-year-old Japanese kid who speaks 42 languages with an indeterminable IQ somewhere above 220... Dollar Bill can only speak (arrogant) English and (broken) code). What I am trying to get at is that as the world starts to (slowly) figure out what this whole computing thing is all about, they'll stop falling for pretty words and pseudo-certifications that are voided the moment you connect your machine to the Internet. They'll figure out that the great majority of system security breaches are based on exploiting bugs in (mostly OS) code. Thus
    stability = security = UNIX, damnit.
    By creating an operating system that has tight passwords and whatnot you have not created a secure system, not as long as it crashes every few weeks. Because a crash, stall, etc. without human intervention speaks of a system that a malicious HAXOR can easily lead into a willful, exploitable system crash. Wham! Bye-bye privacy. Ever tried to deliberately crash a *nix? It's pretty hard these days and getting harder all the time, because the underlying code was designed from the ground up to be stable, and that makes it secure. Ever notice how a new security update for IE appears about once a week? Because M$ is allergic to decent debugging. They'd rather ship buggy stuff and have you download a few million updates every week. (Ever thought of what that does to your phone bill?) Anyone who is still sheep-ized after this, do yourself a favour and go see the UNofficial M$ home page. [microsith.com]
  • Somebody moderate that one back up; it's not flamebait.

  • by Sulka ( 4250 ) <sulka.iki@fi> on Saturday December 04, 1999 @07:42AM (#1480417) Homepage Journal
    Procedure for C2 NT installation, from the doc:

    Unpack and set up hardware
    Set power-on password
    Install Windows NT
    Restart Windows NT as Administrator
    Verify video driver
    Install Printer and Tape Drivers
    Install Service Pack 6a
    Install C2 Update (KB Q244599, Q243405, Q243404, and Q241041)
    Enable hardware boot protection
    Remove the NetBIOS Interface service
    Disable unnecessary devices
    Disable unnecessary services
    Disable Guest account
    Remove OS/2 and POSIX subsystems
    Secure base objects
    Secure additional base named objects
    Protect kernel object attributes
    Protect files and directories
    Protect the registry
    Restrict access to public Local Security Authority (LSA) information
    Restrict null session access over named pipes
    Restrict untrusted users' ability to plant Trojan horse programs
    Disable caching of logon information
    Allow only Administrators to create shares
    Disable direct draw
    Restrict printer driver installation to Administrators and Power Users only
    Set the paging file to be cleared at system shutdown
    Restrict floppy disk drive and CD-ROM drive access to the interactive user only
    Enable NetBT to open TCP and UDP ports exclusively
    Modify user rights memberships
    Set auditing (if enabled) for base objects and for backup and restore
    Disable blank passwords
    Restrict system shutdown to logged-on users only
    Set security log behavior
    Restart the computer
    Update the Emergency Repair Disk

    No POSIX, eh? I can understand most of the mods, but to me it seems like the machine pretty much becomes a dumb terminal after all of this.

    sulka
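
    A list like that is, at least in principle, mechanically checkable. A hypothetical sketch of verifying such a checklist (the step names are abbreviated from the list above, and the "completed" set is made up for illustration):

    ```python
    # Hypothetical sketch: verify a C2-style hardening checklist.
    # The step names abbreviate the list above; "completed" would come
    # from a real inspection of the machine, which this sketch fakes.

    REQUIRED_STEPS = [
        "install_sp6a",
        "remove_os2_posix_subsystems",
        "disable_guest_account",
        "clear_pagefile_at_shutdown",
        "restrict_floppy_cdrom_to_interactive_user",
    ]

    def missing_steps(completed):
        """Return the required steps that have not been done yet, in order."""
        done = set(completed)
        return [step for step in REQUIRED_STEPS if step not in done]

    missing = missing_steps(["install_sp6a", "disable_guest_account"])
    ```

    The real guide, of course, prescribes concrete registry keys and service settings rather than names like these; the point is only that a fixed evaluated configuration is something you can diff a machine against.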
  • You can audit ugly objects, too.

    That's good; I think most of the objects in NT would be pretty ugly. :) [I'm just joking around, NT fans please chill].
  • Remember, there is also a difference between _Linux_ being secure and someone being able to configure Linux to be secure. No matter how much more secure Linux or NT is than the other, both operating systems must be set up correctly. That includes applying updates. I personally highly doubt that any Linux distro could receive C2 right out of the box.
  • This page [microsoft.com] has a configuration guide for deploying a system in a C2-evaluated configuration.

    Try 1:

    Microsoft VBScript runtime error '800a000d'

    Type mismatch: 'CInt'

    /security/inc/scripts.txt, line 279


    Try 2:

    The page cannot be displayed

    There is a problem with the page you are trying to reach and it cannot be displayed.


    Please try the following:

    Open the www.microsoft.com home page, and then look for links to the information you want.

    Click the Refresh button, or try again later.

    HTTP 500 - Internal server error
    Internet Information Services

    Technical Information (for support personnel)

    More information:
    Microsoft Support

    Try 3:

    Same as Try 2. Guess I'll install Linux instead. At least RedHat's Web Site is up.

    xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    Hmmmm. This means you can't serve web pages on a C2 NT system?

  • They can make it secure if they pull the Cat5 from the NIC. In the meantime, this is what you get from viewing the page (viewed with Konqueror):

    Microsoft VBScript runtime error '800a000d'

    Type mismatch: 'CInt'

    /security/inc/scripts.txt, line 279

  • An excerpt from the full text: Platforms Included: The evaluated hardware configuration includes the Compaq Professional Workstation 5100, Compaq Professional Workstation 8000, Compaq Proliant 6500 Server, and Compaq Proliant 7000 Server. No other model may be substituted if the setup is to conform to the evaluated C2 configuration.


  • On the subject of OpenBSD, I'll quote Theo from misc@openbsd:

    > You know what C2 means?
    >
    > It means you have ACLs, and you log a number of system events.
    >
    > So ACLs and syslog.
    >
    > Really.
    >
    > Oh, except you also need GOBS AND GOBS OF MONEY to get it certified.
    >
    > In my opinion, ACLs are just a way for system administrators to shoot themselves in the foot.

    I wouldn't look for a rating of this sort in OpenBSD.
  • Totally wrong. Certification is on a per-configuration basis. So you get your specific installation of Linux and put it on a specific type of hardware... the original NT 3.51 was certified on a DEC and a Compaq system. The actual C2 final report even goes as far as listing every part number for components like the floppy, CD-ROM, RAM, and hard drive in those computers.
  • Hi!

    Freshmeat.net's security section lists several projects that are attempting to bring some of the more useful aspects of mandatory (that is, non-discretionary) access control to Linux. RSBAC and LOMAC are two examples. LOMAC (my project) is developing a loadable kernel module that adds a kind of MAC to standard off-the-CDROM Linux kernels. It's specifically designed to be unobtrusive and to avoid causing incompatibilities with existing software. RSBAC provides a richer selection of MAC functionality than LOMAC, but is implemented as a kernel patch rather than an LKM. Folks interested in MAC for Linux might want to take a look at these and other security-related projects on freshmeat.

    - Tim
  • by mertner ( 90928 ) on Saturday December 04, 1999 @07:05AM (#1480429) Homepage
    What would it take to get a version of Linux certified in the same way? Lots of money, or just lots of carefully configured pieces of software? Is it not something...

    While I think the general consensus is that NT's C2 certification is pretty useless (it has to be configured in a way that makes it of even less use than normal), it still puts NT on the scoreboard when compared against Linux.

  • > Looks like trusted XENIX is going to be the highest rated.

    Trusted Xenix (TX) was a TIS (now NAI) product. They haven't sold it in a long time. It was based on Xenix (a Microsoft product, believe it or not). Some of the folks that worked on TX told me that compatibility killed it. Since it took a long time to march through all the heavy-duty software engineering required to get the TCSEC rating, versions of TX tended to lag behind the times when compared to non-trusted UN*Xes in terms of functionality. Since it was seldom capable of running the most popular applications of the day, its sales suffered.

    I dug a copy of the TX distro out of a closet a while back. Someday I'm going to install it just to see what it's like. BTW, ratings are assigned to given OSs on given hardware. TX was rated on some really old Intel stuff - i386's, I think. So it might take some digging through used computer sales to reconstruct a historically accurate TX installation.

    - Tim
  • The article is dated a year ago. From the site:
    Last updated: December 02, 1998
  • But then you have to think... who's going to submit "Linux" for certification? I could do it, right? Not really, I don't think. What would be my benefit? I believe that any C2 evaluated (and accepted-- remember ANY OS can be evaluated... even C64/OS) OS would need to be submitted by a vendor prepared to sell and support that particular flavor... like maybe SuSE 6.3C2 or something. That would work.
  • Same site? Two documents. Oh sorry, maybe I'm raining on the Linux community's movement into the FUD department.
  • Are you referring to the html source? This is readily available . . . Are you talking about the news sources (which I agree might be questionable)? How does that pertain to a web site? As for dissing products that aren't open source, that's what the readers and posters do, slashdot just presents the news tidbits that might be of interest to tech minded people. I have never seen them "dissing" any closed source products in the past.
  • It's probably been thought of before, but why can't someone take a Debian distro, branch it (we love the GPL, don't we :)), and attempt to make a niche "C2-certifiable distribution", exactly as MS has done?

    Start small: kernel patches (maybe even submit them to Linus for integration), and the bare basic tools, all modified for this specific distro.

    If Mandrake can target performance, surely another distro can target security. And if everything's GPLed (as it would have to be, if based on Debian), changes could gradually be integrated back into the main tree. It'd take a while for *that* to happen, but in the meantime the Linux community could point to this distro and claim C2 certificationability (to noun a word :))


    Forgive the incompleteness of this post: I need this browser window, so I'm just posting now :)
  • by Anonymous Coward
    People...

    Please understand the difference between "certification" and "evaluation" before foaming.

    A particular installation of a software product on a machine and its particular physical environment can be certified at a particular level while the software product itself cannot be certified.

    One could take a B2-evaluated "Trusted Information Systems, Inc. Trusted XENIX 3.0" and install it in a particular physical environment and have that installation certified at B2 or whatever level one can afford. The same "Trusted Information Systems, Inc. Trusted XENIX 3.0" can also be installed on a public machine with little yellow stickers on it with logins and passwords, and, while "Trusted Information Systems, Inc. Trusted XENIX 3.0" remains evaluated at the B2 level, the particular installation would probably not receive the same B2 (or any) certification.
  • Security ratings are just like MHz ratings for CPUs: there are way too many parameters to take into account for a single number/certificate to mean a whole lot.

    In the CPU world, MHz ratings make sense only within the same family of processors: a system running a 450MHz i586 can be safely said to be faster than a system running a 333MHz i586. (Even then it's not a reliable measure.) But you cannot compare, say, a 450MHz i586 and a 350MHz RS4000 -- you might say, well, the Intel must be faster since it has a faster clock. But what if the RS4000 can do in 10 instructions what takes 100 instructions on the Intel? If it were a 350MHz RS4000 and a 450MHz RS4000, we'd know the latter is faster, but it's very hard to compare across different processor families.
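
    The arithmetic behind that comparison is simple enough to write down (the 100-vs-10 instruction counts are the hypothetical numbers from the paragraph above):

    ```python
    # Effective throughput is clock rate divided by the instructions
    # needed per unit of work -- so a slower clock can win easily.
    # The instruction counts here are the comment's hypothetical.

    def ops_per_second(mhz: float, instructions_per_op: float) -> float:
        return mhz * 1_000_000 / instructions_per_op

    intel_450 = ops_per_second(450, 100)   # 4.5 million ops/sec
    rs4000_350 = ops_per_second(350, 10)   # 35 million ops/sec
    ```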

    Same thing goes for certification. In the NT world, the system is more or less uniform across all deployments, so a certification for it makes sense (just like comparing MHz ratings for one particular family of processors). But now in the Linux world, there are just way too many different configurations. Treat each distro as the equivalent of "processor family" if you will, to draw the analogy. What does a rating on, say, a typical RH6 installation mean to other Linux distros, or even other versions of RedHat? Two different Linux installations can be so different that a certification only makes sense if you stick with one particular distro, one particular release of that distro, and even the same configuration used in the certification process.

    What I'm trying to say is, certification is useful only when you're comparing static, non-changing systems. The term "Linux" encompasses too much -- it makes no sense to "certify Linux" and think that the certification gives an accurate picture of security on Linux.

    And then, you have the human factor to account for. Everybody knows that the most "secure" system can be the most vulnerable if the sysadmin doesn't know what he's doing. In a way, security certifications like this should be taken with a grain of salt -- just because NT, or even Linux, is "certified to be secure", doesn't mean that you can now just go to the store, buy a copy of the system, install it, and you automagically gain the same security as the certification says. Absolutely not -- you must hire a competent system administrator before your system has any degree of reliable security. Doesn't matter if your NT or Linux box is "certified" to be C2 or C3 or C-whatever, all that guarantees nothing unless you have the right person behind the machine.

    "There is no such thing as out-of-the-box security. If it's out-of-the-box, it's not secure."

  • by Hawke ( 1719 ) <kilpatds@oppositelock.org> on Saturday December 04, 1999 @08:07AM (#1480441) Homepage Journal
    Um, B-rated OSes require MAC capability. I do not believe OpenBSD has that. At the B level, it's not just an administration thing. The MAC component really makes the systems unusable for normal work.

    MAC == Mandatory Access Control. Basically, the OS supplies some rules about resource access that trump the rules provided by permissions. Think of tagging processes with a label like "Secret". A process running at Secret can open Secret, Classified, and Unclassified files, but everything it writes is always tagged Secret. It can't read TopSecret files or write to them.
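
    What's described here is roughly the Bell-LaPadula "read down" rule, with output pinned to the writer's label. A toy sketch (the level names and ordering are invented for illustration):

    ```python
    # Toy model of the MAC rule described above: a process may read
    # objects at or below its own label, and everything it writes is
    # tagged with its own label. Level names invented for illustration.

    LEVELS = {"Unclassified": 0, "Classified": 1, "Secret": 2, "TopSecret": 3}

    def can_read(process_label: str, object_label: str) -> bool:
        """Reads are allowed only at or below the process's level ("read down")."""
        return LEVELS[process_label] >= LEVELS[object_label]

    def output_label(process_label: str) -> str:
        """Whatever the process writes is tagged at the process's own level."""
        return process_label

    readable_by_secret = [lvl for lvl in LEVELS if can_read("Secret", lvl)]
    ```

    A real MAC implementation also has to mediate every other resource (IPC, devices, the network), which is part of why it changes the feel of the whole system.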

    By the time you add in control of covert channels, you have to jump through some really weird hoops to get a B rating.

    C-2 rated systems require a Secure Attention Key (basically some way to guarantee you have a real login screen, and not a fake one; Ctrl-Alt-Delete in NT), which I don't think the open-source unixen have yet. Other than that we're in good shape.

    Solaris has a B-2 rated OS (Trusted Solaris) and a C-2 rated OS if I recall correctly. C-2 mode on a Solaris box turns on a lot of auditing, turns off Stop-A, and does a few other things I forgot.

  • Perhaps the testers found some sort of minor bug in the source code that could only be corrected by the addition of a new key (or that could be fixed *most easily* by the addition of a new key)

    IMHO, the words `easily fixed' and `secure' don't mix too well. If it was such a minor bug, why would MS potentially compromise NT's security? It would be logical to assume that the designers of a secure OS shouldn't take the easy path when it comes to bugs that put security on the line.

    And even if you believe differently, you can change the second damn key.

    Given time (and more OS design knowledge) I could also rewrite NT's kernel. Using your logic: why bother with bug fixes? A user can patch those damn buffer overruns himself. Why preset permissions on critical files/folders on any UNIX? A user will do it.

  • It's down all the time! WoW! I think I hit it! This must be Intel delivering its last promise to MS in making NT secure before it defects to Linux!

    That's amazing. Does this mean the big red brick holding the backdoor open is also C2 compliant?

    it's funny. laugh

  • Actually, AFAIK C2-level security _requires_ that the system not be connected to a network or have a floppy. Of course, I could be completely off my rocker and be wrong on this one, so please correct me if you _know_ otherwise :-)
  • Every SysAdmin worth his/her salt and most of the rest of us know you're right. However, who are these certifications for? The people controlling the purse strings. They don't realize the implications of a C2 rating and they also don't realize it is a particularly configured install on a few limited machines. They just know they can get NT certified.

    The entire certification process just becomes a tool to spread FUD. Fear that anything that doesn't carry certification will be broken into. Uncertainty that anything else could be better. Doubt that they could be wrong.

    Given that, getting a particular distro certified would do the Linux community good. It doesn't matter that the kernel will become out of date. It doesn't even matter that it might be a stripped-down distro that can't do everything. What would matter is that one could say "Linux is C2 certified".

    Perhaps VA could partner with one of the commercial distros to create a system that could be certified.

  • C2 certification requires (among lots of other things, of course!) the ability to assign file access rights at the granularity of a single user. In other words, access control lists (ACLs).

    NT has this, but the standard linux filesystem (ext2) does not. This is certainly the biggest problem with a "standard" linux setup going for C2 certification. Without a different filesystem, it doesn't stand a chance.

    In fact, from the standpoint of real security, and not just meeting some somewhat meaningless security label, the lack of ACLs in Linux is a real problem. Is this going to be fixed in ext3? Does anyone know?
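
    The granularity difference is easy to see in a sketch. A hypothetical per-user ACL check (all usernames and rights here are invented for illustration), versus Unix mode bits, which can only distinguish owner, group, and other:

    ```python
    # Sketch of the difference: an ACL names individual users, while
    # classic Unix mode bits cannot say "alice may write but bob may
    # only read" without inventing a group per combination.
    # All names and rights are made up for illustration.

    def acl_allows(acl, user, right):
        """acl maps usernames to sets of rights, e.g. {"alice": {"read"}}."""
        return right in acl.get(user, set())

    report_acl = {"alice": {"read", "write"}, "bob": {"read"}}
    ```

    This is only the DAC piece of C2, of course; the certification also covers auditing, authentication, and object reuse.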
  • http://www.openbsd.org

    This is for those who can't wait until Red Hat and the rest bother to come up with something that is reasonably secure out of the box.

    OpenBSD attempts to do just that.

    Of course, nothing beats an informed administrator, but it can't hurt to start with a base that's designed to be securable.

  • Sure maybe not now, but how long will it take one of the many hackers/programmers in the OpenSource family to find a way through "yet another stupid MS security scheme"?

    Last thing I read on an MS security issue, they were XOR'ing passwords with the stream susageP, Pegasus spelt backwards! (for WinCE logons). ;)
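
    If that description is accurate, the weakness is easy to demonstrate: XOR with a fixed, known string is self-inverse, so anyone who knows the key string recovers the plaintext instantly. A sketch (the sample password is made up):

    ```python
    # XOR with a fixed key is self-inverse: applying the same key to
    # the "encrypted" bytes yields the plaintext again. The key below
    # is the string the parent comment attributes to the WinCE scheme.

    def xor_with_key(data: bytes, key: bytes) -> bytes:
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    KEY = b"susageP"
    scrambled = xor_with_key(b"hunter2", KEY)   # looks garbled...
    recovered = xor_with_key(scrambled, KEY)    # ...but reverses trivially
    ```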

    Can they crack Linux or BSD filesystems that are encrypted with the REAL encryption available for free? TripleDES, Blowfish, etc? At least we don't put blind faith into a corp that continuously lets us down after stating how wonderful, stable and secure their crap is.

    Linux is stable, can be secured network wise far better than any MS "OS" and can be secured to the extreme as far as filesystem goes.

    Let's not even mention OpenBSD, or it would REALLY start looking embarrassing for Mega$haft.

  • I know a bit about the Certification and Accreditation process, seeing as I've had to oversee the process for two systems I work with daily and have started another for a FreeBSD-based system I'm developing on my own time.

    Despite the hoopla being tossed around about this operating system or that operating system having a C-[1|2|3|4] rating, this is the straightest possible poop: you can't get a C-* rating for an operating system. A C&A package accredits a complete system, i.e. a specific OS version and its accompanying system tools, a specific hardware setup, a specific set of included applications/services, and a very specific method of connecting that system to other systems, including networks. Part of the process is to have the comm-weenies run a cracking tool against the system after the system specification has gelled. Without each and every part of the system accounted for, the package won't go through. After the process is completed, if you change anything in the system that is not accounted for in the original C&A, you've got to go through the process again, albeit in an abbreviated manner. You may not think that's so much, but that includes simple things like adding RAM, installing a larger hard drive, or installing a faster network card... all of which require a C&A revision. And if you want to update parts of the OS, just open a vein.

    Believe me, ladies and gentlemen, the C&A process is a PITA, mostly because you have to submit it through people who have nearly no idea what you're talking about and you have to either dumb it down for them or bring them up to speed.

    Now, there is one useful thing about the C&A process: you can shamelessly plagiarize another C&A package. If the system you're C&A'ing uses UNIX, you can rip off a lot of information from a similar system.

    Now, the problem is that even though you have a lot of smart cookies working on these C&A packages, the truly clueful in the mysteries of system security are kinda few and far between. This is in no small part because a clueful person can make 3 to 10 times his base military pay in starting salary in the civilian world. So you've got the nearly-clued writing up these packages for approval by the not-very-clued, a process which takes a couple months, and in the meantime the script-kiddies have written enough to make the process meaningless. And, when you get down to it, C-2 isn't all that hot. The ideal UN*X security model qualifies for at least C-3... and you've just got to find an implementation that doesn't have enough holes in it to invalidate that.

    You really have to check into the specifics of this NT system that's getting the C-2 certification. If I remember correctly, the last NT system MS was harping about with a C-2 rating was accredited only as long as it wasn't hooked up to a network. It quite possibly was the world's most secure Solitaire machine.
  • C2 evaluation is a security level that only applies to machines when they are standalone. This is nothing exciting or new.
    http://www.govexec.com/dailyfed/1299/120699j1.htm [govexec.com] explains the rating;
    C2 products have demonstrated they can:
    • Identify and authenticate system users
    • Limit data access to only approved users
    • Audit system and user actions
    • Prevent access to files that have been deleted by others

    C2 certification only applies to stand-alone, non-networked machines.


    Ooooh. An NT server/PDC is C2 certified. As long as it's not functioning in a network. Woo-hoo. Hear my excitement. The sad part is the powerful spin this has been given.
  • Auditing _is_ implemented in NT, so you can audit such events as a user trying/failing to log on, accessing a file, deleting a file, accessing/deleting registry keys, etc...
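
    The flavor of auditing being described, recording who did what and whether it succeeded so it can be reviewed later, can be sketched in a few lines (the users and event names are invented for illustration):

    ```python
    # Minimal sketch of an audit trail of the kind described above:
    # each security-relevant event is recorded with actor, action, and
    # outcome, and the log can be queried afterwards.
    # Users and event names are made up for illustration.

    audit_log = []

    def audit(user, action, success):
        audit_log.append({"user": user, "action": action, "success": success})

    audit("alice", "logon", True)
    audit("mallory", "logon", False)
    audit("alice", "delete_file", True)

    failed_logons = [e for e in audit_log
                     if e["action"] == "logon" and not e["success"]]
    ```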
  • No POSIX, eh? I can understand most of the mods, but to me it seems like the machine pretty much becomes a dumb terminal after all of this.

    You've got to remember that POSIX isn't that useful under WinNT. Only the most basic POSIX standard is implemented (1003.1, I think), and any program that uses POSIX can't use any other subsystem (Win32, OS/2, etc). In other words, POSIX isn't very useful under WinNT and is mainly limited to command line programs. No networking, either. Disabling it won't result in a large loss of functionality to the OS.

    It is possible to extend the POSIX subsystem by getting a package like OpenNT, but I doubt many people do that....

  • Remove the NetBIOS Interface service
    and
    Enable NetBT to open TCP and UDP ports exclusively

    Remember : NetBT == NetBIOS over TCP/IP

    Am I missing something, or do we have a contradiction here? (Disabling NetBIOS, then configuring it for TCP/IP...)
  • by Anonymous Coward

    Try this link [ncsc.mil], Sammy, and learn. Class D is basically no certification, and Class C1 is slightly less secure than C2.

    Man, this from 5 minutes of searching. People should know better.

  • >The article is dated a year ago. From the site:
    >Last updated: December 02, 1998

    That's exactly what I noticed (were the others who went there using IE?) with Netscape on Linux.

    I have been told that some companies have set their clocks back one year just to counter the effects of the Y2K bug. This could mean that some sections of Microsoft's web site might have decided to set their year as 1998 (since it's not such a critical dept) and go on. I did see the 1998 as well; wonder why others don't see it.
    --
  • by Anonymous Coward
    You might want to do a little of your own fact checking before espousing someone else's opinions as truth. C2 ain't the lowest.

    Jeez people! How about a little self-thought?
  • by noop ( 72121 ) on Saturday December 04, 1999 @08:45AM (#1480462)
    http://www.radium.ncsc.mil/tpep/epl/epl-by-class.html
    is a list of what's ranked as what.
    Looks like Trusted XENIX is going to be the highest rated.

    I'm not sure the SAK is required; OpenVMS 6.0 and 6.1 are listed as C2, and the listing doesn't mention anything about a secure logon key sequence (it does for NT).

    You know, since they don't rate at that level anymore, Linux could just claim a C1 rating, and most people would assume that it's one better than NT.
    hehehe
  • Do these security evaluations take into account things like buffer overflows? I know when we were getting DG/UX rated for B2, we weren't really on the look-out for that sort of thing. There could have been some strcpy's in the C library that we didn't catch...
  • by Anonymous Coward
    Trusted DG/UX has been rated B2, which is pretty good. I'd bet a few bucks OpenBSD could achieve a similar rating.


    You really should learn more about certification before you start foaming at the mouth like that. OpenBSD can't even achieve C2, since it lacks ACLs.

    For B2 security, you have to majorly rethink system architecture. It requires MACs (mandatory access controls--I doubt you've ever used a system that has these, because if you had, they're sufficiently different that you'd never mistake OpenBSD for being remotely capable of them). What you're left with after you implement MACs is nothing like traditional Unix in terms of feel and semantics.

    Remember, Unix was never designed to be secure. Neither was TCP/IP. It shows.

    (not intended as a slam against either. I'd much rather use Unix, any flavor, than Trusted Unix.)
  • Disclaimer : I have not read the C2 spec and have no plan to waste any time on an obsolete certification.

However, from what I know of the whole C2 thing, it doesn't mean much. You can have a remote root exploit wide open (buffer overflow, etc.) and still be certifiable. Basically, the C2 cert involves permission management, audit and accountability of the OS/app being certified. Else, how could they keep up with the onslaught of vulnerabilities being found every day?

Thus, trusting an OS on the basis of C2 certifiability is pretty pointless. So would be having Linux certified.

Security wizards are welcome to correct me.
  • Wow, is this the first MS story on slashdot that's of actual reading value that isn't showing MS in a negative light? I'm completely surprised. Maybe things are changing for the better here.
  • I think everyone's just anticipating the way MS might use this result for anti-linux FUD. They've got the dollars to shout loud enough to seriously mislead people about what the rating means. If the playing field was level I'd agree with you.
  • Linux 2.[123] have a SAK (SysRq-k)

    --------
    "I already have all the latest software."
  • by Chris Johnson ( 580 ) on Saturday December 04, 1999 @10:14AM (#1480475) Homepage Journal
    I'm picturing a checkbox labelled "Allow untrusted users to plant Trojan horse programs" :) of course, it defaults to off except for when you set Office to 'Active Content' :)
  • Encrypted File Systems don't matter for certification - the evaluated configurations assume you have physical control over the machine. If the operating system makes sure only authorized people can have access to files, that's enough (though the definitions of "authorized" and "access" are more stringent in B1 than C2.)


    In reality, losing physical access to your machine is a realistic problem, especially for laptops, so encrypted file systems are a good thing, if you've got enough horsepower to run things using them. Back in the early 80s, that wasn't realistic; today it is. Part of the problem is that the choice of algorithms back then was DES (way too slow for software on 1-MIPS machines), or NSA-developed secret algorithms (again, generally done in hardware), or algorithms not developed at the NSA (politically unlikely to get approved, and until the 90s generally either too weak or too slow or both.) But using hardware crypto means you're not using a general-purpose machine, so that's unlikely to be useful. For non-multilevel-secure OS's, I suppose you could put hardware on a disk controller, but most applications that could justify that kind of expense would need to run multi-level security, so why bother.
  • Mostly they are for the US Gov and people who contract for them. The US gov requires different levels of security depending on what you are doing.

If someone wanted to certify Linux, I think the best approach would be for a hardware vendor to make a custom distro that was closely controlled, and certify it on known hardware.

(Yes, you can closely control it; just say "if you want to be C2, get it from us, and use only our hardware".)

    Someone will probably do this at some point. As soon as they have a good reason to do so.
  • They don't seem to say that NT is 'Certified C2'. I believe only individual installations can be certified, and there is no reason any other good OS cannot do the same.

Note they talk about how this is a guide as to how to configure NT in a way that is C2 eligible, not C2 certified. C2 certification covers many other things outside the OS (building security, etc.). So simply put, they are just telling you what configuration you should put NT into if you are trying to make it part of a system you want to get C2 certified.
  • True... but has anyone outlined the steps needed for a distro from its "default" install to make it C2?
  • Ever notice that almost no corporation ever links to another site?
  • This is true but I don't think that this is because Solaris is unable to attain this level of certification. Solaris 2.6 has attained ITSEC certification of E3/F-C2 effective since December 1998 and Trusted Solaris 2.5 has attained E3/F-B1 effective since November 1995.

    http://www.itsec.gov.uk/
Actually, Solaris has not obtained TCSEC approval as stated by Anonymous Coward with reference to the listed URL http://www.radium.ncsc.mil/tpep/epl/epl-by-vendor.html, but does have a similar equivalency.
  • I'm not sure if this prevents you from reading a file name, though.

It doesn't. (Although this can be restricted via file permissions.)
Sadly, it's already been broken (IIRC). Search the coderpunks archive for details. It doesn't necessarily mean that someone can read your files, but there are some problems with how keys are stored which make it much easier for someone to compromise the security of the keys.

    It's not broken -- more a case of if you're really stupid (don't do what you're told) then it can be compromised. See this link [microsoft.com].
  • I'm not saying that NT is secure, I'm saying that it hasn't been intentionally weakened. Jesus Christ, you're like one of those idiots that *knows* that DES has a huge trapdoor in it, just because the NSA was involved in its development.

    Look, Microsoft has never been a good company for security. They have *always* cut corners and taken the easy way out when it comes to security. But that doesn't mean that they intentionally put holes in the operating system.

    Here's what I think happened:

    The folks over at the NCSC told Microsoft that having one key for verification is definitely not a good thing-- what if the key is compromised? They suggest that Microsoft add more keys, or (even better) make the keys user-configurable.

    Now, Microsoft doesn't want just *any* company to be able to sign system-level software, so they decide that user-configurable keys are not the answer. The only other way to do it is to hard-code another key into the OS. The programmer assigned the task is told that the key is there for the benefit of the NSA evaluators, so he (somewhat logically) calls the keypair NSAKEY.

    Also: I firmly believe in applying bug fixes. If you believe that NSAKEY compromises your security, then I encourage you to change it with any of the programs out there that will do so. But if Microsoft only put the key in for the sake of appeasing a bunch of testers, then they really don't have much reason to delete the key-- especially if the irrational people that are absolutely terrified of it wind up changing it anyway.
Want to know the most secure OSs ever? Here is a small example of some of them (quoted from www.wangfed.com, emphasis mine):

    Through the next two decades, we developed and maintained a series of high assurance Trusted Computer Systems, each of which received the first-ever National Computer Security Center (NCSC) high assurance at its given Class (as defined by the National Security Agency (NSA) Trusted Computer Security Evaluation Criteria, or TCSEC). In 1984 the Honeywell Secure Communications Processor (SCOMP) received the first-ever NCSC A1 evaluation, and was followed in 1985 by the MULTICS at B2, and in 1992 with the XTS-200 at B3. The XTS-300 received the first-ever Ratings and Maintenance Program (RAMP) evaluation at B3 in 1995, with several subsequent generations in the evolving XTS-300 product family following suit. The latest B3 RAMP of the XTS-300 (running secure trusted operating program [STOP] operating system Version 4.4.2) was completed in 1998.


    Found on www.wangfed.com [wang.com].
  • As Anonymous Coward suggests, the Posix services let government buyers check off Posix compliance when deciding to buy a product. To some extent, that's the same reason C2 security is important. On the other hand, it's probably easier to get the auditors to give you a waiver on Posix than on C2.
  • I'm using IE 4.something (at work) and I consistently see 1998 as the date.
  • Of course, the numbers [attrition.org] tell a different story...
  • by billstewart ( 78916 ) on Saturday December 04, 1999 @11:13AM (#1480497) Journal
    A well-designed MAC system doesn't interfere with normal work, as long as your normal work doesn't involve kernel hacking or developing trusted applications, or developing networking applications beyond a limited scope. But basic user-level stuff can be very normal.


    MAC systems actually made doing system security much easier. You put the operating system files at Security Level 0, and make all the users live at Level 1 or higher (e.g. UNCLASSIFIED), and the no-write-down MAC enforcement means that users can't mess with any critical files, and can't mess with kernel-written logfiles. Other log files can go at System High (if you're not running with stricter No-Write-Up rules) so user-level processes can write to them, but can't read them, or just use a separate security compartment to put them in.


    AT&T System V/MLS accomplished most of this by munging the Group ID mechanisms to carry MAC information, both for security levels (UNCLASS, SECRET, etc) and for security compartments (PROJECT X, NUKES, CIA, COMSEC, etc.) This was back in the 80s, and it was the first Unix system to be B1-rated.


    What about Superuser? Some B1 systems kept it, and just did a lot of work to limit bugs and damage, while some split it up into multiple less-super users. AT&T System V/MLS kept it. The B2 Least Privilege requirements make it much more difficult to avoid ripping root apart; I don't know what current B2 systems do. Covert Channels are the nasty part of B2 ratings - it was hard enough to hide subtle timing channels and things like that back when machines were much slower - now there's enough horsepower to play even more games, and I'm not convinced a general-purpose machine can do a good job of blocking them.


    Secure Attention Key wasn't originally a C2 requirement; it was either B2 or B3, but it's easy enough to implement and solves enough other problems that everybody does it.


    Secure Networking was still hairy research back when I was working with this stuff. The problem is that a network really just sends bits back and forth, and you have to be able to use those bits in a way that you can prove who's on the other end of the wire, what they're authorized to do, and that nobody else is doing something unauthorized with the bits you're sending. It's an obvious job for crypto, but that wasn't very usable back then except for DES chips and NSA secret custom stuff. The main technologies people were developing at the time were IPsec-like encrypted ethernets, usually with DES hardware on the ethernet cards, where the crypto primarily provided authentication. Putting crypto on the cards means the security features don't depend on the operating system - this means you can run a multi-level network with single-level dumb MSDOS machines, and worry about how to integrate multi-level OS's separately. (The crude way to integrate a multi-level OS into this system is to use multiple Ethernet boards, one per security level, and use OS protections to limit which boards get to be which security level.) But it's still a hard problem - TCP/IP living in the kernel is much harder to secure than UUCP living in user space.
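The no-read-up/no-write-down MAC rules described above (the Bell-LaPadula model) can be sketched in a few lines. This is a toy illustration only, with hypothetical label names, not code from any of the systems mentioned:

```python
# Minimal sketch of Bell-LaPadula-style MAC checks. Labels and their
# ordering are hypothetical; real systems also have compartments.
UNCLASSIFIED, SECRET, TOP_SECRET = 0, 1, 2

def can_read(subject_level, object_level):
    # "No read up": a subject may only read objects at or below its level.
    return subject_level >= object_level

def can_write(subject_level, object_level):
    # "No write down": a subject may only write objects at or above its
    # level, so it cannot leak higher-classified data downward.
    return subject_level <= object_level

# Putting the OS at the lowest level, as described above, means a
# SECRET-level user can read system files but cannot modify them.
assert can_read(SECRET, UNCLASSIFIED)
assert not can_write(SECRET, UNCLASSIFIED)
```

This is also why kernel-written logfiles at the lowest level are tamper-proof from users: every user sits above them, so writes are denied.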

  • All these documents say is that it was evaluated
    at c2 level. Did they pass the evaluation and
    become certified or did I miss something?

  • by pb ( 1020 )
    There are three programs on here that got A1 ratings!

    Isn't that where you need to mathematically prove your program is secure?

    (or can you just bribe them with the steak sauce?)
    ---
    pb Reply or e-mail rather than vaguely moderate [152.7.41.11].
> Now, this to me at least indicates that either
> this news is old, or Microsoft is using outdated
> testing criteria.

If you look at http://afpubs.hq.af.mil/ you'll notice that not all of the government's forms, publications and other official documents are dated 1999... And another thing... Microsoft did not test NT; the government did.

    Please do research before posting such a comment.
You don't see it because you are looking at the index page to the security evaluations, *not* the actual C2 evaluation for NT 4.0. The index page is marked 1998 (the content was probably last modified on that date; the index itself is dynamically generated). The evaluation itself is dated 1999.

    But then hey, why not spread some conspiracy theory instead... I mean it's much more fun isn't it. No need to be accurate where Microsoft are concerned.

  • Read the posts by slashdotters ;) so much for that theory ne? It's either

    "NT sucks durhurhur linux is much more secure but we feel that we are above paying for testing for certification"

    or

    "Durdurdur Linux is more secure, NT sucks, this proves nothing, M$ is stupid"
So this says that, through a finite amount of configuration work, you can get NT to a set amount of security. This doesn't even tell you whether it would be remotely feasible to do this.

    Also, I am curious how NT fulfills the auditing requirements... How the hell can I find out what user deleted a certain file? Perhaps I am just stupid, but I never saw any hidden option to log absolutely everything on the system. (Something like a journalling file system comes to my mind here)
  • But I'm thinking that NT has to be one of the most secure OS'es out there. Seriously....how can someone hack into it when it's down most of the time... hehe

    Steve

  • The article is dated a year ago

    I don't know where you saw that. At the top of the page it clearly says:
    Last Updated: December 02, 1999
    (emphasis added)
  • by Money__ ( 87045 ) on Saturday December 04, 1999 @07:20AM (#1480509)
I've taken the liberty of converting most of the C2SecGuide.doc to HTML and posting it here: http://slashdot.org/comments.pl?sid=9999 [slashdot.org]

    Your comments welcome!

  • Not to admit that I'm an MCP or anything ;) BUT--

    You can configure NT to audit pretty object access (including file delete), but it (obviously) puts such a load on the box that no one in their right mind would do so on an ongoing basis.

    To see for yourself, on an NT server:

    Start -> Programs -> Administrative Tools -> User Manager for Domains -> Policies -> Audit
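Once object-access auditing is enabled as above, answering "who deleted that file" means filtering the resulting security log. A toy sketch of that filtering step — the event records here are hypothetical stand-ins, since the real NT security log is read through the Win32 Event Log API, not a list of dicts:

```python
# Toy sketch: filtering audit events for deletions of a given file.
# The record format below is invented for illustration.
events = [
    {"user": "alice", "object": r"C:\data\a.txt", "access": "DELETE"},
    {"user": "bob",   "object": r"C:\data\b.txt", "access": "READ"},
]

def who_deleted(path, log):
    # Return every user recorded as deleting the given object.
    return [e["user"] for e in log
            if e["object"] == path and e["access"] == "DELETE"]

assert who_deleted(r"C:\data\a.txt", events) == ["alice"]
```

The catch, as the parent post notes, is volume: auditing every object access generates so many records that the filtering is cheap but the logging itself is what kills the box.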
  • Wrong.

    The article discusses post-SP6/SP6a hotfixes that were released recently. SP6 itself was released in November 1999.

    Don't you have something better to do than bitch about the relevancy of /. articles? Especially when you are wrong?
LOL! Yeah, think about it: NT running on Coppermine would be immune to hacks -- it's down all the time! WoW! I think I hit it! This must be Intel delivering its last promise to MS in making NT secure before it defects to Linux!

    (Standard disclaimer: my sense of humor is different from yours. Moderators take note. :-D )

> Sure, Linux "could", Solaris "could", BSD "could"...NT did. One more case study where NT is better than the competition that will not be believed by the Linux zealots.

Oh, we'll believe it all right. We'll also improve Linux to where it could pass said certification.

> I suppose the government is actually paid off by Microsoft and that's really how they got the rating, right?

Right on the money, actually.
To achieve a security rating, you have to submit to all kinds of poking and prodding, paid for by the person or organization applying for the rating.
So, in a manner of speaking, yes, M$ did pay off the gov't.
I believe this is why Linux won't achieve a C2 rating in the near future. No companies have the spare $$$$ and see enough advantage in the certification to get it done.

NT 4, like 3.5 before it, is now certified under C2 'Orange Book' specifications. That means it is certified as a workstation *without any connection to the outside world*. As soon as a NIC card is installed, the C2 certification is meaningless. It really isn't that difficult to secure a computer that isn't connected to a network. In addition to the C2 'Orange Book' certification, there is C2 'Red Book' certification, which covers operating systems in a networked environment. As far as I know, Novell NetWare is the only commercially available operating system that currently has C2 Red Book certification, although I don't know what configuration was used to obtain this certification.
  • Unix Groups are basically ACLs - as long as you've got a quasi-friendly group creation interface, they're even useful ACLs.


    I'll comment more on MAC/B1/B2 issues in a reply to the parent article, but AT&T System V/MLS, the first B1-rated Unix system, felt very much like regular Unix. There were occasional useful things you couldn't do, and hacking the kernel or anything that required setUID was right out (:-), but it was also easier to secure things when you wanted to.


The big issue in those days was what to do about networking - you couldn't just hang TCP/IP off the kernel, but on the other hand it was System V, not BSD, and while we had TCP/IP it hadn't yet taken over the world - you could still do useful networking with uucp, though you had to set it up in a somewhat limited fashion.
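The "groups are basically ACLs" point above can be made concrete with a toy comparison — this is invented illustration, not any real OS API: classic Unix gives a file exactly one group, while an ACL is a list of per-principal entries.

```python
# Toy illustration of group-based access vs. a simple ACL.
def group_allows(user_groups, file_group, group_read_bit):
    # Classic Unix: one group per file; membership in that group plus
    # the group read bit decides access.
    return file_group in user_groups and group_read_bit

def acl_allows(user, user_groups, acl):
    # A simple ACL: a list of (principal, kind, may_read) entries,
    # checked in order. Real ACLs also carry write/execute bits, deny
    # entries, etc.
    for principal, kind, may_read in acl:
        if kind == "user" and principal == user:
            return may_read
        if kind == "group" and principal in user_groups:
            return may_read
    return False

# A per-file group behaves like a single group ACL entry:
assert group_allows({"wheel", "src"}, "src", True)
assert acl_allows("alice", {"src"}, [("src", "group", True)])
```

The practical difference is granularity: with plain groups you need an administrator (or a "quasi-friendly group creation interface", as the parent says) to mint a new group for every distinct set of readers, whereas an ACL lets the file owner list those principals directly.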
Which means, of all the server vendors, only Compaq saw fit to get their kit evaluated.
  • by Anonymous Coward
    What kind of idiots are moderating today? Informative? It's totally wrong! Does any made up fact that's slanted against NT qualify as informative around here?
MSweb obviously made a typo.

The index page is labeled Dec 2, 1998.
The NT4 page is labeled Dec 2, 1999.

    -sh

I can tell you many filenames on the drive :P

It's almost guaranteed to have ntldr, io.sys, pagefile.sys, etc. :P

NTFS is _secure_ but obviously can be mounted on other OSes. Of course, like other people have pointed out, W2K supports NTFS cluster-level encryption.
  • by Anonymous Coward

C2-rated systems require a Secure Attention Key (basically some way to guarantee you have a real login screen, and not a fake one. Ctrl-Alt-Delete in NT) which I don't think the Open Source unixen have yet.

Ctrl-Alt-Del on NT can be masked simply by filtering the keyboard device. See this page [sysinternals.com] for helpful source code.

And with a support program (preferably a service) you can create a new desktop object and mimic the logon screen :-))).

Conclusion: Anybody with the "Install new device" privilege can overcome the SAS. In reality, every Admin has this privilege.
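The attack the post above describes boils down to a trusted-path problem, which a toy sketch (plain Python, not Windows code) makes obvious: without a secure attention key, any ordinary program can paint a pixel-perfect fake login prompt and harvest credentials.

```python
# Toy demonstration of why a trusted path matters. The "typed" input
# is hardcoded for illustration; a real attack would capture live
# keystrokes at a fake full-screen prompt.
def fake_login_screen(capture):
    # Looks identical to the real prompt; actually just steals input.
    user, password = "alice", "hunter2"   # pretend this was typed
    capture.append((user, password))
    # Then bounce the victim to the real prompt so nothing looks wrong.
    return "Invalid password. Please try again."

stolen = []
message = fake_login_screen(stolen)
assert stolen == [("alice", "hunter2")]
```

A SAK defeats this only if the OS guarantees the key sequence always reaches the real logon code — which is exactly the guarantee that the keyboard-filter trick above breaks.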

Nope, not quite. On the page that lists products by hierarchy, it says that they've stopped evaluating stuff at the C1 level. C2 is the lowest security level that they will evaluate at. (Any lower than that and it's probably not worth their time.) -=- SiKnight
  • I'd be interested in knowing what Un*ces in general have various levels of certification (yes, I recognize that it must be as part of a particular configuration). For instance, what levels of security certification have been granted to OpenBSD systems?
  • How about making a contribution to the Linux NTFS driver community in return? ;-)

MS is implementing file system encryption in the next version of NTFS; see http://www.microsoft.com/msj/1198/ntfs/ntfstop.htm. I'm not sure if this prevents you from reading a file name, though.
  • umm.. I meant "pretty much every object access"... You can audit ugly object, too. ;)
As others have pointed out, this is an evaluation rather than a certification. NT is not inherently secure (being based on that stupid DCOM/ActiveX architecture with its remote stack), and you have to munge it up with hotfixes and service packs to even get it to the point where it can be C2 evaluated. Sorry, "ROCK", NT still sucks.
Take the unproven assertion that Microsoft made a deal with the NSA, add a mix of anti-NT bias (gee, it could *never* have made C2 on its own merits!), and poof, we have a conspiracy theory!

    Perhaps you wish to imply that *all* C2 rated OS makers made deals with the NSA? Maybe they all have "backdoors" too, and you just don't know it?

    Anyway I don't see jumping to conclusions as necessarily "insightful."

    -------------
    The following sentence is true.
  • by Seth Scali ( 18018 ) on Saturday December 04, 1999 @07:28AM (#1480539)
    "What would it take for Linux to get a C2 rating?"

    Well, we certainly are doing well in the security arena. Open Source allows us to fix a number of bugs, and to identify trouble spots before they become vulnerabilities. Also, Linux has a hell of a lot of people that will back up its security when properly configured.

    But here's the problem: Define Linux.

    Okay, let's say we want to get Linux certified at the C2 level. Well, that's just fine and dandy. Are we going to just submit the kernel? Or are we going to submit programs (bash, mount, losetup, etc.), too? If so, what versions? Are we going to submit an entire distribution?

    It wouldn't be possible to get a C2 rating for Linux in general. There are too many different distributions, platforms, bugfixes, and updates out there to get a handle on-- the best we can do is rate a particular version (at a particular bugfix level) of a particular distro. So, just because (say) RH 6.0 gets a C2 rating doesn't mean that Slackware 3.6 is less, more, or just as secure.

    Even if we do get a version of Linux (in general) rated (for the sake of argument here, let's go with the idea), what about the next version? Microsoft is gonna have to go through the program again with W2K. Figure that we went from kernel version 2.2.0 to 2.2.13 in a space of less than a year-- and 2.2.14 is due out soon. It would be pointless to try this, because we would wind up constantly having to get it re-tested.

    And let's not even talk about the price of such testing.

    In other words: Forget about government security ratings for Linux. It's too dynamic to be given a static rating. It's also very reliant on the operator (as is NT, but that *seems* less obvious to most people).
  • Did anyone else notice that you needed service pack 6a AND a hotfix? Seems to me this means that before those fixes MS was failing the test.

    I for one had thought that MS had just given up on C2 for NT4, but apparently they were trying for all these years. Wow.

They also never said that it had passed. "Windows NT 4.0 has been evaluated at the C2 level in six different configurations." They never say they got it passed (they do point out that passing would involve evaluation of physical security and administration procedures).

    The TPEP Evaluated products by vendor page [ncsc.mil] only shows NT 3.5. Perhaps it hasn't been updated yet.
Trusted DG/UX has been rated B2, which is pretty good. I'd bet a few bucks OpenBSD could achieve a similar rating. Solaris could probably make C2 at least, but hasn't been rated officially AFAIK. Several *nix vendors make "trusted" versions, like Trusted IRIX and Trusted HP/UX. However, based on my experiences with both of those OSes' 'normal' configurations, I don't think they would do too well... though it is mostly an administration thing.
  • AFAIK, when they went for 3.51 security it was not connected to a network; however this _appears_ to be with the system connected to a network:

    Server operating as primary domain controller
    Server operating as backup domain controller
    Server operating as a member server
    Server operating as a non-member server
    Workstation as a domain member
    Workstation as a non-domain member


    Like the previous poster I'd like to know what it would take for Linux to be submitted for evaluation. With encrypted filesystems it may stand a chance of a better rating....

From the TCSEC FAQ page:

    The Trusted Computer System Evaluation Criteria (TCSEC) is a collection of criteria that was previously used to grade or rate the security offered by a computer system product. No new evaluations are being conducted using the TCSEC although there are some still ongoing at this time. The TCSEC is sometimes referred to as "the Orange Book" because of its orange cover. The current version is dated 1985 (DOD 5200.28-STD, Library No. S225,711) The TCSEC, its interpretations, and guidelines all have different color covers and are sometimes known as the "Rainbow Series" (see TCSEC Criteria Concepts FAQ, Question 4). It is available at

Now, this to me at least indicates that either this news is old, or Microsoft is using outdated testing criteria.

Also, when looking at the complete listing of TCSEC programs that were evaluated and passed, NT4 is not on the list. NT3.51 is, but not NT4. Also, Microsoft never made mention of whether or not it had passed the evaluation, only that it had been evaluated in six different configurations.

  • by Issue9mm ( 97360 ) on Saturday December 04, 1999 @07:38AM (#1480557)
    I don't know that Linux has ever been officially evaluated. It's not on the list.

    Here is the list [ncsc.mil] stating all evaluated programs ever.

    It's interesting to note that Trusted Irix got a B1 rating... hmmmm....
