Microsoft Hack a National Security Threat

Scott Treadwell writes "The Center for Strategic and International Studies (CSIS) stated in a 73-page report that the government and the private sector should be concerned about the 'trustworthiness' of future Microsoft products. This comes in the aftermath of the October hack of Microsoft's network, in which an attacker allegedly gained access to Windows source code. 'With most military and government systems powered by Microsoft software and more generally reliant on [commercial, off-the-shelf systems], this recent development can pose grave national-security-related concerns.'"
  • by Anonymous Coward
    ...it took a hack into the holy temple of Microsoft before Redmond's code was considered a security threat?

    Personally, when MS and the NSA started colluding, that's when I started considering MS products to be a security threat. I could never figure out why the justice dept. would want to shoot itself in the foot by hobbling their (indirect) relationship with Microsoft.

    Anyway, I suppose the hacker(s) could disclose whatever source they have and claim to do it in the interest of National Security(tm). Kinda like Zimmermann publishing a book containing the source for PGP in order to get a grandfathered, post facto export restriction waiver.

    Perhaps this is all some sort of grand facade by Microsoft to get their Win2K source under the many eyeballs that will (hopefully) make MS's thousands of bugs shallow.

    But hey, as the saying goes, "[...] when the going gets weird, the weird turn pro [...]" and MS's shares have taken a huge hit lately.
  • Remember the "NSA key" brouhaha around Microsoft a while back? I'd be equally concerned about MSFT putting stuff in their code on purpose, and not just some group of crackers. Not to mention the inadvertent (or are they? Hmmm.) security bugs that show up every week or so.
  • The problem is that the "bad guys" may be more resourceful than those working for the government. Also, any code review might have been made under the assumption that this source code would not be available to the attackers. Finally, consider this: the source code was lifted from Microsoft's site, and Microsoft presumably knows their own software better than anyone and should be best able to secure it. If their site was compromised, how secure should other sites feel that don't have the "Home OS advantage"?
  • It's so true. I remember when I very first used Linux, after being in the world of Microsoft and Macintosh for years, and I thought it was really odd and backwards that I had to log in and out all of the time to do simple things! But now I can see that there are lots of advantages. Did you know that there are no viruses for Linux at all (except one that was made as an academic exercise just to show it could be done)?

    That is what I like about Linux most of all - its security and stability. It's good when I'm working on my portfolio to not have to fearfully press the 'save as' button all the time. I wish there were more art programs for it, though.

    The beauty of the 'I Love You' virus, AFAIK, was not in the program code itself but in the insightful understanding of human nature it showed. I know when I received one, I opened it (tee hee!) in a fit of curiosity. I really was disappointed though :-(

  • Actually it's more along the lines of what may have been altered as well as seen. There is no real way for them to know if something was altered unless they do a full audit, and I'm not even sure they could do such a thing in a reasonable amount of time due to the sheer size of the beasts they are talking about.

    Also, this code was stolen; it was never open, so the bad guys may know something no one else can find out, because no one else can see it.

    In the long run I don't think either method is inherently more secure. Security by obscurity is known to fail; it has been proven many times. Now, this is not the same as publishing a full exploit, which IMHO is reckless and dangerous since it just leads to increased script kiddie attacks.
  • What surprises me is that they deem a revelation of source code a security risk. That, if anything, shows a lack of faith in OSS.

    As much as it may chagrin me to admit it, Microsoft has some thirty-five thousand people working for it, and while they may not be able to or want to audit their code in an OpenBSD [openbsd.org]-like manner, I am sure they have an entire security department. And I am also pretty sure that they know that security through obscurity doesn't work.

    My point, and I do have one, is that Microsoft does have its stuff together, to a certain extent. W2K and NT4, while not suitable for an Internet server, do well in a Microsoft only Intranet environment. If the government gets scared because of 9x or NTKRNL code being let out, what must they think about things who's code has always been available? Yes, it allows for public contributions and improvements, but it also allows for public analysis, scrutiny, and discovery of bugs.

    Definitely not a Karma whore,
    Mike "My Bucket's Got a Hole in It" Greenberg


  • ...this recent development can pose grave national-security-related concerns

    Umm... Now I'm scared. Hasn't Microsoft always been a security threat? I started this post off as a joke, but then I realized something -- the US government is filing an "Anti-Trust" lawsuit against Microsoft. Now, last time I checked, "Anti-Trust" means that you don't trust them...

    So why are they just now saying that they should be wary of Microsoft products? Strangely, I'm reminded of that ad Microsoft ran in Germany, with pictures of penguins with an elephant's trunk, etc..., saying something to the effect of "What if your penguin becomes something else?" It just seems so fitting when reversed. What if your "super-high-security" Windows server suddenly becomes the carrier of a virus the crackers planted when they "cracked" into Microsoft? What can you do? Nothing! It's closed source; you're left to fear Microsoft. But what about Linux? If it suddenly warps into the same thing, you remove a few lines of code, and go over it to make sure it's secure.

  • As if Microsoft doesn't have at least dozens of full source backups, scattered over several locations, online and offline. The most hackers could modify would be a few online copies. It probably didn't take MS more than an afternoon to check and restore all the source files.

    The article on the CNN site (and the junk "study" they chose to publicize) is nonsense. Since Time-Warner owns CNN and AOL owns TW (or is about to), and AOL is battling Microsoft, it is obvious why CNN would choose to publicize that particular instance of junk science. Some little weasel at CNN is trying to get on the good side of the new bosses.
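    A minimal sketch of what that "afternoon of checking" could look like, assuming nothing about Microsoft's actual procedure (every path and file name below is invented for illustration): compare the live tree against checksums taken from a trusted offline backup, and any tampered file stands out.

    ```shell
    # Sketch: detect tampering by checking a source tree against a trusted manifest.
    # All paths here are throwaway demo files, not anyone's real source tree.
    mkdir -p /tmp/srctree
    echo 'int main(){return 0;}' > /tmp/srctree/main.c
    # manifest of known-good checksums, as it would come from an offline backup
    ( cd /tmp/srctree && sha256sum main.c > /tmp/manifest.sha256 )
    # simulate an attacker's edit to the online copy
    echo '/* tampered */' >> /tmp/srctree/main.c
    # verification now flags the modified file
    ( cd /tmp/srctree && sha256sum -c /tmp/manifest.sha256 ) || echo "tampering detected"
    ```

    The same idea scales to a whole tree with `find ... -exec sha256sum`; the key point is that the manifest must live somewhere the intruder couldn't reach.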

  • by Snowfox ( 34467 ) <`snowfox' `at' `snowfox.net'> on Saturday December 30, 2000 @10:28AM (#1426751) Homepage

    Which Microsoft hack would this be?

    Is this the Windows9x-on-top-of-DOS Microsoft hack?

    Is this the "invent your own language" MS Word Grammar Checker Microsoft hack?

    Or is this the mutex display bit "one program freezes your OS" Microsoft 3.1 and 95 hack?

    Or is this the web-browser-turned-drive-explorer hack?

    Or is this the always-locking-up ftp hack?

    Maybe this is the "some versions of Direct 3D render bitmaps upside down, others don't, depending on which version of the interface you probe" Microsoft hack?

    No, I'll bet it's the unstable "oversized int destroys your registry and requires reinstall" Microsoft hack.

    Nyet. It's got to be the brain dead Outlook stationery format Microsoft hack.

    No wait, I'll bet it's...

  • by Anonymous Coward
    Security-Enhanced Linux [nsa.gov]

    It's about time the government

    • got some software that does what they want, not what M$ wants (or what some hacker in Russia wants)
    • advocated the use of standards (non-MS TCP, non-.doc)
    • reaped some payoff from (arguably) the most successful government program yet (ARPAnet)
    Yes, The People's memory is short. Nixon died a hero, Ollie North has a radio talk show and we still use punch card ballots. Maybe it'll take a M$ breakup AND a hack AND another destroyer blue-screening before they get a clue. It seems that only scorched-earth crises like these work.
  • I'm sure military applications are different, but I doubt that most government agencies have any special relationship with MS. In my experience, government departments buy and use computers in much the same way any business would.
    The point is simple: if the code was lifted, then the code is out there, and the bad guys have it, and the good guys don't. That puts the government, and businesses that rely on MS code for security, at greater danger than they would have been otherwise (not that relying on MS for security was ever a good idea).
  • True. There are specific procedures for creating a document on a classified system and then producing an unclassified version of it. And for the reasons mentioned above (pieces of the document remain in the doc long after they are deleted), no unclass MS Word, Excel, Powerpoint documents are allowed to be created on a class machine (it is a security infraction / violation). Only pure ascii text and binary pictures.

    I don't get this part: how can this be a Cracking (as opposed to hacking as CNN inaccurately refers to it) problem? No secure system is permitted to be on the Internet without proper encryption, which is supposed to secure the information independent of what OS is being used. So no one can get sensitive information if it is handled properly, i.e., according to the rules.

  • Did you know that there are no viruses for Linux at all (except one that was made as an academic exercise just to show it could be done)?

    I think the reason there aren't any viruses for Linux is because 1) Linux is still not as widespread as Microsoft; no reason to make a virus that will attack a minority of users when you can make one that will attack the majority; 2) most Linux users (I'm assuming, could be wrong) will be the type who keep up to date on viruses and the like and therefore would know not to open attachments if they aren't sure what they are.

    "Leave the gun, take the cannoli."
  • by Anonymous Coward
    Are we forgetting that Perl and shell scripts are also "small text file"-based? The problem isn't the "stupid text files"; it's the fact that a user on Windows is running as the equivalent of root. Any program you execute (on purpose or by accident) can have total control of your machine. THAT is the mistake.

    I can run a "small text file" on Linux that could wreck my machine, but I'd have to be running as root. Otherwise, it can only mess up my personal files, and never touch the system. Unfortunately it'll be quite a pain to re-engineer Windows to actually properly implement security, but that's not MY problem :)

    And yes, I am a Linux user and I love it!
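    As a minimal sketch of the asymmetry described above (demo files only; the paths below are invented, and a real system file would be owned by root):

    ```shell
    # Sketch: on Unix, file modes decide what a user's script may touch.
    mkdir -p /tmp/perm_demo
    echo "system data" > /tmp/perm_demo/sysfile
    chmod 444 /tmp/perm_demo/sysfile   # read-only, standing in for a root-owned system file
    ls -l /tmp/perm_demo/sysfile       # mode column shows -r--r--r--
    # An ordinary user's script now gets "Permission denied" trying to write this file,
    # while anything under that user's $HOME remains fair game -- so a malicious
    # "small text file" can trash the user's data but not the system itself.
    ```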
  • All windows development happens in Redmond.

    But... [cisar.org]
  • >>The "ASCII text" restriction isn't exactly an ideal protection.

    That's true, of course! But the government doesn't rely on pure technology to solve the security problem. They also screen and train people so that they can trust them with security.

    There are no ideal protections, but there are 'cleared' and 'uncleared' people.

  • These concerns are certainly valid! Micro$oft products are inferior for mission critical tasks, and most everyone knows it. Granted, M$'s OSs are great for every John Smith home user.

    What I find interesting is the comment about how national security is inherently at risk because of a potential leak of source code. Why is that dangerous? After all, people @ M$ have had access to the code all along. I think they see the danger because the code was potentially gained by malicious activities.

    With that said, what about open source? Wouldn't an OS like Linux be more dangerous because 'hackers' can get the source without effort? NOPE. In fact, I'd like to draw the conclusion that open source OSs like Linux are more secure because everybody can get the code. Security flaws are fewer because of the sheer number of eyes looking at the code.

    Perhaps the Fed should stop dumping tax dollars into M$ (especially since they are suing the f* out of 'em). Think about that... FREE and true security, and some extra $$ to give back to the people who pay the bills.

    --cr@ckwhore
  • by SuiteSisterMary ( 123932 ) <slebrunNO@SPAMgmail.com> on Saturday December 30, 2000 @10:47AM (#1426760) Journal
    We'll consider a default Windows ME install to be very usable, but rather insecure. Now, add a small filesystem layer that encrypts and decrypts everything to and from the hard drive. Replace the usual login password with something that checks an individual's physical traits (such as DNA or maybe fingerprints). Make sure that it's checked as soon as possible. I'd replace the BIOS with whatever checks for the DNA/fingerprint. We'll also assume this workstation isn't physically connected to any other.
    Spoken like somebody who has no idea what computer security is. I'll start with a few of the basics, just to get you started. For more information, in a fairly simple format, find O'Reilly's "Computer Security Basics."
    1. There are no access controls. Great, you're checking DNA. What stops me from walking in and yanking the power? Or buggering off with the box itself for later decrypt?
    2. What sort of user protections are there? You named Windows ME, so we'll use that. No ACLs, no auditing, nothing of the sort. A 'secure' system audits every action (and I mean EVERY action), generally through hard copy, which can't be invisibly altered.
    3. This thing has a floppy drive, and a CD-ROM, so I won't even get into the idea of walking in with a linux boot disk and a parallel port ZIP drive and copying the drive for later perusal and decrypt. See point 1 above.
    4. You failed to mention any sort of backup scheme, as well as disaster avoidance/recovery. That means I can deny the system to you with a flick of the circuit breaker in the basement/closet.
    5. Also, this being Windows ME, anything beyond the most basic of fault tolerance is impossible.
    6. You failed to mention any sort of human protections; DNA/Fingerprints are very easy to get ahold of. I can convince you to put your finger on it, one way or another. Do you have a 'duress' password you can supply, which will trigger a silent alarm, but not tip off the intruder?
    7. Again being commodity hardware, this thing probably isn't TEMPEST shielded.
    8. This being Windows ME, it doesn't support process isolation, etc etc. I can write a two line program, in Visual Basic, that will grind the machine to a smoking halt.
    9. If it's not connected to anything else, in any way, that obviously precludes a network, or the Internet. Suddenly it's not so usable.
    10. This being Windows ME, you have NO way of doing a code audit, and no way of guaranteeing the swift and competent fixing of any bugs.
    11. I won't even get into the inherent stupidity of trying to use WinME for anything, including games. Before you say anything else, please do read up on the subject at hand. Start with the O'Reilly book referenced above, then a few others I can name, "Practical UNIX and Internet Security" being first on the list.
  • Security through obscurity will always haunt Microsoft. Obscurity is obviously the only security they have. That is why they want to shut down Bugtraq. Perpetual critical security updates are their only defence against people who uncover flaws in their operating systems. By the nature of the way they operate, the crackers will always win. Microsoft will not post an update until an exploit is found and used. The damage having already been done, they use a band-aid.
  • Okay, if we believe for a moment that M$ was not entirely truthful in regards to the intruders not having changed anything [or if M$ hasn't noticed the changes as yet], then there is certainly some danger.

    Imagine the intruders, in the weeks they had access to Win2K, dropped a couple of trojans.
    Imagine that any computer running that system in future can be rendered useless / or can be hijacked [chipnapped?] by anyone who has a certain key [like Alt+Tab+F12; Shift+F12; Shift+F12].

    Now imagine that a second trojan is activated by the first one, and the second's job is to collect all passwords/access codes, etc., compile a list and rename the list as a system or dll file, to be stored in an unsuspicious location.

    The rest would then be easy... What if the computer in question belongs to someone working on classified materials, or worse, on bank loan approvals, etc.?

    For once I agree that there is a fair chance of some danger to anybody's security, i.e., every country's national security. More importantly, a danger to the security of our employers...
  • All of a sudden, you have an incredibly secure system, with the same usability (maybe a little slowdown for encryption/decryption, but there are fast, secure algorithms available). So now I've already refuted the "inversely proportional" part.

    No, you haven't. Current security systems are bastardized, ignored, and just plain not implemented well. What you are talking about are cheap, easy ways of increasing our crappy security.

    Pretend for a moment that we take the computer you are currently using, and remove all processor cache from it. Now, someone could say "There are always tradeoffs involved with processor design; if you increase the performance of one thing, you will degrade the performance of another." Your response would be to say "That's nuts... all I need to do is add cache to it and look, it works great." You are correct, but only because the processor is so piss-poorly designed that there were still easy design additions that could be made for big wins.

    However, once you've made all the easy decisions, suddenly the tradeoffs rule comes back in full force. To improve, say, an Athlon, with current technology, would be a difficult undertaking. Because the chip functions as a gestalt, a unified whole, the easy answers don't work. Speed up the FPUs, and do nothing else, and your performance will hardly change at all, because the data can't come in fast enough. The easy gains are gone.

    Security is much the same way. Yeah, we can graft some easy stuff onto our crappy systems nearly for free, but as you approach 100% secure, the ease-of-use goes down the toilet. OK, so the drive's encrypted... maybe only the people who know the password should be allowed to use it. Now the user has to enter a password. What if they leave the partition mounted and leave the machine? Should they be forced to re-enter the password every so often? That cuts into ease-of-use. Are they allowed to conduct big transactions without checking again that they are still the authorized person?

    After the easy gains, easy-to-use and secure are mutually exclusive, because easy-to-use implies that there are fewer steps and checks being made, and that implies there are fewer steps and checks to bypass/fake if you are trying to breach security.

    Of course, the true picture is more complex; this is a simplification. There are other axes in question, like complexity of the security implementation, complexity of the security use, expense, etc. Perhaps we point a camera at the user and try to make sure the face never leaves or changes. But then, perhaps a mask on the face can bypass this, so if we want to prevent that eventuality, perhaps we need to make some other check.

    If you've had physics, it's like pressure, volume, and temperature in a gas. All three are related, and all else being equal, less pressure means more volume. All else being equal, more security means less ease-of-use. The full picture is more complicated, but the rule-of-thumb is still quite true.

    (And to get back on topic, another one of those axes involves the security of the rest of the system. The entire point of the article is that Windows has now been proven to be weak on that axis as well, along with the ones we are so familiar with.)

  • by WebCowboy ( 196209 ) on Saturday December 30, 2000 @10:54AM (#1426783)
    The following is no exaggeration. There are EXACTLY ZERO power plants--nuclear, or otherwise--in North America that run their critical systems on Microsoft products. This is for several reasons:

    1. Microsoft does not make a HARD REAL-TIME OS. For critical systems this is essential, because timing of critical tasks cannot be interrupted by non critical tasks such as switching operator screens or animating cursors and icons. You are more likely to see QNX or something similar in a power plant.

    2. Microsoft waives all responsibility for death, injury or serious financial loss due to bugs in their software--REGARDLESS of its use--in its standard EULA. "No warranty, expressed or implied" and all that crap. Specifically, they state that Windows and its apps are not suitable for critical medical, aerospace and utility applications. So much for paying for "accountability and liability". If your CANDU goes China Syndrome because of a Microsoft BSOD, you can't sue Bill OR his company, because they warned you. Similarly, if a bank loses your money or the government your tax return, they cannot sue Microsoft either. Nobody should depend on Microsoft for accountability--they offer NONE. What they offer is for-a-fee technical support and the fact that they are a relatively old, stable company that can offer those services and periodic upgrades for the foreseeable future.

    3. Microsoft is simply not willing to provide the support that mission critical systems demand. In the typical high-priced, ultra-stable critical systems the source is usually closed, but what you pay for is one-on-one support. If a bug is discovered, the company will send an engineer to look at it and the company will even write a patch to fix your particular problem ASAP. No waiting weeks to months for Service Pack 2 or Hotfix Q286745 or whatever.

    4. The most critical of systems don't even rely on PC technology or commodity hardware at all. Even if all the "critical" PCs crashed, the power plant would not shut down or blow up. It would idle along, all safety systems intact. The operators couldn't adjust any setpoints until the PCs came back online, but the current setpoints would stay in place. Safety and other ultra-critical systems rely on old but dependable technology used in your typical embedded systems. The continent's power systems do not rely on PCs at all. They rely on little $2000 Z80-based PLCs and RTUs, or even electromechanical relays and pneumatic or hydraulic systems that have worked well and are substantially the same as they were in the 1940s and '50s.

    Keeping these points in mind, rest assured that planes won't fall out of the sky, and there will be no blackouts or hospital patients killed due to a Microsoft malfunction.

    OTOH, you could have your web banking account tapped dry or your Prozac prescription exposed because of un-patched security holes in a Microsoft product (or even poorly secured and administered systems of any sort). THOSE systems rely on closed-source, often MS-based commercial software. It's not that closed source is the devil's work--it's that Microsoft cannot and will not support their products in a manner REQUIRED for mission critical systems. THAT is what worries me...
  • Of course, the good old Mac OS has no root level access
    Actually, if you think about it, you'll find that ALL the MacOS has is root level access.... :-)
  • The people in the US Government who need to know if Windows is secure and backdoor-free most surely have access to all the source code they need.

    Then, assuming they do the sane thing and audit it, how does the hack change anything? One of the points I was making was that there is only a security hole if the U.S. government is already being stupid.

  • Hell, I've never gotten a virus in my ~8 years of using Windows either... I did get a deltree once, but that was on purpose. I never use antiviral software either, as it is a useless battle to try to check for every known virus out there. But I digress. As for my opinion on Windows: 95/98 suck ass, and should be burned alive. NT/2000 is an OK system in and of itself, but it's a victim of its own success IMHO. There are simply too many Win2000 boxes that all have the same security holes, and more people know how to take advantage of those holes than on other OSes. I don't think it's even the fact that Windows necessarily has more holes than other OSes; there are just more boxes out there, and more people trying to find the holes. I personally think that the various *nixes are the most secure network OSes, because they are strongly multiuser, which can have its downsides, though, when it comes to usability. But about this whole national security bullshit: why would anyone put important secrets on a computer connected to the Internet? No matter how strong your firewall is, it's a stupid idea.
  • So some sinister nameless hax0r who maybe, maybe not, managed to download a few source code files despite Microsoft's "world-class" internal network security is a threat to our national welfare - but each and every one of the tens of thousands of Microsoft employees with unfettered day-to-day access to that same source code, well, all of them can be trusted implicitly?

    Gee, thinking like that goes a long way toward explaining how Aldrich Ames got away with all he did to subvert the CIA [loyola.edu] (Completely Incompetent A**holes) so successfully for as long as he did.

    Yours WD "untrustworthy" K - WKiernan@concentric.net

  • It just occurred to me how bitching this hack was. Windows' closed, secret source is where much of its security lies; now that it's out, it seems we have discovered a blitzkrieg of sorts: an attack on the company through litigation (anti-trust), through espionage (the hack), and also commercially (Linux). Now, whether Microsoft buckling under its own weight is a good thing remains to be seen; Microsoft supports the economy, and companies must act quickly to swallow up its market share.
    ______
  • by mr_burns ( 13129 ) on Saturday December 30, 2000 @08:55AM (#1426792)
    Somebody once posted or quoted here that running Microsoft OSes on the net was like planting the same strain of corn throughout the entire country, and that a single corn disease could wipe them all out.

    It doesn't matter whether or not some crackers futzed with the 'doze source. I think all of us agree that it's so darned insecure and widespread that even as a checksummed audited binary, it's a national security threat.

    All a foreign nation needs to do to really screw us over is combine the growth mechanism of melissa or ILOVEYOU and the bittersweet tang of back orifice (modified enough to fool the 2 year old virus patterns most people are using), and they've got us by the balls.

    Windows by itself is a threat to national security. Thankfully, we have alternatives whose component schemes have ACLs built in, whose source has been audited for buffer overflows, and which for the most part are free. The applications are there, and free, to replace Office, Explorer and most other things.

    And I know this works in practice, too. Because I've never owned a windows box in my 20+ years of computing, I've been able (combined with some common sense) to avoid getting a single virus, without the aid of virus scanning utilities.
  • One of the more popular FUDs against OSS is that it's not secure because crackers can find all the holes and exploit them.

    With Open Source, at least those who are seriously concerned about the security of the systems they run can do a thorough and targeted audit of the code to satisfy themselves (to some reasonable degree of satisfaction) that their systems are secure.

    With Closed Source, you have to trust the vendor, the disgruntled former employees of the vendor and any cracker who might gain access to the source that there are no exploitable security flaws.
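    As a minimal sketch of one targeted audit step (the file below is invented for illustration, and a serious audit goes far beyond pattern matching, using dedicated tools and manual review): flag the classically unsafe C calls for human inspection.

    ```shell
    # Sketch: grep a C tree for calls with well-known overflow hazards.
    mkdir -p /tmp/audit_demo
    cat > /tmp/audit_demo/input.c <<'EOF'
    #include <string.h>
    void f(char *in) { char buf[8]; strcpy(buf, in); }  /* overflow risk */
    EOF
    # -n prints line numbers so an auditor can jump straight to each hit
    grep -nE 'strcpy|gets|sprintf' /tmp/audit_demo/input.c
    ```

    Crude as it is, this is exactly the kind of pass that source access makes possible and that no closed-source customer can perform.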



    ---

  • by Grant Elliott ( 132633 ) on Saturday December 30, 2000 @09:00AM (#1426796)
    This article seems to contain a few contradictions. It seems a trifle ironic that the US government is willing to admit that most of their machines run Microsoft software, yet they continue to take Microsoft to court. (Bite the hand that feeds you, anyone?) If I recall, one of the recommendations made in the trial was to make Microsoft open-source at least parts of their software. And yet, access to that source code constitutes a security risk. There is a slight contradiction here!

    On another note, if we are ever to convince big-name organizations (i.e. the US government) that Linux is a viable option, we can't exactly agree with the content of this report. If unwittingly revealed source code is bad, what is intentionally released source code? They don't like code that may have been modified by one person, but we want to offer them an alternative in code that has been modified by hundreds of people. Somewhat humorously, the Linux community may have to defend Microsoft on this one.

    By the way, you might want to fix that link.
  • by gotan ( 60103 ) on Saturday December 30, 2000 @09:00AM (#1426797) Homepage
    I mean, what do they expect? They make the proper functioning of the government and the military dependent on the products of one single software giant who won't even let them look at the intrinsic workings of their software (the source), and without planning ahead for what to do if it breaks.

    Now they need a security breach at MS to recognize this is a bad idea, after hundreds of previous security holes didn't open their eyes? And what will all this lead up to? A few papers on how this security breach isn't all that important for national security (and in fact it isn't, regarding all the other gaping security holes in MS products), and that's it.

    The alternatives I see are:
    - look out for alternatives to work with and put them into use at least in some places, so that if the security breaches in one OS forbid its further use, the alternative is ready for use in a foreseeable time (anything less than a year is unrealistic here)
    - engage in the development of the software they use (open source is a good starting point here if you don't want to do it from scratch), so at least they have a little control over security and over when holes will be patched.

    All this is of no use if the people handling critical data aren't minimally trained (it is a bad idea to download software from the net and run it, regardless of the OS you use; if the OS facilitates this (like running applications from mail programs at a mouse click), it only makes things worse).
  • Here's a simple example that'll work on quite a few UNIX systems, as well as Windows-based ones. I'll do it as a shell script, but you can translate it to whatever you'd like (don't run it outside a throwaway directory; it nests directories endlessly until the filesystem or path-length limit gives out):

    while true; do
        mkdir X
        cd X
    done
  • Now, I'll point out that we're not talking about single-user operating systems in a non-networked environment

    Because such a system would be utterly useless for any government (with the possible exception of Sealand).
  • I merely cite the book as a really great basic introduction. And especially in this Internet world, the average user would find that installation quite tedious to use; needing to be present to put a finger on a plate during boot, no backups, no Internet access, etc etc....
  • The article and analysis are about as hostile to open source as they could be. The reasoning is that because someone may have seen Microsoft's source code, its security problems may become understood by crackers, and that puts the security of the system at risk.

    "Whoever stole proprietary secrets at the heart of the ubiquitous Windows program can hack into any PC in the world that uses it and is connected to the Internet," the report states.

    Of course, that's completely bogus and runs counter to decades of experience with computer security. If Windows were so full of holes that people with access to the source code could break in, Windows would be compromised a lot more than it is. Besides, lots of people have access to Windows source code already, and such security holes would be hard to keep under wraps. If Windows wants to become more secure, it needs more public exposure of its source code, not less.

    It is scary to think that people like Hamre and his think tank are considered authorities when it comes to "cybersecurity", as they call it.

  • This sounds like a mistake (probably somewhere down the line of the story). The original probably came from a known security problem with MS Word where you can see revision history in some documents (I don't know enough about it to say if it is always on). There is a feature in Word that will allow it to save revisions. You can later look at the revisions and see the strikethroughs of past editing. There are U.L.s (urban legends) about companies putting humorous stuff in documents, like "This client is a boob", which is later edited out but saved in the revision history.

    The easiest way to make sure a document is revision free is to cut and paste it into a new document. The new document will not contain the revisions.

  • I believe governments are able to purchase source licenses for Microsoft Operating Systems.

    That small US government probably can't afford it though!
  • Yup, they trust Microsoft enough to power state of the art navy vessels, and I can only assume they've fixed the it-stop-dead-now feature of a few years back, and have fixed the last bug that could cause such problems.

    IMHO using Commercial Off The Shelf Software (or for that matter hardware) in a warship is utter stupidity. This stuff was designed for a nice safe office environment. It makes about as much sense as the USN buying a passenger liner and painting USS W H Gates on the side.

    A secret agent running around as a senior Microsoft programmer could cause reams of damage for anybody interested in real power over Windows boxes.

    The most likely concern is that there are "security by obscurity" issues in Windows.
    The sensible solution would be to either write from scratch or use open source (N.B. not an off-the-shelf Linux CD, since you can't buy "Red Hat Warship" or "SuSE Submarine"); either way, people familiar with the specific requirements can put together a suitable system. That is likely to include such things as high availability, highly redundant and damage-tolerant networking, shock- and seawater-resistant hardware (with integral UPS), etc. (Also, should the whole thing fail, manual overrides.)
  • by Anonymous Coward on Saturday December 30, 2000 @09:02AM (#1426817)
    I've just tested that, and it seems to be bullshit. At least it is under Word 2000. I haven't used Word 97 in ages, but I certainly don't remember it there either.

    Maybe they were referring to fast saves (which I always disable)? Fast save only writes document changes. A full save re-writes the entire document to disk. That's been known and documented behavior for years.
    Not knowing or forgetting is incompetence.

    Just tried with fast save enabled. Still doesn't happen.

    There was a thing where people distributed a PDF with sections blacked out. On slow machines the text could (momentarily) still be seen as the black boxes were drawn on a different layer to the text. Even that was incompetence, rather than a real flaw in the app.
  • Any governmental agency, whether American or Canadian (being a Canuck myself, I can't really comment on the Yankees), should make it a serious goal to use *nixes or custom-developed OSes in sensitive operations. There are a couple of reasons.

    1) No corporate entity should have absolute control over the operations, however minimal, of a government. I think most of you would agree that a corporation, whether it is Sun or Microsoft, should not infiltrate a government agency in that manner. As a point, I am aware that the US Military and various agencies use the services of Sun Microsystems. However, my understanding is that Sun is contracted for customized development work, of both OSes and apps (rather than the agencies just running out and buying 50 workstations preinstalled).

    2) It's also my understanding that the original BSD distribution, developed at Berkeley, was contracted by the American government for use in critical systems. If that was the case, then why is a consumer OS like Microsoft Windows seeing such prolific use in government operations? Economic deals with major corporations should not dictate what OS is holding our sensitive information. Again, American or Canadian, that basic point should make you think.

    3) If it was government policy to use a specific *nix, one or many (i.e. OpenBSD, FreeBSD, Linux, whichever was most appropriate for the particular task), then numerous engineers and scientists could be utilized to strengthen weak areas and improve already effective ones. In effect, what would be happening is a re-contribution of code back into the main source trees of each distribution, or flavour. This would amount to an influx of intellect and dollars into this area of computing. (I also think most of you would agree that many of the best and brightest minds in CS and OS development today are working in government agencies; whether or not you know their names, this is the truth.)

    Finally, throughout the computing industry, it is being recognized that computing technology no longer exists only in the realms of research and science. This technology has become critical to the functioning of society in a very practical, day-to-day sense. I recently read an article on Ars Technica about the recognition that fault-tolerant computing is now getting. To this end, the government should seriously re-evaluate its use of consumer OSes. For instance:

    Does NASA buy 50 Aibo robot dogs to launch into space? No

    Do they hire TRW or Boeing to custom build equipment on a contract basis? yes

    So, if these agencies already have a method for contracting the services of companies to design fault-tolerant and secure systems for various military and aerospace operations, why should the database which stores my medical, personal, or credit information be any different? In both cases, the lives of individual citizens are at stake.

    I am certainly not trying to simplify the situation or even offer a blanket solution. I am saying one thing, though: no government should be purchasing and using off-the-shelf, shrink-wrapped software to hold any of our information. Period.

    Flame away if you think I am way off base =)

  • I believe governments are able to purchase source licenses for Microsoft Operating Systems.

    By the time they finished wading through those millions of lines of bloat, the version would be obsolete. With Free operating systems, it is possible to follow the changes as they happen and not have to re-analyse everything from square one.

  • There is that, and of course there's also a huge range of different configurations a user could have, making it more difficult for a virus writer to create a successful hack. Also there are no mail programs on Linux that automatically execute unknown programs.
  • Have you got a link for that, preferably not on microsoft.com?
  • The thing is, though, that there's nothing to stop you fixing it yourself, as you have all the source available. You don't have to wait for the Microsoft bureaucracy to decide it's important enough for them to fix.
  • And I assume all formatting will remain the same. Even with footers, headers, etc.

  • But let's look at it this way. We'll consider a default Window ME install to be very useable, but rather insecure.

    How usable do you think the Windows UI is for driving a tank or an aircraft carrier? Remember, you need to add quite a bit on even for a flight sim...

    Now, add a small filesystem layer that encrypts and decrypts everything to and from the hard drive. Replace the usual login password with something that checks an individual's physical traits (such as DNA or maybe fingerprints). Make sure that it's checked as soon as possible. I'd replace the BIOS with whatever checks for the DNA/fingerprint. We'll also assume this workstation isn't physically connected to any other.


    If it's not connected to anything else, how are you going to install the biodata on it in the first place? Anyway, unless something of the biometrics is used as the encryption key, anyone with physical access to the hardware can get at the data, which is probably useless sitting on the workstation anyway. What do you do when someone else uses that workstation?
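    The encrypt-everything-on-disk idea discussed above can be approximated at the file level. A minimal sketch using openssl (assuming a build that supports `-pbkdf2`; the passphrase and filenames are purely illustrative, and a real layer would sit below the filesystem and take its key from the boot-time biometric check rather than a fixed password):

```shell
# Toy encrypt-on-write / decrypt-on-read: nothing reaches "disk" in the clear.
echo 'secret data' > plain.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in plain.txt -out disk.bin
rm plain.txt                          # at rest, only ciphertext remains
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo -in disk.bin
# prints: secret data
```

    With only physical access and no key, disk.bin is just noise; that is the whole point of keying the layer off the biometric check.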
  • You failed to mention any sort of human protections; DNA/Fingerprints are very easy to get ahold of. I can convince you to put your finger on it, one way or another.

    Especially if the system cannot tell the difference between a living person and one very recently dead. It's not as if this is a difficult "trick", it crops up in many films and TV shows where biometrics are used.
  • During a brief stint at Los Alamos as a researcher I heard this story: The classified portions of an MS Word document were highlighted and cut out so that the document could be sent to individuals without the proper clearance. Unfortunately, because the "Undo" feature works across sessions (the undo information is stored in the saved document) all the uncleared recipients had to do was Edit->Undo to see the classified portions.

    The problem here is people treating .DOC (and for that matter PDF) files as identical to what would be output if they were printed. Black out parts of a paper document (and photocopy it) and there is nothing "underneath"; attempt something similar with certain types of electronic formats and there might well be things "underneath". It's a WYSIWYG problem: you may get more than you see...
  • While your points are valid, it must be asserted that the point of shrink-wrap apps is accountability and liability.

    Would these be the same shrink wrap apps which say in effect "if this breaks then we have no liability, even if we knew it was broken".

    If you sell CanduOS to Candu reactor users and an OS exploit causes a meltdown, then you as the vendor are held responsible.

    No, you say "sorry about the mess, but didn't you read the licence?" and then maybe put "unsuitable for fission cooling systems" in CanduOS v2.

    Free software is good but lacks the liability and accountability that governments and enterprise depends on.

    In fact, it's the only way of getting liability and accountability, because your own experts can examine it and alter it to your organisation's needs.
  • The same goes for Word Perfect. We were once able to get a competitor's bid, kindly sent by the client, as a Word Perfect document in which they kindly deleted all the quoted prices, which we restored by doing "undo"... We got the contract, it was government and we underbid them by $200...

    --

  • LANL has a lot more security problems than that. I did a stint over there, and the security was deplorable. There were lots of people who had computer accounts that shouldn't have (including me). Password security wasn't enforced, shadowed passwords weren't being used. It was laughable. And the physical security wasn't too much better in most places. Most of the medium security places (ie. not the plutonium facility) didn't bother to check laptops and bags and such going in and out.

    But I ramble.

    -Todd

    ---
  • What has kept Microsoft employees from doing the same thing?

    The potential to be hanged for treason? The fact that even the amazing MS marketing department couldn't overcome the negative publicity of MS spying for an enemy power? Since MS is the employer, they know that they bear responsibility for the work-related things their employees do, so they police it. They do not bear (legal) responsibility for what a third party did without their permission.

    As soon as the break-in became known, MS gained plausible deniability. Anything found becomes "obviously the work of that evil foreign hacker".

    I doubt that the fear level before was zero, it's just a lot higher now.

  • Yup, they trust Microsoft enough to power state of the art navy vessels, and I can only assume they've fixed the it-stop-dead-now feature of a few years back, and have fixed the last bug that could cause such problems. Or maybe now the ships just reboot faster. :)

    A secret agent running around as a senior Microsoft programmer could cause reams of damage for anybody interested in real power over Windows boxes -- e.g. any nefarious government, corporation, or super villain.

    It should also be easier to subvert existing programmers, now that they can't retire in a year like they planned due to the stock nosedive.
  • by Dorkman909 ( 170722 ) on Saturday December 30, 2000 @09:39AM (#1426852)
    The government doesn't use Windows, Linux or xBSD for its truly sensitive documents. Instead, the DoD uses Wang's XTS-300, which is tested more extensively than the OpenBSD project and is the highest-security-rated operating system in existence, as seen here [ncsc.mil]. One thing I thought was cool about this system is that you can't tell with 100% certainty how much disk space is free, because users could in theory devise a scheme where they pass messages encoded in changes in availability. For the same reason, if you time a process, some margin is added to the value you would get, which makes message passing take extremely long. The full specs of the Common Criteria, an updated "Orange Book", are here [ncsc.mil].
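    The disk-space worry above is an instance of a storage covert channel; a toy sketch (the channel file path is invented for illustration) of how two processes could signal through nothing but shared resource state:

```shell
# One bit per tick, carried only by a file's existence -- exactly the kind
# of shared state a high-assurance OS deliberately blurs or rate-limits.
CHAN="/tmp/covert_demo.$$"
send_bit() { if [ "$1" = 1 ]; then touch "$CHAN"; else rm -f "$CHAN"; fi; }
recv_bit() { if [ -e "$CHAN" ]; then printf 1; else printf 0; fi; }
for b in 1 0 1 1 0; do
    send_bit "$b"      # sender modulates the resource
    recv_bit           # receiver only observes it
done; echo             # prints: 10110
rm -f "$CHAN"
```

    Free disk space, CPU timing, or cache behavior can carry bits the same way, which is why XTS-300 fuzzes what unprivileged users can observe.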
  • Or, by the same token, you can hire a professional security company to get a license from Microsoft for the component in question and audit it for you; then you're the only one who can use it. This is not illegal and, according to a second-hand account, is actually endorsed by Microsoft, provided you do not distribute the code to anyone (a pretty fierce NDA) and make them aware of the audit. It costs just about as much money as taking an open-sourced component and sending it to professional auditors.
  • Does it really? Do you have any evidence to back this up? And it still doesn't fix the problem for everyone unless Microsoft sees it as a serious problem.
  • Anti-trust has nothing to do with believing what someone says. Besides which, with Linux nothing is as simple as removing a few lines of code. Looking at some code doesn't give you in-depth knowledge of the underpinnings of a program. Linux is a lot more complex than changing a few lines of code and going over it to make it secure.
  • by mpe ( 36238 )
    How are you going to get that assembly code to execute on my box, with sufficient privileges to trap for that kind of behavior?

    Or that even if they did, it would work?
    Can they test every possible binary resulting from compiling an open source system?
  • Whose ass did you drag your conclusions out of? By your logic, Linux is inferior for mission-critical tasks. Wait, which tasks? Well, you didn't specify either. Any networked operating system you pick out of a bin full of them has its good points and bad points, as well as its own fucking list of security flaws. Because an OS has the possibility for more eyes to look at it doesn't mean those eyes actually do, or that they are qualified to make security audits. Great fucking conclusions, man. Oh yeah, security audits take time and often money. This is not a free process. Few people have the benefit of funding to do what is basically charity work, auditing the security of computer operating systems.
  • You missed my point. That thing about Windows ME was just to disprove the "usability is inversely proportional to security" point.

    Using that very simple and easy-to-implement security scheme I mentioned, you increased security several times over what it was before, with almost no loss in usability. By what that poster said, the 400-500% gain in security would have meant a serious loss in usability; obviously not so.

    Now, since I'm in the mood for a little fight:

    Points 4 and 5 have nothing to do with security. They had to do with good computing in general - always have a backup. I will ignore them.

    1. You're right, you could do all those things. But it's all of a sudden a HELLUVA lot harder than a regular Windows ME install. And there's no guarantee you'll be able to decrypt the drive before you die. It could take that long. But only a minor speed loss is incurred(with the proper algorithms).

    2. The ACLs and such you mention are for multi-user systems. Windows ME is not a multi-user system. Sure, you can have different backgrounds and preferences for different users, but that does not a multi-user system make. Since Windows ME is a single-user system, no ACLs are required.

    3. See point 1 above.

    4. Ignored.

    5. Ignored.

    6. Good idea about the "duress" password. Point taken.

    7. We're talking about security vs. usability here. I am saying that with a certain gain in security, you don't lose that same amount of usability. The TEMPEST protection measures, though, shouldn't hamper usability of the operating system, although the size of such a computer might be a hindrance.

    8. You can write a 2 line VB program that will grind my computer to a halt; I can write a 4-5 hundred line Intel assembly program which will completely preclude those particular VB instructions from ever being run.

    9. It's still very usable. Games, word processing, office apps.

    10. You're wrong there. Money buys a lot of things - including respect. Some of my employers pay Microsoft to have a team of engineers on standby, 24 hours a day, with access to Windows source. And Microsoft *HAS* given source to other companies.

    Now, I'm not going to lambast you for purposefully taking my argument as something it wasn't. Before you say anything else, consider for a moment the point I was trying to make, and then refute the point itself - not my example.

    Dave

    Barclay family motto:
    Aut agere aut mori.
    (Either action or death.)
  • the military's view is: if it's been proven commercially, it's cheaper to use off the shelf than to build custom stuff.

    When did the military start being an office? Or when did office workers start becoming soldiers?
    It's effectively saying "those apples make good apple pie, so they should make good orange juice too"...
  • Actually, I thought the title was correct, but I'm not sure if it refers to the hacking of the website or the hacks who write Microsoft software. :)

    --

  • How do you remove this history? And for other possible history for confidential documents. Thanks. :)

  • by www.sorehands.com ( 142825 ) on Saturday December 30, 2000 @09:20AM (#1426883) Homepage
    Is the concern because the source code was distributed?

    Or is it that Microsoft has so little knowledge of security that their own system is compromised?

    Open source with many eyes can enhance security...Closed source that hackers have the source to is a security breach.

  • I gave it a very boring test on Word97... Using simple text, there are no problems.

    I created a doc with the body of "This is confidential information" then the alphabet in lower case, then the same thing, except that it was not confidential information and the alphabet in upper case. I saved it, ran strings on it, and I saw all of my text (and the full name as registered in the product!)

    I then selected the text with the mouse, hit delete, and checked strings.

    All the information was gone... except that Word appeared to have taken the words "This is confidential information" as a title, and kept it in the document. The lower case alphabet was gone though. Of course my name was still there.

    When I closed Word and reopened the second document using the run history, the undo buffer was empty.
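    For anyone who wants to repeat the strings check above without Word handy, the principle can be shown on a fabricated file (contents and filename are invented; real .DOC internals are messier):

```shell
# "Deleted" text that survives inside a binary format is trivially visible
# to strings, which prints any run of 4 or more printable characters.
printf 'VISIBLE TEXT\000\000CONFIDENTIAL LEFTOVER\000' > demo.doc
strings demo.doc
# prints: VISIBLE TEXT
#         CONFIDENTIAL LEFTOVER
```

    This is exactly why running strings over a saved document is a better sanity check than trusting what the word processor shows on screen.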

  • "So the U.S. government trusts every single Microsoft employee with the authority to make changes to the source code?"

    Does the USA also trust the non-USA MS employees and network [microsoft.com]?

  • Either this is humor (possibly satire), or you've never read an MS OS license. Nuclear reactors are one of the things specifically excluded.

    Actually the MS license disclaims just as much responsibility as the GNU license. They just demand a lot of duties from the "purchaser". (That's humor -- they specifically deny that they are selling you anything. Even the license is still theirs, and that's why they can change it even after you've agreed to it.)

    Caution: Now approaching the (technological) singularity.
  • by Anonymous Coward
    This is terribly frightening. It is against all computer security regulations to move an MS document (in fact, any non-ASCII text file) from a system of higher classification to a system of lower classification. There's just too much possibility for a breach of security.
    For instance, if a SECRET powerpoint file exists on a network rated for TOP SECRET processing, it cannot be moved down to a network that is only authorized to process SECRET - even if the data matches classification. Why? It's not just to be a pain in the ass of those who need to get information to different people - what if there is a graphic/textbox behind a "rectangle" shape that matches the background (white, in most cases - not hard)? You've passed that text/graphic without even knowing it.
    I've seen this happen, and it's a nightmare.
    BTW - even if you copy an ASCII file 'down', you need to do it with a special program to ensure that no extra bits are copied off the system...
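    A sketch of what such a "special program" might boil down to at its core: a filter that passes only tab, newline, carriage return and printable ASCII, so stray control bytes and hidden binary get dropped on the way down (filenames invented for illustration):

```shell
# Keep only TAB (011), LF (012), CR (015) and printable ASCII (040-176);
# every other byte is silently discarded on the way "down".
printf 'cleared text\001\002\300 more text\n' > high_side.txt
tr -cd '\11\12\15\40-\176' < high_side.txt > low_side.txt
cat low_side.txt                      # prints: cleared text more text
```

    A real downgrade guard would also log or quarantine what it stripped; this shows only the byte-level filter, and it does nothing against steganography hidden in the printable text itself.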
  • Someone doing a quick code hack in Linux doesn't always fix the problem for everyone else either.
  • The most effective, most used, and most trusted security measures are locks, guards, vigilance, and effort. I don't care if you run triple-encrypted, Extra-Tasty Secure BSD/Linux/WinNT, etc.; if they can get to the computer, you are, for the most part, screwed.

    That's why computers with secret/confidential/top secret data are physically locked up and physically isolated from the Internet. Places like the NSA run on a system where, if an unauthorized person is in the room, flashing lights go off so you don't talk about secret things. You get laid out on the ground and searched if you have a badge that says "I need an escort" and you don't have an escort. These sorts of measures are what keep us safe from a national security standpoint.

    Insiders (i.e. spies) in the Gov't are always going to be able to get to the data, no matter how many retinal/finger/rectal scans you require. Computers are not really a big issue. Sure, stuff like data left over on hard drives after you've "emptied the trash" used to be a problem. But that sort of thing has been covered now. People, as always, are STILL the only big security hole.

    I am speaking from personal experience here. I work for a federal contractor that deals in information that requires a clearance.

    I think that computer security issues apply much more to the real world than to the military. But the rest of the government, well, that worries me too. And it's still the people that worry me rather than the computers. A smart person with Windows 95 is more secure than a stupid one with the most secure OS.

  • by dasunt ( 249686 ) on Saturday December 30, 2000 @08:36AM (#1426890)
    We Slashdotters had better watch out; I'm told it's pretty easy to get the source to Linux. :)
  • Open source != security!!!!!!!!!!
    Fucking shit, where do you people get this attitude from? Security comes from lots of places, none of which is source code being open for everyone to see. Linux doesn't have an A1 security rating despite being open source; come to think of it, no operating system has an A1. The highest-rated OS is Wang's GS XTS-x line. These aren't open source, yet they have the highest security ratings of anyone. Systems get secure when they resist penetration better than nuns, as well as not allowing trusted users too much freedom inside the system. If your security stops at the door, you're fucked.
  • Especially if the system cannot tell the difference between a living person and one very recently dead.
    Actually, I believe you can check for that by running a small electric current along the scanner surface; living flesh will alter the current differently than dead flesh. But I'm not so sure. :-)
  • by DeafDumbBlind ( 264205 ) on Saturday December 30, 2000 @08:38AM (#1426903)
    No system connected to the outside is 100% secure, be it unix, windows, MACOS, whatever. If it's a national security issue then the machines shouldn't be on the internet.
    Regardless, the biggest security threat is the lack of diligence displayed by users and admins. Far too many people use their name as their password, or use no password at all.

  • I will point out that this conversation is quite civil, and I like it that way. Thanks for that. :-)
    Points 4 and 5 have nothing to do with security. They had to do with good computing in general - always have a backup. I will ignore them.
    I'm sorry, my good man, but that line demonstrates that you actually do not have any idea what 'computer security' really is. Two of the central tenets are data integrity and availability; you're just as fucked if a lightning storm takes out your hard drive as if a hacker does. Now, I'll point out that we're not talking about single-user operating systems in a non-networked environment; that's a contrived example that runs against the Slashdot article. They're specifically talking about using Windows OSes in a multi-user 'secure' environment. I'd really suggest that you find the O'Reilly book I mentioned and read it. We're not really on the same wavelength here; I'm talking about building a moated fortress, while you're locking your car door. :-)
  • Well, it depends on what you mean by "connect it to the Internet". I've controlled Macs across the Internet using ana (the extension and app can be delivered via a simple AppleScript trojan).

    I agree that the lack of a command line makes it more difficult to intuitively hack a Mac than other OSes, but remember:

    a) AppleScript and other OSA implementations can be used as a fine substitute for a command line.
    b) Very soon, OS X will make even that illusion of security go away...

  • by tolldog ( 1571 ) on Saturday December 30, 2000 @08:39AM (#1426911) Homepage Journal
    I find it interesting that they openly accept any software just because it is made by a large "trustworthy" company.
    But since that software may have been compromised by somebody from the outside they are afraid.
    What has kept Microsoft employees from doing the same thing? Or, as some would want us to believe, keep Microsoft from doing anything.
    Any time a company (or a government) uses closed source software, there has to be a level of trust.
  • Comment removed based on user account deletion
    http://www.csis.org/homeland/reports/cyberthreatsandinfosec.pdf [csis.org]

    The Microsoft angle is only one part of the report, which also discusses open-source, mobile computing, distributed computing, and nanotechnology. The specific areas of concern are predictably:
    1. threat of disruption of communication
    2. threat of exploitation of information
    3. threat of manipulation of information
    4. threat of destruction of information or infrastructure

  • by rknop ( 240417 ) on Saturday December 30, 2000 @09:24AM (#1426916) Homepage

    It should read "Microsoft a National Security Threat".

    -Rob

  • here [cnn.com]

    Gang, in the '80s it used to be that software had to go through an IV&V cycle (independent verification and validation) before you could use it for anything critical. Admittedly, this slowed things down, but it had its merits. Even if M$ has the best of intentions and tries the best it can, I don't think I really, really want to bet my ass on their efforts. Do you?

  • There's a simple solution that Linux advocates use to give themselves perfect security: They just chant "security through obscurity is bad" over and over, and then they are magically secure!


    --

  • You mean the people who now trust them would remember their denial? That strains belief. If they trust them now, they've already forgotten the many false statements already made.

    Caution: Now approaching the (technological) singularity.
  • Take a Mac. Connect it to the Internet. Do not take the four explicit steps necessary to render it insecure. Now try to access it remotely. Go ahead. I dare you to try.

    While Windows always has TCP port 139 open even when file & printer sharing is not enabled, nmap can't even identify a Mac, because there are no open ports to get a TCP fingerprint from (assuming you haven't taken the first two of those four steps).

    --

  • But PDFs can be disassembled. If the text is there, then it could be peeled off. More to the point, the black layer could be peeled off. I think all you would need is PageMaker and a Win32 (any version) system.

    I must admit that I haven't tried this, but PDF's were never intended as a method for secrecy.

    Caution: Now approaching the (technological) singularity.
  • For the record, it's not "Undo" that works across sessions, it's the "Track Changes" feature. There have been documented examples (i.e. not just apocryphal) of companies making available documents, such as contracts, which were edited with "Track Changes" enabled, and then sent out without removing the change history, so that simply by enabling change highlighting, details of prior edits to the document could be seen.

    I haven't heard this story related to any classified material, but it's certainly quite feasible. Or, the commercial sector stories may just have been adapted.

  • Does XML count as text? What about hex dumps? There are lots of ASCII files that shouldn't be moved. Then there's the problem of steganography...

    The "ASCII text" restriction isn't exactly an ideal protection.


    Caution: Now approaching the (technological) singularity.
  • Guess this means that script kiddies can nuke countries instead of just other computers now...

    (I know, it's not quite that bad)

    But in all seriousness, this could be pretty bad. Who knows what kind of information is "protected" on windows machines. Who knows who might get their hands on plans for various weapons, etc, or just cause havoc with various databases throughout governments worldwide.

    Maybe they should get some of those copy protected hard drives :)

    Dark Nexus
  • by JimDabell ( 42870 ) on Saturday December 30, 2000 @08:40AM (#1426932) Homepage

    So the U.S. government trusts every single Microsoft employee with the authority to make changes to the source code?

    Whether or not an intruder gained access to the source, the U.S. government would be fools to trust something for sensitive operations without performing a full security audit on the source themselves.

  • The difference is that Microsoft code is only open to people who will abuse it, while Linux code is open to people who, for the most part, will analyze it and make it better and more secure. I think the best thing would be for Microsoft to open the source completely (under a strict license so they can still make money) and benefit from open criticism on the net. I know that this will never happen, but it's an idea.

  • by Anonymous Coward
    Are we forgetting that Perl and Shell scripts are also "small text file"-based.

    ALMOST..

    Perl and shell scripts that are "small text files" can't be arbitrarily executed... you have to chmod +x them first.

    This is a subtle but very important distinction. It provides a mechanism that stops arbitrary code from being executed. (The user needs to save it, then chmod +x it, then execute it.)

    Granted it's not inconceivable that someone could write a MUA that would perform these steps automatically, but (in general) Unix programmers have more brains than this... which leads us to another issue: MS programmers (in general) wouldn't know a security hole if it came up and bit them on the ass (which, if you read NTBugtraq, happens pretty frequently :o)

    The problem ... is the fact that a user on Windows is running as the equivalent of root.

    Nope. The problem is that the system will run arbitrary programs based on the file name. The issue of "everybody is root" is an issue, but a minor one, as Windows is not a multi-user OS.
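  • The chmod distinction above can be seen in a few lines of shell. This is a hypothetical illustration (the file path and script contents are made up): a "small text file" that arrives on a Unix box is inert until the user deliberately marks it executable.

    ```shell
    # Simulate a script arriving as a mail attachment: a plain text file.
    cat > /tmp/attachment.sh <<'EOF'
    #!/bin/sh
    echo "arbitrary code ran"
    EOF

    # Fresh files are created without the execute bit, so direct
    # execution fails with "permission denied".
    /tmp/attachment.sh 2>/dev/null && echo "ran anyway" || echo "blocked: not executable"

    # Only after an explicit chmod +x does the script actually run.
    chmod +x /tmp/attachment.sh
    /tmp/attachment.sh
    ```

    Contrast this with a system that dispatches on the file name's extension, where merely opening the "attachment" is enough to execute it.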
  • http://www.cnn.com/2000/TECH/computing/12/29/csis.microsoft.report.idg/index.html

    If this doesn't work, the problem with the original link was a space after csis. and before microsoft

    Caution: Now approaching the (technological) singularity.
  • Wouldn't knowing the system's underlying code make writing assembly-based hacks a hell of a lot easier?

    How are you going to get that assembly code to execute on my box with sufficient privileges? My server is set up to trap for that kind of behavior, and gaining the level of access required to run your code is (hopefully) very difficult.

    Whereas with closed source, you really don't have a full understanding of how to attack it.

    Don't be so sure. Someone goes looking, and finds a security hole in a closed-source OS. Nobody else has seen it, because nobody else was looking. They write a program to exploit that bug, and distribute it. You now have a security problem.

    The open-source difference is, lots of people are looking at the code, and bugs are more likely to be found. Since the people finding these bugs also depend on the software themselves, they're quite likely to report the bug to someone who can write a patch, or patch it themselves and submit their patch to be reviewed and distributed.

    --

  • Security and ease-of-use are mutually exclusive, and are usually inversely proportional.

    I disagree. In many cases, yes, security can limit usability. For instance, the most secure system is one that has been broken down to its individual molecules and scattered out into space on a hundred million different probes. Of course, at that point, it's not very useful.

    But let's look at it this way. We'll consider a default Windows ME install to be very usable, but rather insecure. Now, add a small filesystem layer that encrypts and decrypts everything to and from the hard drive. Replace the usual login password with something that checks an individual's physical traits (such as DNA or maybe fingerprints). Make sure that it's checked as soon as possible. I'd replace the BIOS with whatever checks for the DNA/fingerprint. We'll also assume this workstation isn't physically connected to any other.

    All of a sudden, you have an incredibly secure system with the same usability (maybe a little slowdown for encryption/decryption, but there are fast, secure algorithms available). So now I've refuted the "inversely proportional" part.

    Now, I've yet to see a security implementation that doesn't hamper usability in some form, but to say that it's impossible is downright moronic. Just because you can't think of a way to do it doesn't mean it's not possible.
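    The encrypt-on-write / decrypt-on-read layer described above can be sketched in a few lines. This is a toy illustration only: the class name and design are made up, and the keystream (SHA-256 in counter mode XORed with the data) stands in for a vetted cipher such as AES, which is what a real filesystem layer would use.

    ```python
    # Toy transparent-encryption file wrapper: callers see plaintext,
    # but the bytes on disk are always ciphertext.
    import hashlib

    def _keystream(key: bytes, length: int) -> bytes:
        """Derive a pseudorandom keystream from the key (illustrative only)."""
        out = bytearray()
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:length])

    class EncryptedFile:
        """Wraps a path; stores ciphertext, hands plaintext back."""
        def __init__(self, path: str, key: bytes):
            self.path, self.key = path, key

        def write(self, plaintext: bytes) -> None:
            ks = _keystream(self.key, len(plaintext))
            with open(self.path, "wb") as f:
                f.write(bytes(a ^ b for a, b in zip(plaintext, ks)))

        def read(self) -> bytes:
            with open(self.path, "rb") as f:
                data = f.read()
            ks = _keystream(self.key, len(data))
            return bytes(a ^ b for a, b in zip(data, ks))

    f = EncryptedFile("/tmp/demo.bin", b"secret-key")
    f.write(b"classified text")
    print(f.read())  # round-trips to the plaintext
    ```

    The point of the sketch: the application above the layer is unchanged (it still just reads and writes), which is why the usability cost is limited to the cipher's runtime overhead.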

    Dave

    Barclay family motto:
    Aut agere aut mori.
    (Either action or death.)
  • by SuiteSisterMary ( 123932 ) <slebrunNO@SPAMgmail.com> on Saturday December 30, 2000 @08:42AM (#1426954) Journal
    It's not that difficult folks; just remember the golden rule:
    Security and ease-of-use are mutually exclusive, and are usually inversely proportional.
    And remember, neither Linux nor BSD, nor any other OS you can probably name, is secure. Security is a) more than just the ability not to be hacked, and b) more than the OS. A truly secure OS doesn't have the concept of root, for example, and requires hardware support for quite a few of its security features. In other words, by definition, any OS you can a) buy off the shelf at the mall or download freely (as opposed to 'a free download'), or b) that runs on 'commodity' hardware, isn't secure. It might be 'secure enough for my purposes,' but that's it.
  • by phinance ( 210768 ) on Saturday December 30, 2000 @08:43AM (#1426955) Homepage
    It's worse than just trying to fight off skilled crackers, etc. During a brief stint at Los Alamos as a researcher I heard this story: The classified portions of an MS Word document were highlighted and cut out so that the document could be sent to individuals without the proper clearance. Unfortunately, because the "Undo" feature works across sessions (the undo information is stored in the saved document) all the uncleared recipients had to do was Edit->Undo to see the classified portions.

    The lab could educate the secretaries and researchers about the "gotchas" of every commercial product they use (and they do try), but people are bound to forget or make mistakes. If they deployed open source software they could inspect and modify the code to make these holes unavailable.

  • by juliao ( 219156 ) on Saturday December 30, 2000 @08:44AM (#1426958) Homepage
    Critical systems, either from a security or from a reliability stand-point, are very different from retail systems.

    You can never be sure of anything unless you check it yourself. Mere "trust" is seldom an option when it comes to mission-critical applications. And while trust is acceptable in commercial systems (if it breaks, let's sue them), it just isn't an option when a breach of trust involves lives or national security.

    That's why I understand that banks use Microsoft products, but I get very scared when aerospace or medical systems even go near Windows...
    -----

  • Did Nostradamus really say that? I thought he predicted the world would end last summer, so I can't see why he'd make any further predictions. But if he did say it, that is really damn funny.
    --
    Obfuscated e-mail addresses won't stop sadistic 12-year-old ACs.

"You'll pay to know what you really think." -- J.R. "Bob" Dobbs

Working...