'Experts' Back To Claiming Open Source Insecure

jacobito was the first of the folks who sent us a report running on Silicon.com regarding security and open source products. It makes the typical claims - that open source is insecure because it is open source. They've also provided the counter-quotes, though, arguing that because it is open source, it's inherently more secure. *sigh* I hate issue re-tread.
  • P.S., We also support UCITA as that will allow us even more security. We won't have to post all those nasty bug reports nor disclose all our 65,000 Windows 2000 bugs, meaning your Microsoft (C)(R)(tm)(sm)(patent pending) Operating System (tm) will be even more secure!

    And to further support security and in accordance with UCITA, we at Microsoft (C)(R)(tm)(sm)(patent pending) will sue you if you disclose our bugs or reverse engineer our products to find bugs or securty holes. After all, anyone who is reverse engineering our software or Operating System (tm) must be a hacker trying to steal our IP or trying to break into our computers on the Internet [Internet (C) 2000-2100, Microsoft Corporation, All Rights Reserved. Any use of the term Internet without express permission by Microsoft Corporation is punishable by death in accordance with UCITA and the DMCA.]

    Have a Good Day (tm)!

  • First of all, I think that, generally speaking, Open Source is and always will be more secure than proprietary software.
    However, I will give you a little example of how open source can weaken security.
    I am using the mail reader Kmail, part of the KDE package. I had typed in my mail password and checked the "remember password" box, but eventually I forgot this password and realized that I couldn't check my mail anywhere other than home, which was a pain in the neck. My ISP wouldn't help me deal with the problem, and I couldn't change my email address (too many people IMO had that one).
    So I looked in the config files of kmail and found my password encrypted. I had no solution other than to go into the kmail source and re-implement the password decryption algorithm in C. I did it and, without being a C guru, I was able to make it work within 25 minutes and get my password back.
    As you can see, the fact that Kmail is open source helped me, but if I were a cracker or some 'malicious' computer nerd, I could have hacked a user's password just from being root on the machine, and probably gotten access to other machines controlled by the password.
    If the mail program had been proprietary, I would not have been able to decipher the password so quickly...
    Just to give you guys one (more) example of how difficult it is to be an open source advocate.
    This is the kind of problem the open source community will have to face, and maybe it should try to develop strategies against that kind of stuff...

  • by Anonymous Coward
    please make it your full time work to keep them out. It isn't worth it.

    I find it interesting that, at least according to Dave Barry, no war has ever been fought between two countries which both possess McDonalds restaurants. The reason for this is left as an exercise for the student- it could be that McD's is the single most potent instrument of peace the world has ever known, or it could be that McD's is part of a terrible communist plot to undermine the free world, burying all we hold dear beneath a mound of french fries and chicken McNugget goo.

    Only time will tell.

    By the way, the only reason I am posting as an AC is because I don't want to have to answer to the Fry Guys when their time comes. Just in case, you know.
  • by Anonymous Coward
    I don't even recommend writing to correct these people.

    Nothing would please them more than not having to deal with complaints regarding irresponsible reporting. Let them hear your views and ask them to better research their material.

  • Malcolm Beattie isn't just "a Unix expert at Oxford University". He was also responsible for releasing Perl 5.005 (which you're probably running if you're a Perl programmer), as well as the Perl Compiler and multi-threading for Perl. Malcolm is one of the unsung heroes of Open Source.
  • I agree with almost all of your post, except I think you should change one thing:
    The more important thing we all seem to miss is that the security of an OS is dependent on two critical features:

    How easy is it to find exploits?

    and

    How fast are those exploits fixed in the field?

    It makes no difference how quickly the code gets fixed; it is how quickly the sites get fixed that counts. Of course, open source software is a plus here too.

  • Well, depends on if you're talking local security or network security for the Mac... the Mac's local security is very low without the help of third-party utilities. OTOH, the Mac's network security is very HIGH without the 'help' of third-party utilities like web servers, ftp servers, etc., and even then, there's no 'command-line' to access if you can hack the web server/ftp server... yet.. (telnetting into a Mac OS X box is a kinda cool feeling)

    [poking fun at hemos-time]
    It's amusing that Hemos posted "I hate issue retread," when in the past he's been known to post a story that had already been posted a day or two before.. 'fess up, Hemos.. :)
  • If they aren't educated enough to know what Linux is, and can't be bothered to learn about it before writing a story, blacklist the company from ever being posted on Slashdot. Make sure you write to them.

    If this story is just a joke, blacklist them anyway. The only joke stories I'll accept would be those written on April 1, and even then that's annoying.

    There's no need to have them being slashdotted so they can get 'eyeballs' and 'ad revenue'.

    One huge source of chagrin, if they are using a proprietary server, would be to have it come down. Oops!
  • If you must post this sort of thing, perhaps you should follow a couple of rules:

    1) Precis the article so that people don't need to go and have a look.

    2) Don't include a link, put it in as plain text so that people have to decide to cut&paste it to read the article.

    Otherwise sites like this have a strong incentive to post drivel like this in order to leverage the slashdot effect to generate loads of hits and thus raise their advertising revenue.

    A reasonable precis might have been:
    Three so-called experts reveal their incompetence by advocating Security through Obscurity. Their opinions are thoroughly rebutted by
    Malcolm Beattie [ox.ac.uk], who runs the e-mail systems for Oxford University, among other things, and so can be assumed to know about defending systems from bright students.

    If you really must read the article, it's here http://www.silicon.com/...
  • They said "Security needs to be built into the architecture of the operating system."

    Last time I checked, apache is not an operating system. I'm not saying I'm agreeing with them, but the point you made was moot.
  • Clive Longbottom is internationally recognised in the provision of advice on foundational issues, covering hardware, operating
    system and collaborative technology issues as applied to today's dynamic businesses. Coming from an end-user background...
    Stop right there! Dodwell is elsewhere listed as a marketing manager, hardly someone to speak with authority on this subject.
  • First issue: why would anyone who wants a secure system click "Remember Password"? Secondly, why would a person who knows so much about security as yourself forget their password? Thirdly, Kmail is open source, which allowed "YOU" to fix your problem, which wasn't even a secure solution to begin with. If you had been using Outlook your life would have been hell. Just being root on the machine is probably easy on your linux box, because you don't seem to have a clue. I am sorry that you are unable to see why open source is better for security fixes and for protection by closing obvious open doors that everyone knows about if they read a book about it or even visit a site telling of linux's potential vulnerabilities. No one should trust an OS out of the box to be a secure solution. It's a shame you hit the submit button!! Leimy
  • Open Source is the only true way to make sure your software is totally secure, because if it is not, you can fix it. Now, it is true that you can find more backdoors if you have the source in front of you, but there are people out there who can read assembly as if it were their native tongue. So even in closed-source software, someone can figure out whether the software is insecure. Well, that is my say on this thing.


    http://theotherside.com/dvd/ [theotherside.com]
  • Just a quick search of the so-called "experts", Phil Roberts, Clive Longbottom, & Bernie Dodwell, would reveal that all three of them work for companies that provided, and only provided, M$ 'solutions'. I'll leave it up as a lesson for /.'s to track down the email addresses of the offending experts.

    hint. it wasn't that hard

    Steve
  • Most of these comments can easily be shifted in the other direction. At first glance you can say Open Source Software is insecure because of the ability of a hacker to see the source and thus exploit it. On the other hand, if a "competent" administrator and/or programmer cannot see the source, then they cannot fix the exploit. I guess these "experts" must have assumed that hackers cannot exploit compiled code. Hackers have exploited programs for years, with and without source code.

    At least with the source at hand I have a fighting chance to prevent crackers from entering my system, and if I can't do it maybe someone smarter can!

    This article was terribly written and was not even interesting. The author just spurted out comments from various people I have never heard of and then contradicted himself with another expert. Maybe if the experts gave case studies it would be more interesting. It seems that anyone can just say anything online without worrying about the consequences.
  • RSBAC [rsbac.de] has most of this functionality now, really. There's some other patches that do similar things...
  • Ah, having a busy babelfish day Sr. Covarde Anônimo? But if you actually had a brain, you would have noticed -brazil- is actually a German dude....

    Moderators, the above post is flamebait in its purest form!

  • Well, it depends on what you mean by "proprietary". If by that you mean "manufactured by a known company who has put their name on a Linux distro and charges money for it", then yes, there already exist "proprietary" distributions. If you mean "contains binaries only and we won't provide you with any source code", then theoretically the GPL would prevent that proprietary distribution. Although if a company includes their own applications with the distribution, they aren't obligated to release source to those, because those apps aren't GPL'd. So you could see a Linux distribution which has GPL'd Linux at the core, surrounded by different proprietary applications (installer, GUI, DVD player). This distribution could be licensed so that the GPL'd software may be redistributed, but the closed-source apps can't be. Thus, the CD that you get it on is proprietary in that you can't just make $1.89 copies of the CD and sell them.

  • At first I figured this would be a well reasoned article that presented some good food for thought. This is what I read instead (Imagine if you will, kids arguing on the playground):

    Kid 1: "My dad says your operating system is stupid because the source code is available."

    Kid 2: "Yeah, my dad said the same thing."

    Kid 3: "Yeah, mine too."

    Kid 4: "Well my dad makes operating systems (so he must be an expert) and he agrees with you guys."

    [Enter the token dissenter]

    Dissenter: "What are you talking about, we issue patches to our problems in hours not months. Do you even acknowledge the fact that your operating system HAS problems? I'll be the first to admit there are flaws in my operating system, that's the best way of getting them fixed..."

    All kids: "Let's kick his ass!"


    --
    Quantum Linux Laboratories - Accelerating Business with Linux
    * Education
    * Integration
    * Support

  • If there's any one phrase that is completely and irrefutably true, it is this:

    Security needs to be built into the architecture of the operating system.

    Correct - security cannot be an add-on. I'm not sure, though, how it would be possible to come to this conclusion from that statement:

    This cannot happen if your source code is publicly available.

    Here, he is comparing apples with oranges. What does open source have to do with design? In most cases I know of, the design has to be set before you start sharing and contributing code. Design happens through diagrams, papers, maybe brainstorming - but not through code.

    There is another point worth noting. All the "experts" said open source security was bad, that Linux had bad security. But not one of them said that a Microsoft operating system was any better. They did not mention NT, did not mention Windows 2000. Why is that?

    Regardless of what they promote, if you really want security, if it's your number one objective - you won't choose Linux. You won't choose NT or Windows 2000 either, you won't even choose OpenBSD. Instead you are going to look at the Orange Book ratings and take a level B or level A (verified security) certified operating system.

    Of course, the Orange Book only applies to non-networked computers - with the addition of the network, things become more complicated. Naturally usability will suffer if you want provable security... but you can't have it all.

  • Solaris is the most popular closed-source commercial UNIX out there. OpenBSD is an open source UNIX maintained in Canada, which doesn't have nutty encryption export laws. In terms of the security of OpenBSD versus Solaris, there's no comparison. Every release of Solaris that I can remember, back to SunOS 3(!), has had a major security hole allowing local root access or even remote root access. Compare this to OpenBSD, where all the code is vetted continuously for security problems, and where security features are on by default, or easy to turn on. If I needed a box to be secure, and I had to choose between Solaris and OpenBSD, I would choose OpenBSD in the blink of an eye.
  • Wrong experts. If you want a secure lock, you have it reviewed by locksmiths and lock pickers, not cat burglars. The specialty of a cat burglar is stealth, not locks (although some skill with locks is often useful with that specialty). A cat burglar can enter and steal from an occupied building without being seen, even a room where people are sleeping. Hollywood has several examples...on-line it is easy to find examples of failed cat burglars.
  • I hate issue re-tread.

    Agreed.

    By the way, when's the next "Ask Slashdot" on "Which license should I use?"

    When's the next Your Rights Online article about censorware?

    Just curious. :)

  • Yeah, then why is that CIA guy in so much trouble? You cannot defeat stupid/clueless lusers on your systems. Even without an internet connection you still have to worry about "sneaker-net" walking the data right out the door.

  • "He added that the issue could lead to proprietary versions of Linux being developed"

    Can you do this given that Linux is under GPL?
  • Someone over at Linux Today was good enough to dig up info on Clive and post it in reply to the article over there:

    Clive's bio [strategy-partners.com].

    Check out his "previous work". Screams "PHB" to me:

    Coming from an end-user background, Mr. Longbottom brings together large organisation experience with extensive IT knowledge to cut through current "flavours of the month", ensuring clients concentrate on the technology required to support business needs.

    Oh goodie. Somehow, "end-user experience" gives him the authority to declare Linux too insecure for use in a network...has anyone told him about OpenBSD, or is the IT department over at Strategy Partners tired of having to explain things to him?

    I'm an end user myself. This guy just seems like the stereotypical "I'm a tech expert! I know how to change my background and use Windows!"-type "expert" that you read about once in a while over at TechTales [techtales.com].

  • With closed source, you only have the company's word that it's secure. How do you know it's not chock-full of trapdoors for them, or [insert least favourite government agency here], to hack in easily? That may not be especially likely, but you don't know, do you? Why? Because the source is _closed_.

    Except how do you tell that it's unlikely that closed source is "back door free"? Especially when all sorts of junk already makes it into such programs. Also, how much is the word of a company that has been caught engaging in systematic perjury worth? (Probably one Turkish lira would be an overvaluation...)
  • It never ceases to amaze me how many 'experts' make the newbie mistake of thinking security is a matter of obscurity. I pity the companies that hire such newbie 'experts' to 'secure' their systems.

    I wonder if any of the 'experts' quoted worked on CSS?

    Of course, by their definition, no OS can be secure since every proprietary OS vendor has to have had at least one disgruntled programmer who has seen the source.

  • I've never even heard of any of these "experts" before. Only the Open-Source advocate actually gave any facts to back up his claim, I might add; the other three seem to be just spreading FUD.

    Yeah; you can find an exploit more easily if you have the code in front of you. So what? You get maybe a full day to use it if you're lucky. The second you use it you'll be pounced on, and if you try "waiting for the right moment" someone else will find your precious exploit and see that it's fixed.

    Contrast this with the "security-through-obscurity" of a closed-source system. OK, so it's harder to find an exploit. But you'll get at least a week, possibly even months if it's Windows, to play around with the exploit once you do find it, because it simply doesn't get fixed so quickly.

    Does being Open-Source make something more secure? Nope. But it doesn't make things less secure, either. It all comes down to how good of an admin you are. But it should be noted that the bugfix time on an OSS system is a huge advantage; there will always be exploits lurking around in any operating system, but the fast turnaround time of Linux and its kin makes it easier to keep a system secure even as those exploits are found.
  • Some anonymous coward dun said:

    I find it interesting that, at least according to Dave Barry, no war has ever been fought between two countries which both possess McDonalds restaurants. The reason for this is left as an exercise for the student- it could be that McD's is the single most potent instrument of peace the world has ever known, or it could be that McD's is part of a terrible communist plot to undermine the free world, burying all we hold dear beneath a mound of french fries and chicken McNugget goo.

    *chuckle* As far as that goes...I dunno on that, but I can truthfully state that I've not been able to eat meat at McDonald's since I saw one of the employees take a 50-pound bag of "Miracle Meat" (no, I am not making this up--this is what their meat is called), which resembled nothing less than the 50-pound bags of Gravy Train dog food you see at the pet-food department of the grocery, from the freezer-shed. :) (The really sad thing is--Gravy Train likely tastes better and has more nutritional value (not to mention more actual meat) than Miracle Meat does. :)

    Seriously, though...the real reason Dave Barry's analysis holds up well (save for Belgrade) is due to a combination of three factors:

    1). Generally, when the US goes into a state of war with another country, they put in rather strict trade sanctions that basically state that you cannot do any business--not even visit relatives--with that country unless you have special permission from both the State Department and the US Treasury. (The law that this is under is specifically called the "Trading with the Enemy Act", and you don't even need to be at a state of war--hell, out of the countries where it is virtually illegal for a normal US citizen to go (incidentially, now the only countries you can't send crypto to) we've had shooting wars with only two of them. It's this very law that makes it outright illegal for most Americans to go to Cuba or even buy Cuban cigars in Canada, while everyone else goes to vacations on Cuban beaches...)

    It doesn't hurt that the vast majority of big fast-food chains are based in the US, and even if they weren't the US anymore tends to not only put strict sanctions on its own citizens under the Trading with the Enemy Act, but they also manage to get through UN sanctions or at the very least sanctions among NATO members. You know what they say about 800-pound gorillas (no offense to gorillas, who generally are peaceful folk, have good senses of humour, and are rather intelligent unlike the US government ;)...

    2) Most countries that the US is pissed off enough at to get trade sanctions against and/or go to shooting wars with aren't likely to want much to do with American stuff at all, and likely have imposed their own versions of the Trading with the Enemy Act in regards to American goods and companies. (I'd be REALLY surprised if the Serbian government hadn't run the McDonald's out of Belgrade.) Again, shooting wars aren't even a necessity here, and a lot of it has to do with ideology--it's rather unlikely Afghanistan would be getting a McDonald's soon, or North Korea (even if eighty percent of the country wasn't starving to death) because the ideology of the countries wouldn't permit such a thing.

    3) The potential Real Biggie here is that there have not been any hellaciously big shooting wars since McDonald's incorporated back in the 50's. The last Really Big War was in the 40's, during World War II; most wars then have been skirmishes between at most four or five countries (literally the three largest wars the US was involved in were with Vietnam, Korea and Iraq since McDonald's opened shop--for various and sundry reasons hinted at with 1 and 2 above, it's doubtful they'd have McDonald's restaurants to begin with [though in Iraq's case it was probably a combination of culture and the fact they were fighting with Iran]). If another World War were to break out (Grud forbid), we'd likely end up warring with a country with a McDonald's (or more properly, one which HAD one before we ordered McDonald's to Divest Or Else). (Of course, we'd also end up likely going back to the high technology of making knives and projectiles out of obsidian and flint, not to mention getting meat by hunting down deer instead of ingesting Miracle Meat--this is, of course, assuming mammals larger than mice or bats survived and we didn't end up with Planet of the Bipedal Mousies 65 million years later :)

    For that matter--interesting historical note: McDonald's didn't enter either what is now Russia nor did it enter China until the Cold War had thawed quite considerably. (Most of you who are reading probably do not remember the days before Gorbachev in the old USSR. Gorby did a lot to warm up relations between the US and the USSR--before that, especially in the early- to mid-80's, people were convinced that before my generation hit the age of 18 (I'll be turning 27 this year, btw) the world would have been blown to smithereens and we'd end up with Planet of the Cockroaches. It was Quite Tense, believe you me.) Even then, they didn't open till things had warmed up to the point there was almost no going back from there...and, more to the point, companies like Pepsi and McDonald's thought it would be profitable to operate there and didn't have to worry about the State Department coming about and telling them they had to divest (other companies--most notably, oil concerns and banks--had already been burned like this several times, most notably in Cuba and in Iran).

  • I can easily log onto my ISP's very secure FreeBSD box, make a SAMBA build and browse around several business systems that are currently connected with their Win9x or NT shares left open to the world. Except I just don't do that, but Msft has given the keys to THEIR systems to anyone with half a brain to snoop around in.

    Read between the lines - the people quoted in the article, a 'network installer' and a company, "Strategy Partners", both probably have a big investment in NT & 2K, and probably are able to set up a secure NT system, but their claims that Linux is somehow inherently less secure and wide open to Linux-savvy hackers are just sales FUD. They are Msft 'expurts' in the sense of the old joke: an 'ex' is a has-been, and a 'spurt' is a drip under pressure.

    Now I rarely use 'FUD' for any Linux critics, but this is a clear case. I learned long ago how sales/politics works, and you have to build up CONFIDENCE in a system. Just having a working server is not enough; the owners have to BELIEVE in it and get the warm fuzzies as well. That's one thing Msft is good at: getting and keeping big clients happy in the board room, while the McSE's are in the server closet plugging up holes and traipsing around land mines.
  • It is far more secure than any current closed-source operating system.

    I like OpenBSD but your assertion is bogus. There are closed-source operating systems that are very secure, Multics, MLS versions of UNIX, SCOMP, MVS. See the list here [mitre.org] and look for operating systems with A or B class security ratings.

  • Bernie Dodwell, business development manager for System Security specialist Integralis Group, said the operating system is insecure because it is open source. "This issue has to be resolved to get the system ready for the enterprise. At present a hacker would be able to go through the operating system like a dose of salts," he said. Microsoft was keen to endorse this view.

    Anyone else find that last bit amusing?
  • Seattle, WA - In an announcement that has all of Silicon Valley and Redmond buzzing, abnormal psychologists at the University of Washington have found widespread insecurity among Linux advocates.
    Dr. Rajeev Papshigali and his team of graduate students analyzed Linux advocates in the lab for several months in the groundbreaking study. "We found several neuroses common among Linux advocates, including paranoid delusions of the most severe sort" reported Dr. Papshigali. "It was amazing, every time you mentioned anything unfavorable about Linux, they would become extremely defensive and begin shouting 'FUD!!!' Many of them also display paranoid delusions about Bill Gates."

    Dr. Papshigali's study has led several "Security Experts" to try to reach out to Linux users. Dr. Charles Widebottom, a popular self-help author, has just released a new book entitled My OS is Okay, Your OS is Okay. "The important thing for Linux advocates to realize is that not everything is FUD," advises Dr. Widebottom. "Some of it is valid criticism, and some articles, like the silicon.com one, are plain old-fashioned stupidity." Dr. Widebottom hopes that Linux advocates will simply take a deep breath before accusing Microsoft of controlling every aspect of the Media.

    Dr. Papshigali calls this approach naive. "One Linux advocate we studied actually walked into a McDonalds and ordered a burger with Linux on it. When the cashier said 'what's linux?' he started screaming 'microserf' and then accused Ronald McDonald of being a paid henchman of Bill Gates. I don't see how a deep breath will help these guys."

    Dr. Papshigali also noted that other OS advocates display major insecurities, with Windows fans becoming very irate and defensive when you point out that Microsoft means 'small and flaccid', and mac users (to put it politely) thinking a bit different. "We see the possibility of virtually limitless research grants with the mac users," commented Dr. Papshigali.
    --Shoeboy
  • Bernie Dodwell, business development manager
    Clive Longbottom, strategy analyst at Strategy Partners
    Phil Roberts, systems manager for a network installer

    Since when did these chaps become "security experts"? Has anyone ever heard of them? Just for the purpose of comparison, I did a quick poll of my chums and came up with this:
    1 operations manager
    1 senior DBA
    1 dev manager
    1 senior systems engineer
    Wow, equally impressive titles. Maybe we can start writing security articles too.
    I can spare 5 minutes to provide the same level of detailed, well researched analysis these guys did.
    --Shoeboy
    (full disclosure, I work for microsoft)
  • Steg. is security via a secret. Obscurity is not a real secret; it's just something that costs a little effort to find. Yes, this small effort is a barrier, making it slightly more difficult to find an exploit. However, it is a far greater barrier for fixing things.
  • In addition to the question of how fast the closed-source vendors are going to move to patch exploits...

    Who's going to secure us from the closed-source vendors?

    That became a big question after some nasty pranks were revealed back in '99, and I suspect it will become an increasingly important question for consumers, businesses, and governments alike over the next few years.

    --
  • > an ad for silicon.com caught my eye. It was a picture of a man's head, with a finger held up to his lips, and the slogan "Don't reveal your source!"

    It could almost make a guy wonder who's behind silicon.com, eh?

    It's obvious who suffers under competition with free software. But who suffers from open software?
    • People with "easter eggs" in their code. (And I'm not just talking about the otherwise benevolent bloat. I'm talking _NSAKEY kind of stuff.)
    • People who make their living selling aftermarket fixes for problems that closed-source vendors won't fix.

    --
  • > open source is no more stable than closed by default. the opposite is probably true

    By the same logic, companies should not document the features of their applications, because that makes it easier for people to find ways to abuse them (think "macro viruses").

    The solution isn't keeping the problems hidden, it's keeping the problems out.

    And no one has ever shown, nor posited a convincing argument, that closed source beats open source on that.

    --
  • Well, yes, unless of course you are a really smart administrator, and you install a linux-based firewall while letting your boss be so proud of his NT network -grin-. That way security fixes can be made at the door, by you.
  • There are two claims in the article. The first one, that Linux is not secure enough, is the one I'm not afraid of at all. All of us know that only a troll will still think they need a closed-source system in order to get security. All of us know how open source works to allow bugs, and in particular security holes, to be found and patched quickly.

    But the second claim is somewhat more disturbing: that there is a trend of more people becoming trolls. If there really is such a trend, it has to be dealt with. Of course, there is every possibility that the article is once again funded by Microsoft to generate FUD.

    But if not, what can be done? How can newcomers be educated about security more readily than getting the FUD?
  • No one wants to hear it, but all security is security through obscurity. It's simply a matter of whether something is obscure enough.

    Hoping you're safe because you haven't publicized that your web server exists, even though it has holes, probably isn't obscure enough. Port scans happen all day, every day.

    Hoping your e-mail is secure because someone shouldn't be able to randomly bang on the keyboard and generate your 2048-bit key IS probably obscure enough.

    In both cases, if the attacker knew what they needed to know, they'd succeed.
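    The "obscure enough" point is really just arithmetic. As a rough sketch (the guess rate is an invented figure, purely for illustration):

    ```python
    # How long does brute force take? Illustrative numbers only.

    def years_to_search(bits, guesses_per_second=10**12):
        """Years to cover half a keyspace of `bits` bits at an
        assumed guess rate (integer math to avoid float overflow)."""
        keyspace = 2 ** bits
        seconds = (keyspace // 2) // guesses_per_second
        return seconds // (60 * 60 * 24 * 365)

    # A 16-bit port number: scanned in no time -- hiding the server fails.
    print(years_to_search(16))              # -> 0

    # A 2048-bit key: beyond any conceivable guess rate -- obscurity holds.
    print(years_to_search(2048) > 10**500)  # -> True
    ```

    Same principle in both cases; the only question is how many guesses the attacker needs.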

    OBOSS: We've been breaking commercial, closed-source software for way too many years to believe that not having the source code slows us down.
  • >If the mail program had been proprietary, I
    >would not have been able to decipher the
    >password so quickly...


    http://www.thievco.com/advisories/nspreferences.html [thievco.com]

  • It could almost make a guy wonder who's behind silicon.com, eh?

    Actually, yes.

    whois silicon.com

    Registrant:
    Network Multimedia Television (SILICON16-DOM)
    15-19 Britten Street
    London, SW3 3TZ
    UK

    Hmmm...

    James

  • I was amused by the line "Microsoft was keen to agree with this." I can just see it now:

    "Mr Ballmer, do you agree that Linux is insecure because its source code is available?"

    "Well, from a marketing standpoint, I'd love to agree (as you know we hide our source as if it were actually valuable, so we have something to concede to the DoJ). However, the truth is that my technical analysts (yeah, we had to hire a couple last week) told me that Linux is actually very secure, and that most of the security problems that arise in any environment are either insiders exploiting the local security policy or months old problems that the administrators should have fixed. Now, I'm no programmer, but it seems to me that if I had the source code, then I could do my own security evaluations, and limit the extent of problem #2, but it still lies in my hands to create good security policy."

    "Wow, Mr Ballmer, that's just an amazingly cogent and forthright statement for you!"

    "Mmmmrrfffll... Mrrrrmmm! Rugh.... Get this damn daemon out of my head!"

    "Um, and as Mr. Ballmer spews forth pea soup, we go back to you in Metropolis, Clark!"

    I see I got side-tracked, there. Sorry.

    Disclaimer: None of the people herein depicted ever acted this reasonably.
  • NT wasn't designed from first principles to be secure.

    The first rule of security is to limit what programs can do to the minimum necessary to do their job. Putting the video drivers into the kernel is not the minimum necessary to do their job, so obviously security was secondary to other aspects.

  • He added that the issue could
    lead to proprietary versions of Linux being developed.


    Obviously this person does not know much about Linux, since everyone I know knows that the GPL will prevent this.

    This view seems to be more or less closed source advocates trying to bring a dead horse back to life, just to beat it a few more times.

    A system is not easy to break just because you have the source, unless you have a bad system where a cracker can see areas where buffer overflows exist. I was recently told by a Samba developer that there are several areas where buffer overflows exist in W2K. And this is just one of the ways crackers can break systems.

    Steven Rostedt
  • ...you just wouldn't be able to distribute the binary if you didn't also make the source available.

    IIRC, the GPL only controls distribution, not what you actually do with the OS in-house. Of course, that implies that if General Motors distributed a proprietary Linux to all their employees, the employees would also have a right to the source code. I guess that the employees would also have the right to redistribute the whole thing. They might get fired, but probably would be legally safe. ;) (IANACL)
  • Look at the rapid increase in problems with Quake bots after source was released.
    The release of the Quake source code, and the subsequent increase in cheating within that environment, has been invoked as proof of the failure of Open Source security a few times now. While this event did provide a few valuable lessons in designing a secure environment, the conclusion of Open Source's failure (and the subsequent strength of obscurity) misses the point.

    First and foremost, it has to be mentioned that Quake has a very poor security model. It relies heavily on client-side security. Quake isn't alone in using this model; however, it provides countless ways to attack the integrity of the environment. To id's credit, there are some very important performance reasons this model was adopted (search for Carmack's Slashdot posting on this topic). Nevertheless, we have a design that is wide open to attack.

    Closed source obscurity did not protect Quake. It sometimes sounds like Quake's cheating woes didn't begin until the release of the Quake source. Untrue. While Quake was a closed source product, various ways to cheat existed (proxies, hacked maps, hacked models, etc.). It wasn't as widespread and blatant as it is now, but cheating was hardly uncommon.

    Open Source changed the environment. By releasing the source code, Carmack allowed the world to see exactly how insecure the Quake environment was. Blatant cheats (i.e., the speed cheat) appeared. Cheats became more widespread as more people had access to them. It would be ignorant to claim that the Quake community hasn't suffered because of this. And many blame Open Source and the GPL.

    But blaming Open Source, and claiming the widespread cheating is an example of how Open Source can't be secure, is just as ignorant. Quake itself is to blame. Its security model needs a complete overhaul. Open Source developers have a chance to shine. Their challenge is to do that overhaul: make Quake playable and secure. As Carmack has noted, it's no easy task.

    Whether Open Source developers are able to "fix" Quake or not... there will be one thing for certain. We will all know how secure Quake is. Before, only a select few knew of its weaknesses. And some of those select few used their rare knowledge to exploit the environment without public awareness.

    An interesting side note to all this... I visited a Quake cheating web site the other day. It seems that they pulled a bunch of the cheats since they violated the GPL (no source code available).

  • It's trivial to hide stuff like tfn in plain sight in the 'nix-es - simple patches to who, ps, top, syslogd, etc.. and I'm done.. but I've yet to figure out how to patch g-d taskmgr and pview (or the new g-d sfp stuff).. To say nothin 'bout the old capture login & password scripts for enticing the unwary 'nix admin.. or peering into pgp-s process space..
    Nice fear tactics. Spook the horses. Have a chuckle.

    Of course, many of the same points made here can be made about closed flavors of Unix and even WinNT. Our dear Joe Friday may not have figured out how to do it... but NT utilities can be trojaned. eEye gave an interesting demonstration at ToorCon of doing just that.

    Closed source... obscurity... does not provide security.

  • silicon.com, advocates for closed source software! I knew there had to be one out there.

    I was looking at these ads too, but now I'll look at them with a different point of view.

    Check out their website, you'll need to log on to see anything interesting (hint, the anti-cypher is your friend) to see these gems

    UK employees happy with big brother watching
    We like being spied on, says study by monitoring software company.

    Microsoft UK MD blames Win2000 bugs on rivals
    But Win2000 is closed source, so how did those rivals plant those bugs in there?

    Eric Raymond backs Linux profiteers
    Go ESR!

    Consider this to be news lite. Nothing more than a handful of overworked and underinformed journalists who reformat press releases and trim them down into bite size newsbits. So this is where all those ex-Dennis people ended up (bring back Zero!)

    If you have the patience, try loading one of their streaming videos. They are under a permanent slashdot effect, so the videos are best viewed by copying locally. The little chats they have with industry 'experts' can be quite hilarious, they are really nothing more than info-mercials.

    the AC
  • Any article with so-called "experts" seems to find mostly experts in other areas. The "strategy analyst" and the other "experts" don't seem to know what they're talking about. For one thing, some of them seem to ignore the fact that OpenBSD, undoubtedly one of the most secure, if not the most secure, operating systems, is open source. If an operating system is truly secure, it does not matter whether it is open source or not, and open source projects undoubtedly end up the most secure.

    Chris Hagar
  • HOWEVER, why is there in each article like this also an "open source advocate" who claims that "patches from Microsoft take months to appear!", which is simply not true either!
    It's a mild exaggeration, but is probably pretty close - have a read through the BugTraq archives [securityfocus.com] - it is often two to three weeks after a report is handed to them before they acknowledge a problem exists, and another few weeks before a patch is released - and even then, they often seem to have "phone support for this patch as it is not regression tested" on it...
    --
  • Well, one that springs immediately to mind was the Lotus email product "secure encryption" that leaked most of the secret key in a form the american NSA could read - and was used by some government departments overseas as a secure communication medium. They weren't really that pleased when they found out - example of the conversations of the time can be found here [google.com], and a suitable websearch should find you hundreds more :+)
    --
  • Ok, update for any who care :+)

    I have contacted the two whose companies are named (interestingly enough, one doesn't actually work for the company given, but the journalist thought it would sound "better" to name the larger company, and not the subcontractor) and both say they were taken massively out of context;
    Both seem to believe that the more recent server platforms (NT and Linux in particular) are not yet mature enough for a "secure" environment, and that the open/disclosed source nature of some Unix-alikes makes vulnerability finding easier and faster than it would be if they were closed source (which of course is true). Given that BOTH stressed in their replies that they had been discussing only the needs of secure services (for example, banking servers), the exercise of a certain caution (for example, recommending SeOS as a secure operating system, which it practically defines) is understandable.
    Both also expressed their disappointment at the hate-mail they had received from members of this forum over this - which is predictable, I suppose, but as is usually the case, uncalled-for.
    --

  • Yes, security through obscurity DOES work!
    Chanting that it doesn't work doesn't make it so and doesn't help.

    It's a debatable option - in the short term, Security Through Obscurity DOES work, provided
    1. Black Hats can't get hold of a working copy to test against
    2. The vendors are committed to extensive testing and getting the patches out into the field fast and
    3. Known exploits against similar systems don't work on this one
    If ANY of the above aren't true, then StO fails; if the system actually DOES have good enough security to survive a failure of one of the above three points, then it has good enough security to be open source (or at least peer reviewed) in the first place.

    Out of interest, does anyone know

    • If any of these three are particularly tied to a closed source & StO product and
    • who Phil Roberts (the main source in the piece, apparently) is and who he works for?

    --
  • This is obviously true. Obviously security through obscurity works - that's why Windows NEVER gets hacked, and why we hear about Linux machines being compromised every day. You just have to look at the real statistics - none of this 'Anecdotal' evidence...

    :-)
  • Actually, it was Thomas Friedman, the Foreign Affairs columnist for the NY Times, who came up with the McDonald's Theory of Conflict Prevention. And yes, it did fail in the case of the US bombings of Serbia, although Friedman rather convincingly argues that it was "McDonald's" that stopped that war, rather than any traditional military concerns.

    That is to say (since obviously McDonald's didn't literally stop the war, just like it doesn't literally prevent other ones) that the reasons the Serbs gave in and withdrew from Kosovo had nothing to do with any military losses we inflicted on them. Indeed, we barely touched their tanks/artillery in Kosovo, which were dug in well in advance and shielded by the mountainous terrain. Our bombing campaign against their military targets was a pretty big flop.

    Instead, they surrendered because we bombed their economic infrastructure--namely all the bridges and power plants in Belgrade. Thus, Milosevic didn't withdraw because he no longer had the military ability to continue occupying Kosovo and killing/kicking out Kosovars at will, but rather because Serbia wants to be part of the global economy--hence the McDonald's--and the economic/political costs were too great. Indeed, he would have had a revolt on his hands, precisely (so says Friedman) because the citizens of Serbia care more about being able to "eat at McDonald's" (i.e. partake in the global economy) than they care about oppressing a bunch of Kosovars. (Or Kosovians, if you're George W. Bush.)

    Hence the McDonald's Theory of Conflict Prevention is strengthened, despite being conclusively refuted by example. Or so says Friedman. (If you can't tell, I'm taking a course that he's co-teaching this semester. But you can read all of this in his book, The Lexus and the Olive Tree [barnesandnoble.com].)
  • "An example: the SYN DoS weakness discovered a while back, in both Windows and various UNIXen. Open source administrators and Linux/FreeBSD kernel hackers had a fix out within hours, while Microsoft and others languished for days or even weeks before releasing a fix. "

    So a skilled administrator would then install an opensource firewall of some type over night.

    Any competent system administrator would be able to install a firewall, and work around the operating system bug (hack around, in this case) :-)

    Just scale up your thinking beyond the case-by-case scenario. Any admin worth his/her salt would just grab a 486, firewall with NAT/MASQ, and then report the problem to the PHBs. If the PHBs insisted on insecurity, the admin would then follow the job description (security over all), and lie to them like many other IT people have had to in the past (see false authority syndrome).

    QED a knowledgeable, competent sysadmin is the most crucial part of any security :-)
    ---
  • The program encrypts it in a trivial fashion to stop any namby from just going in and looking in. However, given enough time, anyone could decrypt it -- even by hand. This is why the shadow password system exists -- only UID 0 processes can access the (trivial to brute force) crypt DES hashes of the passwords. True, modern distros (Slackware) use MD5 hashes now, but they can still be brute forced given a dictionary, an MD5 encoder, and a final "hash" to compare to.

    It is not possible to store secrets on the client computer if the client computer cannot be trusted.

    Let me reiterate: it is not possible to store complete secrets on the local computer if the local computer cannot be trusted.

    Solution: Don't write apps that store passwords on the local computer without using another password to encrypt them.

    Workaround: Disable all "remember this password for me" checkboxes that keep cropping up in all sorts of apps

    If I have access to your money box, I can break the lock. If I have access to your passwords, I can brute the hash. That's why you shouldn't "remember passwords" unless you 1) have the computer some place secure, and 2) are willing to remember it yourself so you don't put yourself in that situation.
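    The "dictionary, an MD5 encoder, and a final hash to compare to" attack described above is trivially small. A minimal sketch (the word list and stolen hash are made up for illustration):

    ```python
    import hashlib

    def dictionary_attack(target_hash, wordlist):
        """Hash each candidate word with MD5 and compare against the
        stolen hash. Nothing about the algorithm is secret -- only the
        (weak) password itself is unknown."""
        for word in wordlist:
            if hashlib.md5(word.encode()).hexdigest() == target_hash:
                return word
        return None

    # A hash lifted from an untrusted client machine (example only):
    stolen = hashlib.md5(b"letmein").hexdigest()
    print(dictionary_attack(stolen, ["password", "qwerty", "letmein"]))
    # -> letmein
    ```

    Which is exactly why a recoverable secret should never sit on a machine you can't trust.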
    ---
  • Security by obscurity has been debunked so many times, and yet there are still people who cling to it. The real reason is simple. Their job security depends on the flaws in their code not being made public because they aren't bright enough to avoid them or even fix them.

    Here's Bruce Schneier's commentary on open source and cryptography, an obviously security related subject on which he can reasonably be considered an expert:

    As a cryptography and computer security expert, I have never understood the current fuss about the open source software movement. In the cryptography world, we consider open source necessary for good security; we have for decades. Public security is always more secure than proprietary security. It's true for cryptographic algorithms, security protocols, and security source code. For us, open source isn't just a business model; it's smart engineering practice.


    There is more detailed commentary in the newsletter [counterpane.com] that I have quoted. The people who believe FUD respect recognized authorities. Use him as a good one to counter this particular piece of FUD.
  • A small correction: www.articon.com probably has nothing to do with that. The relevant sites are www.articon.at, .ch, .cz, and .de, but still 3 out of 4 are running Apache on Linux.
    --
  • I really don't know where they find these people. I noticed his title was "strategy analyst" not "security analyst." The part where he said "Security needs to be built into the architecture of the operating system. This cannot happen if your source code is publicly available"--what rock has this guy been living under? This guy is supposed to have some kind of understanding about OS security?

    I wouldn't be too concerned about this article at any rate. Open source has already proven itself in this area. It just goes to show that there is still a bit of ignorance about it and there will always be someone that digs it up and puts it in an article...

    numb
  • I've got to admit I haven't read the article itself (yet), because I got this eerie feeling of wasting time reading some cheap tabloid-like article.

    However, I would like to make one single comment. If this headline were true, then how on earth could a program like PGP be as secure as it is, even though its source code was released?

    These guys still live in the stone age if you ask me. Back then you could hack a dBase database just by taking a closer look at the Clipper source code. Times have changed; guess it's time to read up and get a clue.

  • I'll reply to the points as they are made:

    1. Individual applications often perform security audits looking for buffer overruns and the like. Also, a buffer overrun found in gnu grep would be fixed and benefit all operating systems that it can be compiled on.

    2. I use RedHat as my single point of contact. It's worked very well so far. Linuxtoday also publishes when security patches are released.

    3. The community keeps an eye on this, and if the Ukrainian fixes the problem and there seems to be a consensus that that is the proper fix, I'd install it without compunctions. Hence the quickness of the response. Besides, the community mobilizes pretty quickly. It's not like there's just you and that Ukrainian working on the problem.

    4. As 2 and 3 are not a problem, 4 isn't either. There are many people dedicated to finding security bugs, and many amateurs who stumble upon them. With many eyes, all bugs are shallow. As is all FUD.

  • Interestingly enough, a quick look through Netcraft reveals the following:
    While Clive Longbottom claims that Open Source is insecure, Strategy-partners.com [netcraft.com], his company, runs a BSD server.
    And while Bernie Dodwell says the same thing, his company, Integralis, merged with Articon, where most of their servers run, yes, you got it, none other than Linux:
    www.articon.com [netcraft.com]
    www.articon.de [netcraft.com] (german branch).
    www.articon.cz [netcraft.com] (czech branch).
    www.articon.at [netcraft.com] (austrian branch).

    Now that's what I call getting things straight.
  • To be fair, Integralis.com [netcraft.com], Integralis.fr [netcraft.com] and Integralis.co.uk [netcraft.com] are using WinNT or W98.
    Now that's what I call secure.
  • I am going to have a go at tracking down the authors of these quotes on the off chance they have been taken out of context; I am not familiar with Strategy Partners, but I know many at Integralis Group would be horrified that they had given a press release / quote stating they believed in security through obscurity...

    The bio of Clive Longbottom (one of the Open Source is less secure guys) is at:

    http://www.strategy-partners.com/bios/clive.htm


    Since he's a chemist, I wonder if he's in favor of knowing what active ingredients are in medications and drugs. After all, "close the source" of drugs and it's harder to abuse them!

  • Do you put valuables out of sight when you leave your car parked in public?

    Yes, it's called keeping a LOW PROFILE. There is no security in dealing with cars, anyone can come along and smash a window or torch through the trunk.

    Let's take your analogy and express it in a little more realistic scenario: The black hats want an object that is in your car, and they're going to make every attempt to steal that object when your car is parked.

    Security through obscurity: Hide the object under a seat or in the trunk. I'd give a professional car stripper (hey I live in New Jersey :) 10 minutes before your car is apart and the object is stolen.

    or

    Good security: Attack dogs inside the car, the object in a safe that is welded to the frame, armed guards surrounding the car.

    Which is more secure? I even told you where the object is in the second situation...


    Do you have a hidden key for your house/car, and if you really believe that obscurity doesn't work, why is it hidden?


    This isn't SOA really either. This is like suggesting that even though I use Open Source operating systems, I'm using SOA because I don't give the root password out.

    The security is with the lock I use at the door. I'd much rather use a lock that has been under a peer review and proven unpassable without the key than one which is "closed source" and unreviewed.
  • Opinion: This article may or may not be FUD, but, inescapably, it's pretty much the 'Other Camp' reaction to the zealot rallying cry that Open Source code is some kind of software panacea. If OS proponents weren't so single-mindedly bullish about its superiority in all fields, this wouldn't happen nearly as much. Don't confuse the development process (which IMHO is superior) with the product. OS is a solution, but not necessarily the only solution. It's an alternative, but shouldn't be dogma.

    And I'll state what I consider to be a fact. There's nothing inherently more secure about an Open Source implementation of a feature versus a proprietary implementation. But there is a greater likelihood that the feature will be improved upon, faster and better, than a proprietary solution. Not always, but it is more likely.

    The article, though, seems to make a different (mistaken) assumption. Access to the source code for a given Linux distro is probably the least significant factor in compromising security on a given Linux box. Is the article implying that someone would be able to develop a cracked kernel, and somehow cause its proliferation? Why not also mention Sendmail, BIND, or Apache, all of which sit on more boxen than Linux does? The kernel isn't the typical weak spot in a system; if there's any main software weakness, it's likely to be in the various server daemons.

    Most importantly, though, at the end of the day, poor administration is absolutely the worst problem. Implying that a closed-source OS is automatically safer instills a ludicrous perspective, implying that admins of closed OSes need to know less about security. For that reason, and that reason alone, Silicon.com ought to be pilloried publicly.

  • at epinions.com [epinions.com]. You can rate a ton of different things and people can rate reviews. The site includes a section for software, including OSes. You can rate [epinions.com] people's opinions. Pretty neat idea, really.
  • If the information on your machines is so incredibly vital to you, disconnect them from the network.

    Of course, this isn't always an option. But I think the common view on `hacking' is still the TV-ish "hey, I cracked the DoD's machine in 5 min."

    I'm having a bit of trouble imagining that the DoD, or any other organization for that matter, would put all their "Top Secret" documents (including the ones with the red "Top Secret" label) on machines equipped with a modem or a connection to the Internet.

    The same effect can be reached through firewalling and proper administration.

    If the information is unavailable it is secure.

  • Security through obscurity DOES NOT WORK!
  • Also see Stanford's version of Linux [slashdot.org].

    Personally, having worked on development of secure operating systems for DoD years ago, I don't take seriously anything with an all-powerful "root" or "administrator" account. In the serious security world, it's not done that way. But users hate highly secure operating systems. There are lots of things you're not allowed to do.

  • This is just part of the never-ending soap opera of FUD. Last time we visited the "security experts", they were telling the world that Open Source operating systems and applications were more susceptible to virus attacks. This time it's security. Get your boots on, folks, it's startin' to get deep.

    From a PHB's point of view, plain and simple, security on any system is more in the hands of the sysadmins and the proper implementation and administration of the products than just the base architecture of the product. That said, given the caliber of admins on the street, basically a choice between the MCSE variety and a solid Linux or open OS admin, I would choose the open OS admin every time.

  • Silicon.com has uncovered growing concern that the Linux operating system suffers from major security problems that could prevent its widespread adoption in the enterprise environment.

    The very first paragraph tells me I do not need to continue reading. Amazing how they have magically uncovered this to reveal it to the rest of the world. BZZT. I'm working; no time to read garbage.

  • You want to design a secure lock? Take your design and throw it to the cat burglars of the world and see what they do with it.

    You want a secure server? Give the source to the system crackers to play with...same thing. You go through a time when exploits are showing up left and right (and getting patched), but soon you'll have a hardened server.

    After all, who do you trust more to find the holes in your security? A couple of hired security experts? Or a few thousand people with direct experience slipping into places they don't belong?

    What part of this doesn't make sense?

    -- WhiskeyJack

  • Really, what is secure? Eventually, every system has a port of entry where there is an element of trust. Even biometrics presume you're presenting your body parts of your own free will, not at gunpoint.

    Anyway, who cares what the analysts think? The proof is in the pudding - people who need secure OSs are using OpenBSD. No endorsement is more important than a headcount of installations.

  • No, it's just confirmation of his cluelessness.
    Anomalous: inconsistent with or deviating from what is usual, normal, or expected
  • I was travelling into London on the Tube yesterday, and an ad for silicon.com caught my eye. It was a picture of a man's head, with a finger held up to his lips, and the slogan "Don't reveal your source!" underneath it. I assumed at the time that it meant "silicon.com is a source of industry knowledge - don't tell people where you get your information", but I'm not so sure that the second (anti-free software/open source) meaning is an accident, now.
  • Check the credentials of the people questioned and you realize that this article is heavily pro Linux.

    1. Phil Roberts, systems manager for a network installer, ( anti )

    2. Clive Longbottom, strategy analyst at Strategy Partners ( anti )

    3. Bernie Dodwell, business development manager for System Security specialist Integralis Group ( anti )

    4. Unix expert Malcolm Beattie, systems programmer for Oxford University Computer Service ( pro )

    This is like coming out with some claim about the thrust required to launch a 15-ton object into space and having a bunch of auto mechanics and a graphic artist give one view, then getting another from the chief launch engineer at NASA.

    Simply put, the fact that the only Linux supporter comes down strongly against the other three, and also has the best standing to make such claims, speaks volumes. For those who don't know, you can't name a top-ten list of universities without Oxford on it. Some of us would call it the #1 university on this planet.
  • by Bad Mojo ( 12210 ) on Monday March 20, 2000 @04:28AM (#1191105)
    Move along. Nothing to see here. Just more FUD.

    For those who didn't read the article, you didn't miss much. No real examples. No specific instances of Linux being insecure. Just general hearsay about how insecure Open Source must be. If you want a textbook example of FUD, this is it.

    I don't even recommend writing to correct these people. Let them wallow in their own crapulence(sp).

    Bad Mojo
  • by sammy baby ( 14909 ) on Monday March 20, 2000 @05:18AM (#1191106) Journal
    Check out the archives on BugTraq (available at SecurityFocus.com [securityfocus.com]). Although I wasn't able to find much during the 5 minutes or so I spent trying to navigate their irritatingly counterintuitive web site, I was able to locate documentation on a backdoor to 3Com switches [securityfocus.com]. I also know (from having previously subscribed to that list) that it's far from the only back door intentionally left in a product.

    Even our highly clueful friends at id [idsoftware.com] were caught with their hands in the cookie jar [securityfocus.com]. Carmack later went on record as saying that leaving the back door in the finished product was a dumb idea, and that he regretted the decision.

  • by 0xdeadbeef ( 28836 ) on Monday March 20, 2000 @05:19AM (#1191107) Homepage Journal
    I've got an idea. Someone should implement a credibility database for pundits and other self-described "experts". When they say something really good or really stupid, they go in. Positive karma when good, negative when bad.

    When one needs the services of a consulting group, or just needs to hire more people, you can go to the credibility database to help weed out the morons. It might encourage these people to think a little before they say something controversial and stupid just to get their name in an article.

    Say for instance, Phil Roberts of some unnamed company, Clive Longbottom of Strategy Partners, and Bernie Dodwell of the Integralis Group, would all go into this database as "clueless".

    My only concern is that this could be used to silence speech, as your company forbids you from talking to the media about *anything* because they don't want your negative karma affecting them. It could also encourage "cliquish" behavior, as people who have a high rating in the Linux db would probably be negative in the Win2k db. But hey, that's politics; it's been that way even without public databases.
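    The karma ledger described above could start out as something this simple (a toy sketch; the names rated and the thresholds are just illustrations pulled from this thread):

    ```python
    # Minimal pundit-credibility database: positive karma for good calls,
    # negative for stupid ones, queryable before you hire a consultant.
    from collections import defaultdict

    class CredibilityDB:
        def __init__(self):
            self.karma = defaultdict(int)  # pundit name -> running score

        def rate(self, pundit, delta):
            """+1 when they say something really good, -1 when really stupid."""
            self.karma[pundit] += delta

        def verdict(self, pundit):
            """Anyone with net-negative karma gets flagged."""
            return "clueless" if self.karma[pundit] < 0 else "credible"

    db = CredibilityDB()
    db.rate("Clive Longbottom", -1)  # "security can't happen if source is open"
    db.rate("Malcolm Beattie", +1)   # actually knows Unix
    print(db.verdict("Clive Longbottom"))  # -> clueless
    print(db.verdict("Malcolm Beattie"))   # -> credible
    ```

    The hard part, of course, isn't the data structure; it's deciding who gets to hand out the karma.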
  • by infodragon ( 38608 ) on Monday March 20, 2000 @05:13AM (#1191108)
    A. The slashdot community is on the internet.
    B. When something like this gets put on slashdot it often results in the slashdot effect.
    C. Companies like Silicon.com generate revenue through ads
    D. More hits = more money
    E. Slashdot effect = More hits
    F. Slashdot effect = More money

    Are we responsible in some way for the Linux FUD? By visiting these sites, we are supporting it.

    Just an idle observation.

  • by DaveHowe ( 51510 ) on Monday March 20, 2000 @06:55AM (#1191109)
    They got quotes from a strategy analyst and a business development manager.
    Not entirely sure if this applies to Intergralis, but I just checked with OUR personnel department, and "business development manager" is one of the things our cold-call salespeople are allowed to call themselves on their business cards. The vast majority are issued with company car, laptop and sales brochures, and given a half day "induction" before they go out on the road....
    --
  • by DaveHowe ( 51510 ) on Monday March 20, 2000 @06:30AM (#1191110)
    First of all, Silicon.com isn't any place to be getting good opinions about technical stuff. It's an overview-style PHB rag. Too bad they don't recognise this.
    Unfortunately, this is EXACTLY the sort of rag we need to keep FUD out of - we don't need our PHBs taking every word as gospel, or we could find yet another "use only Microsoft, only Microsoft can be trusted" Corporate Strategy Decision handed down from on high and enforced, purely on gossip and hearsay.
    I am going to have a go at tracking down the authors of these quotes on the off chance they have been taken out of context; I am not familiar with Strategy Partners, but I know many at Integralis Group would be horrified to learn they had given a press release / quote stating they believed in security through obscurity....

    BTW, did anyone else visit the registration screen and read their blatant attempts to build a headhunter-register? "how soon do you plan to change jobs" as a mandatory field.... :+)
    --

  • by anatoli ( 74215 ) on Monday March 20, 2000 @06:54AM (#1191111) Homepage
    You confuse two different kinds of security by obscurity. You can obscure your encryption method (or your OS), or you can obscure the fact your message (or your computer) even exists.

    The former kind doesn't work. The latter kind (which is steganography) may work if you keep a low profile.

    IOW you probably can leave your briefcase in the trunk of your $500 '78 Subaru, but not of your $800,000 '99 Ferrari.
    --

  • by waynem77 ( 83902 ) <waynem77@yahoo.com> on Monday March 20, 2000 @05:11AM (#1191112)

    The Computer Virus Myths [kumite.com] page labels this "False Authority Syndrome" and has a pretty good write-up at http://kumite.com/myths/fas/ [kumite.com].

  • Microsoft and others have proved again and again that you can not trust the people implementing your operating system. Only through open source and open peer review is any security at all possible.

    Any "security expert" who implies that complete security can be attained with just the right choice of operating system is an idiot. Security is an ongoing process that starts with well-trained administrators. But most companies want to pay some dipshit (much less money) to keep their network running and like to delude themselves into thinking that their networks are secure because they're running an obscure OS.

    Anyone out there holding shares in any internet company should attend the next shareholder's meeting and ask some hard questions about the security policy and the "experts" in place to deal with it.

  • by JDax ( 148242 ) on Monday March 20, 2000 @04:41AM (#1191114)
    ....Once again. A quote from the article:

    Both agreed that commercial flavors of Linux are still far from ready for the corporate environment

    Uh, excuse me? If we're focussing strictly on security, then how (and please don't flame me, Microsoft users/administrators, because I am one myself at work by requirement, whereas I choose something different at home) can any Microsoft product be "ready for the corporate environment", with at least a virus a week (and more and more - at least one a day - being reported), whereas Linux is not? The amount of time *I* and my staff have to spend making sure 800+ desktops running Microsoft products, as well as 30 servers running said MS products, are virus-free has gone beyond comprehension.

    We do have some production Linux boxes at work as well (have had them for several years) - and have yet to run into any "security" problem.

    Note too, that most of the powerful firewalls are running *nix products, eg., SunOS.

    Some on other forums have posted an interesting ditty that I'll post here:

    On Winning

    First they ignore you
    Then they laugh at you
    Then they fight you
    Then you win.


  • by Effugas ( 2378 ) on Monday March 20, 2000 @05:08AM (#1191115) Homepage
    Ohhhh, I've been waiting for some geniuses to make this mistake publicly.

    Anyone install CuteFTP lately? Or any of a couple hundred other applications that Aureate Inc. paid companies to install their advertising software within?

    Now, many people have debunked the rather virulent myth that Aureate was paying off these hundreds of shareware developers so that they could spy on people's computers.

    However, it'd be rather hard to debunk one simple fact: Hundreds of software developers put their good name on code that not only wasn't open to the world to search for security concerns...

    It wasn't even open to them.

    You just can't pay a Linux developer to include code in their software that nobody else can see, let alone that they can't. But hundreds of software developers merrily included Aureate's package, sight unseen, and hoped it didn't do anything bad.

    Perhaps Aureate indeed does expose the final end customers to certain forms of privacy violation (most directly, users don't generally expect that anyone in the outside world knows what software they're running). But that's not nearly as significant as some of the charges against Aureate--that they were searching through registries, rifling through hard drives looking for data.

    But the developers who put their name on the package didn't know for sure that the code didn't do that. The users who trusted those developers--the users whose systems were at the greatest risk--they too had no ability to audit that code for safety analysis.

    And, for all of Aureate's desperate attempts to defend itself, not even they can ever be absolutely sure that their code is intrinsically free of all buffer overflows, of all forged replies, of a preconstructed false advertisement that, when retrieved, overflows the GIF decompression code to allow the host system to be compromised...in the Open Source world, we find these problems quickly and send the authors fixes.

    Aureate has no such help, and no such luck.

    But they'll just keep payin' 'em off...proving every day just why Open Source is more trustworthy.

    Yours Truly,

    Dan Kaminsky
    DoxPara Research
    http://www.doxpara.com
  • by FreeUser ( 11483 ) on Monday March 20, 2000 @05:23AM (#1191116)
    Open source doesn't make software more secure, and neither does closed source. It was established a long time ago that a skilled administrator was the most important security device.

    Your first sentence is not at all correct. Your second sentence is very true, and explicitly explains why your first comment is not, if you think about it.

    Open Source tools and operating systems give the "most important security device" the ability to do something to correct an emerging security issue, which in a closed source environment may not exist.

    An example: the SYN DoS weakness discovered a while back, in both Windows and various UNIXen. Open source administrators and Linux/FreeBSD kernel hackers had a fix out within hours, while Microsoft and others languished for days or even weeks before releasing a fix. It made absolutely no difference how good or skilled a system administrator responsible for Windows machines was in that scenario - they simply could do nothing about the problem (short of sitting in the office watching the system and doing a manual reboot) until Microsoft got around to releasing their patch. The same was true of other closed source platforms which have an otherwise much better history of quality control than MS. The open source admins, on the other hand, were able to fix the problem (and share the solution with the world) almost immediately.

    Clearly, the Open Source paradigm allows for a much more timely and robust response to security threats:

    • The product is subjected to peer review in every phase of its development, allowing many security fixes to be made proactively, before weaknesses are ever exploited. Closed source rarely receives any comparable outside review.
    • Open source provides access to the code, allowing thousands of minds to address security issues which emerge as a result of an exploit (such as the SYN DoS attack) and share their solutions with the rest of the world in an astonishingly short time.
    • Security through obscurity has been demonstrated time and time again to be ineffective, and always results in a reactive, rather than proactive, solution, catalyzed by an exploit of said weakness. With open source there is no temptation whatsoever to attempt to engage in "security through obscurity" as the source availability guarantees there will be no obscurity.
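    As a concrete footnote on the SYN-flood example above: the mitigation the open-source side deployed, SYN cookies, ended up as a plain kernel switch on Linux. A config sketch (the sysctl key and paths are the standard Linux ones; run as root):

```shell
# Enable SYN cookies so a flood of half-open connections
# can't exhaust the listen backlog (requires root).
sysctl -w net.ipv4.tcp_syncookies=1

# Make the setting persistent across reboots.
echo "net.ipv4.tcp_syncookies = 1" >> /etc/sysctl.conf
```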
  • by anatoli ( 74215 ) on Monday March 20, 2000 @04:52AM (#1191117) Homepage
    Solaris is disclosed source. Which, for the purpose of this discussion, is the same as open source (i.e. anyone, including hackers, can see the source).

    More info:

    Bernie Dodwell, business development manager for System Security specialist Integralis Group, said the operating system is insecure because it is open source.
    Integralis.com was bought by Articon.com. Incidentally, www.articon.com runs Apache on Linux.
    --
  • by anatoli ( 74215 ) on Monday March 20, 2000 @04:36AM (#1191118) Homepage
    Clive Longbottom, strategy analyst at Strategy Partners, agreed with his analysis, saying the problems are preventing its adoption in secure areas. He said: "Security needs to be built into the architecture of the operating system. This cannot happen if your source code is publicly available." He added that the issue could lead to proprietary versions of Linux being developed.
    Why is their website running Apache on Solaris, then?
    --
  • Yes, security through obscurity DOES work!

    Chanting that it doesn't work doesn't make it so and doesn't help.

    There is a whole field of cryptography called "steganography" that studies how to hide messages. Do you put valuables out of sight when you leave your car parked in public? Do you have a hidden key for your house/car - and if you really believe that obscurity doesn't work, why is it hidden? How many times have you heard wisecrackers on /. say that "Microsoft will never release their source 'cause think of how many security holes would be immediately found"? Look at the rapid increase in problems with Quake bots after the source was released.

    Obscurity is just one more layer of protection. Hopefully it isn't the only layer nor the strongest layer, but it does help. Obscurity is often a very easy layer to add so the cost/benefit ratio is very good.

    Yes, obscurity mostly keeps out only the least skilled, or people who want to spend only a little bit of time breaking something, but that is a huge group.

    Ranting that "security through obscurity doesn't work" is a nice bumper-sticker type slogan. Like most other short rants, it is bogus; life is more complicated than that.

    Instead, we should be calmly explaining that "open source is more secure despite not being obscure." We can talk about how open source can be a plus as well as a minus. We can show empirical evidence, we can talk about how many "white hat" people can fix bugs, and we can talk about how "too often closed source developers use obscurity as their only defense".

  • by trims ( 10010 ) on Monday March 20, 2000 @04:46AM (#1191120) Homepage

    First of all, Silicon.com isn't any place to be getting good opinions about technical stuff. It's an overview-style PHB rag. Too bad they don't recognise this.

    The more important thing we all seem to miss is that the security of an OS is dependent on two critical features:

    How easy is it to find exploits?

    and

    How fast are those exploits fixed?

    Now, as a simple matter of logic, it is easier to find an exploit on an Open-Source system than on a closed-source system, everything else being equal. It's that simple. You've got the code right in front of you, so it's easy to verify that there is indeed a flaw.

    However, the other issue is where the Open Source community shines. Typical patches for exploits are generally issued within hours, or at most a couple of days for OS stuff, whereas we all know how long it takes our favorite vendors to fix their stuff (if they ever get around to it).

    You simply can't consider one of the two requirements in the absence of the other. It's impossible. Doing so marks you as a complete nincompoop. Or dolt, whichever you prefer. And, of course, we're talking about an ideal world, where everyone has an equally elegant design, all coders make the same quality code, etc. In reality, these other issues generally far outweigh the first consideration, and have a considerable impact on the second (bad code is harder to fix, thus longer patch times). And we've all seen the quality of some of the closed-source code, haven't we?

    The other quote there that I love is: Security needs to be built into the architecture of the operating system. This cannot happen if your source code is publicly available. The first sentence has nothing to do with the second one - they are completely unrelated. Indeed, security must be built into the OS; you simply can't bolt it on later. This is a design issue, and has nothing to do with whether the OS is Open Source or closed. The guy's a blathering clueless moron.

    Right now, the most secure OSes around are OpenBSD, Secure IRIX, and Secure SunOS. All have a very careful security design included in them, and are very attentive to security concerns. One is Open Source; the other two are closed. Giving away the code makes no difference to the end security of your system. Either you did a good security design, or you didn't.

    The article is simply wrong.

    -Erik

  • by LocalYokel ( 85558 ) on Monday March 20, 2000 @04:50AM (#1191121) Homepage Journal
    Open source doesn't make software more secure, and neither does closed source. It was established a long time ago that a skilled administrator was the most important security device.

    You can make NT, Linux, BSD, the MacOS, or even MS-DOS secure with a little bit of know-how, even if the latter two are inherently insecure operating systems.

    (A car with ABS is no good if the driver still pumps the brakes, if you know what I mean.)

    --

  • by locutus074 ( 137331 ) on Monday March 20, 2000 @04:45AM (#1191122)
    "Security needs to be built into the architecture of the operating system. This cannot happen if your source code is publicly available."
    It's nice to see independent peer review confirming what we here at your Friendly Local Microsoft Business Office (C)(R)(tm)(sm)(patent pending) have been saying for years. You need to ensure that the source code to your Operating System (tm) stays out of the hands of the so-called "hackers" whose only aim is to break into your system and steal your important data.

    What is the best way to do this? You need to ensure that the source code to your Operating System (tm) is in the hands of a neutral third party: Microsoft (C)(R)(tm)(sm)(patent pending). We've been doing this for years. We ensure that nobody outside of our Company (tm) knows about any bugs that may or may not be in our Closed Source Code (tm). And because every Operating System (tm), as long as it is designed by humans, will have security holes, we ensure that each Service Pack (tm) will not only plug the old security holes, but also will introduce new ones that no one yet knows about. This, friends, truly is Quality (tm); there will always be security flaws, but don't you sleep better at night knowing that for the time being, the only party who knows about them is a name you can trust? And that so-called Operating System (tm) (we are investigating a trademark infringement lawsuit over the unauthorized use of a registered Microsoft (C)(R)(tm)(sm)(patent pending) trademark) designed by one Mr. Linux Torvalds has new security holes discovered at least once a week! You don't hear about Windows NT (C)(R)(tm)(sm)(patent pending) security holes for months sometimes!

    In closing, permit me to thank you for your continued patronage of Microsoft (C)(R)(tm)(sm)(patent pending), or your imminent switch to a Microsoft (C)(R)(tm)(sm)(patent pending)-based Operating System (tm).

    Sincerely,
    Mr. L. Mer Fudd, Microsoft (C)(R)(tm)(sm)(patent pending) Assistant Vice-Presidential Director of Marketing-Type Activities

    --
