The Twenty Most Critical Internet Security Holes (250 comments)

Ant writes: "A little over a year ago, the SANS Institute and the National Infrastructure Protection Center (NIPC) released a document summarizing the Ten Most Critical Internet Security Vulnerabilities. Thousands of organizations used that list to prioritize their efforts so they could close the most dangerous holes first. This new list, released on October 1, 2001, updates and expands the Top Ten list. With this new release, we have increased the list to the Top Twenty vulnerabilities, and we have segmented it into three categories: General Vulnerabilities, Windows Vulnerabilities, and Unix Vulnerabilities."
  • #21 (Score:5, Funny)

    by smnolde ( 209197 ) on Wednesday October 03, 2001 @11:59AM (#2383654) Homepage
    Being Slashdotted
  • by Doc Hopper ( 59070 ) <slashdot@barnson.org> on Wednesday October 03, 2001 @12:03PM (#2383674) Homepage Journal
    Here's Google's cache of the page. It's kind of tough to slashdot google : )
    http://www.google.com/search?q=cache:dbJlh35mihk:www.sans.org/top20.htm+&hl=en [google.com]
    Remember, check those links, you don't want to be goatse'd....
    • Aaargh! [slashdot.org] The pain of being a karma whore with lag! ;)
    • I bet in certain cases Google's cache could be a big security hole too. One that springs to mind is how, after 9/11, nuclear power plants removed a bunch of info from their sites. I just checked, and those pages (now 404s) are still in Google's cache.
    • To avoid being "goatse'd", you can enable the option in the Slashcode that shows which domain a link is pointing to (click your username, and try each menu option at the top). So, for example, a link to a personal homepage at Stanford would be "My Honors' Thesis web site [stanford.edu]" rather than just a blind link that could end up making you look at a really gross picture of some guy's, ahem, well, you get the point.
  • Argh! Slashdotted, there's a security hole for you! :)

    Google archive here [google.com].
  • by Zwack ( 27039 ) on Wednesday October 03, 2001 @12:03PM (#2383678) Homepage Journal
    That the top ten list of last year makes an appearance in the top 20 of this year?

    Haven't we learned anything?

    O.K. So some of them (no/weak passwords) are user-related, but so many of them are admin-related (BIND vulnerabilities, IIS RDS vulnerabilities).

    Don't any admins care about these?

    Of course, inside a company network some of these problems can be ignored, if that is the decision. R-commands are useful, but I wouldn't want people using them across the internet to my machines... But at the very least, firewall them... Please.

    Z.
    • I can't even tell you how many ADMINS I have met in corporate who say things like, "But all the upper-lower case, numbers, &$% stuff is hard to remember."

      It is so sad.
      • I can't even tell you how many ADMINS I have met in corporate who say things like, "But all the upper-lower case, numbers, &$% stuff is hard to remember."

        The use of a single upper-case or symbol character in a password does not increase the randomness of the password by very much in practice. Most users simply add a number at the beginning or end of a word. The cost of a dictionary attack goes up a bit, but it still ain't very secure.

        The only way to make passwords secure is to severely limit the scope of a brute-force attack: partition the password verification database into two parts, so that both have to be compromised before the attacker can even begin brute-forcing.
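        A quick back-of-the-envelope comparison makes the parent's point concrete. This is a rough sketch in Python; the wordlist size is an assumption chosen purely for illustration:

```python
# Rough comparison of password search spaces (illustrative numbers only)

DICTIONARY_WORDS = 100_000  # assumed size of a cracking wordlist

# "word plus a digit": the attacker tries every word with 0-9 appended
word_plus_digit = DICTIONARY_WORDS * 10

# a truly random 8-character password over the 94 printable ASCII
# characters (26 upper + 26 lower + 10 digits + 32 symbols)
alphabet = 26 + 26 + 10 + 32
random_8 = alphabet ** 8

print(f"word+digit guesses:    {word_plus_digit:.2e}")
print(f"random 8-char guesses: {random_8:.2e}")
```

        Even with a generous wordlist, the word-plus-digit search space is nearly ten orders of magnitude smaller than the random-password space, which is exactly the point: the bolted-on digit helps far less than it appears to.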

    • Some people never learn. Welcome to the ugly world of the 90/10 rule (90% of any given group are idiots). Cynical, yes. But all too true.

      Myself, I've moved from sendmail to qmail and from BIND to djbdns. Yes, the license isn't "Open Source" the way distributions would like it to be, but it certainly is for me, the User.

      IIS, is that the Insecure Internet Server I keep hearing about?

      :)
    • O.K. So some of them (no/weak passwords) are user related, but so many of them are admin related (bind vulnerabilities, IIS RDS vulnerabilities)

      well - in theory, admin problems should be the only holes. The software should be able to be configured in a manner that is 'completely secure' (as far as anything can be). Programs shouldn't be insecure because of programming faults - only insecure because they're not configured properly.

      speaking of security problems - has anyone thought of/made a version of Code Red etc. that goes around, downloads the security patches, and then resends itself?

      -shpoffo
    • O.K. So some of them (no/weak passwords) are user related, but so many of them are admin related (bind vulnerabilities, IIS RDS vulnerabilities)
      Anything related to IIS is user-related, too. Quite a few users install a web server without realizing what they are doing. After all, their hard disks are big, and they have paid for this stuff in some way. And there will always be people who complain that security is too complex for them to deal with and refuse to install patches on their favorite web server (usually IIS), unless they are cut off the net until their machines are fixed.
  • by new-black-hand ( 197043 ) <nik AT techcrunch DOT com> on Wednesday October 03, 2001 @12:03PM (#2383680) Homepage
    I'd add:

    21. Hiring admins with no clue about security

  • Not to put down the usefulness of their document, but none of the vulnerabilities are particularly new. It is interesting that many of the windows vulnerabilities are tied to IIS, though.


    As far as the *nix vulnerabilities, I think that a large majority of Slashdot readers could name off NFS, Bind, Sendmail, rlogin/rsh as critical (and many have already disabled / blocked those services).


    Just my $0.02


    Ed

    • Hopefully by repetition it will open some eyes and result in a more secure environment.

    • Of course, anybody who really is into security knows every problem mentioned in the document. However, some people do not stay informed on a daily basis. This kind of analysis is useful for neophytes and for people outside the security domain. Also, as the document mentions, the idea was to help sysadmins choose which problems to fix first.

      Something interesting comes out of this analysis:

      -General problems persist year after year. Negligence by users, programmers, and administrators is the cause of all the security problems.

      -Unix and Windows problems have basically the same roots: programming errors (buffer overflow, bad input validation) and inadequate trust.

      Not mentioned in this article:
      -Windows users are less computer-literate than Unix users. This is the major reason why so many problems occur on Windows (viruses, worms, executable mail attachments, etc...).

      System security is a very pragmatic issue. Some relatively well-known practices will greatly increase the security of a network/system. There is always a hole somewhere, but removing the well-known ones will make a huge difference.
    • They don't have to be new. The lesson of code red and nimda is that many, many servers aren't properly maintained. Sometimes a refresher course on the basics is just what the doctor ordered.
    • The trouble is that most Linux distros come with NFS, BIND, Sendmail and rlogin/rsh installed by default. They're getting a bit more savvy about this, but it's still a major problem. If you're a competent administrator, you can deal with it. Most people aren't. I certainly am not, which is why I prefer systems that don't turn on every damned vulnerability known to man.

      Too many distros want to make you do all of your sysadmining from DistroConf2. You don't tune your automobile engine from your dashboard, and you don't secure your system from a GUI.
  • I certainly hope that "The Slashdot Effect" is high on the list. It definitely qualifies as a DOS attack for most webservers.

    Including theirs.
    • Does anyone have any idea what the Slashdot effect looks like? I have no idea how many of us are clicking on those links, but it must hurt.
  • by Kozz ( 7764 ) on Wednesday October 03, 2001 @12:05PM (#2383698)
    I'm surprised to see that this hole [bbspot.com] didn't make the list.
  • Summary (Score:3, Funny)

    by zpengo ( 99887 ) on Wednesday October 03, 2001 @12:08PM (#2383709) Homepage
    Top Security Vulnerabilities:
    • Clicking "Next" instead of reading.
    • Using passwords from Hackers, et al., for your system accounts.
    • Bragging about how many servers you've got running on your home computer.
    • Setting file permissions to "everyone can execute" because you can't get your Perl scripts to work.
    • Using Microsoft Anything.
  • by bark76 ( 410275 ) on Wednesday October 03, 2001 @12:08PM (#2383711)
    Looks like the feds are considering setting government standards; the ABC News article is here [go.com]. I'm not sure how helpful government standards would be, but I think I could welcome them. I'm sure that if my toaster lit on fire as often as my Windows box crashes, the government would do something about it, so why not hold software companies more accountable?
    • What impact would such standards have on the open-source community?


      Presumably government standards would come with either a carrot or a stick. A typical carrot might be, the feds will only buy software which has been certified to an appropriate level. If the certification process costs $100K, who's going to pay the bill to get a particular software package tested? If IBM gets kernel version 2.4.3 certified, what happens with 2.4.4? A typical stick is the threat of serious liability for damage caused by security holes. Who will use a software package that doesn't have a large corporation behind it? Even a $1M liability judgement against me and I'm broke for the rest of my life and may still never pay it all off.

  • by MadCow42 ( 243108 ) on Wednesday October 03, 2001 @12:09PM (#2383712) Homepage
    The site is already fairly well /.'ed... Here's the top 20 holes they mention, without the detail for each point (sorry).

    "G" stands for "general holes"
    "W" stands for "Windows holes"
    "U" stands for "Unix holes"

    G1 - Default installs of operating systems and applications
    G2 - Accounts with No Passwords or Weak Passwords
    G3 - Non-existent or Incomplete Backups
    G4 - Large number of open ports
    G5 - Not filtering packets for correct incoming and outgoing addresses
    G6 - Non-existent or incomplete logging
    G7 - Vulnerable CGI Programs
    W1 - Unicode Vulnerability (Web Server Folder Traversal)
    W2 - ISAPI Extension Buffer Overflows
    W3 - IIS RDS exploit (Microsoft Remote Data Services)
    W4 - NETBIOS - unprotected Windows networking shares
    W5 - Information leakage via null session connections
    W6 - Weak hashing in SAM (LM hash)
    U1 - Buffer Overflows in RPC Services
    U2 - Sendmail Vulnerabilities
    U3 - Bind Weaknesses
    U4 - R Commands (rlogin, rsh, rcp)
    U5 - LPD (remote print protocol daemon)
    U6 - sadmind and mountd
    U7 - Default SNMP Strings

    MadCow

    • Of course, you now know that MS is going to spin this in the PR, with comments like "Windows has fewer security holes than UNIX systems according to a recent survey of security experts..."

      • by MadCow42 ( 243108 ) on Wednesday October 03, 2001 @12:26PM (#2383807) Homepage
        Well, the interesting thing is that the "Windows" holes are more "bugs" than general architecture problems. Bugs can be easily fixed (if users patch their machines), and in fact most of the Windows ones already are fixed.

        The UNIX holes listed are more fundamental in nature, requiring a significant re-development effort, and in some cases, redefining of protocols and fundamental tools.

        Although the Windows "bugs" have been exploited more (and are easier to exploit in general), it'll take longer to address the issues in the UNIX list than those in the Windows list.

        Sorry... I'm not a M$ advocate, but it does point out some significant issues that we need to overcome in the UNIX world, and quickly.

        MadCow.
        • U1 - Buffer Overflows in RPC Services
          U2 - Sendmail Vulnerabilities
          U3 - Bind Weaknesses
          U4 - R Commands (rlogin, rsh, rcp)
          U5 - LPD (remote print protocol daemon)
          U6 - sadmind and mountd

          U1 - Implementation
          U2 - Implementation
          U3 - Implementation
          U4 - Known bad for a while, replaced with S Commands
          U5 - Implementation
          U6 - Implementation

          How exactly is Unix architecturally bad compared to Windows? Seems they're both full of bugs.
        • At least sendmail and BIND have patches.

          You forgot the gaping hole otherwise known as the Office document format, and the massive "treating symptoms" anti-virus market, which, last I checked, was primarily aimed at Microsoft's products.
          • At least sendmail and BIND have patches.

            You forgot the gaping hole otherwise known as the Office document format...

            What the hell are you smoking??

            Sendmail and BIND cannot be patched in such a way as to eventually become completely secure. The architecture underlying sendmail is not conducive to security. These packages should be taken out of use. There are alternatives to BIND and Sendmail: use djbdns and qmail. I haven't used djbdns, but given the quality and ease of configuration of qmail, I wouldn't hesitate to recommend anything from D. J. Bernstein. See http://cr.yp.to/djbdns.html and http://cr.yp.to/qmail.html.
            It's a pity about the licensing on DJB's stuff. Otherwise I would imagine that they would be included in more distributions...

        • The UNIX holes listed are more fundamental in nature, requiring a significant re-development effort, and in some cases, redefining of protocols and fundamental tools.

          How the hell did this crap get moderated up? Most of the popular Unix exploits are buffer overflows, and most of the popular Windows exploits are... buffer overflows!

          I wouldn't say that the r* tools are fundamental tools - every UNIX admin that hasn't been living under a rock has that stuff disabled on a public machine.

    • Accountability (Score:2, Interesting)

      by jpostel ( 114922 )
      Not trolling here but, you have to notice that there are 7 general, 6 windows, and 7 unix vulnerabilities.

      IIS is bad, but Unix admins that don't patch BIND and SendMail are worse. The IIS versions change every year or so and the patches come fast and furious, but SendMail and BIND have had stable versions and patches for a while.

      Almost everyone reading this will admit that it takes a bit more expertise to get SendMail and BIND up and running than IIS (which is installed by default in Win2kSrv). Therefore the admins with more expertise should be held MORE accountable since they have greater responsibility by running BIND and SendMail.
    • Maybe it's just me, but it seems that all of those unix holes are silly. There is absolutely NO reason for RPC, rsh/rcp, LPD, sadmin/mountd or SNMP to be open to the outside world. Just no reason for it.

      The very first thing you need for a secure network is a firewall. And not an opt-out firewall. An opt-in firewall. As follows:

      Rule #1: block in all
      Rule #2: block out all

      There, now that the firewall is secure you can add rules to it to allow the specific things you need to flow into and out of the building.
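      A minimal sketch of such an opt-in ruleset, written in OpenBSD pf syntax (the interface name and the allowed ports are assumptions, for illustration only):

```
# /etc/pf.conf - default-deny, then opt in (illustrative sketch)
ext_if = "fxp0"             # external interface; name is an assumption

block in  all               # rule #1
block out all               # rule #2

# opt in only the traffic the site actually needs
pass out on $ext_if proto { tcp, udp } from any to any port { 53, 80 } keep state
pass in  on $ext_if proto tcp from any to any port 22 keep state  # admin ssh
```

      The same default-deny idea translates to ipchains/iptables or any other packet filter; the point is that the policy starts closed and every open port is a deliberate decision.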

      Justin Dubs
      • by ink ( 4325 )
        Maybe it's just me, but it seems that all of those unix holes are silly. There is absolutely NO reason for RPC, rsh/rcp, LPD, sadmin/mountd or SNMP to be open to the outside world. Just no reason for it.

        Congratulations! You've just conditioned the next wave of software developers to use port 80 for all their traffic because of your silly firewall rules. Don't believe me? Take a look at Microsoft's dotNet architecture sometime. Take a look at the IM protocols. Take a look at the new P2P protocols. What an excellent job you've done....

        Attack the source of the problem: individual computers. People like you only cause more headaches for the rest of us in the long term.

        • Actually, most of our .NET web services will run on port 443 :)
      • Maybe it's just me, but it seems that all of those unix holes are silly. There is absolutely NO reason for RPC, rsh/rcp, LPD, sadmin/mountd or SNMP to be open to the outside world.

        OK, by your logic Microsoft SMB and RPC holes are also silly. That shortens their list considerably too. (W4/W5/W6).

        However, in the real world, unfirewalled RPC servers have been a huge problem for both Unix and Windows. Basically, the idea of a "trusted LAN" should be obsolete in this day-and-age, and somebody needs to fix this crap.

        Besides, it's been pointed out that the hackers outside of your firewall only want to deface your webpage. The industrial espionage agents and others that can seriously damage your organization's business are most likely plugged into your LAN.
        • Then keep your important data on servers. Servers don't go on the LAN. If they need outside access they go in the DMZ. If not, they go in a separate LAN. A firewall or a smart bridge sits between that LAN and the regular LAN. Now we are back to having a firewall protecting everything.

          Again, there is no reason to have SMB, RPC, SNMP, LPD or anything of that sort running on these special servers with their magical important information. They just have data and a port open for whatever software is used to interface with that data, whether a SQL port or what-have-you.

          I'm not saying these bugs aren't significant. They need to be fixed. I'm simply trying to point out that a good firewall/bridge system can go a long way to preventing some problems. Not all of them, but some.

          Justin Dubs
          • "Servers don't go on the LAN."

            I'm curious if you've ever worked in a place that implemented that idea, or if it just wafted out of your crackpipe.

            Hint: The "magical important information" is created by users (heard of them?) who use normal applications. Generally the LAN was installed in the first place to allow them to store this information on centrally managed servers. If your internal firewall has to let ports 137-139 through to allow client access to NT fileservers, why is it there in the first place? (And I've even worked in places that use Lotus Notes, with its better security, real authentication, and special port. Guess what? People still stick critical data in Excel files.)
    • by devphil ( 51341 ) on Wednesday October 03, 2001 @01:01PM (#2383951) Homepage


      ...is that, for the Unix vulnerabilities, most of them have long since been replaced by better, more secure alternatives. Where I work, nobody has used the word "telnet" or "rexec" for years. Nobody here runs sendmail, or sadmind, or SNMP stuff. It's basically a list of "don't ever use this ancient crap" tools.

      But for the Windows vulnerabilities, they're all related to current, recent, flagship, "this is what you should be using" products. No alternatives within the Windows world.

    • I wouldn't file the SNMP strings under U7 but under G8. True, SNMP is used more on Unix (mostly because of the availability of client support on those systems, and because it is NEEDED to support management of a heterogeneous network), but it is also present in Windows, and the vulnerability stated is one of configuration, not a particular weakness of the protocol or implementation.

      I think it's misleading to place it as a *nix problem, since probably most devices subject to attack will not even be running any widely known OS (more likely they will be printers, routers, and the like).
  • Google Link:
    http://www.google.com/search?q=cache:dbJlh35mihk:www.sans.org/top20.htm+
    Click here [google.com]
  • by D3 ( 31029 ) <daviddhenningNO@SPAMgmail.com> on Wednesday October 03, 2001 @12:17PM (#2383755) Journal
    I have worked for SANS in the past, but I have to disagree with the way they compiled this list. The fact that there is a larger number of "vulnerabilities" for *NIX than for Windows is misleading. I just bet the M$ people latch onto this: "See, Windows is less vulnerable!" Even though most of the *NIX stuff is so old you rarely find it occurring in the real world.

    What is more useful IMO is a ranking of these "vulnerabilities". Right now an unpatched IIS box can be hit even though you have it firewalled so only port 80 is open. With the *NIX stuff, the only way to hit a system via port 80 is bad CGI or a new exploit in the webserver software. And when was the last time an Apache exploit was released?

    Look at the CVE numbers. That tells a tale of what is going on _now_. The number has the year and there are many of the *NIX exploits that are 2 years old or more. Many of the Win exploits are within the last year.
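    That year-in-the-identifier observation is easy to check mechanically. Here is a rough Python sketch; the CVE identifiers are examples chosen only to show the CVE-YYYY-NNNN format, not a claim about what the SANS list contains:

```python
import re
from collections import Counter

# Example CVE identifiers (illustrative only)
entries = [
    "CVE-1999-0002",
    "CVE-1999-0977",
    "CVE-2000-0666",
    "CVE-2001-0500",
    "CVE-2001-0333",
]

# The year is embedded right in the number, so a per-year tally is one pass
years = Counter(int(re.match(r"CVE-(\d{4})-\d+", e).group(1)) for e in entries)

for year, count in sorted(years.items()):
    print(year, count)
```

    Run against a real advisory list, this kind of tally is what the parent comment is doing by eye: mostly 1999-era numbers on the Unix side, mostly current-year numbers on the Windows side.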

    • The fact that there are a larger number of "vulnerabilities" for *NIX than Windows is misleading. I just bet the M$ people latch onto this "See, Windows is less vulnerable!"

      I doubt that MS itself is going to be stupid enough to try and say this shows their product is more secure, but I could be wrong. There will always be people who try to skew any information that is presented. This is simply a list of the top twenty security risks compiled by the listed experts. There isn't any quantitative method to rank these issues, so they didn't even try. If your system has any of these vulnerabilities, you should fix them. This isn't designed as a marketing tool, or an advocacy tool. It's a tool for administrators to check their systems for common, serious security issues.

      I agree that Windows, or at least IIS, seems to have more security issues that are causing widespread problems, but the purpose of this report isn't to point that out. These experts could have spent months arguing about how to weigh the different security issues and how to rank them. Then, when the report was released, it would be called partial and discriminatory by advocates from both sides. The report would have less credibility, and its purpose of pointing out security flaws would not be served any better.

      Even though most of the *NIX stuff is so old you rarely find it occuring in the real world.

      People set up insecure UNIX systems all the time. Even though these are old issues, they still exist.

      Look at the CVE numbers. That tells a tale of what is going on _now_. The number has the year and there are many of the *NIX exploits that are 2 years old or more. Many of the Win exploits are within the last year.

      UNIX and Windows are different. UNIX is an older, more mature OS. More of the serious bugs listed are older, because UNIX has been around longer. There are going to be more new exploits in Windows, because there's more active development on new features in Windows. Many users don't need those new features and would likely be better off with a more mature UNIX solution. Other users feel they need those features, and UNIX has not evolved to provide them with a solution yet. The two OSs take different approaches and place different priorities on security.

      This article doesn't take sides in that issue. The experts don't try to advocate one OS over another. They just point out the issues that they consider the most serious and organize them so that it's easy to find the ones that apply to the reader. They did a very good job of trying to stay out of the UNIX/Windows debate. There are plenty of reports on who has the most vulnerabilities; if that's the kind of report you're looking for, then go read one of them.
  • How Linux Fares (Score:5, Insightful)

    by sting3r ( 519844 ) on Wednesday October 03, 2001 @12:19PM (#2383768) Homepage
    Many of these vulnerabilities have been addressed in the past 1-2 years by the major Linux vendors. Redhat and Debian, in particular, have been quite good at reducing the avenues of attack. For instance, the changes I've observed include:

    • Redhat used to open up the xfs port to internet traffic, but now uses a local UNIX socket. No access -> no exploit.
    • After many problems with lpd, most Linux distros now restrict the internet hosts that can connect to port 515 to localhost only.
    • I don't know of a single Linux distro that ships with default passwords for any user. (Even Solaris and the other oldskool unices stopped this practice within the past few years.)
    • With the rp_filter option, Linux (by default) drops packets that are spoofed to look like they come from a different network. For instance, traffic from the internet with your internal network's addresses in the header is automatically discarded. (FreeBSD should really do the same but they're being stubborn about it.)
    • GNU Apache and most of the distros out there remove all of the sample cgis (like nph) that used to be a security threat. Indeed, my Debian box has only the Apache manual (static html) installed; and that's damn hard to exploit. :)
    • Samba has never been vulnerable to the NETBIOS unprotected share vulnerabilities. It takes a considerable amount of effort to enable sharing anything via Samba to the general public - if you don't intend for that to happen, it's not going to happen.
    • Samba has no Null Session support. Samba does not send out lists of users (the equivalent of /etc/passwd under shadowing) like NT does. It is very difficult to break into a Linux box through SMB networking.
    • In general, setuid root programs have become setgid (something else) programs through the years. xterm and xlock immediately come to mind; on other platforms (even OpenBSD) they are still setuid root. This further hardens the GNU/Linux system. ps and netstat do not need privilege because of the privilege-bracketing nature of /proc.
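    The rp_filter behavior in the list above comes down to a pair of kernel switches. A sketch of how it is enabled on a 2.4-era Linux kernel (a config fragment, not a complete hardening script):

```
# Reverse-path filtering: drop packets whose source address could not
# have legitimately arrived on the interface they came in on.
echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter
echo 1 > /proc/sys/net/ipv4/conf/default/rp_filter

# Equivalent persistent settings in /etc/sysctl.conf:
#   net.ipv4.conf.all.rp_filter = 1
#   net.ipv4.conf.default.rp_filter = 1
```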

    Linux boxes are much more secure than any of the competitors. Solaris is getting better; UnixWare is pretty hopeless (see BUGTRAQ). NT is ... well, draw your own conclusions about NT. I feel much safer with a Linux server than with any other OS and the security just keeps getting better.

    -sting3r

    • Forgot a big one: Debian changed the default X config so that it listens on a local UNIX socket instead of 0.0.0.0:6000. Coupled with ssh X forwarding, this maintains all the old functionality but makes a huge difference in security.
    • Samba has no Null Session support. Samba does not send out lists of users (the equivalent of /etc/passwd under shadowing) like NT does. It is very difficult to break into a Linux box through SMB networking.

      This is true, but in addition to the superior security, I find that simply as a user I prefer the way Samba works. When I browse a Windows machine's list of shares, I see everything -- even shares that I'm not allowed to access. I can only find out which ones I can use by trying to access them and seeing which attempts succeed. With Samba, by contrast, I can only see the shares that I am allowed to access. One might say that the signal-to-noise ratio is better with Samba, since you aren't shown things that aren't relevant to you.
    • by MosesJones ( 55544 ) on Wednesday October 03, 2001 @12:42PM (#2383879) Homepage
      The most secure system is a Unix box run by a 40+ year old bloke who has seen the virtual deaths of more script kiddies than I've had hot dinners.

      Actually, mainframe admins run pretty tight ships as well. It's a sad reflection on the new generation of admins that most of these are things the old school had never even thought of doing wrong. The current raft of viruses is an example: the people hit had new-school systems, while the old-school companies survived untouched.

      Old blokes in a distant room of the organisation, possibly called "Gary" or "Dave" never seem to be doing much, but their network never fails.
      • Gary or Dave in the distant room are not the security admins. They are the security guards. They are just as grumpy as the sysadmins and never seem to do much either, but they don't use computers.
      • by Anonymous Coward
        The most secure system is a Unix box run by a 40+ year old bloke who has seen the virtual deaths of more script kiddies than I've had hot dinners.

        That's me. 40+, and always losing jobs to script kiddies turned sysadmins who underbid the job by several orders of magnitude. That means I get the jobs with clued bosses :-) It also means the other sites get r00ted immediately after the skriptadmin leaves.

        I lost a bid a few weeks ago to secure a big network in the midst of a complete rebuild. My bid was around 400 hours to do the work, plus 200 hours testing and fixing, using expensive cisco and nokia hardware. The guy who got the contract claimed he could do it in only 3 days onsite with a single linux box.

        He left after a week, after he managed to trash the network and left the whole thing open to the internet over the weekend. Code Red, Nimda, every box 'sploited, an anonymous FTP server full of porn, etc. They aren't paying him. They can't even find him to prosecute.

        They called me Monday morning, my price doubled from the original estimate, and they have no choice but to pay. This will make for a nice month-long vacation at the end - a sunny beach, or maybe a skiing holiday.

        Can't use my nick from this secure location. Awwww.
      • Yeah, Gary and Dave, the old blokes brought us:

        SMTP - plain text email
        POP3 - plain text email AND usually user/pass pairs
        telnet - more of the same
        r-tools - 'nuff said (and one of the top 10)
        old versions of sendmail - 'nuff said (and one of the top 10)
        bind - 'nuff said
        RPC - big fat holes (and one of the top 10)

        Now, I perfectly understand that much of the above is because the internet "used to be such a nice neighborhood." I'm just suggesting that we not pretend away the past.

        -Peter
    • by trcooper ( 18794 ) <coopNO@SPAMredout.org> on Wednesday October 03, 2001 @02:10PM (#2384377) Homepage

      Linux boxes are much more secure than any of the competitors. Solaris is getting better; UnixWare is pretty hopeless (see BUGTRAQ). NT is ... well, draw your own conclusions about NT. I feel much safer with a Linux server than with any other OS and the security just keeps getting better.


      Bullshit. You're lying to yourself. One OS is not automatically more secure than another. Notice the first problem they noted: Default installations of operating systems and applications. They meant all operating systems, they didn't say 'RedHat and Debian are pretty good, you'll probably be okay with them, or at least more okay than someone using Windows.' Not only is this the most important point of the article, all other vulnerabilities stem from it. They all exist because of complacency with the current state of security of a system.

      Security is not determined by OS. Period.

      A system's security depends on the administrator's vigilance in keeping up to date on patches. Sure, Windows has had a lot of exploits lately, but how many of those exploits were not patchable? Hmm. Conversely, Linux and other Unix systems have not been as widely, or at least as publicly, attacked lately. Is this because they have fewer holes? Redhat 7.1, about 6 months old, has 23 security alerts [redhat.com] listed. 7.0 and 6.2 both have over 60. So there are likely more out there in 7.1. Many of these are critical and involve remote root exploits. Feel safe? I hope not.

      (Li||U)nix can be attacked with the same efficiency we've seen against Windows systems in the past few months. Administrators aren't better simply because they admin Unix boxes; the article itself shows that 50% of the copies of BIND running in mid-1999 were vulnerable. It stands to reason that similar percentages hold for other security risks as well.

      I'm not bashing Unix, and I'm certainly not saying that Windows is a more secure OS. It's a moot point. What I'm saying is that people who blame the OS for their mistakes are wrong. They're using Windows as a scapegoat and ignoring the real problem behind all this.

      Unix will be hit by one of these sooner or later, and it will be just as publicized, because it will likely use the same distribution method as before: email.

      Go back, read the article again, paying close attention to the generic problems they mention. These are the basic things that any admin has to look at, every day. A machine is never secure. You can be sure of that.
    • Re:How Linux Fares (Score:3, Interesting)

      by pmz ( 462998 )
      Linux boxes are much more secure than...

      Than what? [openbsd.org]
      OpenBSD???

      Look at the default install of OpenBSD, and you'll find most of the "Top 20" are already addressed. Linux is generally very good, but I wouldn't put the default install of RedHat between my business and the world. It's just too risky.

  • by ghibli ( 38720 ) on Wednesday October 03, 2001 @12:19PM (#2383769)
    Until managers understand and treat computer security SERIOUSLY, the same basic weaknesses will remain.

    One thing that helps is for companies to hire computer security specialists, and make this their primary job. Instead, many businesses that I work with expect their already-overburdened sysadmin or network administrator to "protect" the network, something he/she has never been trained to do. The average NT Administrator does NOT know much about network security. The new Win2K Security certification is a step in the right direction, but it is only a baby step.

    -------------
    "Against stupidity the gods themselves contend in vain." - Schiller
  • I just read over the list and there is nothing new here. We know Sendmail needs regular patching. We know BIND needs regular patching. We know never to run the R commands or IIS. We know we need firewalls. I can write down a list of common sense things to do, too. There is nothing new here.
    • That's the sad part, that there really isn't anything new -- everyone KNOWS what needs to be done, but so many people just don't follow through. (/me hangs head in shame at not having patched his freebsd box against the telnet exploit in time. Luckily, it was just a personal mess-around project, so recovery was just a matter of re-install, and it didn't appear that anything truly malicious had been done).
  • I can't believe that the slashdot effect is number one. WOW! Congratz all around!
  • I like this sentence from the sans.org article: "Sendmail has a large number of vulnerabilities and must be regularly updated and patched." One might go further and suggest that switching to another mail transport is the best solution. On my small site, I use exim; other people like postfix or qmail.
  • The Value of This (Score:3, Insightful)

    by maggard ( 5579 ) <michael@michaelmaggard.com> on Wednesday October 03, 2001 @12:31PM (#2383838) Homepage Journal
    This document is a great one to give to the Powers-That-Be at one's employer, school, ISP, etc.

    In one credible place with annotations and links are the most common problems. Sure most of them aren't news to /.'ers but they're likely news to lots of other folks and exactly the thing to light a fire under the PHB's of the world. It's almost a checklist of "Are these implemented and if not *why* not?"-items for the semi-technical and as such is invaluable.

    My thanks to the SANS Institute and the NIPC for releasing such a well-written & useful document.

    • This document is a great one to give to the Powers-That-Be at one's employer, school, ISP, etc.

      Bad Idea.

      Last time I tried something like this, I got the following response:

      "Why would anyone ever want to hack into my computer? It's just all boring work stuff..... Anyway, how come you know so much about hacking? eh?"

      ARRRRRRRGGGGGGGGGGGGHHHHHHHHHHHH!

  • is Windows, for a system that's crashed usually can't be hacked....
  • by jgaynor ( 205453 ) <jon.gaynor@org> on Wednesday October 03, 2001 @12:37PM (#2383861) Homepage
    Maybe not on UNIX machines, where SNMP is generally turned off by default - but on Cisco devices, where it is enabled by default with the common SNMP community names . . .

    SNMP on Cisco devices is weak because of the default community string names (public, private, and secret). To make matters worse, the secret string will let you bring interfaces up and down at will, all without a trace of intrusion in the logs. While the big guys like AT&T and WorldCom may fix these with default config files, many universities and smaller carriers don't even know the problem exists.
  • by Nicolas MONNET ( 4727 ) <nicoaltiva@@@gmail...com> on Wednesday October 03, 2001 @12:56PM (#2383940) Journal
    ... in programs (setting aside administration issues such as passwords)

    1. string.h
    2. sprintf
    3. system
    4. char buff[255];
    5. snprintf(buf,len,user_input);

    Let's face it, C's string handling is the biggest cause of security problems on the Internet. Static strings are evil. Too bad there is no standard way to handle them in C.
    • I meant: too bad there is no standard way to handle dynamically allocated strings in C.
    • Huh? This is like banning hammers just because people have been known to hit their thumbs with them.

      If you don't know how to use strings, you will get burned every time. But if you do know strings, and are aware of the tar pits, then every one on your list is perfectly fine.

      The number one security problem in C is not strings, but the lack of unit and system testing. Do you unit test every one of your functions? Does someone other than you or your end user perform system testing? Do you even have a test plan?
      • > This is like banning hammers just because people have been known to hit their thumbs with them.

        This is like banning unguarded circular saws just because people have been known to slice off their thumbs with them. Guess what? Circular saws come with guards. If a tool is really dangerous, and can be made safer through simple solutions, then we use those solutions to make it safer.

        Strings are a source of problems for a lot of programs, including well-known programs with very experienced programmers working on them. Unit testing will never catch all bugs. Many languages - Ada/Java/C++/Perl - have string types that won't cause buffer overflows - ever. Using an unsafe tool when you have a safe tool at hand that will do the job about as easily is just stupid, whether or not you think you're good enough to keep yourself safe.
    • It's too bad misguided people somehow think that C is a good language to write security-critical network apps in. In fact, it's very nearly the worst language to write such apps in.

      The fact of being automatically buffer-overflow free alone should make people drool over the prospect of using a high-level, safe language. Not to mention better productivity, code reuse, and even sometimes performance.

      What mindset drives this crazy practice?
  • by ink ( 4325 ) on Wednesday October 03, 2001 @01:04PM (#2383962) Homepage
    It's very very dangerous to keep on complaining about having a "large" number of open ports. Many system administrators will take this to mean "firewall all these ports at the border".

    "Why is that dangerous?" I hear you ask. As we drive more and more traffic to a small number of ports (read: everything on port 80) because of draconian firewalls and proxy servers, and even drive all traffic to one protocol (read: HTTP), a large number of services will still be running, but they will now be undetectable without traffic analysis, which is mostly voodoo technology right now. The bugs and security holes are still there, but now they are hidden from us because we've conditioned everyone that non-80 is firewalled (see SOAP and Microsoft's .NET -- in order to avoid firewalling, they are basically going to do RPC over port 80 using HTTP!)

    I agree that unused services need to be shut down, but at the source of the problem and not at the firewall. We need to encourage new protocols to use new ports so that we can manage this stuff -- the more we funnel traffic onto port 80, the harder our job will be. Please, if you are in charge of a firewall, take time to think about what you are doing to everyone else when you institute strict policies that only make you safer in the very short term. Not only are you hurting yourself, but you're giving your users and network a false sense of security.

    Besides, the attacks du jour of late have all propagated over SMTP and HTTP, haven't they?

  • by nuetrino ( 525207 ) on Wednesday October 03, 2001 @01:33PM (#2384149)
    When one looks at the top six vulnerabilities, one sees the mark of shoddy implementation and almost nonexistent manufacturer and vendor responsibility: for instance, default OS installs that leave the customer at risk. An example is the Windows and Mac OS install process, which suggests making a shared folder. Most people do not need a shared folder, and with the explosion of broadband, most people should not have one; yet both these installers encourage the user to create one. To make matters worse, no password is suggested to improve security. (On an up note, I was happy to see that SuSE did suggest a password at installation.) Software vendors should not be encouraging us to make our computers less secure.

    Equally negligent are broadband vendors that give away connection hardware but can't be bothered to include a firewall or software that will check for open ports. These vendors won't make the simplest effort to ensure the product they are selling is secure, yet will not take responsibility when their service dies under DoS attacks -- attacks that are largely possible because of the massive number of wide-open computers on their broadband connections.

    This is not a rant; this is a statement of reality. Vendors cannot, and should not, expect the consumer to be skilled enough to provide adequate levels of security. This is why houses and cars come with locks. Sometimes consumers lock themselves out, but that is a minor inconvenience. As an extreme example, many shoes now have Velcro, and most cars, at least in the U.S., have automatic transmissions.

    No stream of security patches, warnings, and news items will solve the problem. The consumer is not skilled enough to keep up. Until the default configuration is secure, until vendors are forced to take monetary consequences for their defective products, and until the consumer is trained to suffer the imposed inconveniences, we will continue to see the same sort of problems.

  • The worst remote hole I've had to deal with in my sysadmin 'career' so far has clearly been the remote SSH exploits [debian.org] last winter. Exploits in BIND are of course very serious, since the very backbone of the Internet runs on it, but in my network _every_ machine had OpenSSH running without any TCP wrappers.

    At least I learned that not even services with 'secure' in their name are to be trusted completely :-)
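
    For what it's worth, wrapping sshd takes only two files if your sshd is built with libwrap; a minimal sketch, where the 10.0.0.0/8 range is just an illustrative internal network:

```
# /etc/hosts.deny -- default-deny for all wrapped services
sshd: ALL

# /etc/hosts.allow -- then allow only the internal network
sshd: 10.0.0.0/255.0.0.0
```

    It wouldn't have stopped an attacker already inside the network, but it shrinks the exposure to remote exploits considerably.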

  • by slashkitty ( 21637 ) on Wednesday October 03, 2001 @01:53PM (#2384216) Homepage
    This one affects almost every site, including Chase, Citibank, AOL, Slashdot, the NY Times, and many more. It's cross-platform, and there is no easy patch. I wouldn't be surprised if there were already malicious, undetected scripts that could pretty much grab your logins to all your favorite sites.

    A year and a half old advisory, and sites still refuse to fix it. http://www.cert.org/advisories/CA-2000-02.html [cert.org]

    Some of you will remember the problems with Hotmail relating to cross site scripting. Newsflash, it affects your site too!

    • This one affects almost every site, including Chase, Citibank, AOL, Slashdot, the NY Times, and many more. It's cross-platform, and there is no easy patch. I wouldn't be surprised if there were already malicious, undetected scripts that could pretty much grab your logins to all your favorite sites.

      That one falls under the "bad CGI" umbrella. All free-form user data that a web site displays back has to either have all its text escaped (which isn't hard) or be filtered down to certain allowable tags (mildly annoying, but not too terrible) - and this applies to stuff fetched from other web sites too.

      It's a simple matter of not ever trusting the user to enter sane (non-harmful) text.

        In fact, it's really an entirely different attack. While you may argue that it is covered by the statement "many CGI programmers fail to consider ways in which their programs may be misused or subverted to execute malicious commands", that's like saying all security holes are just using the server in some way the sysadmin did not consider. It is hardly enough to direct developers to fix this problem.

        They did not mention a single cross-site scripting exploit, even though there have been many, many advisories from CERT.

        Protecting input from being executed on the server side does not help here. Nor is the problem limited to CGI applications: in some cases it's been the web server itself, in others the app server. It's also not limited to "user input", which many programmers take to mean just the form fields; it's really any input value that can be passed to the program from the external world - paths, IDs, options, etc. A common place where these holes show up is in error messages spat back at users - hardly a place where people look for patching.

  • by ajs ( 35943 ) <<ajs> <at> <ajs.com>> on Wednesday October 03, 2001 @02:47PM (#2384701) Homepage Journal
    MY_NET=1.2.3.4/5
    INT_DEV=eth0
    EXT_DEV=eth1
    # 1. Any packet coming into your network must not have a source address of your internal network
    ipchains -A forward -i $EXT_DEV -j DENY -s $MY_NET
    # 2. Any packet coming into your network must have a destination address of your internal network
    ipchains -A forward -i $EXT_DEV -j DENY -d ! $MY_NET
    # 3. Any packet leaving your network must have a source address of your internal network
    ipchains -A forward -i $INT_DEV -j DENY -s ! $MY_NET
    # 4. Any packet leaving your network must not have a destination address of your internal network.
    ipchains -A forward -i $INT_DEV -j DENY -d $MY_NET
    # 5. Any packet coming into your network or leaving your network must not have a source or destination address of a private address or an address listed in RFC1918 reserved space. These include 10.x.x.x/8, 172.16.x.x/12 or 192.168.x.x/16 and the loopback network 127.0.0.0/8.
    ipchains -A forward -i $EXT_DEV -j DENY -s 10.0.0.0/8
    ipchains -A forward -i $EXT_DEV -j DENY -s 172.16.0.0/12
    ipchains -A forward -i $EXT_DEV -j DENY -s 192.168.0.0/16
    ipchains -A forward -j DENY -d 10.0.0.0/8
    ipchains -A forward -j DENY -d 172.16.0.0/12
    ipchains -A forward -j DENY -d 192.168.0.0/16
    ### REMOVE the next 3 rules for masquerading systems
    ipchains -A forward -i $INT_DEV -j DENY -s 10.0.0.0/8
    ipchains -A forward -i $INT_DEV -j DENY -s 172.16.0.0/12
    ipchains -A forward -i $INT_DEV -j DENY -s 192.168.0.0/16
    # 6. Block any source routed packets or any packets with the IP options field set.

    # This is done at the kernel level under Linux, and is usually set by default.
