The Internet

Crackers Preparing Massive DDoS?

Tairan writes: "Crackers are using two exploits to ready another distributed Denial of Service attack. MSNBC.com is reporting there are at least 560 computers infected. CERT claims it 'poses a significant threat to Internet sites and the Internet infrastructure.'"
  • TLA == Three Letter Acronym (i.e., CIA, FBI, NSA, DEA, etc.); alternatively, They Liberate America. ;-)

    Ah. Now I understand, although I can't think of what they liberate us from, except our hard-earned money...



    --
  • by Chuq Roast ( 233581 ) on Saturday September 16, 2000 @12:59PM (#774030)
    Well, CERT doesn't seem to be taking this lightly in any case. I've seen a few posts about there being only 560 compromised hosts and how that isn't enough to perform a decent DDoS on anything but a tin can and strings. Uh, not quite: that's 560 hosts in one incident that has been reported. There have been around a hundred incidents reported, and while I'm sure that 560 hosts weren't compromised in each incident, I would bet that the number is a lot greater than one for more than a few of them.

    CERT seems to be following up on almost every lead they can, contacting everyone they believe to have been compromised and urging them to take measures to protect their systems and networks where possible. I am personally aware of a few hosts (since secured as well as possible) that I do not control, but which were involved in a separate incident, also involving a rather large number of hosts, that CERT followed up on.

    So it would seem to me that the folks at CERT, at the very least, are just being careful. As the old saying goes, an ounce of prevention is worth a pound of cure -- and it's no different with computer security.

  • by Inoshiro ( 71693 ) on Sunday September 17, 2000 @04:32AM (#774031) Homepage
    As the 'security guy' for my home, Kuro5hin.org [kuro5hin.org], and other firewalls I've set up for people I know, I can tell you that:

    "after all, a bunch of them are probably not even very much up-to-date and it takes a lots of time and experience to secure properly a Linux server. "

    Is wrong! It's very simple: you need three things to lock down a box from remote root: nmap, lsof, and kill. Find what's open (nmap TCP scan), find out what 'owns' the port (lsof), and kill it. Then set your system not to run it. The RPC services should be turned off without even bothering to check if they're running -- every distro has them on by default (why!?). ps -ax|grep rpc .. kill. Then go and chmod -x all those binaries. No remote root. Simple, effective. You could probably have perl scripts do it :-)
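
    A minimal sketch of that loop, assuming a Red Hat-style box where chkconfig manages the boot scripts (service names, paths, and the PID are illustrative, not prescriptive):

        nmap -sT localhost                       # what TCP ports are open?
        lsof -i :111                             # which process 'owns' a port (111 = portmapper)?
        kill 234                                 # kill it (use the PID from the lsof output)
        chkconfig portmap off                    # keep it from coming back at boot
        chmod -x /sbin/rpc.* /usr/sbin/rpc.*     # the chmod -x step for the rpc binaries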

    Otherwise, it's just watch bugtraq, watch your box, and be suspicious. Oh, and don't run Washington University code ;-)
    --
  • One thing nmap does not seem to check for is TCP wrappers; it reports a port as open, but TCP wrappers may drop all connections to that port.

  • by Naikrovek ( 667 ) <jjohnson.psg@com> on Saturday September 16, 2000 @06:17PM (#774033)
    It's not that simple. One user on one 56k modem can completely saturate a T3 if he knows what he's doing. 560 machines on higher-bandwidth connections can fill not just one T3, but hundreds. Thousands.

    If I have one machine that can access the net, I can send spoofed pings through thousands of boxes (this is still a problem), which all send their replies to hostX. hostX feels the punch of hundreds of boxes pinging it, even though those pings all came from one machine. Now imagine 560 machines doing the same.
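
    Rough amplification math (assuming a /24 broadcast network where every host answers the spoofed echo):

        1 spoofed echo request to a broadcast address -> up to 254 echo replies
        one modem pushing 4 KB/s of spoofed pings     -> roughly 1 MB/s landing on hostX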

    If hackerX can find 560 machines to compromise, he can find thousands of hosts whose routers are not configured to block ping spoofs.

    It's not the 560 machines that will be the ammunition; it's the incorrectly configured subnets that will actually do the pipe choking.

  • by KodaK ( 5477 ) <sakodak@gma[ ]com ['il.' in gap]> on Saturday September 16, 2000 @06:30PM (#774034) Homepage
    "...no one from the DOS demoscene ever releases source code!"

    As an aside, most demos "back in the day" were written in assembly language; it was sort of a given that you'd have access to a disassembler and be able to reverse engineer algorithms, if that was your thing. That's how I learned pretty much everything about coding on my Amiga (well, that and the ROM manuals...)

    Wow, that brings back a lot of memories of DOS/Amiga demoscene flamewars:

    "Yeah, DOS demos can be good, if you have
    an Orchid and a Soundblaster, but the hardware's
    standard on an Amiga!"
  • When are we going to learn? Sysadmins are treated like peasants in any organization, corporate, educational, or other. As the net increases in importance, it looks more and more to me like the most despised members of society (geeks) are responsible for keeping everything working. Can you say exploitation? I speak as a former sysadmin who is now in law school.

    Onward, revolution.

  • I went down for a bit about two months ago. I had just gotten a DSL connection and had installed Red Hat 6.2. While I was working on my firewall, I ran an errand for my wife (note to self: bring down interface when no firewall is present). I came back from the errand, finished the firewall, updated my software (oh my! I think I need this WuFTP update!), built my md5 database and was good to go.

    I started getting a DoS on myself as I noticed that I could hardly get *anywhere*. So much so that I kept dropping off my ISP's network (of course I was suspicious of my new ISP). When I checked the logs, it was clear that I was in the process of "attacking" other people. Only, here's the irony: my firewall was working well enough that I (being anal, as I am) was not actually succeeding in doing so (all the packets were being denied, and my logs were flooding). A look at my process table showed "t0rn" taking the bulk of my CPU power and basically just spitting and sputtering, not able to do much more than be a pain in my butt. Still, I had to rebuild my machine, since I determined that /bin/ls had been overwritten and I had no idea what else they had done.

    I would like to note that OpenBSD was installed later that evening. Funny how your experiences influence your platform decisions, eh? :-)
  • Yeah, I'm getting to see that now. I was thinking of just using SMB shares under Linux. I need some form of network filesystem. Coda sounds good too, but I haven't had the time to look into it. Thanks for your suggestion.

    Cheers,

    Costyn.
  • I am aware of this, but since the hacking work I saw on my computer looked pretty amateurish, I assume that there was no rootkit, nor did there seem to be any strange open files (unless 'lsof' was replaced too). Of course, one could say that this is just a cover. I'm installing a serious firewall at the moment; I will see how it goes. A clean install is not something I have time for at the moment, and my box isn't that important anyway.

    Cheers,

    Costyn.
  • There are some good articles out there. Check out Armoring Linux [enteract.com] which has some really nice tips & tricks on how to start out securing your box. Of course, I hadn't done it yet, cause I wasn't that paranoid... however, minds change. :-)

    Cheers,

    Costyn.
  • They were also worried about a DDoS attack about 3 months before the first ones actually happened. There were some BoF sessions in November, December, and January. The big DDoS attack hit in February.
  • by Anonymous Coward
    Don't call them "crackers" -- Call them "Caucasians" or at least "White Folk".
  • Just out of curiosity, what kind of hardware do the root servers run? Is each root server actually a cluster or are they each just a Really Big Machine?


    This came up in a slashdot article some months ago called "Sun no longer the dot in .com" [slashdot.org].

    The answer is that the A root server used to be a Sun Enterprise 10000, but was replaced by an IBM RS/6000 S80. Both would qualify as "Really Big Machines" in my opinion...

    There are several root nameservers, in disparate locations...

    - B.ROOT-SERVERS.NET
    128.9.0.107

    - J.ROOT-SERVERS.NET
    198.41.0.10

    - K.ROOT-SERVERS.NET
    193.0.14.129

    - L.ROOT-SERVERS.NET
    198.32.64.12

    - M.ROOT-SERVERS.NET
    202.12.27.33

    - I.ROOT-SERVERS.NET
    192.36.148.17

    - E.ROOT-SERVERS.NET
    192.203.230.10

    - D.ROOT-SERVERS.NET
    128.8.10.90

    - A.ROOT-SERVERS.NET
    198.41.0.4

    - H.ROOT-SERVERS.NET
    128.63.2.53

  • I'm kind of surprised by the lack of seriousness displayed in some of the posts. Well, maybe I shouldn't be surprised. But the rpc exploit is well known and has been a real source of concern, as shown by the various CERT advisories/current activity reports. Keep in mind that CERT is not a TLA.

    Last week I downloaded some code, from a popular security web site, which demonstrated this exploit. I was trying to convince somebody that he had to immediately apply the RH patch. So I compiled and ran the program. What the program essentially did was: cd /;ls -alF;id

    Nice, user id 0, group 0.

    BTW, if you do run nfs/portmap, then please use a firewall to block port 111. Furthermore, it is also highly educational to run some net logging software. You'll get a sense of what the script kiddies are looking for.
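
    A hedged example of that block with ipchains (the -l flag logs each hit, which doubles as the net logging mentioned above):

        ipchains -A input -p tcp -d 0/0 111 -j DENY -l    # portmapper over TCP
        ipchains -A input -p udp -d 0/0 111 -j DENY -l    # and over UDP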

  • Three cheers for that idea! I've been wishing the demo scene would find some life even without the extra incentive of deterring script kiddies--demos are just plain cool!

    I've been hoping for this too, but I fear it might be impossible. I don't mean we can never have a demoscene again, but we can't ever have a similar demoscene again.

    A lot of the fun, for me anyway, was in the bare-metal coding. That's exceedingly difficult today. There was a certain charm to working directly with the VGA; it was a lot of fun, and a great learning experience. You can't really do the same thing these days, with the huge variety of video cards on the market. Besides, the reason it was fun was also because it was very primitive :) Making a cube that rotated around with nifty plasma backgrounds was cool in '94, but nowadays....

    Writing a demo in OpenGL, with "put a cube here, rotate it this much", isn't as appealing to me as the old way. It lets you do way cooler effects, much more easily, but it's just not the same. There's something about doing the math for your own 3D effects that makes it more rewarding, IMO. But I'm just a tired old crank who's sick of programming anyway. Less API, More Math!

  • Look at this with a critical eye. It's just another story that tells the public that anybody who programs a computer is a cracker. And they accept it blindly.

    In the MSNBC article, they have a box, 'The History of Hacking', which has Emmanuel Goldstein starting 2600 as a clearinghouse for hackers, and on the last page notes Masters of Reverse Engineering (MoRE) for the DeCSS player.

    Forget that we wanted to play DVDs on linux. Forget that we have a right to reverse engineer. Forget free speech.

    We're nothing but a bunch of outlaw hackers.
  • by rngadam ( 304 ) on Saturday September 16, 2000 @01:01PM (#774047)
    Due to my own laziness, one of my personal Linux home servers was rootkitted, and had been for at least a month before I discovered it by accident while investigating why top crashed (utmp was corrupted). It seemed that someone was running what looked like a covert IRC channel on my computer.

    Once I reinstalled and locked it down (tcpwrapper, ipchains, scanlogd, disabling of services, package updates, etc.) I still got an awful lot of unexplained connections to ports 40118:40120 (I still do, two months later; if someone can tell me what it is I'd be happy). I also warned the owners of the IPs that did that, but they didn't seem to care too much.

    I don't have a hard time believing that a very large number of Linux servers out there are compromised: after all, a bunch of them are probably not even up to date, and it takes a lot of time and experience to properly secure a Linux server.

    I always thought that RedHat (prime culprit because it is the most widely deployed distribution out there) doesn't take network security seriously, especially now that RH can be installed and configured to offer various network services by virtual newbies.

    Things that could be done by RH (and others) IMO:

    1) Create a single reference called security.redhat.com where you could register to receive updates and/or have one of your servers registered to be regularly and automatically evaluated (nmap'd, for example) from a security standpoint. (A rough self-serve version is sketched below.)

    2) Automatically install some of the pretty good detection and prevention tools!!
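
    Until something like (1) exists, a self-serve stand-in is a nightly cron job; this is only a sketch (the baseline path, the mail setup, and an installed nmap are all assumptions):

        #!/bin/sh
        # hypothetical daily self-audit: diff today's open ports against a saved baseline
        nmap -sT -p 1-1024 localhost > /tmp/ports.today
        diff /var/lib/ports.baseline /tmp/ports.today | mail -s "port audit" root
        mv /tmp/ports.today /var/lib/ports.baseline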
  • You're correct about "solid default security configuration", but what *NIX OS (besides OpenBSD) comes with a good default security configuration?

    As people have stated many times, in hundreds of posts before, it's the admin's job to secure the box. I don't care if the *NIX I'm installing has a default root password of "password" and a 5-year-old version of sendmail running on it. It's up to me to fix those things before I put it on a network.

    Now, it can be argued that the vendor should fix these things. Sure, *NIX vendors should always put the latest versions of software in their distributions in order to prevent security breaches, but there are always exploits that come out after the CDs have been pressed and shipped. In which case (and it always is the case), it's up to the admin to apply the appropriate security patches.

    So, no matter how good, bad or indifferent the default configuration of a machine is, it's up to the person who admins and/or installs the box to secure it for real.

  • ...configure ipchains and ntsysv and hosts.allow, and you're secure.

    In a word, no. That false sense of security is what gets a lot of machines compromised.

    Security is a process, not a state.
  • I remember that they were afraid of a DDoS attack about 6 months ago [slashdot.org]. Nothing came of it. I don't think anything will happen with this either.
  • by YoJ ( 20860 ) on Sunday September 17, 2000 @06:36AM (#774051) Journal
    I agree. The plethora of buffer overruns that allow arbitrary code to execute is a fault in Linux (and Unix). The i386 allows separate code, data, and stack segments. This means that the operating system can set up hardware locking to prevent execution of arbitrary code, or stack smashing. I'm sure there are other ways to cause unexpected behavior in programs, but if you remove buffer overruns and stack smashing that allow execution of arbitrary code, you've removed more than half of all security bugs. (I don't know if other chips have that feature).
  • I agree. But the teachers are generally idiots when it comes to computers, and the classes at my schools have done NOTHING. I wish I could at least just sit at a computer for 1 hour a day learning from an O'Reilly book and get school credit. It's tougher to learn at home - I have a lot of homework (see paragraph 4 for more). Now, if I had a class, I could learn much faster: have a teacher explain the parts I can't understand, analyze the algorithms, etc. Drools...

    At a private school for grades K-2, 1990-1992, we had "computer class" every other day. We used this Logo program that was supposed to teach you how to use computers or something. You moved a triangle around the screen using arrow keys to draw lines. It was totally useless and we learned nothing.

    In elementary school, grades 3-6, 1993-1996, there was a computer in every classroom. It was an Apple IIe (or however it is spelled). But there wasn't a class for those; they were just for playing Oregon Trail during playtime. We did have a computer class once a week on some older Macs. They tried (and failed) to teach us how to type. I still don't have my fingers in the right place as I am typing this.

    In middle school, 1997-1998, the classes offered were typing and computer graphics. My older sister took the graphics class, and it appeared to involve combining cheesy pre-drawn graphics with text, then printing it out on a dot matrix printer.

    In high school (I'm in 10th grade now), I am in the IB program, which is a lot of work. The classes teach a lot, if you're willing to learn, and cut a lot of the BS, and the teachers are mostly good, especially English. I have 2 problems with the school: too many MTV-loving preppies (I'm not going into that), and poor computer classes. For underclassmen, the classes taught are: HTML and Java web design, which involves learning how to format text with FrontPage Express and (gasp) a scarce amount of Java. Typing - obvious. Graphics - probably only a little better than the class in middle school. For upperclassmen: C++, finally. Unfortunately, I'd much rather learn Perl, C, and Linux. Fortunately, juniors/seniors can take PSOP, which means I can take college classes.

    My conclusion: public schools should do as StarFace suggests, although it might be better to teach Linux or BSD, since they work better on PCs.

    They could give out a copy of the OS so kids could use it at home. In grade school, teach kids how to write some simple scripts. In middle school, teach how computers and the Internet work in general. Do some light Linux/BSD stuff, and maybe C programming. In high school, go into the configs, the kernel, and more deeply into hardware. I don't see any school teaching anything complicated about hardware. It could be partially integrated into other subjects, esp. math. Write algorithms to solve some complicated problems, and some simpler ones. (Who can find the prime numbers from 1-100000? Make it as fast as possible.) In any case, the school system definitely needs more in the area of computers.

  • ...and your point is what?

    vi
    i
    "Enter your comments here!"
    ESC
    :q!
  • One reply mentioned the SANS GIAC [sans.org] - we haven't actually used it, though it looks like they do have good advice. But I'm not sure they actually do what you suggest, which I think could help a lot. As soon as we installed our firewall a few years back, we noticed apparently coordinated scanning attempts from a wide variety of hosts - contacting any of them (even including copies of log info) gave us either no response or a "you must be mistaken, we've checked and we've never been compromised" response. We basically quit there, not knowing who else to report the problems to - at least with the firewall we could monitor things and feel smug that we were much better off than we had been before it went in...

    But a centralized reporting service like the Spam Realtime BlackHole list etc could make a big difference...
  • A point is being missed here. There are 560 known compromised systems (these have surely been cleaned by now). My guess is that there are at least 10 undetected compromised systems for every one that has been detected. If a DDoS attack is being coordinated, we are looking at 6000 machines, not 560, and most of these machines will be on T1 lines or better. The threat of a well-coordinated attack looks very real to me.
  • by Anonymous Coward
    - Many posters appear to be responding to the slightly misleading summary instead of the actual article. This is a general summary of known hacker activity since May (!)

    - It indicates a pattern of scanning, exploitation of those two vulnerabilities, and installation of various flavors of r00tkits.

    - This indicates that Red Hat sells software with an insecure default configuration. (In fact, many distros do. Red Hat probably earns its mention in the article by being the most commonly used, not the least secure. Each time I've installed a new distro, I've had to blip around the system manually stopping network services I don't need.)

    - This problem could be minimized if distros simply came with non-critical services disabled, forcing you to manually enable the ones you want. This means the many, many clueless (who probably don't even know what security patches are) won't be running all the insecure stuff - in contrast to today, where the clueless run it *all*.

    - This needs to be done *now*. It's in the distros' business interest to do so. If things continue as they are, this issue will remain a persistent threat to both the stability of the Internet and the reputation of the otherwise nifty Linux phenomenon.
  • Since when did crackers do distributed denial of service attacks? OK, I understand Slashdot's unwillingness to use the term hackers... but in this case it would be more correct, though not totally, of course, because hacking and cracking are both legitimate activities as long as you don't break the law..... WHY NOT CALL THESE GUYS TERRORISTS, a MENACE TO SOCIETY, or just plain CRIMINALS?

    You don't need to crack anything to do a DoS attack!
  • I have to say that not ALL kids just use GUIs. I'm in high school and _hate_ GUIs... they waste time. Slashdot looks fine in w3m!

  • Last time I installed Red Hat (and Mandrake), I recall the choice of which services to run at startup being left up to the user (the dialog was presented near the end of the install). This may be available only if you choose the "custom" option, as opposed to workstation/server/etc.

    A quick run of linuxconf, and going to control service activity should present a simple interface for beginners to close things down. (provided that they at least start to RTFM.)

    On mandrake, a quick msec 5 should lock things down like a prison. I may be wrong, but it seems that even a newbie should not have a hard time making a clean install reasonably secure.


    Of course, no matter how much I brag about how secure my linux machines are, and how much time I spend securing them ... I'm sure that there's some prick out there who could bring my whole network down.

  • No, instead, he uses Red Hat, with a spiffy GUI tool, and has no clue what on earth is going on on his system.

    Sure... if Joe User is installing it for a desktop. I've only been running RedHat and SuSE servers for about three years now, and I've yet to see one work right off the CD. I always need to configure something. The packages set up a default configuration, but that doesn't make the app work. You still need to edit, say, httpd.conf to get Apache to run as a webserver.

    Eh, it's more that you need to understand inetd, inetd.conf and whatever way your OS/distribution runs the bootscripts to turn the services off. (Not that that is enough to close all the ports). That is, if you can figure out which services you really need.

    ...or you can shut them all off at install and turn them on later very easily (with YaST). RedHat takes more work, but it's work you have to do anyway just to get the fscking thing installed.

    How is "that's the way MS shipped it to us" different from "that's the way Red Hat shipped it to us" or "that's the way Sun Microsystems shipped it to us"?

    Thanks for completely ignoring half of what I said. The answer is the expectation of ease. RedHat has none. M$ does. M$ writes its software for ease of install, and the install GUI presents the end SA with the illusion that everything is taken care of for him. You should see the fscking mess we found on the NT box, and that was with someone we PAID to make it secure. I sat right there and watched the guy install it (he'd never seen it before, you see) and talked to him about how it worked, etc. I've put it on a few other machines to see how it worked; it was like installing a game.

    With the SuSE server, I had installed it as a desktop (by choosing single user, so no security) for someone to use and then had to switch it over to use as a server when one crashed, so it was wide open, and that was my fault.

    The other Linux box I set up with the server choice to begin with. I had to tweak a few apps to get them to work, and as I read the manuals (Apache, etc.), I watched for "if you want to be secure, do this" notes and followed them. That box passed. It's still probably an easy target for someone who knows what they're doing, but at least the hobbyists will have a hard time getting in.

    So I reiterate what I said in my first post: The difference between linux and NT is that linux SAs are expected to do some work to get their server up and running, at the same time they should be making it relatively secure, which is not that much more time consuming. NT SAs are expected to slap the CD in, install IIS or whatever, and get back to their jobs as (in my mother's case) Tax Accountant.

    I can't speak to your point about buffer overflows. If you're suggesting that NT is more secure for protecting against it, more power to you (and M$). Doesn't do much good once the end user installs Outlook, does it? Oh wait, that's in there by default.

    -jpowers
  • So a great many boxes are getting DDoS tools installed on them these days. How does this indicate in any way that Internet Armageddon is on its way? So they take down Yahoo for a day or something... it'd take one hell of an organized attack to bring down more than a few sites...
  • Also, most cable I've seen (including my own) is 128 KB/s; I wouldn't pay $50/mo. for a mere 16 KB/sec... Flamers, read this: yes, I know the difference between Kb and KB!
  • If you read the article, they said that wuftpd had been compromised by the crackers. It is not that proftpd is better; it is that they know how to hack into wuftpd and have already exploited it. (At least that was my understanding from the other documentation they had links to.) This is moot anyway, since my ipchains are set up to deny port 21 from ppp and log requests to those ports.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!

  • by CoughDropAddict ( 40792 ) on Saturday September 16, 2000 @07:30PM (#774065) Homepage
    Your brother may be smart, but he's obviously not a computer geek. I have plenty of smart friends who don't know the first thing about the mechanics of computers. On the other hand, I suck at swimming, even though I like the water.

    I was a high school senior three months ago, and I assure you I can chain IDE devices. I even manually short circuited the internal battery on my computer once when someone set a BIOS password and then forgot it (how's that for resourcefulness?). We are not extinct.

    --
  • by Scrag ( 137843 ) on Saturday September 16, 2000 @12:29PM (#774066)
    560 infected computers is not enough for a massive DDoS. Unless of course they were targeting someone on a dialup, which wouldn't matter anyway.
    What was the problem again?
  • After all, what good is the internet going to be after this?
  • by n3rd ( 111397 ) on Saturday September 16, 2000 @12:33PM (#774071)
    There really isn't much to say about this article except for good old rampant Slashdot speculation.

    So some people found some trojans that could be used for DDoS attacks on a few hundred machines. Does this mean a DDoS is "brewing" or ready to be launched? Hardly.

    In order to know if something is coming, we would actually have to talk to whoever put those trojans on the machines to see what their motivation is, and when they plan to use them. Unfortunately, this will more than likely never happen.

    For all we know, this could just be some script kiddie's personal cache of trojans for taking over IRC channels, not DDoSing a large site such as Yahoo! or Ebay. Heck, maybe it's the BOFH Users Group out for revenge on companies that have had enough of their antics and fired them. Who knows?

    So, is a new, massive DDoS brewing? Unless one of the people who planted these trojans tells us, or a DDoS actually happens, we'll never know.
  • by Anonymous Coward on Saturday September 16, 2000 @01:11PM (#774073)
    This is no big deal. Hasn't anyone been on IRC? Practically every kiddie on IRC has his/her own dosnet, most with hundreds of hosts. People create them every day, and have been doing so for years. Yes, years. This is nothing new, and it's no big threat. If someone really wanted to 'take down the net', they would have already done so. But a person like that has only one life, and that would be his/her life on the net, and why would he/she want to destroy their life? Point being, I am tired of seeing the media hype up stupid findings of 500 or 600 infected boxes and calling it 'the end of the internet.' Every second a thousand boxes get owned and infected by some script kid. Does it really matter? No. Why not? a.) They don't know how to 'take down the entire net'. b.) They're too busy packeting people on IRC. End of story. Get over it, media whores. No more dumb DDoS stories, please!
  • by trog ( 6564 ) on Sunday September 17, 2000 @09:04AM (#774075)
    As a sysadmin who specializes in security, I have to take issue with your statements. NO machine is ever secure, regardless of OS. Any machine can be compromised.

    First, on OpenBSD: ever run nmap on a fresh install of OpenBSD? Both sendmail and portmap are happily running BY DEFAULT. Two of the most insecure applications ever written. All OpenBSD really does is give its users a false sense of security.

    Second, on Red Hat: it is my opinion that the reason Red Hat is getting this attention is that it is by far the most used Linux distro. I often build systems based on Red Hat, because I know what I am doing.

    You can spend hours and hours of time, securing a box, and if someone can use social engineering to get a username and password, it's all for nothing. This is the biggest issue when it comes to security.

    (As an aside: I recently taught a seminar to a company on social engineering. They had never even heard of the concept before. Do you know what they do? Provide customer service for over a dozen banks. Scary.)
  • Both sendmail and portmap are happily running BY DEFAULT.

    I don't believe this is true on 2.6 or later.

    but I never said 'y' to sendmail or portmapper - it's been a while since I've installed my obsd box, but I don't believe sendmail runs as a daemon. and when you install qmail, it does wipe out any 'badness' that sendmail (the pkg) might have done.

    All OpenBSD really does is give its users a false sense of security.

    troll. back up your assertion: name any significant security issue of obsd 2.6 or later, even in the default install. I've checked its buglist - have you? or are you just blowing smoke (which I suspect).

    I often build systems based on Red Hat, because I know what I am doing.

    anyone who knows what they're doing can secure a unix box; the point is that linux attracts a lot of inexperienced unix users (who have little or no admin background). as such, if linux is to stay viable in the server market, it must protect its image. you cannot do this if your default install is very insecure - that's all I'm saying. and redhat, even though it's the most popular, is one of the most insecure distros.

    --

  • According to a few posts on NANOG (North American Network Operators Group - see www.merit.edu for info), NASA's Ames facility was attacked on Friday, knocking it down for most of the day. NASA hosts E.ROOT-SERVERS.NET, so having it down is a "bad thing".

  • If you're suggesting that NT is more secure for protecting against it

    No, I am not. I said that UNIX had a basic design flaw. I didn't compare UNIX with anything else.

    -- Abigail

  • Both sendmail and portmap are happily running BY DEFAULT.

    I don't believe this is true on 2.6 or later.

    Last time I installed 2.6, it was true. I have not played with 2.7 yet, so this may no longer be the case.

    troll. back up your assertion: name any significant security issue of obsd 2.6 or later, even in the default install. I've checked its buglist - have you? or are you just blowing smoke (which I suspect).

    Significant security issue: Sendmail and portmap are installed, by default, in 2.6 (I am unaware of 2.7). inetd is enabled by default (even though I believe everything is commented out). Better practice would be to default to tcpdaemon instead of inetd, and qmail instead of sendmail. Or to install neither by default.

    (Try suggesting this to Theo. Enjoy the flames that follow.)

    Are you so much a zealot that you believe the only bugs that can ever exist in OpenBSD are the ones that appear on a buglist? This is not a troll; it just runs counter to your beliefs. This is what will get your systems compromised: complacency.

    OpenBSD does have the right idea when it comes to security (compared to other free *nix), but you have to remember that you cannot ever prove anything secure. All you can honestly say is that, to the best of your knowledge, there are no known security issues. Security cannot be proven, but insecurity can.

    It has been my experience that it is best to assume all machines can be compromised. A security sysadmin then sets about the task of making things more and more difficult to exploit.

    Besides, successful application of social engineering will ALWAYS foil the best system security. Thieves don't always pick a lock to get into a home; they break windows as well.

    the point is that linux attracts a lot of inexperienced unix users (who have little or no admin background). as such, if linux is to stay viable in the server market, it must protect its image.

    I agree with this stance, and am doing something about it. I am in the process of making security documentation that I have written available under an open documentation license (It is currently owned by my employer, so this takes a little convincing). I intend to submit this documentation to friends at Red Hat, in the hopes that it can be put to good use.

    What are you doing?
  • It seems to me that rather than not installing NFS (as some have suggested to you), you might use ipchains to disable NFS access from all but your trusted systems, i.e., just drop the NFS request packets from systems that aren't yours. You should probably do this for any of the other ports that you have open, too.
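
    A sketch of those rules, assuming the trusted client is 192.168.1.2 and NFS is on the stock ports (2049 for nfsd, 111 for the portmapper); adjust for your own network:

        ipchains -A input -p udp -s 192.168.1.2 -d 0/0 2049 -j ACCEPT   # NFS from the trusted box only
        ipchains -A input -p udp -d 0/0 2049 -j DENY -l                 # everyone else: drop and log
        ipchains -A input -p tcp -d 0/0 111 -j DENY -l                  # portmapper too
        ipchains -A input -p udp -d 0/0 111 -j DENY -l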

    General Question: Is it just me, or shouldn't a well secured distribution behave like this out of the box?

    --
  • I used to T.A. for my high school CS teacher's Pascal class. These were kids who wanted to learn. They were really, truly interested. The problem is, they just didn't have the talent. (Otherwise they would have been in the AP class, doing C++.) The point is, some people don't have the talent. There are people in this world who are destined to be the ones who fix our cars and serve our food. THEY ARE GOOD PEOPLE. Many of them are highly motivated toward self-improvement, but just because a system seems intuitive to you and me doesn't mean the same is true for everyone else.

    It's very true that there are many people who don't learn because they are lazy, but there's also a huge population that simply doesn't think the same way.
  • by ryanr ( 30917 ) <ryan@thievco.com> on Saturday September 16, 2000 @08:28PM (#774089) Homepage Journal
    They got me too... they didn't install a root kit either. I checked my packages with MD5 sums and all my binaries checked out.

    You're aware that there are rootkits that will get around the checksums, right? They will hand over the original binaries when you request a read, but will serve up the modified binary when the OS requests an execute.

    You can't be sure they don't have anything else on your box until you reinstall clean from known-good media. (And maybe re-flash the BIOS, though we haven't seen that trick used yet.)
  • by Gurlia ( 110988 ) on Saturday September 16, 2000 @01:23PM (#774092)
    The problem with this is that today's philosophy is the Feed-Me-I-Want-Instant-Gratification philosophy. The so-called "user-friendliness" of GUIs like Windows just reinforces this. (Sorry, I know this sounds like a rabid anti-M$ zealot rant, but it applies to today's GUI design in general.)

    I'm not saying we should go back to the "good ole days" with only a bare command-line prompt, but IMNSHO software should not be designed to try to be everything. (Wizards, anyone?) Software should be designed to provide the necessary tools to get things done, but it should never attempt to be smarter than the user. The user needs to learn how to use the tools.

    Why are script kiddies so abundant these days? 'cos they're so used to the click-on-button-and-it-does-everything way that computers work these days. A friend once joked with me that World War III might be started by a kid pressing a single wrong button on the nuclear launch controls...

    What we need, IMNSHO, is a change in philosophy. Yes, I know easy GUIs are good and perhaps even necessary for people who want to get things done without worrying about manpages and editing conf files. But for teenagers? Give 'em a bare command prompt and let them figure out how to configure X manually. Kids these days need to learn that the world isn't an instant-gratification vending machine. You need effort if you want value.
    ---

  • Well, yeah, anyone putting a stock RedHat box on the net is an idiot. Anyone putting a stock *anything* box on the net is probably an idiot too. ;^)

    However it's true that RedHat is particularly bad - that doesn't mean Linux is bad - RedHat != Linux. If you want a Linux distro that is reasonably secure by default, give Slack [slackware.com] a try. I know it gets a bad rap for supposedly being hard to install, but 1) if you are using OBSD already that's surely not a concern for you and 2) when I finally gave it a try, I found it to be little if any harder to install than RedHat or Mandrake were anyway. The selection of packages available with the native package management system is smaller than the RPM collection, of course, but it usually includes all the important stuff and is very up to date - check out LinuxMafia [linuxmafia.org] if you need something that isn't included. Plus you can always compile yourself, use the included rpm conversion tools (rpms usually but not always will work fine after a quick conversion) or even install RPM if you want to. YMMV, but I've found Slack to provide a very nice middle ground between OBSD and RedHat.

  • by the_quark ( 101253 ) on Saturday September 16, 2000 @01:32PM (#774095) Homepage
    Ok, they got me. My main personal mailserver/web server/MP3 server got compromised. I upgraded the system (which had 390 days of uptime, woo-hoo!) in late August to Redhat 6.2. Due to the high quality of crack I've been smoking, I FORGOT to turn off rpc in inetd.conf. Whoops! I noticed this last week and fixed it, but I'd already been gotten by rpc.statd. Interestingly, what tipped me off was the fact that the DDOS software essentially caused a denial of service against ME. I have a DSL connection, and the DOS software flooded my network so badly I started investigating why my network performance was so slow.

    I have to rant a little bit, here - Redhat, is it SO HARD to make the default install be BASICALLY SECURE? Don't turn RPC on by default, for God's sake! The first thing I have to remember to do is to remove the really obvious security holes as soon as I install!

    One nice thing about this DDOS activity - now the script kiddies want my network bandwidth. It used to be they didn't know what to do when they got in. The same system was compromised three years ago while I was on vacation, and the script kiddies involved did an "rm -rf /" as root. Ouch. This time was pretty easy to clean up, by comparison.

    But whoever pointed out that the connections of the hosts are important - absolutely. I'm sure my puny 384kbps upstream didn't cause whoever the victim was any real trouble.

    Tips for people who may be having the same experience:

    First, I was tipped off by the very large numbers of collisions on my hub, and the massive traffic. I'd installed a bunch of new hardware and software, and, at first, thought something was broken. Additionally, I was running mrtg against my router, and the traffic saturation broke SNMP connections, so cron kept complaining.

    Once I figured out the host the traffic was coming from, I started looking around. First of all, a command representing itself as "lpsched" was running with a very low PID (like 120) and had a child process representing itself as in.telne (I believe these were actually the same program). When I killed them, the traffic ended. After some research, I realized that the attackers had installed a trojan in /usr/sbin/init (which was then changing its program name as represented in ps after execution). /usr/sbin/init was being executed by /etc/rc.d/rc.sysinit, at the end of the file (placed here very nicely with a check to make sure /etc/rc.d/rc.sysinit existed).

    Interestingly, they did NOT install a rootkit - I used SHA1 hashing and some custom scripts I wrote to compare the compromised host with a clean install of RedHat 6.2. All they did was modify /etc/rc.d/rc.sysinit and install the Trojan (they may also have edited log files at the time of intrusion). rpc.statd did spew an "I'm executing this obvious buffer overflow attack" message into /var/log/messages; "grep rpc.statd /var/log/*" should give you some idea if you have a problem. In the rpc buffer overflow, they echoed to /tmp/m:

    9088 stream tcp nowait root /bin/sh -i

    and then, executed "/usr/sbin/inetd /tmp/m", essentially giving themselves a root shell on port 9088. What they did from there I have no record of, but, obviously, they installed the Trojan and moved onto the next one.
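
    If you suspect the same intrusion, a few quick checks along these lines may help (the port and paths are the ones from this incident; yours may differ):

        grep rpc.statd /var/log/messages   # the overflow shows up as binary junk in the statd log lines
        ls -l /tmp/m /usr/sbin/init        # the drop file and trojan location reported above
        lsof -i :9088                      # anything still listening on the backdoor port?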

    Good luck, out there...

  • I think the solution to script-kiddy wankerism is a revival of the demo scene.

    How about fixing the holes? It's not like we haven't known about the problems with wu-ftp since forever.
    --
  • Being someone who has setup some Redhat servers for various companies I would like to make a few comments in reply to your post.

    Many times a company does not have a concept of System Administrator. For instance, where I work now I am a programmer. The company wants something done, such as mail services or DNS, and they come to you and say we need this and this and this. And so you give them options and they say "Oh, but we don't want to spend any money." So you set up a Linux box and you go back to what you are supposed to be doing (in my case, programming).

    In one of my previous incarnations I was an Internet Technician for an ISP. Most of my days were spent battling with Sprint and TW Telecom, setting up routers and trying to set up reporting. I had to monitor updates and potential exploits for a vast array of operating systems and equipment. Keeping up with BSDi, Redhat Linux, Windows NT, FreeBSD, Cisco IOS... blah blah blah... ate away at my time. And the company saw it as unproductive time, because in the end upper management did not see anything different.

    I would also like to point out some personal experience that may not apply to everyone or everything in the Internet universe. It felt to me like Redhat Linux constantly needed updating and patching. The BSD derivatives seemed to require less work. And NT's exploits seemed trivial in comparison to a root exploit. Of course, NT was pathetic at handling loads or multiple services and needed to be rebooted once a week.

    I just wanted to provide a perspective from the other side of the fence so to speak.
  • Read the docs. The author says yes, this is a possibility, but he has had zero complaints about it actually happening in the real world so far.

    Also, portsentry is entirely configurable. You can drop attacking hosts into hosts.deny, or block them with ipchains, or block them with route reject, or not block them at all and just dump a message to /dev/lp0 so you have a hardcopy log of the attack. Whatever. It's up to you.
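
    Those options live in portsentry.conf; from memory of the Psionic release, they look something like this (treat it as a sketch and check the config that ships with your copy):

        BLOCK_TCP="1"                                              # react to TCP scans at all
        KILL_HOSTS_DENY="ALL: $TARGET$"                            # drop the scanner into /etc/hosts.deny
        KILL_ROUTE="/sbin/ipchains -I input -s $TARGET$ -j DENY"   # or firewall it outright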

    If you -are- using IPchains and you know what you're doing, you should be able to set it up so that port 80 -always- answers and is -always- exposed to the whole world. This means the attacker can still read your web pages, yes, but hopefully your web server is secure. (Okay, maybe it's the least-likely to be secure thing on your box, but then, an attacker that wants in through your webserver can go to some host that hasn't yet been portsentried and attack the webserver, being careful not to trigger portsentry this time... )

    Also, maybe some people are using tcpd only and not ipchains at all; so the host is still 'live' to the 'net but the service ports get closed leaving only pinging and other ICMP packets up.

    Do whatever you think is best for your box. There is -no- best practice for this, because heterogeneity of implementation is key to preventing predictable DoS exploits that turn your own security against you.

    Anyway, I don't care if my home box disappears from half the net for six hours until I can manually rework things when I'm sure it was a spoof - OTOH, if you're a major e-commerce site, yeah, you'd better make sure your portsentry isn't going to close off your http!


    --Parity
  • Oh h*** yes you can. The source address of the packet (where you want the spoofed reply to go) is not within the ISP's network, so throw it away. Many large providers at meet-points started checking that years ago and threatened to stop peering with those that hadn't blocked it. I don't know whether anyone still does that anymore.

    (This is also necessary to prevent misconfigured multi-homed customers from sending you the wrong traffic or acting as a transit point.)
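
    In ipchains terms, the check is one rule on the customer-facing interface (the interface name and the customer's 10.1.2.0/24 block are illustrative):

        ipchains -A input -i eth1 -s ! 10.1.2.0/24 -j DENY -l   # drop anything claiming a source outside the assigned block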
  • by iCEBaLM ( 34905 ) on Saturday September 16, 2000 @09:14PM (#774100)
    If I have one machine that can access the net, I can send spoofed pings through thousands of boxes (this is still a problem), which all send their replies to hostX. hostX feels the punch of hundreds of boxes pinging it, even though those pings all came from one machine. Now imagine 560 machines doing the same.

    I'm well aware of smurf attacks, I was just illustrating that 560 machines on "slow" connections coordinated make for a formidable foe. Of course using a smurf attack amplifies that, however I know trinoo doesn't support that kind of attack, I'm not sure about tribes. (assuming the cracker is a typical skript kiddie who wouldn't write his own "tools")

    If hackerX can find 560 machines to compromise, he can find thousands of hosts whose routers are not configured to block ping spoofs.

    The way ICMP works, you will *never* be able to block "ping spoofs"; the problem is blocking them on broadcast addresses (1 packet turns into many just by sending it to a broadcast address), which is the whole basis of smurf.

    -- iCEBaLM
    Alternatively, you could argue that it's libc that's flawed by making it so easy to create buffer overruns, as Dan Bernstein (http://cr.yp.to) does, and so he uses a bunch of 'safe' functions for 'String' functionality instead of the unsafe libc str* functions.
    Of course, lots of Unix people don't want to switch from C to C++ or Java, where this kind of thing is the standard way of working, but a libsafeCarrays or something going into common usage would reduce this kind of thing drastically.

    Separating code and data pages is an elegant solution, though perhaps not a complete one. I'm not sure how many programs have a legitimate reason to modify executing code, but it's conceivable. I suppose we could just say that self-modifying code is too perverse an aberration to be permitted to live.


    --Parity
  • Software should be designed to provide the necessary tools to get things done, but it should never attempt to be smarter than the user. The user needs to learn how to use the tools.

    "I hereby declare a change in philosophy!"

    These things don't just happen. I'm not going to go into whether they SHOULD happen - I just hate to see post after post of people declaring the way things should be with nary a word about how to make it work. At least the parent post had a suggestion - revive the demo scene.

    With no further ado, I therefore present my own suggestions for "fixing the kids these days".

    (1) The kids these days aren't the problem. Neither is the government, nor the corporations. You (my illustrious reader) are the problem. Get off your duff and learn a language, write some code, write some documentation, make something work that didn't. If you don't like the way computers work, if you see things that need fixing, do something to fix them. And try to throw something original in while you're doing it - too many programs out there now where people simply didn't check to see if someone had already written something to do exactly the same thing.

    (2) In that vein, vote goddammit. (If you live in a country where you can't vote, move goddammit.) Bitching about throwing away your vote doesn't cut it any more - if you don't like the mainstream candidates, vote for one of the smaller candidates. Your vote will count for MORE; if Ralph Nader got 2 votes in the last election, and your vote makes it 4 in the next election, the pundits will be able to say his following doubled between elections. Political advocacy aside, your political activism will put pressure to change the things that cause script kiddiez, whatever you believe them to be. (Unless you believe the FBI is orchestrating the DDoS attacks--if so, I can't help you.)

    (3) Finally, teach someone how to use a computer. If you say 'rtfm' on a regular basis, I have an acronym for you: 'uyfps' (use your fucking people skills). Don't wave your hands about how people aren't using computers creatively/constructively -- show someone how to use computers constructively, and teach them why. Give them some of your enthusiasm. A teenager whom you've taught to write a database app isn't going to try and bring down eBay, cuz he knows that'll hurt his job offers when he gets out of college.

    Xant, maintainer of packet2sql [sourceforge.net], author of Repairlix [sourceforge.net], writer of documentation [elitecryptos.com]. (Not bragging, just doing a preemptive strike against accusations of hypocrisy.)

  • A full T1 is 1.544 megaBITS per second, which is 193 kiloBYTES per second.

    -- iCEBaLM
  • Linux is a different animal. It takes some work to configure one of these things. SendMail, Apache, Samba, X, whatever you need, you configure, and unlike NT, everything is "off" until you turn it "on", and not only by running YaST, but by endlessly tweaking relevant app.conf files.

    One typically doesn't install "Linux", but a distribution. And what is installed by default varies from distribution to distribution, but most of them install much more (or make it trivial to install much more) than necessary. And there's no tweaking of config files - the packages do that for you. Joe R. User who comes from Windows would be utterly lost if he had to select everything (but nothing more) from Debian's dselect. No, instead, he uses Red Hat, with a spiffy GUI tool, and has no clue what on earth is going on on his system.

    You basically need to know the inner workings of the programs just to get them to run.

    Eh, it's more that you need to understand inetd, inetd.conf and whatever way your OS/distribution runs the bootscripts to turn the services off. (Not that that is enough to close all the ports). That is, if you can figure out which services you really need.
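
    For the inetd half, that means commenting out entries in /etc/inetd.conf and reloading; the lines below follow the stock Red Hat layout, but your file will differ:

        #ftp     stream  tcp  nowait  root  /usr/sbin/tcpd  in.ftpd -l -a
        #telnet  stream  tcp  nowait  root  /usr/sbin/tcpd  in.telnetd
        # ...then:  kill -HUP `pidof inetd`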

    forces the *nix admins to take all the responsibility for their systems while NT can just say "that's the way MS shipped it to us"

    How is "that's the way MS shipped it to us" different from "that's the way Red Hat shipped it to us" or "that's the way Sun Microsystems shipped it to us"?

    Consequently, the security audit my single-purpose linux ftp server failed last Thursday is my fault, but the NT guy gets to blame the MS-approved consultant who installed his fileserver.

    Are you suggesting that if you had a Red Hat-approved consultant installing your ftp server, and the NT guy installed the fileserver himself, you were still to blame, and the NT guy could still blame MS?

    Sysadmins of all stripes deserve SOME of the flak for the spread of viruses and the DDOS attacks from their exploited servers,

    Sysadmins deserve blame SOMETIMES, but they shouldn't be blamed for the gazillion holes Unix utilities have had over the past 3 decades, with no end in sight. Remember, sysadmins DO NOT write those utilities. Don't blame the sysadmin for being 20 minutes late in installing the latest security fixes. It's a never-ending stream of holes, and sysadmins also need to do other things, like making backups, reading Usenet, drinking coffee and LARTing lusers.

    But then, are utility writers at fault? Partially. 30 years of experience would suggest they should know better, but the same bugs (buffer overflows) happen again and again and again. However, the biggest share of the blame has to be taken by the OS (and hence, its designers). It's a fundamental design flaw that the kernel does not separate code and data pages, and hence that buffer overflow errors can lead to execution of arbitrary code. That flaw disqualifies UNIX as a secure OS.

    And the sad, sad thing is, that a now popular Unix-like OS, which was written from scratch after more than 20 years of UNIX evolution into a wild variety of sub species makes exactly the same fundamental design flaw.

    Don't blame the sysadmins for not being able to keep up with the never ending stream of buffer overflows. Fix the OS!

    -- Abigail

  • Hell yeah - those were the days... at least on the Amiga they were...

    Silents, Razor 1911, Complex, Cryptoburners, Melon Dezign, Fairlight, Crusaders, Skid Row, Kefrens, Andromeda... those cats flexed furious audiovisual skills on the Amiga's dedicated coprocessors. Most of them are working at game and 3D companies now, I imagine.

    If anyone's interested, there's a very cool research paper called "The Hacker Demo Scene and it's Cultural Artifacts" at http://www.curtin.edu.au/conference/cybermind/papers/borzysko.html [curtin.edu.au]...



    --
  • How about fixing the holes? It's not like we haven't known about the problems with wu-ftp since forever.

    Yes! Let's fix the holes. But let's fix them once and for all: At the OS level. Fix the OS such that a buffer overflow cannot result in executing arbitrary code. Or else, for each hole in an application you close, 10 new ones spring up.

    Here's a scary thought. How long till crackers and script kiddies start sending patches and/or become active developers for well used open source projects, intentionally introducing holes. Even if 99 out of 100 of such attempts get removed before the product becomes "stable", the few that make it to the next Red Hat or Debian CD make it a hax0rs delight.

    -- Abigail

  • Not complicated, but resourceful I'd say. It's not as if someone said "look, there's the battery, short it out," it was a case of someone screwing with my BIOS and me needing to find a way to fix it.

    I didn't have a manual, I just figured that the battery-looking thing probably was responsible for storing the time and all the other BIOS settings, so I figured shorting it out would solve the problem. Not bad for 15, or however old I was at the time.

    --
  • CNN Reporter: I am reporting live from the offices of [insert TLA here] where news of the devastating Denial of Services [sic] attack has just been heard. Oh my God, What can we do? What does FBI think?

    FBI spokesdroid: Like I said, if only they had let us install Carnivore in every ISP's server room, we could have stopped this.

    NSA spokesdroid: Shut up, fool. If only Echelon funding hadn't been stopped, we would have nipped this in the bud.

    Right-wing spokesdroid: Liars, liars, only installing censorware which blocks out images of "hacking" and "cranking" [sic] could have stopped this...

    RIAA spokesfiend: Copy protection, ban Napster, steal consumer rights, kill, murder, hahahahaah.

    CNN Reporter: Riiiiiiight. I guess we'll get nothing useful here. Back to the newsroom.



  • The real problem here is that most popular distros (read: Red Hat) have default configs designed for hackers. I set up a Red Hat 6.2 box as an ipmasq server... within 2 days it was compromised.
  • by iCEBaLM ( 34905 ) on Saturday September 16, 2000 @02:03PM (#774116)
    Would 560 computers with cable modems (capped at 128 Kb/sec upstream) be enough for a DDoS? Probably not.

    Let's take a bigger look at this...
    (128 Kb/s == 16 KB/s) * 560 == 8960 KB/s, or roughly 8.75 megabytes/s

    That will take out a T3 or an OC-1 pretty handily.

    560 dialup machines with 56k modems would be enough to flood a few dialup connections, or perhaps a cable modem or DSL line.

    Again, a closer look (56k modems only get 33.6 Kb/s up):
    (33.6 Kb/s ≈ 4.2 KB/s) * 560 ≈ 2352 KB/s, or roughly 2.3 megabytes/s.

    Enough to take out 10x T1's.

    Don't dismiss the power of 560 machines so easily (a quick sanity check of the arithmetic follows this comment).

    -- iCEBaLM
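
    A quick sanity check of the arithmetic above (assumed figures: 560 hosts, upstream rates in kilobits per second, 8 bits per byte):

        awk 'BEGIN { print 560 * 128  / 8, "KB/s from the cable pool";
                     print 560 * 33.6 / 8, "KB/s from the 56k pool" }'
        # prints 8960 KB/s and 2352 KB/s respectively - against roughly
        # 5600 KB/s for a T3 and 190 KB/s per T1, so either pool
        # saturates the links named above.
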
  • by Eric Green ( 627 ) on Saturday September 16, 2000 @02:13PM (#774119) Homepage
    If everybody ran 'portsentry' and had no open ports listening on the Internet, things would be much better. Script kiddies could not tell whether there was no computer at an address or a 'portsentry' host ('portsentry' tosses the scanner into ipchains so that not even an ACK is sent back; the kind of rule involved is shown after this comment), and thus they would not be able to exploit the undesirable denial-of-service possibilities of 'portsentry' either.

    I am waiting for a distribution to come set up that way out-of-box... yeah, right.

    -E
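
    For reference, the kind of rule 'portsentry' inserts when it trips, done here by hand (192.0.2.1 is a placeholder attacker address):

        ipchains -I input 1 -s 192.0.2.1 -j DENY
        # DENY drops packets silently - no RST, no ICMP - so a scanner
        # can't tell a blocked host from empty address space; a REJECT
        # target would answer and give the game away.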

  • by Woody77 ( 118089 ) on Saturday September 16, 2000 @03:21PM (#774120)

    I agree with him, for a simple reason.

    I started with a commodore, using the command-prompt, and moved up to a PC with a prompt, and that's how I learned computers, in elementary school. Probably not uncommon for the people on this site.

    Now, my little brother has never used anything other than 95. He loves computers, mainly games, but he couldn't use a command prompt to save his life, and can't even set the master/slave jumpers correctly on an IDE chain. He called me to try and get help over the phone...

    He's a smart kid, too.

    He's a senior in high school, and could get into any school in the nation on his test scores and grades.

    So why can't he figure out why the new game he installed whacked windows? Why can't he install a new HD? Because all he's ever used is point-click. He's never actually learned how things work.

    One of my professors once made a statement about "experts", one I've also heard from a few now-retired computing columnists.

    Essentially: A real expert does not know how to do 100 neat things (tweaks) with a piece of software (or other product). Instead, they understand fundamentally how it works. From that, they know how to do the same 100 "neat things", but they also know why those "neat things" do what they do.

    Sorry, enough ranting on "kids these days...."

  • I knew there was a reason that I started locking down my box this year. After reading a few articles, here is what I suggest as a minimum.

    First, I don't run wu-ftpd; I use proftpd. That eliminates one area of problems.

    Second, I use ipchains to log hits on ports that might be scanned. This means that if someone scans my machine, I have a log of the originating IP. It's probably not their own IP, but it's a starting point at least, and can lead to other machines they have compromised.

    Next, I have a script that parses the system log for things that look like the traces of a scan; it runs every 15 seconds, so that basically gives them 15 seconds to hack my machine before I hear festival go off announcing that I am being scanned (a rough sketch of such a watcher follows this comment). It needs some work to be perfected, but it is a start at seeing whether I am being scanned and hacked. I am lucky, though: it is just one machine.

    I am more worried about my machine at work. Several days ago someone else used my machine, and she said she had given her password out to someone else who may have used the machine. That is total stupidity on her part, and yes, I reported her to our security people. If my machine is used in a DDoS, then I know who is going to get in serious trouble for it.

    I don't want a lot, I just want it all!
    Flame away, I have a hose!
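
    A rough sketch of the kind of watcher described above (assumptions: tcp wrappers log refused connections to /var/log/messages, and festival is installed for the spoken alert):

        while true; do
            if tail -n 20 /var/log/messages | grep -q 'refused connect'; then
                echo "warning: possible port scan" | festival --tts
            fi
            sleep 15
        done
        # naive on purpose: it re-announces a scan for as long as the
        # line stays in the log tail; a real version would remember
        # what it has already reported.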

  • Good point. He really doesn't try to learn. But he's very typical of the people that don't want to learn.

    They complain that things don't work, but they don't want to learn how to make them work. In a word, lazy.

    They are the same people that amaze me when they don't know why you're supposed to change the oil every so often in your car...

  • In this case, though, Mr 4Life is probably referring to Three Letter Agencies, such as the CIA and NSA, and more broadly the FBI, KGB, and similar agencies around the world.
  • by Anonymous Coward
    I recently did my first Debian server install (and am actually still in the process of configuring it), and I learned some interesting things by doing it.
    One of them is that it's the OS vendor's fault that linux distros have such weak default installs. Debian is pretty decent security-wise in and of itself, but even it does outright weird things like shipping a firewall with no rules and ACCEPT policies on all the chains.
    The default should be completely locked up from outside to inside, opening up only ssh (a sketch of such a ruleset follows this comment). If a user wants a service, he needs to not only install it, but activate it too.
    If you do a regular install of, for example, red hat, you get dozens of services running, from named to apache to TELNET!!! Some of them are useless ancient history (telnet, the r protocols), and some rarely belong on the same system: pop3 and smtp and apache and ftp. Show me a real professional server that runs all of those, I dare you.
    Now if you want to offer an install option that enables all these services, go right ahead, but please don't offer it as the default. Linux newbies will always pick the default, and when offered the choice "install (y/n)", they WILL pick "y", because that's the windows way of doing things: install everything, even if you don't know whether you'll need it.
    It's actually possible to make a default install that has no firewall and still doesn't open up ports to the internet. Imagine how solid it can be made when you add a decently configured firewall. I'm not asking for a default deny policy, just some common sense. Leaving ports 6000 and up open to the internet is EVIL.
    So, to restate my opinion: it's redhat's fault that boxes get hacked. Because you don't need suid root stuff (yes, you can build a complete linux install where daemons have no access to suid root programs), and you don't need daemons running as root (check out tcpserver on cr.yp.to).
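
    A minimal sketch of the locked-down default this poster asks for, in the ipchains of the day (assumptions: a single external interface, and only inbound ssh should be reachable):

        ipchains -P input DENY                         # default: drop everything silently
        ipchains -A input -i lo -j ACCEPT              # loopback stays open
        ipchains -A input -p tcp -d 0/0 22 -j ACCEPT   # inbound ssh only
        ipchains -A input -p tcp ! -y -j ACCEPT        # non-SYN packets, i.e. replies to outbound connections
        # a usable config would also pass UDP replies (DNS lookups,
        # for instance); this is a sketch, not a drop-in ruleset.
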
  • by CvD ( 94050 ) on Saturday September 16, 2000 @03:57PM (#774144) Homepage Journal
    They got me too... they didn't install a root kit either. I checked my packages with MD5 sums and all my binaries checked out.

    I installed the patch that Red Hat had made available, but you mention commenting out RPC altogether. Wouldn't that toast NFS completely, which I use a lot? In my case they added two lines to /etc/passwd, one user with uid 0 and another normal one... they then logged in using telnet and left a 'backdoor' by setting up a second telnet listener on port 10023 (just in case I switched telnet off but was too stupid to see the new entry at the bottom).

    They also changed my root passwd; this is how I discovered the break-in. Of course, rpc.statd nicely reported that it was being buffer-overflowed in /var/log/messages, just like in your case. Getting my root passwd back was pretty simple: reboot the machine into single-user mode and change it. I haven't found any other visible damage... no root kit, no trojans, no ddos tools. Who knows what they wanted my system for (a few quick checks for this kind of tampering are sketched after this comment).

    Hope this info can be of some use to someone somewhere when landed in a similar situation.

    Cheers,

    Costyn.
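
    A few quick checks along the lines of what caught this break-in (a sketch; 'rpm -Va' verifies every installed file against the package database, much like the MD5 comparison described):

        awk -F: '$3 == 0 { print $1 }' /etc/passwd  # every uid-0 account; only "root" should appear
        rpm -Va | grep '^..5'                       # files whose MD5 sums no longer match their package
        netstat -an | grep LISTEN                   # catches surprises like a second telnet on port 10023
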
  • by wzc ( 233635 ) on Saturday September 16, 2000 @03:57PM (#774146)
    For those of you who don't believe in DDoS attacks, or just don't want to believe in them, please check out http://wzc.dhs.org/home/news/index.html and the news post dated 6th September 2000. This is a linux server which I run. It has been DDoSed many times this summer, each time taking out the ISP on which it is hosted. I managed to log all the networks involved using tcpdump and other such tools.

    The reason for it being DDoSed? It runs an eggdrop on IRC, so the attackers DDoS the server to make the bot ping out and then take over the channel... how sad. So people, these attacks are for real, and we had better suss them out. This is exactly what I did. With help from one of my mates, I managed to determine the protocol used by the packetting agents (the agents which actually generate the garbage traffic) and wrote a little C program which makes them packet. If you care to visit wzc.dhs.org's news section, you will see that the server was set up to perform a scan of all the networks which I had logged: it sent control packets to each potentially infected host on each network, telling it to packet my server for exactly one second. If the host packetted my server, I knew it was hacked and running a packetting agent.

    The list has now been submitted to cyberabuse.org, and CERT have also been notified (which is, I assume, how this posting got onto here in the first place). I don't claim to be "Mr Expert" on DDoS attacks; I did the scan out of general anger at the people orchestrating these attacks against my server over the summer. If anyone would like to know more about how the protocol works, or would like a copy of the C program which makes these packetting agents packet, contact me via the email address on wzc.dhs.org's news page. Maybe I should post it publicly so everyone can do their own DDoS attacks, and then the admins of the compromised hosts might finally fix their hacked systems. Thank you for listening.

    ----- Mark Hedges (admin of http://wzc.dhs.org)
  • Here's the Jargon File [tuxedo.org] entry [tuxedo.org]:

    TLA /T-L-A/ n.

    [Three-Letter Acronym] 1. Self-describing abbreviation for a species with which computing terminology is infested. 2. Any confusing acronym. Examples include MCA, FTP, SNA, CPU, MMU, SCCS, DMU, FPU, NNTP, TLA. People who like this looser usage argue that not all TLAs have three letters, just as not all four-letter words have four letters. One also hears of `ETLA' (Extended Three-Letter Acronym, pronounced /ee tee el ay/) being used to describe four-letter acronyms. The term `SFLA' (Stupid Four-Letter Acronym) has also been reported. See also YABA.

    The self-effacing phrase "TDM TLA" (Too Damn Many...) is often used to bemoan the plethora of TLAs in use. In 1989, a random of the journalistic persuasion asked hacker Paul Boutin "What do you think will be the biggest problem in computing in the 90s?" Paul's straight-faced response: "There are only 17,000 three-letter acronyms." (To be exact, there are 26^3 = 17,576.) There is probably some karmic justice in the fact that Paul Boutin subsequently became a journalist.

  • by halbritt ( 30189 ) on Saturday September 16, 2000 @04:34PM (#774150)
    Well, I had a user at one of my sites today get DDoS'd off the Internet. As a matter of fact, we were receiving so much traffic that my firewall at that site choked. I got a couple of packet traces; basically it was a bunch of tcp syn packets going to random port numbers. I started nmapping the source addresses to determine whether they were real or spoofed (spoofed source addresses typically include a lot of invalid addresses that don't actually exist on the Internet). It turned out that 80% of the source addresses in question responded to ping. After nmapping a few of them I came to realize that they were all Linux boxes. Here are the results for one:

    turmoil# nmap -sS -O 216.17.xxx.xxx

    Starting nmap V. 2.53 by fyodor@insecure.org ( www.insecure.org/nmap/ )
    Interesting ports on xxx.dsl.frii.net (216.17.xxx.xxx):
    (The 1506 ports scanned but not shown below are in state: closed)
    Port State Service
    21/tcp open ftp
    23/tcp open telnet
    25/tcp open smtp
    53/tcp open domain
    79/tcp open finger
    80/tcp open http
    110/tcp open pop-3
    111/tcp open sunrpc
    113/tcp open auth
    143/tcp open imap2
    511/tcp open passgo
    514/tcp open shell
    515/tcp open printer
    1023/tcp open unknown
    1024/tcp open kdm
    3306/tcp open mysql

    TCP Sequence Prediction: Class=random positive increments
    Difficulty=1200108 (Good luck!)
    Remote operating system guess: Linux 2.1.122 - 2.2.14

    Nmap run completed -- 1 IP address (1 host up) scanned in 54 seconds

    Now, I don't know how you would assess the skills of this particular administrator, but as for me, I would say he is completely and totally ignorant, and most likely stupid to boot. What kind of kneebiter actually puts a box like this in the wild? OK, here's a little contrast. I'm running a counterstrike server on a generic install of Redhat 6.2. Here are the results of an nmap:

    turmoil# nmap -sS -O 206.173.xxx.xxx

    Starting nmap V. 2.53 by fyodor@insecure.org ( www.insecure.org/nmap/ )
    Interesting ports on ahl (206.173.xxx.xxx):
    (The 1522 ports scanned but not shown below are in state: closed)
    Port State Service
    22/tcp open ssh

    TCP Sequence Prediction: Class=random positive increments
    Difficulty=2103891 (Good luck!)
    Remote operating system guess: Linux 2.1.122 - 2.2.14

    Nmap run completed -- 1 IP address (1 host up) scanned in 22 seconds


    That's it. Imagine that: a secure Linux box. What a novel concept. The key difference between *nix and NT administration is that *nix is designed to be remotely accessible, thereby making it more subject to remote attack, but it is also possible to secure. NT, on the other hand, is traditionally not as remotely accessible, which I think keeps it from being more of a platform for this sort of behaviour. However, when there is a security weakness in NT, it's usually there for a good long while, and on top of that, the system is difficult as hell to secure.

  • by smnolde ( 209197 ) on Saturday September 16, 2000 @04:53PM (#774152) Homepage
    I just did an install of RH6.2 this evening for a friend of mine. It was amazing how much crap gets activated when you select 'everything' during the installation! He wanted everything, so he got it.

    Needless to say, I turned off all listening daemons and promptly installed OpenSSH.

    I see absolutely no need whatsoever to run telnet or ftp servers anymore. And my friend didn't need to have them running anyway on a dialup connection so I got rid of them. And even if he wasn't on the 'net, he still didn't need telnet, ftp, nfs, etc... running.

    I agree that a good half hour of cleanup is required after any linux installation (one way to do it is sketched at the end of this comment). Even if RH is a 'newbie-ized' linux distro, all the NFS, rpc, apmd, pcmcia, sendmail, etc. services should be turned off until the sysadmin turns them on.

    I like the idea that I have a fully configurable, highly powered, and fully functional (free) OS. but dammit(!) let me turn the stuff on!

    No newbie should be faced with NFS or identd on their first day. Let them learn the power of GNU/Linux gradually; don't blind them like a deer in the headlights, but let them turn up the dimmer switch themselves.

    eof
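
    That half hour of cleanup, sketched with Red Hat's own tools (the service names are the usual suspects on a 6.2 box; adjust to what is actually installed):

        for svc in nfs nfslock portmap apmd pcmcia sendmail; do
            chkconfig "$svc" off        # don't start it at boot
        done
        chkconfig --list | grep ':on'   # review whatever is still enabled
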
  • by n3rd ( 111397 ) on Saturday September 16, 2000 @12:37PM (#774154)
    Sorry, but I cannot agree with you.

    Would 560 computers with OC3's attached to each one be enough for a DDoS? You bet it would.

    Would 560 computers with cable modems (capped at 128 Kb/sec upstream) be enough for a DDoS? Probably not.

    Also, keep in mind, this all depends on the target of the DDoS. 560 dialup machines with 56k modems would be enough to flood a few dialup connections, or perhaps a cable modem or DSL line.

    Thus, as for the question "is it enough for a DDoS?", the answer is "it depends on the connections that the infected computers have, and on the target."
  • by Robert Hayden ( 58313 ) on Saturday September 16, 2000 @12:37PM (#774157) Homepage
    Remember a few months back when the DDOS attacks happened? Just before that, the FBI and CIA and a whole bunch of other TLA departments sent out huge warnings about "cyber-terrorists". Then *POOF* out of nowhere come these DDOS attacks. However, they aren't aimed at any important infrastructure (like the root name servers, for example!!!), but instead at a few well-known and public e-commerce sites.

    OH NO! See, the evil cyber-terrorists have attacked and the TLAs must get their funding to stop it.

    Suddenly....*POOF* the attacks _END_. No "bad guys {tm}" were caught, but the problem goes away.

    Ooops, here come the FUD and scare tactics again! Time to eliminate some more civil rights to protect us from "cyber-terrorists" and make sure those TLAs charged with fighting this dragon are properly funded!

    Maybe this time the feds will attack something on the net that really is meaningful instead of ebay and yahoo. Otherwise, I just ain't buying it.

  • I was told that the problems were caused by a fiber cut, AKA backhoe fade.
  • by cOdEgUru ( 181536 ) on Saturday September 16, 2000 @12:41PM (#774160) Homepage Journal
    The MSNBC report states that most of the compromised systems are Red Hat systems, cracked via a recent exploit for which patches are available. These are surely systems whose admins are either ignorant or believe Linux or Red Hat is too secure to be compromised.

    This again brings to light the eternal question which begs an answer: is it the fault of the company behind the OS, or of the sysadmins who forget to apply newly released patches? My opinion is the latter.

    Any piece of code is liable to exploits, Windows and *nix included, and it's quite obvious that the script kiddies behind these attacks do not devise new exploits; rather, they piggyback on existing exploits for which users or admins have not applied the patch. The fault, I must say, lies with the admins.

    As long as there are systems liable to attack, open source or closed, there will be kids taking advantage of the exploits that arise in them. Rather than crying foul every time a new exploit is released, the geek community should make sure to plug the holes instead of pointing fingers.

    And plugging the holes in the Internet infrastructure that allow this kind of DDoS attack would be the sanest thing of all.

    My two cents
  • by TheGratefulNet ( 143330 ) on Saturday September 16, 2000 @12:42PM (#774161)
    so when the hell is redhat going to even come close to the level of security that openbsd has??

    I love linux and I wish it were more secure. I do tend to use redhat for my desktop boxes (it supports a lot of hardware and is the most well-known distro around), but I'd NEVER put it 'bare' on the net for all to play with. That's lunacy.

    For my public box, it's openbsd. I got tired of my linux boxen getting hacked ;-( Behind the firewall, linux is very nice - but just don't put it in its DEFAULT config on the public net. If you do, well, it's just a matter of time before you're hacked.

    But if redhat at least made the default config totally locked down, they'd enjoy a much better rep, and linux as a whole would take less abuse about security issues.

    --

  • by YoJ ( 20860 ) on Saturday September 16, 2000 @12:43PM (#774163) Journal
    I think the solution to script-kiddy wankerism is a revival of the demo scene. Everyone uses GUIs now, so it's harder to program to the metal and make cool demos that cut the edge of technology. That's what these dumb kids should be doing: actually improving their skills and learning something, rather than being destructive and 31337. I'd wager that many Slashdotters know at least a couple of kids who are probably warez dudes or 31337 haxors by night. Be a force for good in their lives; show them how they can create with a computer as well as cause problems.
  • I find it particularly interesting that if it is a Linux exploit, it is the fault of the "ignorant Sys Admin".

    If it is an NT exploit, it is the fault of the OS manufacturer, even if there is a 'hotfix' or patch available.

    I enjoy Linux as much as the next geek, but sometimes I'm disappointed by all the FUD coming from the same community that claims to be the greatest FUD victim...

    -jerdenn

  • RedHat is a consumer product. The vast majority of RedHat installs have no "sysadmin." They have a person who clicks a few buttons to install the damn thing and that's that.

    When a Win98 box is exploited, is it the sysadmin's fault? That question doesn't even make sense. And neither does it make sense to say that poor sysadmins are at fault for RedHat exploits.

  • Three cheers for that idea! I've been wishing the demo scene would find some life even without the extra incentive of deterring script kiddies--demos are just plain cool!

    A few pertinent links:
    • www.scene.org [scene.org] - sort of a central hub for the demoscene today, the way I guess the Hornet archive [hornet.org] used to be, though I didn't even know what a demo was when hornet was in business.
    • Orange Juice [ojuice.net], the self-proclaimed "demoscene information center", though I've never found anything useful there. Mostly pertinent to Europe, I think.
    • The famous hornet archive [hornet.org], which shut down in 1998 but still seems to host something of an archive.
    • A few budding Linux demo sites:



    Personally, I'd love to see growth in the Linux demoscene, because even though there are lots of great (and recent!) demos out there, no one from the DOS demoscene ever releases source code! I'd really love to learn some of the tricks of the trade, and it's hard to even know where to start without being able to look at the work of the masters.

    In case any of you have never seen a demo and happen to be running Windows, my personal favorite is Bakkslide Seven, made by the group Omnicolor [hellcore.art.pl]. Even more impressive is the fact that it is 64kb in its entirety: music, graphics, and everything!

    --
  • by jpowers ( 32595 ) on Saturday September 16, 2000 @04:56PM (#774174) Homepage
    While none of us really needs to get into another fight over which OS is better, I have to ask you to give your own post a second look and consider that your view of this situation may be too simple to be accurate: configuring and operating a linux server is different from doing the same with an NT box. Most NT servers install right off the CD: you fill out the networking info and start making user logins. All services are switched on either from the control panel or by installing them off another CD with "autorun".

    Linux is a different animal. It takes some work to configure one of these things. SendMail, Apache, Samba, X, whatever you need, you configure yourself, and unlike NT, everything is "off" until you turn it "on" - not only by running YaST, but by endlessly tweaking the relevant app.conf files. You basically need to know the inner workings of the programs just to get them to run. Of course, you get some pretty exact control in return, but it really does take a degree of effort just to think a program's configuration through. Not that you couldn't put the same time and effort into tweaking an NT box, but the distribution and marketing of NT don't encourage it. That doesn't make NT admins' sloth any less wrong than *nix admins' (NT is great because I slap an M$-approved CD in the drive, then sit in my big comfy chair all day and wait for it to crash, v. Linux is great because I tweak the hell out of Apache to get it compatible with my perl CGI style, then set hosts.allow to all:all because I'm too lazy to map my fscking users), but the culture and attitude that have developed around the two force *nix admins to take all the responsibility for their systems, while NT admins can just say "that's the way MS shipped it to us".

    Sysadmins of all stripes deserve SOME of the flak for the spread of viruses and the DDOS attacks from their exploited servers, but M$, by taking some of the control over the system away from NT SAs, also must take a proportional share of the responsibility. Consequently, the security audit my single-purpose linux ftp server failed last Thursday is my fault, but the NT guy gets to blame the MS-approved consultant who installed his fileserver.

    -jpowers
  • Yes! Going back and saying "well, we had to do it the hard way" and using that as some sort of measure for crowds of people isn't really fair.

    There is the same ratio of lazy people to motivated people today that there was yesterday. If you look around you see a lot of people on computers, but don't forget that back when we were learning - when you booted from a floppy because hard drives didn't exist - the people who were NOT motivated simply didn't use computers.

    Now it is different: they use computers because computers are practically a necessity, but they still don't want to learn them. If you look around, there are still plenty of people willing, even anxious, to learn things the hard way and understand the fundamentals of how they work. Same ratio, different set of parameters.

  • by Svartalf ( 2997 ) on Saturday September 16, 2000 @05:10PM (#774178) Homepage
    NFS is grotesquely insecure. If you have to use NFS, run it behind a seriously locked-down firewall box (see the sketch below). If you can avoid using it, use anything else. SMB is also less desirable than the alternatives, but its design won't leave you open to these attacks. I suggest using AFS or Coda at this point.
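
    If NFS truly can't be avoided, the firewall half of that advice looks roughly like this (a sketch; eth0 is assumed to be the external interface, 111 is the portmapper, 2049 is NFS itself):

        ipchains -A input -i eth0 -p tcp -d 0/0 111  -j DENY
        ipchains -A input -i eth0 -p udp -d 0/0 111  -j DENY
        ipchains -A input -i eth0 -p tcp -d 0/0 2049 -j DENY
        ipchains -A input -i eth0 -p udp -d 0/0 2049 -j DENY
        # caveat: rpc.statd and friends sit on ports the portmapper
        # assigns dynamically, so a serious config denies everything
        # by default and opens only what is known to be needed.
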
  • by Smack ( 977 ) on Saturday September 16, 2000 @05:11PM (#774179) Homepage
    Can you believe that? Those evil hackers have figured out how to raise the dead and have them fight as a zombie army. Man, this is almost as bad as copying DVDs.
  • my personal favorite is Bakkslide Seven

    That was beautiful. Thanks.

    -jpowers
  • Yes, lazy admins are the problem, but just to clarify, this isn't an "exploit".

    These machines were cracked in the ways any other machine would be cracked (DNS, rpc.statd, sendmail, etc.). The person who cracked each machine then put the DDoS software on it for later use.

    Lazy admins are the issue since they did not take the appropriate measures to secure their boxes (applying patches, setting up a firewall, etc), but the actual DDoS software was installed as a result of the boxes being hacked. Hence, the DDoS software is the result and not the cause of the hacking.
  • This is a really basic way to check what ports are open on a Linux box:
    Open /etc/inetd.conf and look for lines that do not begin with a hash (#); these are the services inetd is listening for. Don't rely blindly on the "grep -v '^#' inetd.conf" way of checking, as it misses comment lines where the # is not the absolute first character (a more careful variant is sketched after this comment).
    Ideally you should run nmap against your machine. Inetd will listen on the ports configured in inetd.conf, while other services (apache, for example) listen on their ports directly. There is a front end called nmapfe that automates the procedure; run it against your local machine.
    You could also visit https://grc.com and run the network scan tests there. The site is geared toward Windows services, but the probe itself is useful. Don't trust what the guy says in his FAQ, though; it's meant to sell his firewall product and some of the information is wrong.
    Once you realize your machine is wide open, how do you disable those services? First, edit inetd.conf and comment out the lines for services you don't need, then restart inetd with a "killall -HUP inetd". Go into LinuxConf and disable the other services. For now I'd suggest completely disabling wu-ftpd and rpc.statd until the fixes have been tested for a while.
    As for any security, don't trust your box to this minimal information; there are lots of other open ports that I didn't address here. Do some reading!
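
    The audit above, condensed into three commands (a sketch; the grep variant also drops indented comment lines, which a plain grep -v '^#' would leave in):

        grep -v '^[[:space:]]*#' /etc/inetd.conf | grep -v '^[[:space:]]*$'   # services inetd will actually start
        netstat -an | grep LISTEN   # everything listening, inetd-managed or not
        nmap -sT localhost          # a second opinion from the network side
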
  • You are correct; I should have put quotes around the second usage of "hard way". That was my intention.

    This is why I think they should teach *NIX in the classrooms instead of just teaching you how to use word processors and spreadsheets. Some would argue that it would be learning the wrong environment since everybody uses MS products in the workplace, but I disagree. Since I've started using Linux I have learned far more about the fundamentals of a computer than I ever learned under DOS and Win32. I'm able to take that knowledge and use it in the Win32 environment easily.

    The problem starts with the lack of education in the area of computers. As part of a regular high school curriculum, you should take basic *nix fundamentals, basic C programming, maybe Perl, and a hardware course. The fourth year could then move on to MS products, VBS, and system maintenance. Once you can accomplish tasks of that nature, you can very easily pick up the simple stuff, like making spreadsheets and typing research papers; it would be second nature at that point.

    This isn't going to happen any time soon though. The computer market has way too much inertia in creating software for the dummies. This only leads people to take an increasingly 'dumb' approach to computing.

    So things are still as I outlined in the original post: there is a division between the people who want to get their fingers dirty and those who don't. As you put it, they see the computer as a tool. That isn't the way it should be - I never meant to convey that it is.

  • agreed - sw (Slackware) sits about midway between obsd and redhat.

    I use sw on my 'embedded' systems (mp3 players, mostly). It's a nice minimal install, and yet it's easy to work on (ie, it has gcc) ;-)

    but still, there's something about the development model in linux (ie, chaos) that isn't there in the bsd world - at least not to the same degree. and obsd puts security -first-. I just really like that for public dmz boxes.

    --

  • Just a curiosity thing - I'm not actively looking. I'm just curious where root kits come from and how they get around. I have one or two that are a few years old, which I removed from another department's hacked machine.

    Do they make their rounds via IRC? Usenet?
  • um... good luck DDOS'ing the root servers. hope you have a couple million computers.

    Just out of curiosity, what kind of hardware do the root servers run? Is each root server actually a cluster or are they each just a Really Big Machine?

  • Maybe your little brother simply isn't that interested in how they work, either. If he travelled the same path you did, maybe he would have gotten sick of them all and done something else instead. There are plenty of people who started out with GUIs and still managed to learn many deep things about how their computers worked. There are also plenty of people who started out with C64s and still can't properly chain IDE hard drives.
  • but here's what doesn't make sense about your post: if you like redhat so much, and are so knowledgeable about locking down a box, why don't you use redhat on the net? It takes about 30 minutes to configure ipchains, ntsysv, and hosts.allow (the hosts.allow piece is sketched below), and you're secure. I too wish that RedHat would ship in a more protected configuration, but I don't let it stop me.
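
    The tcp-wrappers piece of that half hour, sketched (the daemon name and addresses are placeholders, and wrappers only guard services that run under tcpd or link libwrap):

        # in /etc/hosts.deny -- refuse everything not explicitly allowed:
        ALL: ALL
        # in /etc/hosts.allow -- then open only what you need:
        sshd: .example.com 192.168.1.
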
  • by stab ( 26928 ) on Saturday September 16, 2000 @12:51PM (#774199) Homepage
    I've been scanning the bait logs on my machine (I run simple tcp listeners on ports 111, 23, and others to report scans; one way to set that up is shown at the end of this comment), and over the last four weeks the rate of scans against the machine has gone up by orders of magnitude.

    Probes to port 111 come in about twice a day, from a large range of IPs. These boxes could all be compromised and being used as part of a worm attack, but I don't have time to track down the postmaster of each IP address and mail him or her.

    Does anyone know if there's a service run by CERT or anyone to report possibly compromised hosts that turn up in our logs too?

    If not, it would be pretty useful to have ...
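
    For the curious, one way to run that kind of bait listener with stock tools (a sketch using the classic netcat, whose -v flag logs the connecting peer's address to stderr):

        while true; do
            nc -v -l -p 111 < /dev/null > /dev/null 2>> /var/log/bait-111.log
        done
        # netcat exits after each connection, so the loop restarts it;
        # the log accumulates one line per scanner that connects.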
