The Internet

Internet Auditing Project Results

The Internet Auditing Project has returned with some pretty grim results. Starting in January 1999, they scanned 36 million servers for common vulnerabilities and found that a huge number of machines, including some you would think wouldn't be, are wide open. They've also made the program they used, the Bulk Auditing Security Scanner, available for download. Pretty disturbing results, though; well worth reading.
  • Yes.

    I try to be at least a little paranoid on any system I set up for public access. Installing ssh, disabling telnet, allowing connections only from hosts with intended users, yadda yadda...

    There's no way in hell I'd catch that.

    My machine could be cracked right now, and I wouldn't know it.

    That scares the sh*t out of me.
  • > If someone runs a lame box like that, LET them get cracked, and LET them learn the hard way.

    Yes, and what if someone on that EZ box uses SSH to connect to your "secure" system?

  • Why would they have an account?
  • With a securelevel of two the kernel cannot be altered and even root cannot fiddle with disks and memory devices. You can be pretty sure your kernel is yours unless the machine is rebooted, but you could notice that. (Why the hell have I been disconnected?)

    Don't be so sure about noticing reboot.

    I think it's theoretically possible to reboot a system, replace the kernel, and bring back all of the processes as if nothing has happened. This of course requires grabbing images of the processes and their state before rebooting. Really, if you can replace the kernel, you can do practically anything.

    Heck, on most servers it would probably be sufficient to reboot the machine in the middle of the night and then fiddle with the system uptime and processes' run time to make it look like the uptime was still 200+ days.

    So, when using securelevels it is important to make sure that everything involved in the boot process is locked down.

  • READ THE FUCKING SOURCE!
  • by Rob Wilderspin ( 2556 ) on Saturday August 14, 1999 @01:08PM (#1746232)
    What if that "someone" with a lame box is your ISP? What if it's the company you work for, or your government? What if it's the Debian FTP servers (for example) and the intruders start introducing subtly designed backdoors into packages that *you* download and install because *you* trust the server?

    As they said in the report, we don't live in isolation on the Internet. If any host is insecure then there's usually a knock-on effect which could affect any one of us.


    Rob Wilderspin

  • That the Linux kernel could be modified on-the-fly via a module is a serious security hole. That really needs to be fixed, urgently, IMHO.


    Well, if you're that paranoid you can compile a kernel without module support. (You may have some difficulty with newer hardware, PnP devices and such, though; that's a problem. Maybe sort-of modules: compile it like a module but load it into the kernel at bootup, just delay its activation and don't allow later loading of modules.)

    Loadable module support is always going to leave you vulnerable to this sort of thing, though.

    Daniel
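For what it's worth, the no-modules build Daniel describes is just a kernel compiled with CONFIG_MODULES switched off; in a 2.2-era .config the relevant fragment looks like this (a config sketch, not a complete configuration):

```
#
# Loadable module support
#
# CONFIG_MODULES is not set
```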
  • Anyone else having problems getting it running under Linux? My new binary just core dumps...
  • I read this on Friday when it was posted to BugTraq, and the story of their .jp box getting cracked should be enough to scare most Slashdot readers, I think.
  • I hardly consider 450,000 hosts out of 36 million to be a huge amount. In fact, it comes to 1.25% of the hosts. I thought this odd when I read the original BugTraq/SecurityFocus article on this subject. A big deal was made of the fact that 450 thousand hosts were vulnerable to common exploits, but nobody bothered to note that this was less than 2% of the tested hosts.

    I think it was a neat project, I was very interested in the "super hack" that occurred on one of the participating scan servers, and I think the group's recommendation for an IDDN is a worthwhile project.

    But I am actually still reassured, not scared, by the fact that less than 2% of the hosts in their fairly sizable test group were "wide open" (as I would consider any host that is vulnerable to a common exploit).

    -Count Zero-
  • by Millennium ( 2451 ) on Saturday August 14, 1999 @11:23AM (#1746247) Homepage
    One word: bravo! If this one ever comes up, I'll gladly put away RC5 and SETI@home to work on this. It's important that the Net be a secure place, and we need some kind of mechanism to ensure that holes are found and stamped out. While IDDN wasn't quite what I had in mind, it's definitely a winner.

    Of course, law enforcement will hate IDDN; after all if there are no more security vulnerabilities how are they going to snoop on us^H^H^H^H^H^H^H^H^H^H^H^Hprotect us from evil terrorists?
  • How about a module that prevents the loading of other modules? That could act as a sort of stopper after all modules are loaded at boot.
  • Puhlease.. This ranks right up there with "Scientists Discover Doors Make Homes Vulnerable To Intrusion." Let's see some real studies.. You know, NOT one meant to push a product?
    Bowie J. Poag
  • 1) Location, Location, Location
    2) No, they *shouldn't* go to prison. Some archaic (Is that how you spel that?) laws say they should go to prison, but that's something rather different.
  • Face it: This is the Internet. There is very little law here; the evolutionary survival of the fittest and strongest has been converted into the survival of the people with the most knowledge. To be honest, I kind of like it that way. IMHO, just a hack shouldn't be a crime. Doing damage on purpose should be, and things like spamming, DoSing, etc.... Though calling it a crime won't work here of course, because /the law/ can't catch up. I think we'd better go for an RBL-ish structure. If an ISP doesn't fix its holes and sue/kick offenders, its traffic gets blackholed. As long as a reasonable diversity in ISPs is maintained, this should work very well.
  • The window manager does NOT listen for connections in the X model.
  • by Anonymous Coward
    # On Solaris you'll need to add *at least* these linker flags:
    # -lnsl -lsocket -lresolv -lrpc (is that how the rpc library is called?)
    #
    # On Irix you'll need to... Hmmm...
    #
    # Forget it! I'm not going to fight Unix. Here's a nickel kid, go buy yourself
    # a Linux distribution.
  • Or they could just run the _freely available auditing tool_ themselves.
  • The story of the .jp box REALLY SCARED THE SH*T OUT OF ME! I'm sure that my box is much more vulnerable than that paranoid .jp box was, and I'm CERTAIN that I would NEVER detect an intrusion such as this. The only way I would not lose sleep over something like that is knowing that there really is no way that I can prevent it.
  • Yeah, it doesn't work here. I found on strace-ing the binary that it was having problems when the logfile didn't exist, so I touched a 0-byte log file and it still exited PDQ. As far as I'm concerned it's a crock of crap and an excuse for a ranting article about "security"... pretty useless.

    ~Tim
    --
  • yeah... unalias rm;rm -rf / and install a real operating system.

    Never was the name 'anonymous coward' more applicable; the only more applicable one is 'pillock', I think.

    ~Tim
    --
  • by Anonymous Coward
    Check If Hackers Were Smart [hackernews.com] at Hacker News Network [hackernews.com]. It expands a bit on the issue (mostly at the end of the article, but to understand it you have to read everything).

    In fact, if "them" can get into any box on the net, they could change our systems in a way we wouldn't be able to notice (read "Reflections on Trusting Trust"). Then any system we made from any of our systems would be compromised too. "They" would have backdoors to any computer in the world, and there would be no way we could find out except out of sheer luck ("this system is acting strange, the foo feature is not working as the source says it should...").

    See also Worst Nightmares Come Alive [hackernews.com].
  • Aside from the critically bad spelling and the grandiosity -- Seven hundred thousand vulnerabilities, gaping holes, wounds in the skin of our present and future information infrastructures, our dream for a free nexus of knowledge, a prosperous digital economy, where we learn, work, play and live our lives. -- I mean, puh-LEEZE . . . still, some interesting results.

    Notice which vulnerabilities are the most common:

    tooltalk 26.1%
    bind 18.1%
    wu_imapd 15.5% (hel-LOO! anyone hoooome??)
    qpopper 12.4%
    wwwcount 11.8%
    rpc_mountd 10.8%

    This was right before the big wave of tooltalk advisories came out so it may be somewhat less now.

    What is instructive is that, face it, TCP/IP and all the associated dependencies are way too complex; but we can't roll this back, so get used to it. No reason to give up on tightening things, of course.

    For an interesting view from another level, read Stephen Northcutt's new book Network Intrusion Detection (New Riders Publishing).

    -------


  • In FreeBSD you can do something to the effect of 'options NO_KLM' (afaik) in the kernel configuration and disable loadable modules anyway, after a requisite kernel recompile.

  • Same thing here.

    It seems to occur if you actually use the ``-l logname'' option (at least this is what triggers it for me). The problem seems to be getopt() not seeing the ``logname'' part, returning 0, which then causes the segfault. I am at a loss as to why this would be happening.

    However, doing the full `make install' and running it as root with the default log file location (/usr/local/bass/bass.log) works for me.

    In general this program seems a little shifty. For example, why is there the (default) ability to clear out its argv[0] (and thus be less immediately detectable in the process table)? And why does it need a ``coward'' mode? It seems as if it was written to run on machines where it wasn't supposed to be.

  • 1.I already exempted systems programming. You may as well bring up assembly language too.

    Then what kind of applications are you talking about? A buffer overflow in a program that never uses anything but the data it produced before can't compromise the system anyway. And if a program is a mailreader, text editor or web server, the same rules as for "system" software apply.

    2.bzzt! thanks for playing. Many other languages such as Ada and Eiffel are "bootstrapped" from C into a subset compiler, and then fully rewritten in their own programming language and used to compile themselves. You obviously know zero about compiler technology. I suggest taking a course or reading the Dragon book.

    I am familiar with that. The point still stands -- an implementation does not necessarily work correctly; nothing gives a hard proof that the combination of generated code plus system libraries plus OS still follows the rules that were supposed to be implemented.

    So? The point is it's safER, not completely safe. With a fully debugged string/buffer/collections library with assertions and bounds checking, fewer programmers will "reinvent" them and there will be fewer bugs.

    This has nothing to do with language -- only with libraries. And "standard" libraries that come with C++ can help only with extremely dumb cases of buffer overflows, so "increase" of safety exists only if a programmer doesn't think at all. To eliminate the theoretical possibility of buffer overflows one has to eliminate pointers, and this can't keep system simple and efficient enough to be implemented well -- look at Java.

    The facts are, stronger typed languages produce less bugs, period. Years of study by the military and NASA (do a google search) bear this out.

    Why should I trust them? Military pushed Ada for ages and achieved nothing -- they have to "justify" their wasted time somehow.

    WRONG! In many if not most cases, the compiler has enough information to *PROVE* that an invalid array access can't occur, and hence can remove bounds checking code altogether. This occurs, for example, when a variable is incremented monotonically, like most iteration code, which accounts for most loops.

    Problems start when functions are being called, and cross-dependencies between data structures in them (not necessarily even data structures that exist at the same time and definitely ones that didn't exist when libraries were compiled) come into play.

    Web Apps in particular are a perfect example. Their performance is easily bounded by network, disk, memory, and database performance long before CPU even matters. The CPU spends most of its time waiting for I/O to complete.

    This only applies to primitive applications that do nothing but wait for the database to return a result, then do something trivial to present it. A "web application" that performs complex calculations on its own, or a backend application that keeps track of various parameters that are not stored in the database, will still be CPU-bound. If a lot of amateurs manage to stick a database, and high-latency access to it, everywhere it doesn't belong, that doesn't mean this is a correct or even sane way to solve those problems.

    Any book on scalability will tell you this. And, when it comes down to CPU bound apps, 80-90% of the CPU time is spent in 10% of the code, so it makes sense to concentrate your risky optimized bounds-check free code there.

    This won't improve security because in most cases exactly the same code is most likely to contain buffer overflows, and under various conditions the distribution of time spent in different pieces of code may change, thus opening an easy DoS vulnerability (ex: almost all router DoSes).

    I'm simply amazed at the number of so-called programmer geeks on Slashdot that don't understand Knuth's Law or Amdahl's Law. People who have orgasms over MHz ratings, and then wonder why some code on a 90MHz Sun will rock their 300MHz Pentium application.

    How is it relevant? You still can't take an already implemented and optimized application and make its equivalent on the same hardware with the same performance if you reimplement every part of it in a less efficient way.

    I'm sorry, but C simply isn't the right tool for every job. It has it's place in systems programming, much like Perl is great at text processing. It sucks for everything else, and I speak as someone with 13 years C experience. I guess, I decided that it is better to know and use multiple programming languages than to rely on only one.

    Most things still can't be implemented in an inefficient language and be expected to remain usable. If one is afraid of buffer overflows, he should get more benefit from buffer-overflow protection tricks in function calls, nonexecutable stack, etc. -- and even that is considered too expensive in terms of performance in most cases. And if someone claims that his language can eliminate all possible, or even most, classes of security bugs, he is lying, and Java is the best example of this so far.

  • IIRC you need root perms to tamper with /dev/kmem.

    IIRC you can turn off /proc support or even patch your kernel to get rid of /proc/kmem.

    IIRC there is *no* way to write to a write-protected floppy through software.

    If someone rooted your machine then you're fucked anyway. I always thought the trick was to make your machine either impossible (yeah right) or very difficult to root.
  • *SIGH* I never said anything about the SSH password being stored! I was talking strictly about the ISP password! If anyone's not read something, you didn't read my reply; you were too busy picking faults with it.

    Second, it doesn't matter if it was an overseas phonecall. The tunnel endpoint wouldn't be with the ISP! It would be with the host machine.

    Thirdly, yes there would. Yes, you could break the encryption (it's very simple), and modify the routines so that they use absolute values rather than checksums. Yes, you can install stealth code to trick the kernel into thinking no changes have occurred. All these are perfectly possible, and I understand the mechanisms well. The point is, the magnitude of the changes required is far greater than for a simple, unprotected binary. The stealth code is therefore going to be bulkier, to hide all those changes. So, either the stealth code is going to reveal itself, or an antivirus scanner will pick up the changes.

    The trick with any kind of protection is to not bother trying to make the protection itself perfect (you can't), but to force attacks on that protection to be visible.

  • >> Thirdly, how did they get hold of the ISP password?

    > He probably used the password-saving feature
    > of the dialup software. It's easy to get it
    > this way (since the dialer needs the plaintext
    > password to do the dialing)

    Just being nitpicky here, but CHAP doesn't send anything plaintext...

    ... unless you're talking about grabbing the ISP password from the registry or something.
  • something I've been working on... use hardware-assisted OTP: a pair of Keeloq [microchip.com] devices, one on your serial port, one on the server's. The server's password doesn't increment until a valid password has been used.

    when you SSH in, the password is passed and the index incremented. Use the same thing for su. PAM would be wonderful here.

    Any thoughts?
  • Linux got rooted because he knew a login password and used the KDE buffer overflow vulnerability (which should have been patched). The kernel module thing has been around for a long time (check Phrack magazine for a BSD one). Anyway, this scares the shit outta me personally.
  • As has been said, this is only like 18-19 common exploits. If the server in Japan was cracked and it was only running SSH and HTTPD, then this is only a small percentage of total hosts that could be cracked (whether they keep up to date on patches or not). In this case it was because an NT machine was compromised, and that machine connected to the same network as this server. If this is the case, I suspect that the total of computers that COULD be cracked by a GOOD cracker would be as high as 70% to 80%. Now this is something to be worried about.
  • by Raetsel ( 34442 ) on Saturday August 14, 1999 @03:34PM (#1746281)
    Note: All numbered items are direct quotes from the SecurityFocus article by Liraz Siri. The intent here is not to flame, but to state the facts as I understood them from the article.

    "The crack was via an NT box, so the weakness was less in Linux itself than in NT. (NT has more holes than swiss cheese.)"

    1: The attacker knows the employee's username and password and is even connecting through the employee's Japanese ISP on the employee's account! (the phone company identified this was an untraceable overseas caller)
    2: This is only a hypothesis, but is strongly supported by the fact that the entire attack only lasted an incredible 8 seconds! During which the attacker manages to log on (over an employee's SSH account, no less), gain root privileges, backdoor the system, remove any (standard) traces of its activity and log off.

    3: Further investigation shows that this employee's personal NT box, connected over a dynamic dialup connection, had been cracked into 4 days earlier.

    It appears that the crack was due to an NT box, not via it. The actual intrusion came in at the Japanese ISP, and the intruder already knew the username and password for both the ISP and SSH. Note that the phone call to the ISP is from an "untraceable overseas" number.

    "The second vulnerability was SSH. Someone altered the SSH client to act as a trojan. This should not be possible - programs should be able to detect if they've been modified. Failing that, a virus scanner should be able to detect modifications."

    4: Readers should also note how although a key binary in the cracked machine had been modified, tripwire and an assortment of other booby traps failed to detect this had happened. Even a close-up manual inspection (comparing file contents with a trusted backup, playing with its name) could not detect any odd behavior. This trick, and others equally spooky, were achieved by clever manipulation of the OS's kernel code (dynamically, through a module).

    They were using scanning and file comparator software. Even when the backdoor was identified and manually examined, they "could not detect any odd behavior"! Impressive.

    "Thirdly, how did they get hold of the ISP password? The article said SSH was cracked, but not that the dial-in software was. "

    There's no specific quote I can use here, but knowing the NT box was compromised leads me to believe that the ISP account was compromised shortly thereafter. I've tried L0phtCrack, it's an impressive program. If I can 'script kiddie' almost every NT machine I've ever worked on like this, getting the ISP account info out of the registry isn't much of a stretch.

    I want to know how they ID'd the NT box in the first place. I don't know how they did that, and I can't even start to guess...

  • by Anonymous Coward
    I shall ignore NT ;-) but 4 ssh ... nothing new either but yep, fair enough. But let me tell you what one of the security experts in my former company told me: "If people tell you they have never been cracked, then they are either liars or haven't got a clue". In short there is no such thing as total security, as there is no such thing as bug-free software. It is not about not getting cracked but about how long it takes "you" to detect that and how "you" can limit the possible damage. Insofar a seemingly insecure system can be better than one that, seemingly, is secure. The first one might be one taken care of; the second one might be one where the admin feels happy about having installed the latest security software that does it all itself. This is the reason I am not impressed, as what those guys did was not even light-years near a security audit. It was fun for sure; sort of, as they ate up a lot of resources and kept quite some people busy who have got better things to do, but that is all. Once you've read the paper that describes how tcp_wrapper came to life you know what *real* trouble means 8-}
  • > The second vulnerability was SSH. Someone
    > altered the SSH client to act as a trojan. This
    > should not be possible - programs should be able
    > to detect if they've been modified. Failing
    > that, a virus scanner should be able to detect
    > modifications.

    As the author of the package that was trojaned (TTSSH), I feel obliged to point out that there is NO WAY for a program to reliably detect if it's been modified. The cracker can always just disable the code that's supposed to detect the modifications. (This is a bit easier when source code is available, as TTSSH's is.)

    A separate virus scanner might be a bit tougher, but would be vulnerable in the same way.
  • their tty camera thing - sounded neato. Anyone have something similar??
  • Or, in other words, the conspiracy is so good it doesn't even know of its own existence.

    Sure... if you want to refer to marketing as a "conspiracy". And that's not too far from the concept. Marketing IS selling. Selling is getting people to do what you want them to do. It's just that marketdroids are so much less exciting than gunmen on grassy knolls.

  • by Anonymous Coward
    Note: Livermore is part of the Department of Energy, not DoD. Furthermore, Lee worked at Los Alamos, not Livermore. Furthermore, one of the problems with Los Alamos is that it is partially run by the University of California. One of Lee's security lapses was that he put secure material onto his unsecure computer. This transfer was supposedly done not through a network but by some physical means (tape?).
  • FYI, the kde buffer overflow has been fixed, though I'm not sure if it was back in December when this survey was running. I'm sure a browse of the KDE mail lists would reveal exact dates (www.kde.org) if anyone else is that interested. :)
  • Clue: they scanned 36 million addresses. They were only looking for vulnerabilities on Unix hosts. Now, how many of those 36 million hosts were running Unix?

    Be scared. Be very scared.
    -russ
  • The article said how the NT machine was sending fake DNS packets down to Australia. I assumed that's how they got all the passwords and any other information they wanted from the NT box. A virus, trojan email, Back Orifice, etc. can easily get installed (depending on the user's competence). After that, the rest is all downhill...

    And the eight seconds bit is certainly enlightening. They had to have had a way of knowing exactly what was on this box (notably KDE's buffer overflow problem) to get in, do their thing, and get out. Perhaps it could be BO; they could sit back and watch the user for a couple of days, grabbing passwords, watching what they do when they SSH in, etc. Hmmm
  • by Anonymous Coward

    A global fury of half a billion packets, digital signals zipping back and forth across the planet at the speed of light. Above the Earth, across the land, under the sea, over satellite microwave, copper wiring, fiberoptics, wireless and undersea cable. Probing cyberspace.
    In the world of tomorrow...

    Seven hundred thousand vulnerabilities, gaping holes, wounds in the skin of our present and future information infrastructures, our dream for a free nexus of knowledge, a prosperous digital economy, where we learn, work, play and live our lives. Easy pickings, at the fingerprints of anyone who follows in our footsteps, friend or foe.

    ... all is not well.

    Struggles for power in the digital domain could very well develop into the world's first real information war, with the very future of the Internet as a free unregulated supernetwork caught in the cross fire.

    The stakes are huge, the weapons deadly...

    The only thing necessary for the triumph of evil is for good men to do nothing. Wake up fellow countrymen. Let's get to work.

    ... but the proud people of the free world stand strong (led by Harrison Ford, no doubt).

    Coming August 14th to your local cinema.

    Whoever wrote that article has some pretty big delusions of grandeur there. The same applies to the "IDDN" project. First, they seem to have forgotten that scanning systems for vulnerabilities is very much illegal in many countries. I don't know the specifics, but I'd guess that it's illegal in much of North America and western Europe, which also happen to be the homes of most Internet users. That might kind of hurt any effort of this type. People are also not going to like being on the receiving end of regular scans. It would certainly make it difficult to tell the difference between a "friendly scan" and an attempted (or maybe realized) crack. Finally, I'm sure that someone would manage to subvert that system if it ever came into being. This isn't like hacking a distributed.net client; subverting the system would allow you to gain access to many computers.

    And in the final analysis, the numbers aren't all that bad. Under 2% of the servers on the Internet is not a huge number. I'm sure many of those boxes are relatively unimportant boxes that are probably just personal machines. That 2% may be composed of forgotten boxes in university/corporate offices. Also, this is purely theoretical; I doubt that all those boxes could just be cracked into in seconds.

    Oh, one more thing: someone tell this guy what "How discrete" means!

  • I have always wondered why Government agencies have whole-heartedly adopted NT in spite of the obvious security holes. Surely, such an OS is a threat to National Security.

    ...and NT will be used worldwide if PHB's see Government departments all using NT.

    That's a nice idea, but there's a simpler explanation. Government IT budgets are controlled, for the most part, by PHBs. What sells to corporate PHBs also works for government PHBs.

  • man you suck.
  • No no no.. It doesn't matter if you are a small one-man operation or not. If you are on the net and people are talking to you, you need to have some sort of basic security. If you are insecure, you open the doorway for crackers to get to anyone you touch. Let's say I'm a one-man software-writing business. I have a "free trial" on my web page.. I don't need to be secure; after all, I don't take credit card orders or anything. But, if a cracker penetrates my host, my free trial can be patched to open a few ports on their box. Oh, but no one with anything to keep secure would download a free trial and just run it? Maybe not, but one of their employees might, on his home PC.. "It's my PC, I'll install what I like".. then he SSHes into work.

  • (note to moderators: I'll repost this comment until you stop moderating it down)

    Doesn't this just SCREAM for a "flamebait" tag? :)

    Welcome to the real world. It's populated by lots of generally cool, intelligent people. It's also chock-full of people with malicious intent. That's why we have people who make it their profession to protect those who hire them.

    The dynamics are the same online as offline. Most people already know this. Perhaps it's new to you? In any case, either produce some Good Ideas on how to fix the situation or take your sabre-rattling elsewhere. On the off chance that this is humor... work on your delivery.

    (note to moderators: aren't these kinds of notes simply missing the point?)

  • I have a strong political interest in anarchy. My business model reflects that.

  • by Anonymous Coward
    Seems to be three camps here.
    1. Stupid thing to do, go find someone to work for.
    2. Cool project. I make comments but do nothing.
    3. I posted first!
    Look. Back in the old days we used to call up BBSs and download the latest t-file from the cDc. It called for action: "Go out! Save yourself! Do something!" I'm only 23. But back then I went to school and didn't think about guns. I thought about nukes. Now suddenly, people are shooting each other left and right. People get pissed and say we should do something, but nothing is done. Or people just think it doesn't happen around here.
    Well, let me offer you a clue. Nukes = got enough money, go get one. Guns = that kid Kip in Springfield; I dated his sister -- I cooked dinner in his house.
    This guy scanning is like Paul Revere. One if by land, two if by sea, three if by fiber. China is getting pissed and our whole economy is running off the net right now.
    If you don't smell the smoke you won't get burned right?
    Linux is a counter-culture. Once you have a mascot you just sit back and watch the game. Slashdot and freshmeat are your cheerleaders, but won't it be funny when you find out that's the idea.
    Keep em occupied and they will obey.
    Stop reading these half-thought, rant things and look around.
    Look at a gas station, see that dish? Look at the supermarket, see that VeriFone? Can you say rations, food coupons, etc.?
  • The whole thing is amateur code, dude. If you want something that isn't "amateur", buy Solaris, and we all know how secure Solaris is. Please, the reason we use C is because it is fast and we can compile it anywhere. What's more, it's been a static language for over 10 years. Sure, let's code everything in Java; then we have to write a JVM and a JIT and keep it conforming to Sun's specification. Bounds checking is all well and good, but you bitch and complain when your code is too slow. C does what the programmer tells it to do, nothing more. Java does whatever the hell it wants to do, and the programmer struggles with it to get it to do anything remotely useful in a reasonable time. Oops, I promised myself I wouldn't dis Java's speed.. JITs make it pretty much equivalent to C speed so I can't really dis that.. but let's dis the fact that you need a support library about the size of Mount Everest to run anything with it. I'm all for a secure operating system, but it is going to come from people implementing intelligent programming with concern for security on their mind, not from some magical programming language with bounds checking.

  • Granted, they could have been more open and aboveboard, but for what was done it at least followed a principle of least damage. What impresses me the most is the open and meticulous documentation of the project; the report was quite nice to read, especially considering that the author noted English is not their native tongue. Hopefully the distributed BASS can be implemented ASAP, and convince more people to A) use transparent encryption where possible, and B) think about the issues and their relevance to themselves (of course, expecting most people to think is a chore in and of itself).
  • why don't you just..

    *gasp*

    actually read the article, download the scanner that they used in their survey, and scan all the computers that you're wondering about

    not too hard now is it?

    btw, it compiled fine for me.
    ...
  • Window managers run as the user and do not listen over the network. As such, a buffer overrun in a window manager would not gain an attacker anything. X, on the other hand, runs as root (and by default listens on port 6000). Block port 6000 at the firewall, and tunnel X through ssh. If you're hyper-paranoid, boot off a write-protected floppy, and run tripwire (binary and database) from a burned CD-ROM. Make sure that tripwire is statically linked! Might want to put ssh on that CD too.
  • Oh, maybe we should all just install "FED-watch" and leave it to the "Internet Protection Agency" to keep us all secure. We're in an anarchy here; we can either embrace that or we can try to build up authoritarian governments to destroy it. IDDN is an idea that very quickly leads to the libertarian argument (as told by Robert Nozick in "Anarchy, State, and Utopia"): When living in an anarchy we will sometimes come to disagreements. Sometimes these disagreements will need brute force to resolve (Nozick doesn't believe that people can resolve conflicts without resorting to violence) and we can't allow anyone to just attack anyone they like.. this leads to feuding, because people will sometimes over-punish those who wrong them, which will lead those people to seek retaliation. So, we say, form a group (or a union) who agree that if one person needs backup, we'll all pop up and go and help them. This gets real tired, real quick, so we ask "isn't there someone I can just pay to do this?" and someone stands up and says "yer.. I'll protect you if you pay me".. so we have these protection agencies that stake out certain territories and get people to pay them. Now, eventually someone from one territory has a gripe with someone from another territory and the protection agencies go to war. Some time in the future, they decide it would make sense to come up with some way of deciding who is right and who is wrong in these silly arguments. They apply these rules and form a sort of "Federation" of protection agencies. However, there is always the man who doesn't want to pay a protection agency. What happens when someone who is in a protection agency wrongs him? He goes out seeking retribution and meets a wall of resistance. The man defending himself decides that he can't live without protection from the federation of protection agencies. So he signs up..
Before long, everyone is expected to pay the protection fee, and the protection agencies start saying "no one has a right to perform acts of violence except us". This is the night watchman of the libertarian state.

    From there we move on to people complaining that there aren't enough "public goods". So they start to form welfare societies and opera groups. These slowly get bigger and have trouble managing their money collecting, so they turn to the people who already have a good system of collecting cash - the state. Welfare and opera become another thing that everyone must pay for, and the "protection fee" becomes a "tax".

    The alternative is to refute any form of authoritarianism.. because we know where it leads, to an all powerful state who spends billions of dollars on war planes when half the world is starving.

  • So we should stop trying to make cars safer? Obviously there are risks in anything, but if we can control them, we should take steps to reduce them. Obviously, there are some risks we have no control over. Computer security is not one of them.
  • That 2% may be composed of forgotten boxes in university/corporate offices.

    Doesn't matter. If any machine on a network is insecure, then the entire network is insecure. Read the story of their own crack. J. Random Employee runs an insecure NT box. Result: Entire company network is compromised, and one of the most secure machines on it is rooted.

    That 450,000 compromises millions of other machines. And those millions compromise others, which compromise others... What depth does one need to reach any machine on the net? What's the average degree of separation of two random boxes?

    One of my ISP's n-thousand users has a cracked box. This is a certainty. Now I could be running trojans, and my school's network is compromised. From there, access to hundreds of private, academic, and government networks...
  • I feel it's an amazingly scary number. As the article itself proved, a single insecure computer can open up other secure computers to vulnerabilities.

    Remember, the Linux box that was cracked was 'secure' as far as the Audit would show. Yet it was compromised, through the compromise of another box.

    The analogy was made in the article that the Internet is less of a community, more of an organism. When one area is infected (cracked) it can spread to other areas easily, without the problems of having to crack each box individually.

    Just remember that.
  • If you are referring to the people who did that scan as hackers or common criminals, you are completely ignorant. They are computer security professionals doing a survey of the Internet's computer security. Companies do this kind of thing all the time and charge thousands for the results; they just did it for free. Let's start arresting every demographics and market research company as well. And what about the company that surveys which web servers are used on the net, are they criminals? If you were not referring to the people who did that scan, then, well, ignore everything I said.
  • by Tarnar ( 20289 ) on Saturday August 14, 1999 @05:56PM (#1746317) Homepage
    With the speed and intimate knowledge shown by the intruder from Week 3, one name comes up.. Erwin!

    I suppose after Columbia Internet got hit with the probe, Erwin took it personally. After having NT on its drives before, I imagine it knew exactly how to get into the NT box and play around with everything to get the SSH going and eventually onto the Linux box.

    It makes perfect sense =) That's what we get for messing around with an AI of that caliber ;-)

  • Or, in other words, the conspiracy is so good it doesn't even know of its own existence. Which is, of course, a sure sign that there is a universal conspiracy going on - just not necessarily an orchestrated one.

    I should get around to reading "Revelation X" like my roommate insists I do... (No, he's not a "bobby," he just knows what's going on.)


    ---
    "'Is not a quine' is not a quine" is a quine.
  • No, no, no.. read the article. The cracker uploaded an archive of scripts and ran them.. he tried a lot of attacks.. the fact that KDE was, say, 4th in the list is what made the attack last 8 seconds.. if it was 1st it may have only been 3 seconds, if it was 10th it may have lasted 30 seconds.. Just too damn leet. If none of em succeeded he probably would have done it manually.

  • by AcMe ( 715 )
    You might be blinded by the 200-year-old concept of laws in this country, but don't ignore the insight to be gained from this. They are not criminals if they operated outside legal jurisdictions, and shouldn't go to jail for helping the ignorant see the light. What price are you paying for this, exactly? Education? A lot of history's greatest minds who made contributions to society were also labeled as petty criminals by ignorant people like yourself. Be more scared of your neighbor's ability to go to a gun store and purchase an assault rifle to blow your head off.
  • I just love coming off IRC and repeating the same conversation on slashdot.. Even if you disable kernel modules you can always use the /dev/kmem tekniq to install stuff straight into the kernel (like STAOG, the world's first ELF infector).. so then someone says, "can't you checksum the kernel".. the answer is yes, but you could always just patch the checksum code.. if you have the time and the inclination. So then we said "why not reboot off a floppy to checksum your binaries?", and the workaround for that is old virus tekniq: change the CMOS to boot C: A:, password protect it, infect the MBR to check if a floppy is present and pretend to boot, patching the floppy's kernel on the fly.. and for the new skool: infect the flash memory on the motherboard.. oh the joys of reusing virus research.. the best idea is to pull the hard drive, plug it into a never-been-networked PC and do your checksums there.. also a good idea to grab a disk editor and look through the sectors for the original logs and the cracker's archive of scripts (as they did).



  • Word to the wise: secure the hell out of your box before you go poking around and knocking on people's doors and peeking in windows. A lot of people don't take kindly to portscans. Some of them will peek back, and maybe bounce your box and check for common vulnerabilities as a warning shot.

  • My conclusion is the exclusion of Microsoft was meant to speak volumes. Or, "Win 98/NT is a known insecurity as a whole. As such, we won't devote resources to rediscover a known problem."

    Perhaps Microsoft (if hell freezes over) will see what this means. Serious administrators concerned about security will/should shy away from using Win 95/98/NT for mission-critical operations and networking where security is a concern. Then, when they are serious about beefing up security, they won't run windows2000test.com. Instead they'll allow independent and open review of their security system and network protocols. After all, no one seems to trust an encryption algorithm that hasn't been publicly tested.

    Only after Microsoft has taken the serious step to more secure networking, do I think they'd be worthy of the scanning effort.
  • Um, they didn't try to crack 45 million computers. All they did was probe for known exploits.


    Rev. Dr. Xenophon Fenderson, the Carbon(d)ated, KSC, DEATH, SubGenius, mhm21x16
  • by Anonymous Coward on Saturday August 14, 1999 @11:54AM (#1746330)
    The article writer is correct when he suspects that military systems approved to process classified information must be specially audited and unconnected to the internet at large.

    As a point of caution to those about to grab this latest scanner and joyride, every military installation & network is monitored 24/7. I assure you, portscans are detected and the source IP recorded & blocked. (To be specific, for 15 days after the attack/intrusion; if it occurs again, further measures are taken.)

    Of course, where I work, many of the CSSOs and TASSOs consider applying the latest patches/disabling the latest services to be rather a pain. But then, it's a research institution, and scientists don't like to sully their hands with such mundane matters. :)

    Just wanted to reassure slashdotters that the military does take computer security very seriously. At some laboratories, you would have better luck sauntering in and sitting down at a computer physically, than messing around with network attacks.
  • The similarity I'm drawing here is that these guys are, in effect, casing servers. Whether the facts of it are good or bad is another matter.

    I've come up with a few thoughts:

    1) This seems to be beneficial information on the general state of security on the Net (poor)

    2) Any sysadmin smart enough to check logs and not be lulled into feeling secure should use this as a further wake-up call to review their services. Do they really need non-SSH telnet services running? How are those firewall settings?

    3) There are those out there who will always take things the wrong way.

    I'm interested in seeing where this project ends up. With RC5-like support, or in jail like Mitnick.
  • > As a point of caution to those about to grab
    > this latest scanner and joyride, every military
    > installation & network is monitored 24/7. I
    > assure you, portscans are detected and the
    > source IP recorded & blocked. (To be specific,
    > for 15 days after the attack/intrusion; if it
    > occurs again, further measures are taken.)

    Damn straight. I think our base blocks them for
    longer than 15 days, though... that may just be
    local extra paranoia. Not my bailiwick, so I
    could be wrong.

    > But then, it's a research institution, and
    > scientists don't like to sully their hands
    > with such mundane matters. :)

    When I worked on base before, I used to try and
    make things convenient for my users as long as
    security never got compromised. If they came and
    complained, well, sorry, but that's the way
    things have to be. Usually they understood.

    Lately I've decided that I won't even listen to
    "inconvenience" complaints. Screw 'em. My boxes
    haven't been cracked yet, and I'm not going to
    take any chances. So they have to go through
    extra steps. Waaah.

  • Please read the article. You'll find that the scans were done from places (Russia) where this type of activity isn't considered much of a crime, if at all.

    [Now, a little more off topic...]

    • IIRC, in Sweden there was a fellow charged with "attempted cracking." He portscanned a company's computers. The courts ruled that portscanning was not an intrusion, thus not a crime. The fellow was acquitted.
  • by Admiral Burrito ( 11807 ) on Saturday August 14, 1999 @12:21PM (#1746335)

    Yes, it sent chills down my spine when I read it as well. I've known such things were possible but didn't think anyone had yet gone to the trouble.

    There are things you can do about it, though.

    Some Unixes, including Linux and the freeware BSDs (all BSDs since 4.4, I think), have the concept of "securelevels". Set files to be immutable (under *BSD the command is "chflags schg somefile") and raise the securelevel above zero. This prevents everyone, including root, from modifying the file. At securelevel 2, the disk and memory devices are also read-only, to prevent doctoring that way.

    This doesn't stop intruders from gaining root, but it can prevent them from trojaning everything and going invisible, or at least make it a hell of a lot harder.

    The only way around it is to go to the console and bring the system to single-user mode. If some files or directories used in the boot sequence before the securelevel is raised aren't set immutable, it's often possible to modify them such that the securelevel will not be raised during the next reboot, so it's important to know what you're doing. Other than that, the only way for an intruder to trojan the system is to discover a bug in the kernel itself. There have been bugs found in the past, but they are much less plentiful than root exploits.

  • The ability of a program to always be able to detect if it has been modified is the Holy Grail of the software anti-piracy world.

    I have never seen an anti-piracy scheme that couldn't be (read: hasn't been) cracked. There is no such thing.

  • werd.. we charge cash for what they just did.. If everyone scanned every box they came in contact with and sent a discreet email to the admin telling them it was ownable, we would have much better security.

  • I just graduated from HS in June and am seriously thinking about enlisting in the U.S. Navy. My areas of interest are UNIX/Linux/BSD security, administration, and coding.

    Good luck. Unless the branch of service you're interested in has an opening in the exact field you're interested in, don't expect to do it. Assuming you DO find an opening, do NOT enlist unless you've been guaranteed that position on paper. Recruiters will play bait-and-switch, and they will lie.

    Having said that... it doesn't mean you can't find interesting, rewarding jobs in the military. You might even get to do what you want to do. But your happiness in your job involves a combination of careful planning and luck (and maybe a willingness to accept your second or third choice of careers).

    Whatever you do, don't go in under an "open enlistment" (that is - no specified career field / position).

    Again... good luck.

  • by chamont ( 25273 ) <`monty' `at' `fullmonty.org'> on Saturday August 14, 1999 @06:22PM (#1746340) Homepage
    I'm sure you've already been there, done that, but for the unenlightened, start with: http://metalab.unc.edu/mdw/HOWTO/Security-HOWTO.html then find http://www.ssh.fi/sshprotocols2/ then kill your unnecessary services and convert your NT servers to something a bit more secure.

    I'm realizing that something stupid like an obscene message on one of my stupid little web servers will probably get me in more trouble than a stealthy download of confidential files. Lock it all down. Only the paranoid survive.

    Monty

  • by QuantumG ( 50515 ) <qg@biodome.org> on Saturday August 14, 1999 @06:32PM (#1746341) Homepage Journal
    Perhaps the greatest injustice in the scan is that they were only interested in insecure unix machines. Agreed, it is way more interesting to probe unix and a lot easier, but there are a massive number of windoze boxes that are just obviously sploitable. A bigger threat than the spill-on effect of hack-sniff-hack attacks is the "secret weapon" attack. Spend two days in SoftICE looking at tcp/ip code for win98 and you are almost guaranteed to find a DoS attack. Look for a week or more and you will probably find a local sploit.. try your luck at a month and you should be able to find a remote sploit that will get you access to every web surfin' spud's computer. Everything else is downhill from there. When you have a sploit that no one knows exists, you only have to worry about the folks who burn their tcpdump logs every day, and only then when you screw up. If you want your network secure, don't use microsoft.. don't let your employees use microsoft.. but who wants a secure network anyways?

  • This won't help against someone as sophisticated as that. They already used a kernel module to prevent tripwire from showing the trojans. You can load a kernel module that makes the kernel ignore the immutable flag for the duration of your attack.

    The only way to protect yourself against someone like this (apart from having no bugs in any of the software you run) is to have your disks shared with another (separate and highly secured) computer - preferably one with only console access. Even then, a memory-resident trojan can get you - does anyone know of any systems which have the system memory readable by another computer (without the intervention of the first CPU or any programmable hardware which can disable the feature)? We are really getting into national security stuff here, I think - in which case the computer should be in a bunker with an army squadron to protect it ;)

    --
  • No, the reason you use C is because the development tools in Linux suck for any language except C. (g++ sucked until recently too)

    And the reason for that is the unusability of all languages other than C for system programming. Not to mention that all other commonly used languages are implemented in C, and those implementations have had their own security holes -- the "nicer" the language, the larger its implementation's holes: perl had suid bugs in many versions, Java crashes so much that some of that has to be exploitable, etc.

    Even C++ would be better than C, since a large majority of the buffer overflows are idiots using the C library str*/mem* routines without supplying buffer lengths, or trying to code up their own collection routines (vector, list, tree, etc). C++ and the STL would fix a lot of that. The fact is, C is not type safe. C++ offers more type safety.

    Yet after working in commercial software development I can testify that security bugs are abundant in C++ code, too.

    Eiffel offers design-by-contract assertions which would prevent a lot of the buggy libraries that float around from causing security holes.

    How?

    And Java of course, is like chroot()ing your code, since it can't modify memory, disk, or network resources without being specifically allowed to.

    This is irrelevant, because a program that does something useful must be allowed to do those things anyway. But the main problem with Java is its own bugginess, so one can never be sure his program will even be running when it's needed unless something is keeping an eye on it (=> DoS), and no one knows how many exploitable bugs are in Java implementations.

    The fact is, all the of the reasons to use C don't apply to developing daemons and application level code. You could make an argument for sticking with C for kernel work. But as an application language, it's far too sloppy and dangerous.

    Every other language is worse for most purposes.

    Also, bounds checking *doesn't* lead to a performance hit.

    Sure it does.

    Most modern compilers can remove the vast majority of bounds checks when optimizing

    What are you smoking? Bounds-checking code, once introduced, can be optimized, yet the result will still carry an enormous performance hit, especially considering that it has to be performed at least once per array, per function, for every local variable.

    , and with Eiffel, you can turn them off for a "production build", but leave them on while developing/debugging.

    If they won't be there, how can they protect you from the attack? The whole point of an attack is to create a condition that never occurred in development or testing, so it can produce an unexpected buffer overflow.

    I'm sick of C hacks making the "speed" argument when they usually don't understand how to make things scalable anyway (HINT: it's not CPU in the vast majority of cases)

    It *is* CPU -- if you look at what it all comes down to in the end. Except, of course, in some braindead cases that no decent programmer should produce anyway. After the problems of braindamaged communication with poorly designed other software and inefficient libraries were solved, all my projects finally hit the limit of performance imposed by the CPU, as could be clearly shown by profiling. And the only solution was to get a better compiler and a faster box.

  • Big words for someone who doesn't sign their name. The level of skill shown by the "super cracker" here only comes about through either diligent practice or structured training. To get either, you have to be free from police intervention, so my conclusion is that this person is either so good that they don't fear the police, or they are trained by the government and considered above it. So yes, put up your hand (and sign your name) for the libertarian night watchman, but understand that your taxes will be spent on people like this guy, exactly what you appear to despise.

  • 1. As in, for example, something set up to be installed via BO? Yes, that's a concern... and I'd suppose that most of the folks who are sufficiently vulnerable to script kiddies to be caught by this (have the copy of BO installed initially) wouldn't have the sophistication to detect it. However, there ARE things that can be done about this, and were it known, they would be. In short, it wouldn't be nearly as much of a threat as a pre-cut tool as it is as something wielded by an ubercracker.

    3. How about flashing their BIOS to load code off the hard drive (if needed) to modify the behavior of the copy of ssh on the floppy?

    (Okay, this is tricky. You'd have to be familiar with the specific BIOS and floppy-based ssh version available, and doing it without a speed cut would be difficult. Nonetheless, it's similar, in a one-level-up sort of way, to the kernel modifications. If they could do that)...
  • With a securelevel of two (or even 1) you cannot load a module after the boot phase.

    With a securelevel of two the kernel cannot be altered and even root cannot fiddle with disks and memory devices. You are pretty sure your kernel is yours unless the machine is rebooted, but you could notice that. (Why the hell have I been disconnected?)

  • Of the 36 million addresses in their list many are unused, part of modem dialup pools, assigned to router ports, running NT, etc. The percentage of UNIX servers which are vulnerable would be much higher.

    I wouldn't want to begin to guess the percentage of vulnerable NT machines...

  • Crack a Linux box with only SSH and Apache running. Okay, perhaps, but it'll take me a while (and a screwdriver... ;)

    Do it in 8 seconds??? That's Incredible! (TM that old 70s (80s?) show by the same name)

    I learned a fair amount reading this article, most specifically:

    • Don't run anything you don't absolutely need! Like X.

    I know that'll make things slower sometimes, and less pretty, but look at the alternative! 8 seconds... DAMN!
  • Yep. To me it's like someone going from door to door, checking whether it's locked, and if not, politely closing the door again. It would have been really elegant if they had also sent someone 'behind the door' a message stating that they had a security problem.


    I'm interested in seeing where this project ends up. With RC5-like support, or in jail like Mitnick.


    Seems like they're not in the US, so I think you can skip the jail option. I don't mean to say that outside of the US you can't go to jail for cracking, but there should be a bit more damage done before legal action is taken, generally.

    Geez, I liked the bit about the Aussie 486... Talk about a 'quick hack'...
  • I want to find out if *my* ISP and the hosts I use were some of the open systems.
  • by Anonymous Coward
    Think it over. They did what they could without being sent to hell. Now, finding a known hole, a possible one, does not automatically imply the system is insecure. In addition, some domains, even .com, are often used by small to tiny one-man companies who just do not have and/or cannot afford the necessary infrastructure. My second criticism: compared to something like CERT, this is, well, superfluous. Contrary to their nice try, CERT reports *are* about *real*, present problems. Any CERT report is more informative than this accumulation of numbers. In short, IMHO it was a good PR show within the possible limits, and they are for sure qualified enough to go farther if they want to, but they have done nothing for security. So thanks for the party, and let's shoot the lights.
  • by jd ( 1658 ) <[moc.oohay] [ta] [kapimi]> on Saturday August 14, 1999 @12:39PM (#1746362) Homepage Journal
    The crack was via an NT box, so the weakness was less in Linux itself than in NT. (NT has more holes than swiss cheese.)

    The second vulnerability was SSH. Someone altered the SSH client to act as a trojan. This should not be possible - programs should be able to detect if they've been modified. Failing that, a virus scanner should be able to detect modifications.

    (Ideally, for an ultra-paranoid setup, the connection should be made via an IPIP tunnel, and connections refused from anything other than the correct end-point.)

    Thirdly, how did they get hold of the ISP password? The article said SSH was cracked, but not that the dial-in software was. I assume they have thought of that. If not, the NT box and the ISP account are still wide open.

    That the Linux kernel could be modified on-the-fly via a module is a serious security hole. That really needs to be fixed, urgently, IMHO.

  • So let's do it. I mean, think about what would happen if even a simple implementation of this service was available for linux (or better yet, even came with some popular distrib). How many network administrators out there (or better yet, on /.) would be willing to give up their extra cycles/bandwidth in exchange for information on their own security vulnerabilities?

    It seems to me that if we were able to implement this, the fact of its accomplishment would be much more than any law enforcement or legal agency would be able to cope with. Fait accompli.

    So what would it take? How soon can we get it done?
  • Actually, it's useless for anti-piracy. You can duplicate a program, bit-for-bit, thus pirating it, but with nothing changed.
  • It's called self-modifying code. If a program has a function to XOR itself with its checksum, with the next routine run doing exactly the same using the new checksum, you have -nearly- 100% protection against modification.

    Simple modifications would then be impossible, as the code is essentially encrypted.

    More sophisticated modifications, which included altering the decryption to use an absolute value rather than a calculated one, would still fail, owing to the second, encrypted, decoder, which would be using an invalid checksum.

    Very sophisticated attacks, which include decrypting the binary, modifying the second routine, and re-encrypting it using the new checksum would still work, but you're talking about something that is going to make -MAJOR- changes to a program, setting off every alarm on the machine.

    For something like SSH, binary encryption is an absolute MUST.

  • Linux got rooted via a KDE buffer overflow, which has since been fixed.

    *SIGH* Clearly, someone who's NEVER played with self-modifying code. Why must I suffer this? Anyway, you have one routine in the clear, which XORs your binary with the checksum, or some function of the checksum. This unencrypts a second routine (but not the rest of the program), which does exactly the same, using the new checksum value, or function thereof. This, finally, decrypts the actual program itself.

    Anyone can "break" the first check, but this leaves the rest of the program scrambled, and so useless. You can modify it to use an absolute value, sure, but if you do that, the second decrypt routine will use the wrong checksum value and the program will crash. Only by breaking BOTH encryption routines can you alter the code, but by then, tripwire and every other security scanner will be screaming. There's no way to break a double lock of that kind, without triggering every alarm on the machine.

    Tunnel endpoints are tied to =TWO= IP addresses (physical and virtual). You can't simply move one end of a tunnel like that. The end-point would be at the NT machine, and nothing could change that, short of breaking into the Linux box, which can't, then, be done without connecting through the NT machine. Overseas connections, of the kind that happened, CAN'T divert a tunnel.

    Anyone paranoid enough to use SSH to connect to an ultra-secure Linux box is hardly likely to save their password on the computer! That's the -biggest- single NONO in standard security. NEVER, EVER write your password down, or keep a copy of it on a computer.

  • That only works if the running program image in memory was modified, but wasn't modified to skip your check.

    Meanwhile, encrypted code is useless because the program has to start with...a decryptor! This is why, long ago when people compressed and scrambled executables with some PKzip product, it was so easy to decrypt them: to run itself, it needs the key and the decryption code...
  • Bounds checking can be used -- but:

    1. It can't be placed everywhere.
    2. It's wrong to rely on it as a way to make a program secure or even correct -- it's the last thing that prevents something extremely bad from happening when things are bad already.
  • can you post yours up or give a link ? i'd like to see the code.
  • by jd ( 1658 ) <[moc.oohay] [ta] [kapimi]> on Saturday August 14, 1999 @12:49PM (#1746376) Homepage Journal
    From what I could gather, the crackers broke into an NT box, rigged SSH to send them the password, and then SSHed into that person's account from a vulnerable Australian computer.

    Once inside, yes, KDE is going to listen, if it's running. X is a networked GUI, and any server that is active will listen for connections. If XDM(?) is running, that, too, will be listening for network connections.

    KDE has almost certainly removed that buffer overflow, in more recent versions. If they haven't, they almost certainly will, soon. I think it's about as safe to install KDE as any other window manager. However, I -don't- advise leaving any window manager running, unless it's needed. They -are- complex pieces of software, and that means possibilities of bugs (memory leaks, etc) and security holes. If the computer is idly running a program that's a potential risk, for no reason but to put swirling patterns on a monitor that's turned off, you're better off with it shut down.
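    Whether an X server is actually listening for network connections, as described above, is easy to check: X11 displays accept TCP on port 6000 plus the display number. A small sketch (Python; the `port_open` helper is hypothetical, not from any tool mentioned here):

    ```python
    import socket

    def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # e.g. port_open("localhost", 6000) tells you whether display :0
    # is accepting TCP connections at all.
    ```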

  • I'd 1/3 agree with both of the above posts:

    1. It assumes that everyone is extremely proficient with doors. I bet you'd be surprised by the number of simple ways to crack the all-metal, patented-lock-and-all door you consider absolutely invulnerable. 'Real-world' criminals spend years studying the construction of doors and locks.
    2. Of course it's essential for such an effort to ask the 'door' owner in advance, or at least to provide them with the test results. However, I didn't find anything stating that this wasn't done. Obviously they can't put the whole list online, but now that this article is published, one can ask them for the security audit log, I believe. Of course, one would need to prove one's right to it first.
  • Oh, I dunno. They have shown a vulnerability in NT, and what could be considered a vulnerability in SSH.
  • It's a real problem if any of my ISP's systems are vulnerable, and a real problem if those of our governments and militaries are also open. By simply pointing out that they could perform such a broad scan with such success, they've shown that anyone else could do the same. Why should we think that this bunch of hackers were the first to do this? If our infrastructure is insecure, then we need to be told.

    CERT reports are useful, but people need a wake-up call to actually do something about them. This is a good first step, and some sort of automatic monitoring agency could well be a good second.


    Rob Wilderspin
  • Also, the scary story about using one weak machine to crack a fairly secure one has a bearing here. People on those ~450K machines are going to have accounts on lots of those ~36M additional ones.
