The Internet

Sandia's Distributed Anti-Cracking Bot

tgw writes: "Beyond 2000 is reporting that Sandia National Laboratories has created a unique type of bot that can defend against cracking attacks and viruses. The bot 'runs on multiple computers in a network and depending on circumstances the separate copies of the agent can act alone or together as a single, distributed program. The [bots] across the network constantly compare notes to determine if any unusual requests or commands have been received from external or internal sources.'" A cool-sounding project, to be sure -- but how much control is too much to cede to the intelligent agents?
  • I think the interesting part about these "bots" was that they seem more active in defense. Firewalls and Tripwire are strictly defense and monitoring. I'll be the second (tbo was 1st) to admit there's some wild-ass marketing going on, but it seems to me a bit more interesting than a "smarter firewall".

    Interesting points to ponder. So it can detect DDoS and close things down? How? I thought the whole idea of DDoS was that at first it just looks like heavy traffic. So how do you tell the difference? Or do you always just shut the box down when the traffic gets heavy (seems kinda stupid)? Also, it can keep virii like ILY from entering the system (paraphrased). How? Will it stop all executables from being sent through email? Couldn't that be done by easier means? Please don't tell me they're going to scan email looking for questionable code (and invade my privacy).
    If anything this is a really cool look at how someone with an imagination can take a security util and bring it to life. Hello agent smith.
    I am only an egg, be kind... but do reply

  • "Agent" is a fairly common AI term for an autonomous program. These network bots would be one, as would the agents in the matrix.
    Cheers,

    Rick Kirkland
  • Wouldn't this type of protection be useful on large service installations *and* ISPs, colleges, etc., for outgoing data? The data could be analyzed and restricted until confirmation of just why someone needed to send 400,000 ICMP packets in 30 seconds.

    Analyzers at multiple points along the backbones could communicate and discover DDOS attacks and cease to route packets that could be nothing but a massive coordinated attack. Seems like backbone bots could be triggered by a plea for help by the network being attacked. That would help extinguish smurf attacks and could probably be used to actually help track down poorly configured networks.

    This could also put the burden on ISPs that have lousy security and allow forged packets to escape their domains. Sure, make them accountable for their insecurity.
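
    Something like this toy Python sketch is what I'm picturing (the threshold, the two-monitor quorum, and the message format are all made up for illustration; real backbone gear would do this in hardware, not in a script):

    from collections import defaultdict

    THRESHOLD = 10000   # packets/sec from one source network before we get suspicious
    QUORUM = 2          # monitors that must agree before a filter goes in

    class BackboneMonitor:
        def __init__(self, name):
            self.name = name
            self.peers = []                  # other monitors we compare notes with
            self.votes = defaultdict(set)    # source network -> monitors flagging it
            self.filtered = set()            # source networks we refuse to route

        def observe(self, src_net, pkts_per_sec):
            if pkts_per_sec > THRESHOLD:
                self.flag(src_net, self.name)
                for peer in self.peers:      # the "plea for help"
                    peer.flag(src_net, self.name)

        def flag(self, src_net, reporter):
            self.votes[src_net].add(reporter)
            if len(self.votes[src_net]) >= QUORUM:
                self.filtered.add(src_net)   # stop routing this source's packets

    a, b = BackboneMonitor("MAE-East"), BackboneMonitor("MAE-West")
    a.peers, b.peers = [b], [a]
    a.observe("203.0.113.0/24", 50000)   # one monitor sees the flood...
    b.observe("203.0.113.0/24", 45000)   # ...a second confirms it
    print(b.filtered)                    # {'203.0.113.0/24'}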
  • by orpheus ( 14534 ) on Friday June 09, 2000 @09:44PM (#1012276)
    Given the sensitive nature of much of their work, Sandia National Labs probably has resources that are not available to your ordinary ISP, like 'sleeping' multiply redundant bandwidth and use of the *many* Federal access points to the web, both "owned" and leased, including normal ISPs.

    When a DDoS strikes, you 'wake' a normally dormant redundant line to another backbone (possibly via direct link to a high-volume ISP to hide your signature), then you identify your "known" valid users and VPN to them via the peer. You can VPN via a distant ISP/node, so the attacker won't know you're communicating with 'legitimate users' via a hidden link to the AT&T backbone in Ogden, Utah and emerging on the Net over the redundant bandwidth of .mil bases and co-los all over the country.

    The invaders can pound on the gates all they want. You're not using the front door anymore. You can also have secret VPN router/authenticators around the country that 'legitimate users' can connect to when the front door is slammed shut. The details could be built into key software, transparent to the legitimate users.

    It's like a reverse DDoS. In a DDoS, the attacker overwhelms you with attacks from every direction, faster than you can respond. In the 'Reverse DDoS' defense, the legitimate communications links are leaking out from all over, faster than you can find them and swamp them. In DDoS attacks, the defender is trapped by converging bandwidth from 'distributed attackers'. In the Reverse DDoS defense, the 'hidden peering' lets the defender 'distribute its communication bandwidth' to emerge as barely-above-background blips in widely divergent locations.

    Well, that's how *I'd* do it, if I were the Feds. Hence the unimaginative codename "Legion"
    (Am I too close for comfort? My services are available for a fee, Feds.) ;=>
  • http://infoserve.sandia.gov/webpac-bin/wgbroker?06100136270880327+1+search+open+-sort+NONE+AK+goldsmith

  • AT&T does the same with calling cards. Back in 1992, some friends and I were doing some volunteer phoning involving calling people statewide. We were getting reimbursed for expenses, so I gave each team my calling card number. Within 1 hour I got a call from AT&T, wanting to know if I was aware that dozens of calls were being charged to my card simultaneously from 4 different locations - and they were clearly surprised when I said yes! :-)

    ======
    "Rex unto my cleeb, and thou shalt have everlasting blort." - Zorp 3:16

  • After I hadn't used my MasterCard for a few months and then used it (to buy gas), the next day I received a call from MasterCard asking me if I had used my card recently.
    Apparently it's pretty common among credit card companies.
  • Supposedly, Amex's back end looks out for a very small ($1-2) purchase followed immediately by a very large one. Card fraudsters do a test run with a small purchase and scale it up if it works; someone caught doing a tiny heist could talk their way out of it. I seem to remember reading in some Fortune-type magazine that this pattern was first detected by a neural net, but maybe that was just hype.
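
    If the pattern really is that simple, the check is almost a one-liner. A toy Python version (the $2 cutoff and the 50x jump are my guesses, not Amex's actual rules):

    def looks_like_test_run(history, amount):
        """Flag a large charge that immediately follows a tiny one."""
        return bool(history) and history[-1] <= 2.00 and amount >= 50 * history[-1]

    print(looks_like_test_run([1.50], 900.00))   # True: classic test, then cash out
    print(looks_like_test_run([45.00], 900.00))  # False: no tiny probe beforehand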
  • I've never actually heard of someone being denied an unlikely purchase.


    I've never been denied a purchase, but I did get a telephone call at home one morning asking me if I had used my card a day earlier to purchase luggage from a travel agency in Hong Kong. (Perhaps the computers don't do profile checking in real-time.) The agent told me that the computer had flagged the transaction based on the merchant's profile.

  • Cyber agents, guards, police: these analogies just won't quit. I am sensing more than the role of a cyber agent in this article. What is to prevent excessive eavesdropping on my private communications? This technology could easily be abused to gain unlawful control of information and prevent a democracy from functioning as intended. I would like to see a separation of powers for instituting its use, from a very granular to a coarse level of monitoring and/or control. This technology might be ready for fielding in three years, but developing acceptable policies for its use could take longer and may affect the overall architecture, delaying deployment even longer.

    All right, "No central authority operates the agent", but the information for making decisions must be aggregated, a form of centralization. Granted the use of this information for command decisions may be decentralized, but the information aggregation process will susceptible to spoofing.

    Also, man will be in the loop somehow. When the information is aggregated, the reaction time for responding to this higher level of information slows down considerably; thus the control can become stable. The time for tactical decision making with cyber technology will be too small for a human, but strategic decisions will always involve man.
    [Steve Goldsmith's] assessment of his agent's abilities is blunt: "If every node on the Internet was run by one of these agents, the I-Love-You virus would not have got beyond the first machine." ... decentralised control makes each agent autonomous yet cooperative ... Cyberagents faced with a runaway barrage of incoming network requests close the gates of the system to prevent it from being flooded.

    Closing the gates doesn't stop a DDoS attack; it merely limits the damage that can be caused. One defence mechanism is having redundant links which are 'turned on' when currently active links come under attack. If you detect the attack quickly enough, it is, theoretically, possible to redirect valid connections to your new links. In practice it's difficult to implement, as you run the risk of redirecting the attackers to your new links too. Of course the attack is still effective, since it has rendered the original link useless, and should the attackers be dynamic enough, they can start attacking the new links.

    Sandia's system offers up a new defence. Assume multiple ISPs had agents deployed: say ISP A comes under an attack routed through ISP B from a user connected to ISP C.

    Assuming also that the ISPs cooperated, their agents would have a certain level of trust. ISP A's agents could, upon detecting an attack, seek help from ISP B's agents. ISP B then filters that route, and asks ISP C what the hell is going on. And so forth. Of course, one ISP's bot couldn't control another, but I think them saying "Help!!! I'm getting these from here. Please make them go away..." is acceptable.

    The thing I don't like about this, however, is the trust issue. As they say: trust no one. Practical authentication methods are never 100% reliable. Distributed security agents should never have to trust anybody else; if they do, they run the risk of being compromised if the trusted party is compromised. I accept that trust is necessary in order that network immune systems, anti-virus distributors, and agents like these are effective. But the control that one agent has over another agent is something to be wary of. I'm wondering whether Sandia may open-source their non-munitions version when it's released. They appear keen on the idea of their system being implemented worldwide. Of course, if they didn't open-source it, then it's all just a government plot to control the world. :)

    The other defence, of course, is that everybody could just implement RFC 2267 [ietf.org] to prevent address spoofing.

    "A goldfish was his muse, eternally amused"

  • My plan was actually a little simpler than yours. It was based on the assumption that the attacker can swamp a link with packets but not read the backbone or identify/monitor/subvert privileged users (an entirely different risk).

    Phase I
    "Public services" (web, etc.) are sacrified. They aren't a gov't priority when under DDoS attack. The front door is shut (or left open but ignored)

    Specifically authorized high-priority users with a pre-existing relationship to Sandia are equipped with special software for locating/connecting to "Secret Authenticating routers" (SARs - see below)

    Sandia opens the 'back doors', its first messages are advertisements to SARs. SARs don't respond to normal network requests, but only provide SAR info (dynamically assigned alternate IPs and secondary SARs) after a client rigorously authenticates itself. These clients may be DNS routers of secure .gov or .mil nodes, or special clients given to, say, key researchers at .com and .edu sites.

    When one of these clients finds it cannot connect via standard routing, it clicks into 'defense mode' and consults one or more SARs to open or restore its connection to Sandia via VPN through otherwise 'innocent' and distant sites.

    Phase II
    If the VPN IPs come under attack, they are shut down or changed, and removed from the SAR list. Clients must re-query for new VPN connect IPs. The list of authenticated clients is scrutinized and correlated with attacks, to identify subverted machines at trusted nodes, or attackers using trusted machines.

    The topology is a "private" physical network connecting to the Internet at 'disposable' secure gateways. I presume there are many more possible gateway ISPs/IPs than needed, with only a few in use at a given time. This is practical for the Gov't since the same network can protect all sites secured by this method.

    The DDoS fails because the attackers no longer know where to attack. The connection they initially attack never "knows" and therefore cannot reveal, the alternate routing. Dedicated hardware decoders handle any re-addressing over the private network, so compromised internal Sandia machines may not help the attacker much: they can "phone home" and reveal individual disposable outside links, but not the overall network. In the process, they also reveal themselves as suspects for compromise.

    Refinements are, of course, possible ad infinitum. I estimate that the DDoS defense will be robust in proportion to the total bandwidth of all available outside gateways, and will degrade gracefully and scale without saturation until the DDoS is sufficient to bring down most of the Internet itself.

    Phase III
    The Internet is now useless. All external gateways are cut, and the system goes to 'emergency mode', communicating over the internal network. NOTE: the system NEVER uses the internal network as a network if it is still connected to the internet (this would permit network tracing).

    At this point, external DDoS is impossible, but there remains the possibility that the Internet only *appears* to be swamped because all connections to the outside are swamped by a massive internal compromise.

    A core set of top-priority internal systems (e.g. military C3I, etc.) and connections will form a network within the network to maintain gov't functions, and identify compromised systems or subnetworks, as above, but on a smaller scale.
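
    For the curious, here's the Phase I/II client logic as a toy Python sketch. Everything in it is invented for illustration: the addresses, the SAR table, and the shared secret standing in for the 'rigorous authentication' above.

    # toy stand-ins: REACHABLE plays the role of global routing state
    REACHABLE = {"198.51.100.7"}                  # only one back door is up today

    def try_route(ip):
        return ip in REACHABLE

    # a SAR only answers callers that present the right credentials
    SARS = [{"secret": "hunter2", "alternates": ["203.0.113.9", "198.51.100.7"]}]

    def connect(primary, credentials):
        if try_route(primary):
            return primary                        # front door still open
        for sar in SARS:                          # defense mode: consult the SARs
            if credentials != sar["secret"]:
                continue                          # unauthenticated callers get nothing
            for gateway in sar["alternates"]:     # dynamically assigned VPN exits
                if try_route(gateway):
                    return gateway                # tunnel out via this gateway
        raise ConnectionError("all gateways exhausted; Phase III territory")

    print(connect("192.0.2.1", "hunter2"))        # front door down -> 198.51.100.7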
  • While not a complete standalone solution, this is certainly a very good and useful idea.

    No sysadmin in the world worth his/her salt would ever just leave it all up to these bots.

    However, in a world of script kiddies and wanna-be hackers, these little guys could really help out a lot.

    Locking down a server is no small task, and having a team of little nano/code robots working with you could definitely be cool... maybe Slashdot could have these guys running through looking for common trolls too =) A Perl bot looking for Natalie Portman and other junk and moderating them to trolls might prove useful *wink*

  • We went out of town on a spur-of-the-moment trip to Florida, with very little cash (under $50) and the Discover Card. We only stopped for gas until we got to Florida (stopped to nap at rest stops, etc.) and then we paid cash for a room south of Miami.

    That night I went to fill up the tank, and I got "Card Declined", so I asked the clerk what was up and they said I had to call this 800 number. Turns out they had tried to call my house, but seeing as how we were 2000 miles from home, nobody answered.

    When I called they said the computer watches for things such as gas only purchases in a short period of time, and then re-enabled the card. The gas station apparently had cached the decline though, so we had to go to another station.

    But imagine being stuck 2000 miles from home because your credit card suddenly isn't accepted - that's how it felt until I was able to call and straighten things out.
  • The technology sounds like it could have merit, but it seems as though it wouldn't take much to make this distributed bot network into a DDOS itself.

    If suspicious activity is planned smartly, couldn't the bots create so much internal chatter trying to figure out "what's going on" that they themselves create a DOS effect on the network?
  • This actually happened to me when I used my Mastercard for the first time in an Internet book purchase worth some $300 back in '96. Eurocard Frankfurt called me back the other day and asked whether I had actually authorized a transfer of $300 to a certain company in Sebastopol, CA, or if it was fraudulent.

    © Copyright 2000 Kristian Köhntopp
  • I like the idea. Will it run under MS Bob?
  • I've been denied purchases when traveling. Don't count on using the card on the first day you travel unless you can speak to them! Of course in my case they said it was a mistake but it happened twice...
  • It seems that to be of any use, these self-replicating 'bots must have high levels of access to the system.

    Yep. And the bots themselves are, after all, just another system that is likely to have security vulnerabilities. It will certainly get attacked with the intention of blinding or corrupting it.

  • It seems that to be of any use, these self-replicating 'bots must have high levels of access to the system.

    If there are "backdoors" or special security exceptions made for the bots, then one would hope there is some means of recognizing rogue 'bots.

    The most obvious attack on a system with security like this is to sneak in a rogue 'bot that fakes inheritance of the security privileges given to the real ones, and then use that to mount the attack.

  • by Anonymous Coward
    Even if you have the best AI to detect attacks/viruses, it's not going to amount to a damn thing if the sysadmin leaves insecure services running on an insecure OS without at least a firewall or DMZ to keep the poor thing from being "rm -rf /"ed... no amount of AI will take up the slack from a lazy admin.
  • ...for a secure operating system. Each time I see a product like this, I sigh. It just bothers me that some people are so obsessed with "securing their computer" that they don't realize that Windows is inherently insecure: it uses the one-user paradigm, and all programs have r/w access to all of the system.

    I noticed it's supposed to find Trojan horses. That's all well and good, but, generally, most trojans aren't found as trojans until they're used against Netizens at large.

    Arrgh. Why can't everyone use Linux/BSD/Be/*nix?
  • Very interesting project indeed, but what exactly is the huge advantage of having the processes distributed across the network? What's wrong with one process that does such scanning?

    --
  • This sounds a lot like what is portrayed in William Gibson's Neuromancer. Awesome book. Talks about intelligent security programs that are hugely complex and adjust to the attack.
  • I was reading this over, and to me the whole concept seems scary on more than one level.

    I am sure most of you have heard of Echelon. These people want us to run a worse-than-military-grade (why are we inferior??) security package that we don't even have the source for?

    Secondly, I am sure I am not the only one thinking about backdoors. If they had an "accidental" backdoor in one of these things, just think of the havoc! Already, they have functions built in to allow themselves to spread more easily.

    Perhaps this is true of all software that comes without source - but this is the only package to which we give free roam of our network, giving it carte blanche - it's not under our scrutiny.

    Just a few thoughts..

    I won't even touch upon what happens when AI goes bad :-)

  • > Really, this just sounds like a souped-up firewall + Tripwire. Nothing too revolutionary. Wanna bet that a properly-configured OpenBSD box could have held off those four script kiddies (err, "experienced hackers") for 16 hours, too?

    Script kiddies? Don't you think that Sandia can come up with some pretty competent testers? (There's a link to a description of the Red Team above). What I really wonder about is the possibility mentioned at the end of the article - distributed cracker bots. I know this isn't a popular opinion around here, but that's really the sort of technology I don't want to see too publicly available.
  • by konstant ( 63560 ) on Friday June 09, 2000 @07:52PM (#1012299)
    I've just spent far too many hours securing the network settings on a test harness... and I can tell you right now, I would never allow this sort of access to any topology unless I had very clear and fixed signatures for all these "bots".

    One principle behind securing a network is to disallow unnecessary access between machines. The fewer legitimate channels, and the more predictable the dataflow, the easier it is to monitor for anomalies. There are certain machines in the test harness I'm working on, for example, that *never* talk to one another. If I didn't segregate machines this way, we could lose essential data if a weak front-end box were taken.

    Opening the network to roving spiders and allowing them discretionary control over monitoring and transmitting would be difficult to secure. How could I tell the difference between a scan from a bot and a scan from an attacker? How could I identify a "dangerous" data transmission when the bots are semi-autonomous and unpredictable?

    I don't want to dismiss the idea, because eventually we will have to develop "immune systems" for our machines. But right now, it seems difficult to integrate these two models. When I run my own scans, I know that I'm doing it and I can pick out my work from the logs. This would add a new layer of complexity - something that already exists in abundance in the security field!
    -konstant
    Yes! We are all individuals! I'm not!
  • by jayhawk88 ( 160512 ) <jayhawk88@gmail.com> on Friday June 09, 2000 @07:53PM (#1012300)
    ...do the article's descriptions of "agents" sound eerily like the agents from The Matrix? Dammit, if I wake up tomorrow in some cocoon full of pink goo, I'm going to be heli-pissed!
  • An RDF newsfeed of B@K, which is updated daily, is at: http://www.beyond2000.com/b2k.rdf [beyond2000.com], and has been submitted for a slashbox ... Also trying to get an Avantgo inclusion at the moment, so we have a text version working ... just have to wait and see what avantgo says first before we unleash it. Weekly mailing list is up and running too :) (by the way, another way to reach my is webmasteratbeyond2000dotcom) Enjoy!
    -
  • by Chasuk ( 62477 ) <chasuk@gmail.com> on Friday June 09, 2000 @08:21PM (#1012302)
    And, if these bots are re-written, what capacity for harm would they have? Assuming that they interact with each other in an environment requiring digital signatures to make them "safe," what happens when some self-proclaimed guardian of our security decides to show us the new flaw he has discovered and re-programs these bots to wreak havoc? All for the greater good, of course.

    How easy will they be to deactivate once someone else has the key? Let's suppose that we have become reliant on these bots (or their descendants), and that our ever noble, benevolent security experts (Cult of the Dead Cow IV, for example) have decided to re-define "unusual requests" as something which would normally be useful, and vice versa? Denial of Service attacks are bad enough, but an attack from millions of bots - distributed and replicating WITH PERMISSION across a network - would be horrific.

    If this protocol is accepted and integrated into the system, I can imagine password-sniffing bots, e-mail re-directing bots, etc., all written by the script kiddies of the day and reproducing as nightmare progeny.

    I suggest that, with such a threat, the safeguards need to be more formidable than any yet formulated. We would need to have - virtually - "viral inhibitors" keyed to destroy any interlopers on our system. Does such technology currently exist? Do we want to release these bots into the world before it does?
  • Damn foolishness, typing in the dark on a laptop keyboard and not previewing my post. Pretend like there is a couple of breaks in there, B@K = B2K, and 'reach my' = 'reach me'.

    At least I got the rdf url [beyond2000.com] right :)
    -
  • Come on, doesn't that sound a little like the start of a William Gibson book?

    Yes. But it sounds even MORE like the Terminator series.
  • by Pinball Wizard ( 161942 ) on Friday June 09, 2000 @08:24PM (#1012305) Homepage Journal
    A description of Sandia's Red Team can be found here [sandia.gov]. These guys are somewhat more sophisticated than your average script kiddie.
  • This deals with such a wide array of computer sabotage that it's utterly amazing. Everything from break-ins to virii to DDoSes can be successfully combated by this. It's exactly what the net needs.

    How does it deal with a DDOS? If a DDOS sucks up all the bandwidth a site has, how does this program help anything?

    It seems, if anything, that it could add an additional weakness--if the attacker knew anything about the bots, he could deliberately do things that send them into a frenzy, rendering the security bots themselves a menace to the system.

  • The researcher's name is Stephanie Forrest. A characteristic quote of hers is "Correctness is overrated" -- I disagree, but I see her point.

    see this article [santafe.edu]

  • Well, several reasons... for the time being, suppose you have a group of machines (workstations or whatever) to protect.

    If the program is not distributed, then by running port scans and various other info-gathering activities against machines A, B, and C, I can gain information about machine D, as they are probably all configured similarly.

    If they run only with local data, they may not pick up patterns that span multiple computers.

    A partially successful attack on machine A (non-root access), or even an attempted attack, may cause the machine to close the port... it would be nice if the other machines did likewise (as they now know there is a security hole here); a toy sketch of that note-sharing appears at the end of this comment.

    Which machine has control over the firewall? It seems one of the advantages of this system is that it can dynamically affect the firewall. If any machine has the ability to unilaterally affect the firewall one hacked machine could enact changes which fuck up the entire network.

    Communicating with client software on the machines probably makes it easier to determine which machines are hacked.
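
    Here's the promised toy Python sketch of that note-sharing. The advisory format and the blind trust between peers are invented for illustration; a real system would authenticate reporters, which is exactly the trust problem raised elsewhere in this thread.

    class HostAgent:
        def __init__(self, name):
            self.name = name
            self.peers = []                  # the other machines we compare notes with
            self.closed_ports = set()

        def on_attack(self, port):
            self.close(port)
            for peer in self.peers:          # warn everyone else about the hole
                peer.advisory(self.name, port)

        def advisory(self, reporter, port):
            # blind trust for the sketch; a real agent would authenticate 'reporter'
            self.close(port)

        def close(self, port):
            if port not in self.closed_ports:
                self.closed_ports.add(port)
                print(f"{self.name}: closed port {port}")

    a, b, c = HostAgent("A"), HostAgent("B"), HostAgent("C")
    for h in (a, b, c):
        h.peers = [p for p in (a, b, c) if p is not h]
    a.on_attack(111)    # A gets probed on the portmapper; B and C close it too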
  • Look... the report came from Beyond 2000; while I really like the show, they do tend to overdramatize things (so don't take what they say too seriously).

    For instance, they claim to deal with DDoS by shutting off access to that port. While this certainly helps (fewer half-open connections, so the computer may not crash), it doesn't eliminate the DDoS.

    I would wait for confirmation before taking all the claims here as gospel.
  • Think about it: we have DDoS (even Slashdot got hit during their server move), and now we have another distributed tool, this time to prevent DDoS. Strange.

    My question is: can you run them both on the same computer? Think about all the fun you could have!
    -0-0-0-0-0-0-0-0-0-
    Laptop006 (RHCE: That means I know what I'm talking about!)
    Melbourne, Australia

  • So say your site gets slashdotted? Then these bots will be nice enough to close port 80 for you? Thanks bots! :)

    Things like that are hard to tell: how does the bot tell the difference between a DoS attack on your httpd and someone just posting the story to Slashdot? What about a ton of incoming email: is it spam, a DoS attack, or a bunch of people replying to your ad on monekybage?

    I am not flaming the project; it sounds like a noble one that is really damn cool, but I don't understand how they are doing things...

    I would like to install a system admin bot then just troll slashdot all day...

  • by Cato ( 8296 ) on Friday June 09, 2000 @11:52PM (#1012312)
    Having redundant links into the Net is certainly one way of handling DDoS, and is in fact already practiced (more for general resilience and performance, but it works well against DDoS) - see http://www.dn.net/technology/network.html for an example of how DigitalNation, a dedicated server hosting company, has tens of links to various ISPs.

    This will only work against DDoS if the new routes (i.e. via un-DDoSed links) are advertised out quickly enough - first of all, the IDS needs to detect the DDoS, then it needs to mark the links under attack as down, so that the routing protocol (which has to be BGP, as you are interfacing to multiple ISPs) can advertise the new routes.

    These routes then have to propagate out throughout the Net, across multiple ISPs, via BGP, until they reach the ISPs of (non-attackers) who are trying to reach your site - these ISPs' routers will then start sending packets via the new routes. The tough part is making sure the route advertisements don't reach the DDoSing hosts - if they do, you have just moved the DDoS attacks onto a new link!

    The IDS actually has to analyse the origin of the DDoS attacks (which may mean cooperating with upstream ISPs, since the source addresses will be forged if the attackers have any sense), working out which ISP is hosting the attacking hosts, and then make sure that the route advertisements don't get sent to that ISP. While BGP is very powerful, I'm not sure if it can do this - ask a BGP guru... In any case, if the DDoS attackers are smart enough to subvert hosts in tens of major ISPs, there is no way you can use this 'change route' approach to combat them, without cutting off many legitimate users of your site who are also getting on the Net via those ISPs.

    Before DDoS, this approach would have worked, i.e. where there was a single host attacking you - but it's now not sophisticated enough. It may still help in some DDoS attacks, it just depends on luck and the skill of the attackers.

    Combating DDoS is a fundamentally hard problem. The best single mechanism to reduce DDoSes and make them easier to track is RFC 2267 (see www.faqs.org for a copy), which prevents people from injecting packets with forged source addresses (actually they can forge their host address, but can't claim to be from a different network). This makes it much easier to directly contact the ISP whose customer or web hosts have been compromised and get them to put in filters blocking the attack; a toy version of that ingress check appears at the end of this comment.

    Without source address spoofing prevention, you have to have a laborious process of going from network operations centre (NOC) to NOC for each ISP back up the chain, getting them to put on traffic analysis tools to see where the traffic is coming from.
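
    As promised, a toy Python version of the RFC 2267 ingress check (the customer prefix is invented; a real implementation lives in the router's forwarding path, not in a script):

    import ipaddress

    # hypothetical assignment: this customer port should only emit 192.0.2.0/24
    ASSIGNED = ipaddress.ip_network("192.0.2.0/24")

    def ingress_ok(src):
        """RFC 2267-style filter: drop outbound packets with foreign sources."""
        return ipaddress.ip_address(src) in ASSIGNED

    print(ingress_ok("192.0.2.55"))   # True (the host part may still be forged)
    print(ingress_ok("10.9.8.7"))     # False: spoofed source, drop it at the edge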
  • by Alik ( 81811 ) on Saturday June 10, 2000 @05:19AM (#1012313)
    I spent two years working for these guys [dartmouth.edu] building the Scheme component of their agent system, so I had a chance to learn something about the general theory of the field. Every agent system I've seen has a notion of a sandbox that agents are limited to. In the case of our particular system, agents were also to be signed by their "master", who might then be responsible for any damage caused. Agent data transmitted across the network could be encrypted; agents themselves had to be packaged and signed when in transit between machines, unless they came from a "trusted" machine. Inter-agent communication was not direct; it went through the agent server daemon on each host machine, so that untrusted agents wouldn't need to have the ability to open sockets or files. We were slowly putting together a system for resource allocation, such that each agent would only be allowed to use a certain percentage of each system resource --- that can help prevent a DDoSing agent. (There were interesting attempts to work out a micropayment-like system for purchasing resource access; I don't think it ever got finalized.)
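
    To make the signing step concrete, here is a toy accept-or-reject gate in Python. I'm using an HMAC shared secret purely for brevity (our actual system used public-key signatures), and every name and key here is invented:

    import hashlib
    import hmac

    TRUSTED_MASTERS = {"sandia-master": b"shared-secret-key"}   # invented key material

    def accept_agent(package, master, signature):
        """Admit a migrating agent only if its master's signature checks out."""
        key = TRUSTED_MASTERS.get(master)
        if key is None:
            return False                     # unknown master: reject outright
        expected = hmac.new(key, package, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    code = b"...serialized agent..."
    good = hmac.new(b"shared-secret-key", code, hashlib.sha256).hexdigest()
    print(accept_agent(code, "sandia-master", good))       # True: admitted
    print(accept_agent(code, "sandia-master", "forged"))   # False: bounced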

    In short, if Sandia has remotely competent people, these agents are going to have strict limitations on their capabilities. Are they completely immune to attack? No. As Bruce Schneier has taught us, this only reduces risk. Still, if you add a requirement for agents to monitor each other, a human would have to be damn good to compromise a sufficient agent population. (Of course, this means that we may be headed for a future of eternal agent war. Might be cool. Want to prove open source? Make Tux2.0 the agent that can kick the crap out of any other agent.)
  • So this is the part no one's focusing in on...

    It will tighten security when attacked. Well why the heck isn't security tight anyway?! Viruses aren't a problem when you have an OS with permissions.

    Port scans aren't really attacks; they only tell you what's on, and firewalls take care of this.

    DoS attacks, and this is my favorite... they will shut down the port when they get a DoS attack! Well, that to me sounds like DoS!!!
    So to protect ourselves from a DoS, we're going to turn off all the services. Please excuse me while I shoot myself in the head, I think I just caught a cold.

    Even if it just blocks that one machine, like PortSentry, how the heck does it _really_ know the offending machine? That can be spoofed. Either way, it's just denying service to that machine, making the whole thing just a little more useless.

    - Serge Wroclawski
  • With the rather sweeping powers that are assigned to these bots, it is rather easy to imagine a time when the behavioural complexity of the bots overcomes the ability of human analysts to distinguish between programmed-for behaviour and errant behaviour. Unless, that is, there is a theoretical foundation for bot algorithms that can guarantee that the operations done by the bots will not corrupt computational integrity.

    For example, in the worst case scenario, let us assume that the bots are written with behaviour so complex that they become self-modifiable and have a goal of survival. Hypothetically, one of the things the bot could do is to ensure its survival by (1) propagating itself over the network (i.e. virus like behaviour) and (2) disabling all the sysadmin tools that could be used to wipe out the bot by a sysadmin.

    Admittedly, this is kind of sci-fi-like behaviour, but then, WAP devices would have fit in very well with 1930s sci-fi. Are there any theoretical results that indicate that such a scenario is not possible? If it is possible, is there a well-defined set of bot operations that is "safe" to code for when making a bot?

  • What about internet transactions? I can order stuff from the Shop-down-the-street(tm) and something from Australia five minutes later. How are they supposed to respond to someone who has such a sporadic buying pattern?
  • The "agent" idea reminds me of numerous self-protection iRC scripts. There are literally thousdands of "me too" irc scripts that op/deop/kickban/etc people faster than humans can do. pretty much the human element of IRC can be taken out.

    Now let's ponder this. You ever see people riding in netsplits? I've seen all too many irc scripts interpret a ride in on a netsplit as an "attack" and HUGE chain reactions of mode changes occur between these scripts. It quickly turns to hell.
  • by tbo ( 35008 ) on Friday June 09, 2000 @08:00PM (#1012318) Journal
    It sounds like a smarter firewall that communicates with other firewalls on the same network. Not a huge advance in technology, just some marketroid got carried away. Here are the amazing features/capabilities it has:

    port scans: it detects port scans. Most firewalls do this. Theirs just detects ones that take place over a longer period, too (a toy sketch of that long-window detection is at the end of this comment).

    faint probes: isn't this pretty much the same as above? So it detects "stealth" scans, etc. A lot of firewalls do this.

    trojan horses: it recognizes "patterns" indicative of trojan horses. Tripwire, anyone?

    denial of service attacks: there's only so much you can do without changing the upstream routing hardware/logic, especially against DDoS or DoS from a source with higher bandwidth (wanna bet Sandia has a really fat pipe, though?)

    security functions are integrated with ordinary everyday network use: email and web browsing are integrated into the security agent? How does that work? All I can think of is global security settings. Kinda nice, but is that really necessary if you're not running buggy MS junk?

    'live' programs such as the I-Love-You virus are prohibited: this is a problem of stupid users and really bad design. Untrusted scripts/executables shouldn't run automatically, and user education is the most important part of any security system.

    Really, this just sounds like a souped-up firewall + Tripwire. Nothing too revolutionary. Wanna bet that a properly-configured OpenBSD box could have held off those four script kiddies (err, "experienced hackers") for 16 hours, too?

    Sorry for being so bloody sarcastic, but this just sounds like the kind of marketroid detail-free crap that ZDNet usually turns out.
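
    Since I mentioned long-period scan detection above, here's roughly what that amounts to, as a toy Python sketch (the day-long window and the 20-port trigger are numbers I made up):

    import time
    from collections import defaultdict

    WINDOW = 24 * 3600    # remember probes for a day, to catch slow scans
    PORT_LIMIT = 20       # distinct ports from one source before we call it a scan

    probes = defaultdict(list)    # source ip -> [(timestamp, port), ...]

    def record_probe(src, port, now=None):
        now = time.time() if now is None else now
        probes[src] = [(t, p) for t, p in probes[src] if now - t < WINDOW]
        probes[src].append((now, port))
        return len({p for _, p in probes[src]}) >= PORT_LIMIT

    # a scan spread over half-hour intervals still trips the detector
    t0 = time.time()
    for i, port in enumerate(range(1, 25)):
        hit = record_probe("203.0.113.66", port, now=t0 + i * 1800)
    print(hit)    # True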
  • Talk to My Agent: In time the agents may graduate from patrol to control. Intelligent agents would be ideal for the control of interplanetary robot swarm missions while at the same time protecting them from long distance hackers or practical jokers. Closer to home, micro-satellite swarms or perhaps even remote-controlled jet fighters could be computer-coordinated with agent assistance.

    This sounds like a really clever idea. Until they started talking about remote-controlled jet fighters. No one can be taken seriously after they start talking about remote-controlled jet fighters being piloted by autonomous swarms of autonomous programs. :)

  • by gargle ( 97883 ) on Friday June 09, 2000 @08:30PM (#1012320) Homepage
    I attended a talk last November given by a Santa Fe Institute lecturer on computer intrusion detection systems modelled after the human immune system. (Unfortunately, I can't remember what her name was, otherwise I would try to post a link.) There's actually a very strong parallel between what a computer security system has to do and the role of the human immune system: the key behind both these systems is to be able to distinguish between "self" and "non-self". In the case of the human immune system, the antibodies are trained on marrow cells (?) and only released into general circulation if they do not attack host cells. In their research, they used genetic programming to train the intrusion detectors on "typical" network activity - after which, the detectors would be able to identify and report non-typical activity. It supposedly works pretty well.
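
    The negative-selection idea is easy to sketch. Here's a toy Python version (4-bit strings and a one-bit matching rule stand in for real traffic features; the actual research uses far richer representations):

    # toy feature strings: 4-bit patterns standing in for connection signatures
    SELF = {"0101", "0110", "0111"}      # traffic observed during normal operation

    def matches(detector, pattern):
        """Toy matching rule: fire if the strings differ in at most one bit."""
        return sum(a != b for a, b in zip(detector, pattern)) <= 1

    # negative selection: keep only detectors that fire on NO self pattern
    candidates = {format(n, "04b") for n in range(16)}
    detectors = {d for d in candidates if not any(matches(d, s) for s in SELF)}

    def anomalous(pattern):
        return any(matches(d, pattern) for d in detectors)

    print(anomalous("0110"))   # False: self traffic is tolerated
    print(anomalous("1000"))   # True: non-self pattern trips a detector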
  • by Pinball Wizard ( 161942 ) on Friday June 09, 2000 @08:05PM (#1012321) Homepage Journal
    I can't remember the last time I got this excited about a piece of software. First, from the sounds of things, they want this to be a big distributed program like DNS. I imagine they would like to see every ISP run this.

    This deals with such a wide array of computer sabotage that it's utterly amazing. Everything from break-ins to virii to DDoSes can be successfully combated by this. It's exactly what the net needs.

    What would really be cool, of course, is if the source was released (drool). But maybe that will happen, since from the article it sounds like they want to see their program widely distributed.

  • And then I crack the bot and get in anyway. Lovely thought; sounds like something that's been done before.
  • It's not that I'm discrediting the guys over at Sandia, but the idea of bots that "runs on multiple computers in a network...constantly compare notes to determine if any unusual requests or commands have been received from external or internal sources" is not unique or a first.

    There are in fact two notable examples of distributed network monitoring/intrusion detection tools out there already that sound very similar to Sandia's new tool: the HummingBird System and MOM.

    The Hummer Project [uidaho.edu], led by Dr. Deborah Frincke, has been around since early 1998, and their main project, the HummingBird System, is now in version 3.4. It is a complex toolkit that gives an administrator the power to distribute security and intrusion detection information between several hosts (including Solaris and NT machines as well as Linux) in which multiple attackers and targets are mixed and matched.

    The other example I know of is MOM [wisc.edu], which unfortunately has been out of development for over a year now.

    The main similarity in the two systems' functionality is that they both have:

    • A main process that runs on a central machine that gathers, sorts, and reports on data received from children on other hosts.
    • On other hosts, a child client process runs which reports anomalies to the central host; and
    • On all hosts, agents run that perform various maintenance, diagnostic, and intrusion detection tasks.

    So as you can see, distributed anti-cracking and IDS tools have been around longer than you think and are quite refined (a toy sketch of the central/child topology they share follows below). Good luck setting them up, and for those developing them:

    Keep up the great work.
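
    The promised toy sketch of that shared topology, in Python (the "failed login" trigger and the host names are invented; real children run much smarter detection agents):

    class Central:
        """Main process: gathers, sorts, and reports on data from the children."""
        def __init__(self):
            self.reports = []

        def receive(self, host, anomaly):
            self.reports.append((host, anomaly))
            self.reports.sort()

        def summary(self):
            return [f"{host}: {anomaly}" for host, anomaly in self.reports]

    class Child:
        """Child client: watches its own host and reports anomalies upstream."""
        def __init__(self, host, central):
            self.host, self.central = host, central

        def check(self, log_line):
            if "failed login" in log_line:    # stand-in for real detection agents
                self.central.receive(self.host, log_line)

    hub = Central()
    for host in ("mars", "venus"):
        Child(host, hub).check("failed login for root")
    print(hub.summary())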

  • denial of service attacks which recently shut down the web-side operations of several American corporations have no effect. Cyberagents faced with a runaway barrage of incoming network requests close the gates of the system to prevent it from being flooded.

    Hmmm... would that be no effect, besides shutting down port 80, into which the attacks were coming? Are they serious that this would keep the website up, with no effect?
  • by |deity| ( 102693 ) on Friday June 09, 2000 @08:34PM (#1012325) Homepage
    At least this will give the truly skilled black/grey hat hackers something new to play with. I would bet that this type of computer defense would be good against script kiddies, but skilled computer intruders would be able to get around it. In and of itself this could make for several vulnerabilities. Imagine a DoS attack that only has to do a port scan to fool the computer's defenses into closing its ports. Although, like all vulnerabilities, it wouldn't last long.

    Once this becomes more common and something people are familiar with, attacks against it will get easier. I would imagine that in the real world people will do like they always do and shut off many of the security features in order to make their lives more convenient.

    From the article: A consumer release is at least three years away as Sandia says the agent must be "trained to protect a wider variety of services" before it can be of much use to the average household. One suspects it also needs to be dumbed down slightly so that it is not quite as clever as the military-grade version.

    Besides, it's like an arms race, with both sides forever increasing the sophistication of their weapons. "What kind of attacks will these be? Well, if these agents are as good as Sandia says, you can bet it won't be long before the bad guys get some bots of their own and start using them against governments, corporations and the general public."

    I thought the opening comment "A cool-sounding project, to be sure -- but how much control is too much to cede to the intelligent agents?" was a little paranoid until I read the end of the article: In time the agents may graduate from patrol to control. Intelligent agents would be ideal for the control of interplanetary robot swarm missions while at the same time protecting them from long distance hackers or practical jokers. Closer to home micro-satellite swarms or perhaps even remote-controlled jet fighters could be computer-coordinated with agent assistance. Come on, doesn't that sound a little like the start of a William Gibson book?

  • To surf for porn? Imagine having a whole bunch of semi-intelligent bots collectively searching the 'net for free porn, faster than any hormonally overcharged teenager could ever hope to achieve!
  • Speaking of Shadowrun, anyone thinking of "ice"?

    This is probably the first step towards black ice.
  • Sounds funny to me.

    Let's have the dogs protect the hens from the foxes.

    As if the typical users are going to be able to tell the difference between dogs and foxes.

    Judging from these things - one is just _less_ likely to screw up your system.

    We don't need more "automation" for this thank you. Software is still STUPID. Most users are still IGNORANT. So all we need is something that won't change no matter what you chuck at it.

    Linux doesn't protect users from themselves either. What might help would be an O/S which defaults to TEST/RUNSAFE instead of RUNASUSER. That way the program has only a safe subset of the user's rights - it can't alter/delete stuff it didn't create without permission from the user.

    You just set up profiles for types of programs - "word processor", "network diagnostic tool", "browser" - and if the programs don't behave as expected, you get an error.
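
    As a toy Python sketch (the profile table and operation names are invented; a real O/S would enforce this in the kernel, not in user code):

    # invented profile table: which operations each class of program may perform
    PROFILES = {
        "word processor": {"read_own", "write_own"},
        "browser": {"read_own", "write_own", "network"},
        "network diagnostic tool": {"network", "raw_socket"},
    }

    def run(profile, operation):
        if operation not in PROFILES.get(profile, set()):
            raise PermissionError(f"{profile} tried {operation}: not in its profile")
        return f"{operation} permitted for {profile}"

    print(run("browser", "network"))          # expected behaviour: fine
    try:
        run("word processor", "network")      # a word processor phoning home
    except PermissionError as err:
        print(err)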

    Cheerio,

    Link.

  • A relative of mine had the bank call her up one day to check a suspicious set of transactions: her card had been used to buy several loads of groceries several hundred miles away. Her card had been stolen - but the bank noticed this before she did! Now that's useful software...

    On the other hand, my father had his card cloned in LA a couple of years ago. The duplicate card spent a happy three months buying fuel at the same gas station, and that bank didn't notice a thing - even though the same card was also being used back home in the UK at the same time. Then the bank refused to cancel the transactions, since the card hadn't been stolen and the protection plan only covered card theft. Sometimes, the cards themselves have a higher IQ than the bank that issues them :-(

  • >MS Bob?

    SMACK!

    and once again, I say...

    SMACK!

  • i know what I would do!

    i could gaze deeply into her eyes and say, "hey baby, let us go to quick trip and feast upon a vegetarian burrito." then i would take her back to my place and look longingly at her. then i would turn her to stone! :)


    What do I do, when it seems I relate to Judas more than You?
  • Excuse me - whoever flagged this as flamebait should look again. Yes this person is steaming a bit, even smouldering, but read between the lines. They are making an interesting point that some people are going security mad.

    Think before you moderate...
  • OK, so basically, if I'm being attacked and this system is protecting my boxen, they assume a defensive posture, start shutting down, hiding, etc.

    No thanks. I'd prefer a more offensive response: keep a library of known exploits, port scan the attacking host, determine its host type, and launch an exploit right back as an attempt to shut the bitches down or DoS them.

    Yeah, there are some ethical issues, but if someone enters my house and I kill them in response, I've still killed someone. The difference is that it was in self-defense.

    I'm just thinking of automating what is a normal human response. Often, the only time people learn their box has been r00ted is when the intruders launch other attacks from it to other boxes, the foreign admin notifies the cracked box WHOIS contact, and they go find it and shut it down.

    I'm also thinking of this from the other side too. If someone r00ts a box I have control of, I want it shut down ASAP to contain the damage. The sooner the better, no matter what that particular box's function is or how important it is. The sooner it is shut down, either by myself or a similar self-defense response, the better for ME. A compromised box sitting around undetected is fertile ground to sniff packets and attack other inside boxes.

    In almost all cases of compromise I've seen, the attacking box is a compromised box itself. Therefore, by definition, it has an available remote exploit on it, can be compromised, and can then have "init 0" sent to it (if a *nix box) and be shut down.

    I know, loads of dangers and I can think of a lot of them. Stuff like this can get out of hand, but so can the "agents" in the article mentioned here. Once someone discovers how an agent works, it can be turned against itself.

    But when it comes to automated attack responses, I like fantasizing about my scenario better than a "pussy" defensive response! :-)

    "What, you looking at me muthafucker? Prepare to die bitch!"

  • A partially successful attack on machine A (non-root access), or even an attempted attack, may cause the machine to close the port

    I still don't see how this helps against DDoS. After all, suppose your purpose *is* to close the port?

    One thing the source article seemed to hint at but didn't quite say was that it somehow rerouted the traffic dynamically. Another nice touch is the idea of a network learning what "normal" traffic is, but then suppose you're a weirdo and constantly do bizarre stuff with your network?

    It would be really annoying to be attacked by your own bots every time you tried doing something new.

    I stopped reading it at around the phrase "interplanetary robot swarm" because it just started to seem like a tremendous crock of shit at that point.

  • by mangu ( 126918 ) on Saturday June 10, 2000 @01:29AM (#1012336)
    Quoting the article "Statistical Pattern Recognition: A Review", from the January 2000 IEEE Transactions on Pattern Analysis and Machine Intelligence:

    By the time they are five years old, most children can recognize digits and letters ... We take this ability for granted until we face the task of teaching a machine how to do the same ... In spite of almost 50 years of research, design of a general purpose machine pattern recognizer remains an elusive goal.

    I don't think this Sandia project will work as intended. Until we build a computer with processing power equivalent to our brains' trillion synapses, a human will be able to beat a computer in many ways.

    However, that bot can have uses other than the acknowledged ones. Censorship, for instance. Security must be absolutely perfect, while censorship may have holes - it's much worse having a cracker penetrate the Sandia labs' atomic research files than a public library misclassifying a pr0n website.

  • by Airon ( 108830 )
    This sounds to me like the first example of ICE. Black ICE would incorporate counter-attacks (DDoS on your feeble little machine) by the network. Somebody should give William Gibson a call and tell him :).
  • by Remus Shepherd ( 32833 ) <remus@panix.com> on Saturday June 10, 2000 @09:26AM (#1012338) Homepage
  • Anyone remember the game BotWars? It was simple: using assembler language in a protected and limited memory space, write a bot that will kill any other bots on the system. Most bots sprayed memory with nulls or JMP commands to corrupt and kill everything else on the system. But one very powerful bot was known as the Five Musketeers.

    The Five Musketeers bot looked for copies of itself in memory, and if it didn't find them, it created up to four copies. Each copy then kept checking on the health of the other four, and if one copy became corrupted it was rewritten. Thus, the Five Musketeers were cooperatively immortal, and payloads could be added to them to spray memory or any other offensive attack you wished.
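
    For anyone who never played, the musketeers' trick fits in a few lines of toy Python (hashes stand in for the byte-for-byte comparison the real bots did in core; the payload bytes are invented):

    import hashlib

    GOOD = b"musketeer-payload-v1"

    def digest(blob):
        return hashlib.sha256(blob).hexdigest()

    REFERENCE = digest(GOOD)
    copies = [bytearray(GOOD) for _ in range(5)]    # the five musketeers
    copies[2][0] ^= 0xFF                            # one copy gets corrupted

    def patrol(copies):
        """Each pass, healthy copies rewrite any peer that fails the hash check."""
        for i, c in enumerate(copies):
            if digest(bytes(c)) != REFERENCE:
                copies[i] = bytearray(GOOD)         # restore from a healthy peer
                print(f"copy {i} rewritten")

    patrol(copies)
    print(all(digest(bytes(c)) == REFERENCE for c in copies))   # True: immortal again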

    Sound anything like Sandia's bot yet?

    I'm not sure I like the idea of the internet turning into a playfield for agents like those in BotWars. It could rapidly turn into a wasteland, with all the bandwidth going to automatic attacks and defenses. :/
  • by FFFish ( 7567 ) on Friday June 09, 2000 @08:05PM (#1012339) Homepage
    By my understanding, VISA does a similar sort of thing with its transaction processing. The software monitors your usage pattern -- locations, dollar amounts, dates, time and suchlike -- and attempts to identify abnormal usage.

    So, probably, most of your spending is in and around your hometown. Once in a while you make a trip to the big city. You don't seem to use it a lot at the jeweller's -- Christmas is the exception.

    Hmmm.. what's this? You're buying a $3000 necklace at Goldstein's Jewellers in Vancouver, BC? Seems unlikely you'd be getting a videotape from Roger's on Tuesday in Poughkeepsie, and then buying diamonds in Vancouver on Wednesday... let's deny the transaction, or get the clerk to confirm ID.

    Now, this is hearsay. I can't say I've *read* a report on this, but I've heard several people tell of it. And it doesn't seem such a stretch, though I've never actually heard of someone being denied an unlikely purchase.

    Anyway, long and short of it is that it's not a real stretch to imagine this being a powerful tool for networks. Monitor the traffic and perform analysis: start figuring out what's normal and what's not. And alert someone when abnormal things begin to happen.
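
    A toy Python sketch of that learn-the-baseline idea (the window size, the z-score trigger, and the packets-per-minute metric are all invented for illustration):

    import statistics

    class BaselineMonitor:
        """Learn what 'normal' volume looks like, then flag big deviations."""
        def __init__(self, history=60, z_alert=4.0):
            self.samples, self.history, self.z_alert = [], history, z_alert

        def observe(self, packets_per_min):
            if len(self.samples) >= 10:                   # wait for a baseline first
                mu = statistics.mean(self.samples)
                sd = statistics.stdev(self.samples) or 1.0
                if (packets_per_min - mu) / sd > self.z_alert:
                    print(f"ALERT: {packets_per_min} pkts/min vs normal ~{mu:.0f}")
            self.samples = (self.samples + [packets_per_min])[-self.history:]

    mon = BaselineMonitor()
    for v in [100, 110, 95, 105, 98, 102, 99, 107, 101, 103, 104]:
        mon.observe(v)
    mon.observe(5000)    # the slashdotting (or the DDoS) trips the alert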

    Sounds cool. I'm for it!

    --
  • A year ago I got my first VISA card and promptly took a 3-month jaunt to Buenos Aires, Argentina, where I planned to use the new card as my primary means of cash flow. However, upon arrival I found that none of the ATMs would accept the card, and I was unable to access my account through any of various networks. I'm not certain that this was something VISA did... perhaps EVERY ATM in Buenos Aires with the VISA insignia on it malfunctioned (and I tried about 30). But the only fault of the card was at the ATM... it still worked perfectly for purchases. It was quite a headache for me at the time, but I must support any kind of technology that keeps my money a little more secure... even if I'm not legally obligated to pay for unauthorized uses of my credit card number.
  • Or, you can compare it to Shadowrun. Heh.
  • See the best part is you wouldn't even know you're in pink poo, everything would be completely normal for you. I've seen the Matrix about 3 times and I still can't see why the agents could ever be considered bad guys. Cypher was the only person in that movie with a clue.

Disclaimer: "These opinions are my own, though for a small fee they be yours too." -- Dave Haynie

Working...