Sandia's Distributed Anti-Cracking Bot 70
tgw writes: "Beyond 2000 is reporting that Sandia National Laboratories has created a unique type of bot that can defend against cracking attacks and viruses. The bot 'runs on multiple computers in a network and depending on circumstances the separate copies of the agent can act alone or together as a single, distributed program. The [bots] across the network constantly compare notes to determine if any unusual requests or commands have been received from external or internal sources.'" A cool-sounding project, to be sure -- but how much control is too much to cede to the intelligent agents?
Re:Whoop-dee-do (Score:1)
Interesting points to ponder. So it can detect DDOS and close things down? How? I thought the whole idea of DDOS was that at first it just looks like heavy traffic. So how do you tell the difference? Or do you always just shut the box down when the traffic gets heavy (seems kinda stupid)? Also, it can keep virii like ILY from entering the system (paraphrased). How? Will it stop all executables from being sent through email? Couldn't that be done by easier means? Please don't tell me they're going to scan email looking for questionable code (and invade my privacy).
If anything this is a really cool look at how someone with an imagination can take a security util and bring it to life. Hello agent smith.
I am only an egg, be kind... but do reply
Well, yes (Score:1)
Cheers,
Rick Kirkland
Bots protecting both ends (Score:1)
Analyzers at multiple points along the backbones could communicate and discover DDOS attacks and cease to route packets that could be nothing but a massive coordinated attack. Seems like backbone bots could be triggered by a plea for help by the network being attacked. That would help extinguish smurf attacks and could probably be used to actually help track down poorly configured networks.
This could also put the burden on ISPs that have lousy security and allow forged packets to escape their domains. Sure, make them accountable for their insecurity.
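A rough sketch of what that "plea for help" might look like in code. Every message field, threshold, and timeout here is invented for illustration; nothing like this protocol is described in the article:

```python
# Hypothetical sketch: a backbone agent receives a victim's help request
# and installs a temporary drop rule for the suspected source prefix.
from dataclasses import dataclass, field
import time

@dataclass
class HelpRequest:
    victim: str          # network under attack, e.g. "203.0.113.0/24"
    suspect_prefix: str  # where the flood appears to originate
    reported_pps: int    # observed packets per second

@dataclass
class BackboneAgent:
    drop_rules: dict = field(default_factory=dict)  # prefix -> expiry time
    threshold_pps: int = 50_000

    def handle_plea(self, req: HelpRequest, now=None) -> bool:
        """Install a 10-minute drop rule if the reported flood is credible."""
        now = now or time.time()
        if req.reported_pps < self.threshold_pps:
            return False  # not enough evidence to start filtering
        self.drop_rules[req.suspect_prefix] = now + 600
        return True

    def should_route(self, src_prefix: str, now=None) -> bool:
        """Route normally unless a live drop rule covers this prefix."""
        now = now or time.time()
        expiry = self.drop_rules.get(src_prefix)
        return expiry is None or now > expiry

agent = BackboneAgent()
agent.handle_plea(HelpRequest("203.0.113.0/24", "198.51.100.0/24", 80_000))
print(agent.should_route("198.51.100.0/24"))  # False while the rule is live
```

The expiring rule matters: a permanent block triggered by a forged plea would itself be a denial of service, which is exactly the trust problem raised elsewhere in this thread.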
Re:this is sweet (Score:5)
When a DDoS strikes, you 'wake' a normally dormant redundant line to another backbone (possibly via direct link to a high volume ISP to hide your signature), then you identify your "known" valid users and VPN to them via the peer. You can VPN via a distant ISP/node, so the attacker won't know you're communicating with 'legitimate users' via a hidden link to the ATT backbone in Ogden Utah and emerging on the Net over the redundant bandwidth of
The invaders can pound on the gates all they want. You're not using the front door anymore. You can also have secret VPN router/authenticators around the country that 'legitimate users' can connect to when the front door is slammed shut. The details could be built into key software, transparent to the legitimate users.
It's like a reverse DDoS. In a DDoS, the attacker overwhelms you with attacks from every direction, faster than you can respond. In the 'Reverse DDoS' Defense, the legitimate communications links are leaking out from all over, faster than you can find them and swamp them. In DDoS attacks, the defender is trapped by converging bandwidth from 'distributed attackers'. In the Reverse DDoS defense, the 'hidden peering' lets the defender 'distribute its communication bandwidth' to emerge as barely above background blips in widely divergent locations.
Well, that's how *I'd* do it, if I were the Feds. Hence the unimaginative codename "Legion"
(Am I too close for comfort? My services are available for a fee, Feds.)
Re:More info about the red team (Score:1)
Re:VISA does an analogous thing (Score:1)
AT&T does the same with calling cards. Back in 1992, some friends and I were doing some volunteer phoning involving calling people statewide. We were getting reimbursed for expenses, so I gave each team my calling card number. Within 1 hour I got a call from AT&T, wanting to know if I was aware that dozens of calls were being charged to my card simultaneously from 4 different locations - and they were clearly surprised when I said yes!
======
"Rex unto my cleeb, and thou shalt have everlasting blort." - Zorp 3:16
Re:VISA does an analogous thing (Score:1)
Apparently it's pretty common among credit card companies.
Re:VISA does an analogous thing (Score:2)
Re:VISA does an analogous thing (Score:1)
I've never been denied a purchase, but I did get a telephone call at home one morning asking me if I had used my card a day earlier to purchase luggage from a travel agency in Hong Kong. (Perhaps the computers don't do profile checking in real-time.) The agent told me that the computer had flagged the transaction based on the merchant's profile.
Cyber Agents - Food for thought (Score:1)
All right, "No central authority operates the agent", but the information for making decisions must be aggregated, a form of centralization. Granted the use of this information for command decisions may be decentralized, but the information aggregation process will susceptible to spoofing.
Also, man will be in the loop somehow. When the information is aggregated, the reaction times for responding to this higher level of information slow down considerably. Thus, the control can become stable. The time for tactical decision making with cyber technology will be too small for a human, but strategic decisions will always involve man.
Distributed DDoS Defence? (Score:2)
[Steve Goldsmith's] assessment of his agent's abilities is blunt: "If every node on the Internet was run by one of these agents, the I-Love-You virus would not have got beyond the first machine." ... decentralised control makes each agent autonomous yet cooperative ... Cyberagents faced with a runaway barrage of incoming network requests close the gates of the system to prevent it from being flooded.
Closing the gates doesn't stop a DDoS attack, it merely limits the damage which can be caused. One defence mechanism is having redundant links which are 'turned on' when currently active links come under attack. If you detect the attack quickly enough, it is, theoretically, possible to redirect valid connections to your new links. In practice it's difficult to implement as you run the risk of redirecting the attackers to your new links too. Of course the attack is still effective in that it has rendered the original link useless, and should the attackers be dynamic enough, they can start attacking the new links.
Sandia's system offers up a new defence - Assume multiple ISPs had agents deployed, say ISP A comes under an attack routed through ISP B from a user connected to ISP C.
Assuming also that the ISPs cooperated, their agents would have a certain level of trust. ISP A's agents could, upon detecting an attack, seek help from ISP B's agents. ISP B then filters that route, and asks ISP C what the hell is going on. And so forth. Of course, one ISP's bot couldn't control another, but I think them saying "Help!!! I'm getting these from here. Please make them go away..." is acceptable.
The thing I don't like about this however is the trust issue. As they say, Trust no one. Practical authentication methods are never 100% reliable. Distributed security agents should never have to trust anybody else. If they do, they run the risk of being compromised if the trusted party is compromised. I accept that trust is necessary in order that network immune systems, anti-virus distributors, and agents like these are effective. But the control that one agent has over another agent is something to be wary of. I'm wondering whether Sandia may opensource their non-munitions version when it's released. They appear keen on the idea of their system being implemented worldwide. Of course, if they didn't opensource, then it's all just a government plot to control the world. :)
The other defence of course, is everybody could just implement RFC 2267 [ietf.org] to prevent address spoofing.
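The core of RFC 2267 ingress filtering is simple enough to sketch: an edge router drops any outbound packet whose source address isn't inside the prefixes assigned to that customer port. The port names and prefixes below are made up; real routers do this with ACLs rather than Python, of course:

```python
# Minimal illustration of RFC 2267-style ingress filtering.
import ipaddress

# Prefixes legitimately assigned to each customer-facing port (invented).
CUSTOMER_PREFIXES = {
    "port1": [ipaddress.ip_network("192.0.2.0/24")],
}

def ingress_ok(port: str, src_ip: str) -> bool:
    """Accept the packet only if its source address belongs to this port."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES.get(port, []))

print(ingress_ok("port1", "192.0.2.17"))  # True: legitimate source
print(ingress_ok("port1", "10.66.6.6"))   # False: spoofed, drop it
```

As the comment says, this doesn't stop the flood itself, but it makes the true origin network traceable, which is most of the battle.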
"A goldfish was his muse, eternally amused"
Re:redundant Internet links revisited (Score:2)
Phase I
"Public services" (web, etc.) are sacrified. They aren't a gov't priority when under DDoS attack. The front door is shut (or left open but ignored)
Specifically authorized high-priority users with a pre-existing relationship to Sandia are equipped with special software for locating/connecting to "Secret Authenticating routers" (SARs - see below)
Sandia opens the 'back doors', its first messages are advertisements to SARs. SARs don't respond to normal network requests, but only provide SAR info (dynamically assigned alternate IPs and secondary SARs) after a client rigorously authenticates itself. These clients may be DNS routers of secure
When one of these clients finds it cannot connect via standard routing, it clicks into 'defense mode' and consults one or more SARs to open or restore its connection to Sandia via VPN to otherwise 'innocent' and distant sites.
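The Phase I failover described above can be sketched in a few lines. To be clear, the SAR lookup protocol, names, and addresses here are all invented for illustration; the post doesn't specify any of them:

```python
# Hypothetical sketch of 'defense mode' failover via Secret Authenticating
# Routers (SARs). A SAR reveals an alternate VPN endpoint only to clients
# it can authenticate.
class SAR:
    def __init__(self, alt_ip, accepts):
        self.alt_ip = alt_ip      # dynamically assigned alternate IP
        self.accepts = accepts    # set of client IDs it will authenticate

    def lookup(self, client_id):
        # Only rigorously authenticated clients get SAR info.
        return self.alt_ip if client_id in self.accepts else None

def connect(primary_up, client_id, sars):
    """Try the front door first; fall back to SAR-provided hidden links."""
    if primary_up:
        return "primary"              # normal routing still works
    for sar in sars:                  # 'defense mode': consult the SARs
        alt = sar.lookup(client_id)
        if alt is not None:
            return f"vpn:{alt}"       # tunnel out via an 'innocent' site
    raise ConnectionError("no SAR would authenticate us")

sars = [SAR("203.0.113.40", accepts={"trusted-client"})]
print(connect(False, "trusted-client", sars))  # vpn:203.0.113.40
```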
Phase II
If the VPN IPs come under attack, they are shut down or changed, and removed from the SAR list. Clients must re-query for new VPN connect IPs. The list of authenticated clients is scrutinized and correlated with attacks, to identify subverted machines at trusted nodes, or attackers using trusted machines.
The topology is a "private" physical network connecting to the Internet at 'disposable' secure gateways. I presume there are many more possible gateway ISPs/IPs than needed, with only a few in use at a given time. This is practical for the Gov't since the same network can protect all sites secured by this method.
The DDoS fails because the attackers no longer know where to attack. The connection they initially attack never "knows" and therefore cannot reveal, the alternate routing. Dedicated hardware decoders handle any re-addressing over the private network, so compromised internal Sandia machines may not help the attacker much: they can "phone home" and reveal individual disposable outside links, but not the overall network. In the process, they also reveal themselves as suspects for compromise.
Refinements are, of course, possible ad infinitum. I estimate that the DDoS defense will be robust in proportion to the total bandwidth of all available outside gateways, and will degrade gracefully and scale without saturation until the DDoS is sufficient to bring down most of the Internet itself.
Phase III
The Internet is now useless. All external gateways are cut, and the system goes to 'emergency mode', communicating over the internal network. NOTE: the system NEVER uses the internal network as a network if it is still connected to the internet (this would permit network tracing).
At this point, external DDoS is impossible, but there remains the possibility that the Internet only *appears* to be swamped because all connections to the outside are swamped by a massive internal compromise.
A core set of top-priority internal systems (e.g. Military C3I, etc.) and connections will form a network within the network to maintain gov't functions, and identify compromised systems or subnetworks, as above, but on a smaller scale.
a good helper though (Score:2)
No sysadmin in the world worth his/her salt would ever just leave it all up to these bots.
However, in a world of script kiddies and wanna-be hackers, these little guys could really help out a lot.
Locking down a server is no small task, and having a team of little nano/code robots working with you could definitely be cool...maybe slashdot could have these guys running through looking for common trolls too =) A perl bot looking for Natalie Portman and other junk and moderating them to trolls might prove useful *wink*
Discover bit me one time with this (Score:1)
That night I went to fill up the tank, and I get "Card Declined" so I ask the clerk what's up and they say I have to call this 800 number. Turns out they had tried to call my house, but seeing as how we were 2000 miles from home nobody answered.
When I called they said the computer watches for things such as gas only purchases in a short period of time, and then re-enabled the card. The gas station apparently had cached the decline though, so we had to go to another station.
But imagine being stuck 2000 miles from home because your credit card suddenly isn't accepted - that's how it felt until I was able to call and straighten things out.
Interesting, but....? (Score:1)
If suspicious activity is planned smartly, couldn't the bots create so much internal chatter trying to figure out "what's going on" that they themselves create a DOS effect on the network?
Re:VISA does an analogous thing (Score:2)
Mastercard, first time in an Internet book purchase worth some $300 back in '96. Eurocard Frankfurt called me back the other day and asked whether I actually authorized a transfer of $300 to a certain company in Sebastopol, CA or if this was fraudulent.
© Copyright 2000 Kristian Köhntopp
Sign me up (Score:1)
Re:VISA does an analogous thing (Score:2)
Re:What level access do these "agents" have? (Score:2)
Yep. And the bots themselves are after all just another system that is likely to have security vulnerabilities.
It will certainly get attacked with the intention to blind or corrupt it.
What level access do these "agents" have? (Score:3)
If there are "backdoors" or special security exceptions made for the bots, then one would hope there is some means of recognizing rogue 'bots.
The most obvious attack on a system with security like this is to duke in a rogue 'bot faking inheritance of the security privileges given to the real ones, and then use that to mount the attack.
hmm (Score:2)
It's no substitute... (Score:1)
I noticed it's supposed to find Trojan horses. That's all well and good, but, generally, most trojans aren't found as trojans until they're used against Netizens at large.
Arrgh. Why can't everyone use Linux/BSD/Be/*nix?
Advantages? (Score:1)
--
Black Ice (Score:1)
is it really a security improvement? (Score:1)
I was reading this over, and to me, the whole concept seems scary on more than one level.
I am sure most of you have heard of Echelon. These people want us to install a worse-than-military grade (why are we inferior??) security package that we don't even have the source for?
Secondly, I am sure I am not the only one thinking about backdoors. If they had an "accidental" backdoor in one of these things, just think of the havoc! Already, they have functions built in to allow themselves to spread more easily.
Perhaps this is true of all software that comes without source - but this is the only package to which we give free roam of our network, giving it carte blanche - it's not under our scrutiny.
Just a few thoughts..
I won't even touch upon what happens when AI goes bad :-)
Re:Whoop-dee-do (Score:2)
Script kiddies? Don't you think that Sandia can come up with some pretty competent testers? (There's a link to a description of the Red Team above). What I really wonder about is the possibility mentioned at the end of the article - distributed cracker bots. I know this isn't a popular opinion around here, but that's really the sort of technology I don't want to see too publicly available.
Not until I see it in the field (Score:4)
One principle behind securing a network is to disallow unnecessary access between machines. The fewer legitimate channels, and the more predictable the dataflow, the easier it is to monitor for anomalies. There are certain machines in the test harness I'm working on, for example, that *never* talk to one another. If I didn't segregate machines this way, we could lose essential data if a weak front-end box were taken.
Opening the network to roving spiders and allowing them discretionary control over monitoring and transmitting would be difficult to secure. How could I tell the difference between a scan from a bot and a scan from an attacker? How could I identify what is a "dangerous" data transmission when the bots are semi-autonomous and unpredictable?
I don't want to dismiss the idea, because eventually we will have to develop "immune systems" for our machines. But right now, it seems difficult to integrate these two models. When I run my own scans, I know that I'm doing it and I can pick out my work from the logs. This would add a new layer of complexity - something that already exists in abundance in the security field!
-konstant
Yes! We are all individuals! I'm not!
Is it just me, or... (Score:3)
Beyond 2000 RDF (Score:1)
-
Danger (Score:4)
How easy will they be to deactivate once someone else has the key? Let's suppose that we have become reliant on these bots (or their descendants), and that our ever noble, benevolent security experts (Cult of the Dead Cow IV, for example) have decided to re-define "unusual requests" as something which would normally be useful, and vice-versa? Denial of Service attacks are bad enough, but an attack from millions of bots - distributed and replicating WITH PERMISSION across a network - that would be horrific.
If this protocol is accepted and integrated into the system, I can imagine password sniffing bots, e-mail re-directing bots, etc., all written by the script kiddies of the day and reproducing as nightmare progeny.
I suggest that, with such a threat, the safeguards need to be more formidable than any yet formulated. We would need to have - virtually - "viral inhibitors" keyed to destroy any interlopers on our system. Does such technology currently exist? Do we want to release these bots into the world before they do?
Re:Beyond 2000 RDF (Score:1)
At least I got the rdf url [beyond2000.com] right
-
Re:A couple of thoughts (Score:1)
Yes. But it sounds even MORE like the Terminator series.
More info about the red team (Score:4)
Re:this is sweet (Score:2)
This deals with such a wide array of computer sabotage that it's utterly amazing. Everything from breakins to virii to DDOS's can be successfully combated by this. It's exactly what the net needs.
How does it deal with a DDOS? If a DDOS sucks up all the bandwidth a site has, how does this program help anything?
It seems, if anything, that it could add an additional weakness--if the attacker knew anything about the bots, he could deliberately do things that send them into a frenzy, rendering the security bots themselves a menace to the system.
it's Stephanie Forrest. (Score:1)
see this article [santafe.edu]
Re:Advantages? (Score:1)
If the program is not distributed, then by running port scans and various other info-gathering activities against machines A, B and C I can gain information about machine D, as they are probably all configured similarly.
If they run only with local data they may not pick up patterns over multiple computers.
A partially successful attack on machine A (non-root access), or even an attempted attack, may cause the machine to close the port...it would be nice if the other machines did similarly (as they know there is a security hole here).
Which machine has control over the firewall? It seems one of the advantages of this system is that it can dynamically affect the firewall. If any machine has the ability to unilaterally affect the firewall, one hacked machine could enact changes which fuck up the entire network.
Communicating with client software on the machines probably makes it easier to determine which machines are hacked.
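The "other machines close the port too" idea above can be sketched in a few lines. The agent class and method names here are invented purely for illustration:

```python
# Sketch of cross-machine note-comparing: when one agent sees an attack
# on a port, it tells its peers and they close that port as well, on the
# theory that similarly configured machines share the same hole.
class Agent:
    def __init__(self, name, peers=None):
        self.name = name
        self.open_ports = {22, 80, 443}
        self.peers = peers if peers is not None else []

    def observe_attack(self, port):
        """React locally, then warn the peer agents."""
        self.close_port(port)
        for peer in self.peers:      # "compare notes" with the others
            peer.close_port(port)

    def close_port(self, port):
        self.open_ports.discard(port)

a, b, c = Agent("A"), Agent("B"), Agent("C")
a.peers = [b, c]
a.observe_attack(22)                 # attempted break-in on A's sshd
print(sorted(b.open_ports))          # [80, 443] - B closed 22 as well
```

Note that this toy version has exactly the weakness the comment worries about: one compromised agent can close ports network-wide.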
Exaggeration (Score:1)
For instance they claim to deal with DDOS by shutting off access to that port. While this certainly helps with a DDOS (fewer half-open connections, so the computer may not crash), it doesn't eliminate the DDOS.
I would wait for confirmation before taking all the claims here as gold.
DDOS & DACB (Score:1)
we have DDOS, even slashdot got it during their server move,
and now we have another distributed tool, this time to prevent DDOS, strange.
My question is can you run them both on the same computer?
Think about all the fun you could have!
-0-0-0-0-0-0-0-0-0-
Laptop006 (RHCE: That means I know what I'm talking about!)
Melbourne, Australia
Re:Advantages? (Score:2)
So say your site gets slashdotted? Then these bots will be nice enough to close port 80 for you? Thanks bots!
Things like that are hard to tell: how does the bot, say, tell the difference between a DOS attack on your httpd and someone just posting the story to slashdot? What about a ton of incoming email - is it spam, a DOS attack, or a bunch of people replying to your ad on monekybage?
I am not flaming the project, it sounds like a noble one that is really damn cool, but I don't understand how they are doing things...
I would like to install a system admin bot then just troll slashdot all day...
Re:redundant Internet links (Score:3)
This will only work against DDoS if the new routes (i.e. via un-DDOSed links) are advertised out quickly enough - first of all, the IDS needs to detect the DDoS, then it needs to mark the links under attack as down, so that the routing protocol (which has to be BGP as you are interfacing to multiple ISPs), can advertise the new routes.
These routes then have to propagate out throughout the Net, across multiple ISPs, via BGP, until they reach the ISPs of (non-attackers) who are trying to reach your site - these ISPs' routers will then start sending packets via the new routes. The tough part is making sure the route advertisements don't reach the DDoSing hosts - if they do, you have just moved the DDoS attacks onto a new link!
The IDS actually has to analyse the origin of the DDoS attacks (which may mean cooperating with upstream ISPs, since the source addresses will be forged if the attackers have any sense), working out which ISP is hosting the attacking hosts, and then make sure that the route advertisements don't get sent to that ISP. While BGP is very powerful, I'm not sure if it can do this - ask a BGP guru... In any case, if the DDoS attackers are smart enough to subvert hosts in tens of major ISPs, there is no way you can use this 'change route' approach to combat them, without cutting off many legitimate users of your site who are also getting on the Net via those ISPs.
Before DDoS, this approach would have worked, i.e. where there was a single host attacking you - but it's now not sophisticated enough. It may still help in some DDoS attacks, it just depends on luck and the skill of the attackers.
Combatting DDoS is a fundamentally hard problem. The best single mechanism to reduce DDoSes and make them easier to track is RFC 2267 (see www.faqs.org for a copy), which prevents people from injecting packets with forged source addresses (actually they can forge their host address but can't claim to be from a different network). This makes it much easier to directly contact the ISP whose customer or web hosts have been compromised and get them to put in filters blocking the attack.
Without source address spoofing prevention, you have to have a laborious process of going from network operations centre (NOC) to NOC for each ISP back up the chain, getting them to put on traffic analysis tools to see where the traffic is coming from.
Not as dangerous as they sound. (Score:3)
In short, if Sandia has remotely competent people, these agents are going to have strict limitations on their capabilities. Are they completely immune to attack? No. As Bruce Schneier has taught us, this only reduces risk. Still, if you add a requirement for agents to monitor each other, a human would have to be damn good to compromise a sufficient agent population. (Of course, this means that we may be headed for a future of eternal agent war. Might be cool. Want to prove open source? Make Tux2.0 the agent that can kick the crap out of any other agent.)
What's the point (Score:1)
It will tighten security when attacked. Well why the heck isn't security tight anyway?! Viruses aren't a problem when you have an OS with permissions.
Port scans aren't really attacks, only tell you what's on, and firewalls take care of this.
DoS attacks, and this is my favorite... they will shut down the port when they get a DoS attack! Well, that to me sounds like DoS!!!
So to protect ourselves from a DoS, we're going to turn off all the services. Please excuse me while I shoot myself in the head, I think I just caught a cold.
Even if it just blocks that one machine, like PortSentry, how the heck does it _really_ know the offending machine? That can be spoofed. Either way, it's just denying service to that machine, making the whole thing just a little more useless.
- Serge Wroclawski
How does one guarantee computational integrity (Score:2)
With the rather sweeping powers that are assigned to these bots, it is rather easy to imagine a time where the behavioural complexity of the bots overcomes the ability of human analysts to distinguish between programmed-for behaviour and errant behaviour. Unless, that is, there is a theoretical foundation for bot algorithms that can guarantee that the operations done by the bots will not corrupt computational integrity.
For example, in the worst case scenario, let us assume that the bots are written with behaviour so complex that they become self-modifiable and have a goal of survival. Hypothetically, one of the things the bot could do is to ensure its survival by (1) propagating itself over the network (i.e. virus like behaviour) and (2) disabling all the sysadmin tools that could be used to wipe out the bot by a sysadmin.
Admittedly, this is kind of a sci-fi like behaviour, but then, WAP devices could have fit in very well into 1930's sci-fi. Are there any theoretical results that indicate that such a scenario is not possible? If it is possible, is there a well-defined set of bot operations that is "safe" to code for when making a bot?
Re:VISA does an analogous thing (Score:1)
Maybe a bad idea - example (Score:1)
Now let's ponder this. You ever see people riding in netsplits? I've seen all too many irc scripts interpret a ride in on a netsplit as an "attack" and HUGE chain reactions of mode changes occur between these scripts. It quickly turns to hell.
Whoop-dee-do (Score:3)
port scans: it detects port scans. Most firewalls do this. Theirs just detects ones that take place over a longer period, too.
faint probes: isn't this pretty much the same as above? So it detects "stealth" scans, etc. A lot of firewalls do this.
trojan horses: it recognizes "patterns" indicative of trojan horses. Tripwire, anyone?
denial of service attacks: there's only so much you can do without changing the upstream routing hardware/logic, especially against DDoS or DoS from a source with higher bandwidth (wanna bet Sandia has a really fat pipe, though?)
security functions are integrated with ordinary everyday network use: email and web browsing are integrated into the security agent? How does that work? All I can think of is global security settings. Kinda nice, but is that really necessary if you're not running buggy MS junk?
'live' programs such as the I-Love-You virus are prohibited: this is a problem of stupid users and really bad design. Untrusted scripts/executables shouldn't run automatically, and user education is the most important part of any security system.
Really, this just sounds like a souped-up firewall + Tripwire. Nothing too revolutionary. Wanna bet that a properly-configured OpenBSD box could have held off those four script kiddies (err, "experienced hackers") for 16 hours, too?
Sorry for being so bloody sarcastic, but this just sounds like the kind of marketroid detail-free crap that ZDNet usually turns out.
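For what it's worth, the one genuinely non-trivial item in that list - detecting scans "that take place over a longer period" - is easy to illustrate. Track distinct ports probed per source inside a long sliding window instead of per second. The window and threshold below are made up:

```python
# Toy slow-scan detector: a per-second rule misses one probe every two
# minutes, but a one-hour window of distinct ports catches it.
from collections import defaultdict

WINDOW = 3600.0      # seconds - long enough to catch patient scanners
THRESHOLD = 20       # distinct ports before we call it a scan

probes = defaultdict(list)  # src_ip -> [(timestamp, port), ...]

def record_probe(src_ip, port, now):
    """Record one probe; return True if this source now looks like a scan."""
    probes[src_ip].append((now, port))
    # Drop events older than the sliding window.
    probes[src_ip] = [(t, p) for (t, p) in probes[src_ip] if now - t <= WINDOW]
    distinct = {p for (_, p) in probes[src_ip]}
    return len(distinct) >= THRESHOLD

# One probe every two minutes still trips the detector within the hour.
flagged = False
for i in range(25):
    flagged = record_probe("198.51.100.9", 1000 + i, i * 120.0)
print(flagged)  # True
```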
From the article... (Score:1)
This sounds like a really clever idea. Until they started talking about remote-controlled jet fighters. No one can be taken seriously after they start talking about remote-controlled jet fighters being piloted by swarms of autonomous programs. :)
Human immune system (Score:3)
this is sweet (Score:3)
This deals with such a wide array of computer sabotage that it's utterly amazing. Everything from breakins to virii to DDOS's can be successfully combated by this. It's exactly what the net needs.
What would really be cool of course if the source was released(drool). But maybe that will happen since from the article it sounds like they want to see their program widely distributed.
And then (Score:1)
Aren't you forgetting the "Hummer Project"? (Score:5)
There are in fact two noticeable examples of distributed network monitoring/intrusion detection tools out there already that sound very similar to Sandia's new tool: the HummingBird System and MOM.
The Hummer Project [uidaho.edu] led by Dr. Deborah Frincke has been around since early 1998 and their main project, the HummingBird System, is now in version 3.4. It is a complex toolkit that gives an administrator the power to distribute security and intrusion detection information between several hosts (including Solaris and NT machines as well as Linux) in which multiple attackers and targets are mixed and matched.
The other example I know of is MOM [wisc.edu], which unfortunately has been out of further development for over a year now.
The main similarity between the two's functionality is that they both have:
Keep up the great work.
Ummm... DDOS defense ? (Score:1)
Hmmm... would that be no effect, besides shutting down port 80, into which the attacks were coming? Are they serious that this would keep the website up, with no effect?
A couple of thoughts (Score:5)
Once this becomes more common and something people are familiar with, attacks against it will get easier. I would imagine that in the real world people will do like they always do and shut off many of the security features in order to make their lives more convenient.
From the article. A consumer release is at least three years away as Sandia says the agent must be "trained to protect a wider variety of services" before it can be of much use to the average household. One suspects it also needs to be dumbed down slightly so that it is not quite as clever as the military-grade version.
Besides, it's like an arms race with both sides forever increasing the sophistication of their weapons. "What kind of attacks will these be? Well if these agents are as good as Sandia says you can bet it won't be long before the bad guys get some bots of their own and start using them against governments, corporations and the general public."
I thought the opening comment "A cool-sounding project, to be sure -- but how much control is too much to cede to the intelligent agents?" was a little paranoid until I read the end of the article. In time the agents may graduate from patrol to control. Intelligent agents would be ideal for the control of interplanetary robot swarm missions while at the same time protecting them from long distance hackers or practical jokers. Closer to home micro-satellite swarms or perhaps even remote-controlled jet fighters could be computer-coordinated with agent assistance. Come on, doesn't that sound a little like the start of a William Gibson book?
Could they be set up..... (Score:1)
Shadowrun (Score:1)
This is probably the first step towards black ice.
How can that work? (Score:1)
Let's have the dogs protect the hens from the foxes.
As if the typical users are going to be able to tell the difference between dogs and foxes.
Judging from these things - one is just _less_ likely to screw up your system.
We don't need more "automation" for this thank you. Software is still STUPID. Most users are still IGNORANT. So all we need is something that won't change no matter what you chuck at it.
Linux doesn't protect users from themselves either. What might work would be an O/S which defaults to TEST/RUNSAFE instead of RUNASUSER. That way the program has only a safe subset of the user's rights - it can't alter/delete stuff it didn't create without permission from the user.
You just set up profiles for types of programs - "word processor", "network diagnostic tool", "browser" - and if the programs don't behave as expected, you get an error.
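The profile idea above is basically a per-program action whitelist. A minimal sketch, with profiles and action names invented for illustration:

```python
# Each program class gets a whitelist of permitted actions; anything
# outside it raises an error for the user to approve or deny.
PROFILES = {
    "word processor": {"read_own_files", "write_own_files", "print"},
    "browser":        {"network_connect", "write_cache"},
}

class PolicyViolation(Exception):
    pass

def check(profile: str, action: str):
    """Raise if a program steps outside its declared profile."""
    allowed = PROFILES.get(profile, set())
    if action not in allowed:
        raise PolicyViolation(f"{profile!r} tried {action!r} - ask the user")

check("browser", "network_connect")             # expected behaviour: fine
try:
    check("word processor", "network_connect")  # a macro virus phoning home
except PolicyViolation as e:
    print(e)
```

This is the same idea that would have stopped ILOVEYOU: a mail-handling profile has no business rewriting every file on the disk.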
Cheerio,
Link.
Re:VISA does an analogous thing (Score:1)
On the other hand, my father had his card cloned in LA a couple of years ago. The duplicate card spent a happy three months buying fuel at the same gas station, and that bank didn't notice a thing - even though the same card was also being used back home in the UK at the same time. Then the bank refused to cancel the transactions, since the card hadn't been stolen and the protection plan only covered card theft. Sometimes, the cards themselves have a higher IQ than the bank that issues them :-(
Re:Sign me up (Score:1)
SMACK!
and once again, I say...
SMACK!
Re:More Bitching About "Cracking" (Score:1)
i could gaze deeply into her eyes and say, "hey baby, let us go to quick trip and feast upon a vegetarian burrito." then i would take her back to my place and look longingly at her. then i would turn her to stone!
What do I do, when it seems I relate to Judas more than You?
Re:Security is getting out of hand (Score:1)
Think before you moderate...
The best defense is a good offense (Score:2)
No thanks. I'd prefer a more offensive response, like having a library of known exploits, port-scanning the attacking host, determining its host type, and launching an exploit right back as an attempt to shut the bitches down or DOS them.
Yeah, there are some ethical issues, but if someone enters my house and I kill them in response, I've still killed someone. The difference is that this is in self-defense.
I'm just thinking of automating what is a normal human response. Often, the only time people learn their box has been r00ted is when the intruders launch attacks from it on other boxes, the foreign admin notifies the cracked box's WHOIS contact, and they go find it and shut it down.
I'm also thinking of this from the other side too. If someone r00ts a box I have control of, I want it shut down asap to contain the damage. The sooner the better, no matter what that particular box's function is or how important it is. The sooner it is shut down, either by myself or by a similar self-defense response, the better for ME. A compromised box sitting around undetected is fertile ground for sniffing packets and attacking other inside boxes.
In almost all the cases of compromise I've seen, the attacking box is itself a compromised box. Therefore, by definition, it has an available remote exploit, can be compromised in turn, and then have "init 0" sent to it (if it's a *nix box) to shut it down.
I know, loads of dangers and I can think of a lot of them. Stuff like this can get out of hand, but so can the "agents" in the article mentioned here. Once someone discovers how an agent works, it can be turned against itself.
But when it comes to automated attack responses, I like fantasizing about my scenario better than a "pussy" defensive response! :-)
"What, you looking at me muthafucker? Prepare to die bitch!"
Re:Advantages? (Score:1)
A partially successful attack on machine A (non-root access), or even an attempted attack, may cause the machine to close the port
I still don't see how this helps against DDoS. After all, suppose your purpose *is* to close the port?
One thing the source article seemed to hint at but didn't quite say was that it somehow rerouted the traffic dynamically. Another nice touch is the idea of a network learning what "normal" traffic is, but then suppose you're a weirdo and constantly do bizarre stuff with your network?
It would be really annoying to be attacked by your own bots every time you tried doing something new.
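The "learning what normal traffic is" idea the parent mentions can be sketched as a running baseline: record requests per second, then flag anything far outside the observed mean. This is a toy sketch with an arbitrary threshold, not how Sandia's agents actually work; the class and method names are invented. It also shows exactly the weirdo problem above: do something genuinely new and your own baseline flags you.

```python
# Toy anomaly detector: learn a baseline of requests/sec, flag outliers.
# Hypothetical sketch only; threshold and names are made up.
import statistics

class TrafficBaseline:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.samples = []              # observed requests/sec
        self.threshold = threshold_sigmas

    def observe(self, req_per_sec: float) -> None:
        self.samples.append(req_per_sec)

    def is_anomalous(self, req_per_sec: float) -> bool:
        if len(self.samples) < 2:
            return False               # not enough history to judge anything
        mean = statistics.mean(self.samples)
        stdev = statistics.stdev(self.samples) or 1e-9
        return abs(req_per_sec - mean) / stdev > self.threshold

baseline = TrafficBaseline()
for rate in [100, 110, 95, 105, 102, 98]:   # a quiet week of normal traffic
    baseline.observe(rate)

assert not baseline.is_anomalous(108)   # within normal variation
assert baseline.is_anomalous(5000)      # a sudden flood stands out
```

Note that a legitimate flash crowd looks exactly like the flood here, which is why "just close the port when traffic is abnormal" plays straight into a DDoS attacker's hands.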
I stopped reading it at around the phrase "interplanetary robot swarm" because it just started to seem like a tremendous crock of shit at that point.
I know how it works (Score:1)
------------------
What is a pattern? (Score:3)
By the time they are five years old, most children can recognize digits and letters ... We take this ability for granted until we face the task of teaching a machine how to do the same ... In spite of almost 50 years of research, design of a general purpose machine pattern recognizer remains an elusive goal.
I don't think this Sandia project will work as intended. Until we build a computer with processing power equivalent to our brains' trillion synapses, a human will be able to beat a computer in many ways.
However, that bot could have uses other than the acknowledged ones. Censorship, for instance. Security must be absolutely perfect, while censorship can tolerate holes - it's much worse to have a cracker penetrate the Sandia labs' atomic research files than to have a public library misclassify a pr0n website.
Ice (Score:1)
BotWars on the Internet? (Score:3)
The Five Musketeers bot looked for copies of itself in memory, and if it didn't find them, it created up to four copies. Each copy then kept checking on the health of the other four, and if one copy became corrupted it was rewritten. Thus, the Five Musketeers were cooperatively immortal, and payloads could be added to them to spray memory or mount any other offensive attack you wished.
Sound anything like Sandia's bot yet?
I'm not sure I like the idea of the internet turning into a playfield for agents like those in BotWars. It could rapidly turn into a wasteland, with all the bandwidth going to automatic attacks and defenses.
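The Five Musketeers scheme described above is easy to model: several copies hold the same payload, each round they compare hashes, and any copy that no longer matches the majority gets rewritten from a healthy peer. This is a single-process toy, of course, not real self-healing memory, and every name in it is invented.

```python
# Toy model of the "Five Musketeers" mutual-watchdog scheme: copies
# compare notes and rewrite any member that has become corrupted.
import hashlib
from collections import Counter

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def heal(copies: list) -> list:
    """Treat the majority version as correct; rewrite the rest from it."""
    majority_hash, _ = Counter(digest(c) for c in copies).most_common(1)[0]
    good = next(c for c in copies if digest(c) == majority_hash)
    return [c if digest(c) == majority_hash else bytes(good) for c in copies]

# Five identical copies; corrupt one of them, then let the others fix it.
payload = b"musketeer payload"
copies = [bytes(payload) for _ in range(5)]
copies[2] = b"corrupted!"

copies = heal(copies)
assert all(c == payload for c in copies)   # the damaged copy was rewritten
```

The weakness is the same one the parent poster worries about: the scheme only defends against corruption, so an attacker who can rewrite a *majority* of copies at once flips the watchdogs into guarding the attacker's payload instead.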
VISA does an analogous thing (Score:5)
So, probably, most of your spending is in and around your hometown. Once in a while you make a trip to the big city. You don't seem to use it a lot at the jeweller's -- Christmas is the exception.
Hmmm.. what's this? You're buying a $3000 necklace at Goldstein's Jewellers in Vancouver, BC? Seems unlikely you'd be getting a videotape from Roger's on Tuesday in Poughkeepsie, and then buying diamonds in Vancouver on Wednesday... let's deny the transaction, or get the clerk to confirm ID.
Now, this is hearsay. I can't say I've *read* a report on this, but I've heard several people tell of it. And it doesn't seem such a stretch, though I've never actually heard of someone being denied an unlikely purchase.
Anyway, long and short of it is that it's not a real stretch to imagine this being a powerful tool for networks. Monitor the traffic and perform analysis: start figuring out what's normal and what's not. And alert someone when abnormal things begin to happen.
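The card-fraud heuristic described above boils down to scoring a new transaction against the cardholder's history: flag it when both the location and the purchase size break the usual pattern. A rough sketch, with all names and thresholds invented for illustration:

```python
# Hypothetical sketch of the VISA-style anomaly check described above.
from collections import Counter

def looks_suspicious(history, txn, max_amount_factor=5.0):
    """history: list of (city, amount); txn: a single (city, amount)."""
    city, amount = txn
    usual_cities = Counter(c for c, _ in history)
    typical_max = max(a for _, a in history)
    unfamiliar_city = city not in usual_cities
    outsized_amount = amount > max_amount_factor * typical_max
    # Flag only when both the place and the size of the purchase are odd,
    # so an ordinary out-of-town coffee doesn't trip the alarm.
    return unfamiliar_city and outsized_amount

history = [("Poughkeepsie", 40), ("Poughkeepsie", 25), ("New York", 120)]
assert not looks_suspicious(history, ("Poughkeepsie", 60))   # business as usual
assert looks_suspicious(history, ("Vancouver", 3000))        # $3000 necklace
```

Requiring both signals at once is also why the scheme has holes, as the cloned-card story upthread shows: steady gas purchases at one station look perfectly "normal" to a checker like this.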
Sounds cool. I'm for it!
--
Same thing happened to me (Score:1)
Re:Is it just me, or... (Score:2)
Re:Is it just me, or... (Score:1)