Security

Pushback against DDOS Attacks 159

Posted by CmdrTaco
from the build-a-better-asshole-trap dept.
Huusker writes "Steven Bellovin and others at AT&T Research Labs and ICIR have come up with a mechanism to stop DDOS attacks. The idea is called Pushback. When the routers get flooded, they consult a Unix daemon (/etc/pushbackd) to determine if they are being DDOS'ed. The routers propagate the quench packets back to the sources. The policy and propagation are separate, allowing hardware vendors to concentrate on the quench protocol while the white hats invent ever more clever DDOS detection filters for /etc/pushbackd. The authors of the paper have an initial implementation on FreeBSD."
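The control loop the summary describes — a router noticing it is flooded, rate-limiting the offending aggregate, and propagating the same quench request to its upstream neighbors — can be sketched roughly like this. This is a toy illustration only; the class and field names are made up and this is not the paper's FreeBSD implementation:

```python
# Toy sketch of the pushback control loop described in the summary.
# All names (Router, capacity_pps, etc.) are hypothetical illustrations.

class Router:
    def __init__(self, name, capacity_pps, upstreams=()):
        self.name = name
        self.capacity_pps = capacity_pps
        self.upstreams = list(upstreams)
        self.limits = {}  # aggregate -> fraction of its traffic to drop

    def observe(self, traffic):
        """traffic: dict mapping aggregate (e.g. dest prefix) -> packets/sec.
        If total load exceeds capacity, rate-limit the heaviest aggregate
        and push the same request to upstream routers."""
        total = sum(traffic.values())
        if total <= self.capacity_pps:
            return
        # Pick the aggregate responsible for most of the overload.
        heavy = max(traffic, key=traffic.get)
        excess = total - self.capacity_pps
        drop_frac = min(1.0, excess / traffic[heavy])
        self.quench(heavy, drop_frac)

    def quench(self, aggregate, drop_frac):
        self.limits[aggregate] = max(self.limits.get(aggregate, 0.0), drop_frac)
        # Propagate the rate-limit request toward the traffic's sources.
        for up in self.upstreams:
            up.quench(aggregate, drop_frac)

edge = Router("edge", capacity_pps=1000)
victim_side = Router("victim-side", capacity_pps=1000, upstreams=[edge])
victim_side.observe({"10.0.0.1/32": 5000, "10.0.0.2/32": 100})
print(victim_side.limits, edge.limits)
```

The key property — mirrored in the real proposal — is that the limit ends up installed upstream too, so the flood is shed before it reaches the congested link.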
This discussion has been archived. No new comments can be posted.

  • Problem? (Score:2, Insightful)

    by prichardson (603676)
    Unfortunately the DDOS'ers will simply find a new way to flood a system. The best way to defend against this is to have a backup plan for when your servers get hosed.
    • Re:Problem? (Score:3, Interesting)

      by thefalconer (569726)
      Yes, but this would also stop most typical script kiddies. Those are the most malicious ones. Lack of maturity combined with lots of "god complex" tends to cause them to do far more damage than a typical hacker/ddos'er. So if you shut them down or reverse dos them, then they get a taste of their own medicine and you get to laugh while they're trying to figure out why their system just took a dive. :)
      • Ah yes, but I was thinking far enough into the future, when this as-yet-uninvented attack is just as commonplace as the DDoS is now. I'm sure the DDoS was a truly godlike accomplishment in its day.
      • So if you shut them down or reverse dos them, then they get a taste of their own medicine and you get to laugh while they're trying to figure out why their system just took a dive. :)

        I don't think you get it (or I misread the article). The first D in DDoS stands for distributed. In a distributed denial of service attack many, many clients all attack a target at once. You can assume that the machine of the guy that directs the attack is not among the attacking machines.

        A reverse DoS would only annoy some people that probably don't even know that their machine is being used in an attack. A better response (if it were possible) would be to send them instructions on securing their machines.

        • I agree that this doesn't, on the surface, seem to be a good idea. My question is this: don't some DDoS attacks spoof the source IP anyway? I believe that this is easily done, as long as you don't care about getting a return value. Being attacked in this manner would cause the victim to "push back" against another unsuspecting victim...

          Or am I missing something (or just wrong)?
          • The way I see it is that idea is to throttle the datastream going to the attacked IP, at all the routers up to the originating IP(s), not to bounce the packets back to the originating IP(s). It's the only thing that would make sense. If so, the choice of words in the writeup is a bit confusing. But then again, I'm easily confused :)
    • Yes, the typical arms race situation applies, but the defenders now have some good weapons at their disposal. If the methods that implement the quench feature are robust and hard to subvert, then it is just the server that needs to be updated. Many techniques could be used to identify the sources of the attacks, including some manual help from the system operators. Over time, the daemon could get very good at recognizing attacks based on heuristics, so changes to the flooding packets or patterns might not help get around the filtering.
  • by Anonymous Coward on Sunday October 27, 2002 @09:05AM (#4541085)
    If pushback is subverted, couldnt it function like an inverse DOD tool?
    • Department of Defense should use this?
      Why? .K
    • by Anonymous Coward
      No, because it means "stop sending." To the network this stops the flood of packets.

      And as the stop makes its way up the network to the source, more and more bandwidth becomes available to the network, until hopefully even the offending machines will ack and stop sending.

      But even if the offending machine won't, the routers
      in between the target and the source will.

      CCNA

      • What he means is that pushback is a form of muting a computer. Pushback just sends the filters upstream to stop the saturation of the line. If this mechanism is vulnerable in some form that is universal, or at least common, then you could subvert the filtering mechanism to mute a target upstream. Don't like GRC? Hack a pushback-capable router and tell it to drop all packets sent from GRC.
      • No because it means stop sending. To the network this stops the flood of packets.

        Yes but if the system can be fooled into quenching legitimate requests then service has still been denied. I mean, to a user, does it matter if you can't get to the server because it's overloaded, or you can't get to the server because the routers are telling your machine to stop sending? Either way, all you know is that the blasted server is down.

  • If a large-enough site was getting DDoS'd (Yahoo!, Microsoft, universities, etc.), wouldn't there be someone on call 24/7 who could in a matter of minutes sort out what the similarities in the DDoS are and then manually get a RegEx to sort them all out?

    I don't have much knowledge of the subject, but that seems like an easy way to deal with it.

    • Nice idea, but regexes have waaaay too high an overhead to filter the amount of traffic even a small DDoS produces - you'd need some kind of omnipotent distributed uberBeowulf cluster (or a million monkeys watching a zillion blinkenlights)
    • Re:Manual RegEx? (Score:5, Insightful)

      by Bill Wong (583178) <<moc.llew> <ta> <wcb>> on Sunday October 27, 2002 @09:21AM (#4541141) Homepage
      DDoS is usually bandwidth consumption...
      Even if you drop 100% of the evil packets...
      Your pipe is still filled...

      And for the amount of traffic needed to actually DDoS a large-enough site like Yahoo (4 Gbps last time around?), RegExs wouldn't be helpful,
      since the sheer amount of CPU required to process *every*single*packet*that*passes*through* is way too much...
      • Yes, however if you do propagate the quench packets back towards the source, the idea is that it's no longer your pipe that's being filled. This technique seems pretty good actually... imagining a large number of skript kidz filling up my pipe (dodgy image there ;) but I digress)... by 'quenching' each one of these at their ISP's router, it means my pipe is empty, theirs is full, and all they have succeeded in doing is DoSing themselves :D
  • sure (Score:1, Insightful)

    by bicho (144895)
    The best defense is a good offense, so if they saturate your A/B/C network, then saturating the Internet is the obvious right solution.

    Of course it's not; it would do much more harm to many more innocent people.

    The right solution is to educate people so that their PCs don't get infected with worms and the like, so they don't unknowingly contribute to DDOS.

    Of course, the right way is almost always the hard way, and most people don't want to care about ignorant people, so... we're in a vicious cycle here, just as in anything else.
    • Re:sure (Score:4, Insightful)

      by garcia (6573) on Sunday October 27, 2002 @09:27AM (#4541160)
      Educate people who are getting infected? Come on. You're not serious...

      These people think that when they install virus scan software [slashdot.org] they are safe. I recently re-installed Windows on my gf's computer. She had V-Shield on there from 1999. She had no idea that she would need to update it.

      At least my roommate, my parents, and my gf know (from me) not to open attachments. But educate a WIDE group of people? That's just not going to happen and you know it.

      • She had V-Shield on there from 1999. She had no idea that she would need to update it.

        Newer products do solve this problem without customer education. My McAfee VirusScan checks for updates daily and generally downloads new definitions once or twice a week. I don't have to take the initiative to update it or buy new software.
      • Re:sure (Score:1, Funny)

        by Rhinobird (151521)
        how come your grandfather(gf) is a girl?
    • Re:sure (Score:5, Insightful)

      by Anonymous Coward on Sunday October 27, 2002 @09:37AM (#4541179)
      That has to be one of the least constructive, head-in-the-sand arguments I've ever read. Did you read the article?

      The technique is about making the internet move the point of dropping the flood packets, BACK closer to the source. That is, remove the flood from the internet itself, and contain it into the localised areas.

      Instead of expecting the impossible as you suggest, (which is joe-average running a secure system), finally someone is thinking about securing the internet in general from unsecured systems, which is a pragmatic approach which may well protect the internet in general from many unforeseen DDOS attacks, as well as the ones we know about.
      • Re:sure (Score:2, Insightful)

        by Anonymous Coward
        So what kind of authentication is taking place between routers and the pushback daemon? Why couldn't I just create a denial of service by claiming that someone is denying me, therefore causing them to get shut down?
    • Re:sure (Score:4, Informative)

      by Shishak (12540) on Sunday October 27, 2002 @09:40AM (#4541185) Homepage
      Not exactly...

      If every network provider ran this type of system on their edge routers, and had all the edge routers communicate with distributed servers, then when you are being attacked you simply announce the offending IPs involved in the attack. That announcement gets propagated around all the servers, which tell the edge devices to filter the traffic. It isn't a reverse flood. It is a way of telling the router closest to the source to start dropping packets.

      Forged source IP's should be dropped at the edge already.

      What we need is a protocol for sending dynamic filters to cisco routers. I would like to have input/output lists put on an interface that I can later build dynamically. I do it now with my Linux firewalls but it would be nice if I could drop the packets on the far side of my expensive link.
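The announce-and-distribute flow described above might look roughly like this: a victim announces attacker IPs, a distribution server fans the announcement out, and each edge router installs a drop filter. Everything here (class names, the fan-out shape) is an illustrative assumption, not an existing protocol:

```python
# Hypothetical sketch of victim-announced filters pushed to edge routers.

class EdgeRouter:
    def __init__(self):
        self.drop_filter = set()  # source IPs to drop

    def install_filter(self, ip):
        self.drop_filter.add(ip)

    def forward(self, src_ip, payload):
        """Drop filtered sources; forward everything else."""
        return None if src_ip in self.drop_filter else payload

class FilterServer:
    """Distributes a victim's announcement to all participating edges."""
    def __init__(self, edges):
        self.edges = edges

    def announce(self, attacker_ips):
        for edge in self.edges:
            for ip in attacker_ips:
                edge.install_filter(ip)

edges = [EdgeRouter(), EdgeRouter()]
server = FilterServer(edges)
server.announce({"203.0.113.7"})
print(edges[0].forward("203.0.113.7", b"flood"))   # dropped -> None
print(edges[0].forward("198.51.100.2", b"hello"))  # forwarded
```

As several replies point out, the hard part in practice is authenticating the announcement so this can't itself be abused to cut someone off.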
      • A couple other people have said this already, but if I announce an "attack", who says that the attack is really taking place? Why can't I just tell the routers to quench some random "source" and cause a reverse DOS (wouldn't even need to be distributed)?
      • by Phroggy (441)
        Forged source IP's should be dropped at the edge already.

        Amazing how many ISPs don't do this...
    • Re:sure (Score:4, Insightful)

      by Doug Neal (195160) on Sunday October 27, 2002 @09:40AM (#4541188)
      How is this design proposing to saturate the Internet?

      It involves sending a short message back to the routers that are routing the packets to you asking them to "quench" - i.e. filter out and don't route - the offending upstream sources.

      The message could propagate as far back as the individual ISPs from which the packets are originating, so that each participant in the attack is cut off.

      At least that's what I'm getting from the summary of the story, I could be completely wrong.
      • Re:sure (Score:3, Insightful)

        Not all denial of service is saturation.
        What happens when I spoof that you just DoSed your favorite website? You get cut off from it, and denied service.

        Although as far as taking advantage of this sort of thing goes, I'd much rather be able to use an ICMP Redirect to make a DoSnet packet its owner.
        • If the router of your ISP would drop every packet that doesn't come from your IP address, I should be safe.
          • Re:sure (Score:3, Insightful)

            Yes, because no one is so unethical that they would specify your ip address as the source instead of theirs.

            The only thing to slow this down is checking routes, but even that can be gotten around (they just have to be on your network; that's not hard for colos, shells, and most other providers)
  • My take (Score:5, Interesting)

    by bobetov (448774) on Sunday October 27, 2002 @09:21AM (#4541144) Homepage
    Sounds like a pretty v1.0 idea at this stage, but I'm psyched people are spending brain cycles working on DDoS and flash-flood solutions, since they're both problems that are only going to get worse.

    (Gotta love the Slashdot effect getting named explicitly, eh? Nice to be part of the problem for a change... hehe.)

    Seems to me the tricky part here is defining the aggregates. After reading the article, it isn't *really* a way to save your site from going down due to overload, more a way to prevent others sharing your pipe/routers from going down with you. ;-)

    Which is a good goal in itself. It seems like a real tough thing to determine which of the millions of hits to www.yahoo.com (for ex.) are valid users, and which are DDoS bots. So both get restricted (net result: bots win), but the guy in the cage next to yahoo stays up.
    • Re:My take (Score:5, Interesting)

      by Subcarrier (262294) on Sunday October 27, 2002 @10:01AM (#4541245)
      Sounds like a pretty v1.0 idea at this stage

      I have to agree. They leave a lot of issues for further study. One big problem seems to be that gigabit backbone routers don't really have time to do any of this stuff. It's not much use if the backplane packet rate drops to one quarter because of having to detect and deal with flow aggregates.
      • Re:My take (Score:4, Informative)

        by Cato (8296) on Sunday October 27, 2002 @11:11AM (#4541622)
        That's not true of all such routers - as long as the number of aggregates to be filtered is fairly low, it shouldn't have too much impact. Most of the filtering should be on the routers at the edge of a given provider's network, which have less work to do than the true core routers - this is similar to the DiffServ QoS model except that the core routers don't need to do anything at all, since traffic is limited on the edges.

        Juniper routers do this sort of filtering and policing in hardware, and can also generate traffic stats efficiently. Other vendors have similar features - Cisco 7500 routers can have multiple VIP processors, distributing the work down to the interface cards.

        The main constraint is that you need new software written, installed and debugged in these routers, which will take time and require an agreed standard across router vendors. In the short term, it's easier to use existing features such as NetFlow/cflowd for traffic stats, feeding into an existing DDoS analysis tool (e.g. the Arbor Networks ones), which then tells a router provisioning tool to reconfigure the routers. This would not be as slick or dynamic as the proposed scheme, but could be done today. It would also make it possible to have a human in the loop initially, reviewing suggested changes. This would work OK as long as management and routing traffic are assigned a separate queue on each router interface, guaranteeing enough bandwidth to make these changes in the face of a DDoS attack (something that the ACC approach would also need).
        • If the hardware limits the number of aggregates that a core router can handle, it's fairly easy for an attacker to saturate the hardware.

          Pushing the filtering all the way back to the edge nodes of the source networks may also be difficult, as detecting the aggregates is probably a lot easier than detecting individual malicious sources (you would like to leave the legitimate sources unblocked). Ultimately this would be the way to go, though. Applying ingress filtering universally to combat source address spoofing would be a good start.
          • by Cato (8296)
            A truly sophisticated attacker could probably hit the number of aggregates limit, but that's a second order issue and is unlikely to happen with current tools (as long as source addresses are not used in filters, since they are usually faked).

            My suggestion for using current tools would work only in a single provider - cooperation between providers would require an IETF standard, perhaps using BGP extensions to carry requests for filtering/limiting, or perhaps using a human-checkable format for these filtering rules.

            Whether in a single provider or not, routers are clearly less likely to melt if you can push the filtering/limiting upstream, reducing pressure on router and bandwidth resources by acting as close to the source of attack as possible.

            Universal ingress filtering should be mandatory - enterprises should demand it of providers, and vice versa. There is a good RFC discussing this, 2827 - see http://www.cis.ohio-state.edu/cgi-bin/rfc/rfc2827.html
  • by Anonymous Coward on Sunday October 27, 2002 @09:28AM (#4541163)
    Not all DDoS attacks are bandwidth based, they could be application level and targeted at all sorts of other resources.

    Some examples:

    SYN floods can exhaust incoming connection queues.

    DNS floods (asking a recursive nameserver a million questions, or even asking an authoritative nameserver a million questions).

    Too many HTTP requests to processor intensive dynamic content pages could deny service well before you are serving at your bw limit.

    The paper kept referring to the aggregate detection algorithm only coming into effect when the bandwidth limit is being exceeded .. it would be nice if these actions could be initiated in other situations also.

    Nevertheless, this is a promising initiative.

    --Iain
    • Well, if we can build a secure way of notifying the source network providers of an offending IP, then we can have that network provider block that IP from sending on the Internet. We can then set up our servers to tell us when they are being attacked/flooded/poked at. Our server or IDS can then notify our distributed attack manager, which can notify the source network's attack manager, which can notify the edge router to drop packets.

      It isn't all that complicated; it is a major pain to get every network admin and small ISP to implement something.

      The simple act of filtering all outbound packets to only allow your netblock would stop forged-IP attacks cold.
      • Well, if we can build a secure way of notifying the source network providers of an offending IP, then we can have that network provider block that IP from sending on the Internet.

        Wonderful. Then later we can expand the system to block at will anyone who says something we don't want to hear. We could even hook it into Microsoft Passport! It will be easy to silence people.
    • You don't generally need all that many machines to do SYN flooding or overload a DNS server.

      DDoS attacks are brute force by nature, designed to take down sections of the network by saturating the links.
    • These tend to all follow the pattern that you are exhausting the cpu of the target rather than its network pipe. Newer network protocols already have protection for this by making it more expensive for the client initiating the connection rather than the server receiving it. SCTP has this in place already with a crypto-based cookie puzzle to prevent SYN bombs (similar approach would work for dns too). The other question is when (or rather if) newer protocols like these will eventually replace TCP with all of its inherent problems or if the inertia (but everybody knows TCP...) of the current protocols will kill them off first.
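The "crypto-based cookie" idea mentioned above — making the client, not the server, bear the state cost of a handshake — can be sketched as a stateless MAC'd cookie, in the spirit of SYN cookies and SCTP's cookie exchange. This is an illustration of the general technique, not the actual SCTP handshake; the secret and time-window are made-up parameters:

```python
# Stateless handshake cookie sketch: the server encodes connection state
# into an HMAC instead of allocating memory, so a flood of bogus
# handshakes costs it almost nothing.

import hashlib
import hmac
import time

SECRET = b"rotate-me-periodically"  # hypothetical server secret

def make_cookie(client_ip, client_port, now=None):
    """Mint a cookie bound to the client's address and the current minute."""
    minute = int(now if now is not None else time.time()) // 60
    msg = f"{client_ip}:{client_port}:{minute}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def check_cookie(client_ip, client_port, cookie, now=None):
    """Accept cookies minted in the current or previous minute."""
    now = int(now if now is not None else time.time())
    return any(
        hmac.compare_digest(make_cookie(client_ip, client_port, now - 60 * k), cookie)
        for k in (0, 1)
    )

c = make_cookie("192.0.2.1", 4242, now=1000000)
print(check_cookie("192.0.2.1", 4242, c, now=1000000))  # True
print(check_cookie("192.0.2.9", 4242, c, now=1000000))  # False: wrong client
```

Only a client that actually received the cookie (i.e. didn't spoof its source address) can return it, which is what defuses the SYN-bomb style of resource exhaustion.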
    • Dont forget calling your ISP's fax machine with a roll of black paper taped into a loop. We sent a kid out to a "grocery store" that day- we had no idea what was for lunch.
    • I think the difference between the targets of the article, and the DoS situations you mention, is that the Internet as a whole is responsible for bandwidth based DoS attacks and it should act proactively to stop them. They're also far easier to detect.
    • Too many HTTP requests to processor intensive dynamic content pages could deny service well before you are serving at your bw limit.

      A webpage that I have written has suffered this problem on many of its mirrors...
      (Background: it's a very computationally intensive page designed to work out the outcome of battles in an online game called Planetarion.)

      Apart from upgrading the processor and imposing restrictions on the size of the computation that's requested, are there any other precautions I can take?
  • "quench" ? (Score:2, Informative)

    by Bowie J. Poag (16898)


    Sounds like the name of a sports drink targeted at uh....interior decorators.

    Shouldn't it be "squelch" ?

    Cheers,

  • Question.... (Score:3, Insightful)

    by jwilcox154 (469038) on Sunday October 27, 2002 @09:38AM (#4541182) Homepage Journal
    How does it prevent a Server from being Slashdotted?
    • Re:Question.... (Score:2, Interesting)

      by Big Mark (575945)
      Pushback will ensure that when the /. effect happens, the server isn't overloaded, by dropping connections en route to the server rather than at the server itself.

      I wonder what impact the pushback overhead will have when a server gets slashdotted, though. What if the pushback message gets dropped due to swamped routers?
  • This is worse (Score:4, Insightful)

    by greenrom (576281) on Sunday October 27, 2002 @09:41AM (#4541192)
    What the paper suggests is that if a router is getting way too many packets to a specific destination address, it will tell the routers upstream to throttle packets to that destination address (drop a certain percentage of them).

    How does this really help a DOS attack? The idea behind a DOS attack is to flood a server with so many packets that the server can't keep up and ends up dropping most of the packets. This paper does not provide a solution to this problem. It simply shifts where the packets are being dropped... at a router upstream instead of at the server or router at the edge of the network. The only advantage here is that other servers hanging off the router that aren't being DOSed will be unaffected.

    The suggested solution also opens up a potential security hole. If you gained access to a server, it might be possible to send a packet to routers upstream and tell them to throttle bandwidth. This could be a much more efficient way of doing a DOS attack. Now instead of multiple machines on fast connections, all you really need to DOS your favorite website is a 286 and a 300 baud modem.

    • Re:This is worse (Score:3, Insightful)

      by Anonymous Coward
      If you read the post it is clearly pointed out that the objective is to prevent the DoS from affecting other services carried on the same network link.

      There is no clear way to differentiate some forms of DDoS attacks from legitimate traffic or a traffic spike... so you have to concede that the attacker has won that battle and interrupted their targeted service; the next step needs to be harm minimisation.

      The pushback idea provides a generic method for notifying/instructing upstream carriers to drop a certain aggregate traffic flow and notify the destination of what effect that limiting is having, so they can determine when to resume normal operation.

      In the mean time though, you have prevented a DDoS that may be targeted at a single machine from affecting the entire network.

      --Iain
    • Re:This is worse (Score:2, Insightful)

      by dubious9 (580994)
      If you root a webserver chances are that you want people to see the changes that you make to it. Once you have control of the machine you can do much worse things than DOS it.

      Besides it is much harder to break into a well protected machine, than to break into a couple of thousand nearly unprotected ones.

    • Re:This is worse (Score:4, Interesting)

      by Anonymous Coward on Sunday October 27, 2002 @10:43AM (#4541474)
      No, it throttles packets based on whatever is common to a majority of the packets. So, if a website suddenly gets a huge number of requests for /index.html, it can throttle those and let requests for another page through unhindered. If a web server gets a huge number of identically formed packets, it can throttle those and let differently formed packets through unhindered.

      You are correct when you say it shifts the site where the packets are dropped. However, you miss the whole point. The site's router determines a pattern common to an attack, and tells the routers upstream the pattern. Those routers tell their upstream routers the pattern, etc. Alone, the site's router might be overloaded. The routers two levels upstream might all be just about overloaded, but still able to let through all non-attacking traffic. If these routers all begin throttling, the site's router will no longer be overloaded. All nonattacking packets will be let through unhindered. All attacking packets will be throttled severely. If the attack picks up and the second-level-upstream routers can't handle it, they will pushback to the third-level-upstream routers, etc.

      At least, that's how I understood it.
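The pattern-based throttling described in this comment — drop a fraction of packets matching the attack signature, pass everything else unhindered — can be sketched as a probabilistic filter. This is illustrative only; the paper identifies aggregates by its own means, and the request-path signature here is a made-up example:

```python
# Sketch of signature-based probabilistic throttling: drop `drop_frac`
# of packets matching the attack pattern, forward all others intact.

import random

def make_throttle(matches_attack, drop_frac, rng=random.random):
    """Return a filter: None means dropped, otherwise the packet passes."""
    def filt(packet):
        if matches_attack(packet) and rng() < drop_frac:
            return None   # part of the attack aggregate: throttled
        return packet     # non-matching traffic is unhindered
    return filt

# Hypothetical signature: everyone hammering /index.html.
filt = make_throttle(lambda p: p["path"] == "/index.html", drop_frac=0.9,
                     rng=lambda: 0.5)  # deterministic rng for the demo
print(filt({"path": "/index.html"}))  # dropped (0.5 < 0.9) -> None
print(filt({"path": "/other.html"}))  # forwarded unchanged
```

In the pushback scheme, it is this signature plus the drop fraction (not the raw packets) that gets handed to upstream routers when a level can no longer cope.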
  • Can this be right? (Score:4, Interesting)

    by rocjoe71 (545053) on Sunday October 27, 2002 @09:46AM (#4541202) Homepage
    This sounds like innovation and that just can't happen on non-M$ operating systems, can it?

    Back down to earth, it's mega-wicked when good ideas are developed in FreeBSD (or Linux). Developments like these come the closest to the original intents and purposes of open sourced OSes.

    • Troll much?

      Since when can you not write open and innovative software on a MS platform?

      People like you give idiot-yuppie-zealots a bad name.

      Tom
      • ...I was only kidding over MS' statements about their 'freedom to innovate' and how open-source is a 'threat to innovation'.

        It's Sunday morning! Don't be so serious over *everything*!

        People like you give knee-jerk-reactionaries a bad name.

  • While this idea is good... I often thought about the complexity of having software on every router, like ntop or such, set up with a tool like traceroute.


    What in the heck am I saying? Let's say you get a SYN with spoofed IPs (ask any IRCop how much fun that is); you could then trace back through every router that spoofed IP came from. I realize this would tax machines quite a bit in logging and whatnot.


    I don't think there will ever be a way to prevent any type of attack. I do think it's important to have a proper response plan.

    • Re:another idea... (Score:3, Interesting)

      by Shishak (12540)
      There is a perfect, 100% sure way of stopping spoofed IPs. It is very easy and non-resource-intensive, yet it is not being used by lazy network admins.

      On every edge router you simply need to put an access-list to drop all packets not coming from your netblocks.

      On edge routers going to customers, you drop incoming packets not coming from your customer-assigned IPs. Almost EVERY edge device supports this; most support dynamic filters with RADIUS requests. If you only allow your customers to send you data from their IP address, it is impossible for them to be part of a spoofed attack.
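The per-customer ingress ACL described above amounts to a one-line check: permit a packet only if its source address falls inside the netblock assigned to the interface it arrived on. A minimal sketch (the interface names and netblocks are hypothetical):

```python
# Sketch of per-customer ingress filtering: on each customer-facing
# interface, permit only sources inside that customer's assigned netblock.

import ipaddress

CUSTOMER_NETBLOCKS = {
    "cust-a": ipaddress.ip_network("198.51.100.0/24"),
    "cust-b": ipaddress.ip_network("203.0.113.0/24"),
}

def permit(interface, src_ip):
    """True only if src_ip belongs to the netblock assigned to interface."""
    block = CUSTOMER_NETBLOCKS.get(interface)
    return block is not None and ipaddress.ip_address(src_ip) in block

print(permit("cust-a", "198.51.100.17"))  # True: own address space
print(permit("cust-a", "203.0.113.5"))    # False: spoofing customer B
```

Applied universally (this is what RFC 2827 recommends), a customer's machine simply cannot emit packets claiming to be someone else.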

  • by Anonymous Coward
    Make ISPs liable for machines that they allow to connect that are periodically engaged in attempting to abuse other machines for longer than, say, 10 days.

    Give ISPs an incentive to detect forged packets, portscanning, and other common signs of compromised machines at the source. Get rid of zombies at the source. Then there wouldn't be the raw material for DDoS.

    In short keep machines from swinging their fists, rather than try to make the recipients more resistant to being hurt.
    • One way to help "keep machines from swinging their fists" is to quickly notify their ISP when they do start attacking. http://www.mynetwatchman.com/vision.htm [mynetwatchman.com] explains a system of currently 1478 people who submit their Firewall logs using an automated agent to a server which aggregates the data, backtraces the activity to its source, filters false alarms, and automatically sends escalation e-mails to responsible party, often the network abuse contact for the ISP which owns the netblock of the IP address.
  • by wfmcwalter (124904) on Sunday October 27, 2002 @10:06AM (#4541277) Homepage
    Perhaps someone more network-literate than I can answer this DDoS question, which has bothered me for some time.

    I believe most DDoS attacks have the following in common:

    1. DDoS zombies generally send packets with forged return addresses, as doing so greatly complicates attempts both to block packets and to track down individual zombies.
    2. Machines used for DDoS attacks are almost always either corporate PCs or home PCs connected by DSL/cable. These nodes are single-homed, and as such packets emanating from them have only one initial route to the internet.
    My question is this - why can't corporate IT people or their counterparts at ISPs reprogram their front-line routers (those that directly connect to individual end-user PCs) to block packets with forged return addresses? Forged addresses typically are either totally illegal or indicate a totally different net or subnet from the actual sender.

    I can't see any reason why this wouldn't be a good idea - there really isn't any reason for the type of machines mentioned to ever act as true IP routers (as opposed to NATs), and it doesn't seem like this would be either hard or burdensome for the first-line routers to do.

    Employing this would mean that DDoSers would be confined to forging return addresses within the zombies' own subnet, which would make both blocking and back-tracking much easier.

    It's plain that this isn't done, so there must be a good reason why people much more network savvy than I haven't implemented it - what is it?

    • Hmmm... good idea. I have had this idea with respect to e-mail servers and making sure each e-mail sent out had no forged information.

      Basically, I guess that this would require some sort of change to IP. One solution would be to send the "front line router" a connection packet, then have the router send you back a public key. Then the client encrypts his IP address (or some other unique piece of information (MAC?)) and sends this along with every outgoing message.

      The router could then maintain a lookup table with IP's and encrypted message to determine which ones to drop.

      You might have to double the number of front line routers to handle the overhead, but I guess this would help quite a number of security related questions.

      I realize there are probably a number of problems with this, as I am not a security guy, but are there any reasons this basic idea wouldn't work?
    • by swb (14022) on Sunday October 27, 2002 @11:05AM (#4541582)
      "Good" networks prevent forged packets by doing what you suggest, dropping packets with bogus source addresses at the edge of the network or at appropriate ingress points.

      I think the argument that is made for not doing this at a lot of ISPs is that with most Cisco routers it's expensive, as a lot of their routers can't fast-switch with ACLs applied; they process-switch, turning an adequate router into an inadequate packet-dropper.

      It can also be a PITA to maintain -- if you put it at the very edge, like on an ISP's peering router with their upstream, it doesn't prevent in-block spoofing (e.g., spoofing packets within the ISP's block). If you try to beat that on all the aggregation routers, you have a lot of ACLs to maintain; customer churn could put address blocks all over the place.

      I'd argue that ISPs should make it a term of service that *their* customers ACL their edge routers; we-catch-spoofing-we-cut-you-off language.
      • by Cato (8296) on Sunday October 27, 2002 @11:22AM (#4541673)
        This is mainly laziness - there are tools to help you do this, from Expect-based scripts up to commercial router provisioning tools (which can also be used to activate IP VPNs and QoS).

        As for router capacity - Junipers don't have this problem, and if the ISP manages the CPE router on the customer site they can just push it down to that device. On a Cisco, where you have symmetric routing (probably the case for most smaller customers i.e. not dual-homed), you can just set the IP reverse-path forwarding option, which is very efficient - on each packet, the router does a routing lookup on the *source* address, as if it was trying to send a packet back to its origin. If the routing table doesn't have an entry for that source address that points out via the interface the packet was received on, the source address has been forged. This is not much overhead at all - just one more routing lookup.

        For dual-homed customers, the provider has to use ACLs or perhaps a managed CPE, but ideally this would be a selling point for the ISP helping to generate cash to pay for router upgrades if needed - it safeguards the customer's network from being used to generate DDoS attacks with forged source addresses, which could save the customer from a lawsuit.
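The reverse-path check described in the comment above can be sketched in a few lines of Python. This is only an illustration of the logic, not router code: the routing table, interface names, and addresses are all invented.

```python
import ipaddress

# Invented routing table: (prefix, outgoing interface) pairs.
ROUTES = [
    (ipaddress.ip_network("192.0.2.0/24"), "eth0"),    # customer A
    (ipaddress.ip_network("198.51.100.0/24"), "eth1"), # customer B
]

def rpf_check(src_ip: str, arrival_iface: str) -> bool:
    """Strict uRPF: do a routing lookup on the packet's *source* address
    and accept the packet only if the best route points back out the
    interface the packet arrived on."""
    addr = ipaddress.ip_address(src_ip)
    best = None
    for net, iface in ROUTES:
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best is not None and best[1] == arrival_iface

ok = rpf_check("192.0.2.7", "eth0")       # genuine customer A packet
spoofed = rpf_check("192.0.2.7", "eth1")  # same source arriving on B's port
```

As the comment notes, this is cheap precisely because it reuses the routing lookup the router already knows how to do, and it only works when routing is symmetric.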
      • its expensive as a lot of their routers can't fast switch with ACLs applied ... It can also be a PITA to maintain

        All true, but increasingly no defence for a lazy ISP. It's jolly inconvenient for me to stop my car at red lights, but it's my duty as a good road-using citizen to do so, even if I don't think I'll get into an accident.

        if you put it at the very edge, like on an ISPs peering router with their upstream, it doesn't prevent in-block spoofing

        Indeed, it's an imperfect solution, but it does give a start to a besieged site - better they (and their downstream) block a whole ISP block or whole ISP than go down entirely. Unfortunately, there's a selfish counterargument to this - an ISP who is a good citizen and implements blocking may (if it's subsequently used for DDoS) find itself cut off from (e.g.) Yahoo or eBay - one that doesn't will get a _little_ more service, until the target itself expires.

        I'd argue that ISPs should make it a term of service that *their* customers ACL their edge routers; we-catch-spoofing-we-cut-you-off language.

        I'd second that - on the contract line following "thou shalt not maintain an open SMTP relay".

  • Old Idea (Score:5, Informative)

    by Brew Bird (59050) on Sunday October 27, 2002 @10:07AM (#4541289)
    This idea has been hashed to death for years.
    The basic implementation has already been done.

    What is novel and new about this paper is the suggestion that upstream routers are going to allow any tom, dick and mary to tell them what packets to throttle.

    Always ass-uming that the larger switches can actually do this on the scale that is hinted at in the paper.

    While issue 1 is specifically a political issue between carriers and customers, one could always point to the ease with which BGP routes are exchanged as an example of how easy this would be to do. Unfortunately, since we are now talking about something that could effectively put a transit provider out of business, there is no way issue 1 will be overcome unless the router manufacturers give me the same kind of filter and ruleset technology I have for BGP. That would allow me to ignore anything I want from anyone, and would have the net effect of the feature being disabled!

    As for 2, I'm sure some router manufacturer has been touting this type of 'feature' on their new multi-gig-a-bit MPLS/IP-does-everything-at-once switch. Don't believe it until it's out of the lab, guys. As many times as carriers have been screwed over by these new startups and their 'awesome powerful technology', I'm surprised anyone believes their line of crap anymore.

    It's too bad DDOS attacks don't go on for weeks; then we could use something like RBL to deal with it. Since they are so transitory, blackholing on the fly (which is basically what this paper is advocating) would require a lot more thinking than has been put into this work.

    Perhaps, instead of trying to complicate our lives with Yet Another New Protocol, you could simply come up with an IDS concatenation system that puts together 'lists' of known DDOS sources at the current moment and puts them into a BGP feed... What a concept! Taking 2 technologies that are known to work, and available to ANYONE that does BGP on the internet, and making it work!

    Thank You, Come Again.
    • Perhaps, instead of trying to complicate our lives with Yet Another New Protocol, you could simply come up with an IDS concatenation system that puts together 'lists' of known DDOS sources at the current moment and puts them into a BGP feed... What a concept! Taking 2 technologies that are known to work, and available to ANYONE that does BGP on the internet, and making it work!

      This kind of reminds me of DShield [dshield.org]. And I think you're right: if we could automate such an internet-wide distribution of potential DDoS participant hosts, then when an attack begins the victim could invoke "the blacklist" and hopefully cut out a big chunk of the sources.
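The aggregation step of the "IDS concatenation" idea discussed above can be sketched as follows. This is a hypothetical illustration: in practice the resulting list would be announced as a BGP feed, and the threshold, victim names, and addresses below are all invented.

```python
from collections import Counter

# Require this many independent victims to report a source before
# blackholing it, to limit the damage a single false report can do.
REPORT_THRESHOLD = 3

def build_blacklist(reports):
    """reports maps victim name -> set of attacking source IPs it saw.
    Returns the set of sources reported by enough distinct victims."""
    seen = Counter()
    for sources in reports.values():
        for src in sources:
            seen[src] += 1
    return {src for src, n in seen.items() if n >= REPORT_THRESHOLD}

reports = {
    "victim-a": {"203.0.113.9", "198.51.100.4"},
    "victim-b": {"203.0.113.9"},
    "victim-c": {"203.0.113.9", "192.0.2.77"},
}
blacklist = build_blacklist(reports)
```

The threshold is the interesting design choice: set it too low and a forged report becomes exactly the reverse-DoS attack other commenters in this thread warn about.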

    • Ok, disclaimer first - I haven't actually read the paper. That said, if you're right about:

      What is novel and new about this paper is the suggestion that upstream routers are going to allow any tom, dick and mary to tell them what packets to throttle.

      Then, lol. Do they really think this is a good idea in any way, shape or form?
      This opens up an even worse class of DOS attack than the one that it plugs. Effectively I can clamp off your traffic by accusing you of DOS'ing a bunch of servers out there somewhere (by forging requests from those servers).

      Or even worse, again by forging requests from a server I can fire off pushbacks to a large number of edge routers and close down most of your traffic from those areas.

      Is there no authentication in there at all?
    • Re:Old Idea (Score:4, Insightful)

      by Cato (8296) on Sunday October 27, 2002 @11:29AM (#4541706)
      A BGP feed will only help if you want to drop ALL traffic to a given IP prefix - the ACC proposal actually lets you limit traffic by port number as well.

      Also, a BGP-only solution would only let you drop traffic, so it's not very useful for flash crowds, where the traffic is legitimate but excessive. It's also not useful where the port / prefix etc can't precisely identify only DDoS traffic - rate limiting allows some good traffic to get through while also limiting the DDoS. Blackholing != limiting (did you read the paper at all?)

      I agree that this can be prototyped using existing technology (see my post elsewhere), but if this approach proves useful, a dedicated protocol would be helpful - though this could perhaps be piggybacked onto BGP using additional attributes to carry the filter and rate limit information.
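The distinction drawn above between rate limiting and blackholing can be seen in a toy token-bucket limiter: instead of dropping all traffic to a prefix, it lets a configured rate through, so some legitimate packets survive a flash crowd. The rates and numbers here are invented for illustration.

```python
class TokenBucket:
    """Minimal token-bucket rate limiter sketch."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate     # tokens added per second
        self.burst = burst   # bucket capacity
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        """Return True if a packet arriving at time `now` fits the limit."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10.0, burst=5.0)
# 100 packets arriving within one second: only the initial burst plus
# the refill rate get through; the rest are dropped.
passed = sum(bucket.allow(now=i / 100.0) for i in range(100))
```

Blackholing is the degenerate case of this (rate zero); the paper's argument is that a nonzero rate is the only option that helps when the "attack" is really a flash crowd of legitimate users.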

      • Yes, I do understand that a BGP feed will do that. Can you think of a better incentive for the source of the DDOS to actually do something about it? (Flash crowds are another issue completely, and should be handled by the local site. I think you would be hard pressed to find any site that doesn't already have a solution for 'flash' crowds available. If it is a regular issue, you can always Akamize... If not, there is nothing stopping you from rate limiting at the edge. Forcing the core of someone else's network to rate limit based on your arbitrary rules is totally the wrong way to deal with flash crowds, IMHO)

        I actually had this argument with a few router vendors a few years ago; my argument was quite simple. By the time the core switch has enough power to do all of the things it would need to do in order to:

        1) filter on src/dst/port/URL whatever
        2) implement these filters on multiple OC-192 trunk connections without taking too much of a % loss
        3) making it scale to 'Internet' size.

        You have spent more money buying the switch than if you had simply gone with an 'all or nothing' solution.

        AND you achieve pretty much the same result. (from the 'I need to let paying customers use the bandwidth vs these DDOS hax0r l33t skript kiddies point of view)

        This is the difference between someone who doesn't have to pay for implementation of a technically superior solution, and someone who has to worry about how little money is being made with these expensive switches, because everyone expects their multi-megabyte broadband connection to be $29.95 a month.

        Did you REALLY think it was a coincidence that so many IP backbones have gone out of business? Trying to keep up with the Joneses doesn't help, and feature enhancement requests to high end switches like this only increase the complexity and cost of the Internet, for dubious improvement.
  • by The Moving Shadow (603653) on Sunday October 27, 2002 @10:10AM (#4541305)
    While Pushback technology can help servers stay online, it literally pushes the network load off to another branch of the network, where it can congest normal network connections. For important servers like the nameservers that were attacked last week - which (btw) used a similar technique of pushing requests, e.g. network data, off to another part of the network - this is a good method. But you run the risk of creating congestion somewhere else on the network, so people working upstream from the attacked server will probably suffer from poor accessibility. It's just a choice of what you want to sacrifice: either the targeted servers or the people upstream. But I agree this technology is a step forward towards an appropriate security answer to DDOS attacks.
  • The paper talks about pushback filters based on destination-IP based address filters. Consider a DDoS attack on a popular site such as slashdot. Pushback will affect EVERYBODY, not just the unpatched zombies. If exploited correctly, this makes for a perfect tool for the hacker to obtain a 100% denial. This is an arms race, we can't afford to give hackers our nukes, unless we make sure they can't be used against us.
  • Um, this isn't new. (Score:5, Interesting)

    by Mordant (138460) on Sunday October 27, 2002 @10:19AM (#4541344)
    Bellovin came out with this a while ago. It's an interesting concept, but has the following practical drawbacks:

    1. All the various vendors would have to implement it.

    2. False positives. A new form of DoS would be to generate enough spoofed traffic to trigger this sort of thing -aimed at someone else-. Imagine your outrage when your l33t IRC buddies spoof your IP address block whilst attacking www.slashdot.com - no more imbecilic, outdated "Gee, whiz!" types of posts for you to read.

    3. Oftentimes, rate-limiting via CAR, traffic shaping, or other methods consumes more CPU cycles on the routers than simply blocking the offending traffic (assuming this is possible, which depends upon the attack methodology).

    The best way to combat DoS attacks generally is to use strong platforms that process ACLs and other features in hardware (ensuring that your config allows those features to be processed in hardware; logging ACLs like a 'deny ip any any log' just won't cut it these days), ensure you have the ability to 'draw off the poison' by sinkholing traffic headed for the destination by advertising a null route for it on a sinkhole router (this isn't always possible, it depends upon the target of the attack; you may not want to sinkhole all requests to your Web server, for example), ensure you have as good a traffic sniffing/IDS-type capability as possible, make use of Netflow tools like CAIDA cflowd/OSU flow-tools/Flowscan/Panoptiis/FLAVIO/Arbor Networks' Peakflow DoS, and know how to get in touch with the folks at your ISP(s) who can help with identifying the (even spoofed, via Netflow tracing) sources and blocking the offending traffic upstream of you.

    If you're a commercial site, strongly consider a distributed Web site, hosted at different locations and using some sort of Global Server Load Balancing technology (GSLB; Cisco's Distributed Director and 4480 are two examples of this) to send people to different sites depending upon their location, network topology-wise.
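The sinkholing tactic described in the comment above amounts to installing a more-specific route that sends the victim's traffic to a null interface, so the rest of the network stays reachable. A toy illustration, with all prefixes and interface names invented:

```python
import ipaddress

# Invented routing table: prefix -> next-hop interface.
routes = {ipaddress.ip_network("0.0.0.0/0"): "uplink"}

def next_hop(dst: str) -> str:
    """Longest-prefix-match lookup over the toy routing table."""
    addr = ipaddress.ip_address(dst)
    match = max((n for n in routes if addr in n), key=lambda n: n.prefixlen)
    return routes[match]

def sinkhole(prefix: str) -> None:
    """Install a more-specific null route for the prefix under attack."""
    routes[ipaddress.ip_network(prefix)] = "null0"

before = next_hop("192.0.2.80")   # normal forwarding via the uplink
sinkhole("192.0.2.80/32")
after = next_hop("192.0.2.80")    # attack traffic now discarded
```

Note the trade-off the comment flags: the /32 route wins longest-prefix match, so the victim host goes dark entirely, which is itself a denial of service against that one address.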
  • "The authors of the paper have an initial implementation on FreeBSD."

    I wonder when the LinPushbackd, GNU Pushbackd and PHPMyPushbackd projects will appear on SF.

  • by fluor2 (242824)
    Doesn't IPv6 fix this? IPv6 NOW! [ipv6.org].

  • If they forge the send-from info, wouldn't that make this idea sort of useless?

    It might even reverse-DOS innocent people... I'd be pretty upset if that happened to me. I might even sue if I lost revenue...
  • by constantnormal (512494) on Sunday October 27, 2002 @10:41AM (#4541464)
    in a press release by the Office of Homeland Defense, it was announced that an insidious plot by hacker terrorists had been thwarted. It seems that this subversive web site, www.slashdot.org, would trigger random DDOS attacks on targets identified on their web site. It has yet to be ascertained what their intent was, as no logical pattern has been detected. The investigation continues.

    Welcome to the Twilight Zone.
    I certainly hope the filters used to detect true DDOS attacks are effective enough to prevent this scenario.
  • Criticism: By giving smaller routers the power to command the behaviour of larger routers upstream, you are dangerously opening up a loophole that could allow someone in control of a router to maliciously affect upstream behaviour (potentially a huge scope!).

    Improvement: Only allow routers to pushback/command up one or two hops to limit the scope of potential reverse-DoS attacks.

    Easy testing: This doesn't refer to the above issue, but still... have AT&T set up a test site running their BSD implementation and then post a story to slashdot to have us test it out :)

  • My firewall blocks all incoming ICMP except a few select types. Quench is not one of them. It could conceivably be used against you, so I block it. Why wouldn't the guys who write the scripts for the kiddies make changes to their code so that zombie machines ignore source quench ICMP?

    I'm not sure how effective source quench is against routers in the path of a zombie host.
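The firewall policy described in the comment above (allow a few select ICMP types, drop source quench) can be sketched as a simple type filter. The allowed set here is an invented example, not a recommendation:

```python
# ICMP types this hypothetical firewall admits: echo reply (0),
# destination unreachable (3), echo request (8), time exceeded (11).
ALLOWED_ICMP_TYPES = {0, 3, 8, 11}

ICMP_SOURCE_QUENCH = 4  # deliberately absent from the allowed set

def icmp_policy(icmp_type: int) -> str:
    """Accept only whitelisted ICMP types; everything else is dropped,
    including source quench, which an attacker could use to throttle us."""
    return "accept" if icmp_type in ALLOWED_ICMP_TYPES else "drop"
```

This is exactly why the commenter doubts pushback-by-quench would work: any zombie (or its firewall) can simply ignore or drop ICMP type 4.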
  • Heh heh (Score:2, Interesting)

    by tuxlove (316502)
    What if the script kiddies attacked their targets with loads of source quench packets? Can you source quench a source quench attack? :)
  • A DDoS attack usually involves unwilling or unknowing participants. This technology will do little more than knock out a few innocent computers and cause havoc at the ISPs when people demand to know why their "internet broke"
  • by Animats (122034) on Sunday October 27, 2002 @11:53AM (#4541815) Homepage
    This is a promising idea, but not totally effective. It protects the network, not the destination. The current implementation effectively blocks all inbound traffic for a given IP address. This kicks the target IP address off the net, which in itself is a denial of service. This approach could even make an attack against a host more effective, by shutting off all its incoming traffic. It just limits the collateral damage of denial of service attacks. This makes sense from the telco perspective (note that it's from AT&T), but not from the hosting service perspective.

    An effective solution has to identify the source nodes causing the trouble and block them, not the target. This is hard, but not impossible. The big problem is doing it for fake source IP addresses.

    It may be necessary to view routing the way we now have to view mail forwarding - open relays get blocked. If a router isn't sure of the IP addresses of its input, it shouldn't be forwarding those packets. Routers that continue to do so may find themselves blocked.


  • In a DDoS the flood is coming from helpless slobs all over the net who didn't start it and are unaware. You're going to roundtrip that garbage traffic across the net a second time for even more congestion, and then push it at the client sending it?

    If they do it right it may help a small amount in awareness, but the real answer to DDoS is that there's no good answer in the current Internet. Just like Curious Yellow, the only good answer is that OS vendors change their ways very soon and get security together, so that break-ins are infrequent and require intelligent effort, as opposed to today's world of a 3-month-old script off the net easily seizing 100,000 machines.
  • This reminds me of Gibson's 'Black Ice'... :)

    I wonder when we will start seeing automated retaliatory attacks on DDos'ers and other hacking attempts... Just think: An automated scan of the remote hostile system(s) and then sending pre-programmed attacks to those computers determined by those port scans.

    If the RIAA can get away with attacking servers who are sharing copyrighted content, then couldn't a company retaliate in the same way against machines who are attacking their servers?

    Could make for some interesting wars :-)
  • This sounds like Steve Gibson [grc.com]'s suggestion from gibson research [grc.com].

    I wrote a paper in a similar vein last spring about stopping ddos attacks, it's the second section of this paper [uci.edu]. It seeks to fix the underlying problem, not create a band-aid.
  • try this (Score:2, Interesting)

    by trybywrench (584843)
    A properly DDOS'd router or network doesn't have the queue space to deal with control packets. Most likely they will be dropped just like the DOS'ing packets. I don't think RED (a common queueing algorithm) differentiates between types of packets. Some sort of QOS-based algorithm would be needed to ensure that control packets get highest queueing priority. But then that QOS algorithm would have to be installed and working in the entire network, which isn't likely.
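The point above about RED can be made concrete with a minimal sketch of its drop rule: the drop probability ramps up linearly between a minimum and maximum queue threshold, and the decision never looks at what kind of packet is being enqueued, so pushback's control packets fare no better than the flood. Thresholds here are invented for illustration.

```python
import random

MIN_TH, MAX_TH = 20, 80  # average queue-length thresholds (packets)
MAX_P = 0.5              # drop probability as the queue approaches MAX_TH

def red_drop_probability(avg_queue_len: float) -> float:
    """Classic RED drop curve: 0 below MIN_TH, 1 at/above MAX_TH,
    linear ramp in between."""
    if avg_queue_len < MIN_TH:
        return 0.0
    if avg_queue_len >= MAX_TH:
        return 1.0
    return MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)

def red_enqueue(avg_queue_len: float, rng: random.Random) -> bool:
    """True if the packet is admitted - note no argument says what
    *kind* of packet it is; RED is blind to that."""
    return rng.random() >= red_drop_probability(avg_queue_len)
```

A QOS scheme would add a packet-class argument to that decision; as the comment says, it only helps if every router on the path honors it.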
  • by GreatDave (620927) on Sunday October 27, 2002 @01:52PM (#4542399)
    Most of the reasons why have been said before but to sum it up...

    Sending quench packets back to the routers feeding you DDoS packets, and eventually back to the host in question, is a good idea in theory. Kinda like communism. But in practice it won't work. First of all, there's the CPU resource strain on the routers and hosts. With DDoS, you have thousands if not more hosts all banging on your router, and your router is going to get a cardio workout going through its tables and deciding what gets throttled. Secondly, the return addresses on DDoS zombie packets are forged a good 80% of the time, meaning that you'll probably only hit 2 or 3 routers upstream with your quench packets.

    A better solution? Null routes come to mind, but there are the CPU issues again. I'd like to see some "technology" similar to this where a customer of a commercial ISP could modify firewall rules on the _ISP's_ router to control traffic coming into their netblock. Perhaps a few routers upstream too. This really appears to be the only logical "quick fix" at the network level for DDoS.

    A better fix would be to keep those zombies from ever coming into play by nuking everyone's NT/XP boxes, but that'll have to wait until penguins or daemons rule the planet.
  • How would this deal with spoofed IPs?

    The script kiddy would just have to send spoofed IP packets to the server with Pushback installed, using the IP address of a server they want to hack. Spread this out over a large number of compromised zombie machines, against a large number of high-bandwidth Pushback servers, and their actual target is being DDOS'd by Amazon, Yahoo, Microsoft etc. etc. etc.
  • Ugh (Score:2, Insightful)

    by richard_willey (79077)
    Call me simple or old fashioned; however, I have an intrinsic distaste for technical solutions that require intermediate systems to do real time monitoring of packet flows. Even if you are using some type of stochastic sampling, this type of implementation is still going to have a significant effect on forwarding performance. It's worth noting that 99% of all the routers out there do NOT support basic IP options. For all intents and purposes, options such as "Source Quench" or "Source Route" or "Record Route" are theoretical constructs. They are not enabled or supported in the control/management plane.

    I've always been a proponent of big dumb pipes and intelligent end nodes. I probably always will be. The overhead associated with supporting intelligent intermediate nodes is simply too high.

    Richard
