The Internet

Stopping Distributed Denial Of Service

Anonymous Coward writes: "Fernando Schapachnik has updated his proposal for defeating Distributed Denial of Service attacks based on changing network routes. His paper describes a technique that can be used to defeat the recent DDOS attacks in real time. The solution presented here is based on routing and it requires a certain amount of extra network infrastructure."
  • A few problems here.

    #1. Getting the provider to change DNS is better than an actual attack. You now have 1-48 hours of cached DNS floating around on the Internet. Mission accomplished.

    #2. Any solution to the problem has to take into account multiple gateways. While the author said he'll show one gateway for simplicity, I counter that this is not a simple problem and cannot be reduced to a single-gateway network even in a demonstration.

    #3. Half the battle in these attacks is finding out which of your providers is sending which traffic. To do this, in most cases you must be able to filter packets. Filtering packets at a Gb/s or so is impossible unless you are WAY overbuilt.

    #4. Having a "stub network" is not a new idea. I saw the presentation for CenterTrack at the November NANOG.
    http://www.caida.org/k/centertrack.pdf
    Again, having the processor power to actually accomplish this is no small problem.

    Kashani

  • They could probably even cause current routers to start this behavior with a microcode

    Firmware patches are useless because modern routers don't have time to process every packet through their CPUs. High-end routers do most things on the line card, which doesn't involve the router's CPU.

    This is why traceroute no longer produces sensible results. ICMP has to be processed using the router's CPU and therefore is given second priority, which is one reason why the RTTs don't always increase through successive hops.

  • Moderating yourself isn't possible. Go bug Rob [mailto] if you don't believe me.
  • That's right.. there's a secret illuminati of karma whores you don't know about... we have thousands of accounts on hundreds of servers.. and our sole purpose is to piss off the illustrious Anonymous Coward! Why? Because we all spend 16 hours a day just sucking up karma! Yes, that's right.. hundreds of us doing nothing but whoring karma! MY GOD, IT'S A CONSPIRACY!

    He's found us out! NNNOOOOoooooooooo! Okay, did you e-mail rob like I asked? Okay then. He'll tell you that there are IP address filtering options in the slashcode to prevent people from the same IP addy from moderating their own comments.. regardless of who they logged in as. Yeesh..

  • I just rooted 1000 boxes. They are now all running webcrawlers on your website. You are a high-volume website such as amazon.com. Separate the traffic being generated by the webcrawlers from your customers' traffic. You have 30 minutes. You may bring whatever supplies to the server room that you need. Begin.
  • That's why the proposal specifies the Time To Live value for the DNS entry as zero. This means that EVERY request for the domain name will cause the authoritative DNS server to be hit. That sounds a bit silly to me. If implemented as suggested, the DNS servers would fall down instead.

    I get it: instead of letting the DoS attack happen on the targeted box, it takes down the DNS network instead? Doesn't sound too good to me....
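
    For reference, forcing that behaviour is a one-line change in a BIND-style zone file. A minimal sketch (names and addresses made up) that pins the record's TTL to zero so resolvers never cache it:

        $TTL 0
        www.example.com.  0  IN  A  10.0.0.1

    Every lookup then lands on the authoritative server - exactly the self-inflicted load being complained about.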

    -=Bob

    Most sites nowadays seem to be "ooh - look at my ugly photo" with a piece of text describing the life of some halfwit from the Midwest who thinks that they're a 31337 haXoR because they can make a web page in Frontpage Express.
    Then again, those sites invariably have a hitcounter showing a grand total of 17 hits. 12 from said halfwit, 3 from 'friends', and the remaining 2 from people who have inadvertently stumbled across it on some search engine.

    So they only take up server space, and that's not really a big deal. Besides, the 'web' is a bit of a lost cause anyway. Now if you'd said usenet and email...

  • Creative.


  • Doesn't seem too excitingly original to me. Just a variation on having a backup web server located at another datacenter, and directing DNS to the backup web server when the primary gets attacked. Lots of other posts have already pointed out the faults with this basic mechanism (like the 0 DNS TTL??? there goes your DNS server. A few minutes would be a bit more reasonable. Plus, if the DDOS checks DNS every once in a while, they can attack both).

    The only difference with this is it requires just one server machine instead of a primary and a backup, with the same trick of changing DNS, but also directing away the old route. So you save a little money - just need one server, one ISP, etc. Then again, this is only a benefit if you're not willing to just buy two servers, co-lo them at two different datacenters, stick your DNS servers at a third datacenter, and be done with it. And if you're not willing to do that, you're probably going for a cheap ISP who'll get overloaded by the attack, making this whole method useless anyway.

    Just my two cents.

    -Andy

  • We know DDOS clients are hard to track because they forge the IP source on outgoing packets. This paper says that with the DNS ttl=0 we could then use the resulting pattern of DNS traffic in combination with the IP traffic to locate the actual IP of attacking machines.

    Now, aside from all the other noted problems with the proposed scheme....

    The problem for the attacker then becomes how to track the changing IP of the target without disclosing their actual IP with continual DNS lookups on the target.

    One way would be to have the attacking machines scan their local network for DNS lookups on the target and forward any new IPs to the "evil master", who could then recoordinate all the rest of the clients to the new IP.

    Another way to do this would be to allocate one group of compromised machines to just performing the DNS lookups and reporting the info back to the master for distribution to the actual attacker clients. In this case the boxes doing the DNS lookups would disclose their IPs, but that gives the attackee much less useful information: it could work to block them and so prevent tracking of the routing changes, but would see no benefit (in the form of the actual attack load decreasing) until it had successfully tracked and identified ALL the DNS-tracking boxen involved.

    NEXT! :-)
  • Correction.

    net-wide, not web-wide. The web doesn't play in.


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
    The point was not that "artificial" clients cannot be faster. The point is that it is far easier to find the bots doing the web requests and just drop packets from those IP addresses where it fits, for instance at the border routers - if you have the technical possibilities.
    If the source IPs are spoofed then there is an important datapoint missing.

    And you're clearly right that an hour's outage is bad enough. And also, if you have very big files, it's impossible to distinguish a DoS attack from legitimate traffic (free BeOS was a good example ;-)).
  • [Get-the-airbag-for-that-one-bit-more-of-safety argument deleted]

    Your analogy with DDoS and airbags is wrong, and for the same reason on each comparison.

    1) The real solution, the only one that will really work, is for the ISPs to shut down the non-internal-source packets. Period. The rest of the solutions can and will be gotten around, easily.

    2) Other DDoS solutions are dangerous in that they can and will drop legit packets... just as air bags are dangerous to small people, adults and children alike.

    If a good safety harness is good enough for Dale Earnhardt going 200mph, it's good enough for me going 60. If shutting down the non-internal-source packets, and making it stick, will stop this problem cold and not harm other traffic, then let's do it, and keep pounding on folks until they do.

    Fancy gee-whiz workarounds are no substitute for personal responsibility.

  • Point well taken...of course I lived on a farm with bad country music so I have issues with the 80's in general.

  • Are you still here? GO AWAY

    -Internet, part Owner.

  • Well, apart from one honest typo I still stand by my original argument. So why don't you answer me these simple questions, since you obviously have all the answers and they are all "right" (pun intended):

    Who owns the internet? I mean, give me the name of the organization or the entity that actually owns it. The military? Not since DARPA/ARPANET. IBM? Cisco? MCI-WorldCom? While a great many companies own the backbones, subnets and actual machines on which the internet resides and operates, the whole internet as an entity is greater than the sum of these parts. In many countries the government (GASP!!!) owns the backbones, so, by extension, do the people of those countries. Asked another way: which entity could just shut the internet off?

    What will be served by "wresting control" of the internet away from those who do not understand it? What is the internet for? By your arguments, only people intimately knowledgeable about the operation of the telephone system should be allowed to use a phone. Everyone else who calls their friends or relatives, orders a pizza or listens to phone sex is wasting valuable telephone bandwidth which could be better used by people like you, who obviously have more important things to say. Should only the elite few have telephones? You seem to think the internet should only belong to (read: be used by) an elite few with messages and information worthy enough to take up bandwidth. Do I need to know how to program TCP/IP in order to use the internet? How much knowledge must one have in order to join your little internet country club?

    So what's your big plan, take the internet away from people you deem unworthy? How?

    To paraphrase the company down the street (literally) "what do you want the internet to be?"

    If you don't like that drooling idiot's page, don't visit it. I believe that's what a bookmark file is for.

    BTW, I can see by your statement that I was correct. Unless you happen to be in England (or other parts of the UK) I suspect you're at home (how else could your neighbours - this spelling is another clue - hear you) and therefore don't have a job or are late for class. Since I am at work doing web development, on one of those servers, on one of those networks, by your argument I am part owner of the internet and you are not.

    Once you've answered my questions I'd like you to leave, please - your narrow mind is trespassing where only open minds should be.

  • The best defences seem to be, in order:
    * protect yourself from IP spoofing by configuring your firewall
    * hold off on honouring changes to DNS info (so you can check whether they're legit "manually").
    * and of course: keep installing those security patches!


    Uh, well, if it were only that simple. Config your firewall to drop spoofed packets? Nice idea, but if your firewall is swamped with source-routed packets to drop, you're out of business anyway.

    Same goes for "security patches." The problem is network-based, and its solution is going to be network-based. Patches to hosts aren't going to help.

  • If he edited the article submission script to automatically mirror any pages and images the article directly linked to, then it would make the load slacken off a lot.

    I guess he'd have to get in touch with a site admin before duplicating any of their content, but otherwise I can't see any problems - this probably wouldn't be necessary for the .com stories. It would enhance /. for the readers and stop other people's servers being squashed.
  • Yes, that's what I also noticed. There are long delays before DNS servers will update their entries. This scheme works fine on a small site where there aren't thousands of copies of DNS entries cached on major ISPs around the world. It fails for any large site where the ISPs will continue to provide DNS entries which use the old IP addresses...for hours or days.
  • Right, so this scheme will work for any site expecting 17 hits a week but not for Yahoo or WhiteHouse.gov. With TTL=0 you get to develop the fastest DNS server farm on the planet, and you can devote a percentage of your bandwidth to DNS queries.
  • Man, down 15 points in heavy trading! Those poor freaks that are being paid with stock options gotta all be updating their resumes today, eh?
  • Seems like this proposal is relying on the web site being able to change its address and routing info faster than the attackers are able to react. This is a losing proposition.

    It's apparent that a distributed attack requires a distributed solution. But as many posters have pointed out, that usually requires a lot of cooperation from a lot of ISPs. But what about using the caching infrastructure? Assume the web site under attack actually allowed its content to be cached (pretty big assumption due to cookies and banner ads). Now if the site is attacked, all the content that's currently cached could still be served from caches. This doesn't cover dynamically generated content or content that hasn't been cached. So this is a partial solution, but it's deployable right now, today, this instant.
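
    As a rough illustration of how little it takes on the cache side: Squid, for one, can be told to keep serving whatever it has cached even when the origin is unreachable (a sketch, not a tuned config; the directive is standard squid.conf):

        # squid.conf: serve cached objects without revalidating
        # against the (possibly unreachable) origin server
        offline_mode on

    Dynamic and never-cached content still falls through to the origin, which is the partial-solution caveat above.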
  • The only solution to this is to catch the people doing it quickly. Then drop the boom on them. Really, it's the only viable option. Make them suffer, and ignore their age. Who cares if it's some twit 17 year old elite computer guru or not, it's still antisocial behavior and they need to be punished.

    The only way to catch the people responsible is to make sure that you can't easily do long distance faked source addresses.

    And the only way to do that is to make a clear separation between carriers and service providers. Carriers should not be able to create or modify traffic. And all service providers should be required to have routers that will not pass outbound packets with impossible source addresses.
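
    For what it's worth, that last requirement is just an egress filter on the provider's customer-facing router. A minimal Cisco-style sketch, assuming the customer was assigned 192.0.2.0/24 (addresses hypothetical):

        access-list 110 permit ip 192.0.2.0 0.0.0.255 any
        access-list 110 deny ip any any log
        !
        interface Serial0
         ip access-group 110 in

    Applied on the interface facing the customer, it drops any packet whose source address the customer couldn't legitimately have.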
  • True, true... but if you get creative and have your DDOS program crawl and start a session (by following an item link, or even better, use a search) then you can more or less guarantee getting fresh dynamic pages (and cause more problems)...

    The main dynamic pages (store front, product listings, etc.) would be cached, but hey, if one is planning a DDOS attack, one should be smart enough to take things down more efficiently ;-)
  • Won't work for pages that are built dynamically, though (i.e. /. or Amazon)... If it uses incoming info (like cookies) to produce the page, then your solution won't really hold up too well... If everything is static, then you wouldn't have as many problems to start with - the server and database load is what ends up killing you, and if that doesn't, there could end up being far too much traffic through the pipe to be able to do anything anyway...
  • And my point is, without giving up your identity, how do you crack into the machines needed for a DDOS attack without IP spoofing? This would make it much harder for a cracker to set up a DDOS attack.

  • Anyone who wants to try their hand at defeating this type of attack will have a good test case to work with soon:

    MSNBC:Hactivists to attack biotech firms [msnbc.com]

    DoS people's website [demon.co.uk]

    Personally, I'd like to see this one stopped. Why? Because if this actually does anything, Chuck Schumer and the rest of the Washington Crowd will use it to obtain sweeping new powers. On the other hand, if it fails miserably, people can point out that the solutions to these types of things are technological, not legal.

    Basically, giving the U.S. government an excuse to crack down on Internet freedom is not a useful way to protest things in my opinion. Besides, a script-kiddie style DoS attack (they won't even have to obtain their own tools) which will be blamed on hackers isn't particularly 37337. Oh, and I would seriously hate to see the abomination of a word 'Hactivist' (bleah!) come into common usage... -_-

  • I personally think the web is a great thing as it stands. Some of the commercialization is good. I love being able to order computer parts and DVDs over the internet, and my bank Wingspan is completely internet based. I do however miss the old days when there weren't web pages with music, flash multimedia shit, and urban legend junk mail. I think we should set up another network kind of like Internet 2. Instead of limiting it to colleges we'd limit it to IT professionals, corporations, and scientists. No more www.hotsex.com allowed; we can still get that on the internet. No banner ads, no trolls, no spam. If someone breaks the rules you warn them; if they persist, sever their connection. All of that bandwidth just for the exchange of useful IT & science related info.

  • ... but it wouldn't even require the phone call if you:

    1) Set up three or more networks with their own routers.
    2) Set up round-robin DNS on one network, all other servers on others, i.e.

    example.com.    IN A 10.0.0.1
                    IN A 10.1.0.1
    Even better, put your primary and secondary DNS on separate networks. No, it's not regulation, but we live in dangerous times.

    Then, when one router gets saturated, resolution proceeds to the next. On a tri-network, at least two separate attacks must be made. Even if one of them was the primary DNS's network, a secondary on one of the others might still be available. Even if the only remaining network has no DNS, cached information might allow the site to be accessible for some time.

    This would be a little easier than phoning your ISP and expecting them to actually help you. That would require competence, a rare commodity.

    Naturally, the problem with my plan is that we're already out of addresses on IPv4, so unless you run special masks to partition your network, you'd have to beg for more addresses from InterNIC.
  • It reroutes all the traffic a different way to the server? But what about updating DNS entries in other servers? I mean, you can't just go around changing paths and whatnot... and this certainly doesn't sound like 'real time' to me. Maybe a few hours, maybe a few days... but it seems impractical to me.
  • I strongly disagree, and I don't want to speak for everyone, but I'm pretty sure that many people would also disagree with what you said. First, the internet is a constantly-evolving structure; the creators never would have imagined that it would evolve into the incredible entity that it is today. But the fact remains that it is a new medium. BBSes and such are still around; no one is stopping anyone from using them.

    And if you're interested in the old days, the times when a 2400 baud modem was a luxury, then you shouldn't be complaining. Sure, with Java applets and graphics up the wazoo, the bandwidth is diminished, but you will still get your 2400 baud (and I dare say, probably more than that).

    The internet is whatever you want it to be. If you want it to be your shopping mall, then so be it. It can also be your newspaper, your television, your comic book, your bulletin board, or your game room for playing Quake II. The internet is a living, breathing thing. It's a dream come true for companies, and the children of today couldn't imagine a world without it.

    just some random comments =)

  • If you're willing to burn lots and lots of IP addresses, then it is possible to win the fight against DDOS attacks. Here's how to prepare www.example.com:
    • Allocate a large range of IP addresses (say 1E10).
    • Of the 1E10 addresses, blackhole 99% of them. Choose the set to blackhole with a slowly-changing cryptographic method. (This leaves 1E8)
    • Teach your routers to pass traffic for the valid IP addresses to the web server (www.example.com). Traffic for the other addresses should be logged.
    • Whenever a DNS request comes in, pick one of the valid IPs using a cryptographic hash function of the requesting host's IP address (see the sketch below).
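
    A toy sketch of those last two steps in Python (the /8 pool and the secret are stand-ins; the real scheme needs IPv6-sized ranges, and a production version would bound the hash walk):

        import hashlib
        import ipaddress

        POOL = ipaddress.ip_network("10.0.0.0/8")   # stand-in for the huge range
        SECRET = b"slowly-rotating-key"             # the slowly-changing crypto key

        def is_valid(addr):
            # Blackhole ~99% of the pool: keep an address only if its keyed
            # hash lands in the bottom ~1% of the hash space.
            h = hashlib.sha256(SECRET + addr.packed).digest()
            return int.from_bytes(h[:2], "big") < 655   # 655/65536 ~= 1%

        def answer_dns(client_ip):
            # Map the requesting host deterministically onto one valid IP,
            # walking a hash chain until we hit a non-blackholed address.
            seed = hashlib.sha256(SECRET + ipaddress.ip_address(client_ip).packed).digest()
            while True:
                candidate = POOL[int.from_bytes(seed[:4], "big") % POOL.num_addresses]
                if is_valid(candidate):
                    return candidate
                seed = hashlib.sha256(seed).digest()

        print(answer_dns("203.0.113.7"))   # the same client always gets the same answer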

    There are a couple attacks that the bad guys can attempt.

    • Attack the whole address range of www.example.com.

      This is an attempt to overwhelm the routers/pipes near www.example.com. To defend against this, additional infrastructure must be in place:

      • For each network block, there must be a public key.
      • For each backbone router, there must be a public key.
      • There must be a list of backbone routers.
      • When under attack (as seen by lots of traffic dropping), edge routers must contact many backbone routers and send an authenticated message with the crypto key used for IP filtering.
      • Backbone routers receiving the key can now drop useless traffic.

    • Attack against valid IP addresses of www.example.com.

      Each attack exposes the attacker to identification. The attacker may be able to overwhelm www.example.com in the short-term, but the attacking hosts can be identified and dealt with one by one.

      The attack streams can be filtered, again possibly using the public-key filtration system.

    To do this requires a lot of infrastructure. It requires IPv6. It would change the balance of power though. It would allow attacks to either be shrugged off or traced. Either way, it's a lot better for the good guys.

  • I'll try to clarify, but I'm not sure I understand your question.

    Web servers would have many IP addresses. In my example, www.example.com had 100,000,000. The set would slowly change over time. The rate of change would need to be balanced against the DNS refresh/timeout settings.

    Or are you asking about what happens when someone attacks from your box? In that situation, you would be blackholed for a particular web site. This could be inconvenient. On the other hand, this would only happen if your machine was exploitable in the first place.

    I'm not proposing a long-term blacklist. When an exploited machine is identified, its owner should be tracked down. The machine should be secured. Perhaps a long-term blacklist might develop if certain machines were used repeatedly.

    Regarding the router level. By using a public-key crypto system to distribute the filtering, it's my hope that no one router would be overloaded. This is a vulnerability though.

    I suspect the router problem is solvable. For small-fry, their upstream providers can do the filtering for them. For larger players, the distributed public-key authenticated filtering can solve the problem.

    I guess it could auto-scale: when a request-to-filter is published, each router that receives the request could implement it, and then check a few minutes later to see if it is filtering much. If not, it would drop the filter. This would allow the traffic to proceed farther, to where it reaches a concentration at which the benefit of filtering outweighs the cost of filtering.
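
    That audit loop, as a rough Python sketch (Router and its methods are hypothetical stand-ins for whatever the filtering API would be):

        import time

        AUDIT_DELAY = 300        # seconds to wait before auditing a new filter
        MIN_HITS_PER_SEC = 10    # hypothetical "worth keeping" threshold

        def on_filter_request(router, rule):
            # Every router that hears the (authenticated) request installs the
            # filter, then keeps it only if it's actually catching traffic at
            # this point in the topology.
            router.install(rule)
            time.sleep(AUDIT_DELAY)
            if router.match_count(rule) / AUDIT_DELAY < MIN_HITS_PER_SEC:
                router.remove(rule)   # let the traffic concentrate downstream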

  • Efforts to stop denial of service attacks remind me of efforts to stop software piracy, spam mail, and even to stop hackers (sic) breaking in to networked systems. Dynamic IDS-style DOS detection that changes routing will probably only up the ante in DOS attacks in terms of the granularity of the sources and the timing and nature of such attacks.

    You either offer a service professionally or you don't.

    If a service provider has a well-organised incident response team, then this is likely the best way to deal with (usually short-term) DOS incidents.

    e-commerce is a four letter word ... %^)

    anonymous coward

  • The recent spate of DDOS attacks on large corporations on the web are IMHO a good thing. The internet has become increasingly commercialised, and is now filled with such "innovations" as Java applets, Flash animations and banner ads. All these contribute to are bandwidth problems, and we need to go back to when they weren't part of the web.


    And DDoSing a bunch of people and adding more bandwidth waste won't stop waste.

    The web was never designed to be a haven for companies to promote useless products, spy on users and make money from patents. We don't need them, and we certainly don't want them on our web. If DDOS attacks make them think twice about using the web, then I'm all for them.


    Unfortunately, I couldn't have afforded to get on the "web" back in the good old days. Companies pretty much made sure the infrastructure was there so I could.
  • Even though this will help short-term, people will also find vulnerabilities in this type of setup; in fact, the system that tries to analyze the packet's source could make the whole setup more vulnerable. This does not seem like a good solution, since, like the war on many things, it just pushes people to find other ways around the prevention. A good theory, but it doesn't seem easily implemented, or the best way to stop attacks.
  • Establish WELL KNOWN backup sites. Large websites like Yahoo should probably do this anyway for the cost reasons of operating a huge site, as opposed to distributed ones. DNS registration is cheap, and if every company that was worried about DOS attacks just had well-established backup sites like foo.com, foo2.com, foo3.com, etc., they would have a cheap solution that would actually reduce some costs by having the load on their websites somewhat distributed.
    -The only reason I have a FAT32 partition is to mount it like the b1tch it is.
  • You don't need the stub network. You can route 10.0.0.x to the bit bucket on the ISP's aggregate router, which would be better for the router's CPU utilization anyways

    True dat, but wasn't the idea of rerouting the traffic rather than dropping it in the bit bucket to trace it?

  • for best performance you want the 'best' routes

    Yes, but the author is talking about rerouting the DDOS traffic here, assuming you can tell the difference from 'legitimate' traffic.

    What I want to know is how the hell one is supposed to reroute one without the other, which is something I don't think he addresses.

  • Forgive my naivety,

    You're forgiven. For your spelling too.

    but surely one way of reducing the bandwidth problems on the web and I guess DOS attacks is to use broadcast IP packets from the webserver.

    As others have surely mentioned, multicast (NOT broadcast) IP is useful for static pages or for streaming audio and video, cases where every customer sees the same thing. But it's not useful for e-commerce, where every customer's page is customized for that specific customer. Likewise with HTTP proxies; caching only helps for static pages.

  • by gclef ( 96311 )
    In addition to the other comments I've read (many of them very appropriate), I'd like to add one worry:

    We're already running out of IP space. Now, this guy's talking about putting 2, 3, 4, etc. IPs on *each* box to play these routing games. No way. There just aren't enough IPs. There would be in IPv6, but then, we're not using IPv6, so that's a moot point.

  • The attacking host wouldn't be discovered.

    By looking in named logs you will see the address of the machine that made the request, which could be anything.

    Most boxes don't handle their own DNS; rather, they get it from some other host. The machine that you pull DNS from will either query a root server and then the primary nameserver for the domain, or hand the query off to another DNS server.

    Either way, you won't find the exact machine.
  • The DDoS problem is a classic "arms race" -- the problem is sorting out the "good stuff" from the chaff sent by DDoS attackers. It reminds me of a dialogue in Hofstadter's (in?)famous Godel, Escher, Bach.

    In Hofstadter's dialogue, the audiophile Crab tries to impress the Tortoise with his excellent stereo system -- and the Tortoise keeps giving the Crab records that are carefully crafted to destroy the stereo system by resonating with the works. If only the Crab were willing to accept a little bit of harmonic distortion, his stereo would be unable to reproduce the sounds and destroy itself when playing the malicious records!

    The point of the dialogue was to illustrate a necessary weakness in logical systems -- but in this case there is a strict analogy with the Internet: the better the system works at distributing information (playing music), the more susceptible it is to DOS attacks (the malicious recordings).

    The Crab's ultimate (failed) solution was to design a custom record player that would laser- scan the disks, perform a harmonic analysis, and rearrange its modules with a robotic arm if it detected a destructive harmonic sequence. Of course, the Tortoise supplied a record that would resonate with the robotic arm itself, thereby breaking the stereo.

    In our current (DDOS) case, some attacks are preventable with filtering -- but that filtering lowers the efficiency of the system as a whole, consuming more resources per request and making the system more susceptible to sufficiently clever attacks. Some attacks may even aim straight for the filtering scheme itself, aiming to cause false diversion of real traffic. Several folks have commented that clever h4x0rs can spoof arbitrarily realistic web traffic. Ultimately, there's no way to know the intent of any given packet, and sophisticated filters will simply fall to more sophisticated spoofing.

  • by Anonymous Coward
    I'll admit up front that either I'm ignorant, or the professor is. I'll let you be the judge.

    Not to sound rude, but both of you are ignorant... but at least you have a better grasp of reality ;-)

    My understanding of routers is that for best performance you want the 'best' routes. What constitutes best is determined by the algorithm, but generally speaking it's the route with the lowest latency (OSPF, for example, uses this metric). So by changing the routes you'd be making things less optimal.

    The other problem is convergence, or how quickly routers adapt to new routes. Routers have special protocols to make sure that when a route goes down (or a new one comes up) that change propagates through the router 'network' and each router updates its routing table to keep track of which packets go where. Any change in routing on a network requires a finite period of time to propagate through the router network. Changing routes too often can severely impact performance - router loops come to mind as one possible Bad Thing. So, short of redesigning all existing router protocols or creating a new one which very rapidly updates all routers on a large network, I don't see changing routes as a solution - the cure is worse than the disease.

    Lastly, even assuming all these issues were worked out, TCP/IP is designed to be fault tolerant - if your packet gets eaten it will be regenerated. If the routes change, the packet will be re-routed to get to its destination. In short, if you DDoS over a TCP/IP connection.. or generate packets which require the remote end to maintain state (which TCP needs!), you're going to kill the remote host regardless of how the packet got there.

    Ok, let's comment on this:
    First of all, best performance IS NOT EQUAL to fastest. It means that it does whatever it is supposed to do best, which can be fastest, but in my definition it always means 'in the most correct manner possible with as little overhead as possible', AND ONLY IN THAT ORDER!! Anyway, before you talk about performance, define what you mean by it.
    Assuming better performance means faster here, latency is only one measure for this, and a rather unreliable measure at that. If you want to look at this kind of performance you always use bandwidth + latency. One without the other makes no sense whatsoever for measuring performance. I can create very low latency on a router simply by ensuring that my measurement is very small (uses very little bandwidth) and that there is no other traffic on the router.
    Anyway, besides those performance issues, you make another mistake:
    First of all, most DDOS attacks count on a few things:
    - Filling up a server's state table with half-open requests for TCP connections.
    - Flooding the server's network connection with more traffic than it can handle.
    - Using up any CPU, memory and other resources that the server has.
    In order to fill the state table, connections are usually created and never used. As a result you can't count on TCP connections to carry on your DDoS, but that's not really important: the DNS is going to tell you where your target moved, so all you have to do is watch the DNS servers closely, and you can DDoS without being hindered by this plan at all. Simply put, the proposed plan does not work because it is very easy to circumvent; it takes less effort to circumvent it than it takes to implement the proposal.

    The best solution at this time is to implement packet filtering on the router. This has its own set of problems, and is hardly a panacea. And how can you tell whether a DDoS is really occurring? The slashdot effect comes to mind as a good example.

    -o Question authority! Yeah, says who? o-

  • These recent DOS attacks have been launched from third-party systems that were subverted. Most of these systems suffered from poor system management and disregard for basic security.

    This isn't rocket science! Basic, well-known measures would likely keep out most of these script kiddies. The problem is the owners and operators of these subverted systems aren't being held even partially responsible for the DOS attacks launched from their systems right under their noses.

    It is well known in international law that a sovereign nation state is responsible not only for its own actions but must also try to prevent third parties from launching attacks on neighbouring states from its territory. Standing by and doing little or nothing to prevent these attacks is tantamount to having participated in the aggression.

    I think the same principle should be applied to systems connected to the internet. The owners of such systems must meet a certain level of due diligence in ensuring that their systems are not subverted and used to launch attacks on others. If it can be shown that they were negligent in maintaining a minimum competent level of security, they should be held liable to some degree for the losses suffered by the attacked company or institution.

    I am certainly not in favour of solutions that could possibly divert even more of our economy into the unproductive and parasitic hands of the legal profession, but it wouldn't take too many well publicized examples to shake up the entire computer community and cause security to be taken a lot more seriously than it is today.

    Technical solutions are appealing to the lazy. Why change the way things are done when someone can just come up with a magic bullet to fix it? But these solutions are usually just a mirage and often cause more problems than they fix. We see that with low-tar cigarettes, airbags, fat and sugar substitutes, and an endless list of other techno-whizzy but wrong-headed solutions to largely self-inflicted problems.
  • by Anonymous Coward

    When are we gonna stop the Slashdot Effect?
  • by Anonymous Coward

    Changing routes is actually a bad way of putting it. What the author is suggesting is really just changing the address of the server. When there's too much traffic to the server's IP address, you have it pick another one. But because it's in the same network (e.g. 10.0.0.0/16), the change doesn't need to propagate to the whole internet. The only routing change that needs to occur is that the ISP routes the old address to null. I'm not in network operations, so I don't know if ISPs would have objections to something like that. (You would want to make sure, for example, that you didn't create a DoS against the ISP's router by filling its routing table with entries for the traffic you are discarding.)
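
    (For the curious: on a Cisco box that discard route is a one-liner, with 10.0.0.5 standing in for the abandoned server address:

        ip route 10.0.0.5 255.255.255.255 Null0

    It's one static route per retired address, so the table-bloat worry above is real if you cycle addresses quickly.)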

    I'm not sure what you mean when you say "if your packet gets eaten it will be regenerated". Yes, TCP is fault-tolerant. But it doesn't make packets out of thin air! When a sending host doesn't receive an acknowledgement for the data it sent, the host resends the packet. Presumably, however, folks originating DoS attacks aren't going to bother with this. Even if they did, the packets they resend will be directed to the old IP address. But since traffic to that address is being discarded, it's not an issue. [1]

    The separation of legitimate traffic from the attack traffic is done using the DNS. The idea is that if I'm trying to get to Yahoo, and the original address is disabled, then I'll just hit reload, check with DNS again, and get the new address. The assumption is that either the attacker only checks with DNS once (and hence his traffic is discarded), or that if he checks with DNS frequently, then you can see his many queries in the DNS logs. Given the hierarchical nature of DNS, I'm not sure I buy that.

    [1] Depending on what it is that you're trying to protect. Although you've freed up resources on the server, you may be saturating your ISP's links, because the attack traffic has to make it to your ISP before it is discarded.

  • "Most European technology just isn't worth our stealing," -- Former CIA chief James Woolsey, referring to Echelon
    Well, since Linux was given away it wasn't necessary to steal it.


  • Packetstorm [securify.com] had a contest [securify.com] for papers regarding defense against DDOS attacks. These papers covered the territory fairly well, I think.
  • I don't think the technique will work very well. It did get me thinking about what an attacker might try next, once (or if) this form of defence became common.

    There's a long-standing problem with the way the DNS service operates, which lends itself scarily well to a DDoS-style attack. The problem is that it should be possible (especially with a congested network) to masquerade as a site's DNS server and effectively own the DNS info (and change it to point to a compromised host) from that point on.

    Worse, the attack can be continued up to the site responsible for top-level domain information.

    The best defences seem to be, in order:

    * protect yourself from IP spoofing by configuring your firewall

    * hold off on honouring changes to DNS info (so you can check whether they're legit "manually").

    * and of course: keep installing those security patches!

    dec
  • Ah, but look at what I presented above. There is no need to spoof the originating IP when you are hitting it from thousands of nodes across the net (and you don't own any of them)! No one can shut down thousands of machines at the same time.

    Robert Morris Jr. did it.
  • I tried giving him a call. About the third sentence in I hung up. Of course, the sentence "the police are on their way over" might have made that happen quicker...
  • DDOS is not defeated through blocking the packets once they've started, it's defeated by making sure that people with a 24/7 server on the net have decent security. Most of the non-un*x DDOS tools could be caught with a virus scanner and the un*x ones should be avoided by keeping up with the latest security patches.

    Screwing with IP stuff is a serious case of locking the gate after the horse has bolted.

  • OK, if something like Trinoo or TFN is producing a DDOS, how can we tell the attacking boxen apart from the legitimate ones? If I'm a customer on the same subnet as a rooted client which may or may not report the same IP address through a firewall, I'm not coming back to your site if I get a message suggesting I am involved in a DDOS.
  • IPv4, in general, has already been extended like crazy... why not just work towards developing and migrating to IPv6 instead?
    Even if it was decided tomorrow to use IPv6 instead of IPv4, how long will it be before it propagates across the net?
    Most home users will need a new OS. Microsoft Research have a preliminary version of an IPv6 implementation [microsoft.com] available, but it's early days yet, and unlikely to end up in any consumer OS for a while.
    Additionally, a translation layer such as this [ietf.org] is not likely to help much as there will have to be legacy support, and this may well be exploitable.
  • ha.ha.ha

    just change your settings to ignore jonkatz.

    Some of us actually enjoy reading his insightful articles.


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • It's pretty obvious that you don't understand what a DDoS attack is all about. A firewall can do NOTHING about it. A DDoS attack makes your connection 100% busy (or your CPU, or you run out of memory...)

    A firewall can't do ANYTHING to kill it.


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • Assuming that the attacker isn't too smart, the attacking host would be discovered - if he queries the DNS every two seconds. All they would need to do would be to dig through the DNS server log to find out who queried the DNS server plenty of times. Then that host could be tracked down, and at least ONE of the servers that was used in the attack could be blacklisted. They could simply throw a 'deny' rule into their routers and block DNS lookups from the host in question...

    Also, your proposal says that the attacker's script would re-query a DNS server every few seconds -- something I don't think any of the current tools do. They do a one-time lookup against DNS, and after that, attack the associated IP address(es).
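
    The log-digging in the first paragraph is trivial to script, by the way. A minimal sketch, assuming a query log where the client's address is the first whitespace-separated field of each line (real BIND logs need slightly different parsing):

        from collections import Counter
        import sys

        THRESHOLD = 100   # queries per log window that we call "suspicious"

        def frequent_queriers(logfile):
            counts = Counter()
            with open(logfile) as f:
                for line in f:
                    fields = line.split()
                    if fields:
                        counts[fields[0]] += 1   # client address
            return [ip for ip, n in counts.items() if n >= THRESHOLD]

        if __name__ == "__main__":
            for ip in frequent_queriers(sys.argv[1]):
                print(ip)   # candidates for that 'deny' rule at the router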


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • The problem is the owners and operators of these subverted systems aren't being held even partially responsible for the DOS attacks launched from their systems right under their noses.

    And that, IMHO, is a good thing.

    I, for one, have had a machine cracked once. Several people I know have had the same happen to them. A friend of mine got his computer rooted less than a week ago. He played around with BIND, trying to learn to configure it a couple of weeks ago -- but forgot to kill the process after playing around with it. He also forgot to check for the latest security patches before starting to play around with it. Oh, and he doesn't read bugtraq every day, like you and me.

    And don't forget, under your proposal, if I found a major bug in some Windows service running on 95% of all dialup Windows clients worldwide .. I would be able to sue 95% of all dialup clients, due to their negligence in updating their systems.

    This obviously is wrong.

    or is it?

    Yes - it is. Because only the major corps, and a FEW interested individuals, would be able to provide internet access - if your proposal became gospel.


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • Look at the two things DDoS attacks target: Bandwidth and the remote host(s). Network bandwidth is becoming a non-issue (in the 5-10 year range), so ignore that for now.

    No. It's the bandwidth DDoSes that will continue to be the problem. As long as there are compromisable hosts and the possibility of IP spoofing, it will be "no problem" to exhaust a host's bandwidth..


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • You're saying that like the 80's was a bad time.

    A time without spam.

    A good time.

    Let's go back in time. :-)

    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • I do NOT think slow propagation of DNS will be the major problem. TTL = 0 would "solve" it, but then - as a lot of people scream - you would have a DoS attack against the DNS servers instead. Personally I think that argument is moot, as you can have several DNS servers, preferably on different backbones.

    The problem is that EVEN if you set TTL to 0, the attacker can discover this, and since we're talking about a distributed attack -- which may be updated to attack a new address pretty quickly -- we are still talking about a devastating attack. Perhaps one could reduce the attack from 100% to 30%, but it would still be devastating.


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • You're right, I didn't think clearly. I forgot that you could query for the IP address through most DNS servers - as they allow querying of web-wide addresses, from anywhere.

    My mistake. Most DNS servers are misconfigured - I forgot.

    In any case, we both agree that the essay mentioned doesn't solve anything. :)


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • IP spoofing is useless for cracking into a machine unless you do something very clever.
    With IP spoofing you never get an answer back (unless you sit on a router between the target and the spoofed host, but someone who "ownez" a router could have more fun than stupid DOS attacks).
    The usual tactic is going through a chain of already-cracked hosts where you can manipulate the logs.
    Spoofing could come into play when you send the command to the DDOS clients sitting on the cracked hosts. There you don't need an answer. But you could also simply use a cracked host.
    Perhaps spoofing can help to make the DDOS more effective by, e.g., using non-existent hosts or hosts on the same subnet as the spoofed addresses. E.g. you could ping www.example.com with large packets spoofed to come from www2.example.com, also hitting www2.example.com with the echo replies.

    The main use for spoofing in DDOS attacks lies in simulating "real-world traffic" and making it harder to track the sources. The victim is unable to stop your DDOS attack at its border routers if it cannot find a common characteristic in it. If you spoof with random IP addresses you eliminate a big common denominator.
  • That doesn't fly; do the statistics. Either your 1000 boxes create much higher traffic than the 100,000 real people hitting in the same time span, or you won't get my server into trouble.
    But if your 1000 do make enough "noise", it's not hard to separate out the worst offenders. Statistically, one of your boxes generates 100 times the traffic of a human user. So if I just single out the 100 most "active" IP addresses, chances are high that I'll never hit a real human.
    And I wouldn't give a fuck if I hit AltaVista's crawler, which just wanted to index my page at the moment of such a DDOS attack (the chances of such a coincidence are pretty low).
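
    The statistics amount to a handful of lines, assuming you can get at something like an access log as (source, bytes) pairs. A sketch:

        from collections import Counter

        def worst_offenders(requests, n=100):
            # requests: iterable of (source_ip, bytes_sent) pairs.
            # Rank sources by volume; if 1000 bots each generate ~100x a
            # human's traffic, the top of this list is essentially all bots.
            volume = Counter()
            for src, size in requests:
                volume[src] += size
            return [ip for ip, _ in volume.most_common(n)]

    Feed the result to the border router's drop list and you've done (b) below.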

    Ok, I should have the infrastructure to be able to
    (a) do the statistics and
    (b) drop packets from certain IP addresses.

    If I don't have that, I'll not be able to stop the attack in 30 minutes. But if I have, I'll stop the attack easily.

    As I said above, the spoofed source addresses are the real key to the "success" of a DDOS attack if the victim's infrastructure is mature.
  • Oh yeah, I forgot, the "Open" in open source and the "Free" in free software only apply to uber geeks. The internet is ours and we don't want companies and housewives and other "idiots" (to quote some other posts) on our precious internet. It's an exclusive club, after all, and if you can't boot into Linux and configure a firewall, you have no business using it.

    Grow up. If you're grown up, get a real job and get out of the university. If you have a real job and aren't in some ivory tower at a university, get a life.

    The web and the internet are for everyone. Period. The net and the web belong to no-one. Period. They were designed for the free flow of information - ANY INFORMATION (no-one cares whether you think it's useful or not; they find it useful). And no amount of legislation, DDoS or whining by the "internet Amish" (BTW, this is you) is going to make it go back to the 80's.

    If you don't want to see java applets or pretty pictures, turn them off in your browser...or use Lynx.

  • rely on having root access on bunches of machines also try to spoof the originating address of the command packets so that tracing the attack becomes a much more difficult proposition.

    Well, there is a big difference between making a DDOS impossible and making it easier to catch the people behind it.

    But even that aside, if you have root access on bunch of machines, it's trivial to route your command packets through some of these machines, arranging at the same time for them to forget that they ever saw anything resembling a command packet. If a couple of these machines are public servers, you do get very high security -- again, without using any spoofed packets.

    Kaa
  • better ideas and info can be found here:

    http://packetstorm.securify.com/distributed/ [securify.com]

    http://packetstorm.securify.com/papers/contest/ [securify.com]

    Make sure to check out the papers by Mixter [securify.com], RFP [securify.com], and Simple Nomad [securify.com].

  • The net would be far better off without all these idiots and the companies that pander to them.

    Unfortunately, this elitist, arrogant attitude is the wrong thing for the internet. The internet and the web are about the freedom to exchange ideas. To say whatever you want and to have it possibly be heard. Just because it's grown beyond its original intention doesn't make it bad or mean that we should go back. I think the web has become something greater than the sum of its parts.

    Yes, it is not 100% reliable, but realistically it is really only just starting to "grow up". We have come to expect all services to be like the phone company in terms of QoS; telephones have been around for 100 years, and that is a bit of time to get most of the kinks out.

    It makes me retch when I see people get that "the web should be all text" or that "The average user is too stupid and shouldn't use the web" idea. What we need is everyone to be able to get there, hell it helps pay my salary so I have a stake in this :).

    My point is simple: the web and the internet are about information and access to it. Yes, companies are commercializing it, but you can still get the info you want. Be discerning, be picky, but never assume that because someone doesn't meet your standards for some things they should not be allowed to use it. That is close-minded and has no place online.
  • >But, alas, that would require more political muscle than I have!

    We could always give our governor a call... he's got the muscle... though how much of it is political muscle is another story....

    Maybe we could start the new Jesse V. site - whomping-ass-in-MN.com ;-)
  • There is no need to spoof the originating IP when you are hitting it from thousands of nodes across the net (and you don't own any of them)!

    But there still is a need for IP spoofing...

    To crack the machines that are going to participate in a DDOS attack, the cracker must spoof his IP address to avoid being caught. Stopping IP spoofing at the router level would stop crackers from getting into the machines that they need to use for a DDOS attack. It wouldn't stop them from cracking machines altogether, but it would make all cracking in general more difficult.

  • You had me going for a while there. I actually thought you knew what you were talking about until you uttered this:

    The *best* solution, IMO, would be to use that Millenium computer center in DC to monitor backbone traffic in realtime and give those people access to the network to shut down these things dynamically and in real time.

    ummm...first, whose backbone are you suggesting the government monitor? (there is more than one backbone provider, after all) And this wildly assumes that ALL of an ISP's traffic actually traverses their backbone. But let's play along anyway...

    How about some quick math.

    OC192 = 4,478,850 MB/hr

    that's just ONE OC192! What? Don't believe in the magic of OC192 yet? ok.

    OC48 = 1,119,600 MB/hr

    Hey, that's better. Where are you going to put that data while you're analyzing it...and what are you going to analyze it with?
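
    (Checking that arithmetic is two lines of Python - SONET line rate over 8, times 3600:

        for name, mbps in [("OC-48", 2488.32), ("OC-192", 9953.28)]:
            print(name, round(mbps / 8 * 3600), "MB/hr")   # ~1,119,744 and ~4,478,976

    which matches the ballpark figures above.)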

    And what security professional in their right mind would ever choose to fight potential hacker activity by giving "someone else" the ability to make changes to their network???

    IMHFO the *BEST* solution to the problem is to get system administrators to lock down their stupid boxes or get them off the net so they can't be used in one of these attacks, but then I probably don't understand the issues as well as you do.
  • RE: your routing questions... yes, routers create their routing tables based on some algorithm (which depends on which protocol you are using) to find the best route to a host. In this case, there would be only one route to get to the host (assuming that the customer is single-homed, which the guy who wrote this proposal did), so no best-match decision has to be made. There is just one route to the customer's network. You can, however, manually change the routing to improve the situation. In the case of a simple DoS attack (not DDoS), say from 123.123.123.1, you can just route packets with that source to null0 (just a bit-bucket interface all routers have), so the router throws the packets away. In the case of a DDoS attack, that won't work since there are several sources.. so you can route packets that have a protocol equal to ICMP and a dest. equal to the customer's server to null0.

    The reason to do this is that access lists (packet filters) take up MUCH more router CPU than IP routes, so often, if you put up an access list during a huge DoS attack, it will kill the router, and you'll be no better off.

    //Phizzy
  • Yeah.. I had originally thought to use hostname instead of IP, as that would be pretty easy, but wasn't sure if the kind of packet-spoofing they would have to use would be able to handle hostname resolution.

    I think this guy is a prospective DDoser himself, because this 'solution' would make things worse rather than better.

    //Phizzy
  • Well, yes, but half of the point of these DDoS attacks is that it doesn't matter if you find out one or two or ten of the IPs that are attacking; most likely none of them are owned by or associated with the attacker (if they are any good). It is much more important to stop the attacks than to track down who is responsible for them.. if the attacks themselves are rendered harmless, there is no need to find out who is doing it.

    and you're right, I don't think any of the current tools do constant DNS lookups, but anyone who isn't just a 5kr|p7 k|dd|3 can change their script to dynamically do DNS, and then release it to the k|dd|35.

    //Phizzy
  • Even an hour's outage costs a lot of money at a big commercial site, such as airline reservations.


    Spoofing is pretty essential for the SYN-flood attack, which is really devastating when you haven't fixed your system to prevent it and you've got 1000 smurves pounding you with it.


    But even non-forged machines can send web requests 100 times faster than regular users - not only is there no think time required, the client can request very big files, and drop them when they arrive rather than displaying them. 1000 smurfs on 56kbps dialups are 56Mbps, bigger than a T3. 1000 smurfs on cable modems can be much bigger - their upstream bandwidth is limited (but requests are small), and their downstream bandwidth is enormous, if they're not all on the same segment - figure 5-10Mbps per segment, though there'll be a good bit of clustering of smurfs (and if you can get away with spoofing anywhere, it's probably cable modems.) 1000 smurfs at 100 T3-connected universities can suck down 4.5Gbps, which is still large in today's Internet.


    Transparent caching at ISPs helps a lot. They're installing it anyway, just to manage their own bandwidth, but in the process it means that smurfs repeatedly requesting big static pages get turned into cache hits and stop bothering the target and the net. On the other hand, requests for big uncacheable dynamic pages are a problem - especially if the pages are real CPU-burners to calculate. So attacks on search engines can be moderately nasty, as can slashdot-like conferencing systems and other things that try to be really dynamic.

  • First of all, it doesn't solve the problem, because you have to protect ALL your protocols from use as a DDOS target, not just the web. But if you wanted to build something web-like, what could you do?


    *casts don't map well to dynamic HTML, and don't map well to secure connections; you probably won't be able to fix those, so they're still a DDOS target.


    Broadcasts go to everybody. You simply can't broadcast everything on the web to everybody - that would be reinventing Usenet, very badly, and wouldn't scale, would have broadcast storms, and would simply be blocked outright.


    Multicasts go to people who ask to join a multicast group, which would need to be sparse-mode for the web to have a chance of scaling. (There are 2**28 valid Class D addresses, and more web pages than that, so you've also got a conservation problem...) This means that instead of sending a packet with a request to foo.bar.com for a web page, you send a request to foo.bar.com to find out what Class D address it broadcasts foo.bar.com/user42/bazz/page.html on, then send a request to some collection of routers to join you as a receiver of that Class D multicast group, and then send a request to foo.bar.com saying you're ready now and could they please multicast that page (and if you're lucky, related pages), and later, somehow decide you're done with that address and deregister yourself from the multicast group. If you did this, you'd need to re-invent the mechanisms for caching servers (which are what keep the web alive today), though that's certainly a doable thing, and you'd have to worry about how to prevent DDOS attacks in your new, more complex environment.
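
    Just to show how little machinery a receiver needs -- which matters for the abuse scenario below -- here is a minimal group join in Python; the group address and port are made up:

        import socket, struct

        GROUP, PORT = "239.1.2.3", 5000   # hypothetical Class D address for one page

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", PORT))
        # Ask the local router to graft us onto the group's multicast tree.
        mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        page_fragment, sender = sock.recvfrom(65535)  # copies now arrive unsolicited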


    For instance, suppose you own a smurf at BigCableModem.net, and everybody who requests home.netscape.com/index.html (the Net's most popular web page) gets put in group D.D.D.D. So you join the group, and start requesting that page as fast as you can. Either BigCableModem.net, or CachingCompany.Net, or worst-case Netscape.com, starts blasting away as fast as it can, and everybody still in that multicast group starts receiving copies of that page as fast as the routers can deliver them, until they deregister themselves from the group. Of course, you can absorb them much faster than the other group members can, because you've actually told your LAN card not to listen to those mcast addresses even though you've told the ISP's router you want them (or you've at least told your card to /dev/null them.) OBTW, you're signing up for as many popular sites' multicasts as you can, to flood as big a set of mcast trees as possible, as well as flooding your neighborhood cable system, your DSL provider's upstream feeds from their DSLAM, or their dial POP's feeds. It may take the smurves down faster than in some DDOS attacks, but if you've got any chance of forging IP addresses on your requests, you can delay the eventual Death Of Smurf. Could be ugly...

  • The next DDoS attacker needs to avoid the defenses against the previous attacks. An easy way around this problem is for the attacker to create a broadcast channel of some kind, which the smurves can use to communicate the latest attack information, including DNS. Suppose you own 1000 smurves. Instead of each one querying DNS for the target every second, each one queries every 1000 seconds, or 4000 seconds, with some protocol to spread out the queries, and broadcasts the result. Then each smurf receives up-to-date attack information, but no smurf queries often enough to be obvious, at least to an automatic attack detector. An easy improvement is not to broadcast the results of the query if they're the same as the previous result, or perhaps _any_ previous result.


    Lots of broadcast techniques are possible.
    IRC is one choice. Writing to some free-web-server web page is another. Building a server that (ab)uses a free web page service or web-based email account is another. IP Multicast is fun if you can do it.

  • Breaking your DNS caching is bad enough, and makes retrieving your pages slower and less reliable when you're not under attack, but changing your address also doesn't work well. Too many things cache DNS. Some DNS servers have minimum lifetimes on the values in their caches, so they won't respond to your attempt to escape the attack, and many browsers cache DNS results, so anybody who was using your site at the time of the attack also loses.
  • The few DDOSes I've seen that rely on having root access on bunches of machines also try to spoof the originating address of the command packets, so that tracing the attack becomes a much more difficult proposition. Now, it's true that your average script kiddie doesn't take precautions beyond what the script he downloaded provides, but the average script kiddie is also easily caught.
  • B) What's to stop the haxx0rs from writing a small script that polls the dns servers every 5..10..30 seconds and changes all of the DDos boxen to the new IP instantly

    Actually, they wouldn't have to do this. If you set the TTL for a DNS entry very low, the record as a whole expires from caches very quickly -- probably within a minute. With this being true, if the attacker uses the hostname rather than the IP, he won't have any problems: his internal cache will expire quickly, as will whatever servers he is using for resolution.

    This will result in a flooding of the target's DNS servers on top of the DDOS attack itself.

    Looking this over, this "solution" will never work.
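
    The loop being described really is trivial -- a minimal sketch, where the hostname and the polling interval are hypothetical and no existing tool is claimed to work this way:

        import socket, time

        TARGET = "www.example.com"        # hypothetical victim hostname
        last = None
        while True:
            addr = socket.gethostbyname(TARGET)  # fresh answer once the low TTL lapses
            if addr != last:
                print("target moved to", addr)   # a real tool would retarget here
                last = addr
            time.sleep(30)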
  • You are absolutely right.
    The numbers are too much in favour of the cracker/script kiddie. Since scanning for (temporary or more permanent) security holes can be automated, one vulnerable machine per thousand scanned (or even per ten thousand) provides soil enough for a DDoS attack. And one admin in ten thousand will be behind on the latest hazards, or have a valid reason to relax security (temporarily), or will simply make a mistake.
  • When the day of DDoS hit a while back, there were articles on this sort of DDoS attack. Essentially, this is NOT a DDoS; it's many very small, ineffective DoS attacks. In order to shut down a web site running on anything bigger than, say, a dialup connection, everyone who received the email would have to open it at the same time.
  • ...against a properly launched DDoS attack.

    And just what is a properly launched DDoS? One in which a multitude of machines make legitimate requests of the server so as to overwhelm its capabilities and deny legitimate users access.

    In other words, an induced slashdot effect.

    Or, to look at it another way, how would you filter a clever DDoS without filtering slashdot users? The two can be made identical.

    Please forgive my spelling. I'm an American, so English is only my second language.

  • These recent DOS attacks have been launched from third-party systems that were subverted. Most of these systems suffered from poor system management and disregard for basic security.

    This isn't rocket science! Basic, well-known measures would likely keep out most of these script kiddies. The problem is that the owners and operators of the subverted systems aren't being held even partially responsible for the DOS attacks launched from their systems right under their noses.

    Technical solutions are appealing to the lazy. Why change the way things are done when someone can just come up with a magic bullet to fix it? But these solutions are usually just a mirage that often causes more problems than it fixes. We see that with low-tar cigarettes, airbags, fat and sugar substitutes, and an endless list of other techno-whizzy but wrong-headed solutions to largely self-inflicted problems.

    I don't accept your entire list, but even if I did there is still an obvious difference here. Instead of doing it the easy way so that they won't have to change their behavior, in this case the companies are looking for an "easy way" so that they won't have to change everyone else's behavior.

    Sure, companies worried about DOS attacks would probably love it if every other company on earth got its act together. But doesn't it make more sense to pursue protections for themselves today rather than hoping that eventually everyone else will solve their problems for them?

    And this isn't a short term solution that precludes the ones you have suggested. There's a big difference between wanting an airbag because you think you don't have to wear your seatbelt or drive safely if you have one, and getting one because even with a seatbelt and safe driving it can give you that extra advantage if some idiot slams into you. Companies can try to reduce the severity of current DOS attacks AND work to make them less frequent in the future.

    -Kahuna Burger

  • Yes, that's what I also noticed. There are long delays before DNS servers will update their entries. This scheme works fine on a small site where there aren't thousands of copies of DNS entries cached on major ISPs around the world. It fails for any large site where the ISPs will continue to provide DNS entries which use the old IP addresses...for hours or days.

    That's why the proposal specifies the Time To Live value for the DNS entry as zero. This means that EVERY request for the domain name will cause the authoritative DNS server to be hit. That sounds a bit silly to me. If implemented as suggested, the DNS servers would fall down instead.

    In this plan the TTL is the limiting factor for switching over the server address. If it were more than about five minutes, you'd just end up DoS-ing yourself. One or two minutes would be better in terms of latency, but since people often take more than a couple of minutes to actually read a web page before going on to the next one, there would still be a lot of load on the DNS servers.

    As long as most of the "eyeballs" are querying their ISP's domain name servers, a TTL of a minute or two will reduce the DNS load... assuming the web page is really popular, like Yahoo or Ebay, or the ISP has a lot of subscribers, like AOL.
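
    A back-of-the-envelope illustration of that trade-off, with every number an assumption chosen only for scale:

        # With N caching resolvers in front of the eyeballs, a TTL of t seconds
        # caps authoritative-server load at roughly N/t queries per second;
        # at TTL=0, caching vanishes and load becomes one query per page view.
        resolvers = 50_000            # assumed ISP caches querying the zone
        page_views_per_sec = 10_000   # assumed traffic at a Yahoo-sized site

        for ttl in (300, 120, 60):
            print("TTL %3ds -> at most %6.0f queries/sec" % (ttl, resolvers / ttl))
        print("TTL   0s -> about %d queries/sec (no caching at all)" % page_views_per_sec)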

    But really, as long as too many border routers allow packets with forged source addresses to get out, this will continue to be a problem. Not that even that would stop DoS attacks entirely, because one could always forge a valid "inside" address, or one could just run an automated DDoS with unforged packets that look like normal web traffic.
  • We are currently experiencing a distributed denial of service attack. Please authenticate yourself as a human by clicking on not this link [slashdot.org], or this link [slashdot.org] either, but this link [slashdot.org], and not this one [slashdot.org].
  • The solution presented here only works if the number of offending computers is small.

    If, for instance, the DDOS attack were a virus which spread via email (thus affecting thousands of machines, as the "Happy '99" virus easily did), and it opened a connection to the target site, like www.yahoo.com, and simply hit it with large packets without any need to mask the original IP, the plan presented here would fail.

    How are you going to deny access to thousands of IPs, or identify the thousands of machines that are attacking your site, contact their owners, and get them to disinfect their machines?? It is infeasible. Thus, the target site would be easily screwed.
  • I assume from your elitist attitude about how the Internet should be that you've been around for a while. I've been on the internet for about six years, and before that I was using my C64 to connect to local BBSes, so I'm not a newbie by a long shot.

    I would bet that if the government made a proposal to censor "morally offensive material" (i.e. the CDA) on the web, you'd be all up in arms (as would I), yet here I see you saying that you believe the Web should be exactly the way you want it. You even go so far as to say that it is OK for idiots with basic computer skills to illegally crash corporate web sites.

    "All these contribute to is bandwidth probelms, and we need to go back to when they weren't part of the web."

    That's got to be one of the lamest excuses I've ever seen. Bandwidth problems? If you are referring to your own bandwidth, turn off your Java and Flash plug-ins, simple as that. If you are referring to the internet in general, you have a very weak point there also: the companies that run Flash and Java apps purchase the amount of bandwidth they need. The internet would simply have less total bandwidth (with no real change in free bandwidth, obviously) if the companies didn't run those things.

    I think your real problem is that you got on the internet early and now you're upset that you are no longer one of the elite few. You just want the computer-illiterates off the net and you can only do this by removing the corporations.

    My final point is this: IF YOU DON'T LIKE CORPORATE WEB SITES YOU DON'T NEED TO VISIT THEM!
  • IPv4, in general, has already been extended like crazy... why not just work towards developing and migrating to IPv6 instead?
  • the internet is whatever you want it to be. if you want it to be your shopping mall, then so be it. it can also be your newspaper, your television, your comic book, your bulletin board, or your game room for playing Quake II. the internet is a living, breathing thing.

    agreed. the problem is that with the overpowering interest of big business comes government intervention. it's no secret that the corporate world turns the dials of the US legislation. in effect, i am afraid that those who wish the net to be their shopping mall are keeping it from being a living, breathing thing.

    DDOS attacks certainly aren't the answer. it'd be nice to have better top-level domain naming, instead of everyone being .com. then perhaps certain regulation could be placed on those wishing to be a "dot com." i'm getting off-topic here...
  • by Anonymous Coward on Monday April 03, 2000 @07:29AM (#1154857)
    Look at the two things DDoS attacks target: Bandwidth and the remote host(s). Network bandwidth is becoming a non-issue (in the 5-10 year range), so ignore that for now.

    For the remote hosts, we need protocols that do not allocate resources unless they are absolutely necessary. Look at upcoming protocols like SCTP [ietf.org]. The protocol mandates that the initial connection sequence be stateless on the server side. So at levels below the application, DDoS attacks become much, much harder. This is essentially the SYN cookie hack, but made official.
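
    The stateless-initiation idea in miniature: the server allocates nothing until the client echoes back a cookie only the server could have minted. This is a sketch of the SYN-cookie trick, not SCTP's actual wire format; the key and the time buckets are illustrative:

        import hmac, hashlib, time

        KEY = b"server-secret"                     # rotated periodically in practice

        def mint_cookie(client_addr: str) -> bytes:
            bucket = int(time.time()) // 60        # coarse timestamp, 1-minute buckets
            return hmac.new(KEY, ("%s:%d" % (client_addr, bucket)).encode(),
                            hashlib.sha256).digest()

        def verify_cookie(client_addr: str, cookie: bytes) -> bool:
            now = int(time.time()) // 60
            return any(hmac.compare_digest(
                           hmac.new(KEY, ("%s:%d" % (client_addr, b)).encode(),
                                    hashlib.sha256).digest(), cookie)
                       for b in (now, now - 1))    # accept current or previous bucket

        c = mint_cookie("203.0.113.7")             # reply to INIT; allocate no state
        assert verify_cookie("203.0.113.7", c)     # only now create the association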

    So what about the application level? Well, applications need to be written to allocate state only when absolutely necessary. This doesn't necessarily imply pushing all state to the client side, however. Mainframe folks have been doing some of this for a long time. It'd be interesting to see just how much carries over to a networked system.

    And NI / bus bandwidth on the receiving host? This one's a cool problem. How much processing can be done in the NI to reduce host bus traffic? And how can one reserve resources in the NI to statistically guarantee that proper sessions work during a bombardment with bogus sessions? (Extra credit: How does one move some of the app-level down to the NI [washington.edu] to help? Or out to the routers [sri.com]?)

    These are the interesting areas for server-side DoS defenses, not DNS and router games. Then things like CIDF [isi.edu] and/or the idwg work [ietf.org] for detecting and squelching DDoS attacks... Imagine if every Gnutella, SETI@home, and distributed.net client and server also helped with DDoS detection... Much more interesting and practical than DNS and router tricks.

    And then there's the boost in performance SETI and distributed.net would get from the new IMPS [isi.edu] protocol...

  • by Signal 11 ( 7608 ) on Monday April 03, 2000 @06:39AM (#1154858)
    I forgot to add a small footnote about the use of a 'stub' network. The underlying problem with a DDoS is that it uses up all of a necessary resource, such as CPU, memory, HDD, or bandwidth. Using up the entirety of any of the above resources will make the server unresponsive. This guy's example centers around diverting 'DDoS' traffic to a dummy-net while the 'regular' traffic gets through. Unfortunately the paper makes one monumental oversight - how do you separate the DDoS from the real traffic? Sure, it's pretty easy if they're all ping-flooding you, but what if you have 8000 hosts all running webcrawlers through, say, www.yahoo.com? Try separating the webcrawler traffic from your user traffic. You can't. My thought on marginalizing such a measure would be to employ traffic analysis - something crypto nuts are familiar with - find the machines that deviate from the normal traffic flow through that port by more than, say, 69%, and then cut them off at the kneecaps (a toy sketch of the idea follows). This only results in the l335 h4x0r needing to find more hosts, but it's better than nothing. However, the problem with my approach is still that it requires additional resources to implement (ie, bigger CPUs in the routers, and more memory to track such things). In short, DDoS won't be solved anytime soon.. but we can marginalize its effects through packet filtering and by taking proactive measures on the backbone to monitor these types of things.
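
    A toy version of that traffic-analysis idea: count requests per source over a window and flag sources far above the median. The threshold and the sample window are made up, and real detectors are far more careful:

        from collections import Counter
        from statistics import median

        def flag_outliers(sources, factor=3.0):
            counts = Counter(sources)
            typical = median(counts.values())   # median resists the outlier itself
            return [ip for ip, n in counts.items() if n > factor * typical]

        window = ["10.0.0.1"] * 5 + ["10.0.0.2"] * 4 + ["10.66.6.6"] * 500
        print(flag_outliers(window))            # -> ['10.66.6.6']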

    The *best* solution, IMO, would be to use that Millennium computer center in DC to monitor backbone traffic in realtime and give those people access to the network to shut these things down dynamically and in real time. But, alas, that would require more political muscle than I have!

  • by Kaa ( 21510 ) on Monday April 03, 2000 @06:51AM (#1154859) Homepage
    Once address spoofing is eliminated, all DDOSes that I know of will be eliminated.

    You don't know of many DDOSes, do you?

    First of all, non-smurf-type attacks are not going to be affected. Let's say I have control over 10,000 machines (worm, virus, whatever). If I tell all of them to start demanding something big (preferably a fat dynamic page) from a web server a couple of times a second, that web server will be dead in very short order. No spoofing anywhere here.

    Second, even smurf-type attacks use the local subnet broadcast address, and in many cases the spoofed ICMP packet will never go through a router (it was generated on the inside to begin with).

    So, yes, rejecting spoofed packets will help. No, it will not stop DDOS attacks completely.

    Kaa
  • I haven't read through the nitty gritty of the linked article here yet, but the March edition of Sysadmin magazine had an article, "Router-Based Network Defense" by Gilbert Held: "Held discusses the use of router access lists (using Cisco routers as an example) to minimize certain types of network attacks." It's not online at www.sysadminmag.com yet, but it explores some of the basic concepts involved in configuring Cisco routers to stop such attacks. Worth the read if you are interested. You can still get it at your local newsstand.
  • by Signal 11 ( 7608 ) on Monday April 03, 2000 @06:29AM (#1154861)
    I'll admit up front that either I'm ignorant, or the professor is. I'll let you be the judge.

    My understanding of routers is that for best performance you want the 'best' routes. What constitutes best is determined by the algorithm - generally speaking, it's the route with the lowest cost, however the protocol measures that (OSPF, for example, uses a bandwidth-based cost metric). So by changing the routes you'd be making things less optimal. The other problem is convergence, or how quickly routers adapt to the new routes. Routers have special protocols to make sure that when a route goes down (or a new one comes up) the change propagates through the router 'network' and each router updates its routing table to keep track of which packets go where. Any change in routing on a network requires a finite period of time to propagate. Changing routes too often can severely impact performance - router loops come to mind as one possible Bad Thing. So, short of redesigning all existing routing protocols or creating a new one which very rapidly updates all routers on a large network, I don't see changing routes as a solution - the cure is worse than the disease.

    Lastly, even assuming all these issues were worked out, TCP/IP is designed to be fault tolerant - if your packet gets eaten, it will be regenerated. If the routes change, the packet will be re-routed to get to its destination. In short, if you DDoS over a TCP/IP connection.. or generate packets which require the remote end to maintain state (which TCP needs!), you're going to kill the remote host regardless of how the packets got there.

    The best solution at this time is to implement packet filtering on the router. This has its own set of problems, and is hardly a panacea. And how can you tell whether a DDoS is really occurring? The slashdot effect comes to mind as a good example.

  • by arcade ( 16638 ) on Monday April 03, 2000 @07:44AM (#1154862) Homepage
    Sorry, this won't work.

    I've already seen some criticism of the approach here on slashdot, and I see several points myself. My points may not be the best against it - but here they are.

    First, yes, this proposal will absolutely make it a tad more difficult to attack the victim. But not difficult enough. Increasing the difficulty only increases the skill level needed by the cracker, and is security through obscurity.

    Say I want to attack yahoo.com. The first thing I do is scan a couple of million IP addresses for vulnerable hosts. I crack into the 0.1% (probably more..) that are vulnerable, which gives me, say, 1000 compromised hosts to play around with. Heck, let's say it's only 0.01% that are vulnerable. That still leaves me with 100 machines that I've got full access to. (That's a very, VERY modest number).

    Ok, now I do an nslookup against my target. Ah, nice IP. I let, say, 10 of the machines I've rooted start attacking the victim, making it switch to the new IPs.. I do a new nslookup.. ah, nice, new IP/new network. Then I initiate a new attack on this IP/network. Ah, good, no more bandwidth there either. New nslookup.. ah! A third IP! New attack. Maybe a fourth or fifth IP.. no problem, attack them and kill 'em.

    If the attacker is reasonably smart, he does a lot of bouncing via Wingates, making it impossible to track him down. One or two bounces, and he's pretty safe. We can forget about tracking him down. It's irrelevant whether it's possible -- the DDoS has already been executed. It could be done as a terrorist act, by someone who isn't located in a 'specific' place. Therefore depending on tracking down the culprit isn't good enough.

    OK, say the customer has enough bandwidth to sustain a divided attack -- that it takes all 100 hosts to kill its bandwidth. Oh well, then the attacker attacks with every single host. The victim goes down for 5 seconds and switches to a new IP; the new route takes [unknown time, I guess a couple of minutes, I don't know that stuff] to spread out netwide. The attacker notices this after, say, 1 minute, and attacks the victim's new IP -- full force. The victim switches back.

    This would still disrupt traffic to the site. If a site is down for about 30% of its requests, it is down for too many people. 30% of the people accessing the page will get an error message about the host being unreachable. That's unacceptable. The rest will find the service disrupted after getting the front page, having a 70% chance (probably less) of getting to the next page -- due to the new IP address, DNS lookup, and so forth. Remember, it takes time for DNS to propagate too. Far too long.

    So, the suggested approach doesn't work. It may, perhaps, slow down the attack, or STOP the unskilled attacker -- but it wouldn't fool, for example, me. Now, luckily I'm not a smurfpuppy.. but if it won't fool me, then my guess is it won't fool the dedicated attacker.

    However, there is a way to prevent, or at least slow down / make it easier to track, DoS attacks (not DDoS.. or it would take some more time). If ISPs had router software that made it possible to grep the packet stream for certain packets (ICMP, for example), and gave other ISPs access to this, then it would be possible to speed up the tracking. This, however, has severe privacy issues, and the load would probably be far too much for the router to handle. I'm not skilled enough in router-related things to know whether it would be feasible (load-wise), but I think it could be a possible solution if implemented in all core routers, possibly with limited information (only which router the packet stream is coming from).


    --
    "Rune Kristian Viken" - arcade@kvine-nospam.sdal.com - arcade@efnet
  • Fernando seems to have a rather limited understanding of the internet, routing, and DNS, and a little knowledge is a dangerous thing. But he has taken the right step by publishing it and asking the internet community to improve it.

    Here are a few points that show this proposal is too simple to actually defeat Multi Sourced Denial of Service attacks, and that it puts an even greater strain on the internet. I also don't see any difference between an MSDoS and the slashdot effect.

    DNS TTL=0
    This defeats the purpose of caching, and would require all resolvers to pick up the A record from the authoritative DNS server of example.com for every new connection to the web site. Since no other DNS server on the internet will cache a TTL=0 record, example.com's DNS machine had better be very beefy and have a huge pipe to the internet to handle the requests. Also, every client's resolver will cache the IP address for the duration of the connection, even with TTL=0, so if the route changes, the connection breaks.
    Imagine if all customers of eBay or CNN had to reconnect every time some script-kiddy triggered the MSDoS protection mechanism. Meta DoS!

    Don't tweak with the DNS system, it works pretty well as it is (ignoring TLD politics)

    The ISP may also stop publishing the route to 10.0.0. This probably has a cost on BGP disaggregation and routing updates
    No, BGP route damping will kill this for all but route updates from adjacent ASes. It also means that every ISP has to keep two ranges of IPv4 addresses to drop one and switch to the other. Route aggregation will make this effect almost negligible more than 2 ASes away.

    The internet runs on BGP, because it hides all the complexity of the internet from routers. So route flapping gets killed before it can be seen by neighbors.

    For this technique to be effective, the ISP total bandwidth must be several times the bandwidth it sells to its customer
    ROTFL! This kills the proposal dead. Just try telling an ISP to keep their upstream bandwidth at several times what they re-sell to their customers. No, DoS attacks happen pretty rarely given the amount of use on the internet, so it's better and cheaper to let the FBI go out and bust a few script kiddies every once in a while.

    example.com should change its network structure to
    Here the stub network is coming off of the ISP's main router, not example.com. So unless example has a good buddy relationship with its ISP to change router configurations on the fly, this won't work. I don't trust more than 2% of my customers to have the knowledge to set their own config on my kit. If they want a change, they come into my office and we sit down together and hash it out on test machines before committing anything. At the most, I might accept limited routing updates from them, but they would never be propagated into my BGP tables.

    Perhaps you could show the stub coming off the border network. Cisco routers have a null device to drop packets into rather than wasting extra time trying to figure output routing. This is where the route to etoys.com used to go before they got smart :-)

    Its identity can be hidden by making it unresponsive to ICMP
    This is done in many places, it still shows up as * * * in traceroutes, and the addresses can be pretty easy to figure out. Routers not implementing ICMP break the fundamentals of IP routing, which is a no-no but accepted as a deterrent to the stupidest skiddies on the net. Anyone with a little knowledge can figure out several ways to stealth ping a network device or to query adjacent devices. This has nothing to do with shunting MSDoS attacks.

    Some slightly better solutions, generally recommended but not always followed

    Anti-spoofing
    The key characteristic of skiddy attacks is hiding the origin of the MSDoS attack. RFC 2267 recommends that access points be secured against source address spoofing.

    Really, this level of checking should be done at every access router. But that's another layer of complex commands that ISP net admins sometimes don't support. Convincing them to do it will require an ISP to be held responsible in court (and pay damages) for not properly configuring a router behind a modem bank, thereby allowing a skiddy to run a DoS from one of their dial-in lines. I've already consulted for lawyers who were looking into doing just that, but they realised most ISPs and universities are so low-profit they couldn't make a ton of money from the case.

    Customer and university routers should be checking anti-spoofing in both directions, for security and to make DoS from your own networks less likely. Most places only do anti-spoofing on packets coming from the internet, but leave the outgoing packets unfiltered.

    Even BGP border routers should verify the source address of a packet actually belongs in their domain. This can be done for stub BGP regions but it can't reasonably be implemented at tier 1 and 2 level with transit BGP regions.
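
    The decision itself is one line of logic -- a sketch of the RFC 2267 check, with example prefixes standing in for an access network's actual allocations:

        from ipaddress import ip_address, ip_network

        # Prefixes this access network actually owns (examples).
        LOCAL = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

        def egress_permits(src: str) -> bool:
            """A packet leaving us must carry one of our own source addresses."""
            return any(ip_address(src) in net for net in LOCAL)

        print(egress_permits("198.51.100.42"))  # True  - legitimate customer
        print(egress_permits("10.9.8.7"))       # False - spoofed; drop at the border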

    If it were truly an MSDoS attack from rooted hosts all over the internet, then the attack could just use the real IP addresses, and routing happens normally. This is what has been seen lately with TFN and Trinoo, and it makes detection that much harder.

    Anti-smurf
    Pretty much no router should allow directed broadcasts, which are what turn one packet into a smurf flood, but many still do :-(

    Route control and choke routers
    Any large customer along the lines of a Yahoo or CNN should be able to negotiate the purchase of an upstream choke router and an authenticated routing update mechanism for it. This would allow them to switch floods from their saturated link to a null device for a minute. They could then start to analyze what had just hit them, and create some quick-fix access lists for a special choke router.

    Even smaller customers with some technical knowhow could be allowed to update their upstream router to shunt traffic to /dev/null for a while.

    The latest attacks don't rely on spoofed source addresses, SYN floods, or other easily filterable packets; the upside is that, since the sources are real, it is at least possible to create filters matching the rooted hosts' IP addresses.

    Routing updates would go through a separate link dedicated to management and routing, and wouldn't carry regular traffic. If example.com is big enough to attract an MSDoS, they can afford an extra 64k link to their network manager's office.

    There is a lot of work going on behind the scenes to make the internet a little more secure so our quake match^H^H^H^H^H^H^H^H^He-commerce isn't interrupted.

    the AC

  • by Phizzy ( 56929 ) on Monday April 03, 2000 @06:40AM (#1154864)
    alright.. let's begin with my problems with this.

    A) You don't need the stub network. You can route 10.0.0.x to the bit bucket on the ISP's aggregate router, which would be better for the router's CPU utilization anyway.
    B) What's to stop the haxx0rs from writing a small script that polls the dns servers every 5..10..30 seconds and changes all of the DDoS boxen to the new IP instantly?
    C) You don't have to call your ISP to reroute traffic; you can do that in a BGP advertisement, which customers control.
    D) Besides, if you're going to call your ISP, there are MUCH better ways to relieve a DDoS attack. Cisco is releasing/has released a slew of new tools, such as reverse-path verification (to deny spoofed return-path IPs automatically) and CAR (Committed Access Rate, which lets you rate-limit certain TCP protocols much more easily, and drop flooding packets without overutilizing the router -- at heart it's just rate limiting, sketched below). Note that you need a big scary Cisco box with 128 meg of RAM on each VIP slot to take care of this, but any real ISP should have that.

    The point is that this should be taken care of at the ISP level, not at the customer level, as ISPs have the experience and infrastructure to deal with it. These little customer tricks for dealing with DDoS are easily defeated.
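
    The CAR idea above is a token bucket at heart -- a minimal sketch, where the rates and burst size are illustrative rather than Cisco's actual semantics:

        import time

        class TokenBucket:
            def __init__(self, rate_bps: float, burst_bits: float):
                self.rate, self.capacity = rate_bps, burst_bits
                self.tokens, self.stamp = burst_bits, time.monotonic()

            def allow(self, packet_bits: int) -> bool:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.stamp) * self.rate)
                self.stamp = now
                if packet_bits <= self.tokens:
                    self.tokens -= packet_bits
                    return True              # within the committed rate: forward
                return False                 # over the rate: drop (or mark)

        icmp = TokenBucket(rate_bps=128_000, burst_bits=256_000)
        print(icmp.allow(12_000))            # True while under the limit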

    //Phizzy
  • by Greyfox ( 87712 ) on Monday April 03, 2000 @06:34AM (#1154865) Homepage Journal
    All you have to do is get every ISP in the world to filter packets originating in their network which carry non-local source addresses.

    That's not easy for you or me, but a company like Cisco should be able to set their routers to do this by default. They could probably even cause current routers to start this behavior with a microcode patch.

    Once address spoofing is eliminated, all DDOSes that I know of will be eliminated.
