The Internet

Techie Story On TCP Stacks 76

a9db0 writes: "Ars Technica is running an article on TCP stack research done by Stefan Savage at the University of Washington. Stefan presented one interesting tool and a couple of ingenious hacks. The tool measures response time between nodes more accurately, without requiring additional software on the server. The hacks are TCP modifications, one of which could help defeat DDoS attacks."
  • Did you even read the article? And if you did, have you any clue about TCP/IP? This guy just pulled a neat trick with TCP, no need for ICMP. Do you know how many sites are blocking a lot of their ICMP traffic these days? Your statement above makes no sense. If you don't have something useful to say, don't talk.

  • OptAck has been around for a while, but no commercial IP stack is going to implement it. It can and does break TCP transfers, and lusers will just complain the network is broken.

    I did like the graph of how a flood of TCP packets shows up at the same time, essentially dumping all 60MB of IE across a fat pipe all at once. That works when you are only a few hops away from the server (UoW to Redmond, line of sight), but it falls apart if you have 18-20 routers in between with wildly fluctuating available bandwidth.

    The problem is this can be turned into a very effective DoS tool. By using OptAck you can get the server to flood the outgoing pipe.

  • Yes, sorry. I found a powerpoint file from one of Kung's lectures on the web, and he uses this as an example. It is not stated whether or not this is congruent with reality - so I took it for truth without further evidence.

  • You get what you pay for.

    actually, no. DSL is the same price as cable out here, it's just that the phone company can't (won't) deploy to my neighborhood. As a result, I'm left with no choice but oversubscribed cable. It costs the same, but I'm getting less for my money. I'm NOT happy.

  • Heh. Anyone here noticed the feedback loop we've all fallen into? People dislike things that Slashdot is doing, so they begin trashing it with 'trolls' and the like. This causes Slashdot to begin implementing rather fascist techniques to attempt to cut down on the trolls. This causes more people to get fed up with things that Slashdot is doing, causing more people to behave in an immature manner. This causes Slashdot and the high-karma gang to implement MORE draconian measures to try to prevent the "growing spread of corruption". This causes MORE people to become offended, some rightfully so [any legal system will eventually hit someone innocent], causing MORE fed-up people to start behaving childishly. And no one's going to stop, because after all, "Hey, I didn't start it!" What are we to do?
  • I'm a moderator for the third time in a month. This sucks!

    Well, you could always moderate another thread you don't care about to burn your points up, so that posting here doesn't undo your moderations.

    That's what I do, I look for a Katz article and moderate it. And since I'm so unbiased about Jon ... well, what can one do ...

  • by JoeBuck ( 7947 ) on Wednesday May 10, 2000 @01:04PM (#1861114) Homepage

    Well, duh.

    Because this researcher is telling you exactly what he is doing, so you can implement it in a compatible way, while MS is not telling how to build a modified Kerberos that is compatible with their scheme.

  • by pp ( 4753 )
    You could just add detectors for stuff like this to your network, and start deprioritizing packets from offenders. It's not exactly trivial, but it's possible.

    TCP certainly isn't perfect, but it does a pretty good job of using bandwidth efficiently and fairly, even when the link gets congested. If everyone starts breaking the rules, it won't really work for anyone.

    Taking the highway system example, people can get away with driving 160km/h, and if there's not much traffic it's reasonably safe too. But if everyone does it at the same time, you start getting _BIG_ problems.
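    The share-or-starve behaviour described above can be illustrated with a toy simulation (not from the article; the link model and all numbers are invented): two flows that halve their rate on congestion converge to a fair share of nothing once a third flow, which ignores congestion signals, grabs the whole link.

```python
# Toy model of TCP-style AIMD sharing on a single link. Flows 0 and 1
# halve their rate when the link is congested (multiplicative decrease)
# and add 1 unit otherwise (additive increase). Flow 2 is "greedy" and
# always increases, modelling a stack with congestion control disabled.

LINK_CAPACITY = 100.0

def step(rates, greedy):
    total = sum(rates)
    congested = total > LINK_CAPACITY
    return [
        r / 2.0 if congested and i not in greedy else r + 1.0
        for i, r in enumerate(rates)
    ]

rates = [10.0, 50.0, 10.0]
for _ in range(200):
    rates = step(rates, greedy={2})

# The greedy flow ends up with everything; the compliant flows are starved.
print([round(r, 1) for r in rates])
```

    The compliant flows back off every time the greedy flow congests the link, so the rule-breaker is rewarded: exactly the "everyone driving 160km/h" problem.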
  • It's in the article? Whoops. I had heard about the guy before, hence the links. I didn't check which links the article supplied, sorry.

    Gonzo Granzeau
  • An alternative approach to Stefan's that doesn't involve shoehorning information into IP data packets is Steve Bellovin's ITRACE []. I think this is more feasible in practice, and it seems to be gaining some momentum in the IETF.

    Basically roughly every 20,000 packets, a router chooses a packet at random and sends an ICMP traceback message to the packet's destination listing the router's address and the previous and next hop that the data packet took. At the receiver, if you're being seriously flooded, you start monitoring the traceback packets and when you get enough you can piece together the paths back to the attackers.

    It won't stop the attack itself, but will at least help in discovering the cracked hosts being used to launch the attack.
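    The sampled-traceback mechanism described above is easy to sketch in code. This is a toy simulation only (router names, message format, and the inflated sampling rate are invented; the real proposal emits an ICMP message roughly once per 20,000 packets):

```python
import random

# Each router on the path occasionally emits a "traceback" message naming
# its previous hop, itself, and its next hop. A flooded receiver collects
# enough of these to chain the edges back toward the attack's source.

PATH = ["attacker", "r1", "r2", "r3", "victim"]
SAMPLE_P = 1 / 100   # inflated from ~1/20000 so the demo needs few packets

def forward_flood(n_packets, rng):
    messages = []
    for _ in range(n_packets):
        for i in range(1, len(PATH) - 1):        # only routers sample
            if rng.random() < SAMPLE_P:
                messages.append((PATH[i - 1], PATH[i], PATH[i + 1]))
    return messages

def reconstruct(messages):
    # Index edges by their reported next hop, then walk back from the victim.
    by_next = {m[2]: m for m in messages}
    path, hop = ["victim"], "victim"
    while hop in by_next:
        hop = by_next[hop][1]
        path.append(hop)
    return list(reversed(path))

rng = random.Random(42)
msgs = forward_flood(2000, rng)
print(reconstruct(msgs))
```

    Note that the trace bottoms out at the first router, not the attacking host itself, which fits the point that this discovers the flooding machines rather than the perpetrator.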


  • Let's not forget that Microsoft BROKE Kerberos. This guy is proposing a change that allows existing TCP to continue working, unaffected.
  • The method described to me was based on timing the period between outgoing packets: it did not depend upon seeing the ACK packets. Traffic analysis of this kind was made necessary by the MBONE multicast protocol, which was built on top of UDP (which does not do the same kind of binary backoff that TCP does): if there are widely deployed protocols that do not respect binary backoff, then the network really would grind to a halt, and so some external method of `niceness checking' is required.

    Cisco makes routers that do the necessary tests to spot abuse. It's worth noting that the consequence of being blacklisted is not having your service blocked altogether, only that intermediate routers will have to route around the routers that drop your packets: it will spoil your performance but not interrupt it. Remember that IP makes no assumptions about packets actually arriving. Yes, it can be abused: but we knew that anyway, and it's much harder to do than a DDoS.

    Proof? You could ask Cisco, I suppose. If you're willing to put up with less than proof, look at all the IETF discussions about the MBONE protocol. I'll have a look around and see if I find any online articles about testing for backoff.

  • At least on my system, the maximum TCP window size can be controlled on a per-route basis. You could probably dynamically determine an appropriate max window size from RTT information. The idea is that an optimistic-ACKing client operates on the assumption that the window can grow without limit, so one imposes a relatively large but finite limit on the server side. At some point the client will then ACK data that hasn't been sent, because it's assuming the server has increased the window when in fact it has hit its limit. That should create a permanent hole in the TCP data stream, causing interesting times for the client machine.

    Assumption: any link has a capacity determined by transfer speed and latency. Rough estimate is that the window will naturally settle at about 2x capacity, give or take. Correct or not?
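    A server-side check along the lines sketched above might look like this (a minimal sketch; the class and field names are invented, and a real stack would keep this state per-connection inside the kernel):

```python
# If the server tracks the highest sequence number it has actually sent,
# any cumulative ACK beyond that point can only come from a client that
# is optimistically ACKing data it never received.

class SendTracker:
    def __init__(self, max_window=65535):
        self.max_window = max_window   # finite server-imposed window cap
        self.highest_sent = 0          # highest byte offset transmitted

    def record_send(self, seq, length):
        self.highest_sent = max(self.highest_sent, seq + length)

    def ack_is_optimistic(self, ack):
        # A legitimate cumulative ACK can never exceed what was sent.
        return ack > self.highest_sent

t = SendTracker()
t.record_send(seq=0, length=1460)
t.record_send(seq=1460, length=1460)
print(t.ack_is_optimistic(2920))   # False: everything ACKed was sent
print(t.ack_is_optimistic(8760))   # True: ACKs data beyond highest_sent
```

    The finite window cap is what forces the cheating client's hand: eventually its predicted ACKs run past `highest_sent` and give it away.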

  • They do, however, share the same connection to Abilene [].


  • by anticypher ( 48312 ) <> on Wednesday May 10, 2000 @10:52AM (#1861122) Homepage
    OptAck has been around for a while, but no commercial IP stack is going to implement it. It can and does break TCP transfers, and lusers will just complain the network is broken.

    I did like the graph of how a flood of TCP packets shows up at the same time, essentially dumping all 60MB of IE across a fat pipe all at once. That works when you are only a few hops away from the server (UoW to Redmond, line of sight), but it falls apart if you have 18-20 routers in between with wildly fluctuating available bandwidth.

    Time to hack this into the Linux net3 stack as a compile-time switch. ENABLE_OPTIM_TCPACK_FLOOD=true and then get some hacked utilities taking advantage of it. Could be good for cable/DSL/OC3 people, but won't do much for poor modem users. A carefully controlled predictive TCP ACK can speed up modem connections as well for big transfers. Another fun research project to take up my precious time AAAAUUUUGGGGGHHHHH!!!! :-) :-)

    the AC
  • by artdodge ( 9053 ) on Wednesday May 10, 2000 @10:54AM (#1861123) Homepage

    #1: Red Herring. We're talking about protocol-level enhancements that make attacks like TCP-based DDoS fundamentally difficult to perform. This is a totally different subject from "making sure all programs on my workstation are free of buffer overflows." It is also true that the types of solutions needed to correctly protect systems are usually fairly intrusive and systemic. As the article says (you did read the whole thing before posting, didn't you?),

    These changes, while a very complete solution to the problem, probably face the same fate as almost any proposed TCP change -- they'll be ignored. The installed software base is too big and too hard to change.

    #2: University IT departments treat researchers pretty uniformly as "clueless", and assume that their own employees are clueful. The result? Clueful hacker-researchers with well-maintained machines are all locked up behind firewalls and active monitoring unnecessarily, while wide-open boxen sit on the public subnets waiting for j0e h4x0r to set up a DDoS outpost.

  • by Anonymous Coward
    Very simple. It's not their job.

    Researchers are paid to do research. Not system administration.

    System administrators are paid to sysadmin. Not to do research.

    I am a PhD student at a leading university, and know enough about networking to make some of our systems more secure than they are at present. However, sysadmins are a strange bunch -- they jealously protect their turf; they are NOT going to give root access to a mere 'researcher' like me, so that I can secure their systems for them. (Yeah, since their systems are so insecure, I probably could crack 'em and get root, and then fix it, but why bother? They'd never appreciate the 'help' -- they'd probably kill my user account for 'unauthorized activities' once I told them about it.) Besides, it's just not worth my time to do their job for them.

    It's worse than that, though. Public universities just cannot keep up with the IT salaries. When you're paying a history prof with a PhD $40k, it's really hard to convince the regents/deans to fork over > $100k for a truly qualified sysadmin. So universities only pay rock-bottom salaries. This leads to two types of university sysadmins: (1) rock-bottom talent (2) 'temporary' -- they work in academia for reasons OTHER than salary; maybe they like the hours, or NOT being on call on the weekends, or they're working on a degree and want reduced tuition, ...

    In case (1), it's easy to see why university computer systems are so unprotected. In case (2), the sysadmin job is NOT the person's primary focus in life, so some things (like keeping current on bugs/security fixes/best practices) fall through the cracks, no matter how talented the person is.

    The answer? Fire some profs and use the money to hire a GOOD sysadmin at a salary that'll keep him around (e.g. near $100k), instead of jumping ship in six months when he gets an offer that doubles or triples his measly current salary of $30k.

    And if you think there's a university out there willing to do that, I've got a bridge in Brooklyn for ya.

  • Have you read the IPv6 spec at all? It has allowances for tracebacks. Now I don't know if they're any good, but they exist.
  • by owens ( 183768 ) on Wednesday May 10, 2000 @11:52AM (#1861126)
    The Web100 Project [] is working on putting automatic TCP tuning into the stack. This will allow a TCP connection to use all of the available bandwidth, without breaking any of the internal algorithms or stomping on other connections. It is already possible to tune most TCP implementations by measuring the bandwidth*delay product and tweaking the socket buffer size; the NLANR TCP Tuning [] page has instructions.
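    The manual tuning mentioned above amounts to sizing the socket buffers to at least the bandwidth*delay product, so a full window can keep the pipe busy for one round trip. A sketch (the path numbers are illustrative, and the OS may clamp or double the values you request):

```python
import socket

bandwidth_bps = 10_000_000   # assume a 10 Mbit/s path
rtt_s = 0.080                # and an 80 ms round-trip time

# Bytes that can be "in flight" at saturation: bandwidth * delay.
bdp_bytes = int(bandwidth_bps / 8 * rtt_s)
print(bdp_bytes)

# Ask for send/receive buffers at least that large before connecting.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.close()
```

    The point of the Web100 work is to make exactly this calculation automatic, so applications stop hard-coding buffer sizes.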
  • An interesting point on this from the article:
    "It turns out it's possible for a receiver to cause any standards compliant TCP stack to send data arbitrarily fast. So, you, my little cable modem equipped friend, can suck down the latest IE in a fraction of the time you should be able to get it.

    Sure, your neighbors might get no bandwidth in the meantime, but that's not your concern, is it?"

    This turns out to be one of several new attacks made possible by really knowing how to hack your TCP setup.

  • I really like the fact that research is publicized
    on such a popular site as Slashdot. I think Slashdot is definitely the place where people should be able to consistently find out about new developments in science.

    Perhaps Slashdot could run some sort of a sweep/review of the latest hot papers in particular research areas or published on recent conferences and post the summaries, impressions and links.

    This is already being done for books and all kinds of miscellaneous topics (think Quickies).

    Occasional discoveries in CS, Physics and Chemistry are also sometimes publicized. How does the selection process work? Why does some research find its way to Slashdot and tons of other, no less exciting, research does not?
  • What about privacy?

    What about it? All they can get is the IP address of the attacker. If you're making a legitimate connection, you have to supply your IP address so that the results can reach you! The only reasons to spoof an IP address are nefarious.

    Even then, this can only trace packet floods, because a huge number of packets are needed for a trace to be effective. IIRC, the article says 100*n packets minimum, where n=number of hops, are required. If you figure 10 hops to get somewhere interesting, you need 1.5 MB incoming traffic to get a trace. FTP or HTTP requests don't generate that kind of traffic in any reasonable time.
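    The arithmetic behind that estimate, assuming full-size 1500-byte packets (the packet size is my assumption; the 100*n rule of thumb is from the parent):

```python
hops = 10
packets_needed = 100 * hops            # rule of thumb: 100*n for n hops
traffic_bytes = packets_needed * 1500  # assume full 1500-byte frames

print(packets_needed, traffic_bytes / 1e6)  # 1000 packets, 1.5 MB
```

    Ordinary request traffic never accumulates that many marked packets quickly, which is why the scheme only identifies floods.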

  • Since the routers along the way only would stamp a portion of their IP address, and only do so a fraction of the time, this would only provide useful mapping information for large-scale, distributed attacks. Also, does this scheme record the IP address at the very start of the process (the users' IP address), or does it start with the first router along the way?

    I think the individual user is still anonymous under this scheme, but I ain't no expert.

  • Lovely how the "Lameness Filter" (COPYRIGHT 2000 Slashdot Thought Police. All rights reserved.) didn't catch this but refuses to let me post the following:

    "If I Ever Meet The Inventor Of RSH I Will KICK HIS ASS!"

    Moderation is a failure.

  • You are making all of this up.
  • Colleges and universities are never going to be convinced to pay what is necessary for a good sysadmin. This is the way the CS department at my college (a fairly major engineering/science school) dealt with this problem for their network and Unix shop.

    The CS department would hire a clueful sysadmin who was just out of college and did not have an impressive enough resume to get a full sysadmin job elsewhere, but had personal experience. They would place the SA underneath the professor who was a clueful researcher in the area of networking and operating systems. My college also maintained a staff of part-time student sysadmins who performed tasks for the lead sysadmin, and could help a new lead grow accustomed to the environment. Some of these students stayed over the summer to do research and admin tasks that couldn't be done during the school year, and this is when the new lead was trained.

    After a couple years, the lead would get a new job for twice what he was making for my college, and we would start looking for a new lead. This worked quite well for everyone involved, and the college didn't need to be convinced to pay real money.
  • Yes, but it would also cause aborted connections if the client happened to have two ACKs on the net at once, and they arrived out of order.
  • If you can find most of the intermediary machines used as launch points, of which the assumption is there will be a lot, you can hope that at least one of them will have logs and/or tracks which the cracker forgot to wipe or missed wiping. Sure, most of 'em may be duds, but it may only take one good, unaltered log out of a couple of hundred machines to trace the attack back much closer to the source.
  • A couple more reasons why universities are often used for attacks
    • #3 - Insiders vs. Firewalls - Attacks by students. Firewalls are usually designed to keep unauthorised outsiders out. But universities have lots of bright kids with time and computer resources on their hands, who know a lot more about computers than they did in junior high school, know a lot more people who know a lot more about computers, and have a lot more computing resources than when they were using their Mom's AOL account and 486 Win3.1 box. One of the standard computer security problems is "How do you know you're talking to the server you think you're talking to and not to some grad student at Berkeley?" Well, if you're the sysadmin at Berkeley, that's a tough question :-) It's harder than the corporate "disgruntled employee" situation, except that most of your security problem students aren't malicious - they're just more creative than you are....
    • #4 - Newbies with lots of bandwidth - Most college students aren't experienced computer security experts - they're English Majors, and Chemical Engineers, and MBA-seekers, and pre-law or pre-meds, and Freshman CS Students who aren't all experienced yet, and most of them are running Windows versions that are fundamentally insecure even when administered well. And all these attractive targets are in one place with lots more bandwidth than dialup users and relatively stable IP addresses - so if you crack one of them, you can use it to search for more targets, and it's a lot easier on a campus LAN than in a dialup network. Once you've got your suckers, they can output a lot more bandwidth than AOL newbies you've suckered with a new game program like "Attack On Troy", though networked games are a fun attack at colleges as well - especially high-pressure high-tech schools where students do their recreation intensely as well.
    • #5 - Not every school is MIT. Podunk Community College may not have quite the same resources to abuse, but it doesn't have the same level of defenses, either, and it may have more resources than half the small ISPs on the market.
    • #6 - Early Adopters of Networked applications - Universities are great places to distribute things like napster://horse_with_no_name.mp3 and IRCfreefone and Quake 6.2: Mass Destruction and CryptoStealthGnuTella and UsenetPornHider and that eXcellent rave-support tool XFinder. Bad Guys don't need to infect everybody - just enough people to reach critical mass.
    It's a target-rich environment out there. We've been lucky so far.
  • Oh really? What makes you say that, I wonder?
  • Well Well.. I wonder if IPv6 wouldn't be a better (or alternate) solution. With tools like PING, any kid can just flood a modem with his massive T1. Yet tools like PING and TRACEROUTE are the finest troubleshooting tools there are!
  • Sting is set apart from other such tools by two characteristics. First, you should note that existing tools like ping and traceroute rely on ICMP packets, which are increasingly deprioritized or filtered. (Just try pinging or if you don't believe this is happening.)

    enichols [~] oxygen >ping
    PING ( 56 data bytes
    64 bytes from ( seq=0 ttl=243 time=82.7 ms.
    64 bytes from ( seq=1 ttl=243 time=82.4 ms.
    64 bytes from ( seq=2 ttl=243 time=77.7 ms.
    64 bytes from ( seq=3 ttl=243 time=76.6 ms.
    64 bytes from ( seq=4 ttl=243 time=77.6 ms.
    64 bytes from ( seq=5 ttl=243 time=80.6 ms.
    ---- ( PING Statistics ----
    6 packets transmitted, 6 packets received, 0% packet loss
    round-trip (ms) min/avg/max = 76.6/79.6/82.7 (std = 2.42)

  • IPv6 just provides a larger address space and adds support for IPSec (which can also be supported on IPv4.) It has got nothing to do with the problems you named. And ping and traceroute are the least sophisticated tools possible.
  • I really don't see how this can affect DoS attacks. If anything, it could make them worse.


    Here's my Microsoft parody [], where's yours?

  • The thing I found neatest in the piece was the idea of diddling your TCP stack to ACK in nonstandard ways. This is the second part of the paper... anyway, apparently this can be used to grab all of the bandwidth at the remote site.
    You may be wondering how significant all this really is. Well, it's pretty significant. Stefan told us about the one time he attempted to use his modified TCP stack to download IE from Microsoft. He reported so completely flooring the University of Washington's Internet connection that he never tried again.
    Umph. How long until script kiddies are using this for DDoS? Fortunately, the fix can be deployed in a distributed manner.
  • I'd agree... I mean, if you're constantly expanding the window size on one connection to a client until you use all available bandwidth, because they're sending excessive acknowledgements, that would mean no other users would get connections.

    Sounds like an interesting variant on DDoS (distributed if you sent the same ACKs from different sources, anyway).

  • And ping and traceroute are the least sophisticated tools possible.

    Yes, and as such, they are also the most useful.
  • by Anonymous Coward
    MIT and Harvard do not share the same link. MIT has a connection to BBNPlanet (Genuity). Harvard has a connection to AT&T. They do have a private peer going, but as far as I know they do not redistribute each other's routes.

    Also, note that lftp already does this multiple-file-transfer thing. Just 'pget -n file' to download with X simultaneous connections. It really does speed up transfers.
  • Let me just clarify this lameness filter thing in normal English... My post above bypassed the "Lameness Filter", just by using 1337speak. Interestingly, the lameness filter only seemed to check the subject line for overuse of capitals, rather than the body of the message. You can have as many capitals as you like in the body of the post, it seems. Even more interestingly, if you put in a subject with no capital letters (but lots of 1337 words), this still triggers the lameness filter.

    This leads me to the conclusion that the lameness filter is either designed only to let the non-lame (i.e the 31337) through, or it is a spelling mistake (which should have said "Lameass Filter"). Either way Taco, your "lameness" heuristic is pretty poor and I suggest you remove it from Slashcode.

    The lameness filter is just suppressing free speech, and will drive more people to using 1337speak when they want to troll. Is improving the signal-to-noise ratio really a fair price to pay for the recent spate of censorship that has taken place on Slashdot (e.g. Taco's "Bitchslapping" [] technique (the thing responsible for the abolition of Slashdot-Terminal, but which has also caught some innocent users in the crossfire, such as people who dared to moderate Signal 11 down) and the lameness filter)?

    Don't get me wrong, I still love Slashdot, but it just isn't the same as it used to be.

    Please note: I am only posting AC because I don't want to get on the wrong side of a "bitchslapping". It is now too dangerous to express one's opinion on this site as a logged-in user.

  • by Shotgun ( 30919 ) on Wednesday May 10, 2000 @11:05AM (#1861147)
    If I were running a large site and were concerned about people running 'predictive acknowledgers', could I not modify my stack to send packets of varied size? I could just modify the last bit or two of the packet size semi-randomly. The bogus ACK would be ignored, the luser using such a technique wouldn't get his download, and would eventually play fair.

    Also, if I tell the server to dump my 2Meg download into 1 packet, what happens when my wife picks up the phone and interrupts transmission? Will the whole 2Meg need to be resent? IOW, is this technique only useful on extremely reliable connections (which are VERY rare)?
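    The jittered-segment-size idea in the first paragraph can be sketched like this (a toy model, not a real stack; the jitter range and seed are arbitrary choices of mine): a client that predicts ACK numbers as exact multiples of the MSS will almost never hit a real segment boundary once the server randomizes the low bits of each segment length.

```python
import random

def segment_boundaries(total, mss, rng):
    """Cumulative byte offsets of segments whose sizes lose 0-3 random bytes."""
    boundaries, pos = [], 0
    while pos < total:
        pos += min(mss - rng.randrange(4), total - pos)
        boundaries.append(pos)
    return boundaries

rng = random.Random(7)
real = set(segment_boundaries(100_000, 1460, rng))

# A predictive ACKer guesses that boundaries fall at exact MSS multiples.
guesses = [i * 1460 for i in range(1, 20)]
bogus = [g for g in guesses if g not in real]
print(len(bogus), "of", len(guesses), "predicted ACKs miss a real boundary")
```

    A guess only lands on a real boundary if every jitter so far happened to be zero, so nearly all predicted ACKs are detectable and can be ignored.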

  • A user connects. Server uses sting to determine network characteristics. If the user is ACKing faster than is reasonably possible, start decreasing the window size. That'll teach them to try to cheat!! Bwhahaha...

  • Sorry, I had remembered this fact from a lecture I had been to. This may be a false memory, as this was back in fall '99. Theoretically though, if this were the case, one could do QoS on aggregates of IP ranges, protocols, whatever. The same goes for single stations, although one would need massive CPU and memory to prevent packet loss, depending on the size of the network.
  • Why is this a nifty solution, but when Microsoft hacked the Kerberos thingee a little to make it work with Active Directory, everyone freaked out?
    Lord Omlette
    AOL IM: jeanlucpikachu
  • ok, so when are these hacks going to be incorporated into download accelerator? :)
  • by infodragon ( 38608 ) on Wednesday May 10, 2000 @12:23PM (#1861152)
    It is this set of victim machines which launches the final attack.

    I personally doubt that there is any defence against a properly executed DDoS attack.

    Stefan is not proposing a way to catch the perpetrator, but to locate the computers that are performing the DDoS attack.

    As the article simply puts it...

    The basic idea behind the approach Stefan outlined is for each router that forwards a packet to mark it with information that will allow the recipient of the packet to trace it to its source.

    This is oversimplified, but in the article he explains a way to mark packets, in a kind of random way, in such a manner as to be able to trace the source and then take the proper action: temporarily shutting down the delinquent computer's internet connection.

    This would not prevent the DDoS attack but it would speed up the process of shutting it down by removing the human factor in tracing the attacks.

    Because there is no difference between a proper DDoS and "The Slashdot Effect."

    Yes there is! A DDoS attack is a large number of computers sending/requesting massive amounts of information. The "Slashdot Effect" is massive numbers of computers sending/requesting moderate amounts of information. Except for large downloads; then they are requesting massive amounts of information, i.e. when Netscape pre-6 was announced :)

  • OK, several questions about the method for identifying the DDoS user.

    First, the method employed is to XOR the addresses of the first and second routers on an edge. Now it is clear that you can trace back IF you are sure what the IP of the secondary router is. However, given that the data can follow multiple paths, how are you ever certain what this IP is? Secondly, as it is a probabilistic process, the second IP of the router may be one of many. Is this solved because the IPs of routers along the path are very sparse?

    Secondly, what prevents a DDoS attacker from faking this field? Make it look like the attack came through another nearby router.

    Thirdly, as most DDoS attacks bounce pings off of remote boxes, this doesn't let you catch the perpetrator, only identify what boxes are pinging you (these boxes, most likely not being aware they are used in a DDoS attack, won't be using these methods). Since this method doesn't allow you to block the DoS attack (most of the packets will be encoded only with routers close to the destination, and you don't want to cut off all traffic), what good is it?
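    For what the XOR trick in the first question looks like mechanically, here is a toy version of edge sampling (router IDs are small integers, the marking probability is inflated, and the real scheme packs fragments of the mark into the IP header; the details here are invented for illustration):

```python
import random

P_MARK = 0.2
PATH = [101, 202, 303, 404]   # toy router IDs, attacker side first

def send_packet(rng):
    """Return the (value, distance) mark a packet carries on arrival."""
    mark = None
    for router in PATH:
        if rng.random() < P_MARK:
            mark = (router, 0)                # start a fresh edge mark
        elif mark is not None and mark[1] == 0:
            mark = (mark[0] ^ router, 1)      # XOR in next router: one edge
        elif mark is not None:
            mark = (mark[0], mark[1] + 1)     # later hops just count distance
    return mark

def reconstruct(marks):
    # Keep completed edges (distance > 0), keyed by distance from the victim.
    by_dist = {m[1]: m[0] for m in marks if m is not None and m[1] > 0}
    path = [PATH[-1]]                 # the victim knows its adjacent router
    for d in range(1, len(by_dist) + 1):
        path.append(by_dist[d] ^ path[-1])   # XOR recovers the upstream hop
    return list(reversed(path))

rng = random.Random(1)
marks = [send_packet(rng) for _ in range(5000)]
print(reconstruct(marks))
```

    This also shows why the first question has teeth: reconstruction works here only because the path is fixed, so every edge at a given distance has the same value; with multiple paths, the victim must disambiguate candidate edges.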
  • by Chalst ( 57653 ) on Wednesday May 10, 2000 @12:38PM (#1861154) Homepage Journal
    Jannotti says that there is nothing to stop a user ignoring the `niceness' constraints in TCP: actually, the strategy suggested will get you blacklisted on quite a few routers, which will simply drop all packets originating from your IP address. The routers use standard traffic profiling tools to spot just the kind of tricks Jannotti describes.

    To plug some work done in my department, Azer Bestavros has done some nice work on network [] profiling: the idea I liked most was a way to make the TCP binary backoff work better by grouping together similar packets: this can be done entirely end-to-end, and really gets big improvements in overall performance. See in particular the paper `QoS Controllers for the Internet'.

  • by tqbf ( 59350 ) on Wednesday May 10, 2000 @12:40PM (#1861155) Homepage
    Are you people stupid? Am I missing sarcasm here? What are you thinking when you advocate Linux compile-time options for congestion-control subversion? That this is a "nifty feature" for Linux kernels to have?

    Congestion control was developed in response to a congestion *crisis* in the late 1980s. Proper congestion control is a requirement for the Internet to function. The LACK of congestion control in common streaming and multicast protocols is a commonly cited major hurdle for the deployment of multicast applications on the Internet.

    It's been a nightmare scenario for awhile now that Microsoft (they of the "transient failure" RST packet) would unscrupulously try to gain a competitive advantage by manipulating congestion control. By "breaking the rules" they could make a faster stack. Another scary thought is that silly "Internet Accelerator" products could actually sell REAL accelerators, that provide horsepower boosts at the expense of the entire network.

    What you DON'T want to see happen is for Linux to gain "turbocharging" via congestion-ignorance. What that does is set up an arms race between Linux and every other stack vendor, particularly Microsoft. That arms race could easily lead to congestion collapse and yet another Internet scalability crisis.

    What Stefan Savage is describing are VULNERABILITIES in common TCP/IP stacks. They need to be fixed, and programs that take advantage of them need to be considered in the same light as programs that get rid of pesky security measures on remote computers --- as exploits.

    Just chiming in here, because I think it's odd that people here are paying more attention to the clever backtracking hack Savage came up with and less attention to the important, new security vulnerabilities he has documented.

  • I read that and thought, "Wow, that sounds just like MY college!" Then I looked at your user bio... it IS my college!

    I'm actually on CS staff, and a great thing it is. And yes, the sysadmin turnover is still what it was (I assume you're a graduate).

    Small world, I guess.

    --Nate Eldredge,

  • There is one other method of encoding data, which would allow for more data throughput. The one I am thinking of is to make a subtle modification to a TCP/IP stack, so that it sends twice any IP packet within which it wishes to encapsulate data. The first packet sent would have a sequence number which is made to look wrong, and the second packet would have the correct sequence number. The receiving host could have a similar modification made so that it recognizes when there is data to be found buried in the payload of the (seemingly) error-ridden packet.

    With this method, you could potentially encode a larger percentage of covert data per byte of legitimate data sent.

  • by Anonymous Coward
    Just so you know, if everybody turned off the TCP bandwidth control mechanisms today, the Internet would go into meltdown. That's right: it would not work. Tragedy of the Commons.
    Thank you for your time.
  • Most cable modems have QOS/traffic shaping built-in, so I don't think this will do much for you. It might help with ping rates, but not bandwidth. (Stomp all over me if I'm wrong!)
  • No. You are way wrong on number two. Our central systems where I work are much, much more secure than most people think. There is a full time person who does nothing but work on security things.

    And, we don't packet filter here. We run NFR, yeah... but the only time we packet filter is when a research group asks for full control of a machine, or in other words, when they want us to lose responsibility for that machine's actions.

    Then we firewall their machines.

    It isn't 'clueless', it's about where the 'blame' goes when one gets cracked and sits unnoticed for weeks. If it were to happen to the paid sysadmins, I would fire them on the spot if it were obvious there was a crack.

  • Could you actually document which providers "blacklist" noncompliant TCP streams, and how they manage to do that? I don't believe you: in backbone routing, it's expensive just to have to look at more than IP addresses, and keeping enough state for a TCP stream to analyze congestion control seems completely infeasible.

    Not to mention the fact that the control mechanism they have ("blacklisting IPs" on the backbone) is trivially exploitable by malicious users to deny service to random Internet sites.

    Detection at the "edges" --- at third-tier providers and universities --- seems feasible, but expensive and error prone. Predictive acknowledgement is especially susceptible to false positives, as it involves keeping state between data packets and the ACK responses, AND relying on the timing and reliability of packet capture/analysis.

    I'm willing to bet that no major carrier is actually profiling TCP traffic to find "greedy" stacks. Can you prove otherwise?

  • A few articles, as promised:

    1. RFC 2309 describes the need for some kind of proactive congestion
    control to deal with protocols that do not implement any kind of
    backoff. This proposal spawned a whole lot of research into testing
    for fairness. Sally Floyd, one of the authors of the RFC, has the
    slides (PS) for a talk which gives a good basic overview of the
    issues.

    2. A standard for congestion control is proposed in RFC 2481. It is
    easy to spot abuse by end users who claim to comply with this
    proposal.

    I'll ask about the blacklisting and post here when I have some
    answers.
  • You're confusing packet size with window size: you're telling the server to use a larger window, but the data still gets sent across as ~1500-byte packets. Yes, if you've ACKed data you haven't received, you'll need to restart the whole transfer if any data is dropped (which is very likely if you're using an obscenely large window size), as Stefan says in his article. But if you haven't ACKed data you haven't yet received, only the (~1500-byte) packets that were lost need be retransmitted, provided your TCP receive stack keeps sufficient buffer space for all the unacknowledged data. That might be true for a 2 MByte download, but if you're downloading all of IE, I doubt you've got that much memory in your system, so much of it would be transmitted multiple times.

    In summary, in a noisy or bandwidth-limited environment, playing games with the ACKs probably won't buy you much, and if you congest the pipe to the point where the routers start discarding packets, your arbitrarily large download is likely to take more, not less, time.

  • Would the Slashdot of old prefix the story with "techie"? I guess most people who read Slashdot are no longer techies.

    I ask this question in all seriousness. Oh well.

  • That's sorta the idea. Play games, hose yourself.

  • In regard to an earlier response, discarding data the sending TCP stack has received ACKs for would not cause abortive connections. Because a legitimate receiving TCP stack would only send ACKs for data it had received (and therefore that the sending stack had sent), it would have no effect, and the order in which the ACKs are received is irrelevant. In fact, that is how TCP stacks work. The low end of the TCP window (of data to send) represents the earliest packet sent that the stack has not received an ACK for. Once it receives an ACK, it moves the window up and that data is discarded. Otherwise, the stack resends the data.

    I think the point you are missing is that with optimistic ACKing, ACKs are (hopefully) not sent for data that has not been sent; they are sent for data that has been sent but has not yet reached the receiving TCP stack. This causes the sending TCP stack to think that data is being received faster than it is, which causes it to grow the high end of its data window, causing it to send more packets. The trick of optimistic ACKing is to send back ACKs fast enough to match the growth of the sending stack's window, but not to outpace that growth, so that all of the ACKs are for data that has been sent (just not yet received). This results in exponential growth of the TCP window. Of course, the problem is that while all of the data is sent, it will not necessarily all reach the receiving stack, due to packet loss. The solution, as the article mentions, is for the receiving stack to reconnect and use the HTTP Range header to get only those packets that did not make it through the first time.
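    To make the window inflation concrete, here is a toy Python model (the pure-doubling growth and all numbers are simplifications I made up, not figures from Savage's paper):

```python
# Toy slow-start model: every ACKed segment grows the congestion
# window, so a full round of ACKs doubles cwnd. An optimistic ACKer
# acknowledges data the moment the window opens, before the packets
# could possibly have arrived, so every round "succeeds" and the
# window grows at the maximum rate regardless of real path conditions.

def rounds_to_window(target_bytes, mss=1460):
    """RTTs of unchecked slow start until cwnd reaches target_bytes."""
    cwnd, rtts = mss, 0
    while cwnd < target_bytes:
        cwnd *= 2        # every outstanding segment ACKed: cwnd doubles
        rtts += 1
    return rtts

# With optimistic ACKs every round succeeds, so even a 2 MB window
# is reached in a handful of round trips, loss or no loss.
print(rounds_to_window(2 * 1024 * 1024))   # 11
```

    The point of the sketch: the cheater never waits for real data, so the doubling never stalls, which is exactly the exponential growth described above.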

    Your other suggestion, restricting the maximum window size, is not feasible. What would you set as the maximum size? At work, I am connected to the Internet by a super-high-speed backbone connection (as are all of my friends at school). You have no way to tell what type of connection a person has, so your maximum window would have to accommodate the fastest connections, the top 1% say, making it useless for the 99% below that. Besides, that is not a limitation you want in your TCP stack. What happens when we get faster connections? Do you really want to have to patch your TCP stack every time you upgrade your network? Limitations on bandwidth are best left to your router or firewall. Even then, though, you are only going to be able to stop the extreme cases.

    I think Stefan is thinking along the right lines for a solution: including a random piece of information in a packet that has to be echoed in the ACK. Although that would require changes to the TCP protocol, I think there may be a similar solution that would not.

    An alternate solution would be to send a packet out of order, perhaps a window ahead of the most recently sent packet. The optimistic ACKers would then send back ACKs for the intervening data (because they would assume that the other data was on its way), most of which would not have been sent yet, or they would send ACKs continuing on from the advance piece of data (to try to meet the expected growth of the window), once again sending ACKs for data that had not yet been sent. The resulting data loss would eliminate any gains from the optimistic ACKing. This solution is not quite as drastic as Stefan's, as it would not require changes to the TCP protocol or the client TCP stack (I think; I will have to do some research to verify that). Since optimistic ACKing relies on being able to predict the next packets sent, I think that this solution, adding some unpredictability to the packets sent using out-of-order packet sending, would effectively neutralize optimistic ACKing.

    Nathan Florea
  • Uhm... having two ACKs on the net and arriving out of order isn't "play[ing] games," it's how the net sometimes works.

  • I did like the graph of how a flood of TCP packets shows up at the same time, essentially dumping all 60Mb of IE across a fat pipe all at once.

    Er, that was all 60K of [].

  • by flibbertigibbet ( 181956 ) on Wednesday May 10, 2000 @10:25AM (#1861169)
    Windows TCP/IP stacks already do this kind of retarded ACKing, and it's trivial to modify others to do so. That's where intelligent traffic shaping comes in. Even if you don't modify the TCP/IP stack, you can write a proprietary program to open, say, 50 connections to download the same file, or multiple files at the same time, and use far more bandwidth than anyone else on the network.

    HT Kung has been doing some work on this. MIT and Harvard share the same net link and pay the same price, but MIT has more net users and therefore more connections (as in streams) so they use much more bandwidth. So you do traffic shaping and stop all those nasty bastards opening 300 concurrent connections from their desktop at once from using the entire network.
  • Oh yeah:
  • Now I know what my next nonprofit time-wasting project will be! The prospect of even greater download speeds with a cable modem is just too great to pass up.
  • No adult has ever learned to "share" their bandwidth...

    Yeah. Damn those jerks for actually believing they'll get the high bandwidth they paid the cable company for. :P

    How is it that downloading something is "abuse"? I'm paying for 1.5Mbit DSL, if I'm paying for it, I'm gonna max it.

    I know cable users are on one big segment, and that's why I'm paying a bit more for DSL, I'm more likely to get the bandwidth I'm rated for.

    You get what you pay for.


  • The place where he's addressing the DDOS attacks is at the end of the article. He's not actually stopping the attacks, he's just allowing the victim to analyze the flooding packets and find out where they're coming from. I guess that by analyzing the traffic more quickly (encoding route information in the TCP header) the good guys should be able to black hole the bad guys sooner, thus shutting down the attack.
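    The core of that traceback idea, routers probabilistically stamping packets so a flood victim can reconstruct the path, can be sketched in a few lines. Everything below (router names, marking probability, a single-address mark instead of real edge samples) is a simplification for illustration, not Savage's actual scheme:

```python
import random

random.seed(42)  # deterministic demo

def mark_packet(path, p=0.25):
    """A packet traverses routers in order; each may overwrite the mark."""
    mark = None
    for router in path:
        if random.random() < p:
            mark = router          # later routers overwrite earlier marks
    return mark

def reconstruct(path, packets=5000, p=0.25):
    """Victim side: tally surviving marks across many flood packets.
    Marks from routers near the victim survive most often, so sorting
    by frequency orders the path from the victim outward."""
    counts = {}
    for _ in range(packets):
        m = mark_packet(path, p)
        if m is not None:
            counts[m] = counts.get(m, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)

path = ["attacker-edge", "core-1", "core-2", "victim-edge"]
print(reconstruct(path))   # router nearest the victim ranks first
```

    The key property: the victim needs no cooperation from the attacker, only enough flood packets, which a DDoS conveniently supplies.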
  • by rcwash ( 103111 ) on Wednesday May 10, 2000 @11:15AM (#1861174)

    A DDOS attack involves two layers of victims. The obvious victim is the recipient of the attack. But before the attack can be launched several (hundred) intermediate systems must be penetrated and exploited. It is this set of victim machines which launches the final attack.

    The procedure proposed by Stefan is quite clever and could be used to trace the attack back to the first layer of victims. But that is where it would end. The procedure requires hundreds of packets to make its trace. But the attacking machine is only listening for a single packet - whose IP can be spoofed - for the command to launch the attack. So the perpetrator remains safe behind his proxy army until he starts bragging on IRC.

    I personally doubt that there is any defence against a properly executed DDoS attack. Why? Because there is no difference between a proper DDoS and "The Slashdot Effect."

    Forget the ICMP packets. Want to take down a web site? Flood it with web page requests. You now have nothing to filter on and the legitimate users are crowded out.

  • by Todd Knarr ( 15451 ) on Wednesday May 10, 2000 @11:17AM (#1861175) Homepage

    Check me if I'm wrong, but wouldn't simply having the server's TCP stack discard all data for which it had received an ACK, regardless of whether that data had been transmitted or not, in combination with a finite maximum window size and discarding ACKs that do not correspond with the end of a packet, make optimistic ACKing completely counter-productive?

  • hehe. When you send multiple ACK's at once, it's called a SACK. Nevermind, the pun isn't funny anymore.
  • Another thing that bears mentioning is the fact that DDoS attacks can (and may already be) ack'ed before they arrive... This means only a small amount of requests would have to be issued from a moderate base of compromised systems. Stefan was suggesting that someone wanting to boost performance use HTTP to re-request page chunks that didn't arrive fully, but somehow I don't think the DDoS people really care about receiving the information intact. :)

    I'm not a TCP/IP guru, but would a possible remedy be to vary the length of data being requested, so that at least the ability to pre-ACK the transfer would be one step harder?

    Tracking back to the attacking hosts at least provides the victim with the ability to deny access relatively local to the attacking machines. Even if the attacking computers are spoofing, if you know a particular machine is being routed through, you can deny access from that router in a somewhat automatic manner with Stefan's suggestions in place.

  • An alternate solution would be to send a packet out of order, perhaps a window ahead of the most recently sent packet. The optimistic ACKers would then send back ACKs for the intervening data (because they would assume that the other data was on its way), most of which would not have been sent yet, or they would send ACKs continuing on from the advance piece of data (to try to meet the expected growth of the window), once again sending ACKs for data that had not yet been sent. The resulting data loss would eliminate any gains from the optimistic ACKing. This solution is not quite as drastic as Stefan's, as it would not require changes to the TCP protocol or the client TCP stack (I think; I will have to do some research to verify that). Since optimistic ACKing relies on being able to predict the next packets sent, I think that this solution, adding some unpredictability to the packets sent using out-of-order packet sending, would effectively neutralize optimistic ACKing.

    So, how would this differ from semi-randomly modifying the packet size? I think the benefit of modifying the packet size would be that it would make for a simpler modification of current stacks: just modify the window-growth algorithm. Then if you don't ACK properly, the server assumes you're cheating, gets confused, and drops your connections. A very strong disincentive. It just seems to me that sending a packet out of order would require a lot more bookkeeping and many more modifications to the current algorithms.

  • The problem is that TCP ACKs are acknowledging data octets, not TCP packets. When I referred to packets, I meant octets of data rather than packets containing that data. I apologize for the confusion.

    The way a TCP ACK works is that it says "I have received all of the data up to this octet." TCP packet size is not determined by window size or acknowledged data. It is determined by the MTU and the urgency of the data. If the stack can wait until it has enough data to create a packet of the Maximum Transmission Unit size, it will (there is a timeout). If the receiver has indicated that it needs the data ASAP, the TCP stack will push out the data octets as fast as it can (with a very short wait), regardless of how small the packet is (IIRC; I am a bit rusty on this part). So altering the packet size will have no effect.

    Also, the out-of-order octets would not need to introduce much overhead. Simply use a sequence number far enough ahead not to be reached for a while ("while" is intentionally vague; I need to do some more research on how far ahead the stack could and should go). If that is ACKed without the TCP stack having sent the previous data, the stack knows that the receiving stack is cheating and can close the connection.

    The key idea here is to send information that cannot be acknowledged (because octets before it have not been sent) and see if it *is* acknowledged. Perhaps a better implementation would be: as the send window grows, the TCP stack skips two octets but otherwise continues sending normally. A valid stack would not be able to acknowledge any of the subsequent octets, but an optimistic ACKing stack would. And because the receiving stack caches the out-of-order octets, once the sending stack determines it is dealing with a legitimate receiver and sends the two missing octets, all of that data is still valid and very little inefficiency is introduced. The keys here that I still need to work out are:
    • When does the sending stack use this trick (when the send window grows, etc.)?
    • For how long does it do it (i.e., at what point will a receiving stack start dropping OOO packets and how much time will it take to trick an optimistic ACKer)?
    • How much data does it skip (e.g., it would be more efficient if the sending stack skipped a whole packet's worth of data, that way it would not send a packet with only two octets of data when it decided it was dealing with a legitimate receiver)?
    • And lastly, when does it *not* do this (i.e., are there any cases where this introduced inefficiency is intolerable)?
    As you can see, the idea has already evolved quite a bit in this post, so there is still some serious work that needs to be done. But I think that this is a very good solution to the problem. Can anybody else think of potential stumbling blocks or problems with this solution (or better solutions)?
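    A minimal sketch of the detection logic in that skip-two-octets scheme (the constants and function names are mine, purely for illustration):

```python
# Toy model of the hole-in-the-stream check proposed above. The sender
# deliberately withholds two octets starting at HOLE_START but keeps
# sending the data after the gap. A legitimate receiver's cumulative
# ACK can climb no higher than the first missing octet; an ACK beyond
# the hole acknowledges data that was never sent, exposing the cheater.

HOLE_START = 5000   # first unsent octet (illustrative value)
SENT_END = 8000     # sender has transmitted everything else up to here

def honest_cumulative_ack(received_up_to):
    """Legit receiver: cumulative ACK stops at the first missing octet."""
    return min(received_up_to, HOLE_START)

def is_optimistic(ack):
    """Sender-side check: any ACK past the hole proves optimistic ACKing."""
    return ack > HOLE_START

print(is_optimistic(honest_cumulative_ack(SENT_END)))  # False: legit stack
print(is_optimistic(SENT_END))                         # True: cheater caught
```

    Note the asymmetry that makes this cheap: the sender only has to remember where the hole is, while a cheater would have to guess it.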

    Nathan Florea
  • If both ACKs are for data that has been received, then the first ACK received won't cause any data loss and the second ACK received will cause nothing to be discarded because it's already been handled by the first one. There's no need to alter current behavior with regard to that part of things. It only hits clients who ACK data that they haven't yet received, and should merely cause the client to stall with a hole in the data stream it can't fill in.

  • Congestion control was developed in response to a congestion *crisis* in the late 1980s. Earlier than that, actually. I did much of it. See RFC 970, from 1985. That's the paper that introduces "congestion collapse" and "fair queueing", along with the now-relevant remark:
    • It is worth noting that malicious, as opposed to merely badly-behaved, hosts, can overload the network by using many different source addresses in their datagrams, thereby impersonating a large number of different hosts and obtaining a larger share of the network bandwidth. This is an attack on the network; it is not likely to happen by accident. It is thus a network security problem, and will not be discussed further here.
    There's the first description of the denial of service attack. We should have done more to fix it back then, when there were maybe 1000 machines on the Internet and we could have changed TCP.

    Savage has done a good job on this. I think I see a way to stop the optimistic ACK attack (the hard case) with mods to the attacked end only, but it's ugly and needs more thought. The general idea is to introduce some randomness into the segment sizes sent, and if the replying ACKs don't reflect this, the ACKs are probably fictitious.
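    A toy sketch of that randomized-segment idea (all parameters invented; a real stack would fold this into its segmentation code):

```python
import random

# Sketch of the randomized segment-size defense: the sender varies its
# segment boundaries unpredictably and remembers the legal cumulative
# ACK points. A genuine receiver's cumulative ACKs can only land on
# those boundaries; an optimistic ACKer guessing fixed-MSS multiples
# will eventually name a value that was never a segment boundary.

def send_segments(total, mss=1460):
    """Send `total` bytes in randomly sized segments; return the set
    of legal cumulative ACK values."""
    random.seed(7)                 # deterministic demo
    sent, boundaries = 0, set()
    while sent < total:
        size = random.randint(mss // 2, mss)   # randomized segment size
        sent = min(sent + size, total)
        boundaries.add(sent)
    return boundaries

def ack_is_plausible(ack, boundaries):
    """Sender-side check: a cumulative ACK must hit a real boundary."""
    return ack in boundaries

legal = send_segments(100_000)
print(ack_is_plausible(max(legal), legal))   # True: a genuine final ACK
```

    A cheater ACKing at guessed fixed-MSS offsets (say 3 * 1460 = 4380) will, with overwhelming probability, name a value that is not in the boundary set and can be treated as fictitious.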

    Another class of attack suggests itself. Streaming protocols over UDP are probably very vulnerable to attacks like this. If you can convince some video server that you have huge bandwidth, you may be able to get it to flood a section of the net. Those proprietary streaming protocols need a hard look.

    John Nagle

  • If any of you are interested, Stefan Savage (the guy whose work this article is based on) has a home page located here [].

    He's looking for an academic job, and some of his papers (Especially the project team that created the SPIN [] kernel stuff) are quite impressive.

    Gonzo Granzeau

Programming is an unnatural act.