Techie Story On TCP Stacks 76
a9db0 writes: "Ars Technica is running an article on TCP stack research done by Stefan Savage at the University of Washington. Stefan presented one interesting tool and a couple of ingenious hacks. The tool measures response time between nodes more accurately, without requiring additional software on the server. The hacks are TCP modifications, one of which could help defeat DDoS attacks."
Re:TCP Hacks (Score:1)
His potential solution to DDoS is worrisome (Score:1)
Re:Optimistic ACKing (Score:1)
I did like the graph of how a flood of TCP packets shows up at the same time, essentially dumping all 60Mb of IE across a fat pipe all at once. That works when you are only a few hops away from the server (UoW to Redmond, line of sight), but it falls apart if you have 18-20 routers in between with wildly fluctuating available bandwidth.
The problem is this can be turned into a very effective DoS tool. By using OptAck you can get the server to flood the outgoing pipe.
Re:ACK hacking (Score:1)
Sorry.
Re:libcap? (Score:1)
actually, no. DSL is the same price as cable out here, it's just that the phone company can't (won't) deploy to my neighborhood. As a result, I'm left with no choice but oversubscribed cable. It costs the same, but I'm getting less for my money. I'm NOT happy.
Admittedly off-topic (going meta) (Score:1)
Re:offtopic (Score:1)
Well, you could always moderate another thread you don't care about to burn your points up, and so your posting doesn't take away your moderations.
That's what I do, I look for a Katz article and moderate it. And since I'm so unbiased about Jon
Re:yay extend and embrace? (Score:3)
Well, duh.
Because this researcher is telling you exactly what he is doing, so you can implement it in a compatible way, while MS is not telling how to build a modified Kerberos that is compatible with their scheme.
Re:libcap? (Score:1)
TCP certainly isn't perfect, but it does a pretty good job of using bandwidth efficiently and fairly, even when the link gets congested.
If everyone starts breaking the rules it won't really work for anyone.
Taking the highway system example: people can get away with driving 160km/h, and if there's not much traffic it's reasonably safe too. But if everyone does it at the same time, you start getting _BIG_ problems.
Re:Little more info on the source... (Score:1)
--
Gonzo Granzeau
ITRACE for tracing DDOS attacks (Score:2)
Basically, for roughly every 20,000 packets it forwards, a router chooses one at random and sends an ICMP traceback message to the packet's destination, listing the router's address and the previous and next hop that the data packet took. At the receiver, if you're being seriously flooded, you start monitoring the traceback packets, and when you get enough you can piece together the paths back to the attackers.
It won't stop the attack itself, but will at least help in discovering the cracked hosts being used to launch the attack.
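The sampling math can be sketched in a few lines of Python. This is a toy single-path model (the router names, the per-router 1-in-20,000 sampling, and the packet count are illustrative assumptions, not the actual ITRACE spec):

```python
import random

SAMPLE_RATE = 1 / 20_000   # each router samples roughly 1 in 20,000 packets

def simulate_itrace(path, n_packets, rng):
    """Toy model: every router on `path` independently samples forwarded
    packets and sends a (prev_hop, router, next_hop) triple to the victim."""
    edges = set()
    for _ in range(n_packets):
        for i, router in enumerate(path):
            if rng.random() < SAMPLE_RATE:
                prev_hop = path[i - 1] if i > 0 else "attacker"
                next_hop = path[i + 1] if i < len(path) - 1 else "victim"
                edges.add((prev_hop, router, next_hop))
    return edges

def reconstruct(edges):
    """Stitch the sampled triples back into a path, starting at the victim."""
    by_next = {nxt: router for _, router, nxt in edges}
    hops, cur = [], "victim"
    while cur in by_next:
        cur = by_next[cur]
        hops.append(cur)
    return hops[::-1]   # attacker side first

path = [f"r{i}" for i in range(1, 11)]   # ten hypothetical routers
msgs = simulate_itrace(path, 500_000, rng=random.Random(42))
recovered = reconstruct(msgs)
```

With a sustained flood, every router on the path gets sampled many times over, so the victim recovers the whole path; ordinary traffic volumes generate almost no traceback messages at all.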
-Fzz
Re:yay extend and embrace? (Score:2)
Re:A few comments on the article (Score:2)
outgoing packets: it did not depend upon seeing the ack packets. This
kind of traffic analysis was made necessary by the MBONE
multicast protocol, which was built on top of UDP (which does not do
the same kind of binary backoff that TCP does): if there are widely
deployed protocols that do not respect binary backoff, then the
network really would grind to a halt, and so some external method of
`niceness checking' is required.
Cisco makes routers that do the necessary tests to spot abuse. It's
worth noting that the consequence of being blacklisted is not that
your service is blocked altogether, only that intermediate routers will
have to route around the routers that drop your packets: it will spoil
your performance but not interrupt it. Remember that IP makes no
assumptions about packets actually arriving. Yes, it can be abused:
but we knew that anyway, and it's much harder to do that than the DDoS
attacks.
Proof? You could ask Cisco I suppose. If you're willing to put up
with less than proof look at all the IETF discussions about the MBONE
protocol. I'll have a look around and see if I find any online articles about testing for backoff.
Re:Control predictive ACKing (Score:1)
At least on my system, the maximum TCP window size can be controlled on a per-route basis. You could probably dynamically determine an appropriate max window size from RTT information. The idea is that an optimistic-ACKing client operates on the assumption that the window can grow without limit, so one imposes a relatively large but finite limit on the server side. At some point the client will then ACK data that hasn't been sent, because it's assuming the server has increased the window when in fact it has hit its limit. That should create a permanent hole in the TCP data stream, causing interesting times for the client machine.
Assumption: any link has a capacity determined by transfer speed and latency. Rough estimate is that the window will naturally settle at about 2x capacity, give or take. Correct or not?
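For what it's worth, the capacity estimate here is just the bandwidth-delay product. A minimal sketch (the link speed and RTT are made-up numbers for illustration):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes 'in flight' needed to keep the pipe full."""
    return int(bandwidth_bps / 8 * rtt_seconds)

# Hypothetical 1.5 Mbit/s link with an 80 ms round-trip time:
capacity = bdp_bytes(1_500_000, 0.080)   # bytes the link itself can hold
max_window = 2 * capacity                # the parent's rough "2x capacity" cap
```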
Re:ACK hacking (Score:2)
--
Optimistic ACKing (Score:3)
I did like the graph of how a flood of TCP packets shows up at the same time, essentially dumping all 60Mb of IE across a fat pipe all at once. That works when you are only a few hops away from the server (UoW to Redmond, line of sight), but it falls apart if you have 18-20 routers in between with wildly fluctuating available bandwidth.
Time to hack this into the Linux net3 stack as a compile-time switch: ENABLE_OPTIM_TCPACK_FLOOD=true, and then get some hacked utilities taking advantage of it. Could be good for cable/DSL/OC3 people, but won't do much for poor modem users. A carefully controlled predictive TCP ACK can speed up modem connections as well for big transfers. Another fun research project to take up my precious time AAAAUUUUGGGGGHHHHH!!!!
the AC
Re:Interesting But... (Score:4)
#1: Red Herring. We're talking about protocol-level enhancements that make attacks like TCP-based DDoS fundamentally difficult to perform. This is a totally different subject from "making sure all programs on my workstation are free of buffer overflows." It is also true that the types of solutions needed to correctly protect systems are usually fairly intrusive and systemic. As the article says (you did read the whole thing before posting, didn't you?),
#2: University IT departments treat researchers pretty uniformly as "clueless", and assume that their own employees are clueful. The result? Clueful hacker-researchers with well-maintained machines are all locked up behind firewalls and active monitoring unnecessarily, while wide-open boxen sit on the public subnets waiting for j0e h4x0r to set up a DDoS outpost.
Re:Interesting But... (Score:2)
Researchers are paid to do research. Not system administration.
System administrators are paid to sysadmin. Not to do research.
I am a PhD student at a leading university, and know enough about networking to make some of our systems more secure than they are at present. However, sysadmins are a strange bunch -- they jealously protect their turf; they are NOT going to give root access to a mere 'researcher' like me, so that I can secure their systems for them. (Yeah, since their systems are so insecure, I probably could crack 'em and get root, and then fix it, but why bother? They'd never appreciate the 'help' -- they'd probably kill my user account for 'unauthorized activities' once I told them about it.) Besides, it's just not worth my time to do their job for them.
It's worse than that, though. Public universities just cannot keep up with the IT salaries. When you're paying a history prof with a PhD $40k, it's really hard to convince the regents/deans to fork over > $100k for a truly qualified sysadmin. So universities only pay rock-bottom salaries. This leads to two types of university sysadmins: (1) rock-bottom talent (2) 'temporary' -- they work in academia for reasons OTHER than salary; maybe they like the hours, or NOT being on call on the weekends, or they're working on a degree and want reduced tuition,
In case (1), it's easy to see why university computer systems are so unprotected. In case (2), the sysadmin job is NOT the person's primary focus in life, so some things (like keeping current on bugs/security fixes/best practices) fall through the cracks, no matter how talented the person is.
The answer? Fire some profs and use the money to hire a GOOD sysadmin at a salary that'll keep him around (e.g. near $100k), instead of jumping ship in six months when he gets an offer that doubles or triples his measly current salary of $30k.
And if you think there's a university out there willing to do that, I've got a bridge in Brooklyn for ya.
Re:TCP Hacks (Score:1)
'Legitimate' TCP performance tuning (Score:4)
Making TCP Really Fast (Score:1)
This turns out to be one of several new attacks made possible by really knowing how to hack your TCP setup.
More research on Slashdot (Score:1)
on such a popular site as Slashdot. I think Slashdot is definitely the place where people should be able to consistently find out about new developments in science.
Perhaps Slashdot could run some sort of a sweep/review of the latest hot papers in particular research areas or published on recent conferences and post the summaries, impressions and links.
This is already being done for books and all kinds of miscellaneous topics (think Quickies).
Occasional discoveries in CS, Physics and Chemistry are also sometimes publicized. How does the selection process work? Why does some research find its way to Slashdot and tons of other, no less exciting, research does not?
Re:His potential solution to DDoS is worrisome (Score:3)
What about it? All they can get is the IP address of the attacker. If you're making a legitimate connection, you have to supply your IP address so that the results can reach you! The only reasons to spoof an IP address are nefarious.
Even then, this can only trace packet floods, because a huge number of packets are needed for a trace to be effective. IIRC, the article says 100*n packets minimum, where n=number of hops, are required. If you figure 10 hops to get somewhere interesting, you need 1.5 MB incoming traffic to get a trace. FTP or HTTP requests don't generate that kind of traffic in any reasonable time.
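A quick sanity check of that arithmetic, assuming full-size 1500-byte (Ethernet MTU) packets — which is how you get to roughly 1.5 MB:

```python
def trace_traffic_bytes(hops, packets_per_hop=100, packet_size=1500):
    """Flood traffic the victim must absorb before the trace converges,
    using the article's 100*n packet minimum and MTU-sized packets
    (the 1500-byte size is an assumption for illustration)."""
    return hops * packets_per_hop * packet_size

incoming = trace_traffic_bytes(10)   # ten hops to somewhere interesting
```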
Only in the extreme... (Score:1)
I think the individual user is still anonymous under this scheme, but I ain't no expert.
Re:wELL HOLy FUCKiNG SHiT i CAN't BELiEVE iT (Score:1)
"If I Ever Meet The Inventor Of RSH I Will KICK HIS ASS!"
Moderation is a failure.
Re:A few comments on the article (Score:1)
You are making all of this up.
Re:Interesting But... (Score:2)
The CS department would hire a clueful sysadmin who was just out of college and did not have an impressive enough resume to get a full sysadmin job elsewhere, but had personal experience. They would place the SA underneath the professor who was a clueful researcher in the area of networking and operating systems. My college also maintained a staff of part time student sysadmins who performed tasks for the lead sysadmin, and could help a new lead grow accustomed to the environment. Some of these students stayed over the summer to research and to do admin tasks that couldn't be done during the school year, and this is when the new lead was trained.
After a couple years, the lead would get a new job for twice what he was making for my college, and we would start looking for a new lead. This worked quite well for everyone involved, and the college didn't need to be convinced to pay real money.
Re:Control predictive ACKing (Score:1)
Logging! (Score:1)
Insider Attacks; Lots of Bandwidth, Newbie Users (Score:2)
Re:A few comments on the article (Score:2)
TCP Hacks (Score:1)
Hmmm... maybe he should do a little research himself (Score:1)
enichols [~] oxygen >ping www.yahoo.com
PING www9.yahoo.com (204.71.200.74): 56 data bytes
64 bytes from www9.yahoo.com (204.71.200.74): seq=0 ttl=243 time=82.7 ms.
64 bytes from www9.yahoo.com (204.71.200.74): seq=1 ttl=243 time=82.4 ms.
64 bytes from www9.yahoo.com (204.71.200.74): seq=2 ttl=243 time=77.7 ms.
64 bytes from www9.yahoo.com (204.71.200.74): seq=3 ttl=243 time=76.6 ms.
64 bytes from www9.yahoo.com (204.71.200.74): seq=4 ttl=243 time=77.6 ms.
64 bytes from www9.yahoo.com (204.71.200.74): seq=5 ttl=243 time=80.6 ms.
^C
---- www9.yahoo.com (204.71.200.74) PING Statistics ----
6 packets transmitted, 6 packets received, 0% packet loss
round-trip (ms) min/avg/max = 76.6/79.6/82.7 (std = 2.42)
//Phizzy
Re:TCP Hacks (Score:2)
It could be my ignorance of this subject, but... (Score:1)
tcd004
Here's my Microsoft parody [lostbrain.com], where's yours?
ACK hacking (Score:2)
Re:It could be my ignorance of this subject, but... (Score:1)
Sounds like an interesting variant on DDoS (distributed if you send the same ACKs from different sources, anyway).
Re:TCP Hacks (Score:2)
Yes, and as such, they are also the most useful.
Re:ACK hacking (Score:1)
Also, note that lftp already does this multiple-file-transfer thing. Just 'pget -n file' to download with X simultaneous connections. It really does speed up transfers.
Re:1337N3%% f!173R (An open letter to Slashdot) (Score:1)
This leads me to the conclusion that the lameness filter is either designed only to let the non-lame (i.e. the 31337) through, or it is a spelling mistake (which should have said "Lameass Filter"). Either way, Taco, your "lameness" heuristic is pretty poor and I suggest you remove it from Slashcode.
The lameness filter is just suppressing free speech, and will drive more people to using 1337speak when they want to troll. Is improving the signal-to-noise ratio really worth the recent spate of censorship that has taken place on Slashdot — e.g. Taco's "Bitchslapping" [slashdot.org] technique (the thing responsible for the abolition of Slashdot-Terminal, which has also caught some innocent users in the crossfire, such as people who dared to moderate Signal 11 down) and the lameness filter?
Don't get me wrong, I still love Slashdot, but it just isn't the same as it used to be.
Please note: I am only posting AC because I don't want to get on the wrong side of a "bitchslapping". It is now too dangerous to express one's opinion on this site as a logged-in user.
Control predictive ACKing (Score:3)
Also, if I tell the server to dump my 2Meg download into 1 packet, what happens when my wife picks up the phone and interrupts transmission? Will the whole 2Meg need to be resent? IOW, is this technique only useful on extremely reliable connections (which are VERY rare)?
Use sting to counter ACK (Score:2)
Re:ACK hacking (Score:1)
yay extend and embrace? (Score:1)
--
Peace,
Lord Omlette
AOL IM: jeanlucpikachu
download accellerator (Score:1)
Re:DDOS trace won't work. (Score:3)
I personally doubt that there is any defence against a properly executed DDoS attack.
Stefan is not proposing a way to catch the perpetrator, but to locate the computers that are performing the DDoS attack.
As in the article simply put...
The basic idea behind the approach Stefan outlined is for each router that forwards a packet to mark it with information that will allow the recipient of the packet to trace it to its source.
This is oversimplified, but in the article he explains a way to mark packets, in a kinda random way, in such a manner as to be able to trace the source and then take the proper action: temporarily shutting down the delinquent computer's internet connection.
This would not prevent the DDoS attack but it would speed up the process of shutting it down by removing the human factor in tracing the attacks.
Because there is no difference between a proper DDoS and "The Slashdot Effect."
Yes there is! A DDoS attack is a large number of computers sending/requesting massive amounts of information. The "Slashdot Effect" is massive amounts of computers sending/requesting moderate amounts of information. (Except for large downloads, when they are requesting massive amounts of information too — e.g. when Netscape pre-6 was announced.)
DDOS countering? (Score:2)
First, the method employed is to XOR the addresses of the first and second routers on an edge. Now it is clear that you can trace back IF you are sure what the IP of the second router is. However, given that the data can follow multiple paths, how are you ever certain what this IP is? Moreover, as it is a probabilistic process, the second router's IP may be one of many. Is this solved because the IPs of routers along the path are very sparse?
Second, what prevents a DDoS attacker from faking this field, making it look like the attack came through another nearby router?
Third, as most DDoS attacks bounce pings off of remote boxes, this doesn't let you catch the perpetrator, only identify what boxes are pinging you (those boxes, most likely not being aware they are used in a DDoS attack, won't be using these methods). And since this method doesn't allow you to block the DoS attack (most of the packets will be encoded only with routers close to the destination, and you don't want to cut off all traffic), what good is it?
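To make the first point concrete, here's a toy single-path simulation of XOR edge marking, loosely after the scheme the article describes. The marking probability, integer "addresses", and packet count are illustrative assumptions — and note the single fixed path is exactly the easy case; multiple paths and spoofed mark fields are where the objections above bite:

```python
import random

MARK_PROB = 0.04   # per-router marking probability (an assumption for this toy)

def forward(path, n_packets, rng):
    """Each packet carries an (edge, distance) mark; routers overwrite or
    extend it probabilistically, XOR-ing addresses to record edges."""
    marks = set()
    for _ in range(n_packets):
        edge, dist = 0, None            # dist None: packet not yet marked
        for addr in path:
            if rng.random() < MARK_PROB:
                edge, dist = addr, 0    # start a fresh mark at this router
            elif dist == 0:
                edge ^= addr            # second router of the edge XORs itself in
                dist = 1
            elif dist is not None:
                dist += 1               # just count hops past the marked edge
        if dist is not None:
            marks.add((edge, dist))
    return marks

def rebuild(marks):
    """Victim-side reconstruction: peel routers off the edge XORs, nearest first."""
    by_dist = {d: e for e, d in marks}
    routers = [by_dist[0]]              # distance 0: the router next to us
    d = 1
    while d in by_dist:
        routers.append(by_dist[d] ^ routers[-1])
        d += 1
    return routers[::-1]                # attacker side first

path = list(range(101, 111))            # ten routers as toy integer addresses
recovered = rebuild(forward(path, 5_000, random.Random(1)))
```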
A few comments on the article (Score:4)
the `niceness' constraints in TCP: actually, the strategy suggested
will get you blacklisted on quite a few routers, which will then
simply drop all packets originating from your IP address. The routers
use standard traffic profiling tools to spot just the kind of tricks
Janotti describes.
To plug some work done in my department, Azer Bestavros has done
some nice work on network [bu.edu]
profiling : the idea I liked most was a way to make the TCP binary
backoff work better by grouping together similar packets: this can be
done entirely end-to-end, and really gets big improvements in overall
performance. See in particular the paper `QoS Controllers for the Internet'.
"Turbo Stack" Kernel Options?! (Score:3)
Congestion control was developed in response to a congestion *crisis* in the late 1980s. Proper congestion control is a requirement for the Internet to function. The LACK of congestion control in common streaming and multicast protocols is a commonly cited major hurdle for the deployment of multicast applications on the Internet.
It's been a nightmare scenario for a while now that Microsoft (they of the "transient failure" RST packet) would unscrupulously try to gain a competitive advantage by manipulating congestion control. By "breaking the rules" they could make a faster stack. Another scary thought is that silly "Internet Accelerator" products could actually sell REAL accelerators that provide horsepower boosts at the expense of the entire network.
What you DON'T want to see happen is for Linux to gain "turbocharging" via congestion-ignorance. What that does is set up an arms race between Linux and every other stack vendor, and particularly Microsoft. That arms race could easily lead to congestion collapse and yet another Internet scalability crisis.
What Stefan Savage is describing are VULNERABILITIES in common TCP/IP stacks. They need to be fixed, and programs that take advantage of them need to be considered in the same light as programs that get rid of pesky security measures on remote computers --- as exploits.
Just chiming in here, because I think it's odd that people here are paying more attention to the clever backtracking hack Savage came up with and less attention to the important, new security vulnerabilities he has documented.
Re:Interesting But... (Score:1)
I'm actually on CS staff, and a great thing it is. And yes, the sysadmin turnover is still what it was (I assume you're a graduate).
Small world, I guess.
--Nate Eldredge, nate@cs.hmc.edu
Another method of encoding data. (Score:1)
There is one other method of encoding data which would allow for more data throughput. The one I am thinking of is to make a subtle modification to a TCP/IP stack so that it sends twice every IP packet within which it wishes to encapsulate data. The first packet sent would have a sequence number which is made to look wrong, and the second packet would have the correct sequence number. The receiving host could have a similar modification made so that it recognizes when there is data to be found buried in the payload of the (seemingly) error-ridden packet.
With this method, you could potentially encode a larger percentage of covert data per byte of legitimate data sent.
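As a toy model of this scheme — a pure in-memory simulation, no real sockets or raw-packet code, with the sequence-number offset and helper names invented for illustration:

```python
# Simulated "packets" are just (seq, payload) tuples.

def encode(stream_payloads, covert_chunks, start_seq=1000):
    """Emit each legitimate payload twice: first under a deliberately wrong
    sequence number carrying covert data, then under the correct one."""
    wire, seq = [], start_seq
    for legit, covert in zip(stream_payloads, covert_chunks):
        wire.append((seq + 999_999, covert))  # looks like a bogus stray segment
        wire.append((seq, legit))
        seq += len(legit)
    return wire

def decode(wire, start_seq=1000):
    """Receiver keeps the in-order stream and siphons off the 'bad' packets,
    whose payloads are the covert channel."""
    legit, covert, expected = [], [], start_seq
    for seq, payload in wire:
        if seq == expected:
            legit.append(payload)
            expected += len(payload)
        else:
            covert.append(payload)
    return b"".join(legit), b"".join(covert)

wire = encode([b"GET /", b" HTTP/1.0"], [b"sec", b"ret"])
legit, covert = decode(wire)
```

A normal stack would simply discard the out-of-sequence duplicates, which is what makes the channel covert (to a point — a traffic monitor would still see an oddly high duplicate rate).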
Re:libcap? (Score:1)
Thank you for your time.
Re:Cool Cable Modem hack (Score:1)
Re:Interesting But... (Score:2)
And, we dont packet filter here. We run nfr, yeah.. but the only time we packet filter is when a research group asks for full control of a machine, or in other words, they want us to lose responsibilty of that machines actions.
Then we firewall their machines.
It isn't 'clueless', its about where the 'blame' goes when one gets cracked and sits for weeks unnoticed. If it was to happen to the paid sysadmins, I would fire them on the spot if it were obvious there was a crack.
Hrm.
Re:A few comments on the article (Score:2)
Not to mention the fact that the control mechanism they have ("blacklisting IPs" on the backbone) is trivially exploitable by malicious users to deny service to random Internet sites.
Detection at the "edges" --- at third-tier providers and universities --- seems feasible, but expensive and error prone. Predictive acknowledgement is especially susceptible to false positives, as it involves keeping state between data packets and the ACK responses, AND relying on timing and reliability of packet capture/analysis.
I'm willing to bet that no major carrier is actually profiling TCP traffic to find "greedy" stacks. Can you prove otherwise?
Re:A few comments on the article (Score:1)
1. RFC 2309 [ietf.org]
describes the need for some kind of proactive congestion control to
deal with protocols that do not implement any kind of backoff. This
proposal spawned a whole lot of research into testing for fairness.
Sally Floyd, one of the authors of the RFC, has the slides (PS) for a [aciri.org]
talk which gives a good basic overview of the issues.
2. A standard for congestion control is proposed in RFC 2481 [ietf.org]. It is easy
to spot abuse by end users who claim to comply with this proposal.
I'll ask about the blacklisting and post here when I have some
references.
packet != window (Score:1)
In summary, in a noisy or bandwidth-limited environment, playing games with the ACKs probably won't buy you much, and if you congest the pipe to the point where the routers start discarding packets, your arbitrarily large download is likely to take more, not less, time.
Techie? (Score:1)
Would the Slashdot of old prefix the story with "techie"? I guess most people who read Slashdot are no longer techies.
I ask this question in all seriousness. oh well.
Re:Control predictive ACKing (Score:1)
That's sorta the idea. Play games, hose yourself.
Re:Control predictive ACKing (Score:2)
I think the point you are missing is that with optimistic ACKing, ACKs are (hopefully) not sent for data that has not been sent. ACKs are sent for data that has not yet reached the receiving TCP stack (but been sent). This causes the sending TCP stack to think that data is being received faster than it is, which causes it to grow the high end of its data window, causing it to send more packets. The trick of optimistic ACKing is to send back ACKs fast enough to match the growth of the sending stack's window, but to not outpace that growth, so that all of the ACKs are for data that has been sent (just not yet received). This results in exponential growth of the TCP window. Of course, the problem is that while all of the data is sent, it will not necessarily all reach the receiving stack due to packet loss. The solution, as the article mentions, is for the receiving stack to reconnect and use the HTTP Range header to get only those packets that did not make it through the first time.
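The window-growth dynamics can be shown with a toy slow-start model. This ignores congestion avoidance, loss recovery, and real timing, and the bottleneck and round counts are purely illustrative:

```python
def grow_cwnd(rounds, bottleneck, optimistic, init=1):
    """Toy slow start: the congestion window grows by one segment per
    ACKed segment, so ACKing everything doubles it every round trip."""
    cwnd = init
    for _ in range(rounds):
        sent = cwnd
        # An honest receiver can only ACK what squeezed through the
        # bottleneck; an optimistic ACKer ACKs everything 'sent'.
        acked = sent if optimistic else min(sent, bottleneck)
        cwnd += acked
    return cwnd

honest = grow_cwnd(10, bottleneck=8, optimistic=False)
greedy = grow_cwnd(10, bottleneck=8, optimistic=True)
```

After ten round trips the honest receiver's window has flattened out near the bottleneck rate, while the optimistic ACKer has talked the sender into an exponentially larger window — which is exactly the flood.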
Your other suggestion, restricting the maximum window size, is not feasible. What would you set as the maximum size? At work, I am connected to the Internet by a super-high-speed backbone connection (as are all of my friends at school). You have no way to tell what type of connection a person would have, so your maximum window would have to accommodate the fastest, the top 1% say, making it useless for the 99% with connections below that. Besides, that is not a limitation you want in your TCP stack. What happens when we get faster connections? Do you really want to have to patch your TCP stack every time you upgrade your network? Limitations on bandwidth are best left to your router or firewall. Even then though, you are only going to be able to stop the extreme cases.
I think Stefan is thinking along the right lines for a solution: including a random piece of information in a packet that has to be echoed in the ACK. Although that would require changes to the TCP protocol, I think there may be a similar solution that would not.
An alternate solution would be to perhaps send a packet out of order, like a window ahead of the most recently sent packet. The optimistic ACKers would then send back ACKs for the intervening data (because they would assume that the other data was on its way), which mostly would not have been sent yet, or they would send ACKs continuing on from the advance piece of data (to try to meet the expected growth of the window), once again sending ACKs for data that had not yet been sent. The resulting data loss would eliminate any gains from the optimistic ACKing. This solution is not quite as drastic as Stefan's, as it would not require changes to the TCP protocol or the client TCP stack (I think; I will have to do some research to verify that). Since optimistic ACKing relies on being able to predict the next packets sent, I think that this solution, adding some unpredictability to the packets sent using out-of-order packet sending, would effectively neutralize optimistic ACKing.
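A sketch of how the sender could test for cheaters by leaving a deliberate hole, assuming standard cumulative-ACK semantics (the function names and byte counts are invented for illustration):

```python
def send_with_hole(total, hole_at):
    """Byte ranges actually transmitted, deliberately skipping two octets."""
    return [(0, hole_at), (hole_at + 2, total)]

def honest_ack(segments):
    """Cumulative ACK: the highest byte up to which there is no gap."""
    acked = 0
    for start, end in sorted(segments):
        if start > acked:
            break                # hit the hole; can't ACK past it
        acked = max(acked, end)
    return acked

def optimistic_ack(segments):
    """A cheating stack ACKs the highest byte seen, assuming gaps are in flight."""
    return max(end for _, end in segments)

segs = send_with_hole(10_000, hole_at=6_000)
caught = optimistic_ack(segs) > honest_ack(segs)   # ACK past the hole -> cheater
```

An honest receiver's ACKs stall at the hole; any ACK that advances past it acknowledges data the sender never transmitted, which is the tell.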
Nathan Florea
Re:Control predictive ACKing (Score:1)
Jeff
Re:Optimistic ACKing (Score:2)
Er, that was all 60K of http://cnn.com/index.html [cnn.com].
---
Re:ACK hacking (Score:5)
HT Kung has been doing some work on this. MIT and Harvard share the same net link and pay the same price, but MIT has more net users and therefore more connections (as in streams), so they use much more bandwidth. So you do traffic shaping, and stop all those nasty bastards opening 300 concurrent connections from their desktop from hogging the entire network.
Re:ACK hacking (Score:2)
http://www.eecs.harvard.edu/~htk/
Cool Cable Modem hack (Score:2)
Re:libcap? (Score:1)
Yeah. Damn those jerks for actually believing they'll get the high bandwidth they paid the cable company for. :P
How is it that downloading something is "abuse"? I'm paying for 1.5Mbit DSL, if I'm paying for it, I'm gonna max it.
I know cable users are on one big segment, and that's why I'm paying a bit more for DSL, I'm more likely to get the bandwidth I'm rated for.
You get what you pay for.
The point is... (Score:2)
DDOS trace won't work. (Score:5)
A DDOS attack involves two layers of victims. The obvious victim is the recipient of the attack. But before the attack can be launched several (hundred) intermediate systems must be penetrated and exploited. It is this set of victim machines which launches the final attack.
The procedure proposed by Stefan is quite clever and could be used to trace the attack back to the first layer of victims. But that is where it would end. The procedure requires hundreds of packets to make its trace. But the attacking machine is only listening for a single packet - whose IP can be spoofed - for the command to launch the attack. So the perpetrator remains safe behind his proxy army until he starts bragging on irc.
I personally doubt that there is any defence against a properly executed DDoS attack. Why? Because there is no difference between a proper DDoS and "The Slashdot Effect."
Forget the ICMP packets. Want to take down a web site? Flood it with web page requests. You now have nothing to filter on and the legitimate users are crowded out.
Re:Control predictive ACKing (Score:4)
Check me if I'm wrong, but wouldn't optimistic ACKing become completely counter-productive if the server's TCP stack simply discarded all data for which it had received an ACK (regardless of whether that data had been transmitted or not), combined with a finite maximum window size and discarding ACKs that do not correspond with the end of a packet?
Re:Optimistic ACKing (Score:1)
Re:DDOS trace won't work. (Score:1)
Another thing that bears mentioning is the fact that DDoS attacks can be (and may already be being) ACKed before they arrive... This means only a small number of requests would have to be issued from a moderate base of compromised systems. Stefan was suggesting that someone wanting to boost performance use HTTP to re-request page chunks that didn't arrive fully, but somehow I don't think the DDoS people really care about receiving the information intact. :)
I'm not a TCP/IP guru, but would a possible remedy be to vary the length of data being requested, so at least the ability to pre-ACK the transfer would be one step harder?
Tracking back to the attacking hosts at least provides the victim with the ability to deny access relatively local to the attacking machines. Even if the attacking computers are spoofing, if you know a particular machine is being routed through, you can deny access from that router in a somewhat automatic manner with Stefan's suggestions in place.
Re:Control predictive ACKing (Score:2)
So, how would this differ from semi-randomly modifying the packet size? I 'think' the benefit of modifying the packet size would be that it would make for a simpler modification of current stacks. Just modify the 'window growth' algorithm. Then if you don't ACK properly, the server assumes you're cheating, gets confused, and drops your connections. A very strong disincentive. It just seems to me that sending a packet out of order would require a lot more bookkeeping and much more modification of the current algorithms.
Re:Control predictive ACKing (Score:1)
The way a TCP ACK works is that it says "I have received all of the data up to this octet." TCP packet size is not determined by window size or acknowledged data. It is determined by the MTU and the urgency of the data. If the stack can wait until it has enough data to create a packet of the Maximum Transmission Unit size, it will (there is a timeout). If the receiver has indicated that it needs the data ASAP, the TCP stack will push out the data octets as fast as it can (with a very short wait), regardless of how small the packet is (IIRC, I am a bit rusty on this part). So altering the packet size will have no effect.
Also, the out-of-order octets would not need to introduce much overhead. Simply a sequence number far enough ahead to not be reached for a while ("while" is intentionally vague; I need to do some more research on how far ahead the stack could and should go). If that is ACKed without the TCP stack having sent the previous data, the stack knows that the receiving stack is cheating and can close the connection.
The key idea here is to send information that cannot be acknowledged (because octets before it have not been sent) and see if it *is* acknowledged. Perhaps a better implementation would be: as the send window grows, the TCP stack skips two octets but otherwise continues sending normally. A valid stack would not be able to acknowledge any of the subsequent octets, but an optimistic-ACKing stack would. And because the receiving stack caches the out-of-order octets, once the sending stack determines it is dealing with a legitimate receiver and sends the two missing octets, all of that data is still valid and there is very little inefficiency introduced. The keys here that I still need to work out are:
Nathan Florea
Re:Control predictive ACKing (Score:1)
If both ACKs are for data that has been received, then the first ACK received won't cause any data loss and the second ACK received will cause nothing to be discarded because it's already been handled by the first one. There's no need to alter current behavior with regard to that part of things. It only hits clients who ACK data that they haven't yet received, and should merely cause the client to stall with a hole in the data stream it can't fill in.
Re:"Turbo Stack" Kernel Options?! (Score:2)
Savage has done a good job on this. I think I see a way to stop the optimistic ACK attack (the hard case) with mods to the attacked end only, but it's ugly and needs more thought. The general idea is to introduce some randomness into the segment sizes sent, and if the replying ACKs don't reflect this, the ACKs are probably fictitious.
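A rough sketch of that randomized-size check. The base size, jitter, and helper names here are assumptions for illustration, not anyone's actual design:

```python
import random

def segment_boundaries(total, rng, base=1400, jitter=100):
    """Send with per-segment sizes jittered so the cumulative-ACK points
    (segment boundaries) are unpredictable to the receiver."""
    bounds, pos = [], 0
    while pos < total:
        pos = min(total, pos + base + rng.randrange(-jitter, jitter + 1))
        bounds.append(pos)
    return bounds

def acks_plausible(acks, bounds):
    """Genuine cumulative ACKs land on boundaries we actually used;
    fabricated ACKs have to guess them and will miss."""
    legal = set(bounds)
    return all(a in legal for a in acks)

rng = random.Random(7)
bounds = segment_boundaries(100_000, rng)
```

A receiver echoing real boundaries passes; one inventing ACK numbers (even off by a single byte) gets flagged as fictitious.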
Another class of attack suggests itself. Streaming protocols over UDP are probably very vulnerable to attacks like this. If you can convince some video server that you have huge bandwidth, you may be able to get it to flood a section of the net. Those proprietary streaming protocols need a hard look.
John Nagle
Little more info on the source... (Score:2)
He's looking for an academic job, and some of his papers (especially those from the project team that created the SPIN [washington.edu] kernel) are quite impressive.
--
Gonzo Granzeau