The Internet Upgrades

Fast TCP To Increase Speed Of File Transfers?

Wrighter writes "There's a new story at Yahoo about a new version of TCP called Fast TCP that might help increase the speed of file transfers. Sounds like it basically estimates the maximum efficient speed of your network, and then goes for it, dumping a lot of time-consuming error checking." There's also an article at the New Scientist with some additional information.
  • by Anonymous Coward on Thursday June 05, 2003 @12:33AM (#6121103)
    Faster pr0n!!!!
    • C'mon.

      HUGE performance increase is possible, just by omitting the optional EVIL bit.

  • by havaloc ( 50551 ) * on Thursday June 05, 2003 @12:33AM (#6121106) Homepage
    Let's see. Transmission without error-checking is called UDP, isn't it?
    • by Ark42 ( 522144 ) <slashdotNO@SPAMmorpheussoftware.net> on Thursday June 05, 2003 @12:36AM (#6121125) Homepage
      No, UDP has error checking per packet via a checksum. What they are talking about is probably something to do with TCP "slow start", where a TCP connection's speed increases slowly at first so as not to flood the network. I think the window grows exponentially with each packet, then backs off some when packets are dropped.
      • by Ark42 ( 522144 ) <slashdotNO@SPAMmorpheussoftware.net> on Thursday June 05, 2003 @12:40AM (#6121144) Homepage
        The article seems kinda stupid to me; it describes a basic "stop-and-wait" protocol where only one packet can be in transit at a given time, and if it gets lost, it is retransmitted. I am pretty sure normal TCP has a window where it can send up to X packets at once and retransmit any particular missing one. I am sure there is room for improvement, but TCP is a fairly complex protocol already, and the article seems to forget about all that.
        • by zcat_NZ ( 267672 ) <zcat@wired.net.nz> on Thursday June 05, 2003 @01:03AM (#6121258) Homepage
          I can only assume that the 'description' of how normal TCP works has been 'simplified to the point of being wrong' by reporters, because it is totally wrong. The description of 'why fastTCP is better' then proceeds to describe 'normal tcp as it actually works.'

          And then they totally confuse the issue by mentioning that you can use multiple high-speed links in parallel to get higher overall bandwidth. Boy, am I impressed.
          • Yeah, I was right... there's a link to Caltech further down which actually describes what they're talking about. Good to know that nobody but the reporters really thinks we're all still using xmodem/kermit-style one-packet-and-wait protocols.
          • by tincho_uy ( 566438 ) on Thursday June 05, 2003 @05:34AM (#6121919)

            No, the guy at New Scientist got it right... TCP uses an AIMD (additive increase, multiplicative decrease) rate control algorithm. The rate at which you send is controlled by the window size at any given time. If you detect a loss, you decrease your window, dividing its size by 2. If packets are arriving OK, you make small increments to your window size.

            This new protocol uses a different window management algorithm. It uses the ACKs as probes (I guess they measure delays), and if 'the coast is clear', it maxes out its transmission speed.

            I do wonder about FAST TCP's congestion control capabilities, though... As for the poster who talked about slow start, sorry pal, but slow start is just the name... at that stage, the transmission rate actually increases quite fast.
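
To make the delay-probing idea in the comment above concrete, here is a toy window-update loop in that spirit: the window keeps growing while the sampled RTT stays close to the uncongested baseline, and shrinks as queueing delay builds up. The function name, constants, and exact update rule are illustrative assumptions, not the published FAST TCP algorithm.

```python
# Toy delay-based congestion window update: grow while the measured RTT is
# near the uncongested baseline, back off as queueing delay appears.
def update_window(cwnd, base_rtt, sampled_rtt, alpha=100, gamma=0.5):
    """Return the new congestion window, in packets.

    base_rtt    -- smallest RTT seen so far (approximates propagation delay)
    sampled_rtt -- latest RTT measurement (propagation plus queueing delay)
    alpha       -- how many packets we aim to keep queued in the network
    gamma       -- smoothing factor between the old and the target window
    """
    target = (base_rtt / sampled_rtt) * cwnd + alpha
    new_cwnd = (1 - gamma) * cwnd + gamma * target
    # Never more than double in one step, never drop below one packet.
    return max(1.0, min(new_cwnd, 2 * cwnd))

# With no queueing delay yet, the window ramps up without waiting for a loss:
w = 10.0
for _ in range(5):
    w = update_window(w, base_rtt=0.020, sampled_rtt=0.020)
    print(w)   # 20.0, 40.0, 80.0, 130.0, 180.0
```

The contrast with loss-based TCP is that the signal (RTT inflation) arrives before packets are dropped, so the sender can level off instead of oscillating around the loss point.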

    • by marbike ( 35297 ) on Thursday June 05, 2003 @12:37AM (#6121127)
      Well, without error checking and session state.

      Really, I am not sure that this is a good idea. TCP includes error checking for a reason. I see this as a way to transmit corrupted files, not a way to speed up the internet experience as a whole.
      • by harvardian ( 140312 ) on Thursday June 05, 2003 @01:35AM (#6121376)
        I see this as a way to transmit corrupted files, not a way to speed up the internet experiance as a whole.

        Without trying to be mean, you see it that way because you don't understand what's going on (mostly because the post was misleading). Fast TCP packets will still have a checksum and everything, so you're not going to get corrupted files. The change here is that normal TCP halves its "window size", or the amount of info that's out on the network at once without receiving an acknowledgement of receipt, with each error. This means that if there's one minor slowdown when 10 packets are currently out from your computer to the recipient (you've put out 10 packets without getting an ACK back yet), then your computer will reduce its window size and only allow 5 packets to be out at a time, effectively halving the transmission rate. Since TCP continually tries to get faster, it will always hit a bottleneck, resulting in your connection vacillating between optimal speed and half of that (approximately, I guess it might be worse than this on high-speed networks based on what I've read here).

        In Fast TCP, they do this "congestion control" in a different way. Rather than halving the connection speed with every slowdown to ensure stability, they send as much data as possible as long as the network seems clear on the recipient's end (I think they estimate this with round-trip time of some sort).

        So the "error checking" being changed by Fast TCP is NOT bit checking -- it's transmission rate checking. You'll still always get your files intact.
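
The halve-on-loss behaviour described above is easy to see in a toy simulation. This is only the bare AIMD rule (one segment of additive increase per round trip, halving whenever the window exceeds an assumed fixed bottleneck); real stacks layer slow start, timeouts, and SACK on top of it.

```python
# Bare AIMD loop: add one segment per round trip, halve on a drop.
def aimd_trace(bottleneck=20, rounds=30):
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd > bottleneck:          # window exceeded capacity: a drop
            cwnd = max(1, cwnd // 2)   # multiplicative decrease
        else:
            cwnd += 1                  # additive increase
    return trace

print(aimd_trace())
# [1, 2, 3, ..., 20, 21, 10, 11, 12, ...]  -- the familiar sawtooth
```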
        • by harvardian ( 140312 ) on Thursday June 05, 2003 @02:03AM (#6121449)
          FYI, a great website for understanding how TCP congestion control works is here [mkp.com]. It explains how TCP additively increases its window size as traffic goes through okay but then halves its window size when it runs into a problem.

          And I should clarify my first post as well by explaining what a "transmission error" is that would cause the window size to halve. From the article above:
          It is rare that a packet is dropped because of an error during transmission. Therefore, TCP interprets timeouts as a sign of congestion, and reduces the rate at which it is transmitting.
          Basically, what I mean by a "transmission error" is a timeout -- the sender sends a packet and never gets an ACK for it. TCP works on the premise that packets are mainly dropped when congestion is high enough for routers to drop packets because of maxed buffers. Thus it makes sense to reduce transmission rate when no ACK is received to adjust to the capacity of the network.
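
For reference, the way a sender usually decides that such a timeout has happened is by keeping a smoothed RTT estimate and an RTT variance, and retransmitting when no ACK arrives within roughly SRTT + 4×RTTVAR. The sketch below follows the usual Jacobson/Karels scheme (approximately what RFC 6298 later standardized); clock granularity, the minimum RTO, and backoff are omitted.

```python
# Smoothed RTT / retransmission timeout estimator, Jacobson/Karels style.
class RtoEstimator:
    def __init__(self, first_rtt):
        self.srtt = first_rtt
        self.rttvar = first_rtt / 2.0

    def sample(self, rtt):
        # Update the variance first, using the previous smoothed RTT.
        self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
        self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return self.rto()

    def rto(self):
        return self.srtt + 4.0 * self.rttvar

est = RtoEstimator(first_rtt=0.100)       # seconds
for r in (0.110, 0.095, 0.300):           # a delay spike inflates the RTO
    print(round(est.sample(r), 3))
```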
        • Normally, when you're sending stuff from point A to point B, there is a lot of buffering on the way. At the point where you lose a packet, you've overflowed a buffer somewhere. Halving your throughput at that point is probably a good idea - you may be backing off your data transmission a lot right now, but there should be plenty of data sitting in buffers out there that needs to be cleared anyway.

          If you only back off a little bit, what happens is you just go overrun that same buffer again, and just send o
        • by skaya ( 71294 )
          Since TCP continually tries to get faster, it will always hit a bottleneck, resulting in your connection vacillating between optimal speed and half of that (approximately, I guess it might be worse than this on high-speed networks based on what I've read here).

          This explanation must be somewhat simplistic, because plenty of people have done 100 Mbps transfers on Fast Ethernet LANs (even with a couple of routers), and we did not notice the transfer speed oscillating between 50 and 100 Mbps.

          Also,

          • This explanation must be somewhat simplistic, because plenty of people have done 100 Mbps transfers on Fast Ethernet LANs (even with a couple of routers), and we did not notice the transfer speed oscillating between 50 and 100 Mbps.

            That's because the oscillation happens so fast that you can't see it happening (or see the next paragraph for an alternate explanation). I mean, it is not a disputed fact that TCP will frequently halve its window during a large file transfer under normal Internet cond
    • by mondoterrifico ( 317567 ) on Thursday June 05, 2003 @12:48AM (#6121191) Journal
      No, UDP is a connectionless protocol, kinda like how our postal service works. TCP is a virtual connection more like when you make a phone call.
    • by subreality ( 157447 ) on Thursday June 05, 2003 @12:49AM (#6121201)
      #1. No. UDP has error checking. The difference between UDP and TCP is that TCP is a connection-based, sequence-enforcing protocol, where UDP is basically raw connectionless datagrams that arrive in any order and you have to handle packet loss and reordering in your application.

      #2. RTFA.

      #3. They're not getting rid of error checking. It sounds like they're reworking the windows for ACKs in TCP to allow better streaming over high-speed but realistic (i.e., slightly lossy) networks. Current TCP aggressively backs off when packet loss is detected, to prevent flooding the weak link in a network connection. It works really well at consumer network speeds, but on very high-speed networks (e.g., 45 Mbps), even very light packet loss will drop your speed dramatically. TCP just wasn't meant to scale to these kinds of speeds, and some reengineering needs to be done to make it work smoothly. Many of the current extensions to TCP have made matters a lot better, but it's still going to have trouble scaling to gigabit, high-latency networks, and it's best to start dealing with these issues early.
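
The "light loss wrecks a fast link" point above can be quantified with the standard back-of-the-envelope model for loss-based TCP throughput (the Mathis et al. approximation, throughput ≈ 1.22·MSS/(RTT·√p)). It is a steady-state rule of thumb, not an exact prediction; the numbers below are one illustrative scenario.

```python
from math import sqrt

# Rule-of-thumb ceiling for loss-based TCP: ~1.22 * MSS / (RTT * sqrt(p)).
def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    return 1.22 * mss_bytes * 8 / (rtt_s * sqrt(loss_rate))

# 1460-byte segments, 100 ms RTT, 0.01% packet loss:
print(tcp_throughput_bps(1460, 0.100, 1e-4) / 1e6)   # ~14.2 Mbps
# Even at one loss per ten thousand packets, the sustainable rate is well
# under a 45 Mbps pipe -- the motivation for rethinking the congestion
# response on fast, long-latency links.
```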

      • by Anonymous Coward
        Actually UDP error checking is optional. If the checksum field is zero, the packet integrity is not validated.

        Tom
    • by Snoopy77 ( 229731 ) on Thursday June 05, 2003 @12:56AM (#6121229) Homepage
      The title seems to suggest that it is called "Fast TCP"??

      Gee ... we're not even reading the title of the stories anymore.
    • Well this DOES have error checking... Just a different type.

      That said, UDP is probably a better option for 99% of high-bandwidth traffic. Higher-level error checking could accomplish the same thing with potentially less overhead.
  • by diesel_jackass ( 534880 ) <travis...hardiman@@@gmail...com> on Thursday June 05, 2003 @12:34AM (#6121109) Homepage Journal
    This would be badass when combined with BitTorrent [bitconjurer.org]!
  • SmartTCP. It sounds like the equipment is constantly tweaking the connection for optimum throughput.
  • by Emugamer ( 143719 ) * on Thursday June 05, 2003 @12:35AM (#6121119) Homepage Journal
    Why not just number all packets between two hosts, and if the recipient doesn't receive a packet, it requests that particular packet be resent? I see problems with man-in-the-middle attacks, but is there any other reason?

    just wondering
  • zmodem??? (Score:5, Interesting)

    by case_igl ( 103589 ) on Thursday June 05, 2003 @12:37AM (#6121130) Homepage
    I remember back in my BBS days what a big deal zmodem was when it started getting used all over the place. As I recall, it dynamically changed the block size based on line quality.

    So when you sent a block of 2k and got no errors, the frame size increased to 4k...8k... etc etc... Sounds like a similar approach.

    Case

    P.S. That was a long time ago in a FidoNet far far away, so my terms may be off.
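
The block-size adaptation remembered above can be sketched in a few lines: grow the block after clean transfers, shrink it after errors. The sizes and the doubling/halving policy are made up for illustration and are not the actual ZModem specification.

```python
# Toy line-quality adaptation: double the block on success, halve on error.
def next_block_size(current, had_error, minimum=64, maximum=8192):
    if had_error:
        return max(minimum, current // 2)   # noisy line: back off quickly
    return min(maximum, current * 2)        # clean line: ramp 2k -> 4k -> 8k

size = 2048
for err in (False, False, True, False):
    size = next_block_size(size, err)
    print(size)    # 4096, 8192, 4096, 8192
```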
    • Back in the day, when I had a dialup modem and the option of connecting to the net via PPP or via a dialup shell, I observed that for me zmodem was MUCH faster than FTP over TCP/IP. Very useful for that super big demo, or Linux distribution.

      As a result I cached stuff on my provider, and set to download overnight. Also Zmodem has a very spiffy resume feature my FTP at the time didn't support. My provider supported this "suck up our lines at night" as it left the lines open for their business cu
      • Re:zmodem??? (Score:5, Informative)

        by polymath69 ( 94161 ) <dr.slashdot@NoSPam.mailnull.com> on Thursday June 05, 2003 @02:07AM (#6121458) Homepage
        Note, there still is, to my knowledge, nothing slower than Kermit.

        At the risk that you're trolling, Kermit is actually very good indeed (after 1990 or so), assuming you set your options correctly.

        The defaults are slow, but they work; that's Kermit's raison d'etre and why it's still around. But Kermit was probably also the first protocol to implement sliding windows and configurable blocksizes; Zmodem probably got that idea from Kermit. Set your options correctly, and Kermit's damn good.

        The age of the BBS is over (I ran one for about 12 years) but I'm pretty sure I'll use Kermit again before I have cause to use Zmodem again.

        • Don't be so sure. (Score:4, Informative)

          by fireboy1919 ( 257783 ) <rustypNO@SPAMfreeshell.org> on Thursday June 05, 2003 @03:32AM (#6121640) Homepage Journal
          It comes with Solaris right now. You can also get it for Linux. Why?

          It's useful when the ssh client has it built in because you get pretty much the same speed and the ability to download between clients.

          By the way, I know about two zmodem-enabled ssh clients:
          1) SecureCRT [vandyke.com]- nonfree/Windows only.
          2) Zssh [sourceforge.net] - open-source, cross-platform.

          The actual applications which initiate the transfer are called "rz" and "sz."
          • Re:Don't be so sure. (Score:3, Interesting)

            by evilviper ( 135110 )
            It's useful when the ssh client has it built in because you get pretty much the same speed and the ability to download between clients.

            Any good reason not to just use SCP? I know you can transfer files in the same SSH window (using zmodem), but it wouldn't take too much work to modify the SSH client to start a file copy over the current connection using SCP...

            So what's the advantage here?
    • Thanks for all the memories. I just got flashbacks from the days of Primal BBS, Demon's Abyss BBS (133t w@r3z sites around Long Island, NY) and waiting hours to download Wing Commander on three 1.2 MB floppies, zipped up of course with PK-Zip v2.04g. :)

      ZModem was sweet indeed when it came out. I went from 2400 baud to 14,400 baud using Zmodem and became the cool kid on a very geeky block.

    • Re:zmodem??? (Score:5, Interesting)

      by joshuac ( 53492 ) on Thursday June 05, 2003 @01:51AM (#6121419) Journal
      Actually, the Zmodem that was widely used (real zmodem) maxed out at 1k blocks, but it would steadily scale down to as small as 16 byte blocks (if I recall correctly).

      There were variants that did 8k blocks (and often referred to themselves as Zmodem8k), but none of these were true zmodem protocol.

      Still, nothing can be quite as fast as ymodem-g :)

      A little more on topic: what they are describing does not dynamically scale the packet size, only dynamically adjusts the transmission speed up to the point that ACKs start slowing down, but (hopefully) before any packets actually get dropped. I suspect Disney and such will be quite disappointed if they think they are going to get a 6000x speedup in practical use as hinted at in the articles. Perhaps a 10% speedup for Joe Blow on a dialup modem, _maybe_. Take a look at your connection some time when downloading a file; you will probably find you can already peg your bandwidth quite nicely.
    • Re:zmodem??? (Score:5, Interesting)

      by G27 Radio ( 78394 ) on Thursday June 05, 2003 @02:21AM (#6121493)
      I used to run an Apple II BBS/AE in the mid to late 80's (201). X-modem was king when I started. But Y-modem and then Z-modem surpassed it.

      X-modem transmitted files as 256 byte blocks of data along with an 8 bit checksum (IIRC.) The receiver would respond with an ACK (Acknowledgement) or a NAK (Negative Acknowledgement) after each block. If it was a NAK the sender would re-send the block. If it was an ACK it would send the next block.

      Y-modem increased the block size to 1k which was helpful since the turnaround time between packet and acknowledgement was wasting a lot of time. It also used a 16-bit CRC (Cyclic Redundancy Check) instead of an 8-bit checksum. Apparently the CRC was much more reliable.

      Around the time that error correcting modems started becoming popular (USR Courier 9600 HST) a variation of Ymodem popped up called Ymodem-G. Ymodem-G would send 1k-blocks with CRC's non-stop without waiting for an ACK. If the receiver got a bad block it would simply abort the transfer and you'd have to start it over.

      Zmodem would also send blocks and CRC's non-stop unless it got a NAK back. It would resume sending at the block that caused the NAK. The variably sized blocks were pretty cool too.

      Feel free to correct any errors. It's been a long time.
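
The X-modem style loop described above (one block, one checksum, wait for an ACK or NAK) looks roughly like the sketch below. The `channel` object with `send`/`recv`, the framing, and the retry limit are assumptions for illustration, not the real XMODEM wire format.

```python
ACK, NAK = b"\x06", b"\x15"

# Stop-and-wait sender: one block in flight, retransmit until ACKed.
def send_file(data, channel, block_size=256, max_retries=10):
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        checksum = sum(block) & 0xFF            # simple 8-bit checksum
        for _ in range(max_retries):
            channel.send(block + bytes([checksum]))
            if channel.recv(1) == ACK:          # receiver verified the block
                break                           # move on to the next block
        else:
            raise IOError("too many retries")   # give up on a hopeless line
```

The turnaround wait after every block is exactly the overhead that Ymodem-G and Zmodem streaming removed, and it is also what the popular articles wrongly attribute to modern TCP.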
    • Re:zmodem??? (Score:3, Insightful)

      by Bios_Hakr ( 68586 )
      The TCP sliding window protocol does this. When you request a page, some data is sent to you. You send an ACK. The server then sends that same amount of data, plus some more. Then you send an ACK. This continues until you stop sending ACKs. Then the server knows that it needs to back off a bit.

      I'm pretty sure this is a standard in TCP/IP.

  • Uh oh... (Score:5, Funny)

    by ctishman ( 545856 ) <ctishman@NOSPaM.mac.com> on Thursday June 05, 2003 @12:38AM (#6121137)
    "Caltech is already in talks with Microsoft and Disney about using it for video on demand," the magazine added. "Hey! Let's take a technology that's potentially revolutionary, and give it to Microsoft!" Yay for Caltech!
    • "Hey! Let's take a technology that's potentially revolutionary, and give it to Microsoft!"

      I have a feeling that when they give it to them, they will receive a check with around eight to ten 0's in it.
  • Cool (Score:2, Insightful)

    speed or accuracy..either one...
  • Window size anyone? (Score:2, Interesting)

    by sigxcpu ( 456479 )
    Isn't estimating the effective bandwidth of the link exactly what the TCP window is all about?
    I read the article and did not understand what they add that is better than the standard TCP enhancements of selective ACK and big window sizes.
    Clue, anyone?
  • I'm sorry, but without further technical details, this sounds like the sort of technical mumbo-jumbo that snake-oil salesmen were peddling back in the dot-com era.
    • without further technical details, this sounds like the sort of technical mumbo-jumbo that snake-oil salesmen were peddling back in the dot-com era.

      The New Scientist [newscientist.com] makes it quite clear how Fast TCP is done, if you know anything about how TCP works (and how the window size halves in the event of packet loss).

      shame on a relatively low-ID user making such trollish comments...
      • by IvyMike ( 178408 ) on Thursday June 05, 2003 @02:07AM (#6121457)
        First of all, the New Scientist article doesn't mention anything about TCP sliding windows or congestion control.

        The whole "driving a car while looking 10 meters ahead" analogy ignores a lot of the work TCP does to keep things moving fast. The "transmits, waits, then sends the next packet" paragraph is almost deliberately misleading.

        It tosses about a "6000 times faster" statistic without explaining 6000 times faster than what. Is my dad's 28.8 modem going to suddenly be getting throughput of 172Mbps? Of course not, but what difference is it going to make to him? I think maybe none at all, and FastTCP is only for very large network hauls, but the article has claims about me downloading a movie in 5 seconds.

        My DSL line is 768kbps; I get downloads of large files through it of around 85kBps, which is a data throughput rate of 680kbps. That means that all the layers of the OSI burrito, including resends, checksums, and routing information, add up to about 12% overhead. Not the best, but not that bad, either. How much improvement is FastTCP going to get me?

        In their practical test of Fast TCP, two connected computers got 925Mbps, while "ordinary" TCP got just 266Mbps. Even that's pretty unbelievable to me; I find it hard to believe that TCP was running at about 25% efficiency.

        Extraordinary claims demand extraordinary evidence. Like I said before, without further technical details, this doesn't actually sound all that different than the claims of Pixelon [thestandard.com], which also had an eye towards video on demand.

        Maybe they've got something; someone linked to the actual Caltech article, which I haven't had a chance to read in detail (and wasn't linked to at the time I started my post). Caltech certainly is a cool place, so there is probably something interesting going on. But the New Scientist article is a fluff piece, pure and simple, and if calling shenanigans on it makes me a troll, so be it.
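
For what it's worth, the 12% figure quoted earlier in this comment checks out as simple arithmetic (taking the stated link rate and download speed at face value):

```python
link_kbps = 768                      # nominal DSL downstream rate
payload_kbps = 85 * 8                # 85 kB/s of useful data = 680 kbps
overhead = 1 - payload_kbps / link_kbps
print(f"{overhead:.1%}")             # ~11.5% total protocol overhead
```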
    • I stand by my claim that the New Scientist article sounds like snake oil. It's a misleading article, pure and simple.

      But, now that I've read some of the documents from the Caltech site [caltech.edu], and I think I understand the claims, the research is fairly interesting, at least in the world of "ultrascale" networking. Of course, I'm just an unfrozen caveman engineer, and that world confuses and frightens me, so my understanding might be slightly off. Here goes anyways.

      As I understand it, the authors are saying th
  • Great! (Score:5, Funny)

    by Lu Xun ( 615093 ) on Thursday June 05, 2003 @12:43AM (#6121160)
    Who needs erro>*H~@}&)aA=cking anyway?
  • by nweaver ( 113078 ) on Thursday June 05, 2003 @12:43AM (#6121163) Homepage

    Looking at the information on their web page at caltech [caltech.edu], the FAST network project is working with alternate TCP window sizing schemes.

    Namely, instead of reducing window size in the case of packet loss, window size is changed based on round trip latency. The problem being that reducing the window size in response to loss works well on most networks, but has a serious problem when dealing with very high-bandwidth links.

    In such a case, the conventional TCP windowing will shrink greatly in response to even one or two lost packets, which when you are sending a LOT of data, will occur.

    The real work (and it seems to be somewhat covered in their web pages) is how to use latency for congestion detection/control, but I haven't read it in enough detail to quite understand this, NOR how this scheme will interact with conventional TCP streams.

    • How about TCP Vegas [nec.com]? They use RTT measurements to proactively determine congestion.

    • Namely, instead of reducing window size in the case of packet loss, window size is changed based on round trip latency. The problem being that reducing the window size in response to loss works well on most networks, but has a serious problem when dealing with very high-bandwidth links.

      In such a case, the conventional TCP windowing will shrink greatly in response to even one or two lost packets, which when you are sending a LOT of data, will occur.


      I don't have a ton of knowledge about TCP, but is it me,
    • That idea seems to be more or less straight from TCP Vegas. Is it clear to you what they're doing differently?
    • by stj ( 607714 ) on Thursday June 05, 2003 @02:15AM (#6121478) Homepage Journal
      Well, as far as I remember, there were more problems than that.

      The problems with very high bandwidth links, TCP and RTT estimation start from the fact that TCP can only update that estimate once per ACK received, and on very high bandwidth links conditions change much faster and to a greater degree. So, TCP can't effectively estimate the available capacity, since it cannot probe the channel frequently enough.

      Caltech's Vegas looks great on pictures; however, there were papers pointing out that it's not exactly fair, especially with multiple bottlenecks in a real-world topology. Then there were papers fixing that, and papers criticizing those solutions, and as a result I don't see Vegas anywhere around (except for some Cisco routers maybe) - the best I see is NewReno+SACK+FACK+ECN. I can imagine that a more aggressive scheme will have an advantage over TCP, although NewReno is pretty aggressive compared to most RT rate control schemes, so it's difficult to imagine anything more aggressive than that which would still yield in times of congestion.

      The best description of what they really propose seems to be in their Infocom's paper from April this year. That looks pretty good, too. But again, as it was with original Vegas, it will probably come out that it has some flaws, they will be fixed, the fixes will have some flaws, and so on. And for the time being everybody will continue to use NewReno. *snicker*

      Fact is that there is enormous (partly bad) experience with using TCP Reno, and with the current abundance of capacity in the backbones, it doesn't seem that there is much interest in precise traffic control. I have yet to have my first problem watching a movie trailer. ;-)

      One thing worth mentioning - no reasonable application uses TCP for multimedia (why Disney, then?). RTP/UDP with a reasonable model-based rate control can easily at least match Vegas, and often outperform it, because of the kind and amount of feedback used to adapt to the network conditions for a particular application. Caltech's scheme was constructed for ultra-high-speed networking and tested for processing the vast data volumes produced by the LHC, to overcome deficiencies of traditional TCP in that case. They have a real nice article on experiments with that, with good results. But that's not quite the same as the typical situation.
  • Hmm, just think how much faster IIS can get infected with this one!
  • Yes, but (Score:2, Funny)

    by Anonymous Coward
    ...will Fast TCP have the Evil Bit?
  • not optimistic (Score:5, Insightful)

    by ravinfinite ( 675117 ) on Thursday June 05, 2003 @12:46AM (#6121181)
    "When the researchers tested 10 Fast TCP systems together it boosted the speed to more than 6,000 times the capacity of the ordinary broadband links.

    6,000 times? The tests done in labs are usually stripped-down and the results overstated just for statistical pleasure. In the real world, however, such figures are rarely achieved.
  • by nerdwarrior ( 154941 ) <might@cs.[ ]h.edu ['uta' in gap]> on Thursday June 05, 2003 @12:46AM (#6121183) Homepage
    Measuring the round-trip time for packets and using this information to predict the bandwidth-delay product is nothing new. This is essentially one of the effects achieved with existing TCP congestion control algorithms such as TCP Tahoe, TCP Vegas and TCP Reno. The article is light on details and doesn't lead me to believe that they've done anything significantly different from these three. Furthermore, if it *is* doing something different, how can it still obey the existing congestion control algorithms without thrashing? After all, we can all boost the speed of our TCP connections by simply turning off congestion control, as long as nobody else does the same. ;) [UDP's lack of congestion control is precisely why a few streaming video users can clog up an entire pipe for themselves, screwing everyone else who's using it.]
  • smells like... (Score:5, Interesting)

    by wotevah ( 620758 ) on Thursday June 05, 2003 @12:46AM (#6121185) Journal
    When the researchers tested 10 Fast TCP systems together it boosted the speed to more than 6,000 times the capacity of the ordinary broadband links.

    Does that mean TCP has 99.99% (humor me) overhead?

    But seriously, you can probably use large windows to send streams of packets such that a single ACK is required for a bunch of them, but it's impossible to achieve 6000x more throughput just by "optimizing" the TCP protocol. Even over the Internet (I'm not even talking about LANs, since there is obviously not that much room for improvement there due to the low latency).

    • Re:smells like... (Score:5, Interesting)

      by wotevah ( 620758 ) on Thursday June 05, 2003 @12:58AM (#6121241) Journal
      It's lame to respond to my own post, but the other article points out that they actually used a different architecture where TCP achieved 266Mbps and their optimized version got 925Mbps, which the author chose to compare with broadband speeds (6000x the capacity of broadband).

      Still, those numbers don't look right. AFAIK TCP has 5-15% overhead, so they must have been using a high-bandwidth, really-high-latency line to get that much improvement. Really high.

      Under these conditions (that obviously are unfavorable to TCP) I would be curious to see how "fast TCP" compares to any real streaming protocol (UDP-based with client feedback control). I have a feeling that the UDP stream is faster.

      • by Daniel_ ( 151484 ) on Thursday June 05, 2003 @03:04AM (#6121573)
        If I'm reading the article right, they're using the same technique that a doctoral candidate did his PhD thesis on at OSU about 3 years ago.

        TCP is extremely bursty - it pumps all the packets it can, as fast as it can, over the network as soon as the window opens. Then it waits for replies to all the packets. What typically happens is that the burst from the NIC overloads the local router, causing numerous dropped packets. This gives the sending machine the impression that the network is overloaded and results in a ~90% reduction in bandwidth utilization.

        The change is to include a timer that allows the NIC to space the initial burst over the entire window. This prevents the overloading at the router and permits the NIC to reach near its theoretical maximum bandwidth.

        In tests involving one router, the results were an order of magnitude increase in bandwidth utilization. I'd be interested in seeing their test setup to see how they got such dramatic improvements. Normally TCP/IP is not that inefficient - even with its extreme burstiness.
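
The pacing idea described in this comment, spreading a window's worth of segments evenly over one RTT instead of sending them back-to-back, can be sketched as follows. The `send` callback and the use of `time.sleep` are stand-ins for illustration; a real stack would use a kernel-level timer rather than sleeping in user space.

```python
import time

# Pace one window of segments evenly across a round-trip time.
def paced_send(segments, rtt_s, send):
    if not segments:
        return
    gap = rtt_s / len(segments)   # even spacing avoids a line-rate burst
    for seg in segments:
        send(seg)
        time.sleep(gap)           # e.g. 64 segments over 40 ms -> 625 us apart
```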

  • by Madwand ( 79821 ) on Thursday June 05, 2003 @12:48AM (#6121189) Homepage

    It's called congestion collapse and the condition is described by RFC 896 [rfc-editor.org] by John Nagle.

    Just firing packets into the network willy-nilly is very bad; it's the "tragedy of the commons" all over again...

    • Nagle (Score:5, Informative)

      by zenyu ( 248067 ) on Thursday June 05, 2003 @02:04AM (#6121452)
      It sounds like they are working on a replacement for the Nagle algorithm. Nagle works well on clean connections, even if badly tuned slow start gets annoying when you have a gigabit connection and it still takes a minute to ramp up to full speed on an ISO download. Where "fast tcp" would really help is on a dirty connection. I had to connect to a supercomputer a few years ago over a 100Mbps link that corrupted or lost 30% of all packets, and I had to use my own streaming on top of UDP to avoid getting hammered by shrinking windows (I still needed congestion control). On this type of connection I'd expect their "fast tcp" might give a 10x speedup. On a normal non-noisy and relatively slow DSL connection, like I have at home, I'd be surprised by a 10% speedup.

      In other words the story is all wrong, but what they are doing is actually worthwhile. You sometimes have noisy networks, especially when they are wireless or in an industrial environment. The big long haul telecoms lines are better off doing error correction on line, but in the last mile you never really know the noise characteristics so this should be handled better on the TCP level. I would probably do something like FEC with the number of recoverable errors per packet and per lost packet per logical block, tuned to the error characteristics of the network. Then call it TCP2 and release an RFC and some BSD licensed source code.. (I thought of doing this as part of building an ISP friendly P2P protocol but decided I didn't have the time..) Their solution has the advantage that it works just great with regular old TCP implementations.
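
A minimal version of the per-block FEC idea mentioned above is a single XOR parity packet per group of UDP payloads, which lets the receiver rebuild any one lost packet in the group without a retransmission. Real schemes (Reed-Solomon and friends) recover multiple losses; this sketch and its packet format are purely illustrative.

```python
# XOR parity over a group of equal-length payloads.
def xor_parity(packets):
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

# Rebuild the single missing payload in a group (None marks the loss).
def recover(received, parity):
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) != 1:
        raise ValueError("can only recover a single loss per group")
    return xor_parity([p for p in received if p is not None] + [parity])

group = [b"AAAA", b"BBBB", b"CCCC"]
par = xor_parity(group)
print(recover([b"AAAA", None, b"CCCC"], par))   # b'BBBB'
```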
  • by malfunct ( 120790 ) on Thursday June 05, 2003 @12:49AM (#6121194) Homepage
    It looks like this protocol is more proactive in monitoring the line. It looks for clues that it needs to slow down before a packet gets lost. A great deal of time in a TCP connection is spent waiting for ACKs and resending data, and this is made worse by the typical latency across the net (if I remember right, it averages 30 to 800 ms for domestic connections depending on time of day, and between 700 and 1200 ms for connections abroad).

    This protocol figures out ahead of time if it needs to slow down, so it's always getting ACKs back instead of waiting for timeouts. Also, it avoids the binary backoff that happens with timeouts.

    So in response to many of the previous posts: it loses none of the robustness of TCP. In the worst case it's as slow as TCP, and in the best case it should be just as fast as TCP. In the average case, however, it shows a huge performance increase. Most of the time on the network is the average case, so this is a good thing.

  • by trinity93 ( 215227 ) on Thursday June 05, 2003 @12:49AM (#6121195) Homepage
    Looks like this [ietf.org]

    SCTP is a reliable transport protocol operating on top of a connectionless packet network such as IP. It offers the following services to its users:

    -- acknowledged error-free non-duplicated transfer of user data,
    -- data fragmentation to conform to discovered path MTU size,
    -- sequenced delivery of user messages within multiple streams, with an option for order-of-arrival delivery of individual user messages,
    -- optional bundling of multiple user messages into a single SCTP packet, and
    -- network-level fault tolerance through supporting of multi-homing at either or both ends of an association.

    The design of SCTP includes appropriate congestion avoidance behavior and resistance to flooding and masquerade attacks.
  • Caltech Site (Score:4, Interesting)

    by mib ( 132909 ) <mib@post.com> on Thursday June 05, 2003 @12:52AM (#6121210)

    This is part of a whole bunch of TCP and networking related work at CalTech.

    I hate to do this to them, but the Caltech Networking Lab [caltech.edu] site has more info.

    From what I see, the improvement here is to use packet delay instead of packet loss for congestion control. They claim this has a bunch of advantages for both speed and quality.

    Here is a Google cached copy of their paper [216.239.37.100] from March 2003.

  • Man! (Score:5, Funny)

    by nhaines ( 622289 ) <nhaines@@@ubuntu...com> on Thursday June 05, 2003 @12:52AM (#6121213) Homepage
    If only I were using Fast TCP, this could have been first post!
  • by po8 ( 187055 ) on Thursday June 05, 2003 @12:53AM (#6121215)

    As near as I can tell from the popular articles, and the web page referenced in the New Scientist article, "Fast TCP" is not a new protocol, but rather better congestion control for standard TCP. I'm not a network guru by any means, so please take the comments below with a grain of salt.

    Currently, TCP implementations use a standard trick [berkeley.edu] to play nice with small router queues. Using precise timing would be better. I hassled Mike Karels over it about 10-15 years ago, but the consensus at the time was that the hardware wasn't up to it. Now it is. Also, modern routers have gotten clever about queue management, which screws up the trick.

    The new proposal is to take advantage of modern HW to measure latencies. Existing TCP could thus be used more efficiently, by allowing larger amounts of data to be outstanding on the network without trashing routers.

    It is not widely understood that in 1988 the Internet DOSed itself because of a protocol design issue, and Van Jacobson got everybody to fix it by a consensus change to the reference implementation of TCP. These articles appear to report (badly) on ongoing research into that issue.

  • So they've got a TCP stack that changes its window based on round-trip time instead of packet loss, to avoid overcorrection for minor packet loss on high-bandwidth networks. That's a pretty good idea, of the semi-obvious "Doh, why didn't we think of that?" variety.

    It will take a little research to find good algorithms, which I presume they've already done, but there's nothing stopping some enterprising soul (who wants his porn faster) from adding this to linux in a couple weeks. So I guess the real que

  • Uhm... (Score:2, Interesting)

    by davburns ( 49244 )
    Both linked articles were pretty content-free. I'm trying to read between the lines and figure out what they're really doing. The article seems to imply that this is only a change on the TCP sender's side, not client TCP stacks or anything in between.

    Maybe they're measuring the round-trip delay, and then sending more data than can fit in the receiver's window, on the assumption that ACKs "should be" in flight. Maybe they also notice when an ACK is overdue, and send a duplicate packet early, rather than w

  • Duplicate effort? (Score:3, Interesting)

    by Bull999999 ( 652264 ) on Thursday June 05, 2003 @01:34AM (#6121372) Journal
    Data link layer technology, like Ethernet, already has error checking built into its frames, so why is there a need for more error checking at the higher transport layer?
  • by JDizzy ( 85499 ) on Thursday June 05, 2003 @01:35AM (#6121373) Homepage Journal
    The speed, er... rather the window size is changed according to a rigid design, and ideas like this have failed in the past because once everyone is doing them, the stability of the network decreases. Mind you, this is distinctly different from "removing error checking"; it is basically taking two steps forward instead of one step at a time. If you leap ahead two steps and fail, you simply step back one (which is still one step forward). The packets are still resent over TCP, leaving the application to not worry about data quality. Looking at packet captures of FTP traffic shows that FTP is an aggressive consumer of bandwidth, and that ramping up or down near the beginning or end of the TCP session is where the greatest amount of *inefficiency* is found. So the idea is to make TCP more aggressive near the ramp-up/down stages of the connection. The idea is also to remove some of the agonizingly redundant error checking in favor of self-throttling, optimistic, educated guessing. Fast TCP simply wants to do the dirty work ahead of time instead of gradually discovering the safe speed limit. Fast TCP will bump into the glass bandwidth ceiling at Mach 10 instead of 10 MPH, quickly recovering by resending the big chunks at a fraction of the window size previously sent. So it could also be described as being willing to find the threshold quickly in exchange for knowing the boundaries, instead of wasting precious time ramping up. Traditional TCP hates data loss to the point that it "drives slow in a parking lot", attempting to never have a collision, when the protocol itself is well designed to recover from such an event already.
  • by zeds ( 671023 ) on Thursday June 05, 2003 @01:45AM (#6121397)
    The New Scientist writer clearly has no understanding of how TCP/IP or the Internet works in general, or how Caltech's FAST could improve data transfer efficiency. His sensationalist claims that this could enable downloading a DVD in seconds are so much ignorant crap. 6000x faster than broadband? That has more to do with the fact that they used an INCREDIBLY FAT PIPE (a 10-gigabit connection), probably in a laboratory setting, than with any of FAST's optimizations. It's true that TCP/IP's efficiency maxes out at a certain rate, but that doesn't really matter in the real world, because nobody is actually downloading movies over dedicated 10-gigabit links to the backbone. Not to mention that you won't see anyone serving anything at these speeds for the next decade or so. I wonder what this suggests about the accuracy of articles on subjects I know nothing about. It's an academic curiosity, folks.

    See caltech's press release on FAST [caltech.edu] for an article that actually makes sense.

    Also, could someone please explain to me why boringly predictable stereotypical slashdot feedback is being modded up?

    "Whoa! Faster pr0n!"

    "Imagine a beowolf cluster of these!"

    -Insert completely unrelated Microsoft bashing post here-

    -Insert completely unrelated technobabble from some geek posting out of their ass (without reading the article first)-

    News for nerds. Stuff that matters. Discussion that doesn't.

  • by Animats ( 122034 ) on Thursday June 05, 2003 @02:02AM (#6121447) Homepage
    First, this has nothing to do with removing error checking. It's about better TCP window adjustment. Read the papers. [caltech.edu]

    Second, it's intended for use for single big flows on gigabit networks with long latency. You have to be pumping a few hundred megabits per second on a single TCP connection over a link with 100ms latency before it really pays off. It won't do a thing for your DSL connection. It won't do a thing for your LAN. It won't do a thing for a site with a thousand TCP connections on a gigabit pipe.

    Third, unlike some previous hokey attempts to modify TCP, this one has what looks, at first glance, like sound theory behind it. There's a stability criterion for window size adjustment. That's a major step forward.

    (I first addressed these issues in RFC 896 [sunsite.dk] and RFC 970, [sunsite.dk] back in 1984-1985. Those are the RFCs that first addressed how a TCP should behave so as not to overload the network, and what to do if it misbehaves. So I know something about this.)

  • by Fizzl ( 209397 ) <fizzl@@@fizzl...net> on Thursday June 05, 2003 @02:09AM (#6121465) Homepage Journal
    They represent TCP totally wrong. Not only that, they describe the whole network infrastructure wrong.
    No wonder I have trouble explaining how the network works to my sister, or even to my mother, who happens to have her master's in tech (albeit in mechanical engineering).

    Let's see.

    "The sending computer transmits a pack, waits for a signal from the recipient that acknowledges its safe arrival, and then sends the next packet"
    No honey, that's why we have buffers, so you can receive packets out of sequence and wait for the middle ones to arrive. This is why we have 32-bit seq and ack fields in the TCP header just after the src and dst ports. Seq tells the packet's order in the queue; ack tells the seq of the next packet (from the other peer), so we can use random increments to prevent spoofing of packets, or at least make it harder.
    But that's out of the scope of this rant.

    If no receipt comes back, the sender transmits the same packet at half the speed of the previous one, and repeats the process, getting slower each time, until it succeeds.
    Umm, no. I'm not 100% sure, but I think the network devices are dumb thingies that talk to each other on predefined carrier frequencies. Thus, you can't really "slow down" the speed to increase the chance of getting the packet through. And certainly this has nothing to do with TCP. Resending failed packets is a Good Thing (TM). They are just sent again until they reach their destination or the "I give up" threshold has been reached.

    "The difference (in Fast TCP) is in the software and hardware on the sending computer, which continually measures the time it takes for sent packets to arrive and how long acknowledgements take to come back"
    This is the only difference? Wow! Shit. We are definitely going to get faster speeds by adding overhead with this calculation.

    Now, I'm through with my rant.

    I really, really would like to see an actual white paper on how this works. There has to be more to this. By the sound of just these articles, it seems to me that someone was paid to develop a new, faster protocol that would magically be backward compatible with TCP. In the end they couldn't come up with anything smart, so they cobbled together something that might sound plausible.
    Of course you can get "more than 6000 times the capacity of ordinary broadband links" by using your very own dedicated parallel LAN links. You just need a fast enough computer to handle the TCP stack. You would also need some freaking fast buses in your computer to make any use of this bandwidth. Remember, hard drives, memory chips and other storage aren't exactly 'infinite' in speed either.
    If the demo consisted only of two computers exchanging data, there would be no need to estimate the speeds, as it would be very unlikely to get packet collisions from disturbance by other network devices. Also, again, that has nothing to do with the TCP stack. Again, this useless speed calculation is just more overhead.

    And now I'm rambling.

    Shit, why can't I just stop.

    I'm angry, that's why :(

    I hope someone will answer me with insight into what I am overlooking. This looks so useless to me.
    • You are wrong on some points.

      No honey, that's why we have buffers, so you can receive packets out of sequence and wait for the middle ones to arrive. This is why we have 32-bit seq and ack fields in the TCP header just after the src and dst ports. Seq tells the packet's order in the queue; ack tells the seq of the next packet (from the other peer), so we can use random increments to prevent spoofing of packets, or at least make it harder.

      But that's out of the scope of this rant.

      This is true, but when the T

  • Ugh, reporters.. (Score:4, Interesting)

    by Mike Hicks ( 244 ) * <hick0088@tc.umn.edu> on Thursday June 05, 2003 @02:17AM (#6121482) Homepage Journal
    Heh, I saw this article on Yahoo, and was immediately concerned. Stupid reporters cut out way too much information, and make the people on wee dialup systems think that they'll get the moon.

    Anyway, I think this is primarily interesting for people on really fast connections (ranging from hundreds of megabits per second up to gigabits) with relatively large latencies (tens or hundreds of milliseconds, as on a transcontinental link, rather than microseconds/milliseconds like on a LAN), but I'm sure the research will have some effect on LANs and even the standard broadband connection. Impact on dialup and other not-quite-broadband connections would likely be minuscule.

    One main issue with TCP is that it uses a "slow start" algorithm, which other people have mentioned. Real TCP stacks probably tweak the algorithm quite a bit, but from the description in Computer Networks (3rd edition, 1996) by Andrew Tanenbaum, TCP packets start off with a small "window"--how much data can be in transit at a time. The window grows exponentially as packets are transmitted and acknowledgements received until a pre-set threshold is reached, and then the window starts growing more slowly (Tanenbaum's example grows exponentially to 32kB at first, then by 1kB per transmitted packet).

    If a packet is lost, the process starts over and the threshold is set to half the window size you had before the dropped packet (I imagine many systems reduce the window size by lesser amounts). Now, this particular algorithm can cause quite a bit of nastiness. It's possible the window size will never get very large. This isn't a really huge problem on low-latency links like in a LAN where you get acknowledgements really quickly, but a hyperfast transcontinental link could be reduced to mere kilobits per second even if the percentage of dropped packets is fairly low.

    Additionally, this slow start algorithm will eventually force you to restart at a smaller window size. Given enough data, you'll eventually saturate the link and lose a packet, so until the window grows enough again, there will be considerable unused bandwidth. Good TCP stacks would attempt to guess the available link speed and stop growing the window at a certain point.

    Smart system administrators can tweak kernel variables to make systems behave better (preventing the window from getting too small, having larger initial thresholds, for instance), but it looks like a lot of work on Fast TCP and related projects is related to making this a more automatic process and growing/reducing the transmit window size in a more intelligent manner.
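
The textbook behaviour described in this comment can be condensed into a short simulation: exponential growth up to the threshold, linear growth past it, and on loss the threshold drops to half the current window while the window restarts. This follows the classic description (no fast retransmit or fast recovery), with made-up capacity and threshold values; loss is modelled simply as exceeding a fixed bottleneck.

```python
# Classic slow start + congestion avoidance with a fixed bottleneck capacity.
def simulate(rounds=40, capacity=64, ssthresh=32):
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd > capacity:              # loss detected
            ssthresh = max(2, cwnd // 2) # remember half the window
            cwnd = 1                     # classic restart from one segment
        elif cwnd < ssthresh:
            cwnd *= 2                    # slow start: exponential growth
        else:
            cwnd += 1                    # congestion avoidance: linear growth
    return trace

print(simulate())
# [1, 2, 4, 8, 16, 32, 33, ..., 64, 65, 1, ...]  exponential, linear, reset
```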
  • by DASHSL0T ( 634167 ) on Thursday June 05, 2003 @02:22AM (#6121494) Homepage
    Error checking hasn't been removed from TCP; we've removed it from Slashdot story summaries instead, speeding up the posting by nearly 6000x.
    --
    Linux-Universe [linux-universe.com]
  • Nothing new here (Score:4, Interesting)

    by Vipester ( 41079 ) on Thursday June 05, 2003 @02:24AM (#6121498)
    Whoever wrote these articles is not the brightest crayon in the box. Their explanations of how "regular" TCP works and how FAST works are both exceedingly wrong. Read the FAST group's overview [caltech.edu] for an explanation of what they're doing. It's semi-heavy with technical networking terms but you'll learn that this has nothing to do with error checking.

      Congestion control based on round-trip times is old news but is uncommon, AFAIK. What really happens is direct feedback from routers along a transmission's path. This is done in TCP Vegas, which was first proposed in 1994 and I think is fairly common now. The problem with scaling this or any of the other common TCP implementations to high-speed/high-delay links is the reaction to detected congestion. "Normal" TCP aggressively scales back its send window (send rate) when it detects congestion, usually chopping it in half. The window/rate then grows linearly until something goes wrong again. This results in a lot of lost throughput in high-speed networks, especially if the amount of "real" congestion is low. The FAST group is working on a new TCP implementation that doesn't react so aggressively to congestion. This is great for those high-speed/low-congestion networks we all wish really existed, but it is not something you want to use on the always-backed-up Internet. It would probably make things worse.

  • by Minna Kirai ( 624281 ) on Thursday June 05, 2003 @04:23AM (#6121753)
    The last sentence of both news articles suggests that this broadband-optimized TCP system could be used by corporations like Disney to provide video-on-demand. (If they're talking to Microsoft, on the other hand, the result will just be a modification to the TCP/IP stack in Windows(r), which doesn't care at all what kind of data it's transmitting)

    That's just wrong, at least according to the ways media companies have traditionally desired their materials to be broadcast over the internet. They typically use streaming protocols, which not only gives the user one-click startup, but also makes it non-trivial to keep a local copy of the file (enhancing the corporation's feeling of control).

    However, a well-designed streaming protocol won't use TCP at all. TCP hides many characteristics of the network from the application software, and to stream properly it needs to know as much as possible. One example of why TCP is bad for streaming: in streaming, you try to keep advancing time at a constant rate. Once 156 seconds of playing have elapsed, you want to be showing video from exactly 156 seconds into the source file. If at 155 seconds some packets were dropped, you should just skip over them and continue onward. TCP, however, will always try to retransmit any lost packets, even if that means they'll arrive too late to do any good. TCP has no knowledge that packets may expire after a fixed time, but a custom-built UDP protocol can be aware of that constraint.
    (Here's a reference on preferring UDP in video streaming [wpi.edu])

    On the other hand, maybe a corporation will realize that properly controlled non-streaming playback can provide a better end-user experience (guaranteeing, for example, that once playing starts, network failures will never interrupt it). In that case, they might either try to push Microsoft to integrate this faster TCP/IP into Windows(r), or more interestingly, implement it themselves in customized player software.

    It's possible to implement a protocol equivalent to TCP on top of UDP, with only a tiny constant amount of overhead. So a programmer for realplayer, quicktime, or mplayer might be able to add the techniques from this research to his own code, even without support in the operating system.
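
The "skip anything that arrives too late" logic described in this comment is the crux of why custom UDP protocols suit streaming. A minimal sketch, assuming each packet carries a media timestamp and the receiver keeps a small jitter buffer (both assumptions, not any particular player's API):

```python
import time

# Decide whether a just-arrived packet can still make its presentation time.
def still_useful(pkt_timestamp_s, stream_start_s, jitter_buffer_s=0.5):
    playback_position = time.monotonic() - stream_start_s
    return pkt_timestamp_s + jitter_buffer_s >= playback_position

# Late packets are simply discarded and the decoder conceals the gap; TCP,
# by contrast, would keep retransmitting them even though they can no longer
# be shown on time.
```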
  • Humm (Score:3, Insightful)

    by Znonymous Coward ( 615009 ) on Thursday June 05, 2003 @09:28AM (#6123026) Journal
    I have an idea! Fast TCP transfers with no error checking... And we'll call it UDP.

  • by isorox ( 205688 ) on Thursday June 05, 2003 @09:58AM (#6123265) Homepage Journal
    dumping a lot of time-consuming error checking

    Sounds like a slashdot editor
  • by HydraSwitch ( 184123 ) on Thursday June 05, 2003 @10:01AM (#6123306) Homepage
    I believe this is HSTCP [ietf.org].
    For more info, you can also take a look at:
    Web100 [web100.org] and Net100 [net100.org].
    It basically amounts to improving the AIMD algorithm and changing the way slow start works as well. Also, whoever said it before that this will not help your DSL connection is correct. It is meant to help high speed long RTT paths. And it does so -- quite well.
  • Seconds... (Score:3, Funny)

    by AnotherBlackHat ( 265897 ) on Thursday June 05, 2003 @03:27PM (#6126302) Homepage

    Scientists in California are working on a fast new Internet connection system that could enable an entire movie to be downloaded in a matter of seconds.

    Sure it does. I'm thinking around 10000 seconds.

    -- this is not a .sig

  • by aminorex ( 141494 ) on Thursday June 05, 2003 @03:37PM (#6126387) Homepage Journal
    This really devastates the credibility of Caltech as an institution. It seems clear that some group at Caltech pumped this to the media, to the point where a categorically deceptive series of fluff pieces entered the news stream.

    Compare this to the "cold fusion" debacle in '89: Pons and Fleischmann reported valid, and eventually reproducible, results without hype, but the media pumped it with speculation. Pons and Fleischmann, excellent, highly competent and productive stars in their field, were essentially tainted through no fault of their own, and run out of town on a rail.

    It's galling.
  • by porky_pig_jr ( 129948 ) on Thursday June 05, 2003 @05:36PM (#6127342)
    The challenge is to make this 'better TCP' co-exist with the other versions of TCP currently deployed on the Internet, like Tahoe and NewReno. The key point is the notion of 'fairness'. The way TCP (at least the currently deployed versions) is designed is to cooperate (in a sense) so as to provide an equal share of bandwidth to competing flows. That's one of the reasons why it is so difficult to introduce a TCP-like protocol with a radically new design. I believe one of the reasons TCP Vegas (which has good potential) does not get deployed on a wide scale is that it is not entirely clear how a mix of Tahoe, NewReno and Vegas would perform in terms of fair share.

    However, from what I've been reading on the Caltech site, it appears that one of the uses of this protocol would be to download very large files over a dedicated pipe (like movies on demand), from the movie server to the user through a private connection. This makes sense. You can streamline lots of things. Optimize the protocol for a fat pipe. Whatever... I wouldn't call it a 'breakthrough', though.
