
Better Bandwidth Utilization (196 comments)

Posted by michael
from the neat-hack dept.
jtorin writes "Daniel Hartmeier (of OpenBSD fame) has written a short but interesting article which explains how to better utilize available bandwidth. In short, it gives priority to TCP ACKs over other types of traffic, thereby making it possible to max both upload and download bandwidth simultaneously. Be sure to check out the nice graphs! Also note the article on OpenBSD Journal. OpenBSD 3.3 beta is now stable enough for daily use, so why not download a snapshot from one of the mirrors and try it out?"
  • How ironic (Score:4, Funny)

    by JPDeckers (559434) on Wednesday March 05, 2003 @10:50AM (#5440347) Homepage
    How ironic, an article about better utilizing available bandwidth, and already /.-ed with 0 comments. Guess bandwidth is utilized now.
  • by gmuslera (3436) on Wednesday March 05, 2003 @10:51AM (#5440351) Homepage Journal
    rule 1: don't put an article in slashdot pointing to your site
    • by Anonymous Coward
      True enough.

      rule 2: Have a backup plan in case someone sends you a bunch of ACK ACK ACK ACK ACK ACK to cause a denial of service.
  • by dereklam (621517) on Wednesday March 05, 2003 @10:51AM (#5440354)
    Thanks for linking to OpenBSD... I was wondering what that whole BSD thing was!

    Now if only I could find that Linux thing...

  • by adzoox (615327) on Wednesday March 05, 2003 @10:54AM (#5440376) Journal
    I think this may be of most use to two-way satellite connections, and maybe to service providers - however, I don't see how one can get much faster than a cable modem or DSL connection. The internet comes through at the same bandwidth and speed whether I'm on wireless or T1 or cable modem/DSL - and this is the majority of network traffic nowadays.

    Corporate networks are already optimized under 100 or gigabit ethernet with Cisco routers which automatically handle collisions and error corrections.

    • by arkanes (521690) <.arkanes. .at. .gmail.com.> on Wednesday March 05, 2003 @10:57AM (#5440398) Homepage
      If you have a non-shaped asymmetric connection, like most forms of DSL and cable, it's pretty easy to cap out your upstream. When you do that, your downstream goes through the floor because your ACKs don't get through. This just says that if your routers prioritize ACKs, your downstream will still be fine even if your upstream is saturated. This isn't exactly new; my cable ISP already does this.
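
The arithmetic behind this is easy to sketch. The following Python back-of-envelope (assumed figures: a 1460-byte MSS, 40-byte empty ACKs, delayed ACKs at one per two segments) shows that even a modest downstream rate needs a nontrivial slice of a thin upstream just for ACKs - so upload traffic queued ahead of them starves the download:

```python
# Back-of-envelope: upstream bandwidth consumed by ACKs for a given
# downstream rate. Assumes MSS 1460 bytes, 40-byte empty ACKs
# (IP + TCP headers, no options), delayed ACKs (one per two segments).

MSS = 1460          # bytes of payload per downstream segment
ACK_SIZE = 40       # bytes per empty ACK
SEGS_PER_ACK = 2    # delayed ACKs: one ACK per two segments

def ack_rate_bps(downstream_bps: float) -> float:
    """Upstream bits/s needed just to ACK a downstream flow."""
    segments_per_sec = downstream_bps / (MSS * 8)
    acks_per_sec = segments_per_sec / SEGS_PER_ACK
    return acks_per_sec * ACK_SIZE * 8

if __name__ == "__main__":
    down = 2_000_000  # a 2 Mbit/s downstream, e.g. cable
    # Roughly 27 kbit/s of ACKs - over a fifth of a 128 kbit/s upstream.
    print(f"{ack_rate_bps(down) / 1000:.1f} kbit/s of ACKs")
```

The exact constants don't matter much; the point is that the ACK stream scales with the downstream rate, so a saturated upstream queue delays it proportionally.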
    • by somethingwicked (260651) on Wednesday March 05, 2003 @11:05AM (#5440463)
      That "Intro to the Internet" class from college is a little hazy now, but I don't recall it being as simple as the "internet" coming out of the pipe like water.

      Someone far more knowledgeable than myself will get to correct me, but I seem to recall there was a process of-

      Send some stuff-wait for ACK.

      When you get the ACK, send some more.

      By turbocharging the ACKs, you are reducing that lag time
      • If you *really* want to speed things up, send
        pre-emptive ACKs before you get the data, right
        about when they would be expected.

        What, you lost a packet? Go back and fetch it
        later using the application-layer protocol.

        Voila, hyper-http.
        • If you're using an application-layer protocol anyway, just use UDP and save the bandwidth on ACK entirely.
          • Yes, it's a pity that HTTP doesn't use UDP to start with. Why design a stateless protocol and then put it on top of TCP, requiring a connection to be set up and torn down for each HTTP request?

            (OK, newer HTTP versions can fetch multiple pages while keeping the connection open - but still it seems that UDP would be a better fit. Except, perhaps, for POST requests, since those are not usually idempotent.)
            • Why design a stateless protocol and then put it on top of TCP, requiring a connection to be set up and torn down for each HTTP request?

              Because you want reliability. Unfortunately, reliable UDP (or transactional TCP) is not widely supported.

              Also, because many HTTP responses don't fit in a single UDP packet.

              • If UDP is reliable enough for NFS, it should be reliable enough for web pages, right? If the reply to your request doesn't arrive after a certain time you can just send the request again.

                Good point about the response not fitting in a UDP packet: does NFS avoid this problem by always requesting small enough chunks of data to fit in a single packet?

                Once you start having to do both rerequesting dropped packets and reordering those that arrive out of sequence it does start to look as though TCP is a better bet, since it does these things for you. Nonetheless, since dropped UDP packets are fairly uncommon in practice it might be quicker most of the time to save on the overhead of setting up a TCP connection and just send a single UDP packet instead.
                • If UDP is reliable enough for NFS, it should be reliable enough for web pages, right?

                  NFS is designed for local area networks, which drop packets much more rarely.

                  Once you start having to do both rerequesting dropped packets and reordering those that arrive out of sequence it does start to look as though TCP is a better bet, since it does these things for you. Nonetheless, since dropped UDP packets are fairly uncommon in practice it might be quicker most of the time to save on the overhead of setting up a TCP connection and just send a single UDP packet instead.

                  Dropped UDP packets are not uncommon at all when sending files over the internet. There is a certain optimal bandwidth available between the webserver and the client. Send too fast, and you'll start losing packets. Send too slowly, and you're not utilizing your bandwidth. TCP does a (fairly) good job of discovering that optimal bandwidth. One problem occurs when the link is asymmetric: acknowledgement packets get dropped, and bandwidth is underutilized. Prioritizing ACKs largely solves that problem.

        • Again, to the best of my recollection, what you are suggesting is similar to the approach of TFTP and many streaming techniques:

          Take FTP, strip out the overhead of error checking, and if something doesn't come out right, refresh and download it again.

          For streaming, you get more throughput, and every now and then you might miss a frame, in exchange for the higher quality you can obtain with the lower overhead.

        • TCP Daytona (Score:4, Informative)

          by Patrick (530) on Wednesday March 05, 2003 @12:15PM (#5440911)
          send pre-emptive ACKs before you get the data, right about when they would be expected.

          The technique you suggest is one of several proposed by Stefan Savage in TCP Congestion Control with a Misbehaving Receiver [washington.edu]. He called it TCP Daytona. :)

      • by Patrick (530) on Wednesday March 05, 2003 @12:20PM (#5440933)
        Send some stuff-wait for ACK.

        When you get the ACK, send some more.

        By turbocharging the ACKs, you are reducing that lag time

        Not quite. TCP streams use pipelining: you send N packets (N is the "window size"), and each time you get an ACK you send one more. So in the ideal case there's no lag, because the ACK for packet 3 lets you go ahead and send packet 10 (if N=7).

        When a packet (or its ACK) gets dropped, TCP assumes the network is congested, and cuts N in half, and very slowly increases it back to where it was. So after each dropped packet or ACK you have a while during which you're not using the full link. Several drops in a row can reduce your throughput by a factor of 100 or more.

        Prioritizing ACKs doesn't reduce the lag time. It reduces the likelihood that TCP will overreact and reduce its sending rate due to perceived congestion.

        • Prioritizing ACKs doesn't reduce the lag time. It reduces the likelihood that TCP will overreact and reduce its sending rate due to perceived congestion.

          Prioritizing ACKs may prevent drops, but the main feature is essentially reducing lag time. TCP is self-clocking, in that the sender can't send any more packets until it sees an ACK. If you get the ACKs out faster, you'll get the replies faster. Thus prioritizing ACKs will make your downloads go faster. Since they are small packets, this probably won't affect your upload bandwidth too adversely (it may increase latency slightly).

          This won't incorrectly set TCP's RTT timer because if anything you've shaved a few ms off your RTT. The new RTT may be less but it's not a lie.
          • Prioritizing ACKs may prevent drops but the main feature is essentially reducing lag time. TCP is self clocking, in that the sender can't send any more packets until it sees an ACK. If you get the ACKs out faster you'll get the replies faster. Thus prioritizing ACKs will make your downloads go faster.

            No, no, no. The article showed a 10:1 drop in average throughput, and extreme variability in instantaneous throughput. That cannot be explained away as just a matter of latency.

            You can increase the latency of a link, and it will still run at full speed, once TCP's estimate of RTT is properly updated. Except in extreme cases (a T3 to Mars, say), a TCP pipe will be full regardless of its latency, as long as the drop rate is low and the latency is roughly constant.

            So what's happening here is that the ACKs are either getting dropped (falling off the end of a queue) or are getting delayed so far past the average RTT that TCP thinks they've been dropped. When an ACK is dropped (or presumed dropped), the sender halves its outgoing bandwidth. Each additional drop, the bandwidth gets halved again.

            Prioritizing ACKs here is about keeping them at the head of the queue so that they don't get dropped. It has almost nothing to do with latency.
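
The halving-and-slow-recovery behaviour described in this thread can be sketched with a toy model (Python; pure additive-increase/multiplicative-decrease with made-up numbers - real TCP also has slow start and fast recovery, so treat this as an illustration, not an implementation):

```python
# Toy model of TCP's congestion window (AIMD): each drop (or ACK
# presumed dropped) halves the window, and recovery is slow - one
# extra segment per round trip. Numbers are purely illustrative.

def simulate(rtts: int, drops: set[int], start_cwnd: int = 64) -> list[int]:
    """Return the congestion window (in segments) after each RTT."""
    cwnd = start_cwnd
    history = []
    for t in range(rtts):
        if t in drops:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on a drop
        else:
            cwnd += 1                  # additive increase per RTT
        history.append(cwnd)
    return history

if __name__ == "__main__":
    # Three drops in a row crush the window from 64 to 8 segments,
    # and it takes 56 further RTTs of +1 per RTT to climb back to 64.
    h = simulate(60, drops={0, 1, 2})
    print(h[:5], "...", h[-1])   # [32, 16, 8, 9, 10] ... 65
```

This is why a few consecutive dropped ACKs hurt throughput far more than their added latency would suggest.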

    • OK, statements like that tick me off; it's a silly thing to say.

      If your route is long, the chances are that somewhere along the way you will have a bottleneck, and when you go across water it gets worse as all the repeaters get in on the act.

      Saying things like "the internet comes through at the same bandwidth" is plain silly.

      regards

      John Jones
      • I get the same bandwidth on my laptop, whether optimized with this solution or not, on any connection I can connect to.

        A T1 to T1 connection usually gets me no better gameplay or internet page render speed than a cable modem connection or DSL connection or WiFi connection.

  • In short it gives priority to TCP ACKs over other types of traffic, thereby making it possible to max both upload and download bandwidth simultaneously

    It appears that server ACKs have been optimized as well.

    SERVER-ACK*wheez*ouch*ACK*sizzle*Damnit*

    I am sure, however, that the bandwidth is being optimized, as the server came back "unreachable" almost immediately.

    Joking aside, this is the ultimate hack if it works as breezed through in the summary - just tweaking the priorities, more bandwidth!
  • by Thijssss (655388) on Wednesday March 05, 2003 @10:56AM (#5440389)
    Actually, ever since my ISP changed from A2000 to Chello, we have had the same problem this guy has, with downloads being killed by uploads. A few months ago some friends and I figured out the same solution, but we had no idea how to actually do it on a Windows-based machine. Anyone have an idea?
  • The problem is (Score:2, Interesting)

    some P2P software will start distributing a "P2P accelerator" which marks all packets as ACKs.
    • Re:The problem is (Score:5, Informative)

      by The Evil Couch (621105) on Wednesday March 05, 2003 @11:23AM (#5440565) Homepage
      It's a possible way to game the system; however, they can also ignore what the packets are marked as and just boost the priority of the smaller packets, which are almost always system messages. If they bump up everything under 64 bytes, they'd get the same effect, but without the possibility of someone cheating the system like that. Though I'm pretty sure someone else has already done that.
  • Interesting (Score:3, Interesting)

    by Ec|ipse (52) on Wednesday March 05, 2003 @10:57AM (#5440397)
    I was only able to read up to the results section due to the site being /.'d, but what I read was quite interesting. I like the idea of prioritizing which packets go out first by their intent, rather than having everything go out and fight for the bandwidth.
  • by Anonymous Coward
    Of course, the effectiveness of this technology depends on both networks that handle the ACKs having the service implemented.

    Still, a very simple and effective solution to an age-old problem. I like.
    • Not completely. If you have a severe up/downstream difference, providing priority to packets on your end will improve the availability of your bandwidth, even when you are maxing it out.
  • Linux solution (Score:3, Informative)

    by eddy (18759) on Wednesday March 05, 2003 @10:58AM (#5440400) Homepage Journal

    The Linux Advanced Routing & Traffic Control HOWTO [lartc.org] discusses how to achieve the same thing on Linux using QoS. See section 9.2.2.2 [lartc.org] (Sample configuration).

    • Re:Linux solution (Score:5, Informative)

      by pe1rxq (141710) on Wednesday March 05, 2003 @11:11AM (#5440499) Homepage Journal
      No it doesn't....
      It is a different solution to a different problem caused by the same thing....

      The cause is the big cache in the modem, which results in a delay on outgoing traffic.
      One problem is that interactive traffic gets, well, less interactive (e.g. echoed characters in a remote shell are delayed). This is solved in the HOWTO you referred to.
      Another problem is that the downstream ACKs get delayed, resulting in less downstream data. This is solved in the mentioned article.

      A combination of the two would be really great, and could probably be done in both Linux and OpenBSD.

      Jeroen
      • Yes it does. It's not a different problem. I can only read the description, "how to better utilize available bandwidth", which is exactly what the HOWTO describes how to do (that is, making sure the ACKs get through so that uploading doesn't kill your downloads, which gives better utilization).

        If the article _IS NOT_ about "how to better utilize available bandwidth", then I guess you have a point, but then I can only suggest you address the submitter and ask him to describe his submissions properly in the future.

  • by Black Parrot (19622) on Wednesday March 05, 2003 @10:58AM (#5440401)


    Put lower priorities on p0rn, MP3s, Windows viruses, and Slashdot referrals. That should speed everything else up by about two orders of magnitude.

  • by jj_johny (626460) on Wednesday March 05, 2003 @10:58AM (#5440402)
    Daniel has done some good work in micromanaging the available bandwidth to make sure that ACKs get through, minimizing retransmits due to drops and other causes. On low-bandwidth links, the time spent in the queue and in transmission can be much bigger than the near-instantaneous transmission times you expect on high-capacity lines.

    A little off-topic, but I always find it interesting that people with high-capacity gear (Foundry, Cisco, etc.) are always talking about QoS when it really only makes sense most of the time on low-bandwidth lines. So his work is really important when you look at where it sits in the scheme of things - out at the end user's line.

    • Not necessarily. Even organizations with extremely high-bandwidth connections have budgets. If you can up throughput by 10% using a QoS solution when a corresponding bandwidth increase would cost twice as much, which would you choose? Obviously this particular project is more geared toward end users and small shops with limited bandwidth, but QoS as a whole does have benefits for everyone.
    • by TFloore (27278) on Wednesday March 05, 2003 @12:31PM (#5441043)
      Though you might see more effects of this on a low bandwidth link, it is not just for low bandwidth.

      A fair number of protocols use transmit windows of a certain size. They'll send a certain amount of data, and not send more until the oldest packet in the window gets an ACK back. You therefore only have so much data "in flight" at any one time. Strongly asymmetric links (like ADSL and cable modems) can require strikingly different window sizes than symmetric links.

      The right amount of in-flight data depends on the speed of your pipe, obviously, but a lot of applications still use defaults set for low-bandwidth pipes. You can argue that the proper solution is to change the defaults, but if you just give ACKs priority you don't need to worry about it, and the less you force users to change, the better. (The transmit window size has to be a user setting, directly or indirectly - either by asking for a window size, or by asking "what kind of pipe do you have?" and guessing a window size from that.)

      This is dependent on the protocol, true, but giving ACKs priority is actually a decent generic solution to what many consider an application-specific problem.

      QOS is also often about bandwidth guarantees, not necessarily throughput. You have a 155mbit link shared among several applications, and an application that *requires* 45mbit. So you use QOS to guarantee that application gets 45mbit if it wants it, and everything else shares the remainder. If the app isn't going, then that 45mbit it requires can be made available to other apps until it is required.
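
The "right amount of in-flight data" mentioned above is the bandwidth-delay product: link rate times round-trip time. A quick Python sketch (the link speed, RTT, and the classic 8 KB default window below are illustrative figures, not from the article):

```python
# Bandwidth-delay product: how much data must be in flight to keep a
# link busy, and the throughput ceiling a fixed window imposes.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed send window."""
    return window_bytes * 8 / rtt_s

if __name__ == "__main__":
    # A 1.5 Mbit/s link with an 80 ms RTT needs ~15 KB in flight,
    # but an 8 KB default window caps it at ~0.82 Mbit/s.
    print(bdp_bytes(1_500_000, 0.080))            # 15000.0 bytes
    print(max_throughput_bps(8192, 0.080) / 1e6)  # 0.8192 Mbit/s
```

So a window tuned for a slow pipe silently halves the throughput of a faster one, which is the defaults problem described above.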
    • It's really useful for things like Frame Relay WANs, where you can get mixed and matched speeds all over the place.

      For example, I have the equivalent of a T1 (1.544Mb CIR Frame) going to Qwest. From Qwest, I have a 256k CIR Frame link going to a remote office.

      When the office sends data to me, it's fine. When I send back, there are massive amounts of Red Frames. Dropped packets mean retransmits, which mean delay. Delay is bad when you are running an interactive application over these links. Think of a garden hose connected to a fire hydrant. The garden hose could dump water into the fire hydrant fine (assuming the water for the hydrant is turned off elsewhere...). When the fire hydrant turns on, however...

      Now I have QoS maps based on the DLCI for each office, so it throttles back our link to Qwest to match the remote connection, and everyone talks happily instead of blasting the little link into oblivion. Now Red Frames aren't seen very often, unless the Qwest circuit is saturated and we get chopped back to our base CIR. It makes a difference. Not a huge one, but a noticeable one.

      Traffic shaping is your friend. It's all about making the most efficient use of what you have. (Or making sure that you still have bandwidth when your roommate is leeching gigs of pr0n...) M

  • Very Useful (Score:2, Interesting)

    by volts (515080)
    This is a really useful pointer to a very simple optimization. We've recently replaced our SonicWall firewalls with OpenBSD, so using ALTQ will be really straightforward. I wonder how easy it is to accomplish on Linux.
  • erm... (Score:1, Offtopic)

    by lingqi (577227)
    Be sure to check out the nice graphs!

    So exactly how many people is this comment directed to? I mean, might we get as much as 1% of the readership checking out the graphs before a certain unfortunate server suffers a horrible death / temporary trauma?

  • by blkwolf (18520) on Wednesday March 05, 2003 @10:59AM (#5440412) Homepage
    You can find Daniel's original email on the subject at:
    http://marc.theaimsgroup.com/?l=openbsd-pf&m=104630260218727

    It contains a little more of the pf rules than the article does, and has all the relevant information you need, except for the nice /.'d graphs.

  • by solcity (652067)
    Hmm... /.'ed already. Looks like /. proved him wrong on this one!
  • by Anonymous Coward
    Bandwidth is fixed. Any number of crappy operating systems can max out bandwidth. What they meant to say is how to reduce latency.
    • Title is correct! (Score:5, Interesting)

      by DarkMan (32280) on Wednesday March 05, 2003 @11:46AM (#5440715) Journal
      You're correct, bandwidth is fixed. This is about better bandwidth utilisation.

      The article is /.ed, but the gist is that if you consider a full duplex connection and you max out one side of it, say uploads, then the ACK packets get swamped, so your downstream bandwidth is spent re-transmitting, or sits empty whilst the other end is waiting for ACKs.

      The bandwidth is there, it's just under-utilised. By prioritising the ACKs so that they get boosted through, it becomes possible to saturate both upstream and downstream pipes at once, at peak efficiency, rather than one of them coasting along waiting for the other.

      Note that this only applies to TCP/IP and similar reliable protocols. If you had a UDP app (e.g. media streaming done properly), then this trick won't affect it at all, as it doesn't wait for an ACK.
  • by eldimo (140734) on Wednesday March 05, 2003 @11:04AM (#5440458)
    Ahhh... That brings back memories of the infamous Zmodem protocol that was widely used on warez BBSs. There was a rule that said "you need to upload if you want to download". Therefore, the fastest way to get software would be to upload at the same time as you downloaded. The funny thing is that the synchronization was done by hand, meaning that you needed to see the download character on the screen to start uploading. If you missed by a couple of seconds, you could not get the synchronization right and had to start over.
    • there's nothing "infamous" about zmodem. it was widely used and not just on warez boards.
    • by Surak (18578) <surak@mailblo[ ].com ['cks' in gap]> on Wednesday March 05, 2003 @11:51AM (#5440748) Homepage Journal
      Ummm...

      A) Zmodem is still around, at least in the *nix world. You can get lrzsz from here [www.ohse.de].
      Some telnet clients still support Zmodem, and you can use lrzsz to transfer files via telnet. Personally, I'd rather use ssh as it's a lot more secure, but in cases where either you can only use telnet or when you are on network you can trust (i.e., not the Internet), you can still use Zmodem.

      B) Zmodem is not, nor has it ever been, a bidirectional protocol -- you can't upload and download at the same time unless you have two different connections. There *were* protocols that would let you do this (Puma comes to mind), but you most decidedly could NOT do this with Zmodem.

      • Smodem springs to mind among those bidirectional protocols.

        It had chat too, which was extra cool during big downloads or when trading files with a friend...
        (And it was simple to use and needed no by-hand synchronising if you had set up the terminal program right. If my memory serves me well, you could initiate uploads during downloads too, though I barely dialed up to any BBSs after we got ISDN circa summer of '96...)
        • smodem was another one, but it wasn't as popular as Puma or Lynx (the protocol -- not to be confused with Lynx the browser, that is :) or BIModem, at least in the Detroit Area.

          Maybe had something to do with T.A.G. or Telegard (one of the two) having a default config for Puma, I think.
    • No, zmodem was one-way, though it was the most widely used (everywhere, not just BBS or warez). It was usually slightly faster than ymodem and leaps and bounds above xmodem. I'm fairly sure the bidirectional protocol was just called 'bimodem.'
    • The bidirectional protocol I recall using was HS/Link. Heck, the protocol even had a chat interface so you could talk with the sysop or user on the other end as you transferred!
    • I think the protocol that let you upload & download at the same time was bimodem. It also let you chat with the sysop while the file transfers where going.

      It was released on December 7, 1988.

      Here's a link to textfiles.com timeline [textfiles.com]
    • Err... you must have Zmodem confused with something else... it was one-way only. You are right about the widely-used part, though, and not only on warez boards but everywhere. In fact, it was the only thing going in the later BBS days.

      Maybe puma or one of those oddball protocols are bidirectional, but that was pretty useless to warez runners back in the day, because everybody knows that real k-k00l warez runners use USRobotics Courier HST 9600 high-speed modems, and they were only fast in one direction. Real warez runners spit on v.32 modems...Ahhh the good old days ;-) Sorry for the OT folks...
      • You're right about Zmodem. However there was one little trick that usually worked. It was called "leech Zmodem", and it took advantage of loopholes to keep your download ratio good. When it received the last block of a file correctly, instead of acknowledging it, it would NAK and request a re-transmit from near the beginning of the file, then abort the download.

        Poorly written BBS software would only remember the last block downloaded as an indicator of how much you downloaded.

    • The bidir protocol we used all the time was "HS/Link" or something. It was pretty damn cool... although yes, we did see all the other assortment of bidir protocols back then (Mpt, Puma, smodem (?), jmodem, etc.)

  • .CX Domain (Score:4, Funny)

    by goldspider (445116) <ardrake79@@@gmail...com> on Wednesday March 05, 2003 @11:13AM (#5440509) Homepage
    Was I the only one who was a little apprehensive about clicking on a link from the .cx domain?
  • by sn0rt (218268) on Wednesday March 05, 2003 @11:20AM (#5440551)
    Isn't there an attack that essentially floods a server with ACK's? Just wondering what effect that would have if ACK had priority over other traffic.

    Or maybe it was flooded with SYN's? Damn. I can't remember.
    • Re:Security hole (Score:5, Interesting)

      by Arethan (223197) on Wednesday March 05, 2003 @12:11PM (#5440889) Journal
      It's called SYN flooding. The idea is that a system only has so much memory to work with. Each established TCP session has memory overhead for packet ordering and split-packet reassembly. Generally, when a system receives a SYN packet, it assumes that the session is going to be valid, and it begins the process of completing the TCP session setup, which happens to include setting aside some memory for session management.

      For each SYN packet you send, you eat up a little bit more memory and CPU time on the victim. Do it enough times, and the system runs out of memory or processor time, and the system becomes unable to perform its regular operations. Effectively causing a Denial of Service.

      If you're smart, you'll forge the SYN packets to have source addresses that differ from your real IP, otherwise a) you're traceable; and b) your machine will be flooded with SYN/ACKs. If you are even smarter, you'll use an IP that, while valid and routable, belongs to a host that either doesn't exist or is currently off. Otherwise the 2nd-level victim receiving the SYN/ACKs from your initial target will send RSTs for every SYN/ACK, since it never requested to initiate the connection. When your target gets the RST for the SYN/ACK, it will close the session, freeing up the memory and CPU time that you are desperately trying to fill. A non-existent host will never respond to a SYN/ACK, so the target system has to wait for a timeout before closing the session, which makes it easier for you to eat up CPU and memory. Unfortunately, though, the fake source IP on your SYN packets will likely have to be within your ISP's network range, as all smart ISP network administrators perform egress packet filtering to prevent such attacks from originating within their network.

      Better tactics include sending the SYNs from multiple machines on different providers, thus preventing the load from the SYN/ACKs from filling your ISP's pipe. This effectively makes the attack a DDoS, rather than a DoS.

      Either way, you can't really perform these attacks safely, as competent network administrators will have sniffers in place to detect them as they cross their network. So #1) if your ISP admin is smart, you're busted by them regardless; and #2) if the chain of smart admins follows you all the way back to your sources, you're busted by the authorities (which, if you cross state lines, means the Feds, which will suck quite badly).

      So, that is how it works, but I wouldn't recommend trying it.
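
The mechanism described above - state held for every half-open connection until a timeout - can be sketched as a toy calculation (Python; the backlog size and timeout are made-up illustrative figures, not from any particular OS):

```python
# Toy illustration of SYN flooding: the server tracks every half-open
# connection until a timeout, so SYNs from unreachable (spoofed)
# sources that never complete the handshake fill the backlog.
# Sizes and timings are invented for illustration.

BACKLOG = 128        # half-open connections the server will track
SYN_TIMEOUT = 75.0   # seconds before an unanswered SYN/ACK gives up

def backlog_full_after(syn_rate_per_s: float) -> float:
    """Seconds until the backlog fills, if no handshake ever completes.

    Entries only leave the table by timing out, so the table fills
    once more SYNs arrive within one timeout window than it can hold.
    Returns float('inf') if the rate is too low to ever fill it.
    """
    if syn_rate_per_s * SYN_TIMEOUT <= BACKLOG:
        return float("inf")    # entries expire as fast as they arrive
    return BACKLOG / syn_rate_per_s

if __name__ == "__main__":
    print(backlog_full_after(10.0))   # 12.8 s: 10 SYNs/s overwhelms it
    print(backlog_full_after(1.0))    # inf: at most 75 entries pending
```

This also shows why RSTs from a live spoofed host defeat the attack: they free entries immediately instead of after the long timeout.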
  • I don't click on any .cx domains!
  • ...my packets are being acknowledged extremely well by the author.
    Trace Routing to the author's link
    http://www.benzedrine.cx (Insomnia Site) yields the following path:
    ....

    6 pop1-hou-P7-2.atdn.net [66.185.136.77]
    7 bb2-hou-P0-2.atdn.net [66.185.150.146]
    8 bb2-tby-P7-0.atdn.net [66.185.152.247]
    9 bb1-tby-P1-0.atdn.net [66.185.152.242]
    10 bb2-atm-P7-0.atdn.net [66.185.152.245]
    11 bb2-cha-P6-0.atdn.net [66.185.152.31]
    12 bb2-ash-P13-0.atdn.net [66.185.152.50]
    13 pop1-ash-P1-0.atdn.net [66.185.139.195]
    14 BritishTelecom.atdn.net [66.185.151.110]
    15 t2c1-ge6-2.us-ash.concert.net [166.49.208.221]
    16 t2c1-p8-0.nl-ams2.concert.net [166.49.208.133]
    17 t2c1-p8-0.uk-lon2.concert.net [166.49.208.90]
    18 t2c1-p2-1.ch-zur.concert.net [166.49.164.46]
    19 t2a1-ge5-0-0.ch-zur.concert.net [166.49.186.17]
    20 ixp1-p0-0-0.ch-zur.concert.net [166.49.223.10]
    21 gw.dl.zhl-zhh-00.netstream.ch [62.65.130.1]
    22 gw.fiber.dd-zh-00.netstream.ch [62.65.128.133]
    23 gw.fiber.dd-dd-01.netstream.ch [62.65.128.146]
    24 ...
    .................

    I think he was expecting 'nightmares' anywayz, that'z why he chose to name his machine INSOMNIA :-)

  • Slashdotted - Mirror (Score:5, Informative)

    by SILIZIUMM (241333) on Wednesday March 05, 2003 @11:34AM (#5440641) Homepage
    Since the website seems slashdotted now, I've set up a mirror. You can see it here [infinit.net].
  • by Anonymous Coward
    Less pr0n and warez?
  • Try it (Score:5, Funny)

    by genka (148122) on Wednesday March 05, 2003 @11:42AM (#5440683) Homepage Journal

    "OpenBSD 3.3 beta is now stable enough for daily use, so why not download a snapshot [openbsd.org] from one of the mirrors [openbsd.org]and try it out?"


    "Windows XP is now stable enough for daily use, so why not download a snapshot [kazaalite.tk] from one of the mirrors [sharereactor.com] and try it out?"

    (intended as a joke)
  • by Arethan (223197) on Wednesday March 05, 2003 @11:42AM (#5440687) Journal
    Seems to me that this could really help broadband providers supply a better service if this was implemented in the firmware of the modems and the headend equipment.

    Then again, since when have most broadband providers really ever cared about supplying good speeds when the user maxes out the outrageously capped upstream...
  • by mekkab (133181) on Wednesday March 05, 2003 @12:04PM (#5440843) Homepage Journal
    If the ACKs are sped up, this interferes with TCP's tracking of the statistical average Round Trip Time.

    So if the network is congested and an ACK SHOULD time out but doesn't, TCP will keep on flooding the network, ruining the pool for everyone. (See: Tragedy of the commons [dieoff.com].)

    Yes, I agree that this is a worst-case scenario, but it's something to consider.
  • by puzzled (12525) on Wednesday March 05, 2003 @12:10PM (#5440880) Journal


    It seems to me that a great many /. readers have a cursory knowledge of how TCP/IP works. This is true of almost every other topic and I don't have a generalized solution for ignorance, but in this case a quick read of the first volume of Stevens' excellent TCP/IP Illustrated Series should do the trick.

    Reading that book will give you a foundation to understanding how a single endpoint behaves in an IP network. If you want some understanding of the guts of a large scale internetwork I'd suggest the Cisco Press IP Quality of Service book.

    There are a great many things near and dear to /. readers' hearts - the god-given right to steal music by treating a retail DSL/Cable connection like a dedicated wholesale circuit being the prime example - that are more easily understood after a read of these two books.

    If you're impatient you can look at my journal - I've covered some of the issues there.
  • This is cool and all, but what I don't get is why OpenBSD still doesn't have SMP support. Is it because they focus so much on security that other things fall by the wayside or is SMP insecure? :)

    I won't use OpenBSD until SMP gets in. Until then, I'll stick with FreeBSD.
  • Looking to replace my HDD-less router with a package that utilizes the same concept - or are there any hardware cable routers that do bandwidth shaping?
  • by JRHelgeson (576325) on Wednesday March 05, 2003 @12:22PM (#5440955) Homepage Journal
    For the benefit of all: The following is the article in its entirety - sans the graphics, which can be seen at: (provided the servers are working)

    http://www.benzedrine.cx/ackpri-norm.jpg
    http://www.benzedrine.cx/ackpri-priq.jpg

    benzedrine.cx - Prioritizing empty TCP ACKs with pf and ALTQ

    Introduction ALTQ is a framework to manage queueing disciplines on network interfaces. It manipulates output queues to enforce bandwidth limits and prioritize traffic based on classification.

    While ALTQ has been part of OpenBSD and enabled by default for several releases, the next release will merge the ALTQ and pf configuration into a single file and let pf assign packets to queues. This both simplifies the configuration and greatly reduces the cost of queue assignment.

    This article presents a simple yet effective example of what the pf/ALTQ combination can be used for. It's meant to illustrate the new configuration syntax and queue assignment. The code used in this example is already available in the -current OpenBSD source branch.

    Problem I'm using an asymmetric DSL with 512 kbps downstream and 128 kbps upstream capacity (minus PPPoE overhead). When I download, I get transfer rates of about 50 kB/s. But as soon as I start a concurrent upload, the download rate drops significantly, to about 7 kB/s.

    Explanation Even when a TCP connection is used to send data only in one direction (like when downloading a file through ftp), TCP acknowledgements (ACKs) must be sent in the opposite direction, or the peer will assume that its packets got lost and retransmit them. To keep the peer sending data at the maximum rate, it's important to promptly send the ACKs back.

    When the uplink is saturated by other connections (like a concurrent upload), all outgoing packets get delayed equally by default. Hence, a concurrent upload saturating the uplink causes the outgoing ACKs for the download to get delayed, which causes the drop in the download throughput.

    Solution The outgoing ACKs related to the download are small, as they don't contain any data payload. Even a fast download saturating the 512 kbps downstream does not require more than a fraction of upstream bandwidth for the related outgoing ACKs.
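    Some back-of-the-envelope arithmetic supports this claim. The link rates come from the article; the packet sizes and the one-ACK-per-two-segments behavior are typical assumptions, not figures from the article:

```python
# Rough estimate of the upstream bandwidth consumed by empty ACKs while a
# 512 kbps downstream is saturated. Assumptions: 1500-byte data frames,
# 40-byte TCP/IP ACK headers plus ~22 bytes of link-layer overhead, and
# the common delayed-ACK behavior of one ACK per two segments.

down_bps = 512_000
segments_per_sec = (down_bps / 8) / 1500      # ~42.7 data segments/s
acks_per_sec = segments_per_sec / 2           # delayed ACKs
ack_frame_bytes = 40 + 22                     # headers + link overhead
ack_upstream_bps = acks_per_sec * ack_frame_bytes * 8

print(round(ack_upstream_bps))  # roughly 10-11 kbps of the 128 kbps uplink
```

    Under these assumptions the download's ACK stream needs well under a tenth of the upstream capacity, which is why reserving priority for it costs the upload almost nothing.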

    Hence, the idea is to prioritize TCP ACKs that have no payload. The following pf.conf fragment illustrates how to set up the queue definitions and assign packets to the defined queues:

    ext_if="kue0"

    altq on $ext_if priq bandwidth 100Kb queue { q_pri, q_def }
    queue q_pri priority 7
    queue q_def priority 1 priq(default)

    pass out on $ext_if proto tcp from $ext_if to any flags S/SA \
    keep state queue (q_def, q_pri)

    pass in on $ext_if proto tcp from any to $ext_if flags S/SA \
    keep state queue (q_def, q_pri)
    First, a macro is defined for the external interface. This makes it easier to adjust the ruleset when the interface changes.

    Next, altq is enabled on the interface using the priq scheduler, and the upstream bandwidth is specified.
    I'm using 100 kbps instead of 128 kbps as this is the real maximum I can reach (due to PPPoE encapsulation overhead). Some experimentation might be needed to find the best value. If it's set too high, the priority queue is not effective, and if it's set too low, the available bandwidth is not fully used.
    Then, two queues are defined with (arbitrary) names q_pri and q_def. The queue with the lower priority is made the default.

    Finally, the rules passing the relevant connections (statefully) are extended to specify what queues to assign the matching packets to. The first queue specified in the parentheses is used for all packets by default, while the second (and optional) queue is used for packets with ToS (type of service) 'lowdelay' (for instance interactive ssh sessions) and TCP ACKs without payload.

    Both incoming and outgoing TCP connections will pass by those two rules, create state, and all packets related to the connections will be assigned to either the q_def or q_pri queues. Packets assigned to the q_pri queue will have priority and will get sent before any pending packets in the q_def queue.
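    The strict-priority behavior described above can be mimicked with a toy model. This is a sketch of how a priq-style scheduler drains its queues, not pf's or ALTQ's actual implementation; the class and packet names are made up:

```python
import heapq

# Toy model of a strict-priority output queue like ALTQ's priq: the
# highest-priority packet always drains first, and a sequence number
# preserves FIFO order within the same priority - mirroring q_pri
# (priority 7) vs q_def (priority 1) in the ruleset above.

class PriQ:
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, pkt, priority):
        heapq.heappush(self._heap, (-priority, self._seq, pkt))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = PriQ()
q.enqueue("upload-data-1", 1)   # bulk upload payload -> q_def
q.enqueue("upload-data-2", 1)
q.enqueue("empty-ack", 7)       # empty ACK for the download -> q_pri
order = [q.dequeue() for _ in range(3)]
print(order)  # the ACK jumps the queue despite arriving last
```

    Even though the empty ACK was enqueued behind two bulk packets, it is transmitted first - exactly the effect that keeps the download's ACK stream timely while the uplink is saturated.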

    Result The following test was performed first without and then with the ALTQ rules explained above:

    • -10 to -8 minutes: idle
    • -8 to -6 minutes: download only
    • -6 to -4 minutes: concurrent download and upload
    • -4 to -2 minutes: upload only
    • -2 to 0 minutes: idle

    The first graph shows the results of the test without ALTQ, and the second one with ALTQ:

    Image 1, ACK PRI Normal [benzedrine.cx]

    Image 2, ACK PRI PRIq [benzedrine.cx]

    The improvement is quite significant: the saturated uplink no longer delays the outgoing empty ACKs, and the download rate no longer drops.

    This effect is not limited to asymmetric links; it occurs whenever one direction of the link is saturated. With an asymmetric link this obviously occurs more often.

    Related links

  • by Froqen (36822) on Wednesday March 05, 2003 @02:57PM (#5442405)
    Windows XP uses a DDR Fairness technique to solve the same problem, I wonder how the two techniques compare?
    See "QoS for Modems and Remote Access" at this KB article [microsoft.com].
  • by golo (95789)
    I've heard that these guys [allot.com] have implemented the same idea as one of the tricks they use in their traffic shaping/QoS products. They're for WAN links IIRC, so that any client (in the remote sites) can take advantage of it.
  • ACK Shaping (Score:4, Informative)

    by nimrod_me (650667) on Wednesday March 05, 2003 @04:15PM (#5443250)
    This is what is known today as "ACK traffic shaping". First on the market, I believe, was packeteer (www.packeteer.com) with their PacketShaper.

    Unlike most conventional traffic shapers which queue and control the data rate on the outgoing channel, PacketShaper controls the rate of acknowledgements on the reverse channel.

    This is usually used to *slow* traffic. I.e., instead of having the router drop packets (thereby wasting resources until the source TCP understands that the net is congested and reduces load) it just slows the ACKs and the sender automatically reduces its sending rate.
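    The throttling effect described above follows from the rough bound that steady-state TCP throughput is window / RTT: holding ACKs back stretches the RTT the sender observes, cutting its rate proportionally. A sketch with made-up numbers:

```python
# TCP throughput is roughly bounded by window / RTT, so a shaper that
# delays ACKs on the reverse channel stretches the sender's effective
# RTT and slows it down without dropping a single packet.
# All numbers here are illustrative.

window_bytes = 64_000
rtt = 0.050                     # 50 ms path round-trip time
ack_delay = 0.050               # shaper holds each ACK an extra 50 ms

unshaped = window_bytes / rtt                # ~1.28 MB/s
shaped = window_bytes / (rtt + ack_delay)    # ~0.64 MB/s
print(unshaped, shaped)
```

    Doubling the effective RTT halves the sender's rate, which is why ACK pacing is a gentler brake than router drops: the source slows down smoothly instead of retransmitting wasted packets.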

    Anyway, the real nice thing about the OpenBSD implementation is that they merge their packet filter (pf) with the ALTQ queuing code. Now this is really powerful.

    Sounds like a good time for all BSDs to adopt this new combination instead of relying on less-capable mechanisms. E.g. FreeBSD has ipfw for filtering and dummynet for queue management. I don't know how pf compares with ipfw but ALTQ is definitely better than dummynet.

    Nimrod.
  • throttled (Score:3, Informative)

    by zquestz (594249) on Wednesday March 05, 2003 @05:09PM (#5443788) Homepage
    Just in case you don't run OpenBSD or Linux (wondershaper) and are looking for ACK packet priority, you can get throttled from http://www.intrarts.com/throttled.html and have the same functionality for Mac OS X and FreeBSD. It is great to see this information finally getting out to the public, as it does offer significant improvements in network performance.
