The Internet

Where The Bandwidth Goes

An anonymous reader writes "An often overlooked fact about network bandwidth utilization is that the bandwidth consumed on networks is more than the sum of the data exchanged at the highest level; it's data+overhead+upkeep. In the early '90s I worked for a large multi-national company whose software engineering department had a transatlantic X.25 circuit connection to its European engineering headquarters. It was necessary that the connection be 'on' 24x7 due to the spanning of a large number of time zones, disparate working hours and tight contractual requirements. Very large data transfers were sometimes operationally essential. But the financial people used to scream constantly about the circuit costs (charged per packet, IIRC) of several thousand dollars/month. The sys admin realized that if he just reduced the frequency of keep-alives, he could shave something like 10% off the monthly bill. This article points out that P2P applications are greater bandwidth hogs than one might think because of the foregoing and more - they also search, accept pushed advertising and do other transactions that are transparent to most users, but add up. I doubt that developers of those free P2P applications have given much thought to efficiency. This will be no surprise to many of you, but it helps explain why ISPs are rushing to put caps on transfers."


  • I doubt it. (Score:3, Insightful)

    by garcia ( 6573 ) on Wednesday September 11, 2002 @03:35PM (#4240101)
    It has nothing to do w/the advertising, the searches, etc. It has to do SOLELY w/the LARGE downloads that users of P2P networks do.

    Over the summer (when no one was in this little college town) I was steadily getting 250+k/s downloads (mostly updating Debian ;)). Now that everyone is back (and I assume loving Kazaa to its limit) I average about 75 to 100k/s.

    I am even tempted to call Road Runner and complain (I am just too lazy to fix Win98 and have it running so they can do their tests).

    DiVX and MP3s are what kill the bandwidth, not the little "inefficiencies" that P2P authors added in.
    • Re:I doubt it. (Score:5, Interesting)

      by Anonymous Coward on Wednesday September 11, 2002 @03:46PM (#4240195)
      Uh, no. The actual bandwidth hogs there are not the files themselves, but the packet overhead of transferring those files. Keep in mind that searches are broadcast, so when one person searches next to you, they send out N search packets, one of which goes to you. You send out N search requests on their behalf to all the people you're connected with, and so on. So if all N of your peers are searching simultaneously (this is on a closed network of just you and N people, keep in mind), you're forwarding something like N^2 packets.

      At N=16 and an average packet size of 128 bytes, that's 16*16*128 = 32KB. 32KB for just you and 16 friends, nobody else. As soon as you add more people to the topology, the math gets trickier and the numbers get much, much larger. Also keep in mind that the 128 is "ideal", not including the overhead of TCP establishing sessions, etc.

      There were a lot of papers published on this about a year ago; I don't have any references to them, though.
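
      A rough sketch of that arithmetic, generalised a little (the N neighbours, the 128-byte packets and the single round of forwarding are simplifying assumptions carried over from the figures above, not a model of any real client):

          # Back-of-the-envelope Gnutella-style flooding cost for one node.
          # Model: you have `peers` neighbours, each of them broadcasts one
          # query, and you re-forward every query to all of your neighbours.
          def flood_bytes(peers: int, packet_bytes: int = 128) -> int:
              return peers * peers * packet_bytes

          print(flood_bytes(16))    # 32768 bytes, i.e. the 32KB figure above
          print(flood_bytes(50))    # 320000 bytes -- the cost grows as N^2
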
    • Re:I doubt it. (Score:5, Interesting)

      by Deagol ( 323173 ) on Wednesday September 11, 2002 @03:48PM (#4240210) Homepage
      It has nothing to do w/the advertising, the searches, etc. It has to do SOLELY w/the LARGE downloads that users of P2P networks do.

      IMO, you're very wrong.

      The university I work for has its spies watching the border routers, logging streams. Daily, they release a "top talker" list to a select few individuals (not myself) who notify the admins of aberrant hosts. This is to stop blatant abuse, as well as cut off possibly compromised hosts.

      Occasionally, I would leave gnut running in a shell when I left for the day. I'd usually end up on "top talkers" with 1-2GB of traffic when no downloads were running! This was solely the chatter of the gnutellanet in action.

      Of course, I do configure my client to talk to a rather wide neighborhood, but still...

      • So you are going to tell me that when I download 5 DiVX movies a day (700+MB each) I am *NOT* hogging 3.5GB of bandwidth for that day?

        Sure, the overhead traffic hogs it, but so do the files. When you have 85% of your cable modem subscribers downloading 700+MB files daily, all day, every day, you are going to see a significant drop in your overall bandwidth.

        Again, I will point out that when no one was here in this college town (a 25,000-person difference) the bandwidth was WIDELY available. Once RR started getting everyone back on, the bandwidth went to hell.
        • Re:I doubt it. (Score:3, Interesting)

          by Deagol ( 323173 )
          Perhaps I responded a bit too harshly. Your original point seemed to imply that nothing other than downloads accounted for a non-trivial use of overall bandwidth. I responded with anecdotal evidence that this is not always the case.

          With the "always on" mentality of broadband users, and the fact that most clients simply hide in the system tray when you click the top-right "x" on the window rather than shut down, it wouldn't surprise me if a substantial amount of bandwidth wasn't directly related to a particular client downloading.

          • At my school at least, the biggest use of bandwidth seems to be people who leave filesharing programs on all the time, which end up sharing their download directories by default, even if they haven't configured them to share additional things. Having even a few dozen people sharing DivX movies on a high-speed pipe uses up a large percentage of the school's bandwidth, far more than the network chatter does (we're talking on the order of 30-40 GB/day for a single host).
            • The point is... (Score:2, Insightful)

              by Steveftoth ( 78419 )
              that it's not the movies as much as the protocol.

              I bet that running an FTP server with the same content will result in less traffic even if the movie is downloaded more often. Why? Because of the crosstalk inherent in the P2P protocols.
        • Re:I doubt it. (Score:3, Insightful)

          So you are going to tell me that when I download 5 DiVX movies a day (700+MB each) I am *NOT* hogging 3.5GB of bandwidth for that day?

          Guh.
          His point was that even when you're not downloading your 3GB of files a day, you're still using a good bit of bandwidth. The original article mentions that 10% of his network's bandwidth could be chopped by simply cutting down on network keep-alive packets. If the grandparent's comments are any indication, some P2Ps may be well above that.

          Or to put it another way: I'm not saying that you always look stupid -- just when you say things like that.

      • Re:I doubt it. (Score:4, Insightful)

        by jbarr ( 2233 ) on Wednesday September 11, 2002 @04:25PM (#4240470) Homepage
        I really wouldn't necessarily characterize it as "chatter". After all, isn't the point of P2P to allow multiple hosts to "share the load"? Though you may not be downloading anything, many might be downloading from you. (This is, of course, assuming you have your client configured to share.) KaZaA, by default, puts its installable executable in your shared directory making it available for anyone to grab.

        If you have ever "expanded" the downloding sources, it often shows the download being done from multiple sources. It could just be that your client is uploading part or all of some shared file.

        Not that there aren't inefficiencies, though.
        • No, I don't allow downloads. I usually hop on to find some obscure file, then hop right off. I don't share files much these days, certainly not with my work machine.

          I don't even allow caching -- too gray an area for my tastes, especially on my employer's equipment.

        • Though you may not be downloading anything, many might be downloading from you. (This is, of course, assuming you have your client configured to share.) KaZaA, by default, puts its installable executable in your shared directory making it available for anyone to grab.

          You're missing the point - go back and read the article linked in the story. The point is that, excluding uploads and downloads, these P2P networks are producing a lot of network traffic. The example quoted is up to 1.6GB a day just for running the client. Again, this is excluding the bandwidth required for uploads and downloads to/from your machine. This is just the overhead of communication, searches and ad pushes.

          Not that there aren't inefficiencies, though.

          The point being the inefficiencies are so large that just having a few hundred P2P machines running on your network can amount to a significant bandwidth drain, even before they share a single file.
      • Re:I doubt it. (Score:4, Interesting)

        by SerpentMage ( 13390 ) on Wednesday September 11, 2002 @04:32PM (#4240529)
        Here in Europe I have to pay for bandwidth. And as long as I do not run anything like Gnutella I have no bandwidth problems. I can share with Kazaa with no problems. But the moment I shared with Gnutella, my bandwidth shot through the roof. I kept it running for a week and have never started it again.
      • Re:I doubt it. (Score:2, Insightful)

        by putzin ( 99318 )

        It's not just P2P either. I run Ethereal every now and then on my DSL router to keep track of those intrepid hackers using my wireless connection (the girl two floors down apparently thinks her computer is actually "on" the internet) and to see if anyone has decided it is time to break in. Just watch a typical email session on Hotmail: 80%+ of the traffic is solely advertising and Hotmail-related extra crap. There were 1000-plus packets before I even saw the first message header (I thought someone was doing something naughty). I couldn't believe it at first, but really, that's just amazing. Now the point.

        Which is, we really don't notice all of the extra traffic generated every time we hit a website or fire up Morpheus. Generally, you expect the downloads, but you don't expect the protocol overhead, or the ads, or keepalives, or whatever else might be bundled in. This is where we could save bandwidth if we wanted to. But, we don't want to. I would freak if Ameritech imposed a bandwidth restriction.

      • Gnutella is a bad example since it's about the worst possible scenario in terms of wasted bandwidth. Something like Kazaa (assuming you're not a supernode) or eDonkey uses a trivial amount of bandwidth while idle, since they're not constantly receiving and sending searches (i.e. they keep servers and clients separate). I've said many times that Gnutella is a steaming pile because it wastes so much bandwidth on searches; your example is proof of just how bad it is.
    • by Anonymous Coward on Wednesday September 11, 2002 @03:51PM (#4240236)

      Every time I visit a web page using my cable modem, I feel a pang of guilt. By visiting a web page, I:

      • drive up bandwidth costs for the webmaster that are not covered by advertising
      • consume my ISP's bandwidth
      • consume shared bandwidth, slowing down my neighbor's computer slightly.

      Bandwidth is a finite resource which we should all conserve. One day, eventually, the Internet will run out of bandwidth.

    • by 0x0d0a ( 568518 ) on Wednesday September 11, 2002 @04:44PM (#4240627) Journal
      Freenet is more efficient than, say, the Web would be. Those DiVXes don't need to cross your ISP's downstream connection at all.

      Gnutella is noisy, but that's not the fault of the creators. Blame the RIAA -- the first P2P applications were centralized. If you can give up the requirement that there be no single, trusted point of failure, it's much easier to make an efficient network. They attacked Napster, and now people have moved to mostly less efficient approaches.
  • I wonder how much bandwidth could be saved annually if people who develop webpages optimized their HTML a little better? Removing extraneous spacing, simplifying form field names ("fn" instead of "FirstName"), that kind of thing. Especially sites that get insane amounts of traffic. You know, like Slashdot. :)
    • Re:Optimize html (Score:2, Insightful)

      by heyeq ( 317933 )
      If you're anywhere near being a k0der you'll be squeezing your head in disgust. For fifteen-plus years we've been battling against asm-style two/three-letter variable declarations, and we finally have languages that have helped us define naming conventions and the like, and you want us to go BACK to TLAs??? (TLA = two/three letter acronym)

      are you insane?
      • Go look at the HTML code from Google - notice how they abbreviate every object name to ONE letter in the interest of bandwidth.

        I'm sorry that you learned how to code sloppily, and are bitching about streamlining code for efficiency and cost savings.

        Most of us don't need the damn Hungarian notation that MS spreads like gospel truth. It makes for unreadable names that convey less meaning than a nice clear variable name.

        Oh - and I know when to use a goto to streamline code, too :-)
        • by EvanED ( 569694 ) <evaned@gm3.14159ail.com minus pi> on Wednesday September 11, 2002 @03:58PM (#4240292)
          >>Most of us don't need the damn Hungarian notation that MS spreads like gospel truth

          Who said anything about that? And besides, MS now discourages its use.

          >>It makes for unreadable names that convey less meaning than a nice clear variable name.

          Which 'fn' is not but 'FirstName' is.

          Now, if you have a dynamically generated page, you could use constants that are set to short stuff like 'fn'. Less code to be transmitted while still keeping most of the readability of the original code. If you discover a bug, temporarily switch to a different set of constants ('FirstName' instead of 'fn') until you sort it out, so the resultant HTML is more readable. (Same goes for whitespace: make a constant ENDL or NEWLINE that is set to '\n' while debugging, then change it to '' for production.)
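
          A quick sketch of that constant-swapping trick (Python here just for illustration; DEBUG, FIELD_FIRST_NAME and ENDL are names made up for the example, not taken from any real codebase):

              DEBUG = False   # flip to True on the development server

              # verbose names and real newlines while debugging, terse ones in production
              FIELD_FIRST_NAME = "FirstName" if DEBUG else "fn"
              ENDL = "\n" if DEBUG else ""

              def render_form() -> str:
                  return (f"<form>{ENDL}"
                          f'<input type="text" name="{FIELD_FIRST_NAME}">{ENDL}'
                          f"</form>")

              print(render_form())   # compact HTML in production, readable HTML in debug
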
          • >>>Most of us don't need the damn Hungarian notation that MS spreads like gospel truth

            >Who said anything about that? And besides, MS now discourages its use.

            Got a reference for that? It's shoved down our throats whether we like it or not simply because 'it's the MS way'. It's damned impossible to debug something called 'lpszglname' especially when it isn't even a string any more because it was changed years ago...
      • Quite Sane really (Score:2, Interesting)

        by ACNiel ( 604673 )
        I couldn't disagree more.

        I just got into an amazingly poorly written program after about a year, and was bewildered by the names, and what they really meant. And it was my code. And yet, I couldn't disagree with you more.

        The streamlining that was discussed by the parent isn't for the sake of the coder, it is for the sake of the user. Mostly your argument is founded on the idea that long names really don't hurt anything, and if that is true, then long, descriptive names do their job. But here is a prime example of where they do make a difference. Here the names (for form variables, and even JavaScript or VBScript embedded in a page) could make a tremendous amount of difference. And that is what you would be streamlining for.

        As for superfluous naming, well, that can be just as bad and unreadable as short names. Addled is addled, and you can use short (maybe more than 3 characters, this isn't RPG after all) descriptive names without having to type an entire sentence.

        junk1, junk2, junk3 will never be a good idea, but if your form has 4 variables, and you name them
        FNm, LNm, MI, and Age, I don't think anyone will be confused.
      • <hypertext-markup-language>

        <paragraph> Would you rather write HTML like <emphasis>this</emphasis>? </paragraph>

        <paragraph> Sure, &open-quotation-mark; TLAs &close-quotation-mark; may be annoying to read, but they are certainly OK to use if they are understandable enough. </paragraph>

        </hypertext-markup-language>
    • Or, as an alternative, how much bandwidth could be saved if webservers used compression, like mod_gzip [remotecommunications.com]?
    • I tend to think that poorly optimized HTML is just a drop in the bucket. If there were one thing to optimize, it ought to be images. The average page uses at least an order of magnitude more data for its images than for the HTML. Using smaller images, or just saving things in more efficient formats, such as JPEG, or lossless compression formats, would be a big step in the right direction.

    • Good point. However, readability is important. I do a lot of ASP (yeah, I know..., migrating to Perl) coding and optimize my code for scalability and speed; however, I won't sacrifice readability. I leave my comments in. Yes, I know I can strip them out for the production server, but I prefer to leave my comments in and use vbCrLf line feeds to break up the HTML into something readable. Helps with debugging on the development server.


      That said, text compresses pretty damned well. So I'm less concerned with text than with images. I much prefer PNG to JPEG & GIF, but you've gotta make sure the client supports it (dynamically feed whichever).


      For sites with massive traffic, you're right - it behooves you to optimize everything you can - but not to the point of making life difficult for the development team:


      Monkey 1: "Hey, what the hell is 'lnm'?"
      Monkey 2: "Hell if I know."
      Monkey 1: "You wrote it!"
      Monkey 2: "Yeah, but that was like 4 months ago. What do the comments say?"
      Monkey 1: "DOH!"


      I don't like my HTML generated in one humongous unbroken string when I look at the source.

  • by afidel ( 530433 ) on Wednesday September 11, 2002 @03:37PM (#4240123)
    Actually, P2P work does focus on efficiency, because efficiency determines how large the network can scale on a given set of hardware (the users' machines and commodity internet connections). ISPs want to cap bandwidth because their current business model demands that they oversubscribe their uplink by around 20-200 times, depending on the type and pricing of the commodity connection. Besides, caps are based on total bandwidth usage, which includes networking overhead (the router's accounting program usually doesn't care about payload).
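
    For a sense of scale, a back-of-the-envelope oversubscription calculation (the subscriber count, per-user speed and uplink size below are purely illustrative numbers, not from the comment above):

        # Oversubscription ratio: total sold downstream capacity vs. actual uplink
        subscribers = 2000          # customers sharing one uplink (illustrative)
        per_sub_mbps = 1.5          # advertised speed per customer
        uplink_mbps = 45            # e.g. a DS3 to the rest of the internet
        print(f"{subscribers * per_sub_mbps / uplink_mbps:.0f}:1")   # ~67:1, inside the 20-200x range
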
  • We need web caches (Score:5, Insightful)

    by Nicopa ( 87617 ) <nico DOT lichtmaier AT gmail DOT com> on Wednesday September 11, 2002 @03:39PM (#4240137)

    We need web caches [mnot.net]... It's stupid to have files crossing the ocean thousands of times. Besides, not using web caches means that those who cannot afford bandwidth costs cannot put content on the web... Caches now! [vancouver-webpages.com].

    Web developers must not be afraid of web caches, since the HTTP/1.1 protocol allows them to precisely define how and when their content will be cached.
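
    The HTTP/1.1 knobs being referred to are the standard caching headers; a minimal illustration, with made-up values:

        # Response headers a site can send to tell shared caches exactly how to behave
        cache_headers = {
            "Cache-Control": "public, max-age=3600",           # shared caches may keep it for an hour
            "ETag": '"v42"',                                   # lets a cache revalidate cheaply
            "Last-Modified": "Wed, 11 Sep 2002 15:39:00 GMT",  # alternative validator
        }
        # or, to opt out of caching entirely: {"Cache-Control": "no-store"}
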

    • by Cloudmark ( 309003 ) on Wednesday September 11, 2002 @03:54PM (#4240259) Homepage
      While caching does offer a lot of advantages, there are also pitfalls, particularly for those providing it.

      Working for an ISP myself, and specifically with the bandwidth tracking section, we deal with pretty much every type of high-bandwidth application out there, and in many cases we could save an immense amount by caching. Unfortunately, if we cache and then illegal material is downloaded, we can be held responsible for that material. It's unfortunate that efficiency must be sacrificed but right now it's generally too dangerous for anyone to run a serious caching system.

      The rule of thumb for ISPs, at least in North America, is generally that if it's on a client system (subscriber - your PC), then it's not our problem (legally). If a file resides on our cache, then we can be held responsible for it by law enforcement agencies.

      As to the general suggestion that a great deal of bandwidth is consumed by overhead, I think there is some merit to it, but it's a fairly small amount compared to what is used by deliberate downloads and transfers. Systems are moving towards greater efficiency in order to improve speed and to work with lower-bandwidth platforms (phones, PDAs, etc.) but bandwidth is unlikely to be a major motivator. Most broadband subscribers either download too little to cause serious issues (6GB a month or so - limited overhead) or extreme volumes (100GB a month - overhead is dwarfed by content).
      • Unfortunately, if we cache and then illegal material is downloaded, we can be held responsible for that material.

        That's not really true. Even under the DMCA you can qualify as a safe harbor and avoid that liability. The requirements are too burdensome for an individual, and that IMO is only one flaw among many in the DMCA, but for an ISP that employs full-time staff it's entirely doable and many ISPs have done it. You'll need to find another excuse.

      • What about specifying a really, really short 'life' for cached content?

        I know in Squid you can specify that anything over x minutes old is to be discarded. Then again, I'm not sure if Squid can handle an entire ISP's worth of traffic (probably, though).

        That would solve your problem with caching illegal content if it was just discarded after, oh, say 90 minutes.
      • Have you considered maybe some sort of whitelist cache? For example, when 100 people are all downloading a RedHat ISO or Microsoft service pack, surely you'd like to move it across your outside pipe only once, and there isn't much chance that RedHat and Microsoft are distributing kiddie porn.

        Or is "isn't much chance" not good enough? Argh.

        This sucks. If you're protected from liability on how your wires are used, that protection should extend to caches. It's just common sense. Sounds like we've got a fucked up law or something.

      • Unfortunately, if we cache and then illegal material is downloaded, we can be held responsible for that material.

        Not necessarily. A rider on the DMCA allows service providers in the United States to cache web pages, provided that they meet certain criteria (which are easy with HTTP/1.1) and designate one of their employees as a DMCA agent. Read more on this page [finehummel.com].

    • by Sloppy ( 14984 )
      I agree. But I must be wrong about something, because ISPs don't seem to be deploying caches. ISPs would seem to have the most to gain from caches, and they are also at a very natural and sensible point for it.

      I know there are ISP slashdotters. Any of you guys want to explain why web caches aren't worthwhile?

    • AOL is what makes us afraid, since it shows that providers don't care about refresh dates or content, but rather about cutting bandwidth - even if it means compressing images and caching a page for extended periods of time.

  • huh (Score:2, Funny)

    by freakboy303 ( 545077 )
    Now I feel kinda bad for downloading all that pr0n *wipes hands* Well not that bad.
  • Kazaa.... (Score:5, Interesting)

    by Anonvmous Coward ( 589068 ) on Wednesday September 11, 2002 @03:41PM (#4240154)
    I ran Kazaa for a couple of weeks once to play with it. After shutting it off, I noticed that despite my low ping, I was getting really nasty little lags every few seconds no matter what server I was playing on. Just for giggles, I fired up ZoneAlarm and took a look at the log. Within minutes, I had 500 'events' where users from the Kazaa network were sending 'Where are you?' messages (I assume...). It took 3 days for it to die down.

    Yes, I can understand ISPs getting ratty about it. AT&T (supposedly) rotates IPs once in a while. If they did that, some poor schmuck could potentially have had degraded net performance with no obvious cause for it. It's not that likely, but if it happened it could cause a support issue with AT&T. If enough people ran Kazaa (I can only assume other P2P progs are similar...) then rotation of IPs could turn into a headache for support staff at ISPs.

    • by Frank of Earth ( 126705 ) <frank AT fperkins DOT com> on Wednesday September 11, 2002 @03:56PM (#4240281) Homepage Journal
      had 500 'events' where users from the Kazaa network were sending 'Where are you?' messages (I assume...).

      Actually, if you had converted those messages to hex and then output the result to an MP3 file, you would have heard, in an AOL voice, "You have pr0n!"
    • Re:Kazaa.... (Score:3, Interesting)

      by garcia ( 6573 )
      AT&T's DHCP server does rotate IPs every 4 days. If you happen to have the same one for a long period of time it has to do w/the fact that you PC is getting the IP again before anyone else.

      I get "little lags" no matter what server I am on no matter what kind of connection I have (except dialup) and it has been happening way before Kazaa. Broadband basically blows when it blows and is amazing when it is amazing.
      • You can attempt to keep your same IP address all of the time if you allow your ISP's DHCP server to ping you.
        You must have a specific rule in place for this if you are running a firewall and normally block pings.
        • RR doesn't ping me. I don't lose my IP address unless I leave the cable modem turned off for a long period of time.

          I don't know RR's times on DHCP though.
  • "This 4000% overhead is annoying but tolerable on lightly loaded networks."
    (From RFC896)

    Of course, that's talking about bytes of overhead vs. bytes of real data - it's not as though there are 4000 overhead packets for every packet containing real data.
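
    The 4000% figure is RFC 896's single-keystroke "tinygram" case: one byte of payload wrapped in roughly 40 bytes of TCP and IP headers.

        payload_bytes = 1           # one typed character per packet
        header_bytes = 40           # ~20 bytes TCP + ~20 bytes IP
        print(f"{100 * header_bytes / payload_bytes:.0f}% overhead")   # -> 4000%
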
  • by The Fanta Menace ( 607612 ) on Wednesday September 11, 2002 @03:46PM (#4240200) Homepage

    The more they cap usage, the less people will use (obviously). Then content providers such as streaming radio stations will start to drop off as it becomes more expensive for users to access them.

    After that it becomes a vicious circle: with fewer content providers, there's no reason for users to keep their service. Then the ISPs go broke.

    Take a look at the Australian example. Almost all broadband providers have a 3GB monthly cap. The ABC [abc.net.au] has just started an internet-only radio station, but I really wonder why. It wouldn't take too many days of listening to it for a user to totally max out their cap. I predict the station will be closed, due to lack of interest, within a year.

  • Back when xolox (for Windows) was at version 1.2, it would requery *forever*. Although this worked great for getting results, I am sure it killed the network.

    They have seemingly 'fixed' this in the new release, but it now has banner ads and popups all through it. Ug.

    It's pretty good, even though they have some catching up to do. (They went down for a while for fear of getting sued à la Napster.)

  • by digitalsushi ( 137809 ) <slashdot@digitalsushi.com> on Wednesday September 11, 2002 @03:48PM (#4240211) Journal
    As an ISP I can say that we make our money on a gamble that people use X amount of bandwidth. P2P breaks our precious little ratio of what we expect and what we need.


    The geek in me, though, says "waaa" and that things that don't evolve, die - and the things that don't die, evolve. P2P pushes the envelope right now, but all that encourages is more network growth. Just think of P2P as those pains you had in your legs when you were 14. Sure, it may not be the most efficient thing in the world, but the underlying infrastructure has to take that into account, or get out of the way for one that can.

    • by Fastolfe ( 1470 ) on Wednesday September 11, 2002 @04:37PM (#4240562)
      P2P breaks our precious little ratio of what we expect and what we need

      Uhh, yah, except this is how they're determining how much they can charge you. If the ratio becomes permanently skewed, the way they "evolve" as you put it is to simply skew their prices to compensate. Though your end user connections may be effectively "unlimited", someone upstream pays for the bandwidth by how much data gets transferred. I guarantee the costs will filter down.

      So as a business, what would you do? Raise your rates for all "unlimited" customers? Create a new class of DSL customer with a lower bandwidth cap and re-figure the ratio? Block P2P activity entirely? Write into the end user contract some soft usage caps and go after the top 1% of bandwidth consumers? All of the above?

      I don't really think P2P is going to drive growth (i.e. more bandwidth for less cost) any more aggressively than the growth we're already seeing. I just think it's going to annoy ISPs and make them re-think some of their "unlimited" bandwidth plans.
  • by RollingThunder ( 88952 ) on Wednesday September 11, 2002 @03:49PM (#4240225)
    The office I'm at used to have a contract with a monthly cap - a mere 20GB, with fairly hefty per-GB fees after that.

    One Monday morning, I came in, and glanced at the MRTG graphs over the weekend. Keeripes! Somebody had been pushing data at about 250Kbps from Friday night until about 6 PM on Sunday, sustained.

    I did a quick calculation, and then informed the bosses that we were going to be paying a lot more than usual this month, and asked if they wanted me to find out why. Of course they did.

    Turned out it was one of said managers. He fired up Limewire, grabbed something on Friday, and forgot to shut it off. Seeing our nice low-latency, high capacity link (E10 or thereabouts, just with a really low traffic cap), it went supernode... and we paid about twice the usual for it.
    • by afidel ( 530433 ) on Wednesday September 11, 2002 @04:17PM (#4240422)
      We had the same problem. Although our multi-T1 connection is not metered, we did have it brought to its knees for almost a day when one person set Kazaa to be a supernode. I got a very angry call from the WAN ops people to go tell this user to knock it off, and that they would recommend disciplinary action if it happened again.
    • We had someone visit a site with a really badly set up ad system; it kept refreshing the image every 14 milliseconds...
      He managed to pull 8 gigs overnight... and at 8c a meg that gets expensive.
  • by fortinbras47 ( 457756 ) on Wednesday September 11, 2002 @03:51PM (#4240235)
    With Gnutella, QueryHit packets can make up as little as 1% of traffic (by number of packets, not size) while Ping and Pong packets can be well over 50% of packets. Check out this article [ucsd.edu] for more detail.

    Gnutella is not one of the more advanced protocols, but most of its problems are present to varying degrees in other P2P systems. It's not really surprising that P2P software which spends so much time trying to connect to computers, connecting to a computer to start a download, etc., and searching in a geometric spidering fashion is quite inefficient.

  • Damn dialup crappiness, time to retype the whole message :-(

    Question:

    Isn't the excuse for capping broadband connections a moot point, given that the general broadband deal is that it's a shared resource? ISP X says that 1% of their users are hogging 60-70% of the available bandwidth, and then uses that to say 'Right, we're raising prices' - but if there is any load balancing, the other 99% of users would be able to take up the bandwidth if they needed more, so it divides up (theoretically equally)??

    Although I'm spoilt rotten living with my brother in CT, 'cos the Optimum Online connection (around 5Mbit at its fastest) has been no trouble at all - damn UK rural 'broadband' (or lack thereof). Oh well, I suppose I don't have many new Farscape eps to download when they come out :-(.

    Just my 2 pence.

  • X.25 hax0rz (Score:2, Offtopic)

    by drwho ( 4190 )
    Back in the heyday of "X.25" networks, there were a lot of illegitimate users. There was inadequate technology to protect and track.

    It is rumored that there are accounts on public X.25 networks, belonging to large corporations, that have worked for over 13 years.
  • The Real issue... (Score:2, Interesting)

    by Quasar1999 ( 520073 )
    ISPs are putting bandwidth caps on accounts because they see it as a source of revenue. Plain and simple. The crap about how 5% of the users use 95% of the bandwidth is really starting to piss me off... they advertised always-on, unlimited bandwidth when I signed up, and now that they have enough customers used to the speed, they've essentially upped the price (just like soup companies reduced the size of their cans of soup but kept the price the same - if you want more, buy a larger can...): if you want more bandwidth, upgrade your package, or better yet, pay $7.95 a GB over our generous 3 GB/month...

    Isn't there a law against doing this sorta crap? They said always-on, unlimited bandwidth... now they're charging through the nose, claiming crappy stats on usage, and blaming it on P2P networks... I can't even download my legitimate MSDN ISO images without going over my monthly bandwidth limit, let alone actually do anything else on the net...

    End rant... :P
  • The sys admin realized that if he just reduced the frequency of keep-alives, he could shave something like 10% off the monthly bill.
    Translated: he cut down on pr0n.
  • by stratjakt ( 596332 ) on Wednesday September 11, 2002 @04:02PM (#4240326) Journal
    ..is airing this commercial of goofy testimonials for their broadband cable service. A kid says "Ever been in the belly of a whale? I have", another guy goes "I go to the moon and back twice a day", etc.. etc..

    Now, one of them has some guy say "I collected everything Mozart ever did... In 10 minutes!"

    To me that comes through loud and clear as "*wink* *wink* *nudge* *nudge* Napster (etc.)!"

    I would say p2p is the driving force behind non-geeks getting broadband. They don't need it for e-mail, or casual web-surfing. They don't play games, but I know many people eager for an alternative to the bland junk on the radio. (Plus due to geography, radio reception is poor here)

    Same thing with the 'work from home' bunk they promote, and yet block VPN connections.

    It's like dangling a carrot in front of a mule to get him to move, and he stupidly chases it not realising he'll never reach it. It works fine in cartoons, but eventually the mule becomes frustrated, kicks you, and refuses to move at all.

    Someone will be smart enough to figure out a way to give out the bandwidth and make money at the same time. And it won't be a monopoly. Maybe 802.11 will be our savior?

  • by Arcaeris ( 311424 ) on Wednesday September 11, 2002 @04:06PM (#4240345)
    I can totally understand the limitations of bandwidth in the face of P2P software.

    I was in my first year of college (living in the dorms) when Napster became popular. That same year, they banned it from all campus computers. The IT guys here said that of the estimated 7200 dorm room computers on campus, a minimum of 6500 were running Napster at any given time. They were forced to ban it because the bandwidth usage was taking away from vital staff/faculty related web-based tools and network services that needed to be maintained. In fact, nothing else could be run on the network.

    Now Napster's gone, and I haven't lived on campus since Kazaa and such became popular. I'm pretty sure I know how they're dealing with it.

    If one university had to do it, then imagine what the average cable/DSL provider has to deal with. Granted, they don't have as much essential network stuff.
    • Yes, but (if your university was like mine) you don't pay for it. Access to the university's private network was a privilege of living on campus. They were bound by a fixed budget that came out of our tuition/res fees and had to accommodate everyone. Our house, our rules...

      I pay for my cable-powered internet. I don't see their right to tell me what I can and can't do with it; it was part of no contract I signed, save some ambiguous crap about removing "abusive" users at their discretion.

      I made another post in this forum about how they use p2p and VPN as incentives to sell the service. Bait and switch.

      The business model in short (and not a lame SP troll):

      Split a 10Mbit pipe over 1000 users. Most only know how to read e-mail and read Dilbert cartoons, so they'll never notice we oversold ourselves. Kick off the few that will, and cite bandwidth abuse as the reason. (How you 'abuse' something they sold you unlimited access to still escapes me.)

      The 'stupid sheep' they counted on forking over 40-100 bucks a month for something they'd never use found something to use it for.
  • I know where at least a few hundred gig a week of bandwidth goes to...
  • the big hog on network bandwidth is TCP/IP...big surprise there.
  • Wow (Score:5, Insightful)

    by Salamander ( 33735 ) <jeff@NOsPam.pl.atyp.us> on Wednesday September 11, 2002 @04:22PM (#4240454) Homepage Journal

    The article itself was kind of ho-hum, but the following part of the Slashdot intro caught my attention:

    I doubt that developers of those free P2P applications have given much thought to efficiency.

    Again...wow. One would need to search far and wide, even on Slashdot, to find another example of such absolutely astonishing cluelessness. Timothy has obviously never talked to a P2P developer in his life. Sometimes it seems like efficiency is just about the only thing P2P developers think about, unless someone's on a security/anonymity rant. Little things like robustness or usability get short shrift because so much of the focus is on efficiency. Hundreds of papers have been written about the bandwidth-efficiency of various P2P networks - especially Gnutella, which everyone who knows anything knows is "worst of breed" when it comes to broadcasting searches.

    It's unfortunate that the most popular P2P networks seem to be the least efficient ones, and doubly unfortunate that so many vendors bundle spyware with their P2P clients, but to say that P2P developers don't give much thought to efficiency is absurd. They give a lot more thought to efficiency than Slashdot editors give to accuracy, that's for damn sure.

    • Hundreds of papers have been written about the bandwidth-efficiency of various P2P networks
      I beg to differ. Hundreds of papers have been written on the bandwidth-inefficiency of P2P networks, and the fact that they are still being written is evidence that they are still inefficient.
      • Nonetheless, people have obviously thought about it a lot, and timothy is still full of crap for saying otherwise. :-P

  • compression (Score:5, Interesting)

    by Twillerror ( 536681 ) on Wednesday September 11, 2002 @04:22PM (#4240457) Homepage Journal
    Remember back in the good ol' modem days? I remember getting 10k a second on some transfers, even with a 56.6.

    If a P2P network protocol is text-based, say like XML, it should compress pretty well and keep some of this extra bandwidth down.

    If HTTP would actually support compression natively, we could save tons of bandwidth on those HTML transfers. The page I'm typing this comment on is 11.1k; zipped it is 3.5k, and I think I have fast compression on. I'm sure the main Slashdot page would save even more. Slashdot could literally save megs a day.

    It would simply be a matter of Apache and IIS supporting it. And maybe a new GETC command in HTTP that works the same. The browser would ask if the server supports it, and then go from there. Or try it and if it failed, try it normally. Apache or IIS would be smart enough to not try and compress JPEG, GIF, and other pre-compressed files.

    Everything from FTP to SMTP could save a little here and there, which adds up quick.

    Perhaps the real answer is to write it into the next version of TCP and have it hardware accelerated.
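
    As the replies below note, much of this exists already; but the kind of saving being described is easy to measure. A minimal sketch - the URL is just an example, and actual ratios vary page by page:

        import gzip, urllib.request

        html = urllib.request.urlopen("http://slashdot.org/").read()
        packed = gzip.compress(html)
        print(len(html), "->", len(packed),
              f"({100 * (1 - len(packed) / len(html)):.0f}% smaller)")
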

    • The page I'm typing this comment on is 11.1k; zipped it is 3.5k, and I think I have fast compression on. I'm sure the main Slashdot page would save even more. Slashdot could literally save megs a day.

      It's not a free lunch; you have to consider the resulting increase in server CPU load. It's probably not an issue for low-traffic sites, but it's definitely a concern for sites like Slashdot.
    • Re:compression (Score:4, Informative)

      by ShaunC ( 203807 ) on Wednesday September 11, 2002 @04:54PM (#4240727)
      It would simply be a matter of Apache and IIS supporting it
      Apache does support it; it's called mod_gzip [remotecommunications.com], and Slashdot already uses it. The IIS equivalent (sort of) is called PipeBoost [pscode.com].

      Shaun
    • If a P2P network protocol is text-based, say like XML, it should compress pretty well and keep some of this extra bandwidth down.

      In terms of compressed size, there's no (theoretical) advantage to a compressed XML stream over a compressed binary stream. The reason that XML compresses so well is that there's obvious redundancy that's easy to "squeeze out". A good binary protocol will have less easily removed redundancy -- so it'll be smaller to start with, but won't compress as well. If both represent the same protocol, in theory they should (post-compression) come down to about the same size.

      Also, as mentioned, mod_gzip is available and used today.
  • Once again... (Score:5, Interesting)

    by _Knots ( 165356 ) on Wednesday September 11, 2002 @04:25PM (#4240471)
    I say P2P mesh networks (ala Gnutella) need to have intelligent meshing algorithms so that the network tries to minimize the number of mesh links crossing a given physical uplink or a given backbone segment.

    Such a scheme would return optimized search results because your net neighbors would know of your query before somebody on the other side of an uplink (and, as there is less routing between you, can transfer files faster in theory).

    On top of that, with such a router-aware network, the bandwidth wasted by broadcast packets crossing a given line multiple times, due to reflection by peers on the other side, would be virtually gone once the network became aware of the layout - ideally each node wouldn't have to learn this but could get some kind of topological information from a node it connected to ("You are in the same /x block as a.b.c.d - please connect to that node and drop this connection") or maybe even ask the remote node to perform some kind of query for it ("who wants a.b.c.e, because I don't?"). Our current "host caches" like router.limewire.com could gain some intelligence about whom they introduce to whom.

    Instead of capping upload and download capacities as much as done now, perhaps those limits should be relaxed but a P2P "introduction" program installed on the ISP's router so that clients behind the firewall mesh with each other before a few of them send meshing links spanning the uplink.

    Yes, downloads will still follow the usual TCP/IP pathways - which we presume are most efficient already. But the broadcast discovery packets which now ricochet around the network would, with an intelligent meshing algorithm, span as few uplinks as possible to query hosts as network-close as possible. All in all this would reduce traffic.

    Somebody want to blow holes in this for me?

    --Knots;
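
    A toy sketch of the "prefer network-close peers" part of this, using a shared address prefix as a crude stand-in for real topology information (the /16 grouping, the example addresses and the function names are illustrative only):

        import ipaddress

        def prefix_key(addr, bits=16):
            # collapse an address to its surrounding /16 block
            return ipaddress.ip_network(f"{addr}/{bits}", strict=False)

        def pick_neighbours(my_addr, candidates, want=8):
            mine = prefix_key(my_addr)
            # stable sort: peers sharing our prefix first, everyone else after
            ranked = sorted(candidates, key=lambda a: prefix_key(a) != mine)
            return ranked[:want]

        print(pick_neighbours("10.1.2.3", ["172.16.0.5", "10.1.9.9", "10.1.4.4"]))
        # -> ['10.1.9.9', '10.1.4.4', '172.16.0.5']
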
    • Actually, it's a really good idea, and IMO an area where not enough work has been done. Which is not to say no work has been done. There are several projects aimed at finding the best way to determine network distance, and several more that seek to use that information to create more optimal connection topologies. I don't have my links to that stuff handy right now, but if you send me email I should be able to dig them up.

      Part of the problem is that many P2P networks have dependencies on some model or other of the higher-level abstract topology through which they route search queries, and it can be difficult to map (for example) a hypercube onto the actual IP-network topology. Lacking a good solution to that problem of mapping one topology onto another, many P2P developers punt; they try to minimize hops through the overlay network, and vaguely hope that by doing so they'll make up for the extra hops through the underlying network. In many cases it even seems to work, because the search algorithms that operate on the higher-level topology can be extremely efficient.

      Nonetheless, if someone could figure out a way to reconcile those advanced search algorithms with a more "reality-based" topology, that would be great. If you think you have ideas on how to do that, by all means explore them. The more the merrier.

    • An excellent idea in theory. Now try and code such a beast :-)

      Trying to build efficient structure into peer networks is like building a house of sand. They are extremely volatile.

      You need to take into account numerous factors, like many orders of magnitude in bandwidth capacity between peers, NAT and unNAT'ed hosts, high churn rates, volatile peer groups, etc, etc.

      Most people who have tried to overlay fragile yet elegant topologies on top of peer networks have seen them crumble under volatile real world scenarios.

      This is not to say it is impossible, but that it is much harder to implement such a network in today's internet environment than it first appears.
      • Re:Once again... (Score:3, Interesting)

        by _Knots ( 165356 )
        NAT vs. un-NATed: treat the NAT as an uplink that we should try to limit connections through.

        High churn rates / volatile peer groups: yes, there's a lot of changeover in everything, but I'd wager that copying intelligence on connection ("Here's everything I know about the network around me") would endow newcomers with a good base to start off with.

        There's nothing fragile about this topology: it's a runtime dynamic mesh topology - exactly like Gnutella's now. The sole difference is that groups of peers would try to actually group themselves by network-proximity (probably IP range, or for things like Road Runner or at a university, reverse DNS mappings might help). Yes, it might take some more effort from users to specify how to identify members of their local group and get it right. But there are surely some decent ways (IP ranges, as stated) of getting it *usually* right.

        It shouldn't hurt the network - it should be an option to turn it off, and it should turn itself off if it detects it's being unhelpful. The incentive for the users (inside a university or on a local cable loop) would be much faster downloads due to less routing overhead.

        Really you could think of this as ultrapeers agreeing amongst themselves as to which of them will actually route outside a group.

        --Knots;
  • by tandr ( 108948 ) on Wednesday September 11, 2002 @04:25PM (#4240477)

    Ready?

    Slashdot.org and traffic redirected from its links.
  • I wonder (Score:2, Interesting)

    by A5un ( 586681 )
    How much of current network traffic (data/voice) is really just protocol? I mean all the way down to the physical layer (yes, the 1s and 0s). Seems like every layer of abstraction tacks a protocol header onto the real payload. Has any study been done on this? I won't be surprised if more than 50% of network traffic is just protocol (IP headers, TCP signalling, SONET headers or even CRC bits).
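
    A rough answer for one common case - a TCP segment on plain Ethernet, ignoring SONET/ATM framing, preambles, options and retransmissions:

        ETH, IP, TCP = 18, 20, 20          # Ethernet header+FCS, IPv4 header, TCP header (no options)

        def overhead_pct(payload: int) -> float:
            hdrs = ETH + IP + TCP
            return 100 * hdrs / (payload + hdrs)

        print(f"{overhead_pct(1460):.1f}%")   # full-size segment: ~3.8% overhead
        print(f"{overhead_pct(0):.0f}%")      # a bare ACK: 100% overhead
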

  • I run a site that has a bandwidth test, and there are people who run big multi-megabyte tests every hour or less to "see if there are any problems" in their connection. Multiply this times lots of people and lots of bandwidth test sites [testmyspeed.com] and I'm convinced that a lot of the bandwidth on the Internet is wasted in testing connection speed!
  • If this "file-sharing" stuff were legal, it would be easy to do it efficiently. Each new song would go out on USENET into some binaries group, traverse each link no more than once, and reside in a nearby news server. A modest-sized disk drive per ISP could hold MP3 versions of the entire catalog of popular music. No problem.

    So that's the standard with which P2P networks must be compared on an efficiency basis. It's not looking good right now. Current P2P architectures scale badly. This is well known in the networking community, but not widely realized by end users.

    A big problem is that it's hard for a program to tell "how far away", in some sense, another IP address is. You can measure latency and bandwidth, but those are only hints. If many programs are doing this, the overall result is suboptimal. There's been considerable work on efficient self-organizing networks, mainly for military use. That's where to look for better architectures.

    • If you think peer-to-peer networks offer total anonymity, try sharing some pr0n that's illegal in your locale, along with some realistic-appearing stories about killing $HEAD_OF_STATE, for a month or two, 24x7, and get back to us about that anonymity thing.

      The current apps (other than Freenet/GNUnet) all either connect to or request a TCP connection from the machine sharing the material. When the client retrieving what you're sharing connects to or is connected from your machine, your IP address is known and that's one level of indirection from your identity (barring use of an open proxy).

      Although many of the clients, particularly some of the Gnutella ones like Limewire attempt to obfuscate the addresses a little at times, the protocol is open, and $THREE_LETTER_AGENCY or $COPYRIGHT_CARTEL is free to write a client to reap the IP addresses of those sharing certain content (q.v. Ranger).

  • If you take a look at the spec for the Gnutella Protocol [gnutelladev.com], you will understand where all this "extra" traffic is coming from.

    I've been messing about with Gnutella on and off for about 3 months now; I hope to make a functional open source client eventually. It's quite an interesting area because there is so much work to be done on security and efficiency.

    The only problem is that P2P networks are never going to be as efficient as centralised server networks and certainly never as fast. I suppose a cynic (like me!) could blame the entertainment industry for forcing out server based file sharing networks.

    But I believe the death of server-based file sharing is a good thing. The bad side of the server-client model is that it can be (and usually is) controlled by an authority, and its security is often obscurity-based (the obscure bit being hidden on the server). Peer-to-peer networks, however, offer total anonymity as well as giving users access to the whole network.

    Peer to Peer networks are the next step in securing freedom of information on the internet and preventing government control.

    It's when Peer to Peer mobile phone networks are produced that things will really get interesting....

  • by PureFiction ( 10256 ) on Wednesday September 11, 2002 @04:58PM (#4240764)
    I doubt that developers of those free P2P applications have given much thought to efficiency

    Some of us have. Search is where much of the bandwidth in peer networks is wasted (downloads are downloads, but search can eat up a lot of bandwidth for little return).

    There are some efficient, effective peer network search apps [cubicmetercrystal.com] currently in development. Hopefully we can eventually leave gnutella and kazaa in the past and move on to more open, efficient networks...
  • I might be missing the point, but why complain about cost? If the cost of a big private line is a problem, then you should consider VPN.

  • First of all, P2P networks by design will generate far more traffic than necessary - "necessary" being, of course, a single set of central servers that collect data from the entire network and serve out that data ONCE to anyone requesting it. However, Napster, as we all know, died a painful death because there was a single point of failure. Kazaa, Gnutella, and others have no head you can cut off. Even if the company that sponsors Kazaa were to be sued/prosecuted into oblivion, the network would remain. The downside, of course, is an excessive amount of unnecessary traffic.

    The second big problem is the fact that, as far as I can tell, none of the P2P networks take advantage of the tiered nature of the internet, attempting to search local networks first and searching further ONLY when something can't be found closer. Bandwidth is always more scarce (and therefore more expensive) the closer you reach for the backbone. Any effort to keep the traffic within the local network of the ISP costs THEM less, which means they would be far more willing to promote those types of networks, or at the very least not attempt to restrict them.

    The network admins at universities were especially outspoken against Napster at the height of that craze, since that single program was consuming all the upstream bandwidth - even though, with a student population in the tens of thousands, there's probably a 99% chance that anything a student was searching for could be found somewhere on the university network, which typically has much larger pipes than the internet upstream.

    -Restil
  • Use QoS? (Score:4, Interesting)

    by Cato ( 8296 ) on Thursday September 12, 2002 @03:50AM (#4243361)
    Putting all P2P traffic into a 'low priority' queue on all routers, and HTTP traffic and everything else into a 'normal priority' queue, would help this. Actually some sort of bandwidth allocation (WFQ, CBQ, etc) could be used rather than priority queuing. P2P apps would get the whole pipe if no higher priority traffic is around, but just X% if there is other traffic.

    Of course, this is wildly impractical given the complete lack of uptake of QoS in the Internet - but since bandwidth hogs such as Pointcast and P2P drove earlier adoption of single-point QoS boxes such as Packeteer, it is not beyond the bounds of possibility. ISPs could deploy this without cooperation from other ISPs, just as a way of giving better service to non-P2P traffic within their network.

    Of course, some would say that P2P should not be segregated - in which case, perhaps they could buy a premium service that puts P2P into 'normal priority'...
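
    A toy illustration of the strict-priority variant of this (real gear would use WFQ/CBQ over byte counts, and classifying which packets are "P2P" is the hard part left out here):

        from collections import deque

        normal, low = deque(), deque()          # everything else vs. P2P

        def enqueue(pkt, is_p2p: bool) -> None:
            (low if is_p2p else normal).append(pkt)

        def dequeue():
            # P2P only gets the link when no normal-priority traffic is waiting
            if normal:
                return normal.popleft()
            return low.popleft() if low else None
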
