P2P Remains Dominant Protocol

An anonymous reader writes "Last week, a press release was issued by Ellacoya that suggested something quite startling: HTTP (Hyper Text Transfer Protocol, aka Web traffic) had for the first time in four years overtaken P2P traffic. However, a new article from Slyck disputes this, and contends that P2P remains the bandwidth heavyweight."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Protocol? (Score:5, Insightful)

    by dreamchaser ( 49529 ) on Friday June 22, 2007 @06:48AM (#19606563) Homepage Journal
    Here I thought P2P was a class of applications, you know, ones that communicate peer to peer.

    WTF. We can't even blame editors for this crap anymore, because they gave us the Firehose.
    • by akzeac ( 862521 )
      It's the title of the article. Blame Mr. Thomas Mennecke.
    • So true (Score:5, Insightful)

      by vivaoporto ( 1064484 ) on Friday June 22, 2007 @06:55AM (#19606607)
      A lot of P2P applications even use HTTP in one phase or another of their execution, as is the case with BitTorrent clients communicating with trackers, which is done over HTTP requests.

      What they might be implying is that the so-called "legitimate" traffic (casual WWW surfing) is outpacing filesharing. Ironically, this growth is due to the popularization of tools that let users share files via the WWW, tools like YouTube and Flickr (and pornotube, *cough*), carrying content they would otherwise share via P2P applications like Kazaa, Napster or IMesh.

      Bottom line is: people don't care about the tools, but about the use they make of them. Nothing to see here, move along.
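The tracker exchange mentioned above really is ordinary HTTP. A minimal sketch of building a BitTorrent announce request, assuming a hypothetical tracker URL (parameter names follow the BitTorrent tracker convention):

```python
from urllib.parse import urlencode

def build_announce_url(tracker_url, info_hash, peer_id, port, left):
    """Build a BitTorrent tracker announce request: just an HTTP GET URL."""
    params = {
        "info_hash": info_hash,   # 20-byte SHA-1 of the torrent's info dict
        "peer_id": peer_id,       # 20-byte client identifier
        "port": port,             # port this peer listens on
        "uploaded": 0,
        "downloaded": 0,
        "left": left,             # bytes still needed
        "event": "started",
    }
    return tracker_url + "?" + urlencode(params)

url = build_announce_url("http://tracker.example.com/announce",
                         b"\x12" * 20, b"-XX0001-123456789012", 6881, 1048576)
print(url)  # a plain web request carries the "P2P" handshake
```

The tracker answers with a bencoded peer list in the HTTP response body; only the peer-to-peer transfers that follow leave the web's protocol behind.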
      • by jZnat ( 793348 ) *

        share via P2P applications like Kazaa, Napster or IMesh.
        It's 2007 now, so the applications you were looking for are Limewire (Gnutella), Emule (ed2k), and BitTorrent. IRC/XDCC still exists of course.
      • Pornotube? It's all about spankwire my friend.
    • by Nephrite ( 82592 ) on Friday June 22, 2007 @07:38AM (#19606823) Journal
      Don't you know that P2P stands for "Protocol to Pirate"? Shame on you!
    • and this is the reason many people got this article wrong. It wasn't talking about P2P as a protocol, but as a group of protocols versus HTTP. YouTube, which uses HTTP for the video transfer, alone eats 10% of the web traffic. All the P2P traffic mentioned belongs to P2P programs and protocols, not just a single protocol. And this should be clear to you, but maybe it's cool to always say something contrarian, like the kids ;)
  • you're joking, right (Score:3, Interesting)

    by Celt ( 125318 ) on Friday June 22, 2007 @06:50AM (#19606575) Journal
    as much as everyone loves HTTP traffic, it's not going to overtake the likes of BitTorrent traffic anytime soon (unless of course ISPs start blocking all P2P-related traffic)
    • I didn't read the article (I'm lazy and at work) but I really have to wonder what they consider P2P traffic... What protocols/clients are they looking at? Is it just BitTorrent, or are they looking at things like Kazaa and LimeWire as well? What about private BitTorrent clients like the one Blizzard uses to update World of Warcraft? I guess I'm not surprised that the various P2P systems are transferring more data than HTTP does... HTTP is generally just text and small pictures, maybe the occasional str
    • HTTP taking over P2P? Pff...I knew that was false because I haven't heard of any great new pr0n websites that could overtake my torrent (pun intended) of P2P pr0n.
  • That'll be AJAX (Score:4, Interesting)

    by Anonymous Coward on Friday June 22, 2007 @06:51AM (#19606577)
    HTTP (Hyper Text Transfer Protocol, aka Web traffic) had for the first time in four years overtaken P2P traffic

    That'll be because AJAX has led to a massive increase in HTTP traffic. How much traffic do the Web 2.0 "applications" from Google alone generate, do you think?

    Many people have been saying that Web 2.0 is an utterly wasteful way to do things. There's the proof. Now can we stop building Web 2.0 "applications", please?
    • How does loading part of a page consume more bandwidth than loading the entire page again with different content? I have to read my mail somehow, you know, it's not like I see the login page and leave satisfied.
      • Re: (Score:2, Funny)

        by dintech ( 998802 )
        I have to read my mail somehow, you know, it's not like I see the login page and leave satisfied.

        With all the spam I have to deal with, I think I'd leave more satisfied just with the login page.
      • For a lot of AJAX applications, the HTTP overhead of each request is a significant fraction of the total data transferred. On top of this, AJAX apps typically use XML for data transport, which is not exactly lightweight. This gives a lot of total bloat when compared to a protocol that's actually designed for the purpose.

        • by curunir ( 98273 ) *
          As long as the app doesn't do stupid stuff like make requests to the server on each key press, a well-designed AJAX application will result in significantly less traffic. XML might not be a lightweight representation of data, but neither is (X)HTML. If you're talking about simply encoding data, XML will be far more efficient than XHTML, even when marked up semantically so that it can be styled with CSS. Regardless, both formats compress down extremely well with gzip compression. And JSON (which I believe G
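The encoding-size argument above can be eyeballed with a quick check. A toy comparison (the record and the markup are invented for illustration; neither matches any real application's wire format):

```python
import gzip
import json

# One toy record expressed as JSON and as hand-written XML-style markup.
record = {"from": "alice@example.com", "subject": "lunch?", "unread": True}

as_json = json.dumps(record).encode()
as_xml = (b"<message><from>alice@example.com</from>"
          b"<subject>lunch?</subject><unread>true</unread></message>")

print("json bytes:", len(as_json))
print("xml bytes: ", len(as_xml))
# sizes after gzip compression, for comparison
print("gzipped:   ", len(gzip.compress(as_json)), len(gzip.compress(as_xml)))
```

For a record like this the JSON form is the smaller of the two raw encodings, since the closing tags double every field name.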
          • Re: (Score:3, Insightful)

            by TheRaven64 ( 641858 )

            Your comment makes me believe that you've never had to think about these issues when designing a real-world application. You've no doubt done zero real-world tests to see what the difference in traffic comes out to (our logs show AJAX saving us considerable bandwidth...we've basically halved our bandwidth per user since AJAXifying our site)

            We recently investigated moving from an old BBS-style application that users used as a talker (accessed via SSH) to an AJAX web-app. For a single day, the traffic was around 300MB; more than the total SSH (not counting SCP) traffic of the machine for an entire month, with fewer users on the AJAX version. It was also far more than the XMPP server that runs on the same machine and has an order of magnitude more users manages to get through.

            Of course AJAX is an improvement over reloading the entire page. I

            • by curunir ( 98273 ) *
              The comment you were originally replying to was specifically talking about web applications. Only a fool would believe that a traditional client-server application wouldn't outperform a web application in nearly every metric except for two, the ease of client installation and the development time. The comment you were replying to was assuming that one of those two metrics was required.

              GMail is a web-based email system. That is its purpose. Sure, it allows traditional mail protocols, but for most users the w
            • by Sancho ( 17056 )
              I didn't see anyone say anything about IMAP until you did. The original post that started off this thread just complained about all those "Web 2.0 apps". Webmail was around long before Web 2.0. AJAX Webmail ought to be less bandwidth-intensive than traditional Webmail because you don't load the entire page each time.

              For your SSH vs AJAX situation, it should have been obvious that the AJAX would be heavier on the bandwidth. Assuming that the application reads lines up until a newline, every line that the
      • Re:That'll be AJAX (Score:4, Interesting)

        by Intron ( 870560 ) on Friday June 22, 2007 @08:59AM (#19607623)
        Loading the whole page gets twenty "item unchanged, already in cache" and one new piece. So pressing a button may create a load on your browser to redraw the whole page, but not that much bandwidth.

        Web 2.0 applications seem to like maintaining a connection and continuously downloading some piece of meaningless crap. One travel site I was on recently was refreshing so much that my PC was practically unusable. The page wasn't actually changing, just being continuously "updated".
    • Re:That'll be AJAX (Score:5, Interesting)

      by Phil John ( 576633 ) <phil@@@webstarsltd...com> on Friday June 22, 2007 @07:01AM (#19606633)

      Sure,

      when the public decides that they'd like to go back to waiting for a page-refresh to be able to do anything. When I first got a Gmail account I re-activated a long-dormant HoTMaiL account to compare it with and the difference in speed was like day and night.

      Web 2.0 may be quite wasteful in the amount of traffic being sent, but in these days of streaming video sites like YouTube we're talking about a drop in the ocean.

      IMHO the benefits far outweigh the drawbacks. To all the naysayers that opine about what to do when you don't have any net access, we're also moving into an era where you can, with a few caveats, be always on the net wherever you are. I live in the UK and with HSDPA, 3G and GPRS coverage I have a link to the internet about 98-99% of the time as I move about throughout the day. Accessing Web 2.0 apps via Opera Mobile on my Vario II is more than bearable (esp. with the new "grab and scroll" feature in 8.65). With the new crop of mobile AJAX apps being developed for the iPhone things could start getting very interesting.

      • Re:That'll be AJAX (Score:5, Interesting)

        by arivanov ( 12034 ) on Friday June 22, 2007 @07:36AM (#19606803) Homepage
        You are painting a very entertaining rosy picture as far as the UK is concerned.

        So let's see one day when I actually need a mobile access and the reality of mobile data in the UK not through pink mobile operator marketing glasses. So let's see shall we?

        1. Get up, sync the laptop, leave the house - so far nothing mobile, do not need it.
        2. Get on the train to Cambridge to London train. Try to connect to the net. Available GPRS timeslots at the Camrbidge railway station - around 2 (Vodafone and O2 are roughly the same here). Available capacity before 9am - 0bytes per second. The cretinous f***heads at the operator end QoS up the Blackberry traffic so if you have a train full of business people the capacity for the other data users is 0. Slightly better after 9, but still abissmall. 3G is a tad bit better, but this is temporary due to the low penetration of the 3G BB.
        3. Train Cambridge to London - no 3G coverage half of the time, GPRS coverage around 1 timeslot when available. 6+ tunnels most of them long enough to cause a VPN timeout and cause a reconnect (3G is slightly better due to soft handover here, but it is not available). Overall - just about usefull to reply a couple of emails. Browse? You gotta be kidding. In the morning - totally impossible due to BB eating all capacity. After that - about as bad as browsing on a 14400 modem.
        4. London - tube. No coverage. Whatsoever. The sole reason that our best beloved Mayor is a greedy c***. London tube refuses to put DAS or picocells because they want to give it exlcusively to a single operator and shave the profits. There is a ruling by the competition comission that this is not acceptable so the tube simply does not put any access in. Result - no access. 3G or no 3G.
        5. Arrive wherver - no need for 3G or GPRS as there is network and/or wireless.

        So overall - out of the 4h a day when I needed GPRS/3G coverage I got on the average around 10Kbit per second and it was unavailable half of the time. That is not service you can rely on. That is sh*te.
        • Re: (Score:1, Funny)

          by Anonymous Coward
          It'd be wasted on you anyway - you can't spell.
        • Re: (Score:1, Offtopic)

          "4. London - tube. No coverage. Whatsoever. The sole reason that our best beloved Mayor is a greedy c***. London tube refuses to put DAS or picocells because they want to give it exlcusively to a single operator and shave the profits. There is a ruling by the competition comission that this is not acceptable so the tube simply does not put any access in. Result - no access. 3G or no 3G."

          Yeah, well, there is that and a cell phone is a great way to set off a bomb remotely. That's what happened in Spain and co
          • In Spain it wouldn't have helped, as IIRC, the train wasn't underground at the time, so getting reception there was quite possible.

            As far as terrorism goes, this argument is bullshit. There are many train lines in Spain that aren't underground, and covering them would cost insane amounts of money. The other alternative would be shielding the train, but the doors have to open eventually.
    • Many people have been saying that Web 2.0 is an utterly wasteful way to do things. There's the proof. Now can we stop building Web 2.0 "applications", please?

      That's ridiculous. Compare Google Maps to the old Mapquest (the current Mapquest uses AJAX). When you move in the map, you load only part of the page. The reason it's faster is that it doesn't reload the whole thing every time you move -- hence it uses less bandwidth (on average) than the old way of doing it. Sure, AJAX allows for preloading of cont

    • by myspys ( 204685 ) *
      OR it could be this new thing called "video", you know.. youtube
    • I love the "+5 interesting, expresses an unsubstantiated opinion with little or no hard evidence but hell, I agree with him" mod there.
    • Re: (Score:3, Insightful)

      by Ephemeriis ( 315124 )
      AJAX actually allows you to, if you want, transfer less data. Gmail, for example, does not need to transmit an entire new page every time I open up a new email message...it just displays the contents of that message. Sure, caches and proxies and all that good stuff can reduce the actual amount of traffic generated by a full-page refresh...but it's still a full-page refresh, you're still requesting a redraw of every single picture and every bit of text - rather than just asking to redraw a small portion.

      Th
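The partial-refresh arithmetic above can be sketched with toy numbers (all figures below are invented for illustration, not measurements):

```python
# Toy model: refetching the whole page on every action versus an
# AJAX-style request that fetches only the changed fragment.
PAGE = 120_000       # full HTML page, bytes (made-up figure)
FRAGMENT = 2_000     # one message body fetched on demand, bytes
OVERHEAD = 700       # rough request/response header cost per round trip

def full_reload(actions):
    return actions * (PAGE + OVERHEAD)

def partial_update(actions):
    # the first load still fetches the page shell; after that, fragments only
    return (PAGE + OVERHEAD) + actions * (FRAGMENT + OVERHEAD)

for n in (1, 10, 50):
    print(n, full_reload(n), partial_update(n))
# for a single action the shell download dominates; the savings show up
# as the number of actions per page view grows
```

Under these assumptions ten actions cost about 1.2MB with full reloads but under 150KB with fragment fetches, which is the effect the comment describes.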
  • P2P (while actually a mix of several types of protocols) by default is 1000 - 1000000 times as bulky as most HTTP transfers are (unless you're downloading files off an HTTP file server). Most of the time, though, HTTP is just text and pics. I think the article is just reaffirming what /. users already knew.
  • Nitpicking (Score:5, Informative)

    by TorKlingberg ( 599697 ) on Friday June 22, 2007 @06:52AM (#19606587)
    P2P is not one protocol, but many. Some P2P systems, such as Gnutella, even use HTTP for file transfers.
  • That means NOTHING (Score:1, Informative)

    by Anonymous Coward
    P2P (which is a class of applications, not a specific protocol) was created to deal with huge files. Of course it will generate a lot of traffic. Duh!
  • HTTP is a protocol; P2P is a classification of applications, some of which use the HTTP protocol as a transport layer.

    Comparing the two is as pointless as comparing Real Player with TCP/IP. P2P is used to shift big binary files around, HTTP to shift small TEXT files.

    Firehose has actually made the quality of stories go down!
    • Having used Ellacoya's products I can offer first hand knowledge of what they are talking about. Their hardware uses DPI (Deep Packet Inspection) to look for application signatures not TCP/IP port assignments.

      Packet shaping (or whatever the current buzzword is today) is accomplished by DPI looking at the application signature and rate-limiting on that criteria, it does not care what TCP/IP or UDP port the application is using.

      Their product has very fine-grained reporting functionality and reports on groups
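The DPI approach described above can be sketched roughly. A toy classifier (the signature strings are illustrative conventions from the protocols named; this is not Ellacoya's actual rule set):

```python
# Toy deep-packet-inspection: classify a flow by application-layer
# signatures in the first bytes of payload, ignoring the TCP port entirely.
SIGNATURES = {
    "http":       [b"GET ", b"POST ", b"HTTP/1."],
    "bittorrent": [b"\x13BitTorrent protocol"],  # standard handshake prefix
    "gnutella":   [b"GNUTELLA CONNECT"],
}

def classify(payload):
    head = payload[:64]
    for app, sigs in SIGNATURES.items():
        if any(sig in head for sig in sigs):
            return app
    return "unknown"

# A BitTorrent handshake is spotted even if it travels over port 80:
print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))  # bittorrent
print(classify(b"GET /index.html HTTP/1.1\r\n"))           # http
print(classify(b"\x00\x01\x02"))                           # unknown
```

Rate-limiting then keys off the classification rather than the port, which is why simply moving a P2P client to port 80 doesn't fool this class of hardware.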
    • by Ziwcam ( 766621 )
      I thought Apple was a computer company, not a mobile phone company [orange.co.uk]...
    • by Goaway ( 82658 )
      Meanwhile, everybody who is not an anal-retentive nerd understood the meaning perfectly.
  • encrypted and anonymous distributed p2p protocol will dominate forever and anti-pirates will be assimilated
    • You might want to start fixing that pesky abysmal latency and its friend, horrendously slow transfer rate; then we can talk.
      • You might want to start fixing that pesky abysmal latency and its friend, horrendously slow transfer rate; then we can talk.

        Fibre optics and hard disk makers will take care of the speed and volume. Think, you probably now have 6 times more HD space and 3X connection speed in comparison to what you had 5 years ago. In 2012 your system will again have 6X more space and 3X the network speed, in comparison to current.

        The amount of anonymizing hops used in Tor / Freenet does not need to be increased. So, time is clearly on a pirate's side.

        • Re: (Score:2, Insightful)

          by graphicsguy ( 710710 )
          That seems totally incorrect to me. If anonymizing makes things k times slower with current disk/network speeds, it will still make things k times slower when disks/networks are faster.
          • yeah... i think mr. barwasp is off his rocker. "tor and freenet unite..." just doesn't really make sense. unless he means he wants to design a new protocol where you onion route to a friend-to-friend network...which makes little sense anyway. i'll throw in my two cents:

            issues with tor:

            there are only a few hundred servers donating time, many of which are desktops, not real servers, and they have to accommodate a lot of load.

            when your tor daemon sets up a route (selects three tor servers to hop through), it se
              • Yes, I expect a new protocol that puts together the best parts of the past 10 years of P2P technologies. Especially the best parts from Kazaa, Freenet, Tor and BitTorrent.

              + Kazaa had an efficient algorithm for getting the file-chunks from various locations, it also had a decentralized packet quality voting system and an integrated search engine with specialized super-nodes.
              - Kazaa had no anonymity or encryption. Sharing files with other Kazaa users was voluntary and traceable, thus the fear among users wa
              • ah, yes, the thing i am planning is similar to what you describe, in that it involves many ideas that anonymizing networks like tor or freenet only implement a few of. it's called banana (there was a project on sourceforge i and a friend started with the same name, but it sat dormant for almost a year, and i've started from scratch in my free time about a month ago, and _lots_ to do, and i'm basically abandoning the piece of crap i left on sf.)

                however, i disagree on some points you had; personally, i think
                • Very good points,
                  Actually, I got a feeling that we are both describing the same system. Yes, from slightly different angles, but it is still the same system.

                  Why I think logging in and karma-points are needed?
                  1) All P2P- systems have quickly found enemies, who look to sabotage the system and ruin the user experience of the system. For example vandals have been
                  • Feeding the P2P systems with bogus-files, for example music files with random noise in the middle of a song
                  • Rating their bogus files to make t
                  • i disagree; i believe we are describing very different things.

                    you describe something like kazaa, where people can search for files by their human-readable name on some search mechanism built into the system. this system would have problems, as you describe, with vandals "Feeding the P2P systems with bogus-files, for example music files with random noise in the middle of a song". so they'll inaccurately give something a name it shouldn't have. this is a problem with all things where we have to translate from
  • by Peyna ( 14792 )
    Last week, a press release was issued by Ellacoya that suggested something quite startling -- HTTP (Hyper Text Transfer Protocol, aka Web traffic) had for the first time in four years overtaken P2P traffic.

    Okay, so the very young Slashdotter that just popped out of his mother might not know what HTTP actually stands for, but I can't believe there are any Slashdotters who don't know what HTTP is.
    • Okay, so the very young Slashdotter that just popped out of his mother might not know what HTTP actually stands for, but I can't believe there are any Slashdotters who don't know what HTTP is.


      Uhhhh...doesn't that have to do with this INTARWEB thingie? I think I've seen things like 'http:\\' before but I'm not sure where....

    • by PhxBlue ( 562201 )
      I'm not complaining. For once, the editors are following an established writing style [apstylebook.com].
    • by Torodung ( 31985 )

      Okay, so the very young Slashdotter that just popped out of his mother might not know what HTTP actually stands for, but I can't believe there are any Slashdotters who don't know what HTTP is.

      IMHO, the author of this article doesn't. I really doubt all that traffic is the result of HTTP 1.1 commands. HTTP, for instance, doesn't really support streaming video. The best you can do is grab 15 different animated gifs on a pipelined request.

      I believe he's referring here to "port 80/TCP" traffic, which is a good deal different than "HTTP traffic." Port 80 is the most abused "well known port" in the business. It is assigned to be used as HTTP, but on the average client system it's used for just about

  • by Colin Smith ( 2679 ) on Friday June 22, 2007 @07:32AM (#19606787)
    It'd be http based. Not for efficiency or any technical reason, but because it's the best camouflage.

     
    • Re: (Score:3, Informative)

      by diamondsw ( 685967 )
      It'd be http based. Not for efficiency or any technical reason, but because it's the best camouflage.

      Welcome to layer 5-7 packet inspection on modern firewalls. You're screwed.
      • Re: (Score:1, Interesting)

        by Anonymous Coward
        Welcome to HTTPS. Your firewall's screwed.
    • Duh, many P2P apps transfer data using HTTP.
  • conflict of interest (Score:3, Informative)

    by Anonymous Coward on Friday June 22, 2007 @07:35AM (#19606799)
    Ellacoya are well-known for selling routers optimised (and I use that word with the kind of looseness only Goatse man can convey) for bandwidth shaping, in particular for throttling P2P. PlusNet [plus.net] were one of the first ISPs in the UK to be hated for widespread deployment [atlasventure.com] of their kit.

    Remember, a press release is almost always marketing; and this form of marketing is about getting people to purchase solutions for problems that don't quite exist as described. (Microsoft are good at this; Google are first rate.)
  • 2 reasons (Score:4, Insightful)

    by Opportunist ( 166417 ) on Friday June 22, 2007 @07:57AM (#19606945)
    Youtube (and similar services) and trojans.

    Both rely heavily on HTTP for data transfer. But then again, how do you measure that? By port? By header? Who keeps me from running an HTTP server on port 21? Who dictates that I must not wrap a packet in an HTTP header so the corporate firewall doesn't get irate?

    Generally, I doubt that you can reliably measure it. Especially with P2P services soon implementing a wrapper to fool anti net-neutrality laws and traffic shaping the various ISPs either will implement soon or employ already.
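The port-21 point above is easy to demonstrate: HTTP is defined by what goes over the wire, not by the port number. A minimal sketch with Python's standard library (the port is picked by the OS here only to keep the example self-contained):

```python
import threading
from http.server import HTTPServer, SimpleHTTPRequestHandler
from urllib.request import urlopen

# Bind an ordinary HTTP server to whatever free port the OS hands out;
# nothing about the protocol requires port 80.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client that knows the port speaks plain HTTP to it; a monitor that
# classifies traffic by port number alone would mislabel this flow.
resp = urlopen("http://127.0.0.1:%d/" % port)
print(resp.status)  # 200
server.shutdown()
```

The same trick works in reverse, which is why port-based traffic counts say little about what protocol is actually being spoken.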

    • Re: (Score:3, Interesting)

      by garcia ( 6573 )
      Youtube (and similar services) and trojans.

      Botnets [slashdot.org] mostly. They are continually hammering my site with 100s of hits in a few minutes and because they are from across the globe (mostly residential cable connections) I can't ban them fast enough.

      I keep them mostly out with the Apache rules linked to above but they are still hammering me.
  • So... If I set up a web server, and tell my friends to download my new web page, is that P2P or HTTP? By the way, as long as HTTP isn't multicast, wouldn't it classify as a peer-to-peer protocol?
    • by fnj ( 64210 )

      So... If i set up a web server, and tell my friends to download my new web page, is that p2p or http?

      If it's a WEB server, it's http. There is no "or".

      By the way, as long as HTTP isn't multicast, wouldn't it classify as a peer-to-peer protocol?

      No, it's a client-server architecture as opposed to peer-to-peer. The fundamental point here, as approximately one million commenters have already pointed out, is that http is a PROTOCOL; peer-to-peer is an ARCHITECTURE or CLASS OF APPLICATIONS.

  • I think we need to start differentiating between all the different kinds of apps that run over port 80 (not because it's the right choice of port but because they're badly written), and asking whether a streaming movie (or application update) can any longer be properly described as "HTTP."

    I am going to say "no." Many of these are apps in their own right that aren't really using HTTP for anything other than a handshake/init and should be doing their business over their own ports, especially all the streaming
    • by netik ( 141046 )
      I got news for you, it's done over a simple GET request with buffering, and yes, it's simple HTTP.

      When you scrub the video and move the pointer around, it just reissues the GET request with an offset, which is perfectly valid HTTP (and one way that HTTP supports resuming of downloads.)
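The offset-GET that netik describes is the Range header from HTTP/1.1. A toy sketch of the server side (single-range form only; real servers also validate the range and emit a Content-Range header):

```python
def byte_range(data, range_header):
    """Serve 'Range: bytes=start-end' from an in-memory body (toy version)."""
    spec = range_header.split("=", 1)[1]          # "bytes=4-" -> "4-"
    start_s, _, end_s = spec.partition("-")
    start = int(start_s)
    end = int(end_s) if end_s else len(data) - 1  # open-ended range
    return 206, data[start:end + 1]               # 206 Partial Content

# Scrubbing to an offset is just a new GET with a different start byte:
print(byte_range(b"0123456789", "bytes=4-"))   # (206, b'456789')
print(byte_range(b"0123456789", "bytes=2-4"))  # (206, b'234')
```

This is the same mechanism download managers use to resume interrupted transfers, which is why it's "perfectly valid HTTP" rather than a streaming protocol bolted on.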
      • by Torodung ( 31985 )

        I got news for you, it's done over a simple GET request with buffering...
        Well, I'll be darned. I'll be salting my hat now for later consumption.

        --
        Toro
  • As most slashdotters know, there is often a mistaken impression that the "World Wide Web" and HTML equal the internet. Web browsers processing HTML are just one application that rides on the internet, and it wasn't even the first application that did so.

    The importance of HTML was that it was the "killer app" that drove internet connections to people's homes. Naturally, the initial implementation of connectivity was tuned to HTML; particularly the standard implementation where bandwidth into the home far e
  • While it is true that the one research company may have had flaws in their study and/or other motivating factors, it can't be overlooked that the main source used in the article is a company that needs P2P to be the main part of Internet traffic for their business to work. So what they say would also be highly suspect. The last source referenced, which was more in line with the original work, is probably the most accurate, as they don't seem to have any bias.
