P2P - From Internet Scourge to Savior 131

microbrewer writes "The MIT Technology Review has up a feature discussing the future of p2p networks. Specifically, they look at their role in content distribution, in the age of ubiquitous video services. Soon, the article asserts, the very same p2p-style networks that 'threatened' legitimate business may be the basis for most video-on-demand services." From the article: "So how could additional P2P traffic actually be a good thing for the Internet? Carnegie Mellon's Zhang points out that because peer-to-peer networks exploit both the downlink and uplink capacities of users' Internet connections, they distribute content more efficiently than centralized 'unicast' technologies. Zhang also says it should be possible to label P2P traffic so that service providers can track it and decide how much of it to allow through their networks. He and colleagues from the University of California at Berkeley have founded a startup, Rinera, to develop software that will give service providers such control."
  • by ZahnRosen ( 1040004 ) on Friday December 15, 2006 @11:39AM (#17255832) Homepage
    Powerful technologies can be used for powerful things. Blizzard hired the bittorrent developer to help it distribute patches for World of Warcraft. P2P isn't illegal, using it for stealing is... P2P doesn't steal files, users do.
    • P2P is a great idea, and I predict that clusters of processing power at the residence will become a reality. It's just an evolution of P2P and distributed computing. Editing a DVD? Use dad's new 16-core machine to speed the work while you edit on a power-sipping laptop with WiMAX at the pool.
      • Re: (Score:3, Interesting)

        This is an idea whose implementation I've been waiting on for a long time. I would love to have a closet in my house with a computer 'core'. Something large enough to have enough CPUs and hard drives to serve a family with thin clients. I think in the future this will be a standard installation on new house builds.
        • This is an idea whose implementation I've been waiting on for a long time. I would love to have a closet in my house with a computer 'core'. Something that can be large enough to have enough cpus and hard drives to serve a family with thin clients.
          What's stopping that now? IMHO it's lack of demand. For the home especially, thin clients just don't have enough to put them above a few laptops plus LaCie drive for backups.
    • by bazorg ( 911295 )
      P2P doesn't steal files, users do.... In my experience, people sharing files with P2P don't delete the original after they finish their copy, so let's keep the "stealing" out of this, mmmkay?
      • Stealing in the IP world means making a copy you aren't authorized to make. That's the sense I meant it in, at least. :)
    • Re: (Score:3, Informative)

      by arivanov ( 12034 )
      That is all true, especially once you see what P2P is good at.

      Once you have discounted the illegal uses it becomes bloody obvious that most P2P uses are nothing but half-baked emulation of multicast done by people with poor understanding of networking. Node discovery, node promotion to hypernode, and sending a single request to multiple interested parties are all trivial in a multicast environment. On top of that, in a multicast environment the provider can easily enforce and control QoS, administrative boundaries, s
    • "Powerful technologies can be used for powerful things."

      Every technology developed by man, from animal husbandry to television, has eventually resulted in its use to improve porn. Scientists refer to this as the "Porn Point". Once that is reached then a new technology to even further the use/distribution of porn will be developed until we reach the "Porn Singularity". The "Singularity" is a point in the future when porn progress will improve at an incredible rate unprecedented in human history.
    • Of course, but what you have to remember is that they're trying to shift the cost of delivering content from the www site to the ISP.

      It costs an ISP money for backbone bandwidth, and it costs an ISP money for routers and infrastructure.

      ISPs are a business like any other; they are in it to make money. They do this through a simple numbers game.

      They buy a 10 meg pipe and sell 1 meg to 600 people.
      Not all of the people will be using their whole 1 meg all of the time. Therefore this becomes an affordable
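The numbers game described in that comment is plain oversubscription arithmetic. A quick sketch using its own figures (the active-user fraction is an illustrative assumption, not from the comment):

```python
# Illustrative oversubscription arithmetic (10 Mbit pipe, 600 x 1 Mbit subscribers).
pipe_mbit = 10          # backbone pipe the ISP buys
sold_mbit_each = 1      # rate sold to each subscriber
subscribers = 600

contention_ratio = (subscribers * sold_mbit_each) / pipe_mbit
print(f"contention ratio: {contention_ratio:.0f}:1")   # 60:1

# If only a small fraction of users saturate their link at once,
# the shared pipe may still suffice.
active_fraction = 0.01  # assumption: 1% of users busy simultaneously
demand_mbit = subscribers * sold_mbit_each * active_fraction
print(f"simultaneous demand: {demand_mbit:.0f} of {pipe_mbit} Mbit/s")
```

The model breaks down exactly when P2P clients run flat out around the clock, which is the tension the thread is circling.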
  • ISP Bandwidth (Score:5, Insightful)

    by b0s0z0ku ( 752509 ) on Friday December 15, 2006 @11:42AM (#17255884)
    Zhang also says it should be possible to label P2P traffic so that service providers can track it and decide how much of it to allow through their networks.

    Cap bandwidth or GB of transfer per day. Don't tell me what I have the "right" to use this data capacity for. I know Zhang is only suggesting that it's possible, not necessarily a good idea, but don't give the ISPs any stupid ideas.

    -b.

    • Re:ISP Bandwidth (Score:4, Insightful)

      by rudeboy1 ( 516023 ) on Friday December 15, 2006 @11:50AM (#17256012)
      I agree, this will only make it easier to let ISPs continue this ridiculous crusade of charging more to make more bandwidth available while limiting our ability to use it. This is the sort of thing that the telecom companies on the wrong end of Net Neutrality would jump at as a chance to further their cause. I'm sure this is entirely possible; in theory, the idea is quite simple. I'm sad to see someone going out of their way to essentially further limit what we can do with the internet connection we pay good money for.
          It is my firm belief that if you pay for 3M down, 512K up, you should be able to use that for whatever the hell you want. No caveats, no addendums. That whole "BT and HD are choking the internet" thing is a load of bull.
      • Re: (Score:3, Insightful)

        by b0s0z0ku ( 752509 )
        It is my firm belief that if you pay for 3M down, 512K up, you should be able to use that for whatever the hell you want.

        BTW, I have no problem with capping total daily transfer at something less than (Mbit/s)/(8 bit/byte)*(3600 sec/hr)*(24 hr/day) if that's what the ISP needs to do. Just state that limit explicitly in the contract and don't fuck with me unless I actually go over it.

        -b.
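That cap formula works out as follows; the 32.4 GB/day figure quoted downthread is the 3 Mbit/s case:

```python
# Daily transfer ceiling for a link running flat out:
#   (Mbit/s) / (8 bit/byte) * (3600 s/hr) * (24 hr/day)
def max_gb_per_day(mbit_per_s: float) -> float:
    bytes_per_day = mbit_per_s * 1_000_000 / 8 * 3600 * 24
    return bytes_per_day / 1e9  # decimal gigabytes

print(max_gb_per_day(3))      # 32.4  (the "3M down" example)
print(max_gb_per_day(0.512))  # ~5.5  (the 512K upstream)
```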

        • That's the trouble though, the caps are never explicitly stated... and you'd better believe the p2p-crippling protocol factoid will be hidden in some fine print somewhere, assuming they decide to inform you of it at all. That way they can continue advertising "xMB down and yMB up" while simultaneously and covertly removing or crippling all the applications that actually make any significant use of the bandwidth.
        • Actually - this would be a good application for load balancing. Why bother with data caps at all? What I'd rather see widely implemented by ISP's (and this is what my ISP does) is for the ISP to allow the client to purchase a data plan that is "burstable". If network traffic is light, then the connection runs at full speed. However, if the network loads up, then traffic to you is throttled until it lessens. This is very easy to do - Mikrotik has software that can do this and the routers can be purchased or
        • So, what are YOU going to do with your 32.4 gigabyte/Day bandwidth?
    • "I know Zhang is only suggesting that it's possible, not necessarily a good idea, but don't give the ISPs any stupid ideas."


      Unfortunately:

      He and colleagues from the University of California at Berkeley have founded a startup, Rinera, to develop software that will give service providers such control.
      Seems that Slashdot is not only giving ISPs the stupid ideas but also giving free publicity to a company that implements them :-(
    • Re: (Score:3, Funny)

      by h2g2bob ( 948006 )
      Follows a familiar pattern.
      1. X is brilliant! In the future everyone will X!
      2. Of course, to do X, you need Y.
      3. Oh, did I mention, we're starting a company for producing Y? ...... Profit!

      I don't know how many venture capitalists they'll find on slashdot, but we finally know step 2.
    • Re: (Score:3, Interesting)

      by msobkow ( 48369 )

      Caps are the wrong approach. Dynamic traffic management is the only viable option, with priorities set by the time-critical nature of the data.

      If BitTorrent protocols carried a data type specifier, perhaps a simple MIME type identifier, then the traffic management facilities might be enhanced to consider that information. It would also be reasonable to implement local BitTorrent cache servers so that when you do a transfer, you're effectively getting most of your data from within the ISP.
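A toy sketch of that labeling idea. To be clear, no real BitTorrent field works this way; the label values and the policy table here are entirely hypothetical, and (as the thread notes later) a self-declared label is trivial to lie about:

```python
# Hypothetical mapping from a self-declared, unverified content label
# to a QoS class an ISP's traffic manager might apply.
POLICY = {
    "video/mp4": "high",                        # time-critical streaming
    "application/x-iso9660-image": "bulk",      # big but not urgent
    None: "choked",                             # unlabeled traffic: lowest priority
}

def classify(declared_mime):
    """Return the QoS class for a declared MIME-type label."""
    return POLICY.get(declared_mime, "default")

print(classify("video/mp4"))  # high
print(classify(None))         # choked
```

The hard part, which the code above sidesteps, is verifying that the label matches the payload at all.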

      If the data

      • Re: (Score:3, Interesting)

        by msobkow ( 48369 )

        Thinking about it, the one hole in the approach is that you're relying on content providers/publishers (including individuals) to be honest about the content of crypto containers. But as the key infrastructure provides identification of the signing encryption authority, that can be used to monitor abuses and automatically choke off those who claim they're sending a media stream or application library but are actually distributing illegal or infectious content.

        • Re: (Score:3, Insightful)

          by msobkow ( 48369 )

          The legal issues of personal privacy, copyright duration, consumer rights, etc. are not so clear cut, and have to be set by individual governments. American businesses need to remember they are but one player on the global market, and their law is not universal.

          The *AA are particularly blind to this issue. The US restrictions are not even constitutional in other nations.

      • Haw.

        The BitTorrent protocol has a very easily identifiable header, which can be quickly detected by an ISP's protocol analyzer and throttled as necessary.

        Or, I should say, HAD an easily identifiable header. Several ISPs did just this, but throttled the traffic to practically zero. The end result? Azureus created, and other clients adopted, an encrypted version of the BitTorrent protocol that is nearly impossible for ISPs to recognize.

        Perhaps BitTorrent bandwidth usage can become abusive, but when ISPs bec
        • by msobkow ( 48369 )

          The header does NOT identify content type in the sense I'm talking about, and anonymous/unsigned traffic bypasses the personal responsibility. As long as leeches and pirates use the torrents, the legitimate uses continue to be hampered.

          • You're right. Unfortunately, I can't think of any way to enforce any sort of content identification for peers to follow; I could write something in, so that a torrent hash is based on some sort of content ID, and that peers are forced by spec to transmit that ID, but it would be easy for someone to create torrents with content IDs completely different from the actual contents.
    • I fully agree. ISPs should not have the right to filter our content at all if we don't want them to, especially in cases where there is only one ISP available, like it is for me back home (I'm at college right now).
  • Yeah but (Score:5, Insightful)

    by rsilvergun ( 571051 ) on Friday December 15, 2006 @11:42AM (#17255888)
    how are ISPs going to take to users maxing out their upload bandwidth 24/7 running commercial p2p clients? Somebody's got to pay for the infrastructure. I can't imagine the current networks aren't optimized for web browsing and light uploading in short bursts (i.e. pictures, word docs and the occasional wmv).
    • Re:Yeah but (Score:5, Informative)

      by Capt James McCarthy ( 860294 ) on Friday December 15, 2006 @11:53AM (#17256086) Journal
      It depends upon your ISP. Speakeasy's agreement states that I can use all of my bandwidth 24/7 without any problems. A SysAdmin's ISP.
      • by Y0tsuya ( 659802 )
        Speakeasy used to be that way, but I've been getting calls from them to keep my download usage to under 100GB per month at their SF POP. That works out to only about 300kbps continuously. Speakeasy's going downhill. They don't have the money to upgrade capacity, so they're writing log analyzers to catch people going over 120GB/mo and calling them up to warn about TOS. The TOS has a section on "Moderation of Use". Read it [speakeasy.net].
    • Re:Yeah but (Score:4, Informative)

      by microbrewer ( 774971 ) on Friday December 15, 2006 @11:57AM (#17256158) Homepage
      Intra-network bandwidth is not that expensive for ISPs; it's when they start to exchange data with other networks that it gets expensive.

      The LX Systems technology in Peer Impact that is mentioned in the MIT article uses peer clustering techniques to keep as much data in an ISP's domain as possible, and they also use geo-location techniques so the traffic doesn't travel long distances if it doesn't have to.
      • Re: (Score:3, Insightful)

        by ronanbear ( 924575 )
        Sounds like combining bittorrent and usenet to get the best of both worlds. It's the natural progression.
    • Around here at least, the best a consumer can get for upload bandwidth is 384kbps... without going to a T line from the phone company. If they can't handle supporting those paltry offerings, which their customers paid a non-paltry sum to get... I'm going to have a hard time mustering up much sympathy for them.
    • how are ISP's going to take to users maxing out their upload bandwidth 24/7 running commercial p2p clients?

      Charge more of a premium for higher upload bandwidths, or cap total "free" upstream transfers with additional charges for usage beyond that.

      It's hardly as if ISPs (many of which are also hosting providers) haven't already had to deal with the "some people use too much bandwidth if there are no consequences" problem and solved it in other contexts where the solutions can be directly applied to t

    • Somebody's got to pay for the infrastructure.
      "Somebody?" I pay every month, don't you? Or were you hoping Comcast would stop charging you after a couple years when they've recouped their investment in your share of the infrastructure?

      Bandwidth gets cheaper and cheaper. Unless ISPs are planning on radically dropping their prices, they'd better be planning on continued bandwidth upgrades.

    • If they can't handle you using it, they shouldn't give it to you. I'm pretty sure I max out my bandwidth most of the time with Freenet...
      Of course, that may be why comcast refuses to stop the DoS attacks coming from their own network....
  • by ztransform ( 929641 ) on Friday December 15, 2006 @11:49AM (#17255988)
    It is a good theory that moving distribution to many decentralised locations will improve content distribution. But present-day distribution networks and large-bandwidth sites have already bought and installed the infrastructure to send large volumes of bandwidth to Tier-1 ISP distribution points, and so forth to smaller ISPs etc. This works today.

    I agree that P2P isn't necessarily bad - in fact, if P2P algorithms could favour traffic within the same subnet, or indeed allow an ISP to somehow inform the P2P client which nodes are on the same ISP, then an ISP could actually benefit as traffic fills up the internal pipes and less traffic has to be purchased from other ISPs.

    To expand on this point, perhaps a multicast protocol like DHCP on the local subnet could be implemented; call it the "ISP IP Directory" protocol, or IID. Basically, a P2P client would send a multicast query to the IID address ("is x.x.x.x within your network? Or within your preferred peers?") and the IID server would respond with a yes/no. Then P2P clients could optimally download from preferred addresses!

    A shift in thinking in the design of P2P protocols is required if we really want to optimise bandwidth and content distribution.
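The membership check at the heart of that hypothetical IID protocol is easy to sketch. The protocol name, the prefix list, and the query shape are all invented by the commenter and by this sketch; only the lookup is shown, with the multicast transport omitted:

```python
# Sketch of the hypothetical "ISP IP Directory" (IID) membership check.
# The prefixes below are documentation addresses, used purely for illustration.
import ipaddress

ISP_PREFIXES = [ipaddress.ip_network(p)
                for p in ("203.0.113.0/24", "198.51.100.0/22")]

def iid_query(addr: str) -> bool:
    """Answer 'is addr within your network?' for a querying P2P client."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in ISP_PREFIXES)

# A client would prefer peers for which the answer is yes,
# keeping transfers on the ISP's internal pipes.
print(iid_query("203.0.113.7"))   # True
print(iid_query("192.0.2.1"))     # False
```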
    • Would this perhaps be possible with a system-wide implementation of IPv6? Isn't there room in the packet header for this sort of thing?
    • by crossmr ( 957846 )
      I've always thought universities and colleges should do something like this, or users themselves should set up something internally. All the university has to do is set up some big fat pipes around the campus and give internal users the ability to set up internal ftp sites/trackers. One or two people may go outside to get it, but once it's on there, there is no reason anyone else has to go outside to get it.

  • by multipartmixed ( 163409 ) on Friday December 15, 2006 @11:49AM (#17255998) Homepage
    Please, for the love of God, somebody post a recipe to limit gnutella and bit-torrent traffic through a masquerading linux firewall. My home firewall just dies, even though ip connection tracking seems to have WAY MORE than enough free connections, every time two of my kids fire up a p2p client simultaneously. Bouncing the iptables kernel module DOES bring everything back to life.

    This is with a 2.4 kernel and iptables 2.7.

    So. Back on topic. Internet scourge? Dunno. Intranet Scourge? Yup.
    • If they have their own machines, rate-limit all that traffic. You could also potentially give them lower priority, with your machine having the highest. Aside from that, I'd hard-code some explicit priorities for their normal traffic (HTTP, FTP, games, whatever) and then dump the rest into a different queue that makes it take up much less of your bandwidth.

      That said, it's still somewhat difficult to limit incoming traffic, since you can't always control what the sender does. But most of the methods work
    • Iptables is shite. Get a netbsd router like pfsense. IPF ftw. As someone pointed out in another thread, iptables tries to be a firewall in about 30 lines where as netbsd's packfilter does it properly with thousands of lines of code.

      • by clark0r ( 925569 )
        I'll set you straight. pfsense is built on FREEbsd and PF, not NetBSD and IPF. Also, PF came from OPENbsd, not NETbsd. I'd rather build my own FreeBSD box and retain a lot more functionality. If you're already a linux geek then you're used to the command line enough not to need all that web-gui stuff that new users want / need, plus you get the full potential of the system on top.
    • DD-WRT can help. (Score:3, Informative)

      by Kadin2048 ( 468275 )
      The DD-WRT firmware for WRT-54GL routers will do this. It can de-prioritize various kinds of packets, I suspect based on header inspection. I don't know whether it's smart enough to pick up on the obfuscated Bittorrent packets used by newer versions of Azureus (which was designed to be resistant to this sort of inspection), but it will get some of it.

      I'm the "unofficial sysadmin" for my house, which is shared with several other single guys, by virtue of having the router in my room, and DD-WRT makes QoS fai
  • by PopeRatzo ( 965947 ) on Friday December 15, 2006 @11:54AM (#17256102) Journal
    "Zhang also says it should be possible to label P2P traffic so that service providers can track it and decide how much of it to allow through their networks."

    We have lived in such a rare time. We had access to a communication tool like no other in history. And for a brief moment, it was free - totally free. Unencumbered by the dictates of rich and powerful, it was without parallel in history. Anybody who connected to this great web of systems had just as much chance to make his message heard as anyone else. My email of undying love to my wife-to-be received the same access and dispatch as the advertising messages of multi-national corporations. Anybody with a good idea could put it out there for the world to see and if it had merit, it would gain in popularity. Google sprang from this freedom. So did Slashdot. And goatse. And it was the unusual confluence of public money and free enterprise, along with some very smart and generous folks, putting energy into something new and unprecedented that made this happen. Take one bit out of the equation - say the taxpayer-financed Department of Defense, or a Linus Torvalds, or a Netscape or the many other pioneers who contributed to this vast project - and it doesn't happen, or it happens in a way that denies the kid in the basement in Des Moines the opportunity to play.

    But people who have acquired wealth and power don't like it when any old slob can do what they do. I mean, what good is being rich and powerful if it doesn't let you move to the head of the line? Now, a race is on to crush the experiment in liberty that has been the Internet. I guess it was too radical, too much of a danger to tyranny and concentrated wealth, to last very long.

    We should all feel privileged for having seen the rise of this rarest of creatures - the fully open agora of information and ideas - and we should all feel sad that it couldn't be defended from the greedy and power hungry.
    • Well said - it's shocking how quickly the internet environment has been overstuffed with jag-offs and advertising. I'm afraid we're already on the verge of a largely locked-down and homogenized web experience.
    • Great post. But don't give up, there'll always be a darknet. Those clever sorts you mention will develop even more clever methods.
    • Luckily, encrypted traffic is not yet outlawed - we can fall back defensively to darknets. It may not be what the internet once was, but it can have the same spirit.
    • by Ponga ( 934481 )
      Augh man, you gettin' me misty-eyed over here! But listen, as another reply has already stated, the battle is ongoing. We've not lost yet.
      Here is the gist: It's the corporate/governmental entities -versus- the people. But the cat is out of the bag, my friend. I mean, the PEOPLE have been able to collaborate and communicate with each other on an unprecedented scale ever before witnessed in human history. We the people, are a collective you see. OUR collective efforts against theirs. Not to sound too corny, b
    • by ccp ( 127147 )

      We have lived in such a rare time. We had access to a communication tool like no other in history. And for a brief moment, it was free - totally free. Unencumbered by the dictates of rich and powerful, it was without parallel in history. Anybody who connected to this great web of systems had just as much chance to make his message heard as anyone else.

      ....

      But people who have acquired wealth and power don't like it when any old slob can do what they do. I mean, what good is being rich and powerful if it does

  • P2P is a decent way to go for popular stuff, but it's not so great when you're looking for obscure stuff.
    • Re: (Score:2, Informative)

      On the contrary.. it's been the most reliable and comprehensive way for me to access out of print and obscure international titles.
    • If it's not on soulseek [slsknet.org], I'd be surprised. P2P isn't only bittorrent.

    • Maybe speed-wise, but not availability-wise. However, nobody said it's easy finding it. eDonkey is better in this respect as opposed to bittorrent.
    • by clark0r ( 925569 )
      If you're after the latest and greatest, try www.easynews.com Even on a 20mbps connection they've managed to *saturate* my downstream. Now that's bandwidth.
    • by rHBa ( 976986 )
      BitTorrent is fastest and is best for new popular stuff, especially large files (movies, apps etc). It also has the advantage that there is a web forum built into the tracker site so you can find out about quality, fake files etc before you DL.

      Soulseek is best for music, especially obscure stuff.

      eMule kind of fills the gap in the middle for me, it's usually a quicker DL than soulseek but not as wide a variety of music, it is good for other obscure stuff like eBooks, old TV episodes and sports events (stuff
  • by plasmacutter ( 901737 ) on Friday December 15, 2006 @12:02PM (#17256258)
    Seriously... everyone I knew, from close family to the furthest acquaintance, didn't think broadband was necessary or worth it until p2p traffic caught on.

    yeah... all those people are using that 4-10 megabits a second so cnn.com will load faster.. riiight.
  • by br00tus ( 528477 ) on Friday December 15, 2006 @12:02PM (#17256272)
    As far as broadcasts over the Internet done in a technically sensible way, old-timers may remember the MBONE [wikipedia.org] initiative. This would have distributed broadcast video via IP Multicast. All of the "Tier One" ISPs I knew of, as well as many Tier Two ISPs, had the capability to do this, the equipment in place - all they had to do was enable IP multicasting on their Cisco routers. But management did not want to do this, because they thought it would fill their bandwidth up with video, which they didn't want. At the time, traffic shaping and billing technology was not really up to speed; people were still used to how NSFnet did things to some extent. So instead of multicasting, people did p2p, which is less efficient. After Napster began coming under legal assault, Gnutella was released with technology to specifically evade attempts to block it.


    Aside from technical issues, I think decentralization, peer-to-peer and so forth is the way to go. I don't want to be the little receiver of content from the Giant Corporation with DRM, monopoly price increases and whatnot. To me it makes sense (like Mbone did) and gives me more freedom. It allows me to publish content, which Youtube and whatnot can not censor if they wish. Which is precisely why it won't happen - we don't live in some federated decentralized anarchist council structure, we live in an imperialist, capitalist society where capital is centralized in a few hands, along with the media, political power for the most part, and so on. Which is why peer to peer decentralization has been under attack since day one.

    • by Red Flayer ( 890720 ) on Friday December 15, 2006 @12:24PM (#17256670) Journal
      But management did not want to do this, because they thought it would fill their bandwidth up with video, which they didn't want.
      Yup. Heaven forbid that their customers actually use all the bandwidth they pay for -- if that happened, how could they oversell their capacity?
      At the time, traffic shaping and billing technology was not really up to speed, people were still used to how NSFnet did things to some extent. So instead of multicasting, people did p2p, which is less efficient.
      The summary (and TFA) mentions that p2p can actually be more efficient than multicast, since it utilizes both the up- and downstream capacities of clients.

      Which is precisely why it won't happen - we don't live in some federated decentralized anarchist council structure, we live in an imperialist, capitalist society where capital is centralized in a few hands, along with the media, political power for the most part, and so on.
      You're right, of course. But that's tangential, it simply provides the mechanism by which monied interests can make sure they get their way.

      The issue I see is that the content distributors and the bandwidth providers can work together to get a lock on high profits for both. We're all familiar with the DMCA. But with the right tools (like what the author has created a company to do) the bandwidth providers can lock out the last competing method of distribution.

      The best solution I see is to designate bandwidth providers as common carriers, so that it will be illegal for them to discriminate between packets. Then again, that's government interference, so I'm sure a lot of the libertarians and anarchists here will disagree...
      • Yup. Heaven forbid that their customers actually use all the bandwidth they pay for -- if that happened ... the cost of my broadband connection would quadruple.

        Unfortunately, the real world is still as it is, so ISPs have to be allowed to traffic shape and block packets - otherwise they wouldn't be able to block spam-spewing clients on port 25.

        Most ISPs don't want to ban P2P traffic, they just want to spread it out so the network is fully utilised instead of saturated at 6pm and unused at 6am. There are a lot of pro
        • If you absolutely must have 24x7x100% utilisation then pay them for their unmetered 'business' accounts

          I do. But the history of false claims by a lot of ISPs is disturbing. Before I switched to the business account, I never got even 25% of the "up to" speeds I signed up for.

          Under common carrier status, the ISPs could still shape traffic, but it would have to be independent of content or source. One easy way to differentiate would be tiered pricing (like Fedex uses - pay more for quicker delivery). I th

          • No ISP will offer better facilities for P2P... yet. Once it becomes 'legal' then we might see different plans from them. However, until then people just need to shop around (as with everything) for the better ISPs - eg the one I'm with has capped bandwidth, but only at peak time (4pm to midnight) and then it's unmetered. So, my P2P, backups and big downloads take place overnight, and I'm happy with that.

            ISPs do traffic shaping for P2P, and I think they should be allowed to do so to keep prices down, however a lo
      • What would be really nice is something like Bittorrent with a multicast system that was synergistic. Every time anyone uploaded a chunk, everyone in the swarm would get it.

      • The summary (and TFA) mentions that p2p can actually be more efficient than multicast, since it utilizes both the up- and downstream capacities of clients.

        1) That makes absolutely no sense. Multicast distribution is as efficient as you can possibly get.

        2) From TFA: "they distribute content more efficiently than centralized "unicast" technologies." He said UNICAST NOT multicast. Black and white difference.
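The unicast-vs-P2P distinction the parent insists on is easy to see with back-of-envelope numbers (stream rate and audience size below are illustrative assumptions):

```python
# Origin bandwidth needed to serve one live stream to an audience.
stream_mbit = 2.0    # per-viewer stream rate (assumed)
viewers = 1000       # audience size (assumed)

# Centralized unicast: the server sends every viewer its own copy.
unicast_server_mbit = stream_mbit * viewers

# Idealized P2P: each peer re-uploads what it receives, so the origin
# only injects roughly one full stream (ignoring overhead and churn).
p2p_origin_mbit = stream_mbit

print(unicast_server_mbit, p2p_origin_mbit)  # 2000.0 2.0
```

Network-layer multicast would also get the origin down to one stream, but without consuming subscribers' upstream capacity, which is why the two comparisons shouldn't be conflated.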

      • The best solution I see is to designate bandwidth providers as common carriers, so that it will be illegal for them to discriminate between packets. Then again, that's government interference, so I'm sure a lot of the libertarians and anarchists here will disagree...

        Anarchists may disagree ("all government is bad"), but principled libertarians won't ("government acting as the Invisible Hand by promoting equality: equal access to the marketplace of ideas").

        It's certainly an unusual role for government

    • Re: (Score:3, Funny)

      by mypalmike ( 454265 )
      we don't live in some federated decentralized anarchist council structure, we live in an imperialist, capitalist society where capital is centralized in a few hands, along with the media, political power for the most part, and so on.

      I told you, we're an anarcho-syndicalist commune. We take it in turns to be a sort of executive officer for the week, but all the decisions of that officer have to be ratified at a special bi-weekly meeting; by a simple majority in the case of purely internal affairs, but by a tw
    • As far as broadcasts over the Internet done in a technically sensible way, old-timers may remember the MBONE initiative. This would have distributed broadcast video via IP Multicast.
      Who's talking about broadcasts over the Internet? That's only useful for live events. The Internet is all about on-demand. If you want to broadcast the Super Bowl, use a satellite.
  • by ben there... ( 946946 ) on Friday December 15, 2006 @12:09PM (#17256376) Journal
    "Soon, the article asserts, the very same p2p-style networks that 'threatened' legitimate business may be the basis for most video-on-demand services."

    This has been said many times in the past few years, but it's still not feasible. One big reason YouTube is popular is because it is "Instant-On." No waiting for it to download. Generally no waiting for "buffering."

    BitTorrent and the like are incompatible with that feature. BitTorrent does not download videos (or any other file) in order, and it's actually somewhat harmful to the torrent to distribute the same chunks to everybody. BitTorrent works so well because it gives everybody on the torrent unique chunks to pass along. Not good for streaming.
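
    To illustrate the conflict (toy model, not actual BitTorrent code):

```python
# Toy contrast between BitTorrent-style rarest-first piece selection and
# the in-order selection a streaming player would need. Hypothetical model.
from collections import Counter

def rarest_first(my_pieces, peer_piece_lists):
    """Pick the piece I lack that fewest peers hold (good for swarm health)."""
    counts = Counter(p for pieces in peer_piece_lists for p in pieces)
    candidates = [p for p in counts if p not in my_pieces]
    return min(candidates, key=lambda p: counts[p], default=None)

def sequential(my_pieces, total_pieces):
    """Pick the lowest-numbered missing piece (what a video player wants)."""
    return next((p for p in range(total_pieces) if p not in my_pieces), None)

# Four peers; piece 3 is held by only one of them.
peers = [{0, 1, 2, 3}, {0, 1, 2}, {0, 1}, {0}]
print(rarest_first(set(), peers))   # -> 3 (rarest, but useless for playback)
print(sequential(set(), 4))         # -> 0 (what the player actually needs next)
```

    The two strategies pull in opposite directions: the swarm wants rare pieces spread around, the viewer wants piece 0 first.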

    Secondly, ISPs drastically limit upload. This means that to get even close to realtime streaming downloads, the seeders (the content provider in this case) need to have massive bandwidth available. Otherwise, it will take too long for the torrent to really get going with other seeders, and the first ~50 people will have to wait to watch. So you're back to having powerful centralized servers again.

    Plus, what benefit do I have for letting them use my upload? With most broadband connections, saturating the upload makes browsing at the same time slow with high latency. It might make sense for community sharing, where the content provider can't afford the bandwidth, and therefore I would want to contribute, but it doesn't make sense for companies to demand that of me.
    • Peer Impact uses a P2P system similar to BitTorrent, but their tracker, "Traffic Cop," manages all the peers and data in the swarm. With Peer Impact you can watch the video in less than 2 minutes. Peer Impact also gives its network members a system credit for contributing bandwidth and computer resources to the network as an incentive.

      And then there's the Venice Project (yes, I'm in the beta), which offers instant-on, long-form, ad-supported video using P2P streaming.
    • by addsalt ( 985163 )

      This has been said many times in the past few years, but it's still not feasible. One big reason YouTube is popular is because it is "Instant-On."
      Not only that, but most people are inherently selfish and don't look at the big picture. Why should I upload a p2p file at any decent rate and sacrifice that bandwidth? Relying on files provided by the goodwill of others is inherently flawed.
    • One big reason YouTube is popular is because it is "Instant-On." No waiting for it to download. Generally no waiting for "buffering."

      I do end up having to wait for buffering -- but this is not specific to YouTube, and I still wish YouTube didn't exist. Existing formats, like mpegs, are well supported pretty much everywhere Flash is, and many places Flash is not.

      But at any rate, I think you're missing the point. Remember:

      the very same p2p-style networks that 'threatened' legitimate business may be the bas

    • by harks ( 534599 )
      Plus, what benefit do I have for letting them use my upload?
      A video distribution company might give some amount of credit back to customers who upload media to other customers.
  • Asymmetry (Score:4, Interesting)

    by mollymoo ( 202721 ) on Friday December 15, 2006 @12:21PM (#17256604) Journal

    P2P has one major problem - most broadband connections are asymmetric. Very, very asymmetric - ratios of 10:1 download:upload are common. Thus, in order for P2P to be able to saturate downstream bandwidth everyone would need to keep their P2P apps open for 10 times as long as it takes to download what they want. I don't think you're ever going to get a useful proportion of people to do this without a definite incentive. The cost of the bandwidth per movie is pretty small - I'd guess a few tens of cents. So economically that's the value of the incentive you can offer. Are people really going to leave their PC on or an application open for hours and hours when they're not using it for the few tens of cents worth of incentive it would be economic to provide? I just don't see your average consumer doing this. It's cheaper to buy bandwidth from a major ISP than it is to 'buy' a hundred million tiny chunks of bandwidth from ten million customers. P2P works if people know they're helping the 'community' or getting something for free. Linux ISOs? P2P. Warez? P2P. Official Disney movies? Not so much.
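
    The 10:1 point is easy to make concrete (illustrative numbers, assuming a 1:1 seeding ratio is required):

```python
# With asymmetric broadband, a peer that downloads a file in t minutes needs
# roughly (down/up ratio) * t minutes of seeding to give back one full copy.

def seed_time_needed(file_mb, down_mbps, up_mbps):
    """Return (download minutes, minutes to upload one full copy back)."""
    dl = file_mb * 8 / down_mbps / 60
    ul = file_mb * 8 / up_mbps / 60
    return dl, ul

# A 700 MB movie on a 5 Mbit down / 0.5 Mbit up line (10:1 asymmetry).
dl, ul = seed_time_needed(700, 5.0, 0.5)
print(f"download: {dl:.0f} min, seeding for a 1:1 ratio: {ul:.0f} min")
```

    Roughly 19 minutes to download versus over three hours of seeding afterwards - that's the app-left-open-all-day problem in a nutshell.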

    If you want to reduce bandwidth usage then reduce the number of packets you have to send. Multicast is the right answer. MBone and IPV6 have been around for a long time now. They just aren't very profitable for ISPs, so the push will have to come from the content providers.

    • I pay $40 CDN per month for broadband, with a 60 GB monthly cap, download and upload combined.

      The connection is asymmetric (5 Mbit down, 1 Mbit up), a ratio of 5:1, and video is around 500 MB per hour (bearable quality).

      I can download 120 hours of video per month. If I have to maintain a 5:1 ratio, I could download only 20 hours (spending 10 GB on download and 50 GB on upload). That would be 10 movies per month.

      That gives a price of $4 per movie!
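
      The parent's arithmetic checks out (assuming 2-hour movies, as the 20 hours / 10 movies figures imply):

```python
# Checking the parent's numbers: $40/mo, 60 GB combined cap,
# ~1 GB per 2-hour movie at 500 MB/hour, forced 5:1 upload overhead.
monthly_fee = 40.0
cap_gb = 60
gb_per_movie = 1.0           # 2 h * 500 MB/h
upload_multiplier = 5        # 5 GB uploaded per 1 GB downloaded

gb_cost_per_movie = gb_per_movie * (1 + upload_multiplier)   # 6 GB per movie
movies_per_month = cap_gb // gb_cost_per_movie               # 10 movies
print(monthly_fee / movies_per_month)                        # -> 4.0
```

      So the bandwidth-cap cost alone is $4 per movie, before the content provider charges a cent.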

      Blockbuster rents a movie at DVD quality -- one new release
  • Somebody should check out Metalink. The site seems to be down now, but it combines FTP/HTTP and P2P easily. If P2P is blocked where you are, you still get the files by other means. aria2 [sourceforge.net] is a good command-line client, but there are others with GUIs for Mac and Windows.
  • Realm of the Peers (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Friday December 15, 2006 @12:31PM (#17256808) Homepage Journal
    The Internet is inherently a P2P network. Client/server architectures, though popular now, are a recent overlay on the TCP/IP architecture. Multicast, the Internet version of the broadcast popular in analog comms for decades, is still enough at odds with Internet architecture that it's barely used.

    The Internet is a network of peer networks of peer hosts. P2P[2P[2P..]] is how everything works already. It's refreshing to see the decentralized, inherently "democratic" and primarily egalitarian Internet model starting to force centralized "old guard" media organizations to admit defeat. If they get on the bandwagon, they can be Ps in the P2P network. If not, they can keep their old network, and we'll barely notice they're gone.
  • by Ponga ( 934481 )
    MULTICAST.

    Why is this technology being, by-and-large, ignored??
    • Re: (Score:3, Interesting)

      by Raideen ( 975130 )

      Why is this technology being, by-and-large, ignored??

      Because we're still on a mostly IPv4 Internet and IPv4 has a very limited number of multicast addresses so content providers would have to fight for them and availability would depend on their schedules. Also, providers seem to be worried that it would saturate their networks. (Less bandwidth usage at the provider means that there will be more multicast services, meaning more clients, meaning more traffic across the upper tier provider's networks.) I'm waiting for what China has to show the rest of the wor

  • ...to a technical or topology problem - in fact it puts a helluva lot of traffic over the expensive "last mile" and is considerably less efficient than a central distribution. In that sense I really find his comparisons off. P2P is a solution for cost redistribution, less administration, "micropayment" using bandwidth and trading using bandwidth like a virtual currency (e.g. bittorrent).

    Assuming that P2P somehow has to level out at 1:1, on Easynews you essentially buy 20GB upload for $10, or about 64kbit su
  • by Animats ( 122034 ) on Friday December 15, 2006 @01:04PM (#17257428) Homepage

    The trouble with "P2P" in its present form is that the topology is designed to evade copyright, not minimize bandwidth. Peering nodes aren't necessarily near each other. You can, and do, get situations where the same content traverses the same backbone paths multiple times. There's no end user penalty for having faraway peers, but it generates unnecessary load.

    Reminds me of, many years ago, watching two coal trains passing each other in opposite directions. You don't see that kind of stupidity any more. Somewhere a trader will do a swap, rather than physically shipping the commodity around.

    Netnews does this right, assuming you want a broadcast system. Netnews was designed for slow links and bandwidth minimization. As I point out occasionally, Netnews could easily handle the entire audio output of the RIAA, which is only a few gigabytes per day, using far less bandwidth than the present "P2P" systems.

    What will work is ISP-level caching. AOL does this, although in a somewhat annoying fashion. In a different way, so does Akamai. We'll probably see more of that.

    • Hee -- I don't know what dates you more: the reference to coal trains, or the expression "NetNews".

      Does anybody actually call it that anymore?

      And yes, once upon a time, nn 6.4 was my newsreader of choice... But then along came tin..
    • They don't talk about it, but many ISPs are already transparently caching P2P traffic (and throttling, and blocking files reported as copyright violations) using CacheLogic products [cachelogic.com].
  • "So how could additional P2P traffic actually be a good thing for the Internet?"

    How is getting any creative work on demand (give or take a day), delivered right into the comfort of your house, NOT what the Internet was designed to do? The Internet was not designed to be a money-making vacuum to suck out people's wallets. There was a really good film floating around on the gootubes from the late '80s, with Douglas Adams and Tom Baker, called Hyperland, which describes what THEY thought the Internet would be. Any g

  • P2P may be better than "centralized unicast" but that's because centralized unicast is dumb. Add caching at the ISP level, and unicast becomes way, way more efficient than P2P ever will.

    ISPs: install caches. Squid is free, and an array of huge disks is cheap. They don't have to be reliable disks -- you can use consumer-grade shit!

    File vendors (e.g. iTMS): make sure your server works correctly with caches.

    Problem solved, and P2P's so-called "efficiency" is totally crushed and embarrassed by the Real Th
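
    An ISP-side cache along these lines could start from a squid.conf fragment like this (directives are from Squid's standard set; the sizes are illustrative guesses, not tuning advice):

```conf
# Illustrative squid.conf fragment for an ISP-side HTTP cache.
http_port 3128
cache_mem 1024 MB
# ~200 GB on-disk cache; cheap consumer-grade disks are fine for a cache.
cache_dir aufs /var/spool/squid 200000 16 256
# Let large media files into the cache, not just small web objects.
maximum_object_size 4096 MB
```

    The key tweak for video is maximum_object_size: out of the box a cache keeps small objects and passes the big files straight through.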

  • That they start with symmetric connections. The asymmetric ones of today are one big protection scam.
  • I'm spectacularly unexcited about video over the internet. I've downloaded video, sure (insert one-handed downloading joke here), but I don't find it any more or less exciting than a lot of the other stuff available. Hell, the Gutenberg Project is more intriguing than yet another way to serve up advertising.

    HOWEVER:

    If video drives mainstream acceptance of P2P (and by mainstream, I mean corporate), then it's possible that ISPs won't be able to hide behind the "all you send is clicks and text" rationale that
  • I look forward to a day where service providers are simple common carriers-- you simply pay for the bandwidth, not for how you will apply it. Service providers monitoring traffic for certain "types" is a security and privacy violation, IMHO. All traffic should be encrypted and/or otherwise cloaked end-to-end so providers CAN'T tell what it is, period. How I use my bandwidth is MY business and MY business ONLY.
  • I like how the 'good' in any new technology or method, particularly those in the telecommunications arena and on the internet, doesn't become apparent until someone finds a way for large companies to profit from it. (That's not to say that the technology isn't already available and the benefits aren't already apparent...) Marketing campaign aside: these guys can go fuck themselves. The interweb does just fine without big commerce.
  • ...if I have to pay to download content, I am SURE AS HECK going to expect compensation for allowing others to use my costly bandwidth to download the same content. If I am leeching the content I feel no issue with leaving the BT node up for hours and helping everyone else, in a commercial situation that feeling will evaporate INSTANTLY.
  • As the Internet evolves, more and more of everyday data storage for a variety of needs, from family albums to medical histories to corporate databases, will be done in (highly encrypted and massively distributed) data clouds, including P2P-hosted data clouds. And more and more computing will be done in on-the-fly compute farms in grids, some of it no doubt hosted on legions of small P2P edge-of-net computing resources.

    With such a scenario, how is it a good thing to allow ISPs to pe
    • by flyneye ( 84093 )
      I agree, the internet is a space for freedom, not Big Brother, especially when I'm paying him.
      I'll be damned if an ISP is entitled to snoop into what I'm doing if it isn't damaging his systems. I'm paying for bandwidth monthly; what I use it for is MY business.
