
Bittorrent Implements Cache Discovery Protocol

An anonymous reader writes "CacheLogic and BitTorrent have introduced an open-source Cache Discovery Protocol (CDP) that allows ISPs to cache and seed BitTorrent traffic. Currently, BitTorrent traffic suffers from bandwidth throttling by ISPs that claim it is cluttering their pipes. This motivated the developers of the most popular BitTorrent clients to implement protocol encryption to protect users from being slowed down by their ISPs. However, Bram Cohen, the founder of BitTorrent, doubted that encryption was the solution and found (together with CacheLogic) a more ISP-friendly alternative."
Comments Filter:
  • Off the cuff thought (Score:5, Interesting)

    by Arimus ( 198136 ) on Monday August 07, 2006 @06:22PM (#15862114)
    Just read this and wonder what the legal position for ISPs will be with regard to caching illegal P2P files (warez, music files, etc.)?

    With the files being on my PC and served from my PC, I'm the responsible party... if the ISP is then caching that data to make it more available (speed/latency/load reduction, etc.), then the ISP could be deemed to be a party to an illegal act...
  • by MrZaius ( 321037 ) on Monday August 07, 2006 @06:33PM (#15862188) Homepage
    Given that a lot of torrents are copyrighted content, are ISPs really going to want to do this? The moment they start caching these files on their servers, they become a huge target for lawsuits.

    On top of that, what torrents are ever so common as to warrant the use of a cache? There are certainly legitimate users of BitTorrent, if you can limit the cache to legitimate content. But what torrents would ever be accessed frequently enough by individual users on any given network that this would make sense? My employer is just ~300-400 customers strong, and I don't see how this could be useful to any ISP, given that even the largest would probably only benefit if the caches were replicated and stored close to the users.
  • by ajs ( 35943 ) <> on Monday August 07, 2006 @06:48PM (#15862288) Homepage Journal
    First off, many torrents are copyrighted, but many more are not, and both are a problem for ISPs, so yes, they'll WANT to. The question is CAN they? I think they can, but I'd have to look over the details more.

    If the system simply facilitates the protocol blindly, then I don't see how they could be any more to blame for copyright violations than AOL's Web proxies. Sure, gigabytes of copyright violations move through AOL's proxies every day (and get cached to speed up downloads), but they literally don't have the processing power to try to make a distinction. The same goes for the ISPs and BitTorrent (or Gnutella, or any of the other high-bandwidth swarming download technologies).
  • by Andy Dodd ( 701 ) <atd7.cornell@edu> on Monday August 07, 2006 @06:50PM (#15862304) Homepage
    It looks like (from TFA), there will be restrictions in place that only allow caching of non-copyrighted, legal content.

    It goes a LONG way towards legitimizing BitTorrent in case anyone tries to sue Bram, but contains no real-world benefits.

    If ISPs want to reduce bandwidth overuse by seeders... Just IMPLEMENT MULTICAST ALREADY!

    Yes, I realize multicast has historically presented major problems in scalability at the backbone router level, but with modern processing power and memory economics, it shouldn't be that difficult to implement now, and in the end presents far more benefits (massive reduction in bandwidth usage) than its disadvantages (backbone routers need some pretty hefty amounts of memory to track all of the multicast groups.)

    Even "limited" multicast solutions like xcast (explicit multicast - basically instead of sending to a "multicast group" an IP datagram is given multiple destinations) would result in MASSIVE reductions in bandwidth usage by P2P applications like BitTorrent.

    Due to the nature of BitTorrent and how it is used in general, caching is just an extremely hackish and limited way of implementing a shitty form of multicast... If the backbone supported multicast, there wouldn't be any need for caching of torrents.
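    The bandwidth savings the parent attributes to multicast can be made concrete with a back-of-the-envelope comparison. This is an illustrative sketch; the file size, peer count, and xcast group size are made-up numbers, not figures from TFA:

```python
# Rough comparison of bytes leaving one seeder's uplink for a single file
# delivered to N peers, under three delivery models.

FILE_MB = 60      # hypothetical file size
N_PEERS = 1000    # hypothetical number of downloaders

# Plain unicast from one seeder: the full file crosses the uplink once per peer.
unicast_mb = FILE_MB * N_PEERS

# Idealized IP multicast: the seeder sends each packet once and routers
# replicate downstream, so the uplink carries roughly one copy of the file.
multicast_mb = FILE_MB

# xcast-style explicit multicast: each datagram lists several destinations,
# so the uplink carries one copy per group of (here) 10 peers.
XCAST_GROUP = 10
xcast_mb = FILE_MB * (N_PEERS // XCAST_GROUP)

print(unicast_mb, multicast_mb, xcast_mb)  # 60000 60 6000
```

    Even the "limited" xcast variant cuts the seeder's uplink traffic by the group factor, which is the reduction the comment is pointing at.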
  • by Anonymous Coward on Monday August 07, 2006 @06:54PM (#15862315)
    I think the ISPs are right and that this particular program is doing a great job of helping the Net become clogged up. Not to extreme amounts, of course, but every little bit adds to the process.

    I've used torrents during two periods. The first time I only let it run 4-6 hours to grab some media files and noticed that my bandwidth consumption was actually threefold what I'd normally use. To download a 60MB file (or so), I was at some point at 20MB downloaded and approx. 60MB uploaded. Since speed goes both ways (what goes up can't come down, at the same time anyway) and I had a maximum amount of data traffic to consider, I decided to stop using the program after this session. Picture my surprise when I kept noticing 'torrent connects' in my firewall logs for the next 4 weeks! I really consider that a major overhead, especially when you consider that not every firewall blocks a port by giving out a "sorry, no access" response; many, like mine, simply ignore the whole attempt altogether. That's bound not to work well with regard to timing.

    And now that we're on the issue of firewalls: I think the flexibility to change the ports used is something such software simply needs. If you can change FTP ports, why wouldn't you be able to change the ports a torrent uses? However, it would have been a lot nicer if you could specify what port(s) you used so that others would stick to them. I don't like opening up a zillion ports on my firewall, so when I opened up the very basic range in my second session attempt (approx. 1/2 - 1 year later), I noticed that a growing number of peers weren't using the ports I specified. In fact, even though I clearly indicated that I wanted the "default" range, I kept getting torrent hits on ports never propagated (or so I assume) by my torrent client/server.

    So my simple conclusion is that while the whole concept (spreading the load over multiple sources) is a smart one, reality shows a completely different picture, in which a massive amount of overhead is being created. Either people look at the global picture (no need anymore to keep sending 60MB (for example) from your site over and over again; that load is spread over many sites) or at a very narrow picture (no problem if there is a server with a slow upload somewhere; there are many others being used in parallel), but it seems no one pays attention to the generated overhead.

    Yes, it's nice that you can grab a 60MB file from many sites in parallel. But is it really as fast as people claim? Using several feeds means more data-processing overhead on your box. Then there's the bandwidth itself to keep in mind; you only have so much to spare... But when I see that a 60MB download actually generates 180MB worth of traffic, I can't agree with people saying how much better a torrent is and that the spreading of data is actually a good thing. Sure, perhaps in the global picture... But by any other measure (security, bandwidth, etc.) I think it's a poor concept.
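    The poster's figures can be sanity-checked with a small calculation using only the numbers quoted in the comment above:

```python
file_size_mb = 60

# Mid-session snapshot reported above: ~20 MB downloaded, ~60 MB uploaded.
uploaded_mb, downloaded_mb = 60, 20
share_ratio = uploaded_mb / downloaded_mb  # seeding ratio at that point

# The overall claim: total traffic roughly threefold the file size.
total_traffic_mb = 3 * file_size_mb  # MB moved to obtain a 60 MB file

print(share_ratio, total_traffic_mb)  # 3.0 180
```

    Whether a 3:1 traffic-to-payload ratio counts as "overhead" or as the user's contribution back to the swarm is exactly the point of disagreement here.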
  • by Sark666 ( 756464 ) on Monday August 07, 2006 @07:00PM (#15862349)
    When BitTorrent 4.2 was released, there was already mention of this, and I thought, "ya right, the ISPs will help with torrents," but supposedly ISP caching (even of copyrighted material) is allowed under the DMCA (17 USC 512) []

    " If a file shows up on the network frequently, the cache stores that file so that its seeded in the network rather than by peers. ISPs appreciate this because their access networks are terribly congested with P2P traffic. Caches are legal and covered explicitly in the DMCA"
  • by sdpinpdx ( 66786 ) * <sdp&scottp,us> on Monday August 07, 2006 @07:20PM (#15862459) Journal
    No ISP cooperation necessary. This has been tested experimentally a couple of times.

    See []
  • Another Cache? (Score:3, Interesting)

    by OverlordQ ( 264228 ) on Monday August 07, 2006 @07:21PM (#15862466) Journal
    Azureus has supported JPC ( []) for quite a while now.
  • by doshell ( 757915 ) on Monday August 07, 2006 @08:00PM (#15862728)

    Wouldn't IP Multicast [] be a more appropriate solution to this problem (and, for that matter, to the whole lot of streaming content that flows on the 'net nowadays)? AFAIK it has been standardised for some time now, both for IPv4 and for IPv6. Why, then, is multicast virtually unused outside private networks?

  • by Pxtl ( 151020 ) on Monday August 07, 2006 @09:26PM (#15863126) Homepage
    I think you've hit the nail on the head with "ya right".

    It doesn't matter if ISP-side torrent caching would turn every computer into a supercomputer - ISPs won't do it, for a variety of reasons:

    1) Legal liability, obviously. Sure, it's probably fine, but not-caching torrents is definitely fine, which is better than probably fine. This is called the "chilling effect".

    2) Easier just to not do it. The torrent-cache is one more system to maintain that they'd probably just rather do without. For any software problem there are two solutions, the right solution and the easy solution - and an ISP will always choose the easy solution unless it offends 99% of their customers.

    3) The only people who want this feature are the kind of users the ISP would rather be rid of. You know, the users that actually use their service instead of just checking email once in a while.

  • JPC (Score:3, Interesting)

    by eddy ( 18759 ) on Monday August 07, 2006 @09:42PM (#15863195) Homepage Journal

    Azureus already has LAN Peer Finder and JPC (Joltid PeerCache []). Not sure how this is different from JPC on a practical level:

    Joltid PeerCache (JPC) is a device employed by ISPs to reduce the huge external network bandwidth required to support today's P2P networks. It basically acts as a caching web proxy (like Squid), only for P2P file data.

    Looks like by going its own way, the official client will once again create segmentation, just like with DHT.

  • by aprilsound ( 412645 ) on Monday August 07, 2006 @11:35PM (#15863622) Homepage
    You choose one at random. The chance of a collision is low, and if it is detected, you randomly choose again. Not a big deal.

    In response to the GP, it's not even a matter of implementing multicast. Almost all of the networking hardware out there has it in place, it's just turned off.

    The reason? The original implementation is hard for ISPs to charge for. But there is hope: at SIGCOMM 2006, there was a proposal that would be more ISP-friendly, with a minimal performance hit. It's called Free Riding Multicast [] and essentially piggybacks on BGP's unicast routes.
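    The pick-at-random, retry-on-collision pattern described above can be sketched like this, illustrated here with a local TCP port rather than a multicast group (a hypothetical example; the port range and retry count are arbitrary choices):

```python
import random
import socket

def bind_random_port(low=49152, high=65535, attempts=20):
    """Pick a random port in the ephemeral range; retry if it's already taken."""
    for _ in range(attempts):
        port = random.randint(low, high)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            sock.bind(("127.0.0.1", port))  # a collision raises OSError
            return sock, port
        except OSError:
            sock.close()  # someone else has it; pick again
    raise RuntimeError("no free port found after %d attempts" % attempts)

sock, port = bind_random_port()
print(port)
sock.close()
```

    With ~16k candidate ports and only a handful occupied, the chance of needing even one retry is tiny, which is the GP's point about collisions being a non-issue.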

  • Re:i wanna go fast (Score:5, Interesting)

    by arivanov ( 12034 ) on Tuesday August 08, 2006 @04:38AM (#15864501) Homepage
    More likely fast in terms of "lawyers homing fast".

    Anyway, the problem is elsewhere. It all boils down to telco thinking combined with incompetence. ISPs have degenerated to the point of being either telco resellers or telco wannabes, and they are no longer capable of solving a trivial problem through network design and product definition. So they try a silver bullet (CacheLogic) or a big stick ("fair share", bandwidth throttling and "kick the hogs" policies) instead.

    Once upon a time, around 10+ years ago, it was commonplace to charge people for traffic and to have multiple charge categories, with local traffic free or nearly free. That was in the days before the big telcos became interested in the Internet. When they did become interested, the first thing they pushed for was increased port density and bandwidth on access concentrators and routers. In order to deliver this, the vendors killed the bandwidth accounting features. Best example: Cisco NetFlow stopped working in 1999-2000 with the release of CEF (I can give plenty of other examples, actually).

    As a result of the normal equipment upgrade cycle, 10 years later there are very few devices out there capable (and tested in real deployments) of bandwidth accounting at the edge. Even if there were, as a result of the "people upgrade cycle" there are even fewer people in ISP business development and engineering capable of defining, developing and rolling out a product based on bandwidth accounting.

    If charging was based on bandwidth accounting and local traffic was free (or seriously discounted), the "bandwidth hogs" problem would go away right away. So would most of the "Joe Idiot" problems related to people not cleaning their zombie machines (when these start costing them money, they will be cleaned right away). People would again start running local network services for community purposes. For example, I used to run centralised network backup for some friends, but I stopped because it eats the monthly "fair use" quota allocated to me by my ISP in less than a week. And so on.

    The only people who will actually suffer from the reintroduction of bandwidth-based, differentiated charging will be c***sucking freeloaders of the Niklas Zennström "it is my right to steal your bandwidth for my service" variety. And CacheLogic (the economic incentive to buy their device will go away). Frankly, good bye and good riddance.
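    The charging model the parent describes (metered external traffic, free or discounted on-net traffic) could look roughly like this. The address ranges, rate, and flow records are all invented for illustration; real ISP billing from NetFlow-style records would be far more involved:

```python
import ipaddress

# Hypothetical tariff: traffic staying inside the ISP's own address space is
# free; traffic leaving the network is metered per gigabyte.
LOCAL_NETS = [ipaddress.ip_network("10.0.0.0/8")]  # stand-in for the ISP's space
RATE_PER_GB = 0.50                                  # illustrative price

def is_local(addr):
    """True if addr falls inside one of the ISP's own networks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in LOCAL_NETS)

def monthly_bill(flows):
    """flows: iterable of (dst_addr, gigabytes) records, e.g. from flow export."""
    external_gb = sum(gb for dst, gb in flows if not is_local(dst))
    return external_gb * RATE_PER_GB

flows = [("10.0.3.7", 40.0),     # neighbour's backup server: on-net, free
         ("203.0.113.9", 12.0)]  # off-net download: metered
print(monthly_bill(flows))  # 6.0
```

    Under a scheme like this the 40 GB of community backup costs nothing, while the off-net "bandwidth hog" traffic is the only thing that shows up on the bill, which is exactly the incentive structure the comment argues for.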
