Bittorrent Implements Cache Discovery Protocol 170
An anonymous reader writes "CacheLogic and BitTorrent have introduced an open-source Cache Discovery Protocol (CDP) that allows ISPs to cache and seed BitTorrent traffic. Currently, BitTorrent traffic suffers from bandwidth throttling by ISPs that claim BitTorrent traffic is cluttering their pipes. This motivated the developers of the most popular BitTorrent clients to implement protocol encryption to protect BitTorrent users from being slowed down by their ISPs. However, Bram Cohen, the founder of BitTorrent, doubted that encryption was the solution, and found (together with CacheLogic) a more ISP-friendly alternative."
Off the cuff thought (Score:5, Interesting)
With the files being on my PC and served from my PC, I'm the responsible party... if the ISP is then caching that data to make it more available (speed/latency/load reduction, etc.), then the ISP could be deemed to be a party to an illegal act...
Re:Possible legal problems (Score:3, Interesting)
On top of that, what torrents are ever so common as to warrant the use of a cache? There are certainly legitimate uses of BitTorrent, if you can limit the cache to legitimate content. But what torrents would ever be accessed so frequently by individual users on any given network that this would make sense? My employer is just ~300-400 customers strong, and I don't see how this could be useful to any ISP, given that even the largest would probably only benefit if the caches were replicated and stored close to the users.
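For what it's worth, the break-even here is just arithmetic. A toy sketch of the upstream bandwidth a cache would save (all numbers hypothetical, not from any real deployment):

```python
# Back-of-the-envelope: upstream bandwidth saved when an ISP-side cache
# serves a popular torrent locally instead of every customer fetching it
# across the upstream link. All numbers are made up for illustration.

def upstream_saved_gb(file_size_gb, local_downloaders):
    """Without a cache, every downloader pulls the file over the ISP's
    upstream link; with a cache, it crosses that link once to warm the
    cache and is seeded locally afterwards."""
    without_cache = file_size_gb * local_downloaders
    with_cache = file_size_gb  # one external fetch to fill the cache
    return without_cache - with_cache

# A hypothetical 700 MB torrent fetched by 5 of ~350 customers:
saved = upstream_saved_gb(0.7, 5)
```

With only a handful of overlapping downloaders the savings are tiny, which is exactly the parent's point: the cache only pays off when many customers on the same access network want the same torrent.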
Re:Possible legal problems (Score:4, Interesting)
If the system simply facilitates the protocol blindly, then I don't see how they could be any more to blame for copyright violations than AOL's Web proxies. Sure, gigabytes of copyright violations move through AOL's proxies every day (and get cached to speed up downloads), but they literally don't have the processing power to try to make a distinction. Same goes for the ISPs and BitTorrent (or Gnutella, or any of the other high-bandwidth swarming download technologies).
Re:Off the cuff thought (Score:5, Interesting)
It goes a LONG way towards legitimizing BitTorrent in case anyone tries to sue Bram, but offers no real-world benefits.
If ISPs want to reduce bandwidth overuse by seeders... Just IMPLEMENT MULTICAST ALREADY!
Yes, I realize multicast has historically presented major problems in scalability at the backbone router level, but with modern processing power and memory economics, it shouldn't be that difficult to implement now, and in the end presents far more benefits (massive reduction in bandwidth usage) than its disadvantages (backbone routers need some pretty hefty amounts of memory to track all of the multicast groups.)
Even "limited" multicast solutions like xcast (explicit multicast - basically instead of sending to a "multicast group" an IP datagram is given multiple destinations) would result in MASSIVE reductions in bandwidth usage by P2P applications like BitTorrent.
Due to the nature of BitTorrent and how it is used in general, caching is just an extremely hackish and limited way of implementing a shitty form of multicast... If the backbone supported multicast, there wouldn't be any need for caching of torrents.
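The claim above is easy to put in numbers with a toy model (idealized, ignoring real topology and multicast tree construction):

```python
# Toy comparison of uplink traffic: a seeder sending one piece to N peers.
# With unicast, the piece crosses the seeder's uplink N times; with
# idealized multicast it leaves once and routers replicate it downstream.
# Purely illustrative -- real multicast savings depend on the topology.

def unicast_uplink_bytes(piece_bytes, n_peers):
    # One full copy per peer leaves the seeder.
    return piece_bytes * n_peers

def multicast_uplink_bytes(piece_bytes, n_peers):
    # A single copy leaves the seeder, regardless of peer count.
    return piece_bytes if n_peers > 0 else 0

piece = 256 * 1024   # a common 256 KiB BitTorrent piece size
peers = 40
savings = unicast_uplink_bytes(piece, peers) - multicast_uplink_bytes(piece, peers)
```

Even in this crude model the seeder's uplink usage drops by a factor of N, which is the "massive reduction" the parent is talking about.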
Torrents are major traffic hogs (Score:1, Interesting)
I've used torrents during two periods. The first time, I only let the client run 4-6 hours to grab some media files and noticed that my bandwidth consumption was actually threefold what I'd normally use. To download a 60MB (or so) file, I was at some point at 20MB downloaded and approx. 60MB uploaded. Since speed goes both ways (what goes up can't come down, at the same time anyway) and I had a maximum data traffic allowance to consider, I decided to stop using the program after that session. Picture my surprise when I kept noticing 'torrent connects' in my firewall logs for the next 4 weeks! I really consider that major overhead, especially when you consider that not every firewall blocks a port by giving out a "sorry, no access" response; many, like mine, simply ignore the whole attempt altogether. That's bound not to work well with regard to timing.
And now that we're on the issue of firewalls: I think the flexibility to change the ports used is something such software simply needs. If you can change FTP ports, why wouldn't you be able to change the ports a torrent uses? However, it would have been a lot nicer if you could specify which port(s) you used so that others would stick to them. I don't like opening up a zillion ports on my firewall, so when I opened up the very basic range in my second session attempt (approx. 1/2-1 year later), I noticed that an increasing number of peers weren't using the ports I had specified. In fact, even though I clearly indicated that I wanted the "default" range, I kept getting torrent hits on ports never propagated (or so I assume) by my torrent client/server.
So my simple conclusion is that while the whole concept (spreading the load over multiple sources) is a smart one, reality shows a completely different picture, with a massive amount of overhead being created. Either people look at the global picture (no need to keep sending a 60MB file, for example, from your site over and over again, since that load is spread over many sites) or at a very narrow one (no problem if there is a server with a slow upload somewhere, since many others are used in parallel), but it seems no one pays attention to the generated overhead.
Yes, it's nice that you can grab a 60MB file from many sites in parallel. But is it really as fast as people claim? Using several feeds means more data-processing overhead on your box. Then there's the bandwidth itself to keep in mind; you only have so much to spare. But when I see that a 60MB download actually generates 180MB worth of traffic, I can't agree with people saying how much better a torrent is and that the spreading of data is actually a good thing. Sure, perhaps in the global picture... but for anything else (security, bandwidth, etc.) I think it's a poor concept.
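The complaint above is really just about the share ratio. A quick sketch of the arithmetic, using the poster's own mid-session numbers:

```python
# Total line usage for a torrent session is download + upload; with an
# upload-heavy swarm the overhead dwarfs the file itself. Numbers below
# are the grandparent poster's mid-session snapshot.

def session_totals(downloaded_mb, uploaded_mb):
    """Return (total MB moved over the line, upload/download share ratio)."""
    total = downloaded_mb + uploaded_mb
    ratio = uploaded_mb / downloaded_mb if downloaded_mb else float("inf")
    return total, ratio

# 20 MB down, 60 MB up: 80 MB moved for a third of the file.
total, ratio = session_totals(20, 60)
```

A ratio of 3.0 at that point is unusually high; most clients let you cap the upload rate or total ratio, which would have limited the damage here.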
Re:Off the cuff thought (Score:5, Interesting)
http://www.slyck.com/news.php?story=1231 [slyck.com]
http://www4.law.cornell.edu/uscode/html/uscode17/
" If a file shows up on the network frequently, the cache stores that file so that its seeded in the network rather than by peers. ISPs appreciate this because their access networks are terribly congested with P2P traffic. Caches are legal and covered explicitly in the DMCA"
Locality awareness in the protocol is the answer (Score:3, Interesting)
See http://del.icio.us/tag/p2p+locality [del.icio.us]
Another Cache? (Score:3, Interesting)
Why not IP Multicast? (Score:3, Interesting)
Wouldn't IP Multicast [wikipedia.org] be a more appropriate solution to this problem (and, for that matter, also for the whole lot of streaming content that flows on the 'net nowadays)? AFAIK it has been standardised for some time now, both for IPv4 and IPv6. Why, then, is it that multicast is virtually unused outside private networks?
Re:Off the cuff thought (Score:3, Interesting)
It doesn't matter if ISP-side torrent caching would turn every computer into a supercomputer - ISPs won't do it, for a variety of reasons:
1) Legal liability, obviously. Sure, it's probably fine, but not-caching torrents is definitely fine, which is better than probably fine. This is called the "chilling effect".
2) Easier just to not do it. The torrent-cache is one more system to maintain that they'd probably just rather do without. For any software problem there are two solutions, the right solution and the easy solution - and an ISP will always choose the easy solution unless it offends 99% of their customers.
3) The only people who want this feature are the kind of users the ISP would rather be rid of. You know, the users that actually use their service instead of just checking email once in a while.
JPC (Score:3, Interesting)
Azureus already has LAN Peer Finder and JPC (Joltid Peer Cache [azureuswiki.com]). I'm not sure how this is different from JPC on a practical level:
Looks like by going its own way, the official client will once again create segmentation, just like with DHT.
Re:Off the cuff thought (Score:4, Interesting)
In response to the GP, it's not even a matter of implementing multicast. Almost all of the networking hardware out there has it in place; it's just turned off.
The reason? The original implementation is hard for ISPs to charge for. But there is hope. At SIGCOMM 2006, there was a proposal that would be more ISP-friendly, with a minimal performance hit. It's called Free Riding Multicast [stanford.edu] and essentially piggybacks off BGP's unicast routes.
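Very roughly, the FRM idea is that the sender encodes the destinations in a Bloom filter carried with the packet, and routers forward a copy along any existing unicast route that reaches a matching destination. A loose sketch of that membership test (hash choice, filter size, and AS numbers are all illustrative, not the actual FRM encoding):

```python
# Loose sketch of Bloom-filter-based forwarding in the spirit of Free
# Riding Multicast: destinations are encoded in a Bloom filter, and a
# router forwards toward a next hop if any AS reachable via that hop
# may be in the filter. Parameters here are arbitrary toy values.

import hashlib

def bloom_bits(item, size=256, k=3):
    """The k bit positions for `item` in a size-bit filter."""
    return {int(hashlib.sha256(f"{item}:{i}".encode()).hexdigest(), 16) % size
            for i in range(k)}

def make_filter(dest_ases, size=256, k=3):
    """Encode a set of destination AS numbers into one Bloom filter."""
    bits = set()
    for asn in dest_ases:
        bits |= bloom_bits(asn, size, k)
    return bits

def should_forward(pkt_filter, reachable_ases, size=256, k=3):
    """Forward toward this next hop if any reachable AS matches the filter.
    Bloom filters allow false positives (spurious forwards) but never
    false negatives (a real member is never missed)."""
    return any(bloom_bits(asn, size, k) <= pkt_filter for asn in reachable_ases)

f = make_filter([64512, 64513])   # private-use AS numbers as stand-ins
```

The appeal for ISPs is that routers keep no per-group state at all; the group membership rides inside the packets and the existing BGP routes.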
Re:i wanna go fast (Score:5, Interesting)
Anyway, the problem is elsewhere. It all boils down to telco thinking combined with incompetence. ISPs have degenerated to the point of being either telco resellers or telco wannabes, and they are no longer capable of solving a trivial problem through network design and product definition. So they try a silver bullet (CacheLogic) or a big stick ("fair share", bandwidth throttling, and "kick the hogs" policies) instead.
Once upon a time, around 10+ years ago, it was commonplace to charge people for traffic and to have multiple charge categories, with local traffic free or nearly free. That was in the days before the big telcos became interested in the Internet. When they did become interested, the first thing they pushed for was increased port density and bandwidth on access concentrators and routers. In order to deliver this, the vendors killed the bandwidth accounting features. Best example: Cisco NetFlow stopped working in 1999-2000 with the release of CEF (I can give plenty of other examples, actually).
As a result of the normal equipment upgrade cycle, 10 years later there are very few devices out there capable of (and tested in real deployments for) bandwidth accounting on the edge. Even if there were, as a result of the "people upgrade cycle" there are even fewer people in ISP business development and engineering capable of defining, developing and rolling out a product based on bandwidth accounting.
If charging were based on bandwidth accounting, with local traffic free (or seriously discounted), the "bandwidth hogs" problem would go away right away. So would most of the "Joe Idiot" problems related to people not cleaning their zombie machines (when these start costing them money, they will be cleaned right away). People would again start running local network services for community purposes. For example, I used to run centralised network backup for some friends, but I stopped because it eats the monthly "fair use" quota allocated to me by my ISP in less than a week. And so on.
The only people who will actually suffer from the reintroduction of bandwidth-based, differentiated charging will be the c***sucking freeloaders of the Niklas Zennström "it is my right to steal your bandwidth for my service" variety. And CacheLogic (the economic incentive to buy their device will go away). Frankly, good bye and good riddance.
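The differentiated-charging model described above is simple enough to sketch. Rates and categories here are made up purely for illustration:

```python
# Sketch of metered billing with free local traffic, as the parent
# describes: traffic is classified as on-net (local) or external, and
# only external bytes are billed. Rates are hypothetical.

LOCAL_RATE_PER_GB = 0.0      # local/on-net traffic free
EXTERNAL_RATE_PER_GB = 0.50  # made-up price per external GB

def monthly_bill(local_gb, external_gb):
    """Bill for one month: local traffic is free, external is metered."""
    return local_gb * LOCAL_RATE_PER_GB + external_gb * EXTERNAL_RATE_PER_GB

# A heavy *local* user (e.g. running backups for friends on-net) pays
# only for their external bytes.
bill = monthly_bill(local_gb=300, external_gb=20)
```

Under this model the incentive flips: heavy local usage (community servers, LAN seeding) costs the user nothing, while zombie machines spewing external traffic show up directly on the bill.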