Bittorrent Implements Cache Discovery Protocol

An anonymous reader writes "CacheLogic and BitTorrent have introduced an open-source Cache Discovery Protocol (CDP) that allows ISPs to cache and seed BitTorrent traffic. Currently, BitTorrent traffic suffers bandwidth throttling from ISPs that claim it is cluttering their pipes. This motivated the developers of the most popular BitTorrent clients to implement protocol encryption, to protect BitTorrent users from being slowed down by their ISPs. However, Bram Cohen, the founder of BitTorrent, doubted that encryption was the solution and, together with CacheLogic, developed a more ISP-friendly alternative."
  • by MrSquirrel ( 976630 ) on Monday August 07, 2006 @06:22PM (#15862112)
    We have the technology -- we can make him stronger, faster, better! ...now, if only there were some more seeders.
    • It's about time something like this was done. Caching is complicated, but in theory it's so much faster. The older system of local mirrors for faster software downloads is something that could really benefit from being combined with BitTorrent.
    • Re:i wanna go fast (Score:4, Insightful)

      by timeOday ( 582209 ) on Monday August 07, 2006 @07:50PM (#15862659)
      Wouldn't this technology make your ISP a seeder? Now that would be fast.
      • Re:i wanna go fast (Score:5, Interesting)

        by arivanov ( 12034 ) on Tuesday August 08, 2006 @04:38AM (#15864501) Homepage
        More likely fast in terms of "lawyers homing fast".

        Anyway, the problem is elsewhere. It all boils down to telco thinking combined with incompetence. ISPs have degenerated to the point of being either telco resellers or telco wannabes, and they are no longer capable of solving a trivial problem through network design and product definition. So they try a silver bullet (CacheLogic) or a big stick (fair-share, bandwidth-throttling and "kick the hogs" policies) instead.

        Once upon a time, 10+ years ago, it was commonplace to charge people for traffic and to have multiple charge categories, with local traffic free or nearly free. That was in the days before the big telcos became interested in the Internet. When the big telcos became interested in the Internet, the first thing they pushed for was to increase port density and bandwidth on access concentrators and routers. In order to do this, the vendors killed the bandwidth accounting features. The best example: Cisco NetFlow stopped working in 1999-2000 with the release of CEF (I can give plenty of other examples).

        As a result of the normal equipment upgrade cycle, 10 years later there are very few devices out there capable (and tested in real deployments) of bandwidth accounting on the edge. Even if there were, as a result of the "people upgrade cycle" there are even fewer people in ISP business development and engineering capable of defining, developing and rolling out a bandwidth-accounting-based product.

        If charging was based on bandwidth accounting and local traffic was free (or seriously discounted), the "bandwidth hogs" problem would go away right away. So would most of the "Joe Idiot" problems related to people not cleaning their zombie machines (when these start costing them money, they will be cleaned right away). People would again start running local network services for community purposes. For example, I used to run centralised network backup for some friends, but I stopped because it ate the monthly "fair use" quota allocated to me by the ISP in less than a week. And so on.

        The only people who will actually suffer from the reintroduction of bandwidth and differentiated charging will be c***sucking freeloaders of the Niklas Zennström "it is my right to steal your bandwidth for my service" variety. And CacheLogic (the economic incentive to buy their device will go away). Frankly, goodbye and good riddance.
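        For illustration, a minimal sketch of the kind of category-based accounting described above (Python; the prefixes, rates and flow records are invented examples, not any real ISP's tariff):

            import ipaddress

            # Hypothetical "local" prefixes the ISP carries for free or nearly free.
            LOCAL_NETS = [ipaddress.ip_network("192.0.2.0/24"),
                          ipaddress.ip_network("198.51.100.0/24")]

            RATE_PER_GB = {"local": 0.00, "external": 0.50}   # made-up tariff

            def category(dst_ip):
                """Classify a flow's destination as local or external traffic."""
                addr = ipaddress.ip_address(dst_ip)
                return "local" if any(addr in net for net in LOCAL_NETS) else "external"

            def bill(flows):
                """flows: iterable of (dst_ip, byte_count) records from the edge router."""
                totals = {"local": 0, "external": 0}
                for dst_ip, nbytes in flows:
                    totals[category(dst_ip)] += nbytes
                return sum(RATE_PER_GB[cat] * (b / 1e9) for cat, b in totals.items())

            # A month where most traffic stayed local costs almost nothing:
            print(bill([("192.0.2.7", 40e9), ("203.0.113.9", 5e9)]))   # -> 2.5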
        • The only people who will actually suffer from the reintroduction of bandwidth and differentiated charging

          And hey, the best part of the whole thing is that your ISP just has to drop every other TCP packet in order to charge you double! Half the work for twice the price is a great deal no matter how you slice it!
  • Off the cuff thought (Score:5, Interesting)

    by Arimus ( 198136 ) on Monday August 07, 2006 @06:22PM (#15862114)
    Just read this and wonder what the legal position for ISPs will be with regard to caching illegally shared P2P files (warez, music files, etc.)?

    With the files being on my PC and served from my PC, I'm the responsible party... if the ISP then caches that data to make it more available (speed/latency/load reduction, etc.), then the ISP could be deemed to be a party to an illegal act...
    • by zhouray ( 985297 ) on Monday August 07, 2006 @06:25PM (#15862134)
      I assume you didn't read the article. It says "only for commercially licensed content".
    • No different from news servers: they don't monitor what goes on them; they only respond when contacted by the copyright holder. No harm, no foul.
    • by muftak ( 636261 ) on Monday August 07, 2006 @06:50PM (#15862302)
      On the cache, the files are stored as chunks, with only a reference to the piece hash value, not the filename. The ISP has no idea what is in the cache, so it is the same as the file simply passing through their network.
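      For illustration, a minimal sketch of such a hash-keyed chunk store (Python; the interface is hypothetical, but the keys are the per-piece SHA-1 digests a .torrent file already carries for verification):

          import hashlib

          class ChunkCache:
              """Stores opaque piece data keyed by its SHA-1 digest; no filenames."""
              def __init__(self):
                  self._chunks = {}

              def put(self, data: bytes) -> str:
                  key = hashlib.sha1(data).hexdigest()
                  self._chunks[key] = data
                  return key

              def get(self, piece_hash: str):
                  # A client asks for a piece by the hash listed in its .torrent;
                  # the cache can serve it without knowing which file it belongs to.
                  return self._chunks.get(piece_hash)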
      • ...but without the crypto.

        It's a shame more ISPs don't run freenet, tor, or i2p nodes. Usenet servers were a good idea, and torrent caching servers are a step in the right direction.
    • by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Monday August 07, 2006 @06:50PM (#15862304) Homepage
      It looks like (from TFA), there will be restrictions in place that only allow caching of non-copyrighted, legal content.

      It goes a LONG way towards legitimizing BitTorrent in case anyone tries to sue Bram, but contains no real-world benefits.

      If ISPs want to reduce bandwidth overuse by seeders... Just IMPLEMENT MULTICAST ALREADY!

      Yes, I realize multicast has historically presented major scalability problems at the backbone router level, but with modern processing power and memory economics it shouldn't be that difficult to implement now, and in the end it presents far more benefits (a massive reduction in bandwidth usage) than disadvantages (backbone routers needing some pretty hefty amounts of memory to track all of the multicast groups).

      Even "limited" multicast solutions like xcast (explicit multicast - basically instead of sending to a "multicast group" an IP datagram is given multiple destinations) would result in MASSIVE reductions in bandwidth usage by P2P applications like BitTorrent.

      Due to the nature of BitTorrent and how it is used in general, caching is just an extremely hackish and limited way of implementing a shitty form of multicast... If the backbone supported multicast, there wouldn't be any need for caching of torrents.
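      For illustration, a toy model of the xcast idea (Python; this is not the real xcast wire format, and the /24-based routing is invented): one payload is handed to the network with an explicit destination list, and each hop forwards one copy per next hop instead of one copy per receiver.

          from itertools import groupby

          def next_hop(addr):                # made-up routing: group destinations by /24
              return addr.rsplit(".", 1)[0]

          def xcast_send(dests, payload):
              """Send one datagram carrying the full destination list; count the
              copies that actually leave this hop (one per next hop)."""
              copies = 0
              for hop, members in groupby(sorted(dests, key=next_hop), key=next_hop):
                  copies += 1
                  print(f"-> {hop}.x carrying {list(members)}: {len(payload)} bytes")
              return copies

          peers = ["10.0.1.5", "10.0.1.9", "10.0.2.7"]
          print(xcast_send(peers, b"piece"), "copies instead of", len(peers))   # 2 vs 3

      A plain unicast sender would have pushed len(peers) full copies of the payload up its own link.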
      • by mzs ( 595629 ) on Monday August 07, 2006 @07:04PM (#15862369)
        And who doles out the multicast group addresses? I think the problem is harder than it appears at first glance.
        • ICANN?

          I'd imagine that ISPs would have to buy small chunks of multicast addresses and then resell them to people. Unfortunately, that will probably kill the idea before it even gets started, since ISPs will no doubt charge an arm and a leg for a multicast IP, and BitTorrent users generally want to avoid drawing too much attention from their ISP. It might make sense if there were just a pool of multicast groups managed by a central server for all BitTorrent users, but even that sounds like a non-starter.
          • by silas_moeckel ( 234313 ) <silas@@@dsminc-corp...com> on Monday August 07, 2006 @09:56PM (#15863253) Homepage
            They are already allocated. Modern multicast uses a (source IP/port, multicast destination address/port) tuple, so really you can pick any of the piles of multicast addresses to use; traffic is split up based upon the tuple that you joined. Lower-end gear hasn't been as specific as higher-end gear in splitting up traffic, leaving the OS to remove anything unwanted, but modern switches listen in on multicast setup to be more specific, and the old behaviour is going away as the old gear gets aged out (managed 100bt gear is about the newest stuff that would still do this).
        • As I mentioned in my post, even a limited form of multicast such as explicit multicast (Google for xcast) solves both the issue of limited multicast groups and that of massive routing tables. It isn't as formalized a standard as traditional IP multicast, but that doesn't really matter, since IP multicast is basically not implemented by anyone except in very limited scopes. Yes, xcast has its own limitations (a limit on the number of destinations imposed by the maximum size of an IP datagram, and the potential for "spam")...
        • by aprilsound ( 412645 ) on Monday August 07, 2006 @11:35PM (#15863622) Homepage
          You choose one at random. The chance of a collision is low, and if it is detected, you randomly choose again. Not a big deal.

          In response to the GP, it's not even a matter of implementing multicast. Almost all of the networking hardware out there has it in place; it's just turned off.

          The reason? The original implementation is hard for ISPs to charge for. But there is hope. At SIGCOMM 2006 there was a proposal that would be more ISP-friendly, with a minimal performance hit. It's called Free Riding Multicast [stanford.edu] and essentially piggybacks on BGP's unicast routes.
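          The random-group scheme is easy to sketch (Python; group_in_use is a hypothetical collision probe, and 239.0.0.0/8 is the administratively scoped multicast range from RFC 2365):

              import random

              def group_in_use(addr):
                  """Hypothetical probe: e.g. join the group briefly and listen
                  for foreign traffic. Stubbed out for this sketch."""
                  return False

              def pick_multicast_group():
                  # With ~16.7 million addresses in 239/8, random collisions are rare.
                  while True:
                      addr = "239.%d.%d.%d" % (random.randint(0, 255),
                                               random.randint(0, 255),
                                               random.randint(1, 254))
                      if not group_in_use(addr):   # collision? just draw again
                          return addr

              print(pick_multicast_group())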

          • FRM seems like an interesting research topic, but while it addresses the limitations of the now-fashionable Single Source Multicast (SSM), it is overkill for P2P. P2P could do fine with very simple SSM.

            But in the end I doubt the feasibility of IP (layer 3) multicast. SSM solves the multicast routing problem, and source/group discovery and advertising can now easily be done manually, but other multicast problems remain: synchronicity (everybody has to receive at the same time), least-common-denominator bandwidth...
      • > It looks like (from TFA), there will be restrictions in place that only allow caching
        > of non-copyrighted, legal content.

        "Non-copyrighted", eh? I suspect that isn't what you really mean. Hint: this article is copyrighted. So is yours.
      • by Spezzer ( 101371 ) on Monday August 07, 2006 @09:29PM (#15863140)
        Some people in networking research believe that the problem with multicast (and even QoS) has nothing to do with scalability, but more with economics. Although in this case ISPs would reduce traffic going through their network by enabling multicast, there is no popular method of accounting for internal traffic when multicast is enabled on all routers. For most ISPs this is unacceptable, since large customers are billed based on the amount of traffic sent. Since there's no economic model developed for multicast traffic, ISPs would rather throttle back BitTorrent than enable multicast. Someone please correct me if I'm mistaken on any of these points.

        Most networking researchers seem to believe multicast is technologically feasible and helpful, which is why a lot of Internet architecture research seems to provide methods for multicast, even though hardly anybody uses it today.

      • If ISPs want to reduce bandwidth overuse by seeders... Just IMPLEMENT MULTICAST ALREADY!
        Isn't Multicast a real-time protocol, i.e. everyone would have to download a torrent at the same time to benefit from it? Multicast seems to be more suited for TV-like applications, not random access bulk data. Or am I missing something?
        • I'll give you an example of how it would be used in a BitTorrent-style network application:

          I am peer 1. I have chunk 4 of "the file". In current BitTorrent, I upload this chunk to peer 2. However, peers 3, 7, 24, 23, and 15 need that chunk too. With multicast, I can send it to all of them at once.

          Sure, it has to be at the same time. There may be times when a portion of a file is sent to only one user. But with significantly large peer swarms, it is useful.
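          For illustration, the send side of that example in plain Python sockets (the group address and port are arbitrary; receivers are assumed to have already joined the group via IP_ADD_MEMBERSHIP):

              import socket, struct

              GROUP, PORT = "239.1.2.3", 6771      # arbitrary example group/port
              chunk = b"\x00" * 16384              # one 16 KiB block of "the file"

              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
              sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                              struct.pack("b", 8))  # let it cross a few routers

              # Unicast today: five sendto() calls, five copies up peer 1's link.
              # Multicast: one send reaches peers 2, 3, 7, 15, 23 and 24 at once.
              sock.sendto(chunk, (GROUP, PORT))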
          • by markom ( 220743 )
            Well, there is only one problem with this. Multicast in itself is connectionless and doesn't work with TCP. If I'm not much mistaken, BitTorrent is TCP. To make it work over UDP, a whole new mechanism would have to be developed for it to be reliable. There are solutions like "reliable multicast" that fall back to unicast, but on a large scale this won't work. The benefits of multicast would be absolutely minimal.
            • So? While it wouldn't be exactly trivial to create a BitTorrent-like multicast protocol, it'd be fairly straightforward. Hell, if I were to code up a very rough prototype all by myself, I can't see it taking me more than three weeks, and most of that would be protocol design. Once it'd been implemented once, it'd get steamrolled into most of the BitTorrent clients out there as an option within 3-6 months, just like every other new BT feature. Of course, without a multicast-capable Internet, nobody...

              • You are completely missing the point.

                Yes, you can make it use multicast. You have any-to-any multicasting (or, as it is actually called, bidirectional multicast). What happens if (when!) one of the hosts misses a few packets? It needs to recollect them somehow, but... you can't retransmit, because that retransmits to the whole group. How do you solve that on a global scale? Of course, for that client, you revert to unicast and send to him. With a network of a few thousand clients, in a matter of a few minutes you'll...
                • If even two peers manage to get a chunk off the same transmission, you save bandwidth. Especially with the current asymmetric connection fad, it would increase BitTorrent download speeds considerably while reducing total bandwidth used.

                  It doesn't matter if a couple of peers miss a transmission. You don't retransmit to that same group - instead, when necessary, you transmit that piece to a new group, one that includes those from the old group that missed it the first time.

                  The simplest implementation...
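                  A minimal sketch of that regrouping idea (Python; the delivery simulation and loss rate are invented, transport omitted):

                      import random

                      def simulate_delivery(peers):
                          """Stand-in for the network: each peer
                          receives the packet with probability 0.9."""
                          return {p for p in peers if random.random() < 0.9}

                      def schedule(missing):
                          """missing: {piece_id: set of peers still needing it}.
                          Each round, every outstanding piece is multicast once
                          to a fresh group of exactly the peers that missed it."""
                          sends = 0
                          while any(missing.values()):
                              for piece, peers in missing.items():
                                  if peers:
                                      sends += 1   # one multicast per piece per round
                                      missing[piece] = peers - simulate_delivery(peers)
                          return sends

                      need = {0: {"A", "B", "C"}, 1: {"B", "D"}}
                      print(schedule(need), "multicast sends vs 5 unicast sends")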

    • by Sark666 ( 756464 ) on Monday August 07, 2006 @07:00PM (#15862349)
      When BitTorrent 4.2 was released, there was already mention of this, and I thought "ya right, the ISPs will help with torrents" - but supposedly ISP caching (even of copyrighted material) is allowed under the DMCA.

      http://www.slyck.com/news.php?story=1231 [slyck.com]

      http://www4.law.cornell.edu/uscode/html/uscode17/usc_sec_17_00000512----000-.html [cornell.edu]

      " If a file shows up on the network frequently, the cache stores that file so that its seeded in the network rather than by peers. ISPs appreciate this because their access networks are terribly congested with P2P traffic. Caches are legal and covered explicitly in the DMCA"
      • The thing is, the intention of the law for caching is for otherwise legal copies. As in, graphic images for popular websites like Slashdot are allowed to be cached, so long as you obey industry-standard refresh requirements. And section E of the conditions pretty much makes it clear that observing copyright is more important than saving bandwidth.

        Which is to say, because the internet is incredibly efficient at duplicating binary information, rather than mere transferral, machines involved in improving this...
        • the intention of the law for caching is for otherwise legal copies

          The "legality" of the cache's contents are irrelevant, and you really should just read the part of Section 512 the PP has so kindly linked to [cornell.edu]. You can argue intent until you're blue in the face, but all you're doing is second-guessing, because the law itself -- not its intent -- is what has force.

          That said, should the MPAA/RIAA trusts decide to argue about this, their claim will probably be that this is not a cache. They'll say somethin

      • by Pxtl ( 151020 )
        I think you've hit the nail on the head with "ya right".

        It doesn't matter if ISP-side torrent caching would turn every computer into a supercomputer - ISPs won't do it, for a variety of reasons:

        1) Legal liability, obviously. Sure, it's probably fine, but not-caching torrents is definitely fine, which is better than probably fine. This is called the "chilling effect".

        2) It's easier just not to do it. The torrent cache is one more system to maintain that they'd probably rather do without. For any software...
        • But it could save bandwidth for the ISP. Any transfers they can keep within their own network are transfers they do not have to pay someone else for. Whether that outweighs the reasons you give against implementing caching remains to be seen.
    • I work for an ISP, and no, you are incorrect. ISPs are not telcos, and are therefore not covered by common carrier status. If you share illegal files, your ISP is just as liable as you are. If the copyright holder files a complaint with the ISP, and the ISP doesn't deal with the issue to the holder's satisfaction, the ISP can be sued as if it were directly responsible.

      That doesn't mean it happens; any smart ISP NOC shuts anything down as soon as they get a complaint. Frequent offenders might just find themselves...
  • by woodhouse ( 625329 ) on Monday August 07, 2006 @06:24PM (#15862128) Homepage
    Given that a lot of torrents are copyrighted content, are ISPs really going to want to do this? The moment they start caching these files on their servers, they become a huge target for lawsuits.
    • Dang, I was just going to say that! Well, here's to hoping that the MPAA and RIAA sue every ISP in the nation. When all the evildoers are busy fighting with each other, they are bound to leave us consumers alone. (Maybe wishful thinking.)
    • Given that a lot of torrents are copyrighted content, are ISPs really going to want to do this? The moment they start caching these files on their servers, they become a huge target for lawsuits.

      On top of that, what torrents are ever so common as to warrant the use of a cache? There are certainly legitimate uses of BitTorrent, if you can limit the cache to legitimate content. But what torrents would ever be accessed so frequently by individual users on any given network that this would make sense?...
      • On top of that, what torrents are ever so common as to warrant the use of a cache?

        How many people download Linux ISOs using BT? If 30 people on one ISP download a new release, and it's using this, the ISP saves about 20-30GB and the users get the full 300KB/s they pay for instead of 2-3KB/s.
        • Never mind the occasional Linux ISO over BT; 20-30 gigabytes is chump change compared to the cost of a disk to store that data on. Instead, think Naruto [yhbt.mine.nu]. With over 2 thousand simultaneous users (and this is almost two weeks old!), it's likely a valuable net gain for them there. And if they go ahead with it, usage would spike even higher; many users find BT throttled or simply slow. If you were suddenly maxing out that "6 mbps" line the cable company sold you, I don't think you'd bother with DCC bots...
    • From the article:

      ...downloads will be accelerated instead of throttled. However, only for commercially licensed content.

    • by Bogtha ( 906264 ) on Monday August 07, 2006 @06:46PM (#15862279)

      Given that a lot of torrents are copyrighted content, are ISPs really going to want to do this? The moment they start caching these files on their servers, they become a huge target for lawsuits.

      They already do it with HTTP proxies and Usenet servers without getting sued. So long as they are simply complying with a content-neutral communications protocol - which is basically the whole point of an ISP - I don't see how they could be held accountable. Their business is to transport bits in a particular fashion. It's not up to them to decide which bits are "good" bits and which bits are "naughty" bits.

    • by ajs ( 35943 ) <{ajs} {at} {ajs.com}> on Monday August 07, 2006 @06:48PM (#15862288) Homepage Journal
      First off, many torrents are copyrighted, but many more are not, and both are a problem for ISPs, so yes, they'll WANT to. The question is CAN they? I think they can, but I'd have to look over the details more.

      If the system simply facilitates the protocol blindly, then I don't see how they could be any more to blame for copyright violations than AOL's Web proxies. Sure, gigabytes of copyright violations move through AOL's proxies every day (and get cached to speed up downloads), but they literally don't have the processing power to try to make a distinction. The same goes for the ISPs and BitTorrent (or Gnutella, or any of the other high-bandwidth swarming download technologies).
      • Almost no data transmitted over BitTorrent is without copyright, as under the Berne Convention, works are copyrighted upon creation even when such rights are not claimed by the creator, or when the creator remains anonymous.

        The only works which are not copyrighted are those in the public domain due to the expiration of their term, or those where the copyrights were explicitly waived. In other words, the vast minority of content transferred over BitTorrent.
    • Given that a lot of torrents are copyrighted content, are ISPs really going to want to do this? The moment they start caching these files on their servers, they become a huge target for lawsuits.

      Google's caches are full of copyrighted content. Are they a huge target for lawsuits? If not, why not?

      • They do get targeted; off the top of my head I recall at least one porn site taking action over cached images, and there was the thing with them scanning books too. Google's argument goes along the lines of checking for things like a robots.txt file, or certain META tags in documents, and excluding anything so requested. This does make them more opt-out than opt-in, but I think everybody realises how useless an opt-in search engine would be compared to a spidering one.

    • Given that a lot of torrents are copyrighted content ...
      I think I know what you're trying to say, but free and open-source software and content that's distributed over BitTorrent is also copyrighted content. I think you're trying to say "Copyrighted content distributed without the owner's consent" or something like that.

      I don't like to see the notion reinforced that "copyright" == "RIAA/MPAA bait."

  • by zonker ( 1158 ) on Monday August 07, 2006 @06:26PM (#15862138) Homepage Journal
    when will this be implemented in azureus and utorrent? i appreciate bram's work immensely but i'm not too keen on his app...
    • JPC (Score:3, Interesting)

      by eddy ( 18759 )

      Azureus already has LAN Peer Finder and JPC (Joltid Peer Cache [azureuswiki.com]). Not sure how this differs from JPC at the practical level:

      Joltid PeerCache (JPC) is a device employed by ISPs to reduce the huge external network bandwidth required to support today's P2P networks. It basically acts as a caching web proxy (like Squid), only for P2P file data.

      Looks like by going its own way, the official client will once again create segmentation, just like with DHT.

  • Pipes? (Score:5, Funny)

    by norminator ( 784674 ) on Monday August 07, 2006 @06:27PM (#15862145)
    Currently, BitTorrent traffic suffers bandwidth throttling from ISPs that claim it is cluttering their pipes.

    You mean tubes.
    • Re:Pipes? (Score:3, Funny)

      by Scorchmon ( 305172 )
      Not to be confused with a big truck. The internet is most definitely not a big truck. You could possibly confuse the two.
      • Re:Pipes? (Score:3, Funny)

        by x2A ( 858210 )
        Look, you can't just keep dumping your own private jokes on this slashdot, it can't support them, and results in situations where it can take me 5 days to get the joke.

    • Looks to me like some horses and poker chips could solve the ISPs' problem.
    • Currently, BitTorrent traffic suffers bandwidth throttling from ISPs that claim it is cluttering their pipes.

      You mean tubes.

      No, I'm pretty sure he meant hoses.
  • by Anonymous Coward
    CDP = Cisco Discovery Protocol
    http://www.javvin.com/protocolCDP.html [javvin.com]
    • ahh, just give it spanning tree's abbreviation... no one cares about that anyway
    • A few alternative suggestions:
      TCP (Torrent Cache Protocol)
      SMB (Storage Method for Bittorrent)
      ATM (Advanced Torrent Method)
      BOOTP (Bittorrent Over Other Temporarystorage Protocol)
      BGP (Bittorrent Gateway Protocol)
      HTTP (Helper Torrent Transfer Protocol)
      NTP (Networkfriendly Torrent Protocol)
      TDMA (Torrent Data Management Advanced)
      TFTP (Torrent File Transfer Protocol)

      I'm sure someone will have a few even better suggestions
  • obligatory (Score:3, Funny)

    by cli_rules! ( 915096 ) on Monday August 07, 2006 @06:28PM (#15862152)
    Isn't torrents clogging up the tubes the real problem?
    • Care to explain the joke to the people who don't follow the latest digital fads and jokes?

      --
      Evan

      • Senator Ted Stevens of Alaska (previously famous for "the bridge to nowhere") is one of our leading idiots (and it really does take a lot to stand out in our current crop of Senators). He was recently featured on the Daily Show comparing the Internets to a bunch of "tubes". He was speaking as a no doubt well paid agent of the poor telecoms industry which needs to be able to extort money from Google, et al in order to pay for new tubes.
      • yeah but it might still take you 5 days to get it.

  • between the clogged tubes and the friggin SNAKES... on a PLANE! I'm not sure what to do...
  • by saleenS281 ( 859657 ) on Monday August 07, 2006 @06:39PM (#15862216) Homepage
    It's no different than them hosting Usenet servers. When contacted by copyright holders, they are required to remove the infringing material(s). As long as they aren't actively monitoring what they're caching, they aren't required by law to do anything more about it. +1 for legal precedent set before lobbyists took over our government (at least the telecom portion).
  • by ElephanTS ( 624421 ) on Monday August 07, 2006 @06:41PM (#15862238)
    Currently, BitTorrent traffic suffers bandwidth throttling from ISPs that claim it is cluttering their pipes.

    Jeez, who writes this stuff? Must be clueless because everyone knows the internet uses tubes. Sheesh.
  • Seems like CacheLogic will be providing hardware supporting this new CDP protocol (which, ahem, shares its acronym with Cisco Discovery Protocol). Neato. It's open source as well, so I'm sure we'll see ISPs deploying Linux boxes running the CDP daemon. CacheLogic and BitTorrent did good. One thing I noticed in the official press release was that the engine caches content, but specifically 'legitimate content'. Hmm...
  • bittorrent causes a lot of traffic. I mean come on, the internet isn't like some sort of truck you can just dump stuff on. It's a series of tubes, man!
  • by sdpinpdx ( 66786 ) * <sdpNO@SPAMscottp.us> on Monday August 07, 2006 @07:20PM (#15862459) Journal
    No ISP cooperation necessary. This has been tested experimentally a couple of times.

    See http://del.icio.us/tag/p2p+locality [del.icio.us]
    • A locality-aware swarming protocol can only discover other peers at the same ISP that are running at the same time, but a cache hosted by the ISP is always running and can serve content that was downloaded by another client earlier (sort of cooperative prefetching). Also, the bandwidth between the cache and a customer is usually going to be much higher than the bandwidth between two customers because of asymmetric connections.
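      For illustration, the locality-aware half of that comparison reduced to a sketch (Python; "same ISP" is approximated by a made-up prefix match):

          import ipaddress

          MY_NET = ipaddress.ip_network("203.0.113.0/24")   # hypothetical "my ISP" prefix

          def rank_peers(peers):
              """Prefer peers inside my ISP's prefix; fall back to the rest."""
              local = [p for p in peers if ipaddress.ip_address(p) in MY_NET]
              remote = [p for p in peers if p not in local]
              return local + remote

          print(rank_peers(["198.51.100.4", "203.0.113.20", "203.0.113.9"]))
          # ['203.0.113.20', '203.0.113.9', '198.51.100.4']

      An always-on ISP cache still wins when no local peer happens to be online, which is the point of the comment above.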
  • Another Cache? (Score:3, Interesting)

    by OverlordQ ( 264228 ) on Monday August 07, 2006 @07:21PM (#15862466) Journal
    Azureus has supported JPC (http://www.joltid.com/index.php/peercache [joltid.com]) for quite a while now.
  • Here in the UK, for an ISP to buy a 622Mb pipe into BT's network (our beloved monopoly telco) costs £1.5m per year. That's a wholesale price of £200 per Mb per month, which is 10-20x more than the external bandwidth is going to cost. So even if your traffic is only going from your local cache direct to your customers, it still costs WAY more to send it that one last hop than it would to get the same amount of traffic from anywhere else on the internet.

    Net result: those crappy bandwidth quotas...
    • Or use the /other/ network: Telewest/NTL Blueyonder *woot*. Their accounts dept sucks, but other than that they're a million times better than going through BT.

    • Well, yes, BT is not the cheapest, but they also have to install a telephone line anywhere at all in the UK for a fixed, relatively low price and keep it working.

      Other providers can cherry pick only the profitable exchanges.

      Additionally, BT Wholesale will sell you a pipe to your customer, and that is covered by the cost of the monthly charge and a bandwidth limit. The more you pay, the more bandwidth you get. That's how things work. An ISP has a guaranteed maximum downtime and response time for any problem...
  • by doshell ( 757915 ) on Monday August 07, 2006 @08:00PM (#15862728)

    Wouldn't IP Multicast [wikipedia.org] be a more appropriate solution to this problem (and, for that matter, to the whole lot of streaming content that flows on the 'net nowadays)? AFAIK it has been standardised for some time now, both for IPv4 and IPv6. Why, then, is multicast virtually unused outside private networks?

    • IP multicast creates a routing table entry for each group in every router that the group's packets flow through. If Internet users were allowed to create multicast groups, routers everywhere would run out of memory immediately.

      Also, ISPs claim that they don't know how to bill for multicast.
      • What about XCast [irisa.fr]? Seems perfect for groups around the size of a typical torrent, and if the torrent gets too large you can just use multiple XCast groups because the number of groups is unrestricted. Even if you need many groups you'll still save a ton of bandwidth compared to unicast.

        Seems to me like the multicast people have been going about it the wrong way all these years, with tons of state inside the network. What happened to the dumb-network philosophy? A stateless protocol like XCast is what...
  • Now I can do 'show cdp neighbor eth0' on my linux box and actually get something back!
  • The sender can multicast the file in a loop. The recipients will get the pieces starting from whenever they began "listening" to the ongoing multicast, and then pick up the earlier parts when the sender finishes and starts over again.

    This is far more efficient than having the sender push the same data to each client in parallel.
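    For illustration, a sketch of that "data carousel" receiver (Python; the piece numbering and the generator standing in for the network are invented): a receiver can tune in at any point and simply collects until every piece has gone past once.

        def receive_carousel(stream, num_pieces):
            """stream yields (piece_index, data) forever, looping over the file.
            Joining mid-loop is fine: we collect until the set is complete."""
            pieces = {}
            for index, data in stream:
                pieces.setdefault(index, data)
                if len(pieces) == num_pieces:
                    break
            return b"".join(pieces[i] for i in range(num_pieces))

        def carousel():
            blocks = [b"A", b"B", b"C", b"D"]
            i = 2                              # this receiver tuned in at piece 2
            while True:
                yield i % 4, blocks[i % 4]
                i += 1

        print(receive_carousel(carousel(), 4))   # b'ABCD'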

  • Encryption should be on EVERYTHING, be it legal or not.
    • Encryption should be on EVERYTHING, be it legal or not.

      Exactly. I routinely encrypt my hard drives for that very reason. There's not much illegal stuff there, except for the occasional temporary movie (or mp3) downloaded in order to watch it (or listen to it) while everybody's talking about it, not when it suits the companies to release it here (usually months later). If the movie or song is crap, it is deleted right away. If it is good, I hang on to it until I can actually buy it. Then it is deleted and I make...
  • The problem, at least at the small ISP I work for, isn't with our upstream connection; we've got bandwidth to spare in the NOC. For me, the problem is actually in the last mile. This would only work if I could buy about fifty of these caches and deploy them at or near my POPs. I'm gonna take a wild guess and say that's not cost-effective for me.
  • If I were an ISP and had a BitTorrent problem (and it's obviously an issue with pirated content on BitTorrent), I'd be interested in putting the proxy up if it really helped defray my bandwidth costs.

    BUT...I'd DEFINITELY want it to be transparent and invisible.

    So basically many ISPs will want this software BAD. But they don't want anybody to KNOW they do it, for fear of lawyers from the RIAA/MPAA/SPCA/etc. coming down on them like a ton of bricks.
