
P2P Through Firewalls 220

Posted by michael
from the file-in-hand-worth-two-in-the-network dept.
An anonymous submitter writes "A few stream-through-firewall applications have been announced recently. p2pnet has an interview with Ian Clarke about his new 'Dijjer' program, which promises to reduce bandwidth requirements from HTTP servers by transparently distributing the load. Slyck.com has an article about LimeWire's new version that offers firewall-to-firewall transfers (code here). [Both Dijjer and LimeWire are GPL'd.] There's also been a lot of discussion on the p2p hackers list about reliable UDP transfers."
  • please dont (Score:5, Funny)

    by Spy Hunter (317220) on Tuesday November 23, 2004 @02:36PM (#10900863) Journal
    Haha, so much for their "please avoid submitting it to any high-traffic web sites, it is not yet ready for primetime" policy. Good work, anonymous submitter.
    • Re:please dont (Score:4, Insightful)

      by nocomment (239368) on Tuesday November 23, 2004 @02:38PM (#10900905) Homepage Journal
      hmmm. I don't care if it's ready for primetime or not. GTK-Gnutella works just fine with firewalls.
    • Re:please dont (Score:3, Informative)

      by Jugalator (259273)
      The page has now been updated:

      Welcome Slashdotters

Ok, I guess the "please don't submit to high traffic websites" in red wasn't enough, perhaps I should have used <blink> tags ;-) Since you are here, please heed the warning that this is at an early stage of development; if you are interested, please sign up to our announcement mailing list so that we can let you know when it's ready for primetime. Otherwise, we do need testers, so feel free to poke around.

  • by recursiv (324497) on Tuesday November 23, 2004 @02:36PM (#10900872) Homepage Journal
  • But... (Score:5, Insightful)

    by remigo (413948) on Tuesday November 23, 2004 @02:37PM (#10900883)
    But... I thought that peer-to-peer sharing was horribly immoral and only used for warez and porn!!

    Seriously, though, this is the kind of thing that I desperately wish mainstream media/Congress paid more attention to. It's only the lawsuits and illegal uses that get covered because that's what sells ads.
  • Text of interview (Score:5, Informative)

    by Anonymous Coward on Tuesday November 23, 2004 @02:39PM (#10900914)

    p2pnet.net News:- Freenet author Ian Clarke is developing Dijjer, a new open source p2p content distribution tool, and he's looking for people to test drive it before it goes online in beta.

    "Dijjer is a peer-to-peer HTTP cache, designed to allow the distribution of large files from Web servers while virtually eliminating the bandwidth cost to the file's publisher," he told p2pnet.

    "Dijjer is designed to be simple, elegant, and to cleanly integrate with existing applications where possible. Dijjer uses "UDP hole punching" to allow it to operate from behind firewalls without any need for manual reconfiguration.

    "Dijjer's distributed and scalable content distribution algorithm is inspired by Freenet."

    Below is a brief Q&A.

    p2pnet: When did you start working on this?

    Clarke: Several months ago. It's hard to pinpoint a specific time because it's a combination of a variety of ideas that have been at the back of my mind for quite some time.

    p2pnet: What prompted you?

    Clarke: Dissatisfaction with apps like BitTorrent, and a desire to demonstrate that the ideas behind Freenet could be applied to solve other problems.

    p2pnet: When do you expect (hope) it'll be completed?

    Clarke: Well, I'm sure that development will continue for quite some time, but I hope to release a beta version in four to eight weeks that will be suitable for large-scale adoption.

    p2pnet: Who do you see as the principal users?

    Clarke: Anyone who needs to distribute large files to large numbers of people but who can't afford to pay for the bandwidth that this would normally require.

    The download site [dijjer.org] says features include:

    "No Firewall configuration
    With many P2P applications you must reconfigure your firewall to get the most out of them. Not so with Dijjer, we use state-of-the-art "NAT2NAT" techniques to get the most out of your internet connection without any reconfiguration.

    "Sequential downloads
    If you tried to download a video through Dijjer you may have noticed that you could start watching the video before the download completed. This is because Dijjer behaves like a web server: pieces of a file are downloaded in order and fed to your web browser as they arrive, allowing your browser to start displaying content before it has completely downloaded.

    "No "Tracker" necessary, works with virtually any URL
    This is a big one, Dijjer will work with almost any direct URL, the content publisher doesn't need to lift a finger - they may not even realise that people are using Dijjer to save their bandwidth costs!

    "Cross platform and native compilable
    Dijjer is implemented in Java, meaning that it will run on Windows, Linux, and Macs. Those who don't wish to install the Java Runtime Environment (JRE) will be pleased to note that Dijjer can be compiled to native code with the GNU Compiler for Java (GCJ), eliminating the need for a JRE. Native-compiled versions of Dijjer will be available from this site in due course.

    "Free as in Speech
    Dijjer will be released under the GNU General Public License.

    "No cumbersome clients
    Dijjer downloads through your web browser or preferred HTTP download application. You don't need to learn yet another P2P client user interface.

    "Advanced scalable distributed caching algorithm
    Dijjer uses a highly scalable distributed caching algorithm inspired by Freenet. This will allow it to deliver faster download speeds while placing less burden on the web server, and will be better able to handle sudden increases in demand for content."

    "Now all I need are some people to help me test it," says Clarke.

    • by Elwood P Dowd (16933) <judgmentalist@gmail.com> on Tuesday November 23, 2004 @03:20PM (#10901549) Journal
      Dijjer uses a highly scalable distributed caching algorithm inspired by Freenet. This will allow it to deliver faster download speeds while placing less burden on the web server, and will be better able to handle sudden increases in demand for content.

      Sweet! Maybe it will be as fast as Freenet!
      • Sweet! Maybe it will be as fast as Freenet!

        You *do* realize that Freenet is so slow on account of its design constraints wrt privacy and anonymity -- constraints that don't apply to this project -- right?
        • You *do* realize that Freenet is so slow on account of its design constraints wrt privacy and anonymity -- constraints that don't apply to this project -- right?

          I hope that's true, but I don't see why you're so sure. There are many other good candidate reasons that Freenet is slow.
    • *Any* URL? (Score:3, Interesting)

      by MacDork (560499)
      "No "Tracker" necessary, works with virtually any URL
      This is a big one, Dijjer will work with almost any direct URL, the content publisher doesn't need to lift a finger - they may not even realise that people are using Dijjer to save their bandwidth costs!

      So, am I to understand that when using dijjer you are broadcasting your web surfing habits all the time in the hopes that someone other than marketers and police are out there listening? Or is there some anonymizing Freenet magic going on here? Givi

    • Sequential downloading may have its advantages, but speed isn't one of them. Nor will it provide the same savings in bandwidth for the originator. So I don't see this displacing BitTorrent.
  • It has been a long-running discussion in my project lab at the university: how to make a p2p program (in our example, Skype) work between two networks using NAT firewalls.

    Seems like somebody finally came up with the answer! :-)

    Freespirit
    • Saw this [hamachi.cc] a couple of days ago. Pretty vague description,
      but it does promise exactly what you are looking for. 2c.
    • So weird: I woke up this morning with the same idea in my head. I finally tried using gtk-gnutella this weekend, and noticed that it was having problems with my firewall. I started thinking about how to punch through a firewall without having to manually reconfigure and came up with the same solution.

      Essentially, UDP is stateless, so stateful firewalls have no SYN packets to signal the start of a connection, which means you can do the following:

      • Machine A sends a UDP packet from port X to machine B Port
  • by 192939495969798999 (58312) <infoNO@SPAMdevinmoore.com> on Tuesday November 23, 2004 @02:41PM (#10900948) Homepage Journal
    It strikes me that one could set up a server to cache UDP requests and serve them back out to the attached/requesting clients reliably. However, one must wonder why not just use TCP, which is guaranteed to be reliable. IMHO, what you'd end up with using UDP is a LOT of "did you get it? yes/no"-type network traffic between peers.
    • by Anonymous Coward on Tuesday November 23, 2004 @02:44PM (#10900997)
      TCP has slow-start / back-off retransmit behavior that, for long transfers over links with some packet loss, can keep it from fully using the pipe.

      Most modern UDP transfer systems use NACKing, where the receiver just tells the sender if it didn't get a packet (the packets are numbered sequentially) and that it should put it in the retransmit queue. The sender just goes about its business spewing out packets until it's informed the receiver didn't get one.
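      A minimal receiver-side sketch of such a NACK scheme (Python, purely illustrative; the state layout and sample packet sequence are made up, and a real implementation would also rate-limit NACKs and time out lost retransmits):

      ```python
      def note_packet(seq, state):
          """Track an arriving packet; return the sequence numbers to NACK."""
          nacks = []
          if seq > state["next"]:
              # Gap detected: everything between the expected and the received
              # sequence number is (so far) missing -- ask for a retransmit.
              missing = list(range(state["next"], seq))
              state["missing"].update(missing)
              nacks.extend(missing)
          state["missing"].discard(seq)            # a retransmit may fill a hole
          state["next"] = max(state["next"], seq + 1)
          return nacks

      state = {"next": 0, "missing": set()}
      for seq in [0, 1, 4, 2, 5]:                  # packet 3 never arrives
          for n in note_packet(seq, state):
              print("NACK", n)                     # NACK 2, NACK 3
      print("still missing:", sorted(state["missing"]))  # still missing: [3]
      ```

      The sender keeps spewing the whole time; the only receiver-to-sender traffic here is the two NACKs, which is the point of the design.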
      • Mod parent up. (Score:3, Insightful)

        by apankrat (314147)
        And mod me up while you're at it :)

        Most reliable UDP protocols do use unsolicited NACKing and solicited ACKing. This cuts the overhead on fat pipes down to just one ACK per transfer, which is as low as it gets.

        This approach doesn't work well on lossy links or for interactive sessions though.
      • by Spy Hunter (317220) on Tuesday November 23, 2004 @03:27PM (#10901659) Journal
        TCP errs on the side of caution. The failure mode of TCP on congested links is to stop. These new UDP transfer protocols have a "damn the network, just send my bits!" attitude that could be bad for the health of the Internet as a whole. The failure mode of a NACK protocol is to flood the pipe with data that never reaches its destination, while the NACK packets never reach back to the source. Widespread use of unproven UDP transfer protocols with bad congestion control could flood the entire Internet with uncontrolled traffic, making it impossible to establish a TCP connection because of high packet loss, and reducing throughput for all. All in the name of getting a few percent more speed on long links. These people should be working together through the IETF to publish RFCs on a real replacement for TCP, not writing their own vigilante protocols on top of UDP.
        • That could happen, but let's not jump the gun by slamming "vigilante" protocols. TCP just doesn't make sense for everything, e.g. real-time apps (including games) where retransmissions are counterproductive. As good as TCP is, we can't improve on it without experimentation. The time may come for collaboration through the IETF as you suggested, but only after lots of small-scale experimentation I think.
          • TCP just doesn't make sense for everything, e.g. real-time apps (including games) where retransmissions are counterproductive.

            Correct, which is why they should (and most do) use UDP, which is unreliable by design, specifically for the type of situations you cite. Trying to make UDP reliable is totally counterproductive. You'll just end up with TCP.

      • Oh good! So when someone starts a moby transfer that's going to take forever, and crashes his machine trying to game with scores of spyware running at the same time, then I can be next in line to pick up his old DHCP dynamic IP address.

        I'm so looking forward to having my bandwidth eaten by a system that wants a precise STFU packet to stop spewing at me.

        • Ever heard of ICMP, or more specifically the DestUnreachable/PortUnreachable code?

          It is essentially a guaranteed system-level NACK, which comes in handy in exactly the situation you describe. Every decent NACK-based protocol implementation has an ICMP handler (see SOL_IP, IP_RECVERR in setsockopt).
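          A rough Linux-only Python sketch of that handler (the fallback constants 11 and 0x2000 are the usual Linux header values for IP_RECVERR and MSG_ERRQUEUE, used in case the socket module doesn't export them):

          ```python
          import socket
          import time

          def udp_socket_with_icmp_errors():
              # IP_RECVERR makes the kernel queue ICMP errors (e.g. port
              # unreachable) on the socket's error queue; without it,
              # unconnected UDP sockets silently discard them.
              s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
              s.setsockopt(socket.SOL_IP, getattr(socket, "IP_RECVERR", 11), 1)
              s.setblocking(False)
              return s

          def probe(sock, addr):
              """Send one datagram; report whether an ICMP error came back."""
              try:
                  sock.sendto(b"ping", addr)
              except ConnectionRefusedError:
                  return "icmp-error"      # queued error from an earlier probe
              time.sleep(0.2)              # give the (local) ICMP time to arrive
              try:
                  sock.recvmsg(512, 512, getattr(socket, "MSG_ERRQUEUE", 0x2000))
                  return "icmp-error"
              except BlockingIOError:
                  return "no-error"

          s = udp_socket_with_icmp_errors()
          print(probe(s, ("127.0.0.1", 9)))  # closed port: usually "icmp-error"
          ```

          Probing a closed local port should surface the kernel's port-unreachable on the error queue rather than it being dropped, which is exactly the "system-level NACK" the parent describes.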
    • What if you were creating a new protocol to improve on some of TCP's deficiencies (for example, UDT [rgrossman.com] [PDF file])? Generally, you'll want to prototype in userspace to make debugging and portability easier. In this instance, going with UDP makes sense since you don't have to modify the operating system at all to have your program utilize your new protocol, and your program doesn't have to run with root privileges like it would need to do to write the raw IP packets. Essentially, UDP functions as a great testbed
    • You need to use UDP in order to do the transfers through firewalls. NATs & firewalls allow solicited UDP (necessary for most all UDP-based transfers to work), but disallow TCP. A reliable UDP layer is of course going to have the "did you get it? yes/no"-ism of TCP, because that's the whole purpose of it.
    • by crow (16139) on Tuesday November 23, 2004 @03:04PM (#10901325) Homepage Journal
      UDP has advantages and disadvantages.

      UDP is connectionless--you just send a packet to a given IP/port and it goes there. This means that you can forge the from address to make it impossible to tell who is sending the file (provided your ISP doesn't filter those as bogus packets). Of course, you still need some way to get the request from the recipient to the sender (along with re-requests for lost packets).

      UDP has no flow control--the sender sends as fast as he likes without any knowledge of what the maximum bandwidth on the connection is. If the sender's direct upstream connection is the bottleneck, then that should be fine, but otherwise there may be huge packet loss. Also, because of the lack of flow control, it tends to hog bandwidth instead of sharing it.
      • There are UDP based congestion control protocols out there, so UDP doesn't necessarily have to be "send as fast as you like"; it's just up to the application developer to think a bit more.

        eg: TFRC, ftp://ftp.isi.edu/in-notes/rfc3448.txt
    • by ArbitraryConstant (763964) on Tuesday November 23, 2004 @03:29PM (#10901681) Homepage
      p2p traffic is large static files almost 100% of the time. A UDP protocol optimized for large files can be a good thing, better than TCP.

      a) Instead of a relatively small window like TCP's, we can make the window as big as we want. This would let us cut down a LOT on ACKs (or pseudo-ACKs in the case of UDP). We can ACK a range, or a range with exceptions, or whatever. For a protocol specializing in bulk transfers, it can really cut down on overhead.

      b) TCP guarantees that data arrives at the application in order. This is expensive when we don't care. A custom UDP protocol lets us pick up missing chunks at our leisure; we simply need to maintain a list of missing chunks as the transfer goes along so we can request them later.

      c) Since UDP is connectionless, firewalls must create pseudo-connections for UDP. When a UDP packet is sent, the firewall will allow incoming UDP packets from that host/port to the originating port. This gives us a way of signaling to the firewall that we wish to accept UDP packets from that host on that port, even though the client on the other end will never receive that packet due to their own firewall. Once they've both done it, they have a mutual "connection". This is a brilliant hack, whoever thought of it.

      d) We can hide the sender of the data. If we request a file in some mutually accessible place, along with the host/port we're going to expect packets from, anyone anywhere can start spewing packets at us with falsified sender information. It's nearly impossible to determine where they came from with UDP.

      "However, one must wonder why not just use TCP, which is guaranteed to be reliable. IMHO, What you'd end up with using UDP is a LOT of "did you get it? yes/no"-type network traffic between peers."

      TCP does that a lot too (a LOT), it's simply handled by the network stack rather than the application. TCP ACKs cause 1/15th or 1/20th as much upstream traffic as the downstream portion of the connection causes. That adds up when you have a {dsl,cable} modem that's 1/10th as fast with upstream traffic.
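      Point a)'s "ACK a range, or a range with exceptions" is easy to sketch; here's a hypothetical helper (not from any real protocol) that compresses the set of received sequence numbers into the ranges such a bulk-transfer ACK would carry:

      ```python
      def ack_ranges(received):
          """Compress received sequence numbers into inclusive (start, end) ranges."""
          ranges = []
          for seq in sorted(received):
              if ranges and seq == ranges[-1][1] + 1:
                  ranges[-1] = (ranges[-1][0], seq)   # extend the current run
              else:
                  ranges.append((seq, seq))           # start a new run
          return ranges

      # One ACK describes 7 packets; the gaps (4 and 7..8) are implicit NACKs.
      print(ack_ranges({0, 1, 2, 3, 5, 6, 9}))  # [(0, 3), (5, 6), (9, 9)]
      ```

      For a long transfer with little loss, this collapses to a single range, which is where the big savings over per-segment ACKing come from.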
      • No, no, no. I can see reasons to prototype protocols over UDP (cause it's mostly raw IP), but your reasons are all bad.

        a) Modern TCP implementations (with window scaling) support a maximum window size of approximately 1 GB.

        b) A big window, and the selective acknowledgement feature provided by many TCP stacks these days, makes this mostly moot as well.

        c) Yeah, until the firewall vendors start looking for this and the whole thing becomes an even more insanely unreliable hackjob than it already is. Why not
        • "a) Modern TCP implementations (with window scaling) support a maximum window size of approximately 1 GB.

          b) A big window, and the selective acknowledgement feature provided by many TCP stacks these days, makes this mostly moot as well.
          "

          AFAIK, RFC 1323 is not enabled by default in Windows 98, 98SE or XP. Using UDP lets people benefit without messing with their settings. SACK is supported, but with a small window it doesn't do as much good.

          "c) Yeah, until the firewall vendors start looking for this and the
  • Or rather it CAN be faster depending on how you design your protocol.

    It'll still take some form of end-to-end acknowledgement scheme, but since it is pushed up to the application, there is less overhead overall.

    Of course if EVERY app did this, it would really gum up the Internet.
  • by ArbitraryConstant (763964) on Tuesday November 23, 2004 @02:44PM (#10901013) Homepage
    I have bittorrent behind my firewall. Rather than statically allowing ports, I set up a "torrent" user, and told the firewall to let it listen for connections. This also has two beneficial side effects. First, if there's an arbitrary code vulnerability, an attacker can be somewhat contained. Second, bittorrent doesn't always use the common range of ports, so prioritizing by port is problematic. Having a separate user lets me throttle the bittorrent connections so that interactive traffic has priority.

    While I imagine this is possible with Linux, I have no specific knowledge of how to do it. I did it with PF on OpenBSD.
    • I do the same, but I also run it systraced [openbsd.org]. You can use the policy posted in the BitTorrent security [undeadly.org] thread.
    • Azureus is excellent. It is UPnP aware so it opens the relevant ports on your router without any intervention.
    • Let me make sure I understand this. You can take:
      pass in on $ext_if inet proto tcp from any to $ext_if \
      port $btorrent flags S/SA keep state queue (p2p_bit, low_ack)

      Change to:
      pass in on $ext_if inet proto tcp from any to $ext_if \
      user torrent flags S/SA keep state queue (p2p_bit, low_ack)

      Not only will it assign the appropriate queue, but automatically open the ports without specifically defining them?
    • While I imagine this is possible with Linux, I have no specific knowledge of how to do it.

      Pretty easy:

      iptables -P FORWARD DROP                                  # default deny for forwarded traffic
      iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies from outside (eth1) back to the LAN
      iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT             # anything from the LAN (eth0) out
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE      # NAT outbound traffic on the external interface
      Though that's probably a pretty common NAT box snippet for iptables users...
      • Except that your iptables version doesn't limit access to the "torrent" user like OpenBSD's PF does. Which is pretty important.

        • Except that your iptables version doesn't limit access to the "torrent" user like OpenBSD's PF does. Which is pretty important.

          It's only important if you run BitTorrent on the firewall as a trusted user. I run BitTorrent behind the firewall, on my workstation, as a regular user. There's no need for doing anything with different users, unless you're running BT on the firewall.

          Prioritizing interactive traffic with one tool is a nice PF feature, though Linux's tc can provide the same functionality.

      • Uh, that's nowhere near what the grandparent has. All it does is allow through already-established connections and things related to them. But that won't open ports for things running under a "torrent" user, which is what the grandparent's setup sounds like it does (I might be wrong about that, though).
  • by Bert690 (540293) on Tuesday November 23, 2004 @02:46PM (#10901047)

    This looks like an interesting hybrid of Coral [nyu.edu] and BitTorrent. Coral is nice in that you don't need to install any client-side software to take advantage of it. With this one, it appears you do need to install a client-side proxy, which is a little scary.

    This system seems to utilize a client that takes on the roles of both the BitTorrent tracker and the Coral caching nodes. I wonder how the client caches coordinate? Is there any centralized server involved here?

    Another firewall-busting HTTP serving system is YouServ [nyud.net] (coral link), though it's geared more at sharing personal content than content requiring "super distribution".

  • by Krunaldo (779385) on Tuesday November 23, 2004 @02:47PM (#10901057) Homepage Journal
    My father, who's using a Mac, loves LimeWire because it melts perfectly into his UI. But he can't do anything else while he has LimeWire running, except play solitaire and shanghai.

    Personally I think LimeWire sucks. Here are my reasons:
    1. It's slow and processor-hogging.
    2. It doesn't melt into my fluxbox theme. (my fault)
    3. It requires Java.

    But for the ordinary user I think LimeWire is the best p2p software out there.

    Fear kazaa though.
  • Dijjer (Score:3, Interesting)

    by burns210 (572621) <maburns@gmail.com> on Tuesday November 23, 2004 @02:47PM (#10901058) Homepage Journal
    "Dijjer will work with almost any direct URL, the content publisher doesn't need to lift a finger - they may not even realise that people are using Dijjer to save their bandwidth costs!"

    That is a good thing, but potentially a bad thing as well, given how some sites make money... I think a needed feature is a robots.txt entry that blocks Dijjer from caching the site.
    • [Having the mechanism be transparent to servers] is a good thing, but potentially a bad thing as well, given how some sites make money... I think a needed feature is a robots.txt entry that blocks Dijjer from caching the site.

      I'd state it differently: this potentially breaks the formerly viable business model of certain websites, therefore requiring that such websites adapt or go under... and in so doing, perpetuate the natural competition of a free marketplace rather than restricting the evolutionary opportunit

    • Dijjer respects the various no-cache HTTP headers; a robots.txt file is intended for search engines, not caches.
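      In other words, the cacheability decision presumably looks something like this hypothetical sketch (the headers are standard HTTP cache-control machinery; the function itself is illustrative, not Dijjer's actual code):

      ```python
      def cacheable(headers):
          """Honour standard HTTP cache-control directives, not robots.txt."""
          cc = headers.get("Cache-Control", "").lower()
          if "no-store" in cc or "no-cache" in cc or "private" in cc:
              return False
          if headers.get("Pragma", "").lower() == "no-cache":  # HTTP/1.0 form
              return False
          return True

      print(cacheable({"Cache-Control": "no-store"}))   # False
      print(cacheable({"Cache-Control": "max-age=3600"}))  # True
      ```

      So a publisher who doesn't want to be redistributed already has a standard knob to turn, without inventing a new robots.txt convention.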
  • VPN-mesh? (Score:5, Interesting)

    by B5_geek (638928) on Tuesday November 23, 2004 @02:47PM (#10901068)
    I'll be the first to admit that I'm an idiot; but what about using VPN's to secure and mesh these P2P hubs together?

    Each PC that wants to share data, acts as a hub with x-number of tunnels going out at one time. The content of each hub could be spidered and locally cached. (kind of like combining a router-cache with a Freenet hub)

    It might be slower (like DC++) but you could setup groups of peers that get preferential bandwidth.
    BUT you could always add swarm-like functionality like BT's.

    a) secure from **AA (as long as you don't let them into your peer-group)
    b) distributed load (no central server to take down)
    c) because it is a VPN, you don't need to worry about a firewall because YOU initiate the connection and keep it open. {I do know that you are fubar if the firewall admin blocks the ports, but wouldn't you be SOL anyway?}

    d) well, I just think it sounds kinda cool. =)

    • Re:VPN-mesh? (Score:2, Informative)

      by Cyfun (667564)
      Check out Virtual Native Network [www.vnn.cn]. "VNN is a platform which provides the peer to peer's transparence. The peer that is behind either NAT devices or a SOCKS server can communicate with another peer transparently. Also the applications run on the peer can ignore the NAT devices' existence. Enter the world of VNN. Get over the lack of IPv4 address. Construct our own convenient and easy-using VPN."
    • a) secure from **AA (as long as you don't let them into your peer-group)

      How did they get into DC++ hubs? A lot of them are private, yet for p2p apps to really work (for activities that are frowned upon by the *AA's) you need to appeal to a large group of people (if I'm only interested in sharing with a small group of people, I can always set up an ftp server), so infiltration will always be relatively easy.

      The only way you are truly going to be secure, is by masking the origin IP, like FreeNet does, and then y
  • But... (Score:3, Interesting)

    by RAMMS+EIN (578166) on Tuesday November 23, 2004 @02:49PM (#10901104) Homepage Journal
    And I thought firewalls were supposed to stop certain services...isn't "P2P Through Firewalls" defeating the purpose?

    Or perhaps the problem is rather with NAT? In that case, I'm still hoping that someday someone will implement something like RFC 1701 [faqs.org] or somesuch instead of continuously reinventing the wheel.
  • by pair-a-noyd (594371) on Tuesday November 23, 2004 @02:54PM (#10901164)
    Smoothwall GPL 2.0 final [sourceforge.net]
    POS PC = free from side of road
    Smoothwall GPL = free

    Problem solved..
    • You may have problems with Smoothwall at the moment. According to the mailing list, the main Smoothwall site has been hacked. No further news is available so far. The website has the short message "Down for maintenance - returning soon"
      • Being high profile like they are, they present an enticing target. Joe average at home though, presents a very uninteresting target, not to mention hard to find.
        I'm personally not worried about it, if they were hacked it didn't involve the GPL version, they run the corporate version. And if there is a vulnerability in the GPL version, they'll shortly have a patch available.

        Also, another poster mentioned add-ins. Yep. A bunch of them. http://sourceforge.net/projects/smoothiemods/ [sourceforge.net]
        • I'm personally not worried about it, if they were hacked it didn't involve the GPL version, they run the corporate version. And if there is a vulnerability in the GPL version, they'll shortly have a patch available.

          How do you know the GPL version isn't affected? It's the GPL website that's down. One hopes they're being extra careful, double-checking all the downloadable files and updates to make sure they haven't had trojaned versions substituted.



  • If someone out there is willing to put the time in to implementing a reliable UDP, I'd be willing to share my notes and research on how to implement my ECIP error correction over IP, as well as my SPAC (Selective Packet Acknowledgement) algorithms. They can work together for a really cool solution.

    The original code was lost when my former company went bust; it was a mess anyhow.

    But the algorithms can be reimplemented.

    ECIP [ecip.com]

    John L. Sokol

    PS: Method of passing bi-directional data between [ecip.com]
    • Take a look at the links in the article. The link is to the GPL'd LimeWire code for a reliable UDP layer. The link to the p2p hackers discussion is also from a bunch of people about how to do a reliable UDP transfer.
  • by AIX-Hood (682681) on Tuesday November 23, 2004 @02:58PM (#10901225)
    "No "Tracker" necessary, works with virtually any URL This is a big one, Dijjer will work with almost any direct URL, the content publisher doesn't need to lift a finger - they may not even realise that people are using Dijjer to save their bandwidth costs!" As the guy who runs filerush, I'm always looking to move to whatever will keep the files free-flowing with zero hassle. The problem is that this method just shot itself in the foot. So you're saying that I have to serve my 350 meg new game demo on my regular http server and Dijjer users will P2P it without my knowledge. That's great.. but what about the other million users who have no idea about Dijjer, and just hammered my download and therefore my site into extinction because I can't tell who's who? Now nobody gets the file.
    • Trivially solved (Score:4, Informative)

      by Sanity (1431) on Tuesday November 23, 2004 @03:41PM (#10901837) Homepage Journal
      Just make your web server reject or redirect requests that do not report Dijjer as their HTTP client. Easy.
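      As a sketch, the filter could be as small as this (the "Dijjer" User-Agent substring, the file extensions, and the redirect target are all assumptions for illustration, not documented Dijjer behaviour):

      ```python
      def route_request(user_agent, path):
          """Serve big files only to Dijjer clients; redirect everyone else."""
          big_file = path.endswith((".iso", ".zip"))
          if big_file and "Dijjer" not in (user_agent or ""):
              # Plain browsers get pointed at an instructions page instead
              # of hammering the origin server directly.
              return ("redirect", "/get-dijjer.html")
          return ("serve", path)

      print(route_request("Mozilla/5.0", "/demo.iso"))  # ('redirect', '/get-dijjer.html')
      print(route_request("Dijjer/0.1", "/demo.iso"))   # ('serve', '/demo.iso')
      ```

      That addresses the parent's complaint: the million users who have never heard of Dijjer get told about it instead of flattening the server.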
    • Bandwidth-limit the file and promote Dijjer on the site.
  • by Animats (122034) on Tuesday November 23, 2004 @03:03PM (#10901304) Homepage
    If P2P through firewalls is deployed, viruses through firewalls can't be far behind.

    0wn corporate networks! Laugh at their ineffective firewalls. Use them to send spam all night! Resell them on Spamforum.biz [spamforum.biz]. At last, the killer app for "grid computing".

  • Wondered when.... (Score:3, Informative)

    by GoRK (10018) <`johnl' `at' `blurbco.com'> on Tuesday November 23, 2004 @03:44PM (#10901894) Homepage Journal
    I have been really wondering when someone was going to do this for P2P apps. Compared to how much other software actually uses the same techniques, it's long overdue. There seem to be some misconceptions in the comments here about how it works though, so I'll try to give a simple explanation:

    UDP is stateless. There is no connection setup like there is with TCP, so there's really no way for a firewall or gateway to statefully track where to send UDP packets; the typical implementation for NAT'ing UDP is something of a 'best guess' scenario, redirecting certain packets based on port numbers and IPs. These new applications take advantage of this behavior of NAT devices to permit direct connections between client computers where both are behind NAT firewalls.

    NAT of UDP is generally implemented like this: If you begin sending UDP from source port 2000 on your computer to a remote host on port 5000, then the router doing NAT will automatically open up a 'hole' that allows any UDP packet from the remote host from source port 5000 to destination port 2000 on your machine to pass through to you. This is sort of how it works with TCP too; however the firewall only opens up the 'holes' when connections are first set up and only allows packets with correct sequence numbers to pass back through.

    Essentially how it works is that two clients decide to "connect" and agree on port numbers, etc. through some third host that both can reach via TCP. They then begin broadcasting UDP data to each other. Once a packet goes out from both hosts, the two 'holes' in the firewall will open up. Probably at least one packet will not actually arrive at its intended destination; however, the software can implement its own robust transfer protocol over UDP.

    Games have been doing this forever. QuakeWorld (the Quake 1 client tailored to internet play) was one of the first to implement it. Most implementations of SIP support this type of connection.
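    The handshake above can be sketched in Python. Run on one host there are no NATs to traverse, so this only illustrates the simultaneous-send step; the addresses that would normally come from the rendezvous host are read directly:

    ```python
    import socket

    def punch(sock, peer_addr):
        # The outbound datagram's only job is to make our NAT create a
        # mapping for peer_addr; the peer's NAT may drop it, and that's fine.
        sock.sendto(b"punch", peer_addr)

    a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    a.bind(("127.0.0.1", 0))
    b.bind(("127.0.0.1", 0))

    # Both sides send first (in the real case, after learning each other's
    # public address/port from the third host reachable over TCP).
    punch(a, b.getsockname())
    punch(b, a.getsockname())

    # Now traffic flows both ways; b's queue holds "punch" then the payload.
    a.sendto(b"hello from A", b.getsockname())
    msgs = [b.recvfrom(64)[0] for _ in range(2)]
    print(msgs[-1].decode())  # hello from A
    ```

    The asymmetry in the real case is that each side's first packet may be eaten by the other's NAT; it still opens the sender's own hole, which is all the technique needs.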
  • Related to this, check out the Visual P2P/Swarming Simulation [onionnetworks.com].

    It allows you to visualize firewalled transfers.
  • Problem with Dijjer (Score:5, Informative)

    by brunes69 (86786) <<gro.daetsriek> <ta> <todhsals>> on Tuesday November 23, 2004 @04:03PM (#10902193) Homepage
    Just tried it out, so this is speaking from actual experience. Dijjer doesn't limit itself to sharing files you have already downloaded - it will *actively* download files other people are requesting, so that it can share them.

    This is similar to Freenet, and indeed will maximize everyone's bandwidth. But it has grave issues when not combined with Freenet's strong anonymity measures, like encryption and hidden IPs, and it will open you up to all sorts of legal problems.

    I don't want the FBI knocking down my door because my Dijjer client has been downloading kiddie porn for someone else without my knowledge. Sure, I *may* be able to argue in court that it was not me, and hey, I may even be able to prove it. But is that potential trouble worth my saving on some bandwidth? I think not.

    • What proof do you have of this? Can you post it here? The greatest thing about Coral and BitTorrent is that you get in, you get what you want, you get out. No unwanted junk, and once you close your browser or client, there are never any strings attached or other file transfers going on. People generally like to be in full control and not have things going on behind their back.
      • Sure, here is some proof. I downloaded the dijjer.jar off the download page. I ran it, and clicked the test link on the main Dijjer page - the link for the Linux kernel. I clicked no other link and did nothing else except look at the status page.

        Meanwhile, check out some of the output from the server, printed right to STDOUT. Remember - I did not download this file, or make a request for it, and it certainly does not exist on my machine:

        8950 -1 -> lysanderspooner.xs4all.nl:9114 : acknowledgeRequest {uid
      • People generally like to be in full control and not have things going on behind their back.
        Yes, that would explain the unpopularity of file-sharing applications...
        • Was that meant to be sarcastic? Just the other day they announced that BitTorrent, the P2P app/protocol that gives far more control to the user than any other P2P app out there, holds 35% of all internet traffic. You can have your freenet which constantly shares random data bits, even ones you're not directly interested in, and I'm all for it but only when appropriate. Everyone in a city, all using 5% of their upload capacity at all times because some app is sharing without their knowledge ends up floodi
          • by Sanity (1431)

            Just the other day they announced that BitTorrent, the P2P app/protocol that gives far more control to the user than any other P2P app out there, holds 35% of all internet traffic.

            Correlation doesn't imply causality.

            It already happened when Kazaa first hit the net, before it had the ability to completely shut off sharing.

            URL? I have never heard of that. I have spoken to many ISPs and they secretly love P2P, it is the primary driver for broadband adoption. Well designed P2P can actually red

    • will open you up to all sorts of legal problems.

      Care to be more specific? It seems to me that Dijjer is pretty much exactly what the system caching [cornell.edu] exemption of the DMCA was intended for.

      Dijjer does not create any more liability for its users than an HTTP cache creates for an ISP, and note that virtually all ISPs run HTTP caches, so far as I know, without encountering legal problems.

  • do you pronounce it?
  • Ian Clarke? Shouldn't he be busy with getting Freenet to a usable state?
