The Internet

P2P, Firewalls And Connection Splicing

dbarclay10 writes: "There's an interesting article over at Byte about what happens when nobody accepts incoming connections any more, like when more people start using firewalls or NAT. Specifically, it talks about peer-to-peer networking (a la Napster), and how it would be affected. Good read."
  • Where does this "massive bandwidth because it's relayed" requirement come from? Relaying/proxying should only ever require an O(1) signalling overhead, and the O() hides only a small constant in this situation.

    A----B----C

    Relaying data from A through B to C for 23 million hosts is massive traffic, which requires massive bandwidth. B telling A what C's address is and C what A's address is requires less traffic/bandwidth.

  • by Restil ( 31903 ) on Friday November 24, 2000 @06:19PM (#602602) Homepage
    Specifically the problem is 2 clients behind firewalls such that neither can be used as a server, so they cannot communicate with each other directly. They COULD have their traffic relayed through an indirect server that is open to the internet, but that means that the relay needs to be able to handle that bandwidth as well.

    I'm not well versed in the internals of TCP/IP, but I believe that when a connection is established, the IP and port information is written to some type of internal table and used from then on for further data transfers across that socket.

    Consider if both clients initiated a connection with the relay to open the connection. Once the connection is opened, the IP information in the internal table will be modified on both clients to the IP address/port of the NAT machine of the other client. At this point, both clients will be connected to each other but neither of them is a server. And the connecting relay only needs to pass enough traffic to initiate the connection, thereby keeping it readily available.

    -Restil
  • ATM Machine
    PIN Number
    DSL Line
    KBPS per second (horribly confusing, because it can actually mean something..)
    VDU unit
    EBCDIC code
    IBM machines (not really, but should be)
    Microsoft software (not really, but should be)
    ...language FORML
    DOS operating system
    ...distribution of BSD (not really, but should be)
    TOPS-20 operating system
    ARPANET network
    OSF Foundation
    USG group
    TECO editor
    Yacc compiler (ouch.. think about it :)
    EMACS editor (not really, but should be)
    ...
    I'm pretty sure that repeating part of an abbreviation for clarity has become acceptable through overuse...
  • You're fired.

    -- Your boss.

    --

  • In this scenario, all traffic between many pairs of hosts wishing to communicate is 'relayed' through a common .. well.. relay! Figure it out...

    It's like if during all file transfers on napster, all data was passed through the napster server.

  • Doesn't work for all NAT's, unfortunately. Some NAT implementations will only allow incoming UDP packets where the source ip/port match a host that you already sent a packet to (rather like a connected UDP socket). Any other packets are discarded. The sad fact is, there is no trivial solution that works under *every* circumstance.
  • The real issue, I think, is that even if we start destroying transparency with NAT (well, we already seem committed to that).. there will always be *some way* of getting data to and from where we want; the question becomes 'how efficient is it'.

    The real reason simply boils down to conserving available address space. The reason we need NAT, contrary to what everyone thinks, is not security... though it's commonly used that way. The reason is a LACK OF ADDRESS SPACE.

    You could firewall *just as well* stuff NOT behind a NAT box.... original firewalls were *gasp*, filtering routers.

    Yes, there are lots of reasons to use NAT in firewalls, for company networks.. but these are controlled, engineered situations where the admin (hopefully) understands all the implications. I know I do... I consciously accept the lack of incoming connections. I'm FINE with that. It's necessary for me to have a single choke point to prevent people on my network from violating my policies. The problem is with lots of people who don't want that.. they just want lots of hosts on the net, period.

  • This article has nothing to do with anyone taking anything away. It points out a technical problem of using P2P apps on an increasingly NAT'ed Internet. NAT, in case you don't know, is technology that allows multiple machines to share the same IP address by connecting several machines on a fake, unconnected network, then connecting one of them to the Internet on a separate interface. The machine connected to the net then takes traffic from all the other computers on the network, remarks it as coming from itself and sends it out onto the net. When it receives responses it remembers where the original request came from, remarks the packet as being from the local network and sends it to the initiating machine.

    The problem with this is that there are three machines in the exchange and only two of them actually exist on the "Real Internet". There is no way for a machine on the Internet to initiate contact with any machine in a NAT network, other than the one that has the "real" address, and in many cases that machine is nothing but a dumb router. If both the machines trying to use a peer-to-peer system are on NAT networks, neither of them can be the "server" because neither of them can be reached unless they initiate contact. Thus if a sufficient number of people use NAT (which more and more people are doing because broadband ISPs only give out one "real" address), P2P systems will simply cease to work, or will become too unreliable to count on. No one will "take away" Napster; it will just become so unreliable that no one wants to use it.

    The solution to this problem is not trivial and as the article mentions would probably make a good graduate thesis.

  • That's the point... B can tell A what the address of C is and vice versa, but if A and C are both behind a firewall (or many other types of indirect internet connections), neither one can actually "listen" on a port of their true internet connection, so if they try to open a socket to one another, it won't work, because they will be talking to the NAT router, IP Masq box, etc., which has no way of telling which internal LAN address the inbound request is meant for.

    Now someone with a direct 'net connection can communicate with someone behind a firewall, as long as they act as a server, and the firewall'd client establishes the connection to them. Once the TCP connection is established, you can stream data both ways no problem.

    As the article more or less states, there are two theoretical ways to connect two firewall'd clients... one method is to have a third computer act as an intermediary server for both clients, handling all messaging and data transfer.

    The other way, which he referred to as "blind spoofing", would be to rewrite the initial handshaking signals at the packet level to "spoof" the return address. That way you could "magically splice" two outgoing TCP connections together. AFAIK this has never been done, which is why programs like Napster, Scour, and ICQ can't establish direct connections between two firewall'd clients.

    I'm actually head (well, ONLY right now ;) coder for an all Java P2P file sharing app called File Rogue [filerogue.com]. We're still in beta testing but it's starting to look pretty good.
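    The intermediary-server approach described above can be sketched in a few lines of Python. This is a minimal illustration, not how Napster or File Rogue actually did it; all names here are invented. Both firewalled peers dial *out* to the relay, which then shuttles bytes between the two connections:

```python
import socket
import threading

def pipe(src, dst):
    """Copy bytes from src to dst until src signals end-of-stream."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)   # propagate the half-close

def relay(a, b):
    """Splice two already-established connections together.

    Both peers connected *out* to the relay, so neither firewall
    ever saw an inbound connection -- but every byte of the
    transfer crosses this box, which is exactly where the
    massive-bandwidth objection comes from.
    """
    threads = [threading.Thread(target=pipe, args=(a, b)),
               threading.Thread(target=pipe, args=(b, a))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

    The relay itself is trivial; the cost is that it pays for both directions of every transfer it carries.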

  • They are trivial from the point of view of someone who may understand the details of TCP/IP; they are absolutely NOT trivial to someone who simply wants to download and run software, and who uses NAT both because he doesn't know to do differently, and because his ISP charges 'per IP address', if in fact they offer additional ones at all!

    We're out of address space. It's that simple. In a normal, proper architecture, old-school IP, every home would be a subnet. And YES, that's 'wasteful', but the idea is that there is supposed to be enough address space that it's okay to do so. This is the point of switching to IPv6 ASAP.
  • It violates the fundamental rule of multiplayer gaming: don't trust the clients. I'd rather my data be kept on a trusted server than be handed out to random clients that accost me.
  • While it's not bulletproof (or would that be bullet-resistant?) security, the system is used by a number of firewalls already:

    1) An "inside" user makes a request on a known port (for FTP, this would be 20 or 21) to a server somewhere in the world (the "Outside server").

    2) The firewall/gateway/router (FW/GW/R) remembers the inside user who made this connection and the NAT address to which they are being translated.*

    3) When the Outside Server makes an inbound request connection to the address that the inside user is being translated to, the FW/GW/R thinks "Hey, this person just asked for a connection on another port, and using my 'FTP' rule, that means that the outside server is likely to ask for a connection on a different port - this must be it!" and passes the connection to the inside address.

    Voila - dynamic inbound port assignment without specific client support!

    Now, this isn't perfect. Some high-density PAT (port-address translation) environments will require fairly advanced rules in the FW/GW/R, but it works for most of the tier-1 firewalls (Cisco PIX, Checkpoint, etc).

    *we'll have to assume that the NAT address translations are persistent for at least a few minutes. Most NAT solutions work this way anyway.
    _________________________________
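    The "FTP rule" in the comment above is essentially stateful connection tracking: remember who talked to whom, and admit "related" inbound connections only for them. A toy sketch of just that bookkeeping (class and method names are invented; real firewalls also match ports, protocols, and timeouts, none of which is modelled here):

```python
# Toy model of the stateful inbound-exception rule described above.
# Real products (Cisco PIX, Checkpoint, Linux masq helpers) keep far
# more state; this models only the expectation table.

class ExpectationTable:
    def __init__(self):
        # outside server IP -> inside client address that contacted it
        self.expected = {}

    def outbound(self, inside_addr, outside_ip, outside_port):
        """Inside user opens an FTP control channel (port 21): expect
        the server to connect back in for the data channel."""
        if outside_port == 21:
            self.expected[outside_ip] = inside_addr

    def inbound(self, source_ip):
        """An outside host connects in: pass it to the inside client
        that solicited it, or return None (drop) otherwise."""
        return self.expected.get(source_ip)
```

    The next comment's security worry falls straight out of this sketch: any inside host can create an entry just by talking to a server first.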
  • Yikes.

    As a security freak, this scares the shit out of me. The last thing I want is for any random user to be able to allow inbound connections through the firewall. No thanks - they can suffer with filtering, proxying and NAT.

    Before you knew it, you'd have everybody under the sun using netcat to allow themselves back into a protected network from the outside...just because it is convenient.

    Ugh. Please somebody take us to PKI and IPv6.

    sedawkgrep

  • What's really funny is that your post is currently moderated "Redundant" - sounds like somebody has a sense of humor...

  • I think the author's point is that behind a NAT device, there is no such thing as a "firewall-friendly protocol."
    Wrong way to look at it then. Firewalls aren't "friendly", they should be about as friendly as a locked steel door marked "Trespassers will be shot". Whether it's a NAT based device or an application proxy firewall, firewalls aren't supposed to let anything in by default. In fact outbound traffic should be restricted as well.

    The NAT device needs to be manually configured.
    I don't see anything wrong with having to voluntarily configure the device so that you get less security.

    In fact it's already a concession to consumers that many NAT firewalls automatically allow certain outbound traffic (or even all ick!).

    What if there's a p2p protocol and no one is able/capable/willing to share what they've got?
    Then nothing should get shared. Sharing should be voluntary and unforced.

    The author should get a reality check. We no longer have an Internet where everything is left unsecured and anybody can go about and do as they please, and nothing really bad happens because the only people around are responsible and well mannered. Nowadays people are securing their stuff. And if you want to access their stuff, they'd better have given permission first. If they don't know how to unlock their stuff, then it usually means they aren't ready to expose their computers to the world yet, and involuntarily help some script kiddie DDoS e-bay.

    Link.

  • First of all no major company would want users using Napster so I believe NAT (or PAT) will be around for a while. This means that the majority of NAT users wanting to enable incoming connections are DSL and Cable users.

    Obviously you could forward all incoming traffic for 6699 to one machine and use that for Napster. IMHO you shouldn't use NAT at home if you can't figure this out.

    I was wondering what would happen if you were to forward all inbound connections destined for port 6699 to the broadcast address of your subnet? Would the listening peer start receiving the transfer, and would this overload the network?

  • "A troll is someone who, upon discovering that no one likes them, decides to pretend that it's on purpose."
  • NAT isn't that bad. When I first set up a home NAT firewall, I suddenly noticed that many of my napster receives weren't working anymore. Woe is me. However, it's a trivial matter to run napster on each individual computer set to listen/accept on a certain port, and configure forwarding of that port through the firewall.

    For example, if I am running napster on default listen/accept of 6699..
    ipmasqadm portfw -a -P tcp -L $EXT_IP_ADDR 6699 -R $INT_IP_ADDR 6699
    And that's that. If another computer on the internal NAT segment wants to use napster, just set it up to use a nondefault [say, 9966] port. Most all NAT/masquerading issues can be resolved with a little elbow grease.


    ---
    man sig
  • The massive bandwidth requirements come from his example: Napster. Would you like to volunteer to pay for the bandwidth of hundreds of thousands of users downloading MP3s at ~700Kbps? :) Thought not.

    Dave

    Barclay family motto:
    Aut agere aut mori.
    (Either action or death.)
  • by crt ( 44106 ) on Friday November 24, 2000 @06:26PM (#602621)
    Coming from the gaming industry (which did P2P LONG before Napster), I can safely say that proxies and NATs are the bane of a network programmer's existence. Oftentimes it's difficult or impossible to detect them, and most of the time, once detected, there's nothing you can do about them.

    NAT is certainly an improvement over application-specific proxies (like HTTP) - since you can usually make arbitrary outgoing connections, but the inability to allow incoming connections makes peer-to-peer gaming difficult or impossible.

    However, there is a solution in the works, it's called Realm-Specific IP, here's the IETF working group that's working on it: http://www.ietf.org/html.charters/nat-charter.html

    Basically it allows a client behind a NAT to reserve a port on the NAT and forward all traffic from that port to the client. So different clients can open up different listening ports on the NAT, and it will forward them incoming connections. Since a NAT box has a good 65k ports to play with, you should easily be able to support several thousand clients on a NAT'd IP with virtually no loss in functionality - clients can make any outgoing connection they want, and can accept incoming connections after binding a port on the NAT.

    I pray every day that this protocol will get finalized quickly and be implemented in all NAT products. Even better if it could be implemented in the client OSs at a low-level - so that when you do a bind() on a client, it automatically makes an RSIP request to your NAT to bind the port there as well. That way client applications can work transparently without having to add special code (like you do to support stuff like SOCKS) - although I expect there will be Winsock wrappers on Windows to support RSIP like there are SOCKS wrappers.
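    The port-reservation idea the parent describes can be modelled with a toy port table. This is only a sketch of the bookkeeping, not the actual RSIP negotiation protocol (which is defined in the IETF drafts, not here); the class and method names are invented:

```python
class PortLeaseTable:
    """Toy sketch of the RSIP idea: inside clients lease external
    ports on the NAT, which then forwards inbound connections on
    those ports back to them.  Wire format, lease lifetimes, and
    error handling are not modelled."""

    def __init__(self, low=49152, high=65535):
        self.free = list(range(low, high + 1))
        self.leases = {}   # external port -> (inside ip, inside port)

    def bind(self, inside_ip, inside_port):
        """Client asks the NAT to reserve an external port -- the
        bind() relayed to the NAT box that the parent wishes for."""
        port = self.free.pop(0)
        self.leases[port] = (inside_ip, inside_port)
        return port

    def route_inbound(self, external_port):
        """Inbound SYN arrives: forward it to the lease holder, or
        return None (drop) if nobody bound that port."""
        return self.leases.get(external_port)
```

    With ~16k ephemeral ports in the default range, many clients behind one NAT'd IP can each hold a listening port, which is the "several thousand clients" figure the parent mentions.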
  • Once the connection is opened, the IP information in the internal table will be modified on both clients to the IP address/port of the NAT machine of the other client

    This would be fine, except that it would require write access to kernel data structures which map the TCP connection information (local IP/Port and remote IP/Port).

    Needless to say, this is not an option. :(

  • Most all NAT/masquerading issues can be resolved with a little elbow grease.

    Dude, you're just brilliant... Now I can fix this for my entire office (remember not just Napster is P2P or requires inbound connections) - gee a little elbow grease can get all 150 people in the firewall and with static ports and IPs... wheeee!

    Next time don't give the first answer that comes to your head as if it were an expert answer. The fact of the matter is that your solution is an old well known hack, not a solution.

    Now a link to a masq module similar to masq_icq or masq_ftp would have been a very cool solution and wouldn't have gotten a shitty reply like this one.

  • I've used napster, imesh, gnutella etc. on my masq network without any extra modules and it's always worked fine (well as fine as it can work on dialup).
  • Here is a link to the framework description - anyone interested in this topic should DEFINITELY read this - as it addresses almost all of the problems of NATs in a very elegant fashion that can be implemented at the IP stack layer, transparent to applications.
    http://www.ietf.org/internet-drafts/draft-ietf-nat-rsip-framework-05.txt [ietf.org]
  • by Smeg}{ead ( 71770 ) on Friday November 24, 2000 @06:40PM (#602626)
    Maybe I'm falling for a troll here but...

    I'm afraid that you are in danger of losing your job -- unless of course your job is to know how to manage networks but not necessarily to know how they work.

    The author of the article knows exactly what he is talking about. The two modes he mentions would work as follows:

    Brokering - Napster does this. It allows two independent peers to "find" each other and then establish a direct connection, much like DCC in IRC. This follows the definition of a "broker" very well.

    Relaying - basically like proxying, except that it allows two "clients" to communicate with each other. Both clients connect to the same server, which establishes a "virtual" connection between them by storing and forwarding the data between them, sort of like a bridge. The reason why this would require huge amounts of bandwidth is that the server would have to be able to handle the traffic for both ends of all connections that it relayed in this way. Seems pretty obvious right?

    The other technique that the author hints at is "connection splicing". This is a mechanism whereby an intermediary basically "joins" two TCP connections together. There are a number of difficulties in doing this -- mostly to do with the unpredictability of TCP sequence numbers. It probably would not work in this scenario. You really have to be in the middle to rewrite all packets that go between the two parties - e.g. as a bridge. It's pretty much impossible to switch the endpoint IP addresses of a TCP connection midstream.

    Like I say, maybe you're just a troll, but I would say that the main lesson here is don't go shooting your mouth off unless you are pretty sure that you know what you're talking about.
  • Here [linksys.com] ya go.
  • by PTrumpet ( 70245 ) on Friday November 24, 2000 @08:01PM (#602628) Homepage
    It's generally recognized that NATs are a hack to get around the failings of the current IPv4 network. Even though I wrote one of the first NATs before they became popular, it is widely agreed within the IETF that NATs present serious problems. In addition to the inability to establish peer-to-peer TCP connections - the primary reason being that you can't determine both ends' TCP ports until after the connection is established - there are also significant problems for network security, mainly because you can't directly tie the end user's IP address to the security protocol. Also, IPsec won't work through NATs particularly well, because the ESP protocol doesn't typically work through NATs. And I suspect that MTU discovery may not fully work through NATs if they haven't been correctly implemented - i.e. you also need to translate the ICMP messages related to the NAT - often these might not pass through.

    This is one of the dominant reasons why IPv6 is needed. While there are many reasons that NAT is useful, the dominant one is the lack of IP addresses. IPv6 certainly deals handsomely with that issue.

    Another issue I have with NATs is that from an ISP point of view (we run one also) it is quite difficult to trace a rogue user that is on the inside of a NAT because usually a NAT will hide the identity of the customer, and you have to resort to other means to determine the identity of a user that may be launching an attack. This may be a good thing for some, but generally it makes life more difficult for the sysadmin.

    There are other failings with NATs - for that reason they are generally frowned upon.

    Time to roll out IPv6.
  • "Windows 2000: based on NT technology" ???

    based on new technology technology ?



    Zetetic
    Seeking; proceeding by inquiry.

    Elench
    A specious but fallacious argument; a sophism.
  • I don't see why he has a problem doing P2P through a firewall.

    If your P2P client software uses a fixed port when behaving as a client, then there is really no problem at all with virtually any firewall.

    In that case, if you want to allow it through, just do it. If you don't, don't.

    Most decent NAT devices allow you to statically forward a port to an internal host, or even a range of ports, or everything (not advisable).

    If your device refuses inbound connections it just means it's configured that way. So reconfigure the device, or if it can't do it get a better device, or find a more firewall friendly protocol.

    If you don't own the firewall and you want to get through it, talk to the admin. If you're the admin, then there's a chance that problem is between keyboard and chair.

    The entire column seems to be much ado about nothing. Just scratching a nonexistent itch. Or Jon Udell needed to fill some pages so that he could pay for the turkey.

    Cheerio,
    Link.
  • If your P2P client software uses a fixed port when behaving as a client, then there is really no problem at all with virtually any firewall.
    Oops I meant behaving as server.

    Link.

  • by Anonymous Coward
    >Basically it allows a client behind a NAT to reserve a port on the NAT and forward all traffic from that port to the client. So different clients can open up different listening ports on the
    >NAT, and it will forward them incoming connections. Since a NAT box has a good 65k ports to play with, you should easily be able to support several thousand clients on a NAT'd IP with
    >virtually no loss in functionality - clients can make any outgoing connection they want, and can accept incoming connections after binding a port on the NAT.

    Where have you been? We've been doing this in the *BSDs for years now.

    ipnat takes care of this. Read the man page. Here I'll even make it easy for you:

    Redirection rules
    rdr tells the NAT how to redirect incoming packets. It is useful if one wishes to redirect a connection through a proxy, or to another box on the private network. The format of this directive is:

    rdr ifname external/mask port service -> internal port service protocol

    This setup is best described by an example of an actual entry:

    rdr xl0 0.0.0.0/0 port 25 -> 204.213.176.10 port smtp

    This redirects all smtp packets received on xl0 to 204.213.176.10, port 25. A netmask is not needed on the internal address; it is always 32. The external and internal fields, similar to the map directive, may be actual addresses, hostnames, or interfaces. Likewise, the service field may be the name of a service, or a port number. The protocol of the service may be selected by appending tcp, udp, tcp/udp, or tcpudp (the last two have the same effect) to the end of the line. TCP is the default.
  • Someone should copy the way Apache uses HTTP/1.1 to host multiple virtual domains for use in other applications.
    Also reminds me of the way Kali works: wrap IPX packets in TCP/UDP packets. You tell the client (or server) what you want - at the application level, not the transport level.
    Currently, the only way to do it is with port mapping/routing. But maybe someone will come up with some cool gateway-layer type program/protocol. The gateway-layer program would route IP based on the content, and route internally on some realtime definition list.

    Example.
    Lets say 192.168.1.1 is our NAT/Application gateway router.
    We load our P2P program; it then sends notification to 192.168.1.1 with the content it wants to accept on a port, and some identification key. I think you could form TCP/UDP packets with this information. Then after the first redirect, it's all normal NAT traffic.

    There must be dozens of ways to accomplish this.
    -Brook

    I was walking down the street wearing glasses when the prescription ran out.
    - Steven Wright

  • If you hang your DSL or cable modem off of a linux box and do your NAT there then solutions to these little conundrums become fairly trivial. To accept external connections on a particular port of a machine behind the NAT box, all you need is a generic proxy, like the TIS Firewall Toolkit's plug-gw. And you get packet filtering capability thrown in as part of the bargain.

    Naturally, this solution involves tradeoffs. Instead of a sleek little fanless box nestled somewhere in the vicinity of one of your computers, you're stuck with a hulking rattle-trap of a 486 beast (in a butt ugly tower case, most likely). But on the bright side, you get to coo over your homemade router's uptime, and you can set things up so you can SSH into your home network from the office. Works for me.

  • While it's a good idea, it's a bit reminiscent of the portmapper, don't you think? A daemon listens on a port, and directs incoming requests to ports that are dynamically allocated and reserved. The portmapper keeps track of who is listening where, and what to do with connections. Old idea, new implementation (sits on a firewall instead of a machine, and it forwards to machines and ports instead of just ports on the same machine). Let's hear it for one of the programmer's best allies: laziness. Don't go all crazy when there's already a good idea to solve something.
  • As an admin, I implore you: Don't piggyback on well-known ports. I've had to create a number of "exceptions" for lame applications that use HTTP, DNS and other well-known service ports because they're assuming that they'll be open. These apps choke because we use transparent proxying and their non-http traffic on port 80 dies. The stuff trying to use 53 dies because we don't allow end-user DNS queries to the outside.

    I'd also implore you to add some kind of flexibility to the system to allow switching of port numbers or the use of a range of numbers. We haven't seen it with port numbers yet, but we've gotten into pissing matches with application vendors who "assume" that they're the only one using 10.x.x.x or 192.168.x.x and what's the problem with us default-routing 10.0.0.0/24 to their network? It usually ends up in some weird dual-NAT situation that makes connections to their networks a nightmare. The same logic holds true for ports -- unless you've got an RFC or some IANA reservation on your service/port combo, make it flexible -- somebody else may be using it!

    And port/service blocking isn't necessarily BOFH in action; a lot of times you have "past experiences" with lusers. Besides, good security policy means you get to pass the traffic we pay you to pass, and nothing else. I know this rankles the geeks, but 99.95% of the working stiffs don't need to pass telnet data or the like.
  • ipchains can be configured to pass through napster requests with something like:
    ipmasqadm portfw -a -P tcp -L xx.xx.39.225 6699 -R 192.168.20.6 6699
  • But should I "block WAN requests"? Or would that keep me from uploading on napster/gnutella?
  • There is nothing wrong with your solution, except people. People and money, to be exact. To implement IPv6 wholly, you have to turn the Internet off, rewrite quite a few apps for IPv6, because they are not cross-compatible, and turn the switch on. That is not acceptable.

    The second way is to convince enough people to run IPv6, such as core providers and major providers like PSINet, Bell, AT&T, Sprint, major institutions, corporations and fellow Unix hackers. Make transparent API interfaces for the applications so that they will use the lower 4 bytes of the 16-byte address space, and away you go. Provide network analysis utilities for major platforms and there you go. But for all that you need to get the attention of lazy, parsimonious, incompetent people (take Murphy's law into consideration), and then you've got yourself the mess of replacing billions in hardware, and a dozenfold of that in installation and network adjustment costs...

    Now what's cheaper: to handle IP handout in a fascist fashion, or to rework the Internet? I think the first is considerably easier... It's like converting from pulse to tone dial: since only very few companies were handling that, it was easy to control the conversion process. Now you've got literally millions of people who are sysadmins with more knowledge of network engineering than of writing shell scripts, or network engineers who know nothing about routing and how ICMP works. You have to orchestrate the whole shebang together.
  • Network Address Translation (NAT) is Evil. It violates one of the fundamental architectural assumptions of IP: everyone gets a globally unique address.

    Without that, peer-to-peer networking goes right out the window; there has to be a "mediator" (which a security person would call a "man in the middle attack") to fiddle with your packets. And guess what? IP security (encrypted packets) goes right out the window, too. No way to keep your traffic away from Carnivore's sucking sniffer...

    There's only one way out of this: insist on real, routable IP address space at all times.

  • I think the author's point is that behind a NAT device, there is no such thing as a "firewall-friendly protocol." The NAT device needs to be manually configured. I think the bigger implication is... what if there's a p2p protocol and no one is able/capable/willing to share what they've got?
  • by aozilla ( 133143 )
    Most firewalls will allow UDP replies for a certain period of time after an initial UDP request is sent. In this way, one side sends a UDP request (which gets thrown out on the other side). The other side then replies (presumably a central server is needed to note the initial request). That reply is received, since it is a UDP reply. The two sides may now have a UDP conversation. This isn't completely decentralized, but it only requires a couple of packets from an agreed-upon server (you could even have this simply be an agreed-upon peer with a less restrictive firewall).
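    The trick this comment describes is what is now usually called UDP hole punching: the outbound datagram opens the NAT/firewall mapping, and the peer's packet then looks like a reply. A minimal sketch, run here over loopback where there is no NAT, so it only demonstrates the order of operations; the rendezvous server that exchanges the peers' addresses is not shown:

```python
import socket

def make_peer():
    """Create a UDP socket; its bound address stands in for the
    public address/port a rendezvous server would report."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", 0))
    s.settimeout(2.0)
    return s, s.getsockname()

def punch(sock, peer_addr):
    """Fire an outbound datagram at the peer.  Behind a typical NAT
    this is the packet that opens the mapping; the peer's own packet,
    arriving with a matching source, is then admitted as a 'reply'."""
    sock.sendto(b"punch", peer_addr)
```

    As the sibling comment notes, this fails on NATs that also rewrite or restrict UDP source ports, so it is not a universal fix.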
  • I'm sorry, I'd rather not talk about it.
  • by Sheepdot ( 211478 ) on Friday November 24, 2000 @05:14PM (#602644) Journal
    While I had a hard time understanding NAT, I'm beginning to learn that it is extremely versatile, and a lot of the time people, like the author, don't understand that it can be configured in a lot of ways.

    For our home connection, I set up a port for each of my roommates' 4 computers and we use Napster through those.

    What is even more interesting is that NAT will soon configure itself unnoticed. There has been work done to improve NAT translation so that if a port is opened on an inside IP, a client can connect to the router and request that NAT redirect to that port.

    I don't think IP masquerading is going to do anything but get better over the next few years, and I trust it to be the best security with the most configurability in the future.

    I don't believe the author of the article has realized that even with the Cisco 675, used for a large number of DSL connections, changes have been made to NAT such as one-time configuration of addresses.

    What this new option allowed over previous firmware versions was setting an inside NAT port and address and binding it to the router's IP. Before this, users would have to log in every time the router's IP changed and manually update the NAT translation.

    NAT is only going to get better folks. Don't worry about peer-to-peer sharing dying any time soon because of it.
  • I haven't read all the responses here yet (who ever does??), but I think it seems obvious that we need a new protocol definition: TCP over UDP. Any TCP client would be able to use it, and the TCP session would be established over UDP packets. TCP as a protocol can already handle this, since it is built to work over IP, another unreliable transport. UDP becomes useful, however, for doing the NAT bypassing that is mentioned in the article. The protocol back-end wouldn't need to be very large, and could be added to most Linux/BSD machines as easily as IPX over TCP is.

    Just my $0.02 worth ...

  • Napster isn't peer-to-peer at all. Gnutella is peer-to-peer. Napster is client to server. Napster depends on a server.
  • Problem is that NAT changes the address and port as packets are sent out. What you're talking about, TCP's "simultaneous open" behavior (when SYN packets cross on the wire), only applies if the addresses and ports on both sides are identical. But this can't happen with NAT.

    Example: Two peers use a rendezvous server of some kind to agree on ports and addresses. Peer 1 uses address A1 and port P1; Peer 2 uses address A2 and port P2.

    (A1,P1) -> (A2,P2)
    (hits NAT box for Peer 1; port P1 translated to P3)
    (A1,P3) -> (A2,P2)

    (A2,P2) -> (A1,P1)
    (hits NAT box for Peer 2; port P2 translated to P4)
    (A2,P4) -> (A1,P1)

    By the time these packets actually get on the Internet, they aren't using the same ports anymore, so it's not a simultaneous open.
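    A toy illustration of the mismatch, using the hypothetical addresses and ports from the example above (A1/A2 and P1-P4 are made up, and a real NAT chooses translated ports unpredictably):

```python
# Hypothetical values from the example: P3 and P4 are the ports the two
# NAT boxes substitute for the agreed-upon P1 and P2.
A1, A2 = "10.0.0.1", "10.0.0.2"
P1, P2, P3, P4 = 5000, 6000, 40001, 40002

# What each peer *thinks* it is sending (the agreed-upon 4-tuples):
syn_from_1 = (A1, P1, A2, P2)
syn_from_2 = (A2, P2, A1, P1)

# What actually appears on the wire after each NAT rewrites the source port:
wire_syn_1 = (A1, P3, A2, P2)
wire_syn_2 = (A2, P4, A1, P1)

# Simultaneous open only succeeds when the two SYNs describe the same
# connection, i.e. one tuple is the mirror image of the other.
def mirrors(t1, t2):
    return t1 == (t2[2], t2[3], t2[0], t2[1])

print(mirrors(syn_from_1, syn_from_2))   # agreed tuples do mirror: True
print(mirrors(wire_syn_1, wire_syn_2))   # translated ones do not: False
```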

  • by maccroz ( 126399 ) on Friday November 24, 2000 @05:08PM (#602648)
    Well, for each network of computers behind a firewall sharing an IP address, there can be one computer that has access to incoming requests. Linksys refers to this as the DMZ (Demilitarized Zone). This one connection can be the representative for the entire network.

    I know my apartment is behind a Linksys router, and we have 4 connections, however we have one computer that is the dedicated incoming access server. This doesn't really help the other computers on the network, but it is a partial solution to the problem.
  • The NAT would probably prevent you from sending the bastardized TCP request, and even if it didn't, the reply would go to the wrong place.
  • by Clownburner ( 257523 ) on Friday November 24, 2000 @05:09PM (#602650)
    ..to peer-to-peer connection requests should be intelligence in the firewalls or routers. Once they're aware enough of the application to recognize a "requested" inbound connection that doesn't exactly match an "originated" connection, the problem goes away. Many firewalls already do this with outbound FTP access to eliminate the need for PASV transfers.

    If a suitable peer-to-peer protocol were well-documented (read RFC) and widely implemented, it wouldn't take long before the vendors started picking up support for it. Problem solved.
    _________________________________
  • The article includes such gems as:

    Relay all the traffic between us, rather than just brokering the connection. (I am puzzled. What does that mean?)
    (This next bit is Quality bollocks)
    Broker the connection in some way that magically splices together two client-initiated TCP sessions.

    What the flying fuck was that sentence about?

    And now:
    I was sure Napster didn't do relaying, that would require massive bandwidth
    Please! Help me! I am *paid* to know about networking. What is this "relaying" thing that 'requires massive bandwidth'? Am I going to lose my job?

    Sorry, I am too scared to read further tosh like this. Either the author is a global telecoms guru, or he is a know-nothing fuckwit. If my diagnosis is wrong, then I have just lost my job.
    apologies
    I just read a bit further into the story, where some people who know (at least) the basics politely tell him how it works. Later still in the article, he proves he hasn't learnt a fucking thing.

    I like journalists. But I'd need mustard before I could eat a whole one.

  • by Kierthos ( 225954 ) on Friday November 24, 2000 @05:13PM (#602652) Homepage
    I've always considered it more of a router. It's not like Napster stores anything on its own system. Rather, they allow the appearance of PTP connections through a pool of users.

    Actually, I could consider it a completely connected graph, where every user is a point on the graph, connected by lines to every other user. It's just when you refuse to 'share', it's a directed graph. And considering that I'm not trying to traverse the graph completely, just search the data at each point, it still doesn't seem like a server.

    Just my 2 shekels.

    Kierthos
  • by Anonymous Coward

    BTW: As someone writing a P2P app who can deal with the TCP stuff but hasn't done system/network administration in years, I've got a tangentially related question. What ports do you block? Or, more specifically, what port should I use?

    Since I will have relays, I don't care about what ports are blocked for incoming packets. I just care about the destination port on the outgoing traffic.

    I suspect that there are some ports, like 31337, that you are suspicious of. I also imagine that some places block 80 to force everything to go through proxies. And some fascist places probably block 23 and the like.

    So, what port would you write your program to use? Can I avoid piggybacking on http? Thanks!

  • by Anonymous Coward
    I don't use Linux, but I used to. And I remember the "ip_masq_irc.o" module that allowed DCC connections to work just fine. All you need is an ip_masq_napster module, and to modify the Napster protocol to send some information similar to DCC (DCC tells the other client where to connect, and the kernel NAT routines recognise this). There is barely any overhead if you do it right in the kernel. When the connection is initially set up, if it matches the Napster port then set up a function pointer to the translation detector loaded in the module. If it doesn't match the port then the function will never be called. Do it right and you only have 1 extra test for when the connection is initiated. Function pointers -- your best friend.
  • It's all rather peculiar. I've always wondered why, back in the good 'ol days, we could play Q3A NAT'd behind our Cisco 1605, run nearly any other kind of P2P application, yet couldn't host our beloved Quake server from inside the NAT...why one way and not the other? Is the Cisco's deployment of NAT any different (or that much more special) than the next?
  • Broker the connection in some way that magically splices together two client-initiated TCP sessions.

    What the flying fuck was that sentence about?


    Well, it sounds like they were trying to say that many firewalls these days block all incoming traffic except to designated server hosts. The rest can still usually initiate outbound connections (but even this is starting to change as default deny inbound AND outbound starts becoming the norm). They'd connect to this magical relay host which would act as a "broker". i.e. In non-idiot speak, it sounds like they're talking about a client-server proxy which just passes traffic between two connecting hosts. Big deal.


    And now:
    I was sure Napster didn't do relaying, that would require massive bandwidth
    Please! Help me! I am *paid* to know about networking. What is this "relaying" thing that 'requires massive bandwidth'? Am I going to lose my job?


    You're paid to know about networking and don't understand relaying? How about proxies? Client A connects to Server A which in turn connects to Server B. Server A passes traffic between Client A and Server B. Again, what it sounds like they're talking about is instead, Client A and Client B would connect to Server A and Server A would pass traffic between Client A's socket and Client B's socket. Actually that'd be a pretty simple daemon to write. Listen for two connections and pass any incoming traffic from one socket to the other and handle flow control.
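    As a sketch of how simple such a splicing daemon could be, here is a minimal Python version: a broker accepts two inbound connections and pumps bytes between them, so both peers only ever make outgoing connections. Port selection and error handling are simplified; this is an illustration, not production code.

```python
import socket
import threading

def pump(src, dst):
    """Copy bytes one way until the sending side closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def serve_once(srv):
    """Accept two clients and splice their byte streams together."""
    c1, _ = srv.accept()
    c2, _ = srv.accept()
    threading.Thread(target=pump, args=(c1, c2), daemon=True).start()
    threading.Thread(target=pump, args=(c2, c1), daemon=True).start()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(2)
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

# Both "peers" make only *outgoing* connections to the broker, so neither
# ever has to accept an inbound connection -- the firewalled case above.
p1 = socket.create_connection(srv.getsockname())
p2 = socket.create_connection(srv.getsockname())
p1.sendall(b"hi from peer 1")
print(p2.recv(1024).decode())   # the relay passes it through
```

    The catch, as the parent post says, is that every byte of every transfer crosses the broker, which is exactly the "massive bandwidth" objection.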
  • Actually the "DMZ" function of the Linksys router is really just a "disable" function. The PC that is designated as being in the DMZ is afforded no protection from the router, which is as good as hooking it straight to the Internet.

    From the website: DMZ Hosting allows one user to be exposed to the Internet, bypassing the Router's firewall security while the rest of the network remains protected.

  • by dizee ( 143832 ) on Friday November 24, 2000 @06:52PM (#602658) Homepage
    Is that what NT stands for?

    I always thought it was naked turkies. Go figure.

    "I would kill everyone in this room for a drop of sweet beer."
    Spoofing NATted boxes into talking UDP is much easier than TCP because of the lack of sequence numbers and other byproducts of guaranteed delivery. There's a great page about it at http://www.alumni.caltech.edu/~dank/peer-nat.html [caltech.edu].
    --
  • NAT is a creative hack to a lack of addresses. What's the necessary solution? IPv6? Time and end-user complaints will force the issue.
    It's also a creative hack for ISPs that want to charge insane rates for the "privilege" of having more than one computer on the internet at the same time. IPv6 won't change that, unless those same ISPs want to charge you for the privilege of using IPv6. Until IPv6 is available in one of Microsoft's end-user class operating systems, it's unlikely we'll see extensive deployment happen. ISPs are unlikely to implement it until a large portion of their users start asking for it, and their users are unlikely to ask for it until it's completely available to them.
  • You can't host a server behind any NAT (unless port forwarding is configured on it) - that's just a limitation of NAT.

    Outgoing connections work because the NAT knows where the two endpoints are - it knows the origin because it came from you, and it knows the destination because it's in the header.

    Incoming connections don't work because the destination is the NAT itself - it doesn't know how to forward the packet beyond itself (unless manually configured).
  • Any protocol that uses UDP almost always uses sequences or packet identifiers. While you still may get a packet through the NAT, the application processing the UDP packets will recognize the invalid packet and ignore it.

    Also, most NAT implementations do not keep track of TCP sequence numbers; those are handled by the destination host's TCP stack.

    Sequence numbers and identifiers are overhead that applications must add on top of UDP, but they are provided inherently by TCP.

    See TFTP, BOOTP, etc, for example.

    It is trivial to make UDP as spoof resistant as TCP. While it may not stop at the NAT, it still will not affect the application.
  • This wouldn't work for a TCP connection. TCP connections must be established first: even if you established a connection with the central server, you must then establish a direct connection with the other peer, or else all traffic must go through the central server. The problem is the way TCP handles established connections: the central server cannot just hand over the connection, as the other client of course uses a different IP address. That IP address would not have an established connection in the NAT table and thus cannot be mapped to an internal machine.

    If we take NAT for our example (and I feel this is a good idea, as that's what the article is about): PC 1 sends a SYN to the central server, the server replies with its settings (window size etc.) and the connection is established. NAT is then able to provide translation. This works perfectly, but all traffic goes through the central server, and we don't want that. NAT uses IP/port to check for established sessions: when a packet arrives at the box performing NAT, it looks up whether it already has a connection to the machine sending the request and what the mappings are. If an attempt to establish a new connection is received by the NAT box, it checks for hardcoded internal mappings on the attempted port; if that fails, it assumes the connection must be for a service it runs itself, and if no service is running, establishment of the connection fails.

    The simple problem is:
    Say both our clients are NATed: how do we establish a connection if neither client can accept one?

    I can see a solution, but it's rather insecure and hence not practical without some serious thought. However, if we just take it that NAT is being used to allow connection sharing and not security, it is a possibility. What's needed is a dynamic NAT mapping protocol, which would work by letting NAT clients instruct the server performing NAT to open a mapping for a port to an internal machine before a connection is attempted.

    In the real world:
    PC1 contacts the Napster server and finds a file that it wants to download. The central server arbitrates between the two clients to see if either is NATed. If both are, it would query each client in turn to see if it had access to our (not yet implemented) mapping protocol. If so, it would ask the first client to request a free port from the NAT box; the NAT box would map a port to the internal client and return this information. The first client would open a listening server on the mapped port and send this info to the Napster server. The Napster server passes the IP of client 1's NAT box plus the listening port info to client 2, and client 2 opens an outbound connection (which it's allowed to do) to the IP/port it was told via the Napster server. The session can be established and the transfer begins.

    Of course this involves doing a lot of things:

    1. Implement our new protocol
    2. Add this functionality to NAT implementations, both clients and servers.

    Apart from that there would be some serious security issues to think about.

    Ganja.

  • It's a real problem. Some months ago, I was looking into network architectures for a multiplayer game that needed voice chat. I wanted the client machines to handle all the chat traffic on a client-to-client basis, both to get the load off the servers and to provide better privacy. But making that work for clients behind a NAT box or firewall is hard. Somebody has to accept connections from the outside.

    If Microsoft didn't ship their systems with all those unwanted services turned on, this wouldn't be a problem.

  • Nobody mentioned "Push" yet? Explain.
  • The way some of the more advanced NAT routers work is that they watch for outgoing connections on certain ports, and then forward incoming return traffic on a series of ports defined for that application. e.g. an outgoing connection to a server on port 8000 means return connections are accepted on ports 9100-9200.
  • What the problem is (or as it appears to me) is that people are getting too into security. Honestly, my roomie says it best: "You want security for your system? Don't turn it on, then it's secure. And if you must turn it on, for god's sake don't plug it into anything dumb like the internet!" And to me this is what it boils down to.

    For months I got paranoid about the net (a first virus will do that to a guy). I was running a two GB HDD with a base Linux install, and that was the me that hit the net. From there everything was put through a dumb scan for viruses, and if it was clean then it went to a real HDD. But then I realized... WHAT'S THE BLOODY POINT!! If you live your life (or your system lives its life) doing nothing, you will become nothing. You will be no richer for having a safe meaningless existence, you'll have no friends if you refuse to talk to people, and you will not find happiness.

    I'm not saying you should put out business cards with your IP on them. Just don't sit at home behind your firewall, because then you aren't experiencing the net so much as you are experiencing the firewall. And with everything... MODERATION!!
    Kleedrac
  • by Phexro ( 9814 ) on Friday November 24, 2000 @09:05PM (#602668)
    ...distribution of BSD

    are you saying that it should be "distribution of BS"?
    --
  • I am not sure if this is relevant to peer-to-peer networking and Napster, but it is related to the issue of traversing firewalls transparently, so I thought I would post this. My apologies in advance if somebody has already pointed this out before.

    I did a study last summer about techniques which could be used for doing authenticated firewall traversal, since currently, setting things up so that you can work from home is difficult and time-consuming. There were quite a few stop-gap solutions, but none that were not application-dependent or that did not require major changes in the kernel (something that you don't always have access to). However, SOCKS by NEC (http://www.socks.nec.com) is a pretty good compromise. Both IE and Netscape already have support for SOCKS built into them, and SOCKSified clients for telnet, ftp, ping and other common applications are freely available. The package is fairly easy to install and get running (I did have some problems with DNS, though), and it even works fine with applications like NetMeeting and other H.323-based stuff (which is pretty good, because they use dynamic ports). I think ICQ has support for SOCKS too, so for the average home user, SOCKS is a fairly good compromise. Last I checked, SOCKS actually had a couple of RFCs for supporting multicast as well, a big plus over other solutions. And NEC has separate implementations for Windows and *n?x OSes.

    I think it is a pretty good solution for those who would/could not be bothered to go through the hassles of NAT (which in my opinion is definitely a cool idea, by the way, but one that will take a long, long time before it gets "mainstream" acceptance). Just my two cents' worth on the topic, and by the way, I DON'T work for NEC :).
  • if you had bothered to read the comment, you would realize how incredibly fucking wrong you really are.

    the whole point of RSIP is that you don't have to (re)configure anything. an app on a system behind the nat firewall does a bind()/accept() and that system notifies the NAT box that connections to those ports should be forwarded to the client that did the bind().

    i can set up my cisco 675 to forward inbound connections. linux has also had this for ages. so has nt, for that matter. so don't go trumpeting how "foo os" solves everybody's problems until you at least develop the patience to understand the problem.
    --
  • First, a P2P application that needs to connect two NATed peers can just use a third, non-NATed peer as a relay. This is what Mojo Nation does, and it works and scales fine.

    Second, it seems to me that the popularity of NAT is related to the infancy DSL is in right now; once serious competition between DSL providers sets in, I predict we'll see DSL modems that can easily hook several machines with real IP addresses to the net. We might have to use a better PPPoE, or wait for IPv6 to give us the address space, but then again we might not (if we can put a man on the moon...)

    If I may rant for a moment, complaints about firewalled (non-NATed) machines just piss me off. If your sysadmin won't put a hole in the firewall for your favorite P2P application, it's because said sysadmin doesn't want any machine in the known universe to be able to communicate with your machine and possibly crack any insecure software you may be running -- like P2P software. That's the point; few crackers run servers and wait for crackable clients to connect, and so firewalled machines are reasonably safe from outside attack. P2P software that allows by-request outgoing connections opens a large and invisible hole in any firewall; worse, you don't need to do any detectable scanning to find out that machine X is running something crackable. NAT is an acceptable excuse for designing this kind of insecurity into a protocol, I guess, but I would raise hell if I were an admin and caught one of my users sneaking around my firewall.

    Of course, if I were an admin, I'd forge email to users from their friends containing an attachment called NOEMAILFORYOU.EXE; I may be in the minority here. (Cut to big white letters on a black screen, alternating: "What did I tell you not to do?" "No email for you." "What did I tell you not to do?" "No email for you...")

    • While there are many reasons that NAT is useful, the dominant one is the lack of IP addresses. IPv6 certainly deals handsomely with that issue

    Kinda strange to describe throwing a huge address space at a problem as a "handsome" solution :)

  • This was covered recently (relatively speaking) in a very good article regarding the transparency of the internet, and how it's been shot.

    The general concept is that, originally, IP addresses were issued based on network size, *NOT* based on the size of the pipes and who they were connected to. It was presumed there was space for everyone. Of course, today's restrictions have changed things, and are due to a lack of address space.

    Originally, it was safe to assume that any app would work so long as it adhered to using TCP/IP. Period. Now you have to take into consideration that the different sides of a conversation may be unable to initiate a connection....

    The one really bad thing, I think, is when protocols include their own IP information in packet payloads. This defeats the purpose of layering, and causes confusion. It's why Gnutella and other software asks you for your 'visible' IP address.

    At the heart of the issue is how the internet has changed; it used to be that IP address assignment and actual links were not tied together. Heck, it used to be you could have a large block allocated even though you were only potentially going to hook up to the network at some undetermined point in the future; the point of having unique addresses was merely to make it possible for internetworking to occur.

    This has been lost now. Address space is at a premium, and it's being controlled more and more by the big players. You can't even GET your own address space anymore unless you have fat pipes to multiple providers.

    Now, granted, there is a finite amount of space, and it IS fair to put it where it's best used... but it must be temporary. The real freedom of the net to grow without inhibition comes from having unique address space freely available. (Of course, convincing others to route your traffic was *always* a separate issue, but at least everyone agreed on who had what.)
  • Yeah, and anyhow, he's assuming half an ISP's customers use Napster, and that fully 100% of those use it so much they would go to the trouble of switching ISPs... yeah, that's gonna happen. Napster's cool, but it's not a killer application, and really, it's not very good.
  • Yeah, that was what I was saying to myself just this minute. I don't get the fucking problem, P2P not working through firewalls is NOT a bug, it's a feature.

    It sounds like:
    Duh, good firewalling/NAT allows only what the admin wants allowed.

    (Please, I know that NAT!=Firewall, for me NAT is more or less a kind of "firewall" by accident).

    Please, people, P2P is NOT a cool kind of "I can trade mp3s/porn with the world about my companies wires".

    When the P2P people want to allow easy cooperation with (and not circumvention of) firewalls and stuff, they ought to design their protocols in the direction of cooperation. Not such a port-whoring thingy like, IIRC, ICQ does (yeah, port 23 works, let's use that for inbound traffic).
    For instance some kind of application proxy which can be dropped into the DMZ and configured properly to allow only wanted P2P traffic.

  • For a long time it stood for "Not Today."

  • >Relay all the traffic between us, rather than just brokering the connection.

    That 'relaying' would imply that all traffic between clients passes through the server. "Brokering" would refer to having the server simply instruct both sides as to how to communicate more directly.

    >Broker the connection in some way that magically splices together two client-initiated TCP sessions.

    No, that's not something that's done now, and I agree it sounds impossible, but what the author is implying is that we should look for a hack to allow a server to instruct two hosts that are only capable of initiating (outgoing) TCP connections to communicate directly anyway. I agree I can't see how it would be done, but I can't argue with the idea being good.

    >I was sure Napster didn't do relaying, that would require massive bandwidth
    Please! Help me! I am *paid* to know about networking. What is this "relaying" thing that 'requires massive bandwidth'? Am I going to lose my job?

    Perhaps. I'd probably fire you :) It's simple.
    Relaying is a common term in networking, especially on the internet, and in other forms of communication as well. You could look it up in the dictionary. What he is suggesting is that Napster relays all traffic between clients. I didn't think it did that (and still don't), but it's quite clear what he was talking about. If you get paid to know this stuff, maybe you should start broadening your horizons a bit.

    >Sorry, I am too scared to read further tosh like this. Either the author is a global telecoms guru, or he is a know-nothing fuckwit. If my diagnosis is wrong, then I have just lost my job.

    Get off the high horse. He's a guy who has a decent understanding of how things work, but not a really low-level understanding. His ideas are correct, his motives are also correct. You are splitting hairs. He's not claiming to be an expert. Read between the lines. Just because he uses the word 'TCP' doesn't mean he understands it down to each bit of the conversation.

  • by Anonymous Coward
    I can understand your bitching about corporate users wanting to have non-NAT access, but do you think complaints are unwarranted even when it's an ISP that insists on NAT, and there isn't viable competition to that ISP (as was the case with DSL in the UK a while back)?
    Napster isn't peer-to-peer at all. Gnutella is peer-to-peer. Napster is client to server. Napster depends on a server.

    Moron. Of course it's peer-to-peer. The server is just a directory. Files are transferred between clients, not through the server.

    --

  • TCP allows a connection to be established if both sides simultaneously send each other a SYN packet. This method requires a little NAT cooperation, but only a little. Here's how it could work:

    1. Side1 binds their TCP socket to a particular port.
    2. Side1 tries to connect to Broker on an agreed-upon port.
    3. Broker replies with an RST when it receives the expected SYN. Records source IP and source port.
    4. Side2 binds their TCP socket to a particular port.
    5. Side2 tries to connect to Broker on an agreed-upon port.
    6. Broker replies with an RST when it receives the expected SYN. Records source IP and source port.
    7. Broker informs Side1 of Side2's source port and source IP.
    8. Broker informs Side2 of Side1's source port and source IP.
    9. Side1 uses the same socket originally bound to connect to Side2's IP and port.
    10. Side2 uses the same socket originally bound to connect to Side1's IP and port.
    11. Voila! They connect via an exchange of simultaneous SYNs.

    This requires cooperation between the sources and their NATs. Specifically, it requires these three things from a NAT:

    1. The NAT should keep the same outgoing port for the same TCP port on a client for a period of time. This is very similar to how a NAT handles UDP.
    2. A NAT must not reply to SYNs it receives on a bound TCP port that don't originate from the connected-to IP. Normally it would reply with an RST.
    3. A NAT must change the IP it's expecting TCP packets from if the client sends out a new SYN to a different IP and port combo.

    There is one non-problem I expect people to bring up. There seems to be an apparent race condition in step 11. This isn't really a race condition, because of requirement 2 for NATs. Basically, the two sides can SYN each other all day, and it won't matter until both NATs have performed the step required by requirement 3, at which point it will appear to both sides as if the SYNs were simultaneous.

    It's a hack, but I think it'd work.
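    Step 11 is the exotic part, so here is a loopback-only sketch of it in Python: no broker and no NAT, just two sockets that bind fixed (arbitrarily chosen) ports and keep re-SYNing each other until the opens cross. A refused connect (an RST) simply triggers a retry, standing in for the crossing behavior described above:

```python
import random
import socket
import threading
import time

def simultaneous_connect(local_port, peer_port, out, idx):
    """Retry connect() until our SYN crosses the peer's (step 11)."""
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", local_port))
        try:
            s.connect(("127.0.0.1", peer_port))
            out[idx] = s
            return
        except OSError:
            # Our SYN arrived before the peer had sent one, so it was
            # answered with an RST; rebind and retry with some jitter.
            s.close()
            time.sleep(random.uniform(0, 0.02))

socks = [None, None]
t1 = threading.Thread(target=simultaneous_connect, args=(47001, 47002, socks, 0))
t2 = threading.Thread(target=simultaneous_connect, args=(47002, 47001, socks, 1))
t1.start(); t2.start()
t1.join(); t2.join()

# Both ends are now connected, yet neither side ever called listen().
socks[0].sendall(b"spliced")
print(socks[1].recv(1024).decode())
```

    Across real NATs the retries would only converge if the NATs behave as requirements 1-3 demand, which is exactly the cooperation the parent post is asking for.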

  • by Anonymous Coward
    ...get rid of transparent proxies! They violate the end-to-end, application/content agnostic design of IP. Ghod, nothing pisses me off more than an ISP that transparently proxies. One of these days a class action lawsuit is gonna bankrupt an ISP that's merely retained logs from their HTTP proxies. (That's my fantasy, anyways. The reality is likely to be more ugly - logs simply getting subpoenaed in civil torts.)
  • How does this guy figure that just because you are using NAT you can't accept incoming connections? I had DSL for about a year before switching to higher speed cable. I used a Cisco 675 DSL modem just like the author's. The DSL provider used NAT, so my computer was assigned a 10.x.x.x address via DHCP that was translated to a real-world address before it left their network.

    With that setup I was able to:

    1. Share files via Napster
    2. Run a web server
    3. Run an IRC server
    4. Run a DNS server
    and do pretty much anything I could with a regular connection with the exception of Samba.

    I'm not saying that there wasn't something preventing these services from working for him. I'm just saying that whatever it was, it wasn't NAT.

  • I know my apartment is behind a Linksys router, and we have 4 connections, however we have one computer that is the dedicated incoming access server. This doesn't really help the other computers on the network, but it is a partial solution to the problem.
    Well, it helps a bunch. The entire idea of NAT/masquerade firewalling is to deny all incoming connections to the firewall. So set up in the DMZ either Napster, or better yet a middleman to connect the two TCP streams (each of which is originated from the peers).

    It comes with the territory. Since the firewall presents a single IP to the world, there will be problems when two things want to act as a server at the same time. To make matters worse, the box inside the firewall has no idea what IP/port combo is seen by the outside.

    Since it's the NAT machine that is breaking the notion of IP<=>machine equivalence, it's up to those who want to run multiple servers behind a NAT firewall to be creative. We need to share some of the information that is normally hidden from the firewalled machine. With that in mind, here is the essence of an appropriate protocol for setting up on-the-fly "tunnels" through the firewall:

    1. Box asks firewall for a (semi)persistent peer-to-peer port it can use, rather like obtaining a DHCP lease (but probably with much shorter term for security reasons). Part of the request datagram is a pointer to an .rc with the rules to define how to manage this port.
    2. If box has permission from the firewall's owner to do this, firewall says "Fine. You can be UDP Port 2345 (on RealIP x.x.x.x, just in case firewall got this from an upstream DHCP server), and I won't use that port for anything else until __. Meanwhile, your daemon, abiding by your .rc, is listening".
    3. Box can now let p2p network know its current address.
    4. Outsider asks to connect to that port, and request is handled by daemon.
    5. If that request is authorized by the .rc, then daemon obtains a new port number and registers that TCP port for the exclusive use of the IP address of the UDP datagram.
    6. Daemon sends back UDP datagram to the port the guest box got from its firewall, informing it of the TCP port, and notifies parent application behind firewall that someone is calling.
    7. Guest then initiates a TCP connection, using the specific port number.
    8. Firewall allows this very specific connection to be established.
    9. When connection ends, box can update the .rc for its daemon, or tell firewall it's done with the UDP port altogether.
    I include the extra step of doing the UDP so that the firewall never opens a TCP port to the world, just to a single IP. If the p2p server can handle this negotiation, it can be skipped. From a security standpoint, I'd much prefer to just let the DMZ play middleman between the two machines, but if it's absolutely necessary to open a temporary hole, this would be the way to do it.
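    The lease handshake in steps 1-2 might look something like the sketch below, with the firewall daemon reduced to a single UDP exchange on one box. The message fields, the JSON encoding, and the granted port are all illustrative assumptions; no such standard protocol exists.

    ```python
    import json
    import socket

    # Toy version of steps 1-2 above: the inside box asks a hypothetical
    # firewall daemon for a peer-to-peer port lease over UDP. The "op" and
    # "udp_port" field names and the port 2345 are illustrative only.

    def lease_daemon(sock, grant_port=2345):
        """Answer one lease request on an already-bound UDP socket."""
        data, addr = sock.recvfrom(4096)
        request = json.loads(data.decode())
        if request.get("op") == "lease":
            reply = {"status": "ok", "udp_port": grant_port}
        else:
            reply = {"status": "denied"}
        sock.sendto(json.dumps(reply).encode(), addr)

    def request_lease(daemon_addr):
        """The firewalled box's side of the exchange (step 1)."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(json.dumps({"op": "lease"}).encode(), daemon_addr)
            reply, _ = s.recvfrom(4096)
        return json.loads(reply.decode())
    ```

    A real daemon would also consult the .rc rules, track the lease term, and bind the granted port so nothing else grabs it.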

    Now someone can tell me the problem with this approach.


    --------------------
    SVM, ERGO MONSTRO.

  • As an admin, I wonder what you're trying to say. I block large ranges of ports on a regular basis, because there's no business for *any* external traffic to come into my LAN. HTTP, HTTPS, FTP, and a limited few others get inbound permissions to specific servers (On a different extranet subnet) that are hardened for those specific tasks.

    And if you think that security should *only* be on inbound traffic, you've obviously never seen a coworker led off in handcuffs after attempting to (illegally) hack another company's servers and leave his company to blame while he skips town.

    If your App has both the server and clients inside the firewall, no problem. If your client-server application is sensitive, and it crosses the internet, you should be using a VPN tunnel or SSH to cross the firewall in the first place, instead of sending passwords in the clear!

    And if the app isn't sensitive, why not treat it as if it is? Use https, ssh, or pptp to do your tunneling, and forget about it. Then those nasty admins won't ask you to help them protect you and your code.

    Why is it that you think SysAdmins are only there to *stop* you from doing *your* work?
  • Tunnelling over port 80 is (part of) the answer. The other part is having a server on the other side of the fascist firewall that proxies for you. Oddly, this week's Need to Know [ntk.net] mentions this problem. See http://http-tunnel.com/newpage/icqp.htm [http-tunnel.com] for Windows software that does it and http://www.nocrew.org/software/httptunnel.html [nocrew.org] for Unix software.

  • Actually, Napster is more like a directory server and nothing more. Sure there are chat rooms and such but those are secondary to the main purpose of providing a directory service.
  • While some say /48 will be the minimum allocation, that's for a whole site. More than likely the absolute minimum that a transient (dialup, DSL) customer will get will be a /64. Even so, that does leave a full 64 bits to play with, and IPv6 address space has been configured with that in mind. What this means is that almost every LAN segment will have roughly 64 bits to play with.

    In IPv6, the upper 64 bits are designated for routing, the lower 64 bits for node addresses. In theory, you can route within those lower 64 bits, but it is more likely that people will utilize space just above the 64 bit mark, as it will make a lot more sense for routers to manage the address space that way. So I predict that customers needing to set up a routed network will most likely receive an address range larger than a single /64, such as the /48 site allocation mentioned above.

    IPv6 will significantly change the way that people will think about their local networks. Why do NAT unless you absolutely have to? Give me good reasons why NAT is necessary. I don't buy the argument that it's a good firewall - the reality is that it isn't, as I have been told by industry experts. At best, it is a "poor man's firewall" that breaks more things than necessary. It's high time we did more about individual security on hosts rather than relying on the somewhat dubious security features that NAT delivers. A real firewall may resemble a NAT, but in practice is significantly more fortified than the average NAT box.

    The original topic of this whole issue is that NAT breaks end-to-end connectivity. It really should be outlawed, and IPv6 provides the opportunity.

  • by drsoran ( 979 ) on Friday November 24, 2000 @05:36PM (#602688)
    Of course, if that machine is exploited then your soft-gooey inside network is open to attack as well. Best to place your DMZ on the third interface of a firewall and separate the traffic. Your bastion host shouldn't be trusted by any of your internal hosts. Course.. I guess if we're just talking home systems there's not much to lose. Maybe your checkbook in Quicken or something. :-)
  • heh. Still, no one else had yet posted about ipmasqadmin when I made that comment. It was meant to provide a hint to newbies; the article was after all about the prevalence of firewalls in home networks.

    Of course a masq_napster module would be nice, but there isn't one that I know of right now.
  • by vectus ( 193351 ) on Friday November 24, 2000 @05:39PM (#602690)

    It isn't like people will ever quit accepting connections; home users won't let it happen. I know people who have only gotten the internet because of Napster. Also, if you were an ISP and you took away Napster and the like, you would lose half your customers. (It isn't likely that all individual users would cut off incoming transmissions, so the ISP would have to do it for it to actually happen.)

    On top of that, it just doesn't make sense. This sounds like the rumor that the internet will die soon that always goes around. It could happen, but it won't.

    In addition, once people quit doing it, someone would revive it. The Gopher article from yesterday says it all, you can't really destroy any kind of internet technology that could be considered cool to even a small group of people. They will revive it, and it will grow.

  • by Anonymous Coward
    As an administrator, my advice is to write it with proxy support from the very beginning. If it can authenticate to an http proxy server it can almost always ride over that.. i.e. realaudio, windows media player, seti@home, etc. If you're dealing with simple packet filtering firewalls you can also probably get away with using any of the ephemeral TCP ports since they're probably wide open outbound to allow for passive FTP to work. We're facing that can of worms these days and are trying to close it up so that we can block all outbound connections and force everything through proxies for logging, auditing, and intrusion detection.
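    The "authenticate to an http proxy" path usually means the HTTP CONNECT method: the client asks the proxy to splice a raw TCP tunnel to the target, then speaks its own protocol through it. A rough sketch of both sides follows; the toy proxy skips the authentication and target-port restrictions any real deployment would require.

    ```python
    import socket
    import threading

    def tiny_connect_proxy(listen_sock):
        """Serve one CONNECT request, then relay bytes in both directions."""
        conn, _ = listen_sock.accept()
        request = b""
        while b"\r\n\r\n" not in request:
            request += conn.recv(4096)
        # Request line looks like: CONNECT host:port HTTP/1.1
        target = request.split()[1].decode()
        host, port = target.rsplit(":", 1)
        upstream = socket.create_connection((host, int(port)))
        conn.sendall(b"HTTP/1.1 200 Connection established\r\n\r\n")

        def pipe(src, dst):
            # Copy bytes one way until the source closes.
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)

        threading.Thread(target=pipe, args=(conn, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, conn), daemon=True).start()

    def connect_via_proxy(proxy_addr, target):
        """Open a raw TCP tunnel to `target` ("host:port") through the proxy."""
        s = socket.create_connection(proxy_addr)
        req = "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (target, target)
        s.sendall(req.encode())
        status_line = s.recv(4096).split(b"\r\n")[0]
        if b"200" not in status_line:
            raise ConnectionError("proxy refused tunnel: %r" % status_line)
        return s
    ```

    After the 200 response, the returned socket behaves like a direct connection to the target, which is exactly why admins worry about what rides over it.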
  • As an admin, I wonder what you're trying to say.

    I am saying I dislike you and your kind. I mean that in the nicest possible way of course: my intention is not to be rude or flame - I have just been pissed off past my ability to endure by admins that, in my mind, are not thinking clearly (or have their hands tied by (even more) incompetent management).

    I block large ranges of ports on a regular basis, because there's no business for *any* external traffic to come into my LAN.

    "*any*"? Just one line later you contradict this statement yourself and allow for incoming connections in certain situations.

    Also, I assume you meant connections not traffic, surely the tightly controlled users on your LAN are allowed to retrieve the results of their web browsing.

    HTTP, HTTPS, FTP, and a limited few others get inbound permissions to specific servers (On a different extranet subnet) that are hardened for those specific tasks.

    How to be polite. hmmm. Oh to be one of the dirty unwashed untrusted masses on your LAN. I used to hate it when reporters on TV and in magazines would confuse the "Internet" with the "World Wide Web", but I see that WWW access is all some are given.

    And if you think that security should *only* be on inbound traffic, you've obviously never seen a coworker led off in handcuffs after attempting to (illegally) hack another companies servers and leave his company to blame while he skips town.

    Some random thoughts:
    1. If a company can't trust their own employees who can they trust?
    2. This point is invalid because you have shot yourself in the foot anyway: excessive port filtering makes programmers and hackers tunnel over the specific ports used by legitimate applications. Once everything is done via port 80, how exactly are you going to get any meaningful filtering done at all?
    3. Does the company also have metal detectors on the doors and are all the papers and disks the employees leave the building with checked?

    If your App has both the server and clients inside the firewall, no problem.

    That's not exactly the "Internet" with a capital I now, is it? It's a LAN application that happens to use TCP/IP as a protocol.

    If your client-server application is sensitive, and it crosses the internet, you should be using a VPN tunnel or SSH to cross the firewall in the first place, instead of sending passwords in the clear!

    The Internet - capital I. An internet (small i) is any network of TCP/IP hosts that may or may not be connected via gateway to the Internet. In my situation, we cross the Internet using SSL3.

    And if the app isn't sensitive, why not treat it as if it is? Use https, ssh, or pptp to do your tunneling, and forget about it. Then those nasty admins won't ask you to help them protect you and your code.

    Ah, so If I get the code for BackOrifice, modify it to make outgoing connections via HTTPS then you would have no problem if I managed to "deploy" it on your network?

    This is exactly my point. Now that we have a "short-list" of acceptable ports to use in applications, all applications will use them.

    Which means that, in order to remain effective, firewalls will have to install application-level filters, which (a) should be impossible as modern secure protocols are man-in-the-middle proof, and (b) makes my life (as a writer of said applications) harder, as I have to either co-operate with the firewall in some way, or fudge my protocol somehow to make it pass the application-level filter applicable to some other more common application.

    What happens if I write some new P2P application that your users would find useful or just plain fun - or perhaps it's even dangerous and sends sensitive information outside your network? With some random tool your users download, you don't know and can't know. Hell - are you sure that Netscape or IE isn't volunteering sensitive company information to the outside?

    Now, do you "ban" the usage of this application? And how? (and why?).
    Imagine that the application is question was actually useful, and your users would be more productive for it. Would you make a hole in the firewall at your users request to allow the app to work?
    Or, imagine instead that the application is deemed dangerous as it keeps posting passwords in clear text to remote sites run by evil hackers. However the application does this by cleverly using ports that are currently open on your firewall allowing other mission critical services to function.

    My assertion is that port filtering is a short term fix to a problem that is made worse by the _very_ existence of port filtering: the problem is that you - the sysadmin - don't trust certain application programs. BackOrifice is not to be trusted. HTTP daemons can be trusted on locked-down boxes. ICQ??? Your assumption is that there is a binding between an application and the ports it uses. This is false, and always was false. And it becomes more false the more you try to leverage it under the guise of "security". If you truly think outbound port filtering constitutes a valid form of security then you are not deserving of your "SysAdmin" title.

    .Chris

    --

  • This IETF I-D [ietf.org] includes a novel hack (see part 3) using https.
  • I agree totally with your "Bandwidth Management for Dummies" explanations.
    The first point I was making was that, to achieve what the author was on about, you would need intermediate servers (with a very trusted, and trusting nature).
    Your second point: Yes, yes yes. But where does this "massive bandwidth because it's relayed" requirement come from? Relaying/proxying should only ever require an O(1) signalling overhead. And O() is a kind function in this situation.
  • it's in 2k professional.
    afaik, there's a SP planned for ME that'll add it
    it'll certainly be in whistler.
    ipv6 support in MS end-user stuff is nothing to worry about. :P

    "If ignorance is bliss, may I never be happy."
  • Ok, let's start off with the basics.

    We have a peer-to-peer file sharing service, such as napster. Client A is trying to request a file from Client B, however, Client A can't connect to Client B to get the file because Client B is behind a firewall and incoming connections are disallowed. How do we approach this?

    Well, the answer thus far is for the Server to tell Client B to connect to Client A and initiate the transfer from their end, but now what if Client A is behind a firewall as well!?!? Well, we have two options, although only one is even partially viable (can actually be accomplished with the TCP protocol definition in place today):

    Relaying - this should be painfully obvious. They both already have connections open to the server, so the server requests the file from Client B, then sends it on to Client A. This is obviously a Bad Idea because of the massive amounts of bandwidth you would need to relay all of these file transfers.

    Brokering or Splicing of two Client connections - the best example that I can think of to give here is that of a program called FlashFXP. It became a very helpful tool in transferring files from server to server when you were on a dialup, because you could in fact open a transfer from one server to the other without the bandwidth constraints of the dialup connection. However, this only worked because both of the servers you are connected to ALLOW incoming connections, allowing the software to broker the connections in this way.

    So the main issue is if neither client in a peer-to-peer connection will accept a connection (because basically, the Napster Client is also a server in that it serves up file requests for MP3's that you have residing on your computer), how can we transfer files between the two? One interesting answer was the TCP over UDP approach, but this would not scale well at all (too much wasted bandwidth; you might want to read the article more deeply if you'd like a better understanding), and beyond that, there is no approach to take with TCP (that I or any of those discussing it knew of) to broker a connection, only because of the mandatory handshaking that the protocol requires. I hope that helps.
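    The brokering idea can be sketched with a toy rendezvous server: both peers dial out, and the server only exchanges their observed addresses (the O(1) signaling another poster mentions) rather than relaying file data. Treat this as an illustration of the signaling only; whether the two peers can then actually connect to each other is precisely the firewall problem under discussion.

    ```python
    import json
    import socket

    # Toy broker: accept two outbound connections and tell each peer the
    # other's observed (address, port). No file data ever crosses the broker.

    def broker_once(listen_sock):
        """Accept two peers and send each one the other's address."""
        peers = [listen_sock.accept() for _ in range(2)]
        for (conn, _), (_, other_addr) in zip(peers, list(reversed(peers))):
            conn.sendall(json.dumps({"peer": other_addr}).encode() + b"\n")
            conn.close()

    def learn_peer(broker_addr):
        """Dial out to the broker and learn the other peer's address."""
        with socket.create_connection(broker_addr) as s:
            line = s.makefile().readline()
        return tuple(json.loads(line)["peer"])
    ```

    In a real deployment the broker would report the address the NAT presents to the outside, which is information the firewalled box itself cannot see.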

    Revelations 0:0 - The beginning of the End
  • tunneling over port 80 breaks all kinds of standards... what we need to do is standardize the way hide NAT handles incoming traffic.

    right now, the firewall keeps track of the connection state (usually via a high source port #) and lets the returning packet go to the correct internal host.

    what needs to happen is to create some type of standardized way of accepting incoming connections to a firewall, to route them to the correct internal host... unfortunately, no matter what, this will be a security risk.

    the quick fix, which seems to be the only way using current standards, when 2 hosts are firewalled, is to use a server of some type... no way around it, but bringing port 80 tunneling mainstream will cause security headaches globally
  • I believe you could trick a firewall into allowing the connection, by creating a simultaneous connection. Basically, both computers send a SYN packet simultaneously, using the appropriate source and destination ports. Both firewalls would see the SYN going out, and while they will deny the SYN coming in, they will allow the further ACKs going out. This is allowed in the TCP stack, though you might have to use raw IP packets to achieve it.
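    The crossing-SYN trick is TCP "simultaneous open" from RFC 793. Demonstrating it between two real firewalled hosts needs coordinated timing, but a well-known loopback quirk exercises the same code path in one process: a socket that connects to its own bound address completes a simultaneous open with itself, with no listen() ever called (observed on Linux; other stacks may refuse it).

    ```python
    import socket

    def self_connect_demo():
        """Complete a TCP simultaneous open by connecting a socket to itself."""
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", 0))     # kernel picks an ephemeral port
        s.connect(s.getsockname())   # our SYN "crosses" our own SYN:
                                     # no listen() is involved at all
        s.sendall(b"hello")          # the socket now talks to itself
        data = b""
        while len(data) < 5:
            data += s.recv(5 - len(data))
        s.close()
        return data
    ```

    Between two firewalled peers, the same mechanism additionally requires both sides to agree on source/destination ports in advance and fire their SYNs at nearly the same moment, which is why it is fragile in practice.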
  • It uses a server to do searches and broker connections, but data transfer is peer-to-peer.
  • Relay vs Broker

    Both involve something going from point A to point B, with the interaction of a third point/party C

    Relay: A sends something to B via C

    Broker: C makes arrangements for A to send something to B directly.

    Note that the negotiations to initiate all of the above would be a separate set of communication packets.

    As always, an effective communications protocol means correct handling of the data packets sent and received.

    Just like in any language.
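    The "Relay" case above in miniature, as a toy sketch (single one-way transfer, no error handling): C copies every payload byte between A and B, which is why its bandwidth bill grows with the size of the transfer, while a broker's stays constant.

    ```python
    import socket

    def relay_once(listen_sock):
        """C accepts A then B, then shuttles A's bytes to B."""
        a_conn, _ = listen_sock.accept()
        b_conn, _ = listen_sock.accept()
        while True:
            chunk = a_conn.recv(4096)   # every payload byte crosses C ...
            if not chunk:
                break
            b_conn.sendall(chunk)       # ... and is re-sent: O(data), not O(1)
        b_conn.close()
        a_conn.close()
    ```

    Both A and B connect *out* to C, so this works behind any firewall that allows outbound connections; the price is C's bandwidth.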
