A New Approach to IP Address Exhaustion

akkem writes "For a while now, we've been running out of IPv4 address space, resulting in more and more computers being put behind NAT devices. That's fine for many computers, but what if you want one of those computers to be available as a server? As part of his PhD work, my friend Eugene has come up with a nifty solution, AVES, which enables any computer on the Internet to reach one or more servers placed behind a NAT. His approach is to give each server a unique name (via DNS), and to handle all the IP address translation automatically via an overlay network." This looks somewhat similar to virtual DNS, but taken a step further: instead of merely handling the request a little differently, the system routes requests through to the server behind the NAT.
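As a rough illustration of the indirection the summary describes - DNS answers with a waypoint address, and the waypoint relays traffic on to the NAT gateway - here is a minimal, hypothetical sketch. All names and addresses are invented; the real AVES uses a modified DNS server plus waypoint daemons, not a Python dictionary.

```python
# Hypothetical sketch of AVES-style indirection. All names and
# addresses below are invented for illustration only.

WAYPOINTS = {
    # server name -> (waypoint IP handed out via DNS, NAT gateway's public IP)
    "myserver.aves.example": ("198.51.100.7", "203.0.113.9"),
}

def dns_answer(name):
    """The AVES DNS server answers with a waypoint address, not the
    (unroutable) address of the server hiding behind the NAT."""
    waypoint, _nat = WAYPOINTS[name]
    return waypoint

def waypoint_forward(name, payload):
    """The waypoint relays client traffic on to the NAT gateway,
    which de-multiplexes it to the right internal machine."""
    _waypoint, nat = WAYPOINTS[name]
    return nat, payload

print(dns_answer("myserver.aves.example"))               # 198.51.100.7
print(waypoint_forward("myserver.aves.example", b"hi"))  # ('203.0.113.9', b'hi')
```

The client never learns the internal address at all; that is the whole trick, and also (as several comments below point out) the source of the bandwidth and filtering objections.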
  • Anything incoming to port 80 on the NAT goes to port 80 on foo.hidden.domain.tld. Repeat as needed.
  • by Anonymous Coward
    Gee, this looks amazingly similar to the summary paragraphs from an Interactive Week (or one of those rags) article a few months ago!

    Cut-n-paste still good for a few quick&cheap karma points, it seems?
  • by Anonymous Coward
    The reason no one is using IPV6 is because Microsoft Windows doesn't support it.
  • by Anonymous Coward
    This doesn't scale.

    You have to have N AVES "waypoints" to access N internal machines using a single NAT gateway.

    The total bandwidth available to AVES "waypoints" has to be at least bandwidth used by all Z AVES-NAT gateways (which is the sum of the bandwidth usage of all their AVES-NAT'd internal machines). This is a lot of bandwidth.

    And regardless of theoretical bandwidth comparisons, this will never work for significant amounts of bandwidth. Anyone who has experimented with any kind of tunnelling has probably noticed that the internet sucks.

    Peering points suck. Bandwidth sucks unless you pay tons of money and have dedicated fiber, and even then peering points still suck.

    So what you really need is N AVES "waypoints" for each AVES-NAT gateway that are close to the direct path between A and B. Unfortunately this is pretty impractical even given massive amounts of money to deploy these things.

    And as 8 billion people have already pointed out, AVES makes it impossible to do any real packet filtering by essentially anonymizing the incoming connections, unless you're [un]lucky enough to be running Linux 2.4 and want to write a netfilter plugin that gets the real remote IP from the AVES-NAT daemon. As if filtering didn't chew up enough CPU already.

    I really want to know why people are coming up with schemes like this that don't scale at all. Don't they have better things to do with their time?

    DO SOMETHING USEFUL. PROMOTE IPV6 (And learn more about it too, ipv4->ipv6 migration has been thoroughly addressed, no pun intended)

  • by Anonymous Coward
    Actually, in their protocol, the NATbox replying to your request spoofs the AVES waypoint host, i.e. replies with a source IP of [] to use your example.

    In the paper [], the researchers mention that that can cause problems with ingress filtering by ISPs, which can be fixed by forwarding the return traffic through the waypoint as well.

    Read one of their papers.

  • by Anonymous Coward
    IPv4 space is not in much danger of running out - lots of space exists. The reason that they are rationing so tightly is that the global Internet routing table was growing at such an alarming rate that it threatened to overrun the memory available on even the high-end routers. 128M of RAM will currently just fit the current table. If you are interested in more about this, read up at ARIN (American Registry for Internet Numbers) or go read the archives of NANOG (North American Network Operators' Group). Again, lots of IPv4 space exists - especially because of NAT and the US DoD giving up large portions that it was sitting on. I return you to your programming.
  • by Anonymous Coward on Monday April 16, 2001 @11:24AM (#288130)
    IP Address Exhaustion is a serious concern. We need to do something to keep our IP addresses from getting all tired out and stuff.

    Maybe we should propose IP Address Naps.
  • IPv6 isn't being adopted for one major reason: The OS that 95% of the world uses on the desktop doesn't support it yet. Whistler will have an IPv6 option that is not supported (and comes with big red flags before you can turn it on). A friend of mine who works on Whistler networking has heard that Whistler server will ship with IPv6 as a supported option. Expect that in maybe two years. (The service pack for the Whistler end-user release at the same time will probably include the same IPv6 stack for production use.)

    Combined with the fact that router manufacturers should have a much stabler IPv6 base by then and critical mass of IP wireless devices should be arriving about then, expect to see a sudden surge in IPv6 connectivity and demand. You heard it here first!

  • "Permission to use, copy, modify and distribute this software and its documentation is hereby granted for non-commercial purposes." That is hardly a BSD-style license.

    I just downloaded cyrus-imap-2.0.12 and cyrus-sasl-1.5.24. Neither license says that. In fact, no file in the cyrus-sasl archive even contains the string "commercial".

    Where exactly did you get that quote from? My guess is that you just pulled it out of your butt.

  • So you're basing your claim on an outdated version of the code, and you didn't even bother to look at the current version's license.

    I see.

  • CMU's unwilling to use a BSD-style license? Really?

    Funny, when I worked there my lab released a big chunk of code under a BSD license, and the Cyrus IMAP server and Cyrus SASL library both appear to be released under a BSD license.

    Also, you do realize that this project is still in the experimental phase, right? Academic research doesn't have the same release model as open source software -- the goals and constraints are very different. In the open source world, someone else grabbing your code and running with it is great; you've contributed to the community, and people are doing useful things. In the academic research world, that can easily mean that someone else publishes before you do, and you've just spent a lot of time and funding with nothing to show for it. Oops.

    The same goes for the IETF comment -- taking things to the IETF too early is a waste of everybody's time. It's better to try something out and see if it works before trying to standardize it. Not everything is best hashed out completely in committees and over mailing lists.

    I would suggest that you give this project time to develop before trashing it for not being finished the way you'd like it to be, but I do realize that doing so would violate the Slashdot 'gimme gimme, I want it MY way!' ethic.

  • But what is being implemented isn't much better. It's going to have just as many security holes in it, if not more than v6. Why not just work with the new protocol and contend with its problems? It's like constantly replacing your car's radiator while the body rusts out.
  • The biggest problem is probably training people to use it. At this point it is still a big unknown. We have to wait for everyone to learn how to use it.
  • Actually, a lot of the early companies got lots of IPs because, well, they were there early. Xerox, IBM, DEC, Apple, MIT. I don't know my Internet history well enough to know what role BBN played, but obviously they got something for it. All these companies have got to be wasting TONS of IPs... Apple for example... I'm sure all of Microsoft's IP blocks don't nearly add up to a class A, what's Apple doing with theirs?
  • Whoa, hold on there cowboy! I'm quite aware of CIDR notation, and your reply, while insightful, has nothing to do with my post. Classes are a perfectly valid way of measuring IP space. It's much simpler to say I have a class C than it is to say I have of IPs. Besides, my post was about the _abuse_ of class allocation, and while I didn't explicitly write it, one could say I was advocating breaking up those class A blocks into, wait for it, smaller CIDR blocks! In fact, I was going to link to RFCs 1466 [] _and_ 2050 [].

    So in the future please refrain from getting snooty on people and referring to them as MCSEs without cause.

  • by pod ( 1103 ) on Monday April 16, 2001 @10:51AM (#288140) Homepage
    Doesn't anyone find it strange how we've been running out of IPv4 address space for the last couple of years?

    Here are some stats from ARIN (unfortunately these are circa 1996...):

    Grand Total (Allocated and Assigned Combined)

    Class A - 127
    Class B - 10150
    Class C - 764202

    Right... so there are 127 institutions with class A's all to themselves. Now that's really efficient. Even a full class B (which 10000 organizations have been blessed with) is overkill.

    Percentage Allocated (Allocated and Assigned Combined)

    Class A - 100.00%
    Class B - 61.95%
    Class C - 36.44%

    Now, the offenders are here [] (this list _is_ up-to-date). Most notable class A assignments:

    • GE (ok - 1)
    • Bolt Beranek and Newman (BBN? that's a lot of IPs - 3)
    • IBM (ok - 1)
    • ATT (hmm, I guess telcos need some IPs too - 1)
    • Xerox (well earned - 1)
    • HP (lotsa research, ok - 1)
    • DEC (same, ok - 1)
    • Apple (definitely overkill - 1)
    • MIT (well earned as well - 1)
    • Ford (good one! - 1)
    • Halliburton Company (huh? - 1)
    • PSI (hehe - 1)
    • Eli Lilly and Company (wtf? who are these guys? - 1)
    • Bell-Northern (no comment - 1)
    • Prudential Securities (that's funny... - 1)
    • duPont (I'm sure they're using it all... - 1)

    The rest goes to IP registries to dish out in comparatively puny class B and C chunks, and of course the US government.

  • You can start using IPv6 right now even if your ISP only supports IPv4, by tunneling it using 6to4 to another 6to4 machine acting as gateway. The 6to4 tunneling protocol is in the kernel as of at least 2.4.1 (for earlier versions than that I believe you need to apply a patch or two). If you live in Sweden (like me), check out SICS' 6to4 gateway []. They have connections to the 6bone and to several ISPs (it is recommended that you try one of those first).
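For the curious, the 6to4 mechanism mentioned above (RFC 3056) simply embeds your 32-bit public IPv4 address into an IPv6 prefix under 2002::/16. A small sketch of the derivation, using a documentation address as the example:

```python
import ipaddress

def to_6to4_prefix(ipv4):
    """Derive the 2002::/16 6to4 prefix (RFC 3056) usable by a host
    with the given public IPv4 address: the 32 address bits sit
    directly after the fixed 16-bit 0x2002 prefix, giving a /48."""
    v4 = int(ipaddress.IPv4Address(ipv4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

print(to_6to4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

Everything below the /48 (another 80 bits) is yours to subnet, which is why a single public IPv4 address is enough to number a whole site this way.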
  • We are NOT running out of IP addresses. We are adding too many routes to the global routing tables that must be held by all routers running BGP that connect to multiple tier-1 backbone providers. This is one reason why IPv6 is still vapor. It doesn't address the size of the global routing tables.
    Joe Hamelin
  • The MBone, per se, no longer exists. Those involved switched over to using native multicast about 10 years ago, spelling the demise of protocols such as DVMRP and the introduction of PIM-SM and PIM-DM.

    Obtaining a multicast tunnel, these days, is an impossibility inside an absurdity. Try asking for a tunnel on the MBone mailing list, some time. If you're lucky, you'll only be talked down to, as if a small child.

    (Personally, I know children who can out-program pseudo-intellectuals any day. A degree and a job in an ivory tower doesn't make you smarter or better. It just gives you a better view of the ground, when the foundations collapse.)

  • There are also alpha-quality patches for Win 95/98 from Microsoft's development website.
  • Depends. If the ISP uses an IPv4-IPv6 translator, then the user should be able to play any networked game, or use any other networked software, without restriction.

    (That assumes, though, that ISPs have an interest in providing a service, rather than simply making a quick buck.)

  • Your best bet is to check the IPv6 information over at Lancaster University. They have a complete map of the 6bone as it currently exists.

    (Pointers to them are on:

    IPv6 and IPv4 can run concurrently, but unless you have some kind of translation layer, you can't simply connect to an IPv4 machine through IPv6. It isn't backwards-compatible.

  • IPv6 is ready, reliable and robust. It supports IP migration, Mobile IP, multicasting, automatic addressing, proper flow control, and all sorts of other goodies.

    So why is it not being used? Easy. Same reason multicasting isn't used. None of the ISPs want to upgrade first. They want someone else to take the fall, if there's a problem. The whole bit about demand is politik-speak for "we're not telling anyone what we -could- be selling them, cos customers in the dark are so much easier to sponge off."

    So, how to get round these neanderthals? Again, easy. Proxy servers. What you need is not NAT as it is currently used, but rather IPv4-to-IPv6 NAT. Then, end-nodes can use IPv6, whether the ISPs ever do or not.

    This is the reverse of the dismally failing attempt to push multicasting, by concentrating on the backbone. The backbone doesn't matter! It's what the user can do - and KNOWS they can do - that counts. Everything else is fluff.

    If NAT boxes and NAT solutions worked by mapping IPv4 to IPv6, you can be damn sure that Microsoft's IPv6 stack would be stable and on people's desks in a week, with AOL following a few days after.

    Why? When it's taken YEARS just to persuade a few hundred sites to even experiment with the protocol? Because image is everything. Mess up your image, and you're dead in the water.

    (This goes back to why ISPs are about as likely to try new things as a vulture is to go vegetarian.)

  • Software which requires IP addresses and doesn't understand DNS is broken.
  • Maybe. That's certainly *less* broken.
  • Halliburton is the oil company Vice President Dick Cheney was appointed to represent, err... I mean used to work for... :)

  • Why don't we just use the RT (route-through) resource record? It's been around for ages, is supported by bind et al, and could allow nearly unlimited use of existing address space.

  • Obtaining a multicast tunnel, these days, is an impossibility inside an absurdity

    Actually, it is not (UUNET will gladly give you a DVMRP tunnel for a few hundred a month, if you're a customer). And there are reasons why you might want to do a DVMRP tunnel rather than MBGP.

    Of course, you do want to run PIM-SM within your network.
  • This whole thing is stupid. If the "waypoint" knows the name of the machine that it is connecting to, why not simply build that information into NAT? In other words, we have a protocol such as HTTP/1.1 which sends a hostname in its header (the only way the waypoint can identify the host in question), so build an HTTP filter into NAT. If host1 and host2 both point to the same IP, NAT can simply read the HTTP header and know that host1 requests go to host1, and host2 requests go to host2. A filter such as this can be made for any protocol that names the machine in its headers. This "AVES" solution is typical PhD-type overkill shit, gotta make it hard, cause I need to drag it out over years.
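A minimal sketch of the Host-header demultiplexing this comment proposes. Hostnames and internal addresses are invented for illustration; a real NAT box would do this per connection in the forwarding path, and as other comments note, it only works for protocols that carry the name at all.

```python
# Hypothetical "HTTP filter in the NAT": pick the internal target by
# reading the Host header. Hostnames and 10.x addresses are made up.

BACKENDS = {
    "host1.example.com": ("10.0.0.11", 80),
    "host2.example.com": ("10.0.0.12", 80),
}

def route_by_host(request):
    """Parse the Host header of an HTTP/1.1 request and return the
    internal (address, port) the NAT box should forward it to."""
    for line in request.split(b"\r\n")[1:]:
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            host = host.split(":")[0]  # drop an optional :port suffix
            return BACKENDS[host]
    raise ValueError("no Host header; plain HTTP/1.0 clients break this scheme")

req = b"GET / HTTP/1.1\r\nHost: host2.example.com\r\n\r\n"
print(route_by_host(req))  # ('10.0.0.12', 80)
```

The ValueError branch is the scheme's weak spot: any client or protocol that never names the server leaves the NAT with nothing to demultiplex on.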
  • Then game publishers should put out a patch to change the IP address inputs to a textbox input, require names to connect, and be done with it. The code to use a name instead of an IP address is about 5 lines longer and adds about half a second to execution times in bad DNS traffic conditions. Besides, if any number of names could map to a single IP address, then no company would have cause to prevent you from requesting TBONE.MYISP.COM on your account when you dialed in. In fact, you could have your own internal IP address in your provider, assuming every provider used the Class A private network for their internals.
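The "about 5 lines longer" claim above is roughly right: resolving a name at connect time instead of storing a raw IP is a small change. A hedged sketch (the host and port are placeholders supplied by the caller; this is generic socket code, not any particular game's patch):

```python
import socket

def connect_by_name(host, port):
    """Resolve a hostname at connect time instead of storing a raw
    IP, trying each returned address until one accepts the connection."""
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err
    raise last_err if last_err else OSError("no addresses for " + host)
```

Because the loop tries every address the resolver returns, the same code keeps working when a name maps to several IPs, or when the mapping changes between sessions - exactly the property a NAT'd or dynamically addressed server needs.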

  • IPv6 may well happen first in mobile networks - this is due to the number of mobile phones (about 500 million currently), and the fact they are becoming IP enabled (about 70% of mobile phones use GSM, and most GSM networks are going GPRS, enabling IP to the phone).

    GPRS is an easy upgrade for GSM networks and US TDMA (IS-136, i.e. digital cellular other than CDMA) networks. It includes a tunnelling protocol that allows the tunnelled address of the phone to be IPv4 or IPv6. And in the 3G world, IPv6 is part of the standards from the beginning.
  • This is really horrible - anything that discourages ingress filtering makes it a lot easier for script kiddies to DDoS the world. And routing all traffic via the waypoint server means you have now created a centralised network with sub-optimal routing.

    This really does illustrate how successive kludges on top of IPv4 (NAT, AVES, etc) will make it essential to migrate to IPv6...
  • Are you Jon Katz logged in under a different name? =)
  • Of course, in the modern web you can assume that every client will include a "Host" header in its requests... Netscape has done it since 1.1, and you're required to do it if you claim to be HTTP/1.1 compliant (which is just about everyone these days except for squid, and they still conform to a good chunk of RFC2616 except for the caching nitty-gritty).
  • but that only works if you only have ONE server that wants port 80 behind the NAT.

  • Unfortunately only HTTP/1.1 supports a hostname in the packet header. Most web hosts use virtual hosts in order to stick a shitload of domains on a single server (and thus IP) and charge you a bit of moolah for it.
  • What the fuck? Port assignments are an RFC standard (I don't remember exactly which one); they aren't just random assignments people decided looked pretty. There are 65,536 or so ports because the designers of TCP weren't exactly sure how they were going to be assigned. You can't just open up 65,000 or so ports to the outside world. That's how people easily DoS your network.
  • You can run a network service on any port of your choosing but if my client isn't trying to access your server on the right port I CAN'T CONNECT TO YOU.
  • Maybe AOL could use that many if all their users were given real IPs =P ... otherwise, to answer your question, yes, it would be a great help to have a good chunk of those IPs back.
  • Actually Lynx was probably one of the first because it's based on the Libwww.
  • Looking at their AVES "setup" page, anyone is permitted to go and set up DNS mappings. How do they authenticate that I own the machine I am mapping? Otherwise I can just map right through the NAT.
  • by augustz ( 18082 ) on Monday April 16, 2001 @10:18AM (#288166) Homepage
    NAT devices have the nice side benefit of making attacks from external networks tricky. So for the home user behind a high-speed net connection, even if they leave their computer wide open to attack, it may not be trivial to actually attack it.

    What happens if someone forges an AVES DNS entry to point to an internal IP, and then uses the AVES protocol hooks on the NAT to actually drive through the NAT and hit that machine?

    I don't see this shipping in the default "on" position anytime soon, but it's a neat way around IP connectivity issues behind a NAT.

  • The fuss is about providing a service with one of your NAT'd boxes. How are you going to assign a domain name to a non-routable IP address?
  • There is very good reason to know, and teach, the class structure. Just because CIDR is now commonly used doesn't mean that all routing protocols in use are classless. There are thousands of networks out there using classful routing protocols, and thus it is important to know how these are used. After all, just because you are running RIP with a Class A network doesn't mean that you are using a public Class A network. There are thousands of 10. networks out there, and many of them are using classful routing protocols.

    There are way too many network engineers out there who don't understand the class structure and how it affects summarization. Making a blanket statement that this is history, and no longer needed, is pure rubbish.
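For reference, the classful structure defended above is determined entirely by the leading bits of the first octet, while CIDR treats the same space as arbitrary prefixes. A short sketch contrasting the two views (the 10.0.0.0/8 block is the RFC 1918 private Class A mentioned in the comment):

```python
import ipaddress

def legacy_class(addr):
    """Classify an IPv4 address by its historical class (pre-CIDR),
    determined by the high-order bits of the first octet."""
    first = int(addr.split(".")[0])
    if first < 128:
        return "A"   # 0xxxxxxx -> implied /8 mask
    if first < 192:
        return "B"   # 10xxxxxx -> implied /16 mask
    if first < 224:
        return "C"   # 110xxxxx -> implied /24 mask
    return "D/E"     # multicast / experimental

# Under CIDR the same space is just prefixes: one classful /8 can be
# carved into 256 /16s (or 65536 /24s, etc.) for aggregation.
a_block = ipaddress.ip_network("10.0.0.0/8")
print(legacy_class("10.1.2.3"), sum(1 for _ in a_block.subnets(new_prefix=16)))  # A 256
```

A classful protocol like RIPv1 sends no mask at all, so it falls back on exactly the `legacy_class` rule above when summarizing - which is the commenter's point about why engineers still need to know it.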

  • All computers should have publicly reachable IP addresses; this makes writing new network applications far easier. You can assume a fairly transparent network. With the IP shortage, this is no longer the case.

    BTW.. they aren't 'fake' IPs, they are 'reserved' IPs.

    And HTTP is one of the ONLY protocols that includes the domain being looked up in its own protocol.

    In short, we have something that came about via an oversight in the original design of the protocol (that 32 bits would be enough address space), and now people like you are complacent about the hacks we use to get around it?

    What we need is IPv6, deployed properly. And it's going to happen.

  • Ipv6 will take years to deploy. My guess is that you won't see it until something like ten years from now. Consumer operating systems do not support ipv6 and/or require significant and non trivial tweaking to support it (this is not likely to change for a while). As long as this is the case, ipv6 will be the standard. Port forwarding does not really help because you can only forward a port once (which sort of sucks if you are running more than one webserver behind NAT).

    A kludgy solution like the one outlined above might just be a nice solution for many small companies and home users. I'd hate to get a more expensive account from my ISP just for the additional IPv4 numbers when something like this would solve my problem just fine.
  • I meant "ipv4 will be the standard" of course, silly me.

    Sorry, I really should preview,

  • IPv6 is the long-term solution to the IP-exhaustion problem.
    However, the adoption of IPv6 is dependent on several other parties, over which you personally may have no control whatsoever.

    This solution could be deployed today, without having to wait for all parties to adopt IPv6 - something which may actually never happen; a different protocol may be in use by the time people actually convert.
  • If you've got enough servers behind a NAT box to care about that, you've got plenty of reason to get a small range of IPs from your service provider. Simply "dedicate" one IP per server that needs some ports forwarded, or overlap as needed.

  • I feel cool; I've worked for BBN and Eli Lilly and Company, so I've been involved with 4 class A networks.

    Eli Lilly and Company is a large pharmaceutical firm that has had an Internet connection since the late 80s, long before most non-technology companies of similar size.

    As I understand the history from someone who should have known, Eli Lilly originally applied for a class B address space back in the late 80s/early 90s, but Jon Postel himself suggested that they ask for a class A instead.

    Postel later criticized Lilly (among others) for not returning the extra addresses.

    It should be pointed out that renumbering 40,000+ computers is a non-trivial task, and handing back portions of the address space would likely cause other headaches. To be honest, I'm not certain anyone has actually formally asked Lilly to turn the space over.

  • It looks like a good way to help out bulk web server farms, but it does not even come close to solving the IP shortage problems.

    Because one is using DNS as the map to the NAT'd server, the server must actually receive the DNS name as part of the request. HTTP is the only common "over the internet" protocol that has this functionality.

    I am not too afraid of the IP shortage, in the short term anyway. ICANN and the IP sub-orgs have handled the transition to more effective IP blocks very well, and since people have to pay for them now, it is unlikely that they will be used frivolously. Plus the internet, despite its massive growth in user nodes, will eventually crest - soon enough, I think, to alleviate heavy strains.

    By taking a position of superiority you show how nearsighted you are. Thus Spake ADRA
  • Apart from the fact that CMU does release plenty of BSD-style-licensed code, any talk about the IETF is totally irrelevant because AVES does not introduce any new standards or require any new infrastructural support. It can be, and is being, deployed today with no cooperation from anybody.

    It would be nice to have the DNS protocol changed a little bit so that forwarded requests contain the address of the original requestor. But that's a completely orthogonal issue and other people (e.g., Akamai) want that too.
  • by tqbf ( 59350 ) on Monday April 16, 2001 @12:29PM (#288189) Homepage
    David Cheriton has a research group working on this problem at Stanford DSG --- "TRIAD" [], a DNS-based overlay that integrates the DNS query round-trip with the transport handshake round-trip and ties resource location to request routing.

    Robert Morris has a group working on overlay networks as an alternative to basic Internet path selection --- RON []. They are concentrating on overlays as a means of allowing intelligent or policy-based routing decisions made on a small scale to affect decisions on the large-scale Internet.

    Of course, multicast is only going to happen via overlay networks. There are many groups building scalable overlay networks for content and data delivery today. I'd go so far as to say that multicast semantics are going to drive adoption of routed overlay technology, which will then be used to bridge NAT domains later on.

    A valid question to ask in response to this article, though, is "what address exhaustion"? Does anyone have real, valid numbers + methodology for address depletion on the post-NAT Internet?

  • by tqbf ( 59350 ) on Monday April 16, 2001 @02:45PM (#288190) Homepage
    Cisco desperately wants to deploy IPv6, for the same reason every year for the past few years has been "the year multicast will happen" at Cisco. Cisco's core technology has been commoditized. If the core of the network changes dramatically, Cisco gets to leverage a huge mass of expertise and reputation to get a new handhold on the market. If it stays the way it is now, Cisco competes on raw performance against competitors who are just as capable as they are.

    Unfortunately for Cisco, ISPs don't particularly want to deploy IPv6. It doesn't make them more money. Gadget internetworking hasn't happened yet, and when it does, there's no reason why it can't be made to fit into the 32-bit space we already use. Security has already been addressed by opportunistic IKE/ISAKMP/IPSEC, SSL, and SSH.

    In a network that already aggressively uses NAT, private addressing, and overlays, what does extra address space really buy us?

    Nonscaleable routing table growth!

    Personally, as a low-level network application developer, I'm in no hurry to see IPv6 deployment. I generally have a problem with the way infrastructure developers have pushed more and more problems into the core of the network. This is contrary to the end-to-end argument that the Internet is based on. The more we do in applications, the more flexibility we gain.

    The fact that you can't run "Icecast" servers has nothing to do with addressing. Streaming audio distribution over the Internet is a debacle right now. What you're really asking for is multicast, and that's coming around the bend (only riding ON TOP OF IP, not inside of it!). When widespread overlay multicast occurs, you'll have access to an efficient distribution channel without the need to run a "server" that people "connect to" to get audio.

    And how on earth do you overlook dynamic DNS in all of this? If the problem is resource location, what is an IP address buying you? DNS already provides enough information to resolve rendezvous problems. If you are stuck behind NAT, relay/rendezvous architectures already exist to turn your "clientside" connection into a server feed.

    I think this desire to deploy IPv6 is just knee-jerk religious bigotry from people who don't understand the problem.

  • Why can't we just promote IPv6? Instead of hacking together something that works, why not just design it right from the start, a.k.a. IPv6?

    (Not meant as a flame, but as an honest question.)
  • That doesn't work when the firewall gets its address via DHCP. Theoretically, when the firewall starts up, it is reconnected to the name tree with the new IP address, so quakeserver.mygames.XXX allows one-stop configuration. Existing methods require the startup process to post the firewall's new IP address on some third party's site, which is less than convenient.

  • why try to extend IPv4 when IPv6 is already here?

    Can you assign an IPv6 address to a cable-internet modem/gateway and play everquest today?

    Thank you.

    What happens if someone forges an AVES DNS entry to point to an internal IP, and then uses the AVES protocol hooks on the NAT to actually drive through the NAT and hit that machine?

    Theoretically, this is easy to defend against: you simply provide private-key authentication between the NAT server and the AVES router. Yes, it can be implemented poorly (especially with proprietary closed-eyes Windows drivers).

    Additionally, I would assume that the NAT is client-side configured to explicitly allow ports and machines. Thus quake, web and email ports would be all that could be hit. Faking the router (as I assume you're talking about) wouldn't be able to bypass anything, with the possible exception of DoS attacks.
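One way the private-key authentication suggested above could look is a shared-secret MAC over each control message, so a forged mapping registration is rejected. This is a hypothetical sketch only: the message format, names, and the choice of HMAC-SHA256 are all invented for illustration, not taken from AVES.

```python
import hashlib
import hmac
import os

# Invented message format: the NAT box and the AVES waypoint share a
# secret, and every control message (e.g. a DNS-mapping registration)
# carries a MAC computed with it.

SHARED_KEY = os.urandom(32)   # would be provisioned out of band in practice

def sign(msg, key=None):
    return hmac.new(key or SHARED_KEY, msg, hashlib.sha256).digest()

def verify(msg, tag, key=None):
    return hmac.compare_digest(sign(msg, key), tag)

registration = b"map myserver.aves.example -> 10.0.0.5:80"
tag = sign(registration)
print(verify(registration, tag))                        # True
print(verify(b"map evil.example -> 10.0.0.1:22", tag))  # False
```

An attacker who can spoof DNS but doesn't hold the shared key can't produce a valid tag, which addresses the forged-entry scenario (though not, as the comment concedes, denial of service).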

  • The only systems that need real IPs are servers. It's as simple as that. Multiple www and ftp sites can be placed on a single server; all the server software has to do is check the request string. (eg. '' goes to one virtual directory, '' goes to another; both are on the same server).

    I don't know what all the fuss is about.

    Local networks can use fake IPs (just use a range of IPs that are reserved for local networks; I'm not sure what they are off the top of my head, though...)

  • Actually I think that NAT is quite a nice solution for most of the problems of non-routable IP addresses (even servers can be handled with a bit of tinkering at the gateway.)

    IIRC IPv4 has had client-routed protocol packets forever, though. I don't get why you couldn't just add a loose-route optional header to the IP packet to route traffic past gateways, rather than add layers upon layers to the IP stack (which invariably seems to result in protocol stack inversion).

  • After spouting off this morning about how simple it should be to do the same thing with core IP, I did eventually go back and reread RFC 760 & 761. And I agree that it wouldn't be nearly as simple as I thought to use client packet routing.

    Among other things, it looks like client-routed IP packets were never completely specified. The packet route is destroyed as the packet is being routed (each hop specified in the route gets pulled off as its gateway is reached), and the only way of building a reverse route is by setting the packet-tracing option, which would require knowing in advance how many hops the packet will go through.

    In addition, there doesn't seem to be any supported way (at least in Linux) of using such a packet as the basis for a response. Instead the user-mode program manually copies the sockaddr_in from source to destination, and that structure only holds the basic IP address.
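For concreteness, the loose source route discussed above is IP option 131 (Loose Source and Record Route, RFC 791). A sketch of how the option bytes are laid out; the hop addresses are documentation examples, and actually attaching this to a packet would need a raw socket:

```python
import ipaddress
import struct

def lsrr_option(hops):
    """Build the Loose Source and Record Route IP option (RFC 791,
    type 131): type, total length, pointer (starting at offset 4),
    then the hop list. As noted above, gateways overwrite entries
    as the pointer advances, so the original route is not preserved
    for the receiver to reverse."""
    addrs = b"".join(ipaddress.IPv4Address(h).packed for h in hops)
    return struct.pack("!BBB", 131, 3 + len(addrs), 4) + addrs

opt = lsrr_option(["198.51.100.1", "203.0.113.1"])
print(opt.hex())  # 830b04c6336401cb007101
```

The 3-byte header plus 4 bytes per hop also shows why the option caps out at 9 hops: IP options as a whole are limited to 40 bytes.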


  • Where is this in the IETF standards process?

    NATs violate the concept of direct connections to the internet that a large part of the IETF want to see. (Strike 1)

    Where is the source code? What are the license terms? (Given CMU's lack of willingness to use a BSD-style license... Strike 2.)

    Two strikes as to why the IETF would look at this and cluck their tongues. If they are unwilling to submit this to the IETF and go through the process, this is nothing more than an academic exercise, and can be safely ignored.
  • Interoperability and a clear migration path are part of IPv6 ( Transition Mechanisms for IPv6 Hosts and Routers [], Routing Aspects Of IPv6 Transition [] and Connection of IPv6 Domains via IPv4 Clouds without Explicit Tunnels []). As a home user you can easily join the 6bone and be part of the magic. So, anyone who wants to switch to IPv6 can do so without a lot of trouble. For more info and the site where I stole those links from check out: IPv6 site []
  • by kindbud ( 90044 ) on Monday April 16, 2001 @11:09AM (#288208) Homepage
    We are suffering from appallingly short-sighted allocation policies that were in place 15 years ago.

    Stanford recently did the right thing, and gave back an entire Class A netblock [], renumbering into the remaining Class B blocks they retained ( was the block they returned to ARIN, in case you're wondering).

    Other [] parties [] mentioned in that NWFusion article seem to think they have a God-given right to hoard address space they will never use.

    According to the NWFusion article, it is estimated that only 69 million IP addresses are actually in use, out of the 160 million to 1 billion that are practicably useable given the limitations of IPv4 routing protocols.

  • by TheReverand ( 95620 ) on Monday April 16, 2001 @10:14AM (#288210) Homepage
    More security issues to contend with. Let's be honest here. How many servers do you really need? For crying out loud, you don't need 19 servers running web pages and DBs and god knows what anymore. Use your allocated IPs wisely, NAT what can be NATted, and let everything else reside peacefully behind that firewall. And wait for IPv6 already.
  • Class A - 100.00%

    There are 126 class A address spaces (1-126) (0 is used for localnet, and 127 is used for loopback). 10 is reserved for private address space by RFC 1918, so that's 125 left.

    Currently, ARIN has 67-79 listed in RESERVED-7, 82-95 listed in RESERVED-11, and 96-126 listed in RESERVED-8. The list you gave additionally has 1, 2, 5, 7, 23, 27, 31, 36, 37, 41, 42, 49, 50, and 59-60 (and those still appear to be in the same state). That's a total of 72 unused class A's that aren't even assigned to a registry, representing 28% of the address space.

    219-223 are also unused (RESERVED-5), as are 240-254 (although they don't appear in ARIN's DB), for another 8%. APNIC hasn't really begun to use 218. ARIN is currently doling out 63-66. 197 and 201 don't seem to be used.

    Additionally, there are 15 class A's that are assigned but not used (publically routed):

    • 7 (DISA)
    • 8 (BBN)
    • 11 (DoD)
    • 14 (Public Data Network...packet net?)
    • 19 (Ford)
    • 21 (DDN)
    • 22 (DISA)
    • 28 (DSI)
    • 29-30 (DISA)
    • 34 (Halliburton)
    • 43 (JAPAN)
    • 48 (Prudential)
    • 51 (UK's equiv to SSA?)
    • 54 (Merck)

    There's quite a bit of IP space left. We may need a larger addressable space, but we don't need it tomorrow; the day after tomorrow will be fine.
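
The percentages quoted above can be sanity-checked with a couple of lines of arithmetic (the /8 counts are the poster's, taken as given, not independently verified):

```python
# Each legacy "class A" is one /8 out of 256 possible prefixes.
TOTAL_SLOTS = 256

unused_unassigned = 72      # /8s not even assigned to a registry, per the comment
reserved_extra = 5 + 15     # 219-223 plus 240-254, also per the comment

def pct(n, total=TOTAL_SLOTS):
    """Fraction of the /8 space, as a rounded percentage."""
    return round(100.0 * n / total, 1)

print(pct(unused_unassigned))   # ~28%, matching the figure above
print(pct(reserved_extra))      # ~8%, matching the "another 8%" figure
```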

  • I've griped about this topic before on /. in this comment []. In this comment [], I propose a solution that essentially adds a layer between TCP and IP. While this is a very Good Solution, it has almost negative probability of occurring.

    The one listed in this article is pretty reasonable for a lot of uses. The article talks about web servers etc. That isn't one of the uses that this would be good for. You will almost always have packets doing some backtracking from the waypoint. This backtracking represents a slowdown. If there are only waypoints in the U.S., imagine two Europeans trying to use this system. It also represents a cost on behalf of the waypoint. This cost will be passed on to you, as the subscriber. If you are running a heavy, multiserver farm, I'm willing to bet that that cost will be more than buying your own IPs. Besides, there are way easier ways to have multiple webservers behind a NAT which give you more control over the load.

    I guess if your ISP (in my case AT&T broadband) set this up, then there would be no or negligible backtracking. ISPs could then entice new subscribers by allowing them to do this (possibly for an extra fee). I would probably switch ISPs, if there were a broadband ISP that offered this.

    What it might be good for is a home user with a multinode network behind a NAT who occasionally does P2P things, like network gaming and telephony. With this system, each computer could have a copy of Net2Phone running, and could be called by entering the machine's DNS name into that product. Similarly, you might be able to do this in games (not in Alien vs Predator, where you can only give an IP, but some games allow DNS).

    Where I am skeptical of the above is the speed costs. I said above there would be backtracking. There are also costs in the routing. Telephony doesn't require a low ping, but it is better with one. Gaming requires a low ping.

    This might also work well with the file sharing thing. This adds one last bit of skepticism. There is nothing in ICQ that lets me set my DNS. I don't think there is anything in Napster to specify a DNS either. Napster and ICQ "know" how to contact you by the IP address you use when connecting to the central server. There is no way to tell them how to use this system.

    Which brings us back to web servers, ftp servers, telephony, and gaming. Don't get me wrong. If telephony worked with this, and I were an international business, I would use this at the very least for intracompany calling/conferencing. I might even have my employees put their machine DNS on their business cards to promote other companies to use telephony.

    The chances that the applications will change to allow a DNS field are much higher than the chances of everyone changing to my NATCP idea above. Software, even that much software, is much cheaper to change than all that routing hardware.

    I give it a B+ for solving the problem. It may be the best mark I give.

  • Ack. I just figured out a problem with this that lowers my grade to D+, and retracts my international company from using this system.

    I am going to begin speaking as if you have read the "How does AVES work" page. If you haven't, do it now []. When I say "locks up", I mean the waypoint won't be able to create new connections to a different NATed machine.

    Essentially the problem is that there is a very easy DOS attack that cannot be removed by the design of the system.

    Basically, what you do is you make a bunch of DNS requests without ever making a connection. This will allocate all of the waypoints. If my understanding of this system is correct, a DNS lookup will allocate the waypoint to the specific machine for at least a few seconds (so that the proxy can form) if not longer (otherwise it may have problems with applications that cache the IP address, like IE, which don't do a DNS lookup for each connection).

    So, find a bunch of unique DNSs (if you use the same DNS, the system can just reuse the same locked machine) that use the same service, and begin allocating. Pretty soon, no one will be able to make a connection to any subscriber.

    Note that it is the whole machine that locks up waiting to form the bridge, because the DNS server can't know what port the remote application is going to try to use.

    This goes back to the reason why I wouldn't use this system for web servers: there are other ways of having multiple machines as web servers behind a NAT that give you more control over the load.

    I would limit this to home use, and even then, expect some script kiddies to knock out your service now and then.

  • How about this. You read my comments that I link to first. Then you would see my comments on the problems with this. Then you read my comment which talks about the same issues you bring up, but reserves judgement pending actual performance testing (as opposed to the armchair performance testing that programmers are wont to do). Then read the suggestion that might remove some of the problems.

    Incidentally, I figured out why the DOS attack above won't work. You just lock the machine down for that IP. So it will end up locking out the attacker from the services, but not the rest of the world. This is pretty cool stuff, and it can work. You can even set one of these up at home (my cable IP is semi-static, so I can use it as a DNS server). I'll raise my rating to a B-. Very good for home use. Possibly feasible for corporate use, but you would want to manage your own waypoints/DNS (to control load issues). You are still open to DOS (just from people trying to flood your waypoints), but not as open as I originally said.
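
The per-IP lockdown suggested above might be sketched like this (quota and addresses are arbitrary, invented for illustration): reservations are scoped to the requesting address, so a flooder only exhausts its own quota.

```python
from collections import defaultdict

class PerClientPool:
    """Waypoint allocator that caps reservations per requesting IP,
    along the lines suggested above. The quota of 5 is an arbitrary
    choice for the sketch."""

    def __init__(self, size, per_ip_quota=5):
        self.free = size
        self.quota = per_ip_quota
        self.used_by = defaultdict(int)   # requester IP -> live reservations

    def dns_lookup(self, requester_ip, name):
        if self.used_by[requester_ip] >= self.quota:
            return False                  # only the flooder is locked out
        if self.free == 0:
            return False
        self.free -= 1
        self.used_by[requester_ip] += 1
        return True

pool = PerClientPool(size=100)
for i in range(1000):                     # attacker floods from one address
    pool.dns_lookup("203.0.113.9", "x%d.hidden.example" % i)

print(pool.dns_lookup("198.51.100.2", "legit.hidden.example"))  # True
```

One caveat: in practice many clients share a recursive resolver, so a per-resolver limit punishes innocent users behind it too, which matches the residual DOS exposure conceded above.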

  • Who has a real story of an IP address shortage? I mean, something like an ISP saying "Sorry, we'd like to give you a DSL line, but we've just run out of IP addresses".

    This hasn't happened...yet. However, it will occur not too far down the road. Actually, I should rephrase that. Unless IPv6 is used, increasingly cumbersome methods of increasing the available IP pool will need to be used.

    The growth of broadband, WAP devices and talk of such things as ovens, air conditioners and god-only-knows what else being hooked up to the internet will rapidly drain this pool. This is why IPv6 is necessary. For a really good article on it, check out this CNet story. []

  • one problem with schemes like this is that compared to IP routing, DNS is much slower, less reliable, and more prone to misconfiguration. for another approach to solving the address exhaustion problem in the context of NATs, see RFC 3056 [] and draft-moore-6overnat-00.txt [].
  • So far we have been saved by the Alan Greenspan approach to IP address shortage. Send the economy into a tailspin, put all the "dot coms" out of business, and watch the IP addresses come rolling in.
  • The whole point, though, was that software did not have to be changed. If we are going to require a great quantity of software to be modified, we may as well move to IP6.

    I, of course, agree that games should allow you to enter a domain name instead of an IP address. I also think games should allow you to configure which ports it uses.


  • by yamla ( 136560 ) <> on Monday April 16, 2001 @10:30AM (#288225)
    This is hardly a new approach. As noted in the Slashdot writeup, this is basically similar to virtual hosts that Apache supports. Furthermore, there is a significant problem with this solution.

    This works fine for software that uses domain names to communicate. An http request, for example, resolves a domain name and includes that domain name in the request header. That is why virtual domains can work so well under Apache. However, there are other protocols, often somewhat non-standard, that do not use a domain name at any point. These protocols will continue not working under this scheme.

    Consider, for example, many multiplayer games. You connect to another person's IP address. You do not use a name. If that person is behind a NAT firewall, I do not see how this proposed solution will help at all.

    Besides, for all but huge internal networks protected by NAT, how is this any better than forwarding ports? For example, when you hit port 8080 on the firewall, it is forwarded to port 80 on apache1. When you hit 8081, it is forwarded to apache2, port 80. And so on. Any modern firewall allows this fairly easily and lets you hide a whole series of servers behind a NAT firewall.

    The downside, of course, is that the protocol of choice must be able to connect on arbitrary ports. No problem with http but probably you cannot set up your multiplayer game to do this. On the other hand, you do not need to install any new software assuming your firewall is half decent.
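
The port-forwarding setup described above is normally done in the firewall itself, but the idea can be sketched as a tiny userspace TCP relay (hostnames like "apache1" are the poster's hypothetical internal servers, not real hosts):

```python
import socket
import threading

def forward(listen_port, target_host, target_port):
    """Accept on listen_port and pipe each connection to the target,
    mimicking a NAT box's port-forward rule (e.g. 8080 -> apache1:80,
    8081 -> apache2:80, and so on)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(5)

    def pipe(src, dst):
        # Copy bytes one way until EOF, then close the other side.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

# e.g. forward(8080, "apache1", 80) and forward(8081, "apache2", 80)
# would hide two web servers behind one public address, exactly as above.
```

As the comment notes, this only works if the client's protocol can connect to arbitrary ports; nothing here helps a game client hard-wired to one port.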


  • You just have to coax it a little... "c'mon feel the burn", and "where's your second wind?" or even "you've almost acheived runner's high!"...
  • Not me personally - all of my ip addresses are very dynamic. ;P
  • In the article, they say:

    We have tested our AVES implementation on RedHat Linux 6.1 and above, although we believe a version 2.2 or above Linux kernel is the only requirement.


    If you are good at Linux/Windows/Mac network programming and are interested in doing a project, we can design a cool project for you, click here for more details!

    Do they have any plans to support *BSD? I mean, OpenBSD makes a really nice firewall, and I like the way IPFilter works. (It seems a whole lot less kludgy to have a simple text configuration file than to have a full-blown script calling the iptables/ipchains command once for each rule you have. Sigh... I wish Linux used IPFilter.)


  • This is the reverse of the dismally failing attempt to push multicasting, by concentrating on the backbone.

    You don't seem to understand how the MBone [] works. It's the opposite of concentrating on the backbone. Users behind the multicast router get real multicast, and the router tunnels it over unicast IPv4.

    The lesson of the MBone is that even when you can put real multicast on people's desktops, the infrastructure still resists change.

  • Also looks exactly like a post under this thread: tm l

    (look at post #44)

    Replace government search-engine with IP exhaustion and you have some instant karma whoring!


  • No, you are right, and that is a _very_ isolated example. I think it is gonna be a long time before there is widespread support for ipv6 in common applications like games and such.

    It seems it boils down to short-sighted economics.


  • So he has just created another level for computers to work on? Now clients and servers would need to go through another step after ARP, DNS and all the other stuff we have to deal with. IMHO if we just took all this time spent trying to sidestep the IPv4 space problem and put it into converting software and hardware to IPv6 we would be better off in the long run. But hey, that's just me. (note: this is not a flame)


  • by dmccarty ( 152630 ) on Monday April 16, 2001 @10:15AM (#288244)
    I appreciate all the work your friend has done, but why try to extend IPv4 when IPv6 is already here? This reminds me of companies producing "blazingly-fast" ISA video cards years after the PCI and AGP specs were defined...
  • by tringstad ( 168599 ) on Monday April 16, 2001 @01:03PM (#288250)
    Your post is actually interesting, but completely incorrect as there is no such thing as Class A, B, or C addresses anymore, nor have there been for a long time now.

    In November of 1996, RFC 2050 [] regarding Internet Registry IP Allocation Guidelines was published, and Classless Inter-Domain Routing (CIDR) has been used ever since.

    Unfortunately, some people, and certifications (coMCSEugh), cling to the old Class structure, and demand that people remember it, in order to go about properly mucking up large networks with a limited understanding of routing protocols (TCP/IP is a routed protocol, not a routing protocol).
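
To illustrate the classless point, here is a quick sketch using Python's standard ipaddress module (which postdates this thread; the addresses are documentation-range examples): under CIDR the prefix length is explicit and arbitrary, rather than implied by the first octet.

```python
import ipaddress

# Under CIDR, a prefix can be any length; "class" no longer determines
# network size. A /22 is neither a class B nor a bundle of class Cs:
net = ipaddress.ip_network("198.51.100.0/22")
print(net.num_addresses)                 # 1024 addresses

# Aggregation is just shortening the prefix:
print(net.supernet(prefixlen_diff=2))    # 198.51.96.0/20

# The old "class C" 198.51.100.0 would have been forced to exactly a /24:
print(ipaddress.ip_network("198.51.100.0/24").num_addresses)   # 256
```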


  • An http request, for example, resolves a domain name and includes that domain name in the request header. That is why virtual domains can work so well under Apache.

    It's worth pointing out that versions 0.9 and 1.0 of HTTP (which conforming servers are required to be backwards compatible with) don't send the hostname in the request header. That's why Apache has that workaround where you create a pseudo-directory for each virtual host (i.e. would be listed as; assuming that 'www' is the machine acting as the server for the virtual hosts, a request to would get treated the same as and

    Also, I'm not sure if it's still the case, but there was apparently a chicken-and-egg problem with virtual hosted SSL at one point. In order for the server to get the appropriate 'Host:' header from the client (necessary to determine which virtual host to use), it needed to provide the client with its public key. In order to provide the client with the public key, it needed to know what virtual host the client wanted to connect to.

    So even HTTP, which I agree is one of the more ideal examples of a hostname-driven protocol, has its shortcomings. In that light, it makes this solution appear even less useful. However, that's not to say it is completely without merit -- it helps illustrate some issues that designers should keep in mind when cooking up new protocols.
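
The Host-header distinction this subthread hinges on can be shown in a few lines (the domain names and site labels are invented for the example): without the header, a name-based virtual host server has no way to tell which site an old client meant.

```python
def build_request(path, host=None):
    """Build a minimal HTTP request. Without a Host header (HTTP/1.0
    style), a name-based virtual host cannot be selected; HTTP/1.1
    made the Host header mandatory for exactly this reason."""
    if host is None:
        return "GET %s HTTP/1.0\r\n\r\n" % path
    return "GET %s HTTP/1.1\r\nHost: %s\r\n\r\n" % (path, host)

def pick_vhost(request, vhosts, default):
    """Crude routing, roughly as a virtual-host server might do it."""
    for line in request.split("\r\n"):
        if line.lower().startswith("host:"):
            name = line.split(":", 1)[1].strip()
            return vhosts.get(name, default)
    return default   # old clients: every name collapses to the default site

sites = {"alpha.example": "site-A", "beta.example": "site-B"}
print(pick_vhost(build_request("/", "beta.example"), sites, "site-A"))  # site-B
print(pick_vhost(build_request("/"), sites, "site-A"))                  # site-A
```

The SSL chicken-and-egg problem above is the same issue one layer down: the server must choose a certificate before any Host header arrives.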

  • I use port address translation (port forwarding) at home and it works great with all my apps. In addition, there are very few services that don't respond to this technology anymore. I also have the option of running a DMZ or "1 to 1 NAT" to further assist me in the "special cases." I can see very little practical use in the technology that this article proposes. Sounds like someone trying to reinvent the wheel to me... but that's just my opinion.
  • Why not spend billions upgrading all your routers, network cards and operating systems for a new address format? I mean, heck, the economy would get quite a boost as all your current stuff would be garbage. (except as a standalone)


  • Yadda Yadda Yadda
    Oh, yeah! Y-A-W-N

    We already have a solution to fix the IP address depletion problem, not to mention other issues with the current IP infrastructure.
    It's called (drumroll)

    IP V 6

    Perhaps you've heard of it?

    Always amazes me why people bother directing such a large amount of energy to solving a problem which has already been solved.

    Can anyone say "fragmentation"?
  • These documents indicate that hosts who want to use IPv6 need a DNS server that will support it. Unless you run your own DNS, which is not something that most home users do, this is dependent on the whim and pocketbooks of ISP's and BB providers.

    You may run your own DNS, but I can count the people I know who would get any use out of their own DNS server on one hand.
  • by Bonker ( 243350 ) on Monday April 16, 2001 @10:35AM (#288282)
    AVES, and other domain services are probably going to be the way we do things for a long time to come. Despite the fact that the technology exists, the sheer cost of upgrading the *entire* internet to IPv6 is prohibitive.

    If you're Cisco, you're interested in getting IPv6 capable routers out the door, but recognize the fact that very few people want or need them yet because the 'rest of the internet' doesn't use IPv6 yet. Even if you can muster the cash to make the code change (which Cisco has, if I remember correctly) you still have to provide combo routers and switches, and hope for market penetration to make the investment in IPv6 worth it.

    If you're an ATT or a Worldcom, you more than have the cash to do it, but it will make your bottom line look bad if you spend millions on upgrading routers and switches. As we all know, in the U.S. nothing is more important than the bottom line (gag).

    If you're a home user, you'd love to go to IPv6 so that you can run your own OpenNap, Icecast, FTP, Web, etc... server, but realize that you will never convince your ISP to allow you to do so since they're still using IP4 protocols and working with backbone providers who use IP4 protocols.

    So you use AVES, making it possible for those who would otherwise be forced to use it to put off IPv6 just a little longer.
  • Because one is using DNS as the map to the NAT'd server, the server must actually receive the DNS address as part of the request. HTTP is the only common "over the internet" internet protocol that has this functionality.

    I'm not sure I understand what you're referring to here. If the client makes a DNS request (can happen with HTTP, FTP, SMTP, POP, etc.) for a NATed server, then the DNS server will give the client the IP address for a waypoint.

    At the same time that the client is receiving the info on the IP address, the waypoint is receiving info from the DNS server that it should expect a packet from the client's IP address and forward it to (the address for the NAT box and a predetermined port number based on the service requested). At some point, the waypoint or the DNS server must also notify the NATed server of the originating IP address so it can serve the request without having to travel back thru the waypoint. I don't know if this is a separate packet, or if the TCP header is unmodified, or what. I didn't see any details on that.

    The NAT box receives the forwarded packet, and since it recognizes the waypoint (or does it simply let all packets thru? it's not clear from the write-up AFAICT), it lets the packet thru and forwards it to the NATed server.

    The NATed server processes the request and replies to the client's original IP address. A tunnel thru the NAT box has now been opened.

    A way to bypass the waypoint for the rest of the "conversation" might be to set an extremely low TTL on the DNS records. The DNS records (Dynamic DNS) would be automatically updated from a request by the NAT box (or waypoint) once the initial request is served, along with a higher TTL. The tunnel should now be open on the NAT box, and it can set a DNS record with its IP address. The client would clear its DNS cache of the original record and retransmit a DNS request, which would give it the IP of the NAT box.

    Errr...wait...that wouldn't work for at least one reason. If a DNS request came from another source during that conversation, it would receive the NAT box's address, but the NAT would drop it, as no connection was established. The only way I can think of right now to implement it, would be to have the DNS server keep track of the requests served, and after serving a client the waypoint IP, serves that client (and only that client) with the NAT IP.

    This is a very nonscalable, kludgy, and high overhead proposition. On the one hand, you can route all the client to server traffic thru a waypoint. That's a lot of bandwidth if people actually use this. OTOH, you can try to hack together what I mentioned with DDNS. That's a lot of overhead, and may require the client to install software to modify lookup times and such. Oh well...nice research project at least. At least he got featured on Slashdot (every CS/CE student's dream, isn't it?)
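
The control flow walked through above can be modeled end to end as a toy (every name and address here is made up, and the real protocol details are, as the poster says, unspecified in the write-up): a lookup returns a waypoint's address and simultaneously primes that waypoint to forward the expected client's packets.

```python
class Waypoint:
    def __init__(self, ip):
        self.ip = ip
        self.rules = {}                   # client_ip -> (nat_ip, port)

    def expect(self, client_ip, nat_ip, port):
        self.rules[client_ip] = (nat_ip, port)

    def handle(self, client_ip, payload):
        if client_ip not in self.rules:
            return None                   # unexpected source: drop
        nat_ip, port = self.rules[client_ip]
        return ("forwarded to %s:%d" % (nat_ip, port), payload)

class AvesDns:
    """Toy model of the AVES control flow described above: a lookup
    hands the client a waypoint IP and tells the waypoint where to
    forward that client's traffic."""
    def __init__(self, waypoints, registry):
        self.waypoints = waypoints        # list of Waypoint objects
        self.registry = registry          # name -> (nat_ip, internal port)
        self.next = 0

    def lookup(self, name, client_ip):
        nat_ip, port = self.registry[name]
        wp = self.waypoints[self.next % len(self.waypoints)]
        self.next += 1
        wp.expect(client_ip, nat_ip, port)  # control message to the waypoint
        return wp.ip                        # what the client's resolver sees

dns = AvesDns([Waypoint("192.0.2.10")],
              {"server.hidden.example": ("203.0.113.5", 8080)})
wp_ip = dns.lookup("server.hidden.example", client_ip="198.51.100.2")
print(wp_ip)   # 192.0.2.10 -- the client talks to the waypoint, not the server
```

Note the model makes the scaling complaint concrete: every client-to-server byte transits a waypoint unless the DDNS hack above works.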

  • This appears to be about as revolutionary as a coal-fired pocket calculator. Sure, it addresses a need, but in a round-about and probably unsustainable way.

    Individual machine addressing through NAT has always been possible using free, commonly-available VPN tools. I've done this for my home machines for years by bouncing traffic through a colo box. It works because I'm willing to pay for the bandwidth. Who's going to pay to run these "Waystations" when they could instead put their resources into fine-tuning IPv6?

  • by osorronophris ( 318023 ) on Monday April 16, 2001 @10:20AM (#288293)
    I'll probably get flamed for this, but I read in an interview that IP6 was ready to go and NAT is often not needed. Apparently the only thing holding the net back from adopting IP6 is hardware companies not making the proper equipment.

    Since IP6 is a logical solution to the address problem, is there any reason we shouldn't push hardware companies to adopt it instead of focusing so much on workarounds?

  • AVES breaks subtle assumptions that a lot of software makes about the relationship between names and IP addresses. But, hey, it works in the simplistic cases, so it's largely backwards compatible, right? Sorry, I don't think that's a good approach.

    The problem isn't a shortage of IP addresses, it's a shortage of well-known ports. There are only so many port 80s and port 23s to go around. However, there are a lot of other ports, and there are good, reliable, safe ways of forwarding them (firewall forwarders, ssh, SOCKS, ...). Rather than fixing subtle assumptions about name/IP correspondences in lots of software, I'd rather be fixing software that hardcodes port numbers; the latter is much easier to find and fix.

    AVES is a prototypical example of how we create messes and maintenance headaches: it looks like it solves most of the problem and, hey, we can fix the remaining problems, right? But it isn't the right thing to do, and the long term costs of creating such a mess would be high. Fortunately, I don't think it will catch on: ISPs don't want people to run servers anyway.

  • You can run Apache or FTP or other servers on whatever port you like, and people do. Perhaps you haven't noticed.
  • That's why the most commonly used form of naming network services these days, URLs, includes port numbers.
  • can't a nat box be set up for an easy port forwarding scheme to enable hosts to be found behind a nat? if i want to get to a mail server behind a nat, i forward all "standard" requests from the nat interface to my mail server, etc...

    after reading through that stuff, i didn't see anything that new or breath-takingly cool. so a dns lookup scheme that works with nat to do host forwarding instead of port forwarding. true, i hadn't thought of it meself, so i'll give them that credit.

  • which will bollix up many kinds of firewalls.

    The fourth diagram on the "How Does Aves Work? []" page shows this clearly.

    An example: my home firewall sees an HTTP request go out to, for which (according to the explanation) a DNS lookup gets an IP address []. [] is actually the IP of an "AVES waypoint" host. The waypoint processes my original HTTP request, and sends it along to the actual machine behind some NATbox (which has an IP of []) somewhere, which replies to my browser. But the reply doesn't originate from [], which is where my firewall is looking for a reply to the original query -- instead, it arrives with a source IP of [], which is the IP of the NATbox behind which actually sits. To my firewall, this looks like an incoming connection attempt that is unrelated to any outgoing traffic, so it gets DROPped on the floor.

    So, far from requiring no upgrades on the part of the end-browser, this scheme will require anyone with a firewall or a NATbox (such as my P90 running ipchains [], or a linksys BEFSR41 [], or some other cablemodem/DSL access sharing device) to understand the protocol and deploy mechanisms for handling it.
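
The drop described above is ordinary stateful-firewall behavior, and can be modeled in a few lines (the addresses are documentation examples standing in for the elided ones): replies are only admitted from the address the outbound packet was sent to.

```python
class StatefulFirewall:
    """Conntrack-style model of the failure mode described above:
    a reply passes only if it comes from the exact peer that the
    outbound connection was opened to."""

    def __init__(self):
        self.expected = set()         # (local, remote) pairs we initiated

    def outbound(self, local, remote):
        self.expected.add((local, remote))

    def inbound(self, local, remote):
        return (local, remote) in self.expected   # otherwise: DROP

fw = StatefulFirewall()
fw.outbound("10.0.0.5", "192.0.2.10")        # HTTP request to the AVES waypoint

print(fw.inbound("10.0.0.5", "192.0.2.10"))  # True: reply from waypoint passes
print(fw.inbound("10.0.0.5", "203.0.113.5")) # False: the direct reply from the
                                             # NAT box looks unsolicited
```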

  • by zfight3r ( 443714 ) on Monday April 16, 2001 @11:21AM (#288312)
    Short sightedness has caused the depletion problem (if you can call 160 million possibilities short sightedness)...but the issue is kind of moot right now.
    IPv6 is coming...and we won't run out of addresses. We need creative ways to deal with problems that we have right now as we wait for IPv6.
    The issue of NATed addresses is a real one and a barrier for peer-2-peer communications; not the hype, but true application-to-application communications that can allow networks to understand their state and topology to make intelligent routing and communications decisions. In order for this to occur the Internet needs to go back to its roots of true bi-directional communications. Publishers cannot simply view nodes as passive receivers of content... but as active participants on the network at large with important things to say and receive. The current trend for ISPs to provide asymmetric bandwidth is our next barrier, and a trend that hopefully is reversed as more devices and home users demand to be publishers of content and information.
