Trouble Ahead for Internet Routing Tables?

joabj writes: "This article in Light Reading, a fiber optics news page, claims that the Internet's routing tables are ballooning in size and within a couple of years "equipment won't have enough processor power and memory to handle them." The article draws its conclusions from the dramatic increase in the number of BGP routing tables over the last six years and the predicted need for more IP addresses for all those pervasive computing goodies we've been promised."
  • IPV6 will actually increase the size of the routing tables, because routers will have to support both IPV5 and IPV6 tables
  • Hmm. The entire @home network moved onto a single class C network address? Nahh.. But possible.

    And why not? When Sympatico started their DSL service in eastern Canada they placed the whole province of Nova Scotia on a 10.* net. People who need to run servers have to sign up for a business package to get a routable IP address.

  • The tier 1 NSPs weren't huge corporations 5 years ago... at least not on the same scale they are now.

  • I don't see how this is different from IP-IP encapsulation.

    As for addressing the cost of renumbering, we should recognize that IP addresses have become a scarce (in the economic sense of the word) resource, and should be now priced. Given a cost for holding onto an IP address, people will figure out how to relinquish the ones they're not using.
  • I don't know about other manufacturers but I do know that Cisco "approved" memory for their boxes is ridiculously expensive. Ridiculously expensive is an understatement. We bought a 3640 with the standard 32meg (I think, it may have been 16) of memory in it and to upgrade to the 128meg we needed was $5k. If we start needing huge amounts of RAM for just basic things like BGP with 2 route tables it'll be very hard for smaller companies to even be able to function.
  • Can someone translate this posting into English for me?
  • by billstewart ( 78916 ) on Wednesday November 01, 2000 @12:26PM (#657353) Journal
    The main problem the article addresses is not the supply of IP addresses, but the rapid increase in the number of BGP AS numbers, which increases the amount of memory and CPU that routers need to track and calculate routes. We've largely fixed the problem of regular IP addresses, between CIDR, RFC1918 10.x addresses behind firewalls, and virtual hosting for web sites. So why do people need their own BGP addresses? It's not just for ISPs any more - there are about 5-10,000 ISPs but 100,000 BGP addresses in use.

    I think the answer is that, as IP connectivity from the outside world becomes mission-critical for business applications, businesses often want to deal with more than one ISP, or at least more than one technology (e.g. cable modem plus DSL) so that their customers can reach them even if their primary ISP is down, and to improve performance. To some extent, you fix this by using reliable ISPs and hosting services, or by using fancy DNS tricks to make it easy to find the connections that aren't down or that will give the fastest connections. But ultimately, you get yourself a BGP number and advertise your routes diversely so you can get diversity.

    How do we find alternatives to this? Either ISPs need to come up with ways to handle it for their customers, or routers need to get bigger and faster, or we need alternative protocols that make it easier to avoid BGP. A good local ISP can provide this - buying service from a couple of big carriers, and providing enough transparency and responsiveness that customers trust them, and enough customers that their one BGP number supports multiple customers. Hosting centers also do the same thing, and let their customers avoid access circuits as well. But it's tougher to make it work for customers who have offices in multiple locations.

  • First of all, I've set up Potsdam State so all their client IP addresses come out of a bootp/dhcp server using static assignment. So their cost to switch to a completely different network is trivial. Change a few servers, edit /etc/bootptab, done. If your site doesn't do this, then it's poorly managed.

    I can't say how many addresses your site needs. All I can say, as an economist, is that an IP address should have a price. If the price is worth paying, you'll pay it, and you'll have the addresses you need. Or if you have too many addresses, it makes sense to sell some of them. And if the price of an IPv4 address becomes high enough, it will justify a switch to IPv6.

    Isn't it amazing how well a free market works? Instead of having to have endless discussions, and wailing and gnashing of teeth about routing tables and switching to IPv6, you just turn IP addresses into private property and let the market work it all out.
  • I recall a recent post on the NANOG mailing list that ARIN has started delegating CIDR from [...]. The post is here. []
  • Are you suggesting that 5 years ago MCI and IBM weren't huge corporations? Does the PSINet of today somehow dwarf them? Please. Data services and telecommunications have been big business for quite some time, and they certainly were 5 years ago.
  • On a [un]related note, it was pointed out by Avi Freedman [] at ISPCon (and I'm sure elsewhere) that UUNet AS 701 + Sprint AS 1239 = 1940. And they say they aren't fascist. =]
  • That's an interesting point about a direct routing table. In a couple of years, putting a 64-bit processor into your router with 48 bits of physical address space might be entirely possible. More than enough space to keep a route for every single address. Your route-lookup time should be O(1), right? If you actually had a network route, you could just store it as a bunch of individual host routes. Cool.

    Sure, BGP would probably freak out, and it might not be a good idea to update the core routing table every time some laptop reboots. The table would never converge, but what the hell? Why not? In a few years, the necessary memory won't be worth squat. Embedded processors will be running at 1GHz. BGP would probably need some updates to keep route flapping down. It sounds scary, but in a few years this will be totally doable.

    It gives quite a few advantages, also. 100% of addresses are portable. Addresses can be handed out without any concern for the effect on the routing table, making for very efficient distribution of IPv4 addresses. IP mobility becomes a non-existent problem. Most importantly, I can finally have my own personal, portable, routable /32 network. Maybe I'll multi-home my DSL connection. Yeehaw! :-0> What a great idea! I'm off to the patent office...
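A scaled-down sketch of the flat "one route per host" table the comment imagines (a toy 16-bit address space instead of IPv4's 32 bits; all names hypothetical): lookups are a single O(1) array index, but a network route degenerates into a run of identical host entries.

```python
# Flat per-host route table over a toy 16-bit address space.
# A real IPv4 table built this way would need 2**32 entries.
NUM_ADDRESSES = 2 ** 16

# One byte per entry: the egress interface index (0 = no route).
table = bytearray(NUM_ADDRESSES)

def set_route(addr: int, interface: int) -> None:
    table[addr] = interface

def lookup(addr: int) -> int:
    # O(1): a single array index, no longest-prefix search.
    return table[addr]

# "Network" routes become runs of identical host routes:
# here, one 256-address block (the toy analogue of a class C).
for host in range(0x1200, 0x1300):
    set_route(host, 7)

assert lookup(0x1234) == 7
assert lookup(0x4321) == 0  # no route installed
```

The trade-off the comment hints at is visible even here: installing a block route touches every covered entry, which is why updates (not lookups) become the expensive operation.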
  • Not to mention those overly-zealous about privacy issues would start to scream if it could be determined where you live by your IP.
  • Surely the situation isn't as bad as portrayed. The IPv4 addresses are fast running out, and subnetting is not in the domain of the top level routers. There is a physical limit on the number of top-level routes which can exist, which should sit at around 2^23 plus a bit (something like 8 million).

    The problem really comes in with IPv6. With IPv6 the whole address space expands to a much larger scale. Now, I don't know a great deal about IPv6 addressing, but I have always assumed that the higher order portion of the address is much more location based.

    Can anyone comment on this?

  • My apologies. It was a quick post, and I didn't notice the not-quite-you name he had. An honest mistake.

    I have changed my sig to reflect it, and made sure to make the user info a link in it. :)
  • by BeBoxer ( 14448 ) on Wednesday November 01, 2000 @10:22AM (#657362)
    This seems to be more of a scare article than anything else. This is primarily a problem of memory. Given the rapid advances in the RAM industry, I would be surprised if the global routing table could grow too fast. Even the article itself says that within a couple of years, routers might need gigabits of memory. So what. Is spec'ing out a whole GB of RAM on a > $100K router really going to be a big deal in two years? Hell, if you bought 1GB of RAM for Cisco's top of the line router (12000 series GSR), you would spend ~$30K today. Moore's Law says that cost will drop to less than $10K within a couple of years. That's chump change on a serious router. Cisco charges that much for the power supplies alone.

    Let's face it. The global routing table is never going to stop growing. It's certainly never going to get any smaller. Every year the core routers will need more memory than the year before. Is this a bad thing? That the Internet is growing? I don't think so. Personally I think everybody who wants it should be able to get portable address space. But, that probably would melt down the routers. Not to mention exhausting the IPv4 address space ;-)>
  • "equipment won't have enough processor power and memory to handle them."

    Are they forgetting Moore's law?
  • Anybody familiar with routers has seen this coming for quite some time. It's not uncommon for routers today to need 128-256 megs (or more) to hold the routing table, and people are buying larger and larger routers to handle it.

    just imagine what will happen when IPV6 gets used in a widespread manner. (I still advocate IPV6)
  • I wouldn't say that it's stupid for these items to have static IP addresses. Some small devices will need them to do all of the neat little things we want them to do via the 'net. However, I do think that NAT needs to be used in more situations where client-pull rather than client-push technology is being used. For example, an office of 20 computers with normal, web surfing, email downloading users. Rather than giving them a /27 network of IP addresses, they should use NAT. I'm seeing increasing laziness in the industry towards NAT. Granted, if the network needs real IP space, use it. But definitely do NOT use it if you don't need it.
  • IMNSHO buy Kingston RAM for your Ciscos. I've never experienced a problem with the Kingston stuff.
  • It's all about RIPv1 over the WAN links. =] Nothing like 89000 prefixes every 30 seconds... Oh, what's that you say, RIPv1 doesn't aggregate?

    Yes, this is supposed to be somewhat un-humorous. It's still before noon, I'm allowed stupid jokes.
  • Ummm, I'll give it a try.

    (* Babelfish Mode On *)

    Fweep hanburger splodge router the aggregate, nerd meep fubar rezrov gaspar.

    Alternatively, it might translate to:

    I hate renumbering. Everyone I know hates renumbering. We can afford to buy more routers, and have them load-balanced. Exponential growth isn't a problem, provided it includes your bank balance as well as your throughput.

  • by jd ( 1658 )
    You just need whopping big lookup-tables at the borders of the IPv6 island. Everything in the island can be pushed around by encapsulating the IPv4 packet in an IPv6 one.

    (That's why it's amazed me that the IPv6 developers chose NOT to focus on IPv4-in-IPv6, but rather on IPv6-in-IPv4, which is relatively useless, once you pass the half-way mark.)

  • That's part of the problem. As you chop up the address space, the size of the routing tables grows. If the smallest set of contiguous addresses routed on the global network is a class C (256 addresses) then you'd potentially have 2^24 (16 million) route entries. Even if the smallest block of portable addresses is limited to a /20 (4,096 addresses) the routing tables could potentially hold 2^20 (1 million) entries. That translates to a table around 16 to 32MB in size that has to be inspected for every packet passing through the router. That's going to take a measurable amount of time even if you wire it directly into the silicon at gigahertz speeds.
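A quick back-of-envelope check of the table sizes quoted above:

```python
# One route per /24 (class C sized block) out of the 2**32 space:
route_entries_class_c = 2 ** 32 // 2 ** 8
print(route_entries_class_c)       # 16777216 (~16 million)

# One route per /20:
route_entries_slash_20 = 2 ** 32 // 2 ** 12
print(route_entries_slash_20)      # 1048576 (~1 million)

# At 1-2 bytes of next-hop data per entry, the /24 case needs
# roughly 16-32 MB of table, matching the comment's figure.
print(route_entries_class_c * 1 // 2 ** 20, "to",
      route_entries_class_c * 2 // 2 ** 20, "MB")
```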
  • 64.x.x.x is definitely in use. My ISP ( has a block of 64.50.x.x. It just isn't used as a class A.
  • Actually yes, I am suggesting that. The part of MCI that handled Internet routing was not huge. UUnet was not part of Worldcom, etc. I imagine that the tier 1 NSPs are a lot more bogged down with politics today than they ever were 5 years ago and therefore much less able to force a new and likely largely untested protocol through.

  • Fascinating... I used to work for a company that was with the worst offender (BCnet - that's the BC Government's networks).

    Doesn't surprise me at all that they could be doing things MUCH more efficiently. There's so many groups politicking there, it's terrifying.

  • You'll just see more route aggregation. Why is this particularly a problem? Renumbering isn't that hard.
  • Putting a price tag on IP addresses does NOT solve anything, it only provides a way to tax the internet. If you start treating IP addresses like real estate, you're asking for a massive set of problems, just like auctioning off the airwaves to cellphone providers, instead of leaving them open for all takers.

    Bad idea, bad karma
    Mike Warot, Hoosier

  • ... Film at 11.

    (Sorry, I had to do it)
  • I can imagine it now!

    "The MS Internet will be based on NetBEUI enabling everyone to leverage the power of Windows TM."
  • If you really think about it, the Internet is really just the world's biggest LAN (or more accurately, WAN). No LAN I know of can handle X amount of traffic without having some problems, and since the internet's growth is increasing exponentially, either people are going to have to spend more money fixing the net, or remove some users. (while the latter seems more fun and easier, the former is often considered more PC (too bad))
  • Metcalfe predicted the imminent collapse back in 1996, but it never happened. Smart people invented CIDR and other routing tricks to avoid the problem. I'm sure we'll find a way around this one... and if not, switch to IPv6.

    Address to University of VA: eri []
    NPR program: ntp /npr/nf6A16.html []

  • Here is a reply from a co-worker to whom I sent the article.

    There are several statements in that article that are incorrect. Perhaps the biggest is:

    "This growth results from the proliferation of Internet devices, each of which requires an address"

    No, the growth results from people not adhering to the (once upon a time) "rules" for how to announce networks. The idea of announcing a /24 all around the planet was, at one time, a completely laughable idea. Nowadays, with everybody assuming that they have as much knowledge and capability as everyone else, people have the attitude that they will announce whatever they darn well please and nobody can tell them different.

    Until the day comes when there is one governing body for the Internet, the whole thing will just be a toy to keep trade rags in business. Imagine if all the little cable or phone companies decided for themselves about what frequencies they used or what area codes they used. Same thing.

    Now me: Juniper, Cisco, and Extreme Networks all have products that come with 256 MB by default, and they are all upgradable. That's a fairly big routing table. With Juniper leading the way with their BSD-based routers, and the new Linux kernel supporting all the advanced routing options, we are going to see some cheap Linux/BSD-based routers in the very near future. And because it can be PC-based (provided you had a nice motherboard with a very wide bus) you could easily and cheaply add 4 gigs of RAM. Now THAT is a huge routing table. A dual or quad 1000 MHz PC-based router... sounds pretty good to me.

  • by Fervent ( 178271 ) on Wednesday November 01, 2000 @09:57AM (#657382)
    I'm concerned with the increasing occurrence of giving static, permanent IP addresses to relatively dumb items. Palm Pilots, refrigerators, guns in the army, etc.

    Why do devices that only really need temporary internet access get permanent IP's? If we didn't have all of these extra devices crowding available IP numbers, perhaps there would be no need to develop a more complex numbering system.

  • I am indeed thinking tier 1 NSPs will roll out a different protocol. It could very much happen VERY quickly if for some reason BGP was imposing a significant cost/performance overhead as opposed to an alternative solution. The main reason why changing from BGP is crazy right now is that BGP meets their needs and it's in place. Once that's no longer true, change will take place quite rapidly.
  • I mean IPV4 not IPV5 (which was scrapped)
  • This seems to be more of a scare article than anything else. This is primarily a problem of memory. Given the rapid advances in the RAM industry, I would be surprised if the global routing table could grow too fast.

    The more routing entries you have on a router, the slower it gets. Even the top-end Cisco (or similar) routers succumb to this problem eventually.

    Oh, and for the high-wattage power supplies: Cisco charges $10K for a power supply, singular, alone. Or more. Now that's frightening. Don't forget however that RAM prices, like the stock market, are only very loosely tied to reality. Not that it'll stop people from putting more memory on the routers.

  • No, we don't need IPv6. That's why it hasn't been implemented yet. We can get along with IPv4 just fine by aggregating routes. But before we can do that, we need to scavenge IP addresses.

    Yes, the decision to allocate all those class B's was reasonable at the time. It's not reasonable now, and those IP addresses are needed.
  • They've been saying it for years now. It's still true ...
  • I'm no network guru, but taking from another reader's example on Potsdam University, why do they even need Internet IP addresses for everyone? Couldn't they just settle for a handful and set up a gateway for the dorms? The only reason one really needs a dedicated globally-routable IP is for a server and some multiplayer games (Quake isn't one of them). Same thing for most businesses; they don't need 64k IPs when most of the terminals are used for only web browsing. How many boxes really need to be directly accessible from anywhere in the world? Certainly not 4 billion.
  • Uh huh, and one hacker tries to get in and that IP is blocked, hence an entire city block is blocked. That makes great sense. Why didn't anyone think of this before? We just get a bunch of volunteer CCIEs to set it all up and manage the block-based network for all of the (l)users.

    Or we can set it up like a neighborhood watch! YEAH! One person in your neighborhood is responsible for maintaining the NAT! _and_ there will be a rotating schedule: tonight is your night, tomorrow is my night, the next day is my mother's night. Yeah, sounds like a great plan...

  • Um..why is this moderated as flamebait?

    So you could be moderated up to 3 by posting a reply about how it was unfair, of course! Aren't those moderators just the nicest people?
  • A few folks have talked about how we're running out of IPv4 addresses and need IPv6 yesterday. Others are saying "CIDR fixes this, or at least mitigates it."

    All I have to offer is data. CAIDA has a chart of the IPv4 address space. Look at all of that wasted space. []

    If we could CIDR-ize and allocate IPv4 more efficiently, the problem would go away.

    Will we ever go to IPv6? If there's a compelling reason to (and not just "it's better" or "it's more technically correct"), then we will. Otherwise, we'll continue to hack on IPv4 for as long as it'll hold up.

  • claims that the Internet's routing tables are ballooning in size and within a couple of years "equipment won't have enough processor power and memory to handle them."

    Am I the only one who thinks it foolish to try and predict the kind of processor power we will have in a couple years? A couple years ago, the routers available probably wouldn't have been up to par with the traffic the internet currently generates. I'm no expert though...

    Own your own piece of! []
  • Aargh. I thought I was hitting "Preview" but I guess I hit "Submit". Sorry about the lack of a closing "</a>".
  • So you're saying that because of mobile support, every packet has to get about 40 bytes larger, thereby raising traffic on the backbones, LANs and everybody else's networks? Hardly a good idea!

    I do note that "Class A" address space 64-126 was never issued, so a LOT of CIDR blocks can be released there.
  • You say that such items do not need static addresses because you only do short Internet-related activities with them.

    I submit that once you have a static IP and permanent connection, it enables you to use that device in new ways. I know once I set my computer to stay connected 24x7 with a 56k modem, my work began to revolve around the Internet. Now with DSL, it is even more so.

    It's one of those things you don't need until you have it.

  • by Xenu ( 21845 )
    I thought this was fixed by CIDR and route aggregation. Plus, many of the backbones will not route to allocations smaller than X, where X may change if their routing tables get too big. This forces people with small allocations to move to a larger, aggregated allocation, or live with the fact that their IP address space is no longer routable.
  • The problem won't be just ram, but the amount of time a lookup takes.

    Sure, you can put a gig of RAM in the router, but you then have a gig of data to do a find upon. That's what will really hurt it. Memory isn't a problem, it's speed.
  • by Dungeon Dweller ( 134014 ) on Wednesday November 01, 2000 @10:42AM (#657398)
    Yeah, if every coffee maker in the world gets its own IP address, is hosting a website about its personal stats, and can be turned off and on via the web... We're kinda fucked. The big question being, who really wants this shit? A lot of stuff will be on tiny intranets, so I doubt that we really have much to worry about. I imagine that your coffee maker and fridge will post to a household webserver; that way you can get aggregate data which is much more manageable, and also much more meaningful/useful anyways.

    Now you will receive spam for expensive coffee beans every time you make a few pots! Enjoy!

  • Any time I see dire predictions like this, I recall the story that, in the early 1900s, the fledgling telephone system was supposed to come to a grinding halt because the number of operators required would soon exceed the entire female population of the U.S. Of course, direct dialing ultimately made the use of operators for each call unnecessary. I'm confident that the internet will survive this routing 'crisis' as well.
  • OK, first of all, RAM is cheap. The issue is CPU cycles to process the routing table. Second, auto-aggregation will never work, because there are networks that have legitimate reasons for de-aggregating their blocks of address space. Then again, there are others that do it just because they can, but unfortunately there isn't a good way to tell the former from the latter.
  • Since I'm a Symbol employee, a quick clarification:

    The Symbol SPT1700 Series either have a wireless Spectrum24 network card, or a Novatel Minstrel radio modem. The Spectrum24 card can either use a static IP address, or talk to a DHCP server. The radio modem has a static radio address, and an IP is given to the owner when s/he signs up for a wireless account with some provider.

    The SPT1700 is just the base model with no wireless stuff. The SPT1740 has a Spectrum24 card. The SPT1743 has an 11 megabit wireless network card. The SPT1733 has the radio modem.

    If you really want to know more about the above models, head over to [] and look them up. Username and Password are "guest"

    Note that the SPT1700 line has a Type II PC Card slot, so all the above wireless stuff is just a PC Card added to the device at the factory.

    "I may disagree with what you have to say, but I will defend to the death your right to say it"
  • by Phizzy ( 56929 ) on Wednesday November 01, 2000 @10:46AM (#657402)
    Alright.. so first off, this isn't news. Anyone following the NANOG list knows that the routing table is increasing exponentially with the rest of the internet. There isn't anything that can be done about that, realistically. The aggregation Nazis will scream day and night that they can fix the Internet if you would just let them aggregate things properly. Fine, but that would require a total renumbering of the internet, so it isn't at all possible with IPv4, unless everyone out there really feels like renumbering every machine on their network with a publicly addressable IP. Think about that for a minute. They'll scream that they can do it without renumbering, but they're wrong. The routing table is an intricate mesh of advertisements and if everything was aggregated, nothing would work right. BGP's first method of selection of routes is the longest match rule, whereby when you're choosing a route to pass traffic on, you choose the most specific advertisement, e.g. choose a class C rather than a class B advertisement. If everything was aggregated into /20 or larger blocks, there would be no practical way to load balance traffic in a multihomed environment (when you have transit through more than one ISP).

    And secondly, BGP isn't the cause for the routing table growing, it is the cure. There is no way we would still be using IPv4 without BGP. It saved the internet by introducing classless routing.

    The answer to this is simple.. upgrade, upgrade, upgrade. There are routers out there that can handle far more than the internet has to throw at them right now.. it's just that Cisco doesn't make them. Juniper [] does.. check them out. They built a router off some sweet hardware and BSD. You can type 'start shell' in the router and drop to a BSD shell, and they have the route processor to chew through a routing table many times the size of our current table.

    ISPs need to keep up with the growth and upgrade their routers, or they will have problems. Much of the instability of the 'net is due to that now; routers get overloaded and reboot and cause all kinds of churn in the network, which overloads other routers, which reload.. you can see the cascading effect. The ISP I work for had to upgrade all of our older routers to 128M of RAM and newer route processors.. if all the ISPs did this, there would be no routing table problems. They just don't want to spend the millions they need to upgrade their infrastructure, unless the users start screaming. So start screaming at your ISP! (unless it's mine. ;)

  • An interesting point. The good news is that the growth curve for log(n) is much flatter than Moore's law's exponential curve. Indeed, if n is growing exponentially, that means you have a linear growth curve.

    While memory speeds haven't been improving as per Moore's law, they have been improving. There's an interesting article on some of the techniques to help with the problem at:

    I think in the 6 years that these growth numbers are talking about we've gone from 33MHz 32-bit memory buses (yes, Pentiums already had faster buses, but what I'm describing were pretty common) to the point where we now have 133MHz 128-bit (and in some cases even wider) double-pumped buses pushing data into increasingly faster and larger cache memory regions. Then you throw in ideas like compression and you can imagine that memory speed has been improving well enough to keep up with this growth.
  • sigh, you obviously don't understand the argument, and you get moderated up to a 2? it's not the number of addresses. it's the number of routable networks, aggregation policies, and the increasing number of entities that are multi-homing and injecting long prefixes into the global routing table that are causing the problem. this is *not* a trivial problem, and lots of people much smarter than most everyone here have been working on this problem for a while. there is no simple obvious solution.

  • Also see my MPLS node [] on Everything for a short and sweet overview.


    If it's referenced on Slashdot, is it nodevertising?

  • Increase the amount of routing entries by...a lot.

    Now have 45Mbps worth of traffic going.

    1000 sessions per second means 1800-3600 compares per second.
  • I'm sorry, but I've been wondering about this for a few years now(*). Suppose I were to route the whole IP number space as class C networks. That means 2^24, or 16M, "routes". Now even if my router happens to have about 100 different network interfaces, I can still hold that in 8 bits. So with 16MB of memory I can hold my routing table.

    Now a route lookup is equivalent to

    itf = route_table[dest_ip >> 8];

    That's going to take around 60ns on a modern PC.
    So if that's all, we'd be able to do around 13M routing decisions per second. That's not bad. (you'd be routing over a gigabyte per second by the time that this could start to become a bottleneck...)

    The only problem with this method is that when a class-A route changes, you have to update 65536 routing table entries. This can be solved by having a multi-level table.

    You'll probably have to have a few "exceptions": someone is bound to have split up a class C network so that you route it over different interfaces. Simple: an exception "interface" that indicates: "try the exceptions routing table".


    (*) This subject keeps popping up in the media every year or so...
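The multi-level table this comment proposes might look like the following (a toy sketch: 16-bit addresses split into two 8-bit levels, names hypothetical). A network route is one top-level write; only split-up networks pay for a second-level "exceptions" table.

```python
# Two-level route table. A top-level entry is either an interface
# number (int) or a 256-entry leaf table (list) for a block that
# was split below the high-byte boundary.
top = [0] * 256   # 0 = no route

def set_net_route(high_byte: int, interface: int) -> None:
    # One write covers 256 hosts -- the mass-update problem of
    # the flat table goes away.
    top[high_byte] = interface

def set_host_route(addr: int, interface: int) -> None:
    hi, lo = addr >> 8, addr & 0xFF
    if not isinstance(top[hi], list):
        # Split: expand the old aggregate route into a leaf table.
        top[hi] = [top[hi]] * 256
    top[hi][lo] = interface

def lookup(addr: int) -> int:
    entry = top[addr >> 8]
    if isinstance(entry, list):          # the "exception" case
        return entry[addr & 0xFF]
    return entry

set_net_route(0x12, 3)       # whole 0x12xx block via interface 3
set_host_route(0x1234, 9)    # one host carved out via interface 9
assert lookup(0x1200) == 3
assert lookup(0x1234) == 9
```

Real routers generalize this idea to tries over the full 32-bit space, but the two-level case already shows the trade: slightly slower lookups for vastly cheaper updates.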
  • What about systems that support 64bit PCI? Shouldn't that boost the effective bus bandwidth to 266Mbyte/sec? That ought to be enough bandwidth to handle nearly 10 full-duplex 100Mbit/second ethernet interfaces running full throttle.

    A better CPU can help when recomputing routing tables or making more sophisticated routing decisions; besides, "real" routers' main advantage is fast switching, which is much less processor-intensive. If you can't do that, why not use CPU cycles doing the same thing, especially if the cost-per-silicon is cheaper for raw CPU power?

  • I can see BGP being a limiting factor. There are already things about BGP that annoy me because of the simple fact that it is a distance vector protocol. However, I don't share your optimism that things would or could change quickly. The tier 1 NSPs are huge lumbering corporations that probably wouldn't give in to change very easily. Politics often trumps technical recommendations in the corporate world.

  • With all the hubub about my razor and toaster being on the net, why isn't hub/dhcp in the ouse the standard for discussion? Does my toaster really need a unique I.P.?
  • The problem is not the number of IP addresses, static or otherwise, the problem is the number of routable networks, since that is what determines the size of the routing table in a backbone router.
  • 3 years ago 32M of ram on a 4500M was enough to run full bgp. now you need a 7200vxr with 128M of ram to run full bgp. we are over 70k routes in the global table, and this trend will get worse now that providers are not filtering on the /20 boundary anymore.

    ipv6 does nothing to solve this problem. the tla concept is gone from ipv6 once they realized that it was a very bad idea. actually, there are several provisions in the current proposal of ipv6 that are bad. the default allocation of a /48 is the worst part of the current proposal.

    something has to give, but then again the router vendors claim that by the time that 128M isn't enough we'll have bigger faster routers. this is fine for uunet et al, but not so fine for small isps.

    this is a hard problem with non-obvious solutions. perhaps what will end up happening is that we will actually use the osi radial routing method. only time will tell.

  • well, a 10-fold increase in 6 years. if we look at Moore's law then memory will increase 16-fold in that time, so as long as Moore's law holds we are safe. also: the cpu load does not increase much with a larger routing table; after all it is just a lookup in a hash table. but what is more of a problem is increasing bandwidth, so routers have to work faster to make more routing decisions/sec as bandwidth goes up. i think the rate is something like 118% per year; that would mean an increase of 2400-fold in 10 years. now of course the traffic is shared among more systems and not all concentrated, but still there is more of a challenge in router performance than with the size of the routing table.
  • Students are allowed to run servers from their dorm rooms (just not kiddie porn servers, hehe). 8K addresses would work just fine for them. That's 1/8th the numbers they currently have.
  • Is available here [].

    This problem has been known for some time, I forget when I first read this paper, but it has been out for over a year. It describes the problem in good enough detail that I downloaded the adobe versions and made a hard copy of them. Its about time that "major" news service noticed.
  • Ok, the internet is in trouble.

    The internet is ALWAYS in trouble; it's the normal state for the monster. Well guess what? We'll fix it. We'll fix it again, and again, and again if we have to, and we'll have to.

    It grows, it writhes, it creaks and groans under the strain. It mutates and then mutates again. It's a digital-age "The Blob."

    But it feeds off the energy of its users and continues to grow. It shows every sign of continuing to do so.

    Looking years down the road to see where such an amorphous beast might be headed serves some purpose I suppose, but life is what happens while you're making other plans, and I've found this creaky old gem more applicable to the internet than just about anything else.

    Who the hell KNOWS where the whole thing will be and what it will look like in just a few years time.

    Not I.
  • MPLS is also going to help solve this problem. Core routers will have much smaller MPLS routing tables, with only edge routers knowing IP routes. If all goes according to plan, of course.


    Cisco - IP+ATM Solutions []

    IETF MPLS Charters []

  • giving each coke machine a phone number ..... and causing us all to change our area codes every so often ....
  • Sure the death of the internet is imminent - again!

    Meanwhile, dumb devices (like the lightbulb on your porch????) don't need to be on the internet directly - and probably shouldn't be. You want the light to turn on when some newbie in Lower Slobbovia mis-types the URL for 'Naked Schmoos Live 2343988'? NAT on gateways can concentrate an awful lot of dumb (and not-so-dumb) devices into a single IP.

    And a core router needs gigabytes of memory? So what? The cost of the memory is negligible compared to the cost of the core-capable routers. Besides, a direct (i.e. one entry per possible IPV4 address) routing table would only need 4G entries, and be faster than a hierarchical lookup anyway. If you have less than 256 ports on the router, then that's under 5GB of memory. And if you just route on the first 24 bits, it's only just over 16MB.

    Ok, so that won't work with current routers - but they'll need to be upgraded or replaced for IPV6 anyway.

    And if a router ends up handling dual duty IPV4/IPV6, then IPV6, with its built-in hierarchy of address bits and closer coupling between address bits and routing, is hopefully going to require fewer routing resources than IPV4. (Or an IPV6 network running on IPV4 tunnels could use the existing routers just to access the bandwidth.)

    Meanwhile, as more and more home users connect, we're going to see more ISPs putting them ALL on a single IP address (Can you say NAT, Mr Newbie?) for two reasons: 1) a firewall and web proxy at their gateway lets them use fewer IP addresses and less bandwidth, and 2) the customers can't run "unauthorized servers".

    Hmm. The entire @home network moved onto a single class C network address? Nahh.. But possible. (Even more possible in the future if they provide a tunnel to an IPV6 router?).

    But 'The death of the Internet' again? Hardly. Saturation? Maybe. And I'll bet that until it DOES saturate, nobody's going to be offering IPV6 connections for quite a while.
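    The flat-table arithmetic in the comment above can be sketched concretely. This is purely illustrative Python (the 10.0.0.0/24-to-port-3 entry is a made-up example): one next-hop port per destination, stored as a single byte (enough for up to 256 ports), indexed by the top 24 bits of the address.

    ```python
    # Illustrative sketch of a direct-indexed routing table; the route
    # entry below is invented, not from any real router.
    import socket
    import struct

    entries_full = 2 ** 32                       # one entry per possible IPv4 address
    entries_24 = 2 ** 24                         # routing on the first 24 bits only
    assert entries_full * 1 == 4 * 1024 ** 3     # 4 GiB at one byte per entry
    assert entries_24 * 1 == 16 * 1024 ** 2      # 16 MiB for the /24 version

    table = bytearray(entries_24)                # the 16 MiB /24 table

    def route_24(table, dotted_ip):
        """Look up the next-hop port for `dotted_ip` using only its top 24 bits."""
        ip, = struct.unpack("!I", socket.inet_aton(dotted_ip))
        return table[ip >> 8]

    table[0x0A0000] = 3                          # hypothetical: 10.0.0.0/24 goes out port 3
    ```

    The lookup is a single array index, which is the comment's point: no hierarchical walk, at the cost of a big (but flat and cheap) table.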

  • and CPUs run at hundreds of millions of cycles per ...second. 3600 compares per second doesn't sound all that difficult.

    - A.P.

    * CmdrTaco is an idiot.

  • by g_mcbay ( 201099 ) on Wednesday November 01, 2000 @10:57AM (#657440)
    Um..why is this moderated as flamebait?

    Redundant -- perhaps, though even that wouldn't really be fair, as it's post #18 and was probably up fairly soon after the article, and started before the other posts of this type were finished/posted.

  • This will not necessarily happen. It's quite possible that IPv6 traffic and IPv4 traffic will be split and passed off to different routers. This would provide incentive to use IPv6, as it would presumably be faster. Additionally, even if Dual-IP-layer routing is necessary, one would hope that once IPv6 arrived, the IPv4 routing tables would stop growing so aggressively, as new allocations become IPv6 addresses. Should that prove to be the case, things will be easier.

    P.S.: I presume you mean IPv4 rather than IPv5. ;-)
  • by Russ Nelson ( 33911 ) <> on Wednesday November 01, 2000 @10:00AM (#657444) Homepage
    This is not a serious problem. What is a serious problem is all the sites that were allocated 2^16 (many colleges) or 2^24 (HP, Stanford, Interop, e.g.) addresses back when there seemed to be an infinite supply. For example, Potsdam State University has a class B. They only have 500 staff and 3000 students. What are they doing with 65,534 addresses??
  • Film? How quaint....
  • The problem is that the core routers are doing the wrong job.

    Assume that all allocations are /24s. Now if that core router has 16 interfaces, you need 16 million nibbles of memory for its table. That's 8MB. You only get into trouble when you have several good routes for the same destination; then you need a level of indirection where you can look at that router's entry in the full routing tables. You build a separate system to update those tables since they don't have to be real-time; they have several seconds after updates to get the switch table updated.
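    The nibble arithmetic above can be sketched as follows. This is an illustrative Python mock-up (the 192.0.2.0/24-to-interface-7 entry is invented): with 16 interfaces, a 4-bit entry per possible /24 suffices, packing two entries per byte for 8MB total.

    ```python
    # Illustrative nibble-packed table: one 4-bit interface number per /24
    # prefix. The stored route below is a made-up example.

    prefixes = 2 ** 24                        # every possible /24
    table = bytearray(prefixes // 2)          # two nibbles per byte -> 8 MB
    assert len(table) == 8 * 1024 ** 2

    def set_port(prefix24, port):
        """Store a 4-bit output interface for the given /24 prefix index."""
        byte, hi = prefix24 // 2, prefix24 % 2
        if hi:
            table[byte] = (table[byte] & 0x0F) | (port << 4)
        else:
            table[byte] = (table[byte] & 0xF0) | port

    def get_port(prefix24):
        """Fetch the 4-bit output interface for the given /24 prefix index."""
        byte, hi = prefix24 // 2, prefix24 % 2
        return (table[byte] >> 4) if hi else (table[byte] & 0x0F)

    set_port(0xC00002, 7)                     # hypothetical: 192.0.2.0/24 -> interface 7
    ```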

  • by cornjones ( 33009 ) on Wednesday November 01, 2000 @10:06AM (#657452) Homepage
    the article is saying that in a few YEARS we are going to need more memory and faster processors for our routers. so where's the problem? I don't see any slowdown in the hardware advances we are making.
    if we want to/can find more efficient ways to do it, all the better. I am just saying that this might be a problem if we were running out of space tomorrow, but in a few years I am confident the basic hardware will be much better than it is now.
  • by fm6 ( 162816 ) on Wednesday November 01, 2000 @05:34PM (#657454) Homepage Journal
    I'm concerned with the increasing occurrence of giving static, permanent IP addresses to relatively dumb items: Palm Pilots, refrigerators, guns in the army, etc.

    You're actually focusing on the wrong problem. Except if you focus on the right problem, it turns out to be even worse than you suggest.

    It isn't simply a case of addresses for trivial devices versus "real" computers. A lot of computers -- real, serious computers -- can get all the access they need without using any address space at all. RFC 1597 [] sets aside IP numbers that cannot be used for "public" interaction. These addresses are valid only for intranet traffic.

    The machine I'm using right now is a case in point. My employers do not want anybody not on our campus network accessing this computer. So I don't need an IP number that's valid in the Internet at large. Instead, I have a Class A address in Network 10. Addresses in 10.*.*.* can be reused endlessly, so long as they're not re-used on the same network.

    I used to work for a major computing company that was extremely paranoid about off-campus access to their systems. But for some reason (probably institutional inertia) they assign IP numbers out of their permanent allocation. So that's thousands of IP numbers used unnecessarily. Plus they have a permanent shortage of IP numbers for internal use. Plus, every once in a while, a hacker finds his way through the firewall...

    Perhaps I speak in ignorance, but it seems to me that nobody needs a public IP address, permanent or transient, unless they have a server or peer app. (Age of Empires anyone?) Thus 90% of all users -- especially the users of "real" computers -- are just wasting address space. And making themselves vulnerable to boot.

    On the other hand, it makes perfect sense to assign an IP address to a gun. You never know who needs to kill who....
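    As an aside, the RFC 1597 private ranges mentioned above are easy to check programmatically. This sketch uses Python's ipaddress module (an anachronism relative to this discussion, purely for illustration):

    ```python
    # The three private blocks RFC 1597 set aside, checked with the
    # stdlib ipaddress module. Test addresses below are arbitrary.
    import ipaddress

    PRIVATE = [ipaddress.ip_network(n) for n in
               ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

    def is_rfc1597(addr):
        """True if `addr` falls in one of the RFC 1597 private blocks."""
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in PRIVATE)
    ```

    Any address these blocks cover can be reused on every intranet in the world, which is the comment's point about Network 10.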


  • Troll? Who moderated this post? Vint Cerf?
  • We said this same thing in 1995 when the two big routing points at the time, MAE East and West required routers greater than the Cisco 4000 series which did not have the memory to handle the routing tables.

    We also thought by 1997 or 1998 we would be out of the original IP space.

    Guess what? There are still tons of IP addresses left and more being recycled every day. Internet access providers are merging and going belly-up every day, returning IP space back to other backbone providers. Network security companies are moving public networks to private IP space to keep out scanners and script kiddies.

    This kind of fearmongering has been going on for years and all it leads to is IP hoarding.

  • man every time someone thinks the computers of the world are going to melt in a year... two years...50 years... there's either a fix in half the time or when the time comes it's less of a disaster than they expected.

    You don't understand. The reason that "there's a fix in half the time" is because someone writes an article or otherwise brings up the fact that there's a problem in the first place. It's the problem that no one finds or mentions that will kill you.

    What we have here is validation that "many eyes make bugs shallow," but it still takes hands and minds to FIX those bugs.
  • by Orifice ( 239264 ) on Wednesday November 01, 2000 @11:30AM (#657464)
    Exactly how big is a routing table? I've never seen one, but given that they can fit inside a computer they must be pretty small. If they get bigger why can't we just keep them in that big empty hole they dug for the Supercollider in texas?
  • by X ( 1235 ) <> on Wednesday November 01, 2000 @11:01AM (#657465) Homepage Journal

    Let's go through a number of things that came up here:

    1. BGP isn't working. Well, fortunately, there are a lot of other protocols out there to choose from. When it becomes too costly for everyone to have routers using BGP, people will negotiate the use of other protocols.
    2. Routers will need "gigabits" of memory within two years. Well, that sounds really scary, but of course a "gigabit" is roughly 128MB. That is a lot of memory for a router, but right now that'll cost you at most $150. In two years time you'd like to think it'd be a lot less. Either way, it's a tiny portion of the cost of a router. I think we'll survive that.
    3. In 6 years we went from 10,000 to 100,000 entries. That is some pretty serious growth, but it is not nearly as scary when you consider that Moore's law suggests that processing power has improved 2^4 = 16 times in the same time frame. So, in other words, CPU speeds at least are easily outpacing the growth of routing tables. I don't know how this plays out for memory, but I seem to recall that 6 years ago 16MB of memory was over $1000 and now 256MB of RAM for a laptop is $400. Bottom line: it's easy to make computing growth numbers look scary, because computing is growing at a scary rate. You just have to remember that both the capability and need side of the equation are growing at an insane pace.
    4. Of course IPv6 changes all this. Part of the reason the routing tables are growing so much is because IPv4 does not make routing tables very efficient. Chalk this up as one more reason to use IPv6. Given that IPv6 is available today, I think the relevant parties will make the switch when it starts saving them lots of $$'s.
  • by swb ( 14022 ) on Wednesday November 01, 2000 @11:33AM (#657467)
    Hell, if you bought 1GB of RAM for Cisco's top of the line router (12000 series GSR), you would spend ~$30K today.

    Every time I read one of these articles, I'm initially thinking, "Wow, we can't keep up." And then I remember that what Cisco passes off as big-bucks equipment is lame-ass compared to off-the-shelf desktop computer components. My biggest router is a 3640, used internally to route between various LAN segments, and it's selling around $5k now; I bought mine two years ago (along with RAM and ethernet cards). With a lame R4000 CPU and 96MB RAM, it's not a particularly impressive computer.

    Given that SMP-capable systems with 800MHz CPUs (mobos, CPU, and maybe RAM) are running ~$1000, why can't we "solve" the routing table crisis with some cheap, high-powered hardware? Moreover, why is Cisco stringing us along with overpriced, underpowered hardware platforms? Because they can?

    I know that Cisco equipment is capable of doing some fancy switching between interfaces that generic PC hardware wouldn't do, but has anyone ever put 4 of those 4-port NICs into a fast SMP box and compared its ability to route relative to a high-end Cisco box? Omit from the comparison the encryption modules and some of the other goodies that you can do on a custom hardware platform but which isn't totally necessary for vanilla IP routing.
  • Imagine when IPV6 arrives. Routers will have to support Dual-IP-layer [] routing which means...

    ... you guessed it: Two routing tables!

    Under IPV5, they will run out of IPs before they run out of memory!

  • by l33t j03 ( 222209 ) <> on Wednesday November 01, 2000 @10:11AM (#657476) Homepage Journal
    I for one applaud the foresight of you geeks. First you design operating systems and hardware that can't understand dates beyond 1999. Now, you folks designed the entire Internet so that it will collapse under its own weight. You know, if you weren't so busy trying to get everything from toasters to Furbies an IP you wouldn't run into this problem. I know, I know, you're all thinking: "But we designed an obfuscated OS to foil all of the Johnny Lunchpails who tried to use our Internet!" Not good enough; your efforts go for naught. The thing is getting overloaded and there is nothing you can do about it now.

    Given that the Internet has undergone a transformation as of late, what with all of the theft of IP and violent imagery it propagates, I am happy about its demise. This ranks right up there with the inevitable heat death of the universe in terms of things that I look forward to.

    Possibly, when your Internet (the Vint Cerf crappy one) is finished, Microsoft will invent you a new one. You will all probably hate it of course because they certainly won't permit any misdeeds that you all seem so fond of. Just nice clean fun and information with a little dash of profit for all.

    Run along now children, play on your Internet while you still can. When Daddy builds a new one your decaying 386 machines won't be compatible and you'll all have to revert back to your BBS days.

  • Immediate thought: routing table sizes won't increase in proportion to the IPv6 address size increase, because IPv6 aggregates most of those addresses into prefixes and it's only the prefix that needs a route. In fact, with the IPv6 capability to put more networks under a single provider's network number, it may even reduce the number of routes.

  • Tis called a joke. Still, people will replace old equipment. It happens, we upgrade. It's not going to be a ONE DAY THE EARTH CAME CRASHING DOWN change, people are going to upgrade their equipment to cope with just the bandwidth. These other problems will be thought of as secondary, but taken care of in the upgrade, so why worry?

  • by dublin ( 31215 ) on Wednesday November 01, 2000 @11:38AM (#657480) Homepage
    Wow, I finally get to disagree with Russ on technical grounds... :-)

    I think we do need IPv6 for one crucial reason: mobile support. This is something that's cooked into IPv6, and it's the only right way to solve the problem. With v6 mobility, nodes essentially have two IP addresses - one static, the other dynamic. The advantage of this is that most of the world only has to know the static one to talk to you; your nomadic device is responsible for letting the static server know what your current mobile IP addr is. This keeps the Internet routing tables from ever having to deal with any of the routes to a particular device - it just points to your static IP (which would be part of a routable superblock), and the local network (or wireless carrier, etc.) handles it from there.

    I agree that NAT and superblocks have allowed us to be lazy for a few years too long, but it's critical to recognize that the move to IPv6 will be driven by mobility, not a lack of v4 addresses. This in turn won't happen until people start developing and embedding lean, fast v6 stacks into high-volume mobile consumer devices like cellphones, laptops, and PDAs. As much as I hate to say it, Microsoft may be the only one that can get us kicked off-center here.

    Oh, and if you've ever done a massive IP address change for a large corporation (I have), you'll know why it's easier to pull shark's teeth than to get those addresses back. Note that even mandating NAT at border routers (which seems reasonable on the surface) still requires all IP addresses to be changed to the "martian networks" (net 10, etc.) to avoid the possibility of collisions with the reclaimed addresses. The costs of this re-addressing are simply too high to expect that IANA could reasonably force any reclamation of IP addresses.

    We need IPv6, but not because we're running out of v4 address space...
  • So you're saying that because of mobile support, every packet has to get about 40 bytes larger, thereby raising traffic on the backbones, LANs and everybody else's networks? Hardly a good idea!

    No, that's not what I said at all. I do think IPv6 is the only right way to do mobility, but IPv6 was painstakingly designed to be 100% backward compatible and interworkable with IPv4, and not to require any significantly difficult switchover logistics such as "flag days" where everyone would have to change at once. Only the mobile packets will have to get bigger (I expect v4 will rule the roost for fixed use for some time yet), but that's a small price to pay for true location transparency. The increased packet size is inconsequential for most everything but telnet and the like, which are an irrelevant percentage of all inet traffic.

    Good point about the rest of the Class A space, though - that slipped my mind - are you sure none of the upper range was ever issued?
  • by michael_cain ( 66650 ) on Wednesday November 01, 2000 @11:45AM (#657483) Journal
    At least at my house, I don't want all of the local widgets on the home network to have globally routable/reachable addresses. Unpleasant thoughts about hackers using the recently discovered bug in the firmware on the Brand X washing machine to turn it on twelve times a day...

    What I would like is a generic proxy capability in my home firewall/gateway that allows devices that require some form of outside access to register, and as part of that registration, include some proxy code to be executed by the server when someone outside wants to access the device. Lots of different security models needed -- selected addresses at the power company are allowed to contact the electric meter, any address is allowed to access the Tivo recorder if they possess the magic password, etc.

    Obviously, the code passed to the proxy needs to be processor and OS independent. Java could probably do the job.

    Hey! A generic proxy server, software, the whole concept fairly obvious -- I'll bet the USPTO would grant a patent on this!

  • by CoreDump ( 1715 ) on Wednesday November 01, 2000 @11:48AM (#657484) Homepage Journal
    1.BGP isn't working. Well, fortunately, there are a lot of other protocols out there to choose from.

    Really? Pray tell, what are these? Apart from draft proposals, please tell me what these other protocols are. BGP does work. No, it is not perfect, but it works and its failure modes are pretty well defined. The lack of legitimate alternatives also poses a problem. :\

    2.Routers will need "gigabits" of memory within two years.

    Assuming cisco, which is pretty much the standard, you are going to have trouble fitting a full BGP table into less than 128 MB today. So what? That doesn't mean the sky is falling.

    3.In 6 years we went from 10,000 to 100,000 entries.

    Yes, for a good statistical analysis of this growth please see:

    • html
    Now, how did the number of end users on the "Internet" grow during the same period?

    4. ... Part of the reason the routing tables are growing so much is because IPv4 does not make routing tables very efficient.

    Not the case at all. IPv6 is going to save nothing. Greater than 1/2 of the current routing table is announced as /24 or longer prefixes. Aggregation can cut the routing table size. Please see the CIDR report for the worst abusers of de-aggregation. The worst offender is announcing ~430 blocks when they could aggregate those into ~150 blocks, without losing any routing stability. The CIDR report is available at:

    CIDR Report []

    IPv4 has a long way to go still before we are in dire straits. Let's not forget what 2^32 gives us, and what we are using now out of that.
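    The kind of aggregation the CIDR report measures can be illustrated with a small sketch. The prefixes below are made up, and the stdlib ipaddress module does the collapsing:

    ```python
    # Eight contiguous /24 announcements collapse into one /21 route.
    # These prefixes are invented for illustration, not real announcements.
    import ipaddress

    announced = [ipaddress.ip_network("192.0.%d.0/24" % i) for i in range(8)]
    collapsed = list(ipaddress.collapse_addresses(announced))
    ```

    Eight table entries become one, with no loss of reachability, which is exactly the ~430-to-~150 reduction the CIDR report says the worst offender could make.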


  • by MattW ( 97290 ) <> on Wednesday November 01, 2000 @10:11AM (#657491) Homepage
    There's a problem with route aggregation, and while bigger providers are more responsible, it's still an issue. But let's put gigabytes of memory in perspective here: my biggest personal box is sporting 512M of RAM. Is a few gigs of RAM any sort of shock for routers that cost hundreds of thousands of dollars?

    It also wouldn't surprise me to see more auto-aggregation being done with spare cpu cycles as the routes propagate, which would probably help.
  • Actually your example's quite reasonable (from the PoV that the decision was made in the days of class A/B/C blocks) given the circumstances. As 500 + 3000 is more than 256, a class B would be what you'd allocate. (Today you'd allocate perhaps a /20 (4096 addresses).)

    You want one routing entry for the entire university per router, rather than 14, and internal routers can easily work with that, rather than having to check against 14 different class C blocks to determine whether an IP address is internal or external.

    The problem here is not that Potsdam is being inefficient; it's that in order for us to continue to have efficient routing, we need to dramatically increase the IP address space. Hence IPv6, which should improve on this, except it probably will never come out; it's been RSN since 1994, and the industry has, in the meantime, made a tidy sum by using the limitations of IP to create artificial scarcity.
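    The one-entry-versus-fourteen point above can be sketched concretely. All prefixes here are hypothetical; the comparison is one containment test against an aggregate class-B-sized block versus a scan over fourteen separate class Cs:

    ```python
    # Illustrative internal/external check: aggregate block vs. fragments.
    # The 172.16.x.x prefixes are invented stand-ins, not Potsdam's real space.
    import ipaddress

    class_b = ipaddress.ip_network("172.16.0.0/16")
    class_cs = [ipaddress.ip_network("172.16.%d.0/24" % i) for i in range(14)]

    def internal_aggregate(addr):
        """One containment test against the single aggregate block."""
        return ipaddress.ip_address(addr) in class_b

    def internal_fragmented(addr):
        """Up to fourteen containment tests, one per class C."""
        return any(ipaddress.ip_address(addr) in net for net in class_cs)
    ```

    Note also that an address like 172.16.200.1 is "internal" under the aggregate block but matches none of the fourteen fragments, which is why the aggregate allocation is the one you can route (and grow into) cheaply.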

  • by Anonymous Coward
    /. is running out of space for troll comments. Since the number of /. trolls is growing exponentially and the number of real /. users is only growing linearly, /. will soon run out of comment space for trolls. Therefore I think all trolls should go over to forums and troll there for a while until Rob and the gang can fix this troubling problem. (moderate TROLL).
  • Not to mention that ipv6 will actually help quite a bit.

    I have been told that ip6 addrs are sorted geographically. This way a router can calculate a simple geographic "net mask" or two for a given interface.

    Anyone have some details on this?

"There is no distinctly American criminal class except Congress." -- Mark Twain