The Internet

Is The Internet Growing Too Fast?

SpunOne writes: "According to this article, the Internet is growing faster than today's routers can handle. After years of predictable growth, the size of the routing table and traffic in it exploded during the past six months, topping 104,000 entries in March, compared with 75,000 a year ago. Even more troubling is evidence that frequent updates to the routing table entries by network managers are causing instability in the Internet's backbone routing infrastructure."
This discussion has been archived. No new comments can be posted.

  • Multihoming and pulling full routes from both providers does not require that much hardware. BGP is not particularly processor intensive. A Cisco 2650 with 128 MB of RAM can handle the job just fine. In fact a 2620 would work too (processor-wise), but they max out at 64 MB of RAM.
  • by Anonymous Coward
    You can fix the prefix load caused by multihoming by not multihoming at an IP level. What is IP's approach to everything that's hard to scale? Put it in the end node.

    Today, multihoming (an important feature for reliability) prevents significant aggregation, thus inflating the route table.

    The solution:

    Give each node addresses on each of the connected networks (easy with IPv6), multihome only on a limited scale (i.e., tell provider B about provider A's addresses, but he should not propagate them any farther), then let a next-generation replacement for TCP/UDP implement the multihoming.

    Guess what, this protocol already exists: it's called SCTP. It's already implemented for Linux, but Microsoft will likely significantly hamper its adoption, just like they have with IPv6.
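
    To make that idea concrete, here is a minimal sketch of end-node multihoming (my illustration, not the poster's code, and using plain TCP rather than SCTP, whose Python bindings are third-party): the host owns one address per provider and handles failover itself. The addresses are made-up documentation prefixes.

        import socket

        # Hypothetical addresses, one from each upstream provider's IPv6 prefix.
        LOCAL_ADDRS = ["2001:db8:a::42", "2001:db8:b::42"]

        def connect_multihomed(host, port):
            # Try each provider-assigned source address in turn. SCTP does this
            # transparently inside one association; this TCP loop only
            # approximates the failover behaviour.
            for src in LOCAL_ADDRS:
                sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
                try:
                    sock.bind((src, 0))  # source address selects the provider path
                    sock.connect((host, port))
                    return sock
                except OSError:
                    sock.close()
            raise OSError("all provider paths failed")

    The point for the routing table: failover intelligence lives in the host, so provider B never needs to announce provider A's prefix to the world.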

  • Um, yeah... OK, and where exactly can I purchase OC3 and DS3 interfaces for these Linux boxes?

    Face it: Cisco/Juniper/etc. are the only way.

  • by Anonymous Coward
    ISPs haven't done a good enough job explaining to their customers that they don't need to multihome

    Like hell. If my (very large) ISP didn't occasionally wipe out our mission-critical connection for an hour or so, maybe I wouldn't care about multihoming. I'm not multihomed yet for budget reasons, unfortunately, but once we can afford to go that route we damn well will.

  • by Anonymous Coward on Monday April 02, 2001 @09:18AM (#320519)
    I did propose to the IPv6 group that two things be done in the new IP space. First, a bit to toggle whether this is a roaming/DHCP IP address (an opt-out system, plus additional space to indicate whether you were even on earth, or in orbit, etc.), and second, a rough latitude and longitude for the system, included with the IP. It was never necessary, nor possible, to get an exact location for every device. Besides, everything in a rack would share the same coordinates anyway.

    When I pitched it then, it was quickly shot down. However, it allowed for regional routing, where you just send packets to the router responsible for a major lat/lon block, which then forwards packets to things in its area. Really simple.

    It also allowed you to figure out where things were coming from (if the user opted in), which meant you could redirect them to a regional mirror. As a side-effect, folks like Yahoo could actually keep all the French out of websites, since the IP would actually have some sort of location information in it.

    Routing would be more efficient, and smaller, and faster. But hey, I'm just a nutcase, as the IPv6 guys said.
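
    Nothing like this was ever standardized, but as a sketch of what the proposal could look like (the encoding layout here is invented purely for illustration), a coarse latitude/longitude can be packed into the high bits of an IPv6 address, and a regional router would forward on that prefix alone:

        import ipaddress

        def geo_prefix(lat, lon):
            # Round to whole degrees and offset to non-negative values:
            # latitude 0..180 and longitude 0..360 each fit in 16 bits.
            lat_q = int(round(lat)) + 90
            lon_q = int(round(lon)) + 180
            # Documentation prefix (2001:db8::/32), then 16 bits of latitude
            # and 16 bits of longitude; the low 64 bits are left for hosts.
            value = (0x20010DB8 << 96) | (lat_q << 80) | (lon_q << 64)
            return ipaddress.IPv6Network((value, 64))

        print(geo_prefix(48.9, 2.3))  # -> 2001:db8:8b:b6::/64

    A backbone router would then carry one route per populated region rather than one per site; the replies below raise the obvious problem of devices that move.
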
  • by Anonymous Coward on Monday April 02, 2001 @09:19AM (#320520)
    OK, this whole discussion is so far over your average slashole's brain capacity that I'm not gonna expect too many intelligent posts.

    The fact here is that a multihomed router taking full BGP routes from its upstreams today needs at least 256 megs of RAM and a big CPU (example: Cisco 7500 MINIMUM) in order to handle it.

    The problem is all the recent "hey, let's be a DSL provider!!" idiots who have no understanding of route summarization and addressing. Thankfully many of these ISPs are going bankrupt now, so hopefully all their /23s and /24s will be withdrawn after their lights go out.

    The ISPs need GOOD engineers who can design a network in a manner such that only the most aggregate route is announced to external peers. The biggest problem is that they assign smaller blocks, and then that customer gets multihomed and the original provider has to cut holes in their BGP filters in order for it to work right.

    If you're going to multihome, GET YOUR OWN ADDRESSES FROM ARIN!!! Places like Verio have very Nazi-ish filtering policies, and their routing tables only hit about 85,000. While Verio makes bad routing decisions, this shows an example of what the internet routing table SHOULD look like.

    SUMMARIZE, SUMMARIZE, FUCK YOUR TOASTER, IT DOESN'T NEED AN IP ADDRESS!@@!#@!#
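
    The summarization being shouted about here is mechanical: contiguous, properly aligned blocks collapse into a single announcement. A quick illustration with Python's ipaddress module (the prefixes are invented):

        import ipaddress

        # Eight contiguous /24s that a careless ISP might announce separately.
        prefixes = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(96, 104)]

        # collapse_addresses merges aligned, contiguous networks into supernets.
        print(list(ipaddress.collapse_addresses(prefixes)))
        # -> [IPv4Network('198.51.96.0/21')] -- one route instead of eight

    Done consistently, that is the difference between Verio's roughly 85,000-entry table and the 104,000 entries the article reports.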

  • Whatever. /. has sucked since user 204.

    --
    "Don't trolls get tired?"
  • Hellz yeah, signal 11!

    --
    "Don't trolls get tired?"
  • Does that include #204?

    - A.P.

    --
    * CmdrTaco is an idiot.

  • "moderator" and "reasonable" are two mutually exclusive ideas...

    --
    * CmdrTaco is an idiot.

  • I think you misspelled "Feel free to sit back on your lazy ass while somebody else designs and implements a fix - just like they always do". HTH. HAND.

    Well, isn't that what those people get paid to do?

    -A.P.

    --
    * CmdrTaco is an idiot.

  • by Tony Shepps ( 333 ) on Monday April 02, 2001 @10:14AM (#320526)
    Yahoo started to wobble in their traditional high-quality, hand-picked links directory in 1997. By 1999 it was nearly impossible to get anything in there that wasn't under "Business & Economy". And then they implemented their "we'll look at it for $199" approach, which probably makes sense for all involved IF you accept that Yahoo is the 800-pound gorilla. I'm prepared to accept that but a lot of other people aren't.

    You're absolutely right that our ability to cope with the net is what's suffering. A friend of mine once said that the Internet is "like giving +1 to everyone's intelligence. If they don't know something, they now have a tool that lets them look it up and get a ton of information quickly." But now we realize that you only get the +1 bonus if you are already fairly intelligent, because it takes intelligence to be able to do Internet research, and more intelligence still to determine whether the sites you're looking at are good and/or responsible with facts.

    With the wisdom of hundreds of other netters, and the net's gift of great communication, I can be more intelligent than a doctor is about a given prescription drug, for example. But if I don't select sources correctly, I can be downright dangerously ignorant about that drug.

    If the tools that we have for research improve, THEN we can add +1 to everyone's intelligence. So what we have in the meantime is a tool where the smart get smarter and the dumb stay pretty much where they are.

    The current economic contraction may be just what the net needs. Growth may stagnate for a little bit -- and then we'll have a little bit of time to catch our collective breath and allow humans to catch up with what is possible. One reason we have all these fucked companies is that the masses didn't adapt as quickly as the netted elite. The main reason is the temporarily insane marketplace, sure, but a lot of online things are much better than their offline counterparts; it's just that people couldn't catch up with the growth of the net. It's better to meet online than to travel 2000 miles to meet, but today we are still adapting to the idea that we can shop for airline tickets over the net.

    The net has allowed us to see what's possible and to implement new and better approaches. But it could not hold our hand and show us the new and better approaches. With time at a premium in everyone's lives, no one can afford the time for experimentation or learning with these new approaches.

    OK, I'll stop now before I become Katz.

  • I don't think you get a lot of information quickly; you get a lot of data quickly. How much of it is good is anyone's guess. The thing about the net is that there is a lot of good data and a lot of total cruft out there, and telling which is which is often very hard.

    After all, we have never known a web site to put up information that is flat-out wrong, have we? :)
  • by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Monday April 02, 2001 @10:27AM (#320528) Homepage Journal
    Problem. Joe & Jane Q. Public have been convinced, over the years, that slow networks and connection seizures are "normal".

    The result - the Internet's population CAN exceed its useful capacity. (It has, several times. I've experienced outages on international links of up to 14 days, due to catastrophic hardware overload.)

    The problem is that the AVERAGE load needs to be less than 1/3 the total capacity, if you want to avoid major packet loss. However, when you look at profit-run businesses, it makes no sense to have three times the capacity you can get away with. That's just going to eat profits.

    The major failure of a profit-driven IP-based network is that you hit maximum profits when you hit minimum quality of service. There's no way round that.

    In the end, privatizing the Internet may turn out to be a failure. Upgrades are expensive, and the increase in revenue will typically be trivial, in comparison, making maintenance unprofitable and (in a depressed tech market) marketplace suicide. (You think shareholders'll approve a plan to "throw away money"? No matter how necessary it might be, from a quality standpoint?)

    In the end, enough people just use the net for music, prawn and e-mail (none of which require speed) that the Internet could, in principle, survive off just those services.

    This is why projects such as Internet 2 became popular. Researchers and Universities around the world realised that the Internet is dead for any "real" (read: "Network-Intensive") work.

    The now-famous live videoconference between American and Russian hospitals, over the MBone, would not be possible today. (It was barely possible, then, with networks around the globe suffering melt-downs from the incredible demands it placed on the system.)

    Today, even connecting to the MBone is a nightmare. Mail the MBone mailing list, and they'll tell you to pay over hard cash or talk your ISP into doing so. Gone are the days of trading resources, knowledge and capabilities.

    The Internet will live on, but we'll see further fragmentation, as commercialization forces people to build specialized, distinct networks, which will become ever more incompatible.

    If this is the world "we" (as the users and therefore the funding) want, then oh well. It sounds awful to me. But this has gone far beyond anything any individual can change. I'm not even sure it's something even large corporations can change. (Microsoft tried! And is still trying!) This may well be the scenario we're locked into, no matter what.
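
    The 1/3-of-capacity rule a few paragraphs up is a rule of thumb, but the curve behind it is standard queueing behaviour: in a simple M/M/1 model (my choice of model for illustration, not the poster's), delay grows as 1/(1 - utilization), so service collapses well before links are nominally full.

        # M/M/1 mean sojourn time, normalized so that service time = 1:
        # W = 1 / (1 - rho), where rho is link utilization.
        for rho in (0.33, 0.50, 0.80, 0.90, 0.95):
            print(f"utilization {rho:.0%}: mean delay x{1 / (1 - rho):.1f}")
        # 33% -> x1.5, 50% -> x2.0, 80% -> x5.0, 90% -> x10.0, 95% -> x20.0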

  • And this will decrease the number of routes in the average backbone router... how?

    kashani
  • Yeah, in the 2501, which was EOL'ed in '96 I believe. It was replaced by the 26xx series:

    sho ver
    cisco 2620 (MPC860) processor

    kashani
  • True, though it doesn't do much for the existing problems. Also, what happens when your downstream wants to multihome? Does he get his own /64 (heh heh) or whatever, or does he announce a subset of your space? I'd vote for the former, but I haven't seen which way they're leaning.

    kashani
  • Do you have any idea what you are talking about?

    We need dynamic routes so that we can actually pass traffic when links fail. Link goes down, route gets dropped. Router crashes, routes get dropped. Also, have you ever tried to statically route 100K routes?

    kashani
  • Ahem.

    With 32 bits, a /32 is one IP address.

    With 128 bits, a /64 is 18,446,744,073,709,551,616 IP addresses.
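
    For anyone verifying that figure: a /64 leaves 128 - 64 = 64 host bits, and

        >>> 2 ** (128 - 64)
        18446744073709551616

    matches exactly.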

  • Because it's so damn silly it does not require anything more than a "You're a nutcase. How'd you get in here?" If you don't know that already, I'm not going to explain it. Second, backbone routers provide TRANSIT, so they generally need to know all routes to all places, hence the problem. Neither this nor IPv6 does anything to relieve this. If anything, releasing more space (i.e., IPv6) will make the problem worse.

    kashani
  • by dangermouse ( 2242 ) on Monday April 02, 2001 @12:45PM (#320535) Homepage
    A discussion on this topic at /. could generate a lot of creative and viable solutions to this major problem.

    This is the single funniest statement I've read in days.

  • An unfortunate side effect of high intelligence is a remarkable lack of patience for lesser intelligences.

    Thus, a routing guru may have a very good reason for dismissing your proposal out of hand, but feels that it's beneath his intellect to talk down to you in terms you can understand.

    Whether this is good or not is irrelevant. It's just the way we're wired.

    Personally, I think your idea has merit, but then my idea of complicated routing is getting my home and office wired together across different providers. I'm no guru.

    A real guru, with the vision to see beyond your idea in its totality, may take the idea and run with it in a similar, if not the same, direction, and come up with a great solution. I wouldn't count on it, though. I've found that really, really talented/skilled people are very, very focused on their talent/skill. Thus, as a prima donna ballerina is a great dancer but sucks at fixing your car, a routing guru may not have the right-brain capability to think outside the box of routing table rules.
    "Beware by whom you are called sane."

  • please someone Quick ORDER MORE!!! HURRY!! this internet thingi is gonna Get /. ed AGAIN!!!

    (plugs holes with fingers)

    hurry, im running outa fingers here!!!
  • A lot of providers have multiple separate IP address blocks. With IPv6 every provider would have more address space than they could ever use, so they would never be assigned more than one block. Just one route per provider would help things a lot.

    Benny

  • Cellophane factory gutted by fire. No film at 11!
  • It's easy for DS3; you will have to wait a month or so for OC3s. Just look at ImageStream [imagestream-is.com]. I do have to say that I work for ImageStream... but I think we make very good midrange routers. You can also get all of the cards that we use in the routers and put them in your own Linux servers -- everything from a one-port T1 card to a two-port DS3 with integrated CSU/DSUs on the card. We will have the ATM OC3 cards in a few weeks, and they do look very sweet. We already have the hardware; we are just finishing up the software... I am pretty sure that the cost on the ATM OC3 cards is going to be around (don't quote me for sure, but this should be close) $2100. I know that sounds like a lot, but compare that to a Cisco solution. If you want more info, just call 1-800-813-5123 and talk to Doug or Eric. P.S. I would write a longer message but it is 3:30am and I need sleep BAD...

  • "The sky is not falling, but the sky is hanging a little low."

    Why is that quote funny to me?

  • Wasn't Northpoint the first step along this route?
  • This is just another Imminent Death of the Internet [mids.org] prediction. To quote from the link above (from John S. Quarterman):
    Personally, I've been hearing ``Imminent Death of the Net Predicted'' since ARPANET days. Back in 1977 or thereabouts there was a failure of the old ARPANET routing algorithm that shut the whole net down. As late as 1987 I was still hearing people (mostly OSI backers, it's true) cite that incident as proof that packet switching was not viable, and only circuit switching (as in X.25) could possibly support massive amounts of traffic. ...
    This is the Same Old -- er -- Stuff, recycled.

    Computing is the only field in which we consider adding a wing to the building to be maintenance.

  • Disorganized could very easily be indicative of "growing too fast". I agree with you about technology preventing it from growing faster than it can handle. BUT, that doesn't account for human interaction.

    For one, how many people do you know who have a little bio page that basically says "I'm Joe Shmoe, and I do nothing special, but I feel I need a web page"? I'd say about 70% of personal home pages are worthless (and that's based on a very easy rating scale).

    So, as a result, engines like Yahoo and so on are trying hard to keep up with the crap, but the crap is crap, and should remain listed as such. I think it might be more worthwhile to figure out some sort of sponsor-to-create system with all sorts of checks and balances to keep useless websites from ever existing. That won't ever happen though, because after a few more years, the novelty of the 'net will fade, and it will become a standard part of our lives. People won't see the need to have their own website anymore, unless they really are trying to share something with the world that is worthwhile. That won't prevent the crap from piling up, but it will slow, and the 'net will become stable again.

    I can't wait 'til that happens.

  • Worthless websites might have helped you find your friends, but websites like Classmates.com are far better for that purpose. They have a military section too.

    And I find that to be content oriented.

  • The original poster has a valid point -- compared to contemporary PCs, many Cisco routers, for simply forwarding packets (I'm excluding specialized compression/encryption cards, which aren't likely to be used in a nationwide backbone anyway), are really underpowered and highly priced.

    Why does a Cisco router with the equivalent of an Athlon and 512MB of RAM cost $50k or more? I'm sure the I/O engine of a Cisco router is actually better designed than a PC with PCI slots full of four-port hundred-megabit NICs, but has anyone ever done any tests to determine just how fast such a PC could be?
  • Yup, we had to put 48MB into the Ultrix boxes to keep BGP from killing them with 25k routes. Net's too big.

    Gonna collapse from all this

    Gonna run out of IP addresses by 1996 or so


    Is IPv6 going to help this all?

    Only if we give out prefixes in a sensible manner (for some definition of sensible - country? Provider? Height of user? You decide).



    This is the Good Times Virus (tm) of Routing.

  • by apsmith ( 17989 ) on Monday April 02, 2001 @09:10AM (#320548) Homepage
    They blame multihoming - however there is a limit of 65535 possible autonomous systems out there under BGP4, so if each of these has an average of 3 entries, doesn't that give a max number of route entries of under 200,000? Except you also have to multiply by the number of IP address ranges (networks and subnets) advertised by each AS - but still there would seem to be a limit not too far from where we are now. Which probably does mean it's time to replace BGP...
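
    Checking that arithmetic, before the per-AS prefix multiplier:

        >>> 65535 * 3
        196605

    so the ceiling described is indeed just under 200,000.
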
  • Verio's routing policy (viewable at http://info.us.bb.verio.net/routing.html [verio.net]) basically says that Verio follows allocation boundaries in accepting inbound announcements: /24s will be heard only from traditional class C space, etc. Additionally, Verio will not announce prefixes longer than /24.

    This isn't "Nazi-ish." This is common sense. If everyone started aggregating properly, there would be a lot less overhead.

    Frankly, I think IP allocations should be yanked from people who don't know how to announce them properly.
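
    A sketch of that kind of boundary filter (simplified to a single rule; real policies enumerate per-registry allocation ranges):

        import ipaddress

        # Traditional class C space: addresses whose leading bits are 110.
        CLASS_C = ipaddress.ip_network("192.0.0.0/3")  # 192.0.0.0 - 223.255.255.255

        def accept(prefix):
            net = ipaddress.ip_network(prefix)
            if net.prefixlen > 24:
                return False                   # nothing longer than a /24, ever
            if net.prefixlen == 24:
                return net.subnet_of(CLASS_C)  # /24s heard only from old class C space
            return True                        # shorter prefixes pass; real filters
                                               # check allocation boundaries here too

        print(accept("198.51.100.0/24"))  # True  (inside 192/3)
        print(accept("10.1.2.0/25"))      # False (longer than /24)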

    ---
    click a button, feed a hungry person!

  • The internet is built in such a way that it is capable of adapting.

    The reason that routers are slow is that there isn't enough reasonable route summarisation at a global level.

    This will fix itself over time, as the internet is built in such a way that it can adapt itself in an evolutionary fashion, much as open source does.

    Stop worrying...

    Let the "corporate entities" sort it all out...
  • It seems very harsh to blame Microsoft for the slow adoption of IPv6. There's been an IPv6 stack available for Windows for ages. I would guess it's more the ISPs who are holding back on upgrading expensive equipment.

    john

  • OK. We'll set it up so that everybody who wants to put up a web page asks YOU whether it's worthwhile or not. Hope you've got some spare time. What's your email address again?

  • We as a community must take action to prevent this most grievous of conclusions for our beloved Internet! All sites that take less than 1,000 visitors a week should hereby relinquish their domain and IP address to The Greater Good and set up camp at either Tripod or Angelfire.

    All ISP or service providers: scale back your IP ranges! Everyone with a class A group of IPs should move to class B; everyone with B to C. All service providers currently shopping for IP addresses, please purchase one (1) and set up virtual domains.

    You there! With the vanity domain for IRC vhosts and email - cut that shit out! And you, with the pointlessly obscure and insipidly dull "weblog" with its own domain - to GeoCities with your ilk!

    We can save ourselves, but only through moderation! (hint, hint!)

    Cheers,
    levine
  • > the fact here is that a multihomed router taking full BGP routes from its upstreams today needs at least 256 megs of RAM and a big CPU (example: Cisco 7500 MINIMUM) in order to handle it.

    Nah, not really. We get full routes from one provider plus partial routes from UUNET and are easily under 128 MB. CPU usage of a 7200 is hovering around 30%.

    I really don't see this as a problem. Memory is dirt cheap (yes, even Ciscos use SDRAM), the CPU usage isn't that high at all, and BGP updates don't occur THAT often. If someone is fucking their routes up badly, that's what route-flap dampening is for.

    No one ever said you had to pick up the /24 routes; a lot of people don't. If this became more frequent, then those morons who advertise their /21 as 8 /24s would get a clue and maybe fix it.
  • > It's already implemented for Linux

    Who cares? No decent internet provider is going to run their core router as a Linux box. Running an MS box for routing is unheard of. When Cisco (with the basic enterprise IOS; I don't want to buy a module for it when I can still use BGP without a problem) AND my upstream provider will do it with me (haha, yeah right), then I'll even think about considering it.

    MS is not responsible for IPv6 not catching on; the reason is that I (the ISP) have absolutely no incentive to go to IPv6. ARIN will give me more IPv6 IPs than IPv4. Woohoo, who cares? The difference in cost for me upgrading my /18 to a /17 is $0. My CPU usage is at 40%, my memory usage is at 60%, but at current memory prices, who cares?

  • Except DNS is entirely unrelated to routing. Have a nice day. DNS would be a problem if it sent a NOTIFY to every DNS server on earth like routing does, but that's why DNS has caching and TTLs.
  • Regional routing allocation. Then your router only needs to know the routes to the nearby supernets and your own subnets; the router upstream from you will know its own subnets and its neighboring supernets, and routing is a lot more aggregated -- unlike now, where there are morons out there advertising /18s as 64 /24s.
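
    A toy longest-prefix-match table for that scheme (documentation prefixes, roles invented): the router carries its own subnets, a regional supernet, and a default toward the core.

        import ipaddress

        TABLE = {
            ipaddress.ip_network("203.0.113.0/24"): "local subnet",
            ipaddress.ip_network("203.0.0.0/12"):   "regional supernet",
            ipaddress.ip_network("0.0.0.0/0"):      "default to upstream",
        }

        def next_hop(dst):
            addr = ipaddress.ip_address(dst)
            # Longest-prefix match: the most specific containing network wins.
            best = max((net for net in TABLE if addr in net),
                       key=lambda net: net.prefixlen)
            return TABLE[best]

        print(next_hop("203.0.113.9"))   # -> local subnet
        print(next_hop("203.4.5.6"))     # -> regional supernet
        print(next_hop("198.51.100.1"))  # -> default to upstream
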
  • > IP addrs aren't sacred. They should be chosen for the convenience of the routers -- that's the idea behind class A, B & C addr blocks

    Well, they weren't. This is one of the many things IPv6 is supposed to solve... though I'm still not entirely sure that a problem even exists...
  • I don't see how this helps any. All it is doing is more or less asking your upstream to put in a BGP filter to prevent propagation anyway. I also see a problem in that it'll create huge routing loops mid-convergence, something that BGP is more or less unaffected by, because rarely do you have two routes pointing in different directions.

    And the core routers do care; they need to have more or less a NEXT_BEST that points to the other provider of the downstream AS in question.

  • Sure, the reported [newsbytes.com] 300 million online worldwide will explode to one billion by 2005, but as internet usage increases globally, the laws of supply and demand will kick in. However, IMO the important question is how the populations of developing and economically crippled countries can get access to the internet.

    A discussion on this topic at /. could generate a lot of creative and viable solutions to this major problem.

  • [ A discussion on this topic at /. could generate a lot of creative and viable solutions to this major problem. ]

    This is the single funniest statement I've read in days.

    You know, judging by the topics and the overall cynicism at /. lately, I'm beginning to think so too.
  • Yeah, stop making babies. That's the answer to all third-world problems, isn't it?

    Read the whole thread, shithead. Stop thinking so linearly.

    Maybe if our government took your laissez-faire sentiments to their logical conclusion, we wouldn't have a third world dominated by officially sanctioned dictators; they would be owned by some infinitely benign corporation instead.

    I suggested it should be a humanitarian effort. I also suggested corporations wouldn't be interested in solving this problem for said reasons. If you still don't understand, I can't help you AC.

  • I think your opinion comes off as a little crass.

    The major reason birthrates are disproportionate with family income is a lack of education and access to information. That is the primary benefit of the internet - access to information, something people in socially and economically deprived countries desperately need! When a government is too negligent to supply basic infrastructure and social services to its citizens, the people have to empower themselves. Communication via the internet would allow people to assemble, exchange ideas and find solutions for their particular situations.

    "...teach a man to fish" is all I'm saying Rich. Enable people to find their own solutions.

    Unfortunately, there is no financial incentive for multinational corporations or wealthy countries like the United States and Canada to help developing countries participate in the information age. And redundant (and just plain stupid) topics like "Is the internet growing too fast" are a bloody waste of time when all this focused brainpower - using the internet *ahem* - could be used to tackle serious global issues.
    [ soapbox | off ]

  • Hi. John Galt is a moron, is your answer. Who else would want to have anything to do with that railroad bitch? ;)
    Good post, thanks. Just wanted to add - Sacramento's municipal utility (SMUD) actually used to own a nuclear plant, but they closed it down about 11 years ago. Certain groups (including myself) warned at the time that it was an irresponsible action based on unscientific feel-good eco-politics and could eventually cause brownouts and power shortages. So you have to understand that this California Stupidity thing has been going on for over a decade now.

    I live in Dallas today, thank God. It's comforting to live in a state that never voted for Bill Clinton, even once, for anything. And a state that can take care of its own power grid, thank you very much.
  • IPv6 is totally non-locative? Does that conflict with the goals stated in the RFCs [rfc-editor.org] that are worried about "transparency"?

    Either way, I don't see any guarantee one way or another on 'locative' vs. 'transparency' with IPv6. There have been only a few addressing formats specified. We may not have seen the one that gets widely adopted yet.


    --
  • by west ( 39918 )
    First of all, there's no such thing as "growing too fast." It grows as fast as it grows. When routers stop being able to handle it, it stops growing until technology catches up. It's a standard population cycle.

    Actually standard population cycles have huge crashes where the population drops, often precipitously. Let's hope the analogy doesn't quite fit :-).

    However, more important is that it's not that more people won't be able to join the internet; it's that performance will degrade to unusable long before that.

    A better analogy is probably traffic gridlock. With too many cars, nobody moves. Unfortunately, that can mean financial catastrophe as people start "giving up on the net" because it's just too slow or unreliable. As if e-commerce didn't have enough problems.

  • The issue is not whether the pipes are saturated, the issue is whether the routers can propagate and crunch routing information fast enough. Spam is irrelevant; the size and complexity of the Internet is.

    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.
  • Great! So now we just need someone to hold your hand while you bell the cat and Presto! ;-)

    Looking forward to hearing your developments!

    Seriously, there was someone a year or two ago who made a vast leap in voice recognition by realizing that you need to vary the timing between firings. Now I haven't heard of any practical developments, patents, licenses, or products as a result, so it may have been bullshit, but... Given that so many years of neural net research apparently overlooked this simple variation of technique, and given that any application of this to routing technology would necessarily make use of asynchronous signals, someone like myself who has no real knowledge about neural nets at all can see that it's likely that a big chunk of the theoretical groundwork for such an application is still terra incognita. Which means that we could easily be looking at 5 years research time PLUS design, development, implementation, and deployment of an utterly new routing protocol, one which would have to speak natively to at least BGP, if not other routing protocols, and which could have mission-critical implications for every major backbone provider.

    Good luck!

    P.S. Don't forget to post those links when you find them!

    Boss of nothin. Big deal.
    Son, go get daddy's hard plastic eyes.

  • ...after a few more years, the novelty of the 'net will fade, and it will become a standard part of our lives. People won't see the need to have their own website anymore, unless they really are trying to share something with the world that is worthwhile.

    No offense, but bullshit. Did the novelty of the Polaroid fade? Are the only people who have an interest in photography these days folks like Ansel and Mapplethorpe? No! Now we have disposable cameras and sticky film.

    Personal pages will be around as long as somebody's giving away free hosting in exchange for ad banners... Now if you want to debate the certain death of the ad banner, that's different. :-)

    Screw the banners...with broadband to the home, you can run your own server and put whatever you want on it, with the only size limit being what your hard drive can hold and without those stupid "punch the monkey" banner ads. If anything, personal pages have the potential to proliferate (say that three times quickly :-) ) to an even greater extent than before, and potentially with less meddling from The Powers That Be (assuming for a moment that they don't do something really stupid, like say that you can't host your own website on your cable-modem or DSL connection).

    (FWIW, I've heard from people in other parts of the country with high-bandwidth connections that their usage of those connections is somewhat restricted. Maybe I'm lucky, as the provider I'm using [lvcm.com] doesn't seem to care much what I do with the fat pipe, as long as I don't resell bandwidth or provide warez/kiddie pr0n/etc. through whatever services I might make available. Besides, with 128 kbps upstream, it's only good for a personal site [dyndns.org] that sees maybe 10-20 hits on a good day.)

  • That just means it's time to start developing AI (neural net) networks. This way the internet will technically be able to route itself more efficiently based on a few guidelines.

  • I don't have the slightest idea where to begin. Getting it off the ground would take a while and a lot of hand-holding, but after about 3-4 months it could be done. I don't have any links on neural nets or AI, but this would be a perfect application of them.
  • I'm gonna search around and see what I can find.
  • No. I am clueless :) But even I didn't say we don't need dynamic routes.

    I asked why we need _heavily_ dynamic routes that build deep routing tables. Sounds to me like someone is abusing the tables. Or using multi-homes as pass-thrus.

  • Please tell me once again why we need heavily dynamic IP routes?

    Is it because somebody wants 69.69.69.69 to be on one end of the continent and 69.69.69.99 to be on the other? Why? I'd be sorely tempted to screw 'em. Drop 69.69.69.99's packets on the floor if it's away from the rest of its class C.

    IP addrs aren't sacred. They should be chosen for the convenience of the routers -- that's the idea behind class A, B & C addr blocks. If AOL wants to use a US IP addr for *.de traffic, fine, just drop the packet in the US and let 'em route it themselves.

    When a system is taxed, somebody will suffer. It shouldn't be those who follow the rules. Otherwise, nobody will follow the rules and it will get much worse.

  • ...and look how well all the US's modern net companies are doing now. Surely an outstanding example for the rest of the world to follow.
  • ...after a few more years, the novelty of the 'net will fade, and it will become a standard part of our lives. People won't see the need to have their own website anymore, unless they really are trying to share something with the world that is worthwhile.

    No offense, but bullshit. Did the novelty of the Polaroid fade? Are the only people who have an interest in photography these days folks like Ansel and Mapplethorpe? No! Now we have disposable cameras and sticky film.
    Personal pages will be around as long as somebody's giving away free hosting in exchange for ad banners... Now if you want to debate the certain death of the ad banner, that's different. :-)


    "Smear'd with gumms of glutenous heat, I touch..." - Comus, John Milton
  • Imminent death of the Internet predicted! Film at eleven.



    (Sorry, couldn't resist)

  • Alright, now that we have proved you are clueless :) Basically it comes down to heavily dynamic routing schemes being (1) way more reliable and (2) way more easily managed. You can't administer a mid-size network without running some sort of dynamic protocol, even if it ends up being RIP or OSPF. Otherwise you would spend all day typing in and deleting static routes.
  • by gskouby ( 61416 ) on Monday April 02, 2001 @09:10AM (#320579)
    This was posted on NANOG [nanog.org] this morning and should be required reading. It is from Sean Doran who basically built Sprint's Network in the early/mid 90s and is probably *the* authority on this kind of stuff. Read it [merit.edu].
  • The problem is with the routing tables being filled. Therefore, if you want to use the oh-so-lame "OSI Model", the problem is layer 3, while what you're talking about, HTTP, is layer 7.
  • by wurzle ( 67794 ) on Monday April 02, 2001 @09:15AM (#320581)
    OK, reality check here -- how does email traffic have ANYTHING to do with the size of routing tables? (Hint: it doesn't.) This article isn't talking about traffic; it's talking about the size of the routing tables.

    And what commission is it that is planning to "delete spammers from the internet"? I hope they don't delete me.

    Troll...........
  • If the current infrastructure is not up to the traffic then the parts of it that are too weak need to be redone. It is pointless to cry that it is being used too heavily and even more pointless and abhorrent to talk of attempting to slow it down.
  • we get an article like this every few months, and the net still seems to be up... don't you get bored of reading the same thing over and over?


    ---
  • That's not the point. I shouldn't have to qualify what I am going to use it for; if it's available, it should be available to all.

  • by jidar ( 83795 ) on Monday April 02, 2001 @11:21AM (#320585)
    I see everyone blaming the small multihomed sites and talking about banning multihoming on this forum. I strongly disagree with that, and I strongly disagree with this nasty quote from the article:

    "Half of the companies that are multihomed should have gotten better service from their providers," says Patrik Faltstrom, a Cisco engineer and co-chair of the IETF's Applications Area. "ISPs haven't done a good enough job explaining to their customers that they don't need to multihome."

    Yeah sure, my provider told me how I didn't need to multihome, and I got burned. Excuse me for stamping on your elitism here, but everybody wants redundancy, and you shouldn't have to be a Fortune 500 to get it.

    About a year and a half ago our company was looking into upgrading bandwidth, and since we already had a T1, my boss figured we could buy another T1 from a different company than our normal bandwidth provider, thus achieving increased speed and redundancy with a different ISP at the same time.

    Well, I found out that in order to do something like that I needed to run BGP to get myself into the core routers' routing tables, because I would have multiple paths to my network. I was a bit concerned about this because it sounded drastic to me, and I spoke with my tech rep at my current provider and we mutually agreed that perhaps it wasn't the best thing to do, as we could get redundancy from their network and I wouldn't need to have my own AS.
    Well, to make a long story not quite as long, I got burned. They had some routing problems that affected both of my links at once, and we were down for a bit. I had to explain to the boss why I had made the decision I did, and he wasn't real happy about it. I am about to be installing our third T1 and switching to BGP so we can be multihomed.

    This is a very typical story, and it is the primary reason that the BGP4 tables are so huge. Every dotcom and their mom has been going through a similar scenario the past year or so. Now of course this is starting not to work very well, so we are seeing some problems, and these same dotcoms are being blamed for it.

    The problem isn't the dotcoms; it's due to limitations in the current system. I suppose they (we) do shoulder some of the blame, but christ, shouldn't we be allowed to have some kind of redundancy? What, is there some kind of special "VIPs only" sign in front of the redundancy bar? To hell with that. Obviously the current situation cannot continue, but I'll tell you right now that all of these dotcoms (and their moms) are not going to be giving up redundancy, so you core router guys had better figure out how to let everyone in on the redundancy bandwagon.
  • Meanwhile, places like Extreme offer more GigE ports than you could ever imagine using and at a much more reasonable price. Cisco has a stronghold on the market indeed.

    I've worked with Extreme's switches. They're good. Simple, WICKED fast, and more cost-effective than Cisco.

    My only beef is that if you want to use them as core routers, you've got to be running Ethernet. No ATM interfaces. No T1 interfaces, no DS3 interfaces, no OC3/12/96 interfaces. So you've still got to hang other vendors' boxes off the mess and you risk vendors fingerpointing when something chokes.

    It's kind of a catch-22. Cisco's mature, but almost too much so, with all sorts of useless legacy equipment in their product line that they have to support and legacy code in the IOS kernel. Extreme needs to become more mature; I kind of feel their software needs some fleshing out yet, and they need to add some components to their product line.

    I love that crazy fast backplane on the BlackDiamonds, but for chrissake can't we do something about the purple and green color scheme?

    -carl
  • by carlhirsch ( 87880 ) on Monday April 02, 2001 @09:59AM (#320587) Homepage
    Most routers contain only motorola 68k processors which seems absurd for the price that routers go for.

    Actually, I'm pretty sure that Cisco's 2500-series routers do have m68k processors -- 68030, I believe. Everything newer has a PowerPC variant. Dunno about Juniper, but I'm going to guess they're running a RISC processor of some sort.

    The hardware issue was another factor covered by NANOG this morning in response to this article. The long and the short of it is that throwing hardware at a problem might get you through the day, but good planning and forethought will bring you through a lot more cleanly.

    With any router manufacturer, your value-added is never the hardware but rather the software (i.e. Cisco's IOS) and the service. Sure, you could grab a PII and run GNU Zebra for your BGP peering, but the CCO is a great resource to have when troubleshooting. Puts MS's TechNotes system to shame. It WORKS. Plus, working in downtown Chicago I know that if my 7200 takes a dive, Cisco's got a parts depot blocks away where I can get parts.

    -carl
  • Second of all, a greater concern might be "Is the Internet growing too disorganized?" There are ten jillion pages out there, and the vast majority of them aren't even linked to from other documents. They don't show up on search engines, they just sit there

    As one of the replies to your post pointed out, a lot of these jillion pages (the vast majority of homepages, anyway) are crap. Personal crap. Yes, they're sitting on a web that's supposed to be interconnected, but for these personal pages there's no need - they exist simply so their owners can tell a friend or relative, "hey, go check it out, it's got my dog's picture on it."

    This majority of personal pages should be ignored by search engines simply because there is nothing to search for in them - typing "Cindy" into Google when looking for your cousin's homepage wouldn't be much use no matter how many times Google's been to her site.

    ---

  • That just means it's time to start developing AI (neural net) networks. This way the internet will technically be able to route itself more efficiently based on a few guidelines.

    We're thrilled to see your enthusiasm, and we look forward to the results of your research and work in this field to help this endeavor get off the ground. Godspeed, sir.

    ---

  • What I want to know is why everyone, (apparently) including the /. community, seems to think that the only thing on the Internet is HTTP/WWW.

    Well, I certainly don't think that NNTP is putting a strain on any routers...

  • by zpengo ( 99887 ) on Monday April 02, 2001 @09:12AM (#320593) Homepage
    First of all, there's no such thing as "growing too fast." It grows as fast as it grows. When routers stop being able to handle it, it stops growing until technology catches up. It's a standard population cycle.

    Second of all, a greater concern might be "Is the Internet growing too disorganized?" There are ten jillion pages out there, and the vast majority of them aren't even linked to from other documents. They don't show up on search engines, they just sit there, with the web masters wondering why they've only gotten 3 visits in the past year.

    Even the sites that can be found by search engines are getting increasingly hard to organize. Yahoo! [yahoo.com] is starting to wobble in their traditional high-quality, hand-picked links directory. They can't keep up with the net, so they've started implementing pay-for-listing programs. The Open Directory Project [dmoz.org] survives because of heaps and heaps of volunteer editors, but every category varies in quality, and some basic categories don't even have editors. Many other search engines have attempted dynamically-created directories based on keywords, but these are easy to spam and often have very low-quality content.

    All the disorganization also affects our information processing [jamesarcher.net] skills. We don't read like we used to. Hardcore web surfers are generally incapable of sitting down and enjoying a good book, because they're too accustomed to the "This page doesn't have it, go to the next one" cognitive paradigm.

    What we really need is a new way to organize web sites [jamesarcher.net], perhaps based on a combination of client (most visited sites), server (author-specified categories), and parsed (most linked-to sites) information.

    The internet is not growing too fast. Our ability to cope with it, however, is failing to grow with it.

  • by BradleyUffner ( 103496 ) on Monday April 02, 2001 @09:08AM (#320597) Homepage
    According to the article, the problem isn't with the amount of user traffic on the internet; it is with the amount of router-to-router traffic. The routing tables are being updated too quickly for the routers to handle and pass the changes on to other routers. As the number of routing entries and updates grows, because of multi-homed systems, the system can't keep up with the changes. At least that's the impression I got from the article; please tell me if I'm wrong.
    =\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\=\ =\=\=\=\
  • by Animats ( 122034 ) on Monday April 02, 2001 @05:40PM (#320603) Homepage
    The basic problem is that IP addresses were designed to be locative (you can find out where something is by decomposing its address), but over time have become more non-locative (the address is just a number).

    The same is true of telephone numbers. Originally, the area code addressed an area, the exchange prefix addressed a specific switch, and switches didn't need much routing info. Today, two adjacent phones can have completely different area codes and exchange prefixes, and most phone calls involve a database lookup to get routing info for the call. The internal architecture of the phone system has changed completely in the last twenty years to make this possible.

    The Internet is going through the same transition. The endpoint is probably IPv6, totally non-locative addresses, a lookup for every routing decision, and a hierarchical cache of a distributed database system for the routing info, somewhat like DNS. But we're not there yet. The existing technology was designed for a network with a hierarchical address structure, so that the routers only needed to track class A, B and C networks. Now they're expected to track more finely divided portions of the IP address space, requiring more router info. BGP is hard-pressed to keep up.

    I looked at this problem once in the 1980s, and concluded that part of the solution was a strong separation between information about actual topology (what nodes and links actually exist) and network status (which nodes and links are up). The former changes more slowly than the latter, and contains more data. The link state info changes more rapidly, but the data volume per link is small. Such a separation helps you get a handle on the data volume.

    It also makes the network more robust, because you can afford to do more checking on topology data. You'd like to verify that both ends of a link agree on a link before putting it into the topology, for example. That is, before anybody accepts a link between A and B, they need to see A saying it has a link to B and B saying it has a link to A, preferably with digital signatures. I was trying to devise a routing protocol for DoD that would resist attacks on the network in the form of bad routing data. But we didn't get the contract to build that, so I went on to other things. Whoever gets stuck with replacing BGP needs to solve that problem.

    But I did this before mobile IP was an issue, and never had to deal with that very difficult problem.
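
    A minimal sketch of that separation (data structures invented for illustration; the signature checking is omitted): a link enters the slow-changing topology only when both endpoints assert it, while the fast-changing up/down state lives in a separate, much smaller map.

        # Topology: slow-changing and verified. A link A-B is accepted only when
        # A claims (A, B) and B claims (B, A) -- signatures would be checked here.
        claims = set()      # (reporter, neighbor) assertions heard from routers
        topology = set()    # verified undirected links

        def assert_link(reporter, neighbor):
            claims.add((reporter, neighbor))
            if (neighbor, reporter) in claims:
                topology.add(frozenset((reporter, neighbor)))

        # Link state: fast-changing, but tiny per link -- just up or down.
        link_state = {}     # frozenset({a, b}) -> bool

        def set_state(a, b, up):
            link = frozenset((a, b))
            if link in topology:     # status is only meaningful for known links
                link_state[link] = up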

  • by susano_otter ( 123650 ) on Monday April 02, 2001 @09:50AM (#320605) Homepage

    Ignore this shit. Every year, someone predicts the internet will "collapse under its own weight". Guess what? It NEVER DOES. People have been claiming the sky is falling since NSFnet became available to the public -- I'm still waiting.

    I think you misspelled "Feel free to sit back on your lazy ass while somebody else designs and implements a fix - just like they always do". HTH. HAND.

  • True. Get your own IPv6 tunnels for free here [he.net] and here [freenet6.net].

    There is also some very interesting information regarding IPv6 on various sites, such as 6BONE [6bone.net]'s and Sun [sun.com]'s. It is really great to poke around with IPv6 stuff; there are a lot of programs that support it by now, such as lynx (-dev tree only), w3m, BitchX, epic, etc. etc. etc. And also, IPv6 is cool because it lets you create such educational hosts as dead:beef:c0ff:eeca:bf00:3:133:7.

    If you don't believe me, here is my sit1 interface:


    sit1 Link encap:IPv6-in-IPv4
    inet6 addr: 3ffe:1200:3028:817d:dead:dead:dead:dead/127 Scope:Global
    inet6 addr: 3ffe:1200:3028:ff01::2fb/127 Scope:Global
    UP POINTOPOINT RUNNING NOARP MTU:1480 Metric:1
    RX packets:166 errors:0 dropped:0 overruns:0 frame:0
    TX packets:156 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0
    RX bytes:22433 (21.9 Kb) TX bytes:18211 (17.7 Kb)


    You're tired of Slashdot ads? Get junkbuster [junkbusters.com] now!
  • "
    Utilities such as: Gas, Electric, Water, Internet Access, should be GOVERNMENT regulated and provided. "

    "
    YES!! more socialism! while we're at it cars, computers, televisions, steak tar tar and beer should also be provided by the government.
    "

    I think the argument is that infrastructure should be built and maintained by the government; otherwise you'd need multiple gas supplies to your house for competition. The idea is that only one set of power lines needs to be built to your house, rather than one per electricity company.

  • &nbsp

    ... including a rough latitude and longitude for the system with the IP ...

    What about devices that change position? I suppose that a "rough" latitude would work for devices that don't move very far, but what about devices on trucks, trains, airplanes, etc., that routinely cross continents? Would they have to get new IP addresses if they moved "too far"?

    Jeff

  • This Slashdot story reminds me of the restaurants nobody goes to because they are too crowded.

  • What we really need is a new way to organize web sites.

    I think it's called "XML". Define the data on a site so that it's easier to search. Of course, there are 101 reasons why this hasn't caught on on the client side yet...

    Also, I can't imagine it's as bad as you say. Whenever I need something, I type it in Google. I'm amazed at how fast and accurate my results are. I almost always find what I'm looking for (no, not pr0n!). I'm sure there's a LOT of sites that I've never seen, but do I care? The relevant ones with good content seem to come up just fine. Yes, there's some serious room for improvement, but I wouldn't say that it's completely disorganized.
  • by _ganja_ ( 179968 ) on Monday April 02, 2001 @11:18AM (#320619) Homepage
    "frequent updates to the routing table entries by network managers are causing instability in the Internet's backbone routing infrastructure"

    Yep, seen this too many times to be funny. Network managers: when you make a change to the networks you advertise or your filters, use "clear ip bgp * soft out"! Without the soft out, all your routes get withdrawn and then re-added around a minute later, creating a mini route wave through the Internet. Instead of forcefully resetting your peers, soft out will just bump the table version without withdrawing routes. You could also use "soft in" for applying filters to incoming routes, BUT watch out for memory usage with that command. Soft out doesn't have this issue.

    This is fairly basic Cisco IOS stuff but I've seen network admins from 2 of the top ISPs do this on peering point routers that were advertising a lot of routes to a lot of peers.
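
    For reference, the two forms being contrasted (both standard IOS commands; the annotations are mine):

        router# clear ip bgp *            ! hard reset: sessions drop, every route
                                          ! is withdrawn and re-learned later
        router# clear ip bgp * soft out   ! re-advertise under the new policy
                                          ! without tearing sessions down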

  • But if you RTFA, you'd have seen that's what they wanted to do in the first place: replace BGP4.

    I can't be karma whoring - I've already hit 50!
  • Ways to fix this:
    • Replace HD with flash disk
    • Under-clock the processor to where only a heatsink is needed
    Not too hard, eh? And now you have the same thing as a Cisco, just a case fan or two moving. For those really worried, redundant power supplies are nice too.

    I can't be karma whoring - I've already hit 50!
  • including a rough latitude and longitude for the system with the IP ...

    What about devices that change position? I suppose that a "rough" latitude would work for devices that don't move very far, but what about devices on trucks, trains, airplanes, etc., that routinely cross continents? Would they have to get new IP addresses if they moved "too far"?

    What he proposes is great, and makes sense. It's just like the phone system, with an area code and exchange.

    I guess cell phones have to get a new phone number when they leave an area code? :)

    The phone companies managed huge networks years before most of us were born; maybe we should take a lesson from them and quit being stuck-up assholes.
    -

  • Maybe everything should go back to text mode

    Then /. would have to switch off the lameness filter once and for all.

  • by Sommelier ( 243051 ) on Monday April 02, 2001 @09:21AM (#320634)

    To avoid a complete meltdown of the Internet, President G.W. Bush has recommended that rolling blankouts begin within the next two months. During peak periods of Internet traffic, up to 100,000 sites may have their routing tables blanked for up to an hour in order to reduce the load on the system.

    "I may not know much about the Internet, but I don't think you need a PhD to see what all this free music trading is doing to our nation supply of routing tables," Bush was quoted as saying. Asked if he was implying that Napster was responsible for the rolling blankouts, he replied "The RIAA says they are, so it must be true."

  • Yum! Flamebait, I'm hungry.

    /me picks up idiot stick and beats notbob upside head.

    feel better now?

    "DeRegulation and Commercialization of basic utilities was the dumbest idea ever invented.

    "Okay what did it help? Not a blithering thing, okay so some jackass in idaho saved $2.95 on his gas bill, which of course is more important then the 14 BILLION dollar debt the California utilities are suffering. "

    No, the dumbest idea ever was calling the removal of wholesale cost restrictions while keeping consumer charges fixed "deregulation." Let's open the back end to market fluctuation and not allow the front end to respond. Hmmm, that's smart.

    By the way, that was Cali's deregulation and had nothing to do with Idaho, Washington, Arizona, Colorado or any other state. If you're going to deregulate, do it right and open the whole thing to the market.

    "De-Regulation was an idea pushed through by greedy assholes trying to convince you it was a good idea while they were planning to run with the profits even more then when they abused the government contracts. "

    Uh huh... Price fixing is a great way to promote greed. Let's charge the customer less than it costs us to provide power and make a HUGE profit. Then we'll be billions of dollars in debt and richer than Bill Gates.

    "Utilities such as: Gas, Electric, Water, Internet Access, should be GOVERNMENT regulated and provided. "

    YES!! More socialism! While we're at it, cars, computers, televisions, steak tartare and beer should also be provided by the government.

    "Oh no not that evil government???? I'm sorry but some shit has to be done by the big guys, because they're the only ones not trying to make a buck in fact they loose money constantly."

    Ha, they don't need to make money; they just spend it and take more from you. So let's give this stuff to people for whom cost is no major impediment, and they still buy from the lowest bidder. Making a buck can be a good thing. It means you provided a good or service for less than people were willing to pay for it.

    "Nobody in Cali is building a new Nuclear Plant or Spending billions on alternative energy resources research. "

    Damn right! And that's the problem. If they want the power, they've got to generate it somehow. If they don't want a nuclear plant in their backyard, maybe they can pay the folks in Idaho a few billion dollars to have it in theirs. I'm sure the guy in Idaho wouldn't mind getting free power and a check every month from the residents of Cali for that reactor a few miles from his house. Of course, maybe Sacramento will realize they can get in on some of that cash down in Silicon Valley if they are willing to build a reactor or two and charge a market rate to the folks in the Bay Area.

    "Think about shit before you blindly go saying ..."

    No comment... :)

    "Also why can't we get Clinton back? I don't care who sucks his dick just that he doesn't f up my economy like bush or gore will)"

    Well, a constitutional amendment, for one thing. During whose administration did this start? So we see his results, but we'll make Bush or Gore guilty even before they can commit the crime. Good.

    This reminds me of something I read once...
    Who is John Galt?

  • Well, one thing's for certain. There will be no stopping growth until every home can simultaneously stream in four separate, HDTV-quality porno streams.

  • by MSBob ( 307239 ) on Monday April 02, 2001 @09:03AM (#320649)
    According to this [slashdot.org] post it's not growing fast enough...
  • And then they implemented their "we'll look at it for $199" approach, which probably makes sense for all involved IF you accept that Yahoo is the 800-pound gorilla.
    As if that isn't bad enough, for certain specific sites Yahoo charges $600 just to look at your site. There is no option to submit a site without paying. What that basically means is that Yahoo is just a giant paid Yellow Pages directory now. This policy raises the bar high enough that a lot of webmasters are not going to pay the fee. The whole point of the net was originally that it would level the playing field, right? Policies like Yahoo's encourage success for sites that are well-financed, rather than sites that are actually good.

    Google, on the other hand, encourages success for sites that are good, rather than sites that are well-financed. Being well-financed will always help, but Google ultimately places the most weight on sites that get referenced frequently, instead of sites that can afford to take out a $600 yellow-page ad. That is the sort of innovation in web organization that I think will have a big impact on how we find information in the future.
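
    A minimal sketch of that "weight by references" idea, in the spirit of a PageRank-style power iteration; the link graph, damping factor, and iteration count below are invented for illustration and are not Google's actual algorithm or data.

        # Toy "rank by references" iteration: pages referenced by many
        # well-ranked pages float to the top, regardless of budget.
        # The graph and damping factor are made up for illustration.
        links = {                       # page -> pages it links to
            "good-site": ["other-site"],
            "rich-site": ["good-site"],
            "other-site": ["good-site", "rich-site"],
        }
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        damping = 0.85

        for _ in range(50):             # iterate until the ranks settle
            new = {p: (1 - damping) / len(pages) for p in pages}
            for page, outs in links.items():
                for target in outs:
                    new[target] += damping * rank[page] / len(outs)
            rank = new

        print(sorted(rank.items(), key=lambda kv: -kv[1]))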

    -Keslin [keslin.com], the naked nerd girl

  • by PotLegalizer ( 398537 ) on Monday April 02, 2001 @09:40AM (#320655)
    >>Maybe everything should go back to text mode

    Oh great... back to ascii porn

  • My personal opinion on this problem is that the content providers have too much control over the system. Freenet seems to solve this by getting your data from the nearest source, just like mirrors are supposed to. If every home dial-up user contributes to serving files, the problem is reduced. Obviously the encryption in Freenet slows it down, and that could be dropped; a publisher would have to submit content to the system, but it would benefit us all if they did. For example, I live in the UK, and I'm sure many other /. readers do. It would be so much less burden on the transatlantic links, and on all the routers between me and /., if I could get the info from whoever has just read it in my town, or even my street. You could solve a lot of the bandwidth problems by sharing the load.
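
    A minimal sketch of the "fetch from the nearest holder" idea, assuming you can probe each peer that holds a copy and pick the fastest responder; the peer names and the TCP-connect probe are hypothetical, not Freenet's actual protocol.

        import socket
        import time

        # Hypothetical peers believed to hold a copy of the file; in a
        # Freenet-like system this list would come from the network itself.
        PEERS = ["peer1.example.net", "peer2.example.net", "peer3.example.net"]

        def rtt(host, port=80, timeout=2.0):
            """Rough round-trip estimate: time a TCP connect to the peer."""
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return time.monotonic() - start
            except OSError:
                return float("inf")     # unreachable peers sort last

        # Fetch from whoever answers fastest: usually the topologically
        # nearest copy, sparing long-haul links like the transatlantic ones.
        print("fetching from", min(PEERS, key=rtt))
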
  • by meza ( 414214 ) on Monday April 02, 2001 @09:59AM (#320665)
    Don't know much about TCP,
    Don't know much about BGP,
    Don't know much about DHCP,
    Don't know nothin' 'bout my ISP.
    But I do know I don't like IP (v4, that is),
    And I know that if we used IP (v6, of course),
    What a wonderful world it would be.

  • I think you misspelled "Feel free to sit back on your lazy ass while somebody else designs and implements a fix - just like they always do".

    Yes, the people who are paid to engineer the Internet will solve the technical issue, and, as always, the gloomy predictions à la Metcalfe will probably be proved wrong.
    Remember when the backbones were 56 Kbps (before 1988) and some people found it unthinkable that the network wouldn't collapse? Remember the T1 backbones up to 1991, when the Internet was supposedly doomed to be charged by volume?
    Nowadays I have a 1-5 Mbps download connection to major sites; bandwidth is no longer the big issue. Now the predictions are about explosions and wild churn in the routing tables (to be fair, that started a long time ago, along with the worry that IPv4's address space is inconceivably small).

  • I'm not a router expert, but...

    Routing is the process of receiving a packet and checking its destination. Using a table lookup keyed on that destination, the router finds the neighboring router it should forward the packet to. The table is updated so that routing can find 'the shortest path.' The real question here is how 'shortest', or distance, is defined.
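
    A minimal sketch of that destination lookup, assuming the usual longest-prefix-match rule; the prefixes and next hops below are invented for illustration, not taken from a real backbone table.

        import ipaddress

        # Toy forwarding table: prefix -> next-hop router (invented entries).
        TABLE = {
            ipaddress.ip_network("0.0.0.0/0"): "upstream-a",       # default route
            ipaddress.ip_network("192.0.2.0/24"): "peer-b",
            ipaddress.ip_network("192.0.2.128/25"): "customer-c",  # more specific
        }

        def next_hop(dst):
            """Longest-prefix match: the most specific covering prefix wins."""
            addr = ipaddress.ip_address(dst)
            matches = [net for net in TABLE if addr in net]
            best = max(matches, key=lambda net: net.prefixlen)
            return TABLE[best]

        print(next_hop("192.0.2.200"))   # customer-c: the /25 beats the /24
        print(next_hop("198.51.100.1"))  # upstream-a: falls to the default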

    Your method of configuring geographic location on routers is basically trying to get the router to calculate its distance to its neighbors. However, you're thinking about geographic distance rather than logical topology, which is the 'road map' that routers actually travel on. Two direct problems stem from your method:
    1. Pipe size (bandwidth) is not accounted for. If there exist two paths to the destination, should I take the 'geographically' shorter path that runs over a 56K line, or a slightly longer path that consists of several OC3s?
    2. Hop count is not weighed. The geographically shorter path may travel through 20 routers, while a longer path may hop through only 5. How do you determine the tradeoffs?

    Most importantly, routers currently have a mechanism to address this routing issue by allowing users to define the distance between routers. I believe this is usually set to reflect bandwidth (i.e., people usually define their OC3s to be 'shorter' than their T1s). It is generally easier for network managers to adjust their link distances than to tweak lat/lon coordinates.
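
    A minimal sketch of how bandwidth-derived costs feed a shortest-path computation, in the spirit of the classic cost = reference-bandwidth / link-bandwidth convention (as in OSPF); the topology, bandwidths, and reference value below are invented for illustration.

        import heapq

        REF_BW = 100_000_000  # 100 Mbps reference bandwidth (illustrative)

        def cost(bw_bps):
            """Higher bandwidth -> lower cost, so fat pipes look 'shorter'."""
            return max(1, REF_BW // bw_bps)

        # Toy topology: node -> [(neighbor, link bandwidth in bits/sec)].
        GRAPH = {
            "A": [("B", 56_000), ("C", 155_000_000)],   # 56K line vs. OC-3
            "B": [("A", 56_000), ("D", 56_000)],
            "C": [("A", 155_000_000), ("D", 155_000_000)],
            "D": [("B", 56_000), ("C", 155_000_000)],
        }

        def shortest_path(src, dst):
            """Plain Dijkstra over the bandwidth-derived link costs."""
            heap = [(0, src, [src])]
            seen = set()
            while heap:
                dist, node, path = heapq.heappop(heap)
                if node == dst:
                    return dist, path
                if node in seen:
                    continue
                seen.add(node)
                for nbr, bw in GRAPH[node]:
                    if nbr not in seen:
                        heapq.heappush(heap, (dist + cost(bw), nbr, path + [nbr]))

        # Prefers A-C-D over the geographically 'shorter' but slow A-B-D.
        print(shortest_path("A", "D"))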

    Most of the problems with congestion I've seen are due to capacity: there may be two different paths to a destination, but one path is over-utilized and the other is under-utilized. Currently, routers are very limited in their load-balancing techniques. I believe Cisco routers can balance across up to 6 equal paths, but configuring multiple 'equal' paths is not easy in a meshed network (equal as in defined distance). Without equal paths, routers generally send their packets down the 'shortest' path, regardless of congestion. Routers are not intelligent enough to recognize congestion and calculate the 'next shortest path' (though 'next-shortest' paths are computed when there is a link failure). Without that kind of mechanism, if you got the router to use the 'path of least resistance,' it would just shift the congestion from the previous route to your newly defined route. The MPLS initiative is currently trying to address this capacity issue.
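
    A minimal sketch of balancing across equal paths by hashing the flow 5-tuple, the common trick for keeping one flow's packets on one path (so they aren't reordered) while spreading different flows; the interface names and addresses are invented for illustration.

        import hashlib

        # Invented equal-cost next hops; a real router learns these from its
        # routing protocol when several paths tie on defined distance.
        EQUAL_PATHS = ["if-eth0", "if-eth1", "if-eth2"]

        def pick_path(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
            """Hash the flow 5-tuple so every packet of a flow takes the
            same path, while distinct flows spread across all paths."""
            key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
            return EQUAL_PATHS[hashlib.sha256(key).digest()[0] % len(EQUAL_PATHS)]

        print(pick_path("10.0.0.1", "192.0.2.7", 40000, 80))
        print(pick_path("10.0.0.2", "192.0.2.7", 40001, 80))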

"Your stupidity, Allen, is simply not up to par." -- Dave Mack (mack@inco.UUCP) "Yours is." -- Allen Gwinn (allen@sulaco.sigma.com), in alt.flame

Working...