Whatever Happened to Internet Redundancy?

blueforce asks: "At one time, there was this really neat concept built into the internet that said there's all this redundancy, like a spider web. If one segment or router went down, the internet would re-route traffic around the faulty segment and keep on chuggin'. So, as I sit here today and can't get to a whole bunch of places on the net, I'm wondering: what gives? Where's all the redundancy? I'm not referring to mirrors or co-location. It almost seems like a script kiddie with some real ambition could bring the world to its knees. What really happens when routers go down, and what goes on when something like a Cable and Wireless pipe or someone else's OC-something backbone goes down?" Redundancies are nice, but not infinite. Planned DoS attacks can take out dozens or hundreds of routers at once, and as the number of downed nodes increases, the process of rerouting becomes increasingly difficult. What are some of the largest problems with the systems in use today, and are there ways to improve them?
  • by Anonymous Coward
    This post is redundant to exactly that point.

    First to point out it was a redundant post!
  • by Anonymous Coward

    OK, a guy asks what is actually a valid question, and you decide you need to use the opportunity to try to boost your poor self-esteem by mocking the guy and showing off your (seemingly undergraduate-level) knowledge of internet routing? You think that knowing the basics about internet routing makes you better than other people? Get over yourself. BGP has problems of its own, like requiring huge fucking routing tables.

  • Well, considering that I find my Internet connection more useful than my phone line, and that I demand (damn near) 100% reliability from my phone, I think ISPs will eventually realize that they cannot survive if they only provide 80% (or 95%, or whatever) connectivity.

    Of course, there's that whole other problem of ISPs restricting certain types of traffic (upstream, certain *cough*Napster*cough* ports, etc). I really don't like intentionally degraded service...
    --
  • by Mike Hicks ( 244 ) <hick0088@tc.umn.edu> on Thursday April 19, 2001 @12:48PM (#278506) Homepage Journal
    I just read about this a week or so ago.. Back before the ARPANet was built, someone did the math and discovered that you only need three or four connections to each node to provide reliability very close to what you'd have if all of the nodes were directly connected to all of the other nodes. In practice, I don't think that even the ARPANet got to that level of connectivity. Certainly, Internet Service Providers of today generally don't have anywhere near that level of connectivity.

    There are a number of obvious reasons why high levels of connectivity don't exist. One is cost -- who wants to pay for multiple connections if you usually only need one? That's also partly a psychological problem. Obviously, there are advantages to having multiple connections -- lower ping times and better throughput to what would otherwise be 'distant' networks, for instance.

    Another reason is the fact that routing tables would be extremely complex if that many connections existed. There may be algorithms that can reduce the complexity, but it's definitely not something I really want to think about..

    Otherwise, I suppose a lot of people just haven't thought about it.
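    A rough back-of-the-envelope sketch of that claim (assuming, purely for illustration, that each link fails independently with probability p): the chance that a node with k independent links is completely cut off is p^k, which drops off very quickly.

        # Illustrative only: independent link failures with probability p.
        # A node with k independent uplinks is cut off with probability p**k.
        p = 0.01  # assume each link is down 1% of the time

        for k in range(1, 5):
            print(f"{k} link(s): P(cut off) = {p ** k:.8f}")

        # 1 link : 0.01        (roughly 3.7 days of isolation per year)
        # 2 links: 0.0001
        # 3 links: 0.000001
        # 4 links: 0.00000001  (well past 'five nines' already)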
    --
  • "roww-ting", of course. like "owl". or do some people call those birds "ools"?

    --
    Forget Napster. Why not really break the law?

  • Eighty-seven _trillion_ dollars was exchanged in international foreign-exchange electronic transactions.

    That's not last year, or this year -- that was back in 1986. Before the 70s it was like one or two trillion a year, and then it started to snowball. Finance is by far the biggest customer of communications networks.

    Taking out the world's financial networks for a second would impede roughly $2.8 million worth of transactions. A minute of downtime would be about $165 million; an hour, nearly $10 billion. And that is from the 1986 figures -- fifteen years ago. Any guesses on how much of the world's financial transactions go over the net now?

    It's true. Or, to be more accurate, the world's finances could be sabotaged this way quite easily. The weird thing is, they're already taking damage just from stuff like Microsoft's irresponsibility -- you don't need a malicious geek in a trenchcoat to cause billions of dollars of financial damage. Your software vendor can do you that kind of damage without even thinking, charge you for it, and then set you up for even more.
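    For what it's worth, a quick sanity check of those per-second figures, using the $87 trillion/year number above and assuming (purely for the sake of arithmetic) that transactions are spread evenly over the year:

        # Quick sanity check of the downtime figures above.
        # Assumes the quoted $87 trillion/year and an even spread across the
        # year; real trading is bursty, so this is order-of-magnitude only.
        yearly_usd = 87e12
        seconds_per_year = 365 * 24 * 3600

        per_second = yearly_usd / seconds_per_year
        print(f"per second: ${per_second:,.0f}")          # ~ $2.8 million
        print(f"per minute: ${per_second * 60:,.0f}")     # ~ $165 million
        print(f"per hour:   ${per_second * 3600:,.0f}")   # ~ $9.9 billion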

  • Oh you have noticed the improvements in geriatric sex aids like Viagra then?
  • I don't think you understand what he's saying. What he's trying to say is that America doesn't produce as much pollution relative to its energy consumption as other nations do. Not that that justifies the huge consumption in the first place.
  • There's a research project called Detour at the University of Washington to look into fixing this kind of thing.

    The only working link I can find right now is http://www.cs.washington.edu/homes/savage/papers/IEEEMicro99.pdf [washington.edu]
  • The common misconception is that the internet is valuable because it allows multiple viewpoints to reach multiple audiences via multiple pathways--hence the focus on redundant infrastructures and the decentralization of services. The reason why it's a misconception is not because those features no longer exist (though they're fading); it's because there's no longer a need.

    The consolidation within the news-service sector of our economies has assured one thing: there is now only one message to get across. Only one message and soon only one audience, as human languages are dying out (thanks in part to the internet but more because of radio). If there is only one message and one audience, then you no longer have to worry about having multiple pathways. Redundancies have been made redundant.

    But the corporatization of the internet is only partially to blame. More of the blame falls on the EU: who would've thought that banding the nations of Europe together in one bureaucratic machine could do so much harm to human civilization? Like the internet, sovereignty was once decentralized and redundant across many pathways. Now, a single marching order can come from Brussels and there'll be a third world war.

    But redundancy is a very necessary thing. It's not safe to have just one of something: we must have several. If we are to have a third world war, we must have competing manifestations (WW3a and WW3b, for example), or else how can we possibly determine which was the more effective or more desirable? And what if one were to fizzle out? In the old world order, we'd be covered by grand international rivalries. In the new world order, we can only hope that fleeting petty intracultural differences can take up the slack.

    The internet is an incredibly important technological phenomenon, but let's not allow it to blind us to the more pressing drives in humanity (such as competition). Looking solely at the internet as an end product may mask the underlying social and political conditions that created our mess in the first place.
  • The Rogers network (i.e. Rogers@Home) was partially disrupted a few weeks back by 'copper thieves' near St. Catharines, Ontario.

    Is that the outage you were thinking of?

    Kinda makes you wonder why a lot of data traffic would be going over copper. I thought copper was mostly restricted to the last mile these days.

    Or is there so much copper out there that it won't be phased out for years? Anyone?
  • The funniest thing about this is going to be the (-1, Redundant) moderation of this double post on redundancy.
  • by ptomblin ( 1378 ) <ptomblin@xcski.com> on Thursday April 19, 2001 @01:04PM (#278515) Homepage Journal
    Back when the Internet was designed and run by techies, the techies would say that they needed three redundant backbones running through different cities and with no common switching points to make sure they had 100% uptime, and they leased the lines to do it. But now the Internet is in the hands of profit seeking companies, and the bean counters say "we don't have to have 100% reliability, 80% is good enough, so stop using three backbones where one will do", and suddenly you have the situation where one backhoe can cut off one part of the country from another.
  • Mainly because with IPv6 they have a clean slate to assign the addresses properly and allow for clean, dense aggregation.

    Also, IPv6 allows for things like assigning the last 64 bits of your address (the host ID) statically or dynamically, but /without/ having it tied to the first 48 bits (or whatever) that control how packets are routed to you. IPv6 DNS also supports this division between host ID and network, so you can renumber your network in DNS by changing just one record!

    What it means is that currently, with IPv4, if you want 2 redundant links to the internet from 2 providers, you have to either:

    - get a provider-independent (PI) chunk of addresses from your NIC and have both your ISPs add this (small) subnet to their BGP adverts. PI subnets are increasingly difficult to get, because they're running low and because they're a huge overhead on backbone routing, and hence discouraged; your ISP isn't even required to add your PI block to its BGP adverts.

    - or get a provider-dependent (PD) subnet from one ISP and persuade the other ISP to advertise this chunk (not good).

    - or get a PD subnet from both and dual-home all your hosts, which could be a mighty pain in the arse if you have any significant number of hosts.

    Instead, with IPv6, you just get 2 chunks of address space, say dead:beaf:: and f00b:a43d::, one from each of your ISPs' address space. You assign a unique host ID to each machine and let it figure out that its full IPv6 address can be either dead:beaf::hostid or f00b:a43d::hostid -- statically, or even better, dynamically from the peers/DHCP servers/routers around it.

    And v6 DNS supports this fully: you look up an A6 record and the answer consists of a host portion and a pointer to which records to look up to find the network portion. You look up the network record, concatenate the previously found host portion with that network portion, and you have your IPv6 address.

    (i.e. change just that one network record and you've updated the network number in DNS for all your hosts -- cool.)

    Anyway, sorry I can't be more specific about the IPv6 auto-config stuff, but it is in the specs. They did think about this stuff over the last, what, 8 years or so that they've been working towards IPv6.

    If people are interested in playing with IPv6, well, play with it at home! E.g. Linux with the USAGI patches (www.linux-ipv6.org) works perfectly. Then you can get a tunnel from the 6bone (6bone.net), and after that maybe even a /64 from your tunnel provider. That means public IP addresses for all your machines at home - bye bye NAT!
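    To make the prefix/host-ID split concrete, here is a minimal sketch using Python's ipaddress module with the example prefixes from this post (the /64 lengths and the host ID itself are made-up values, just for illustration):

        # Sketch: one stable 64-bit host ID combined with whichever provider
        # prefix is in use. Prefix lengths and the host ID are invented.
        import ipaddress

        host_id = 0x025056fffe000001        # stable 64-bit interface identifier
        prefixes = [
            ipaddress.IPv6Network("dead:beaf::/64"),   # chunk from ISP A
            ipaddress.IPv6Network("f00b:a43d::/64"),   # chunk from ISP B
        ]

        for net in prefixes:
            addr = ipaddress.IPv6Address(int(net.network_address) | host_id)
            print(addr)
        # dead:beaf::250:56ff:fe00:1
        # f00b:a43d::250:56ff:fe00:1

    Renumbering then really is just a matter of swapping the prefix; the host portion never changes.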

  • I can have 35 ways to get to Minneapolis, but if the city is destroyed, or the place I want to visit inside the city has closed down, they don't do much good.

    LetterJ
    Head Geek
  • It still is. If you go into a UUNET hub and unplug a GW, nothing happens (well, after routing converges again). Same if you take out a TR or XR -- I forget the difference. Other big ISPs are similar.

    And the older kind of redundancy still works too: take out a long-haul link and traffic flows along the ones that still exist (even if they form a very different path).

    What isn't redundant? Your link to the ISP probably isn't. The router you land on at the ISP's hub probably isn't. With enough money you can buy two links, better yet to two different ISPs. Most ISPs don't have more than two exit routers per hub, so if both go you are screwed. Some hubs only have two exits. I expect some ISPs aren't even that good, but you do get what you pay for; don't buy connectivity from a cut-rate provider and then complain that they aren't redundant. What else? Well, whoever you want to talk to might not have redundant connections. Sometimes a whole ISP can do something that screws them (load a Cisco or Juniper code release with a bad bug that didn't show up in their or your testing, or botch their L2 fabric, or...), but the other ISPs are still alive and kicking. They can all talk to each other while you are dead (unless they don't really have a backbone and just resell the dead ISP, and only the dead ISP -- but again, you get what you pay for).

    Still, that's not too bad.

    Did you expect it to be better?

  • It's really simple: redundancy is expensive. If you want redundancy, you have to pay for it. There are ISPs that have redundant POPs with redundant backup power, redundant telco access providing redundant backbone and local loop paths, redundant switches, redundant routers and redundant peering who will provide redundant tail circuits and redundant routers to their customers. But this all costs money. And of course, the beauty of the internet is that no matter how redundant your connection to it is, the connectivity of your destination may be totally unprotected and unreliable. Or their servers might be hosed. Or their database might be corrupted. Or whatever. For critical services, you need redundancy everywhere, and it can get obscenely expensive. Fortunately, people who rely heavily on such services are typically willing to pay through the nose for them.


    fnord.
  • by maggard ( 5579 ) <michael@michaelmaggard.com> on Thursday April 19, 2001 @01:55PM (#278520) Homepage Journal
    First of all, yes, it is possible to configure your own little part of the world to continue working in case of a meltdown.

    You'll need multiple connections that are all independent. This can be difficult to ensure, as lots of times Company A's fiber link will be in the same trench as Company B's, so the same backhoe will take them both out even though you used two services. You'll need to determine the full path your data will take, and much of the time the salesfolk won't have that information or even understand what you want, particularly if you're not a big commercial account.

    Then you'll need a way to route your inbound & outbound traffic dynamically. BGP is the method of choice, but it's *not* a friendly thing. For the small-time techie, Zebra & other tools are under development to help with this sort of thing, but it's still tricky, tricky stuff full of gotchas.

    The same redundancy advice goes for power - you'll need at least two separate services that are well & truly separate, not just the same line coming in the front door as well as the back door. Local generation for backup is also a good idea. You'll need to test everything regularly - systems often fail & a botched hand-off can ruin your whole day.

    That said, a buddy set his house up to be always-connected. UPSes on key hardware. BSD on dual laptops running BGP, connected to cable modem, ADSL, dial-up, digital cellphone & a ham packet-radio rig. He even has a wireless connection to a friend a few blocks away who is on a different part of the grid & central exchange, with a similar setup.

    Of course it's still possible for something to break in a big way. One EMP over the Arlington, Virginia area would take out lots of important services, probably causing major disruption in the confusion & resultant instability. Heck, a group with an axe to grind could presumably cut enough critical cables in isolated areas in an hour or two to cause significant traffic problems globally.

    This is of course no different from bringing down any number of other services: water, electricity, sewage, roads, gas pipelines - none are particularly hard to shut down if one is nuts enough to try.

  • Eventually, you will reach a single connection on the path that leads to the machine you are looking for. Many providers have redundant connections to the backbones, but, for example, there is only one connection from them to you. And actually, there are many providers who do not have redundant, topologically separate connections to the backbone.

    The internet was designed so that if any particular switching point went down, the others could take up the slack. The idea was nice 20 years ago when there were 50 NAPs. There are probably 50 NAPs within 10 miles of me right now. So we're not quite as redundant as intended, but we're still pretty redundant.

  • If your ISP is linked only to PSInet, you have more problems than a non-redundant connection - PSINet may only have 1 month of $$$ left.
  • I think part of this was due to the lax security assumptions of the original Internet design: the routers are all safely locked up, and the "bad guys" are not supposed to be able to log into the network.

    If the "bad guys" blow up one of your routers, the network can cope. If they can log in and start downloading pictures of Britney Spears and clog your network, there's not much you can do.
  • "...like a script-kiddie with some real ambition could bring the world to it's knees."

    Er, yah. Right. To its knees.

    Good god, we're not talking about a nuclear war.

    --
  • Actually the consumers took over. Specifically the non-techie residential consumers who made the Internet a true innovation and commercial success. They do not demand 100% reliability at this time and thus will not pay for it. Now, if the cable TV goes down for 5 minutes you will hear them scream. Until 100% (or rather 99.999%) reliability is really desired (i.e. they will call and yell when it goes down) you will not see too many companies wasting the money on redundancy.

    Right now if you said to the average residential DSL subscriber, "hey you are getting like 90% reliability, for an extra $15/mo I can get that up to 99% reliability" he probably wouldn't care.
    Stuart Eichert

  • by Cato ( 8296 )
    There's some interesting stuff around on faster recovery and convergence - see http://www.nanog.org/, recent presentations, and in particular http://www.packetdesign.com/Docs/isis.pdf which talks about millisecond-level convergence through better algorithms and faster updates on big links, etc.

    You can also use layer 2/2.5 type technologies, such as SONET Automatic Protection Switching (APS) or MPLS Fast Recovery, which can recover much faster from certain types of failures. However, this won't address the whole issue.

    ISPs that serve the business market are adding extra services such as IP VPNs, competing with Frame Relay and ATM, and are having to improve their availability figures - over time, this technology will filter down to the consumer market.

    The Internet is already much more reliable and much faster than it was in 1995 - hopefully this will continue...
  • by artdodge ( 9053 ) on Thursday April 19, 2001 @02:13PM (#278528) Homepage
    There's a pretty significant body of research into web usage, actually - file sizes and transfer length in particular have been pretty squarely beaten to death.

    For example, file/transfer sizes seem to follow what's called a "heavy-tailed" distribution (usually modelled as Pareto). This means, roughly, "most of the files are small; most of the bytes are in big files."

    The parameters of the distribution depend on where in the network you take the measurements (inside the client, mid-net proxy, server).

    There are some old studies of which low-level protocols appear most on the backbone (UDP vs TCP for picking out "streaming" candidates etc); they're harder to get now that the backbones are commercial instead of research-centric.

    As for how much is porn and how much is business, well... I've been involved with some studies that have casually looked at that, too. In one trace I checked out, about 13% of requests included some word indicating a site with strong sexual content (the 13% figure is without trying very hard; it's also worth noting that the percentage of bytes in the responses to those requests was larger, on the order of 20-something percent IIRC). Unfortunately, it's a little harder to differentiate "business" from "casual/home" with heuristics, so no numbers there.
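    If you want to see the heavy-tail effect for yourself, here's a tiny synthetic illustration (the shape parameter and sample size are arbitrary; this is not real trace data):

        # Synthetic illustration of a heavy-tailed file-size distribution.
        # alpha and the sample size are arbitrary; this is not real trace data.
        import random

        alpha = 1.2                      # shape parameter; smaller => heavier tail
        sizes = [random.paretovariate(alpha) for _ in range(100000)]

        sizes.sort(reverse=True)
        top_1_percent = sizes[: len(sizes) // 100]
        share = sum(top_1_percent) / sum(sizes)

        print(f"largest 1% of files hold {share:.0%} of all bytes")
        # The expected share is roughly 0.01**(1 - 1/alpha), i.e. about 46%
        # for alpha = 1.2, though individual runs vary a lot because the
        # distribution's variance is infinite.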

  • Well if that doesn't guarantee some down-moderation...I don't know what does. =P

    Anyway, let's all remember that the internet was built to serve places that look more like datacenters and colocation gateways than your living room or mine. That is to say, we individual network subscribers are an afterthought, not the primary design model. Redundancy is expensive, and $20-$40 a month doesn't quite cut it for that kind of expense.

    The other thing to bear in mind is that redundancy was never meant to ensure your connection to the network no matter what; remember, you don't exist any more, because you were vaporized at the wrong end of an ICBM's parabola. =P That sort of thing is guaranteed to lower your modem connect speeds, if you catch my meaning... The rest of the network, however, will do just fine without your participation, and that is the redundancy that IP was designed for. I must say, with all of the posts complaining about service interruptions, my network connection was responsive and useful through all of them. I expect it will be too... at least until some backhoe/ICBM moves in to complicate things.
  • by M@T ( 10268 )
    Telstra a better ISP??? Get real... Telstra are the primary reason we have such shit redundancy in Australia. Just about everything ends up being routed through a Telstra server somewhere, and as with all large monolithic ex-government bureaucracies, they do a terrible job of keeping them running.
  • When the net was originally created/designed, it was the child of ARPA. It was during the Cold War, and there was worry that the Russians, knowing we had an information network, could bomb it strategically, destroying the choke points and rendering us weaker.
    So it was designed so that any point could be physically destroyed and the whole would continue functioning. They did not, however, worry about an attack by less tangible means, like huge quantities of packets. So the redundancy that you say is gone, isn't. The net will still function after a military strike or natural disaster, but a well-done DDoS attack can cripple it, and that's fine by the Day One specs.


    ~Conor (The Odd One)

  • Any country that sends the majority of its traffic to the outside world through one connection is vulnerable. That would be most countries with state-run telcos.

    For example, the vast majority of traffic in and out of Poland goes through one link out of Teleglobe's NY POP. That's a country of 40 million people, at least 10% of whom use the Internet through the state telco (almost everyone uses the state telco for Internet). Lose a router and 4 million people are disconnected from the net.

    (by the way, if anyone wants to enlighten me of any recent changes in this situation, I'd be willing to listen, but still skeptical)
  • That's all right, "overrated" is the closest to an accurate negative moderation of one of my posts that I've seen in a while. Usually I end up being "flamebait" or "troll". At least I have posted something in the past which was overrated, so I can consider this to be karmic retribution.

    I hope for metamod too (and I like to think I've fixed some things in that phase) but I don't hold out too much hope.

  • by Sloppy ( 14984 )
    Redundancy is (obviously) inefficient. The DARPA days of a 'Net designed to take nuclear hits are long past. It's commercial now, which means saving a penny here and there makes sense. After all, would you pay twice as much for your ISP to have twice as many connections to the outside? Well, maybe you would if you're a nerd, but most people won't. So the ISP that has twice the connections at twice the cost is defeated in the marketplace by a more efficient (but less reliable) competitor.
    ---
  • by sharkey ( 16670 ) on Thursday April 19, 2001 @01:07PM (#278535)
    Removing the redundancies and only needing to shut down a few key routers to shut off the Internet is a feature. By doing this, they are able to cut the Internet Cleaning Time on New Year's by almost 75%, since the Internet can be shut down and brought back up with fewer network operators working on it, each having to do less. Therefore, this New Year's Eve you only have to make sure you're disconnected from 1:00 AM to 3:00 AM to keep your data from being erased when they clean the Internet.

    --
  • Lately, I've been realizing more and more how the concept of the Internet is going to hell.

    For example, here in Israel, the most-used link we have is an optical connection to the US. Nobody cares about connecting anywhere else, and even ISPs which have connections to Europe (e.g. Barak ITC [barak.net.il], which represents Global One in Israel) don't offer the European link to ordinary users. As for connections to our neighboring countries, there's very little to talk about, since they're both largely technically undeveloped and not on very friendly diplomatic terms with us, to put it mildly. So it ends up that we route via the US to reach Turkey or the Far East.

    In case of a war, which is sadly something more likely in our region, there would be just one point of failure.

    Of course, one of the leading ISPs, NetVision [netvision.net.il], seems to have relatively broadband satellite links which might be the solution.

  • by B.D.Mills ( 18626 ) on Thursday April 19, 2001 @03:24PM (#278537)
    In today's Internet, large bandwidth providers connect to backbones and purchase bandwidth. They then sell this bandwidth to smaller customers such as ISPs, who in turn sell to customers. Typically, ISPs and the like only have one bandwidth provider. How many ISPs do YOU use?

    A lot of these business transactions mean that the organisation of the Internet, far from being organised like a spiderweb, is organised more like a tree in many places. So if one node fails, everything downstream loses connectivity.
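    A tiny sketch of that tree-versus-mesh point (toy topologies made up purely for illustration): knock out one interior node and count who can still reach the "backbone" root.

        # Toy comparison: tree topology vs. a mesh with one extra cross-link.
        # Node 0 is the "backbone"; we fail node 1 and see who can still reach 0.
        from collections import deque

        def reachable(adj, start, dead):
            seen, queue = {start}, deque([start])
            while queue:
                node = queue.popleft()
                for nxt in adj.get(node, []):
                    if nxt != dead and nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return seen

        tree = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1]}          # 2 and 3 hang off 1
        mesh = {0: [1, 3], 1: [0, 2, 3], 2: [1], 3: [0, 1]}    # 3 also links to 0

        for name, adj in [("tree", tree), ("mesh", mesh)]:
            ok = reachable(adj, 0, dead=1)
            print(name, "-> still connected to backbone:", sorted(ok - {0, 1}))
        # tree -> still connected to backbone: []    (2 and 3 are stranded)
        # mesh -> still connected to backbone: [3]   (3 routes around the failure)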

    --
  • The Internet was never truly redundant. If you ever thought it was, you were mistaken.

    First off, we're lucky it works at all. The fact that I can get to Slashdot every day is surprising.

    On the political front: there is no regulation, there are no rules. Peering is a joke. You can only peer if you're one of the top 10 providers. Everyone else is buying from everyone else.

    No one has the power to say your packets will always get from point A to point B. If one ISP is mad at another, it can remove the route through its network.

    On the technical front: most of the time, your packets will take the same path every time. If that link goes down, it will normally reroute (eventually), but not in real time. And the path it reroutes to may be suboptimal (i.e. your packets take a 30-second round trip over an already overloaded link).

    Another problem is that everyone is sharing fiber runs. This saves dollars, but one backhoe can (and has) put a huge black hole in the internet.

    Anyway, that's my babble. I haven't looked into this stuff in a while, so my statements may be outdated.

    -Tripp

    PS: I didn't proofread this, so don't insult my bad English.
  • Yes, I remember when the main fiber-optic link to Australia got cut - most of the USA was unreachable unless you were signed up with an ISP that had its own private link to the USA, and even then it was dog-slow. So unless you were with either the incumbent telco (Telstra) or another BIG ISP (OzEmail), you were pretty much screwed.
  • Yes, bring in the children and other issues that are IN NO WAY RELATED TO THE DEBATE.
  • Urgent updates would just let more flapping links through (which is why the holddown timers are there).
  • Two other big points.

    One, the major backbones are maintained by a small number of companies, especially now as CLECs die like mayflies and regional ISPs and ILECs get gobbled up by nationals and multinationals. (In the ISP arena, from my experience, the bean counters are even willing to risk total pipe saturation rather than pay for the expansion they need to meet sales estimates -- never mind ensure backbone redundancy!) Basically, you have a small number of companies which, though individually expanding their pipes, are on the whole not expanding enough. Not only that, but the complexity (not just technical but administrative and accounting-wise) of multiple pipes from multiple vendors and peers is considered unnecessary when they can just get bigger/more pipes from the same upstream.

    Two, the consumer focus on the Internet isn't reliability -- it's speed. The popularity of DSL in the face of its gaping unreliability is a sure sign of this. To serve customers, ISPs/ILECs only need bigger pipes, not "better" ones. Customers will complain about a day or two's worth of downtime, but in the end the information or method of communication is rarely important enough for there to be a viable market for reliable connectivity over fast connectivity.

    Basically, if you want any of the old Internet traits -- reliability, noncommerciality, technical assurance -- you'd be better off making your own Net. (Honestly, I don't know why one hasn't sprung up already.)

    --

  • Considering what's happened (twice) to one of Canada's most wired cities, Ottawa, I'd definitely like some redundancy. First it was an animal [ottawacitizen.com] that supposedly bit into the only cable connecting all of us 300,000 Rogers@Home users (including businesses), then some thieves cut the wire [ottawacitizen.com] again, stopping all access to the 'Net, and only two weeks after that the line was cut yet again (I don't have a link and I can't remember by what)... so, basically, three times a single wire was cut, taking access away from over 300,000 people... another single OC cable would have solved all the problems... ugh.

    (or maybe Rogers@Home is just bad... hmmm)
  • I have to agree with all the people who say that much of the problem has to do with the routing protocols in common use on the Internet. IMO part of that problem is that everyone has gone to link-state protocols; protocols in this family have certain desirable properties wrt loop-freedom and optimality, but slow convergence is a known problem with this approach. Personally, I've always been a distance-vector guy.

    All of this came back to me recently as I was reading Ad Hoc Networking by Charles Perkins. It's about protocols intended for use in environments where mobile nodes come and go relatively frequently, where the links go up and down as nodes move relative to one another, and where there's no central authority to keep things organized. A lot of this work has been done in a military context - think of a few hundred tanks connected via radio, rolling across a large and bumpy battlefield. It turns out that distance-vector protocols are making a comeback in this environment because of their faster convergence and lower overhead compared to link-state protocols, and researchers have pretty much nailed the loop-formation and other issues. It also turns out that a lot of the techniques that have been developed for this very demanding environment could be useful in the normal statically-wired Internet, not just in terms of robustness but also in terms of giving power over connectivity back to the people instead of centralizing it in huge corporations.

    I strongly recommend that people read this book, to see what's happening on the real cutting edge of routing technology. In particular, anyone working or thinking of working on peer-to-peer systems absolutely must read this book, because it describes the state of the art in solving some connectivity/scalability problems that many P2P folks are just stumbling on for the first time. I've seen many of the "solutions" that are being proposed to these problems in the P2P space; I can only say that P2P will not succeed if such stunning and widespread wilful ignorance of a closely related field persists.
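    For anyone who hasn't looked at the distance-vector family in a while, here's a minimal Bellman-Ford-style sketch (the toy graph and costs are invented; real protocols like RIP or the ad hoc ones in the book add split horizon, sequence numbers and other loop-prevention machinery):

        # Minimal distance-vector sketch: distances are repeatedly relaxed from
        # neighbours' tables until nothing changes (convergence).
        links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5, ("C", "D"): 1}

        nodes = {n for pair in links for n in pair}
        cost = {}
        for (u, v), c in links.items():
            cost[(u, v)] = cost[(v, u)] = c

        # dist[x][y] = best known cost from x to y
        dist = {x: {y: (0 if x == y else float("inf")) for y in nodes} for x in nodes}

        changed = True
        while changed:                      # iterate until convergence
            changed = False
            for (u, v), c in cost.items():
                for dest in nodes:
                    if c + dist[v][dest] < dist[u][dest]:
                        dist[u][dest] = c + dist[v][dest]
                        changed = True

        print(dist["A"]["D"])   # 3: A-B-C-D beats the direct A-C plus C-D (5+1)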

  • There are a few hopeful signs on the horizon though. IPv6 should make routing a lot easier and give us a lot more operational "breathing room" which we can use for redundancy and robustness.


    After poring over things like this [isi.edu] and this [isi.edu], and keeping in mind the recommendations in other RFCs and discussions, I can't find anything that supports this. We certainly get breathing room in the sense of more address space, but how does this lead to anything but a requirement for more routing complexity to keep tabs on it all?


    --

  • "agenda" is a plural
  • Last month the whole province of Ontario (in Canada, for whoever doesn't know =) lost Rogers@Home cable twice, but not due to Internet attacks. It was more along the lines of pure stupidity, really.

    The first outage was the result of thieves trying to steal copper cabling. They accidentally cut the ONE fibre-optic cable that services our province (located between Toronto and Buffalo). Brilliant, no? Rogers does have redundant servers and connections in place, but chose at the time not to use them because they were so outdated the service would slow to a crawl and crash anyway. So much fun!

    The second problem was a server crash in California that brought us down again. Why Ontario's servers are located in California is anyone's guess. Very dumb, IMO, but who am I to tell Rogers what to do? (To be fair, they are currently relocating the servers, but far too late.)

    At least the service is decent for the most part, and Rogers has the cable monopoly here, so I can't do much about it but live it out ;-)

    (I did, however, have a *lot* of fun when they sent out a customer satisfaction survey a couple weeks ago!)
  • Wait, you're no expert on BGP, but you've already determined that much of the blame lies with BGP?

    At last month's IETF in Minneapolis there was a slide during the plenary (which doesn't seem to have made it to the web site yet) that showed the average speed of route convergence. It was on the order of 90% propagation of route changes within 1-2 minutes. That's pretty fuckin' fast.

    One has to consider the theoretical minimum one expects to see, given the depth of the internet and how fast the links and CPUs on the routers are. There are surely improvements that can be made (some not without major protocol changes), but we're pretty darn close, I think.

    The major improvements that BGP needs are not in propagation speed, IMHO, but in general issues of scalability (the size of the table as it relates to the memory and CPU available in a router).

  • What I love about this is that some brilliant moderator has managed to mark it "redundant." Folks, keep in mind, this is comment number 6! I don't know if it's an attempt at humor (I can almost see the thought-process now, "I'll mark the first comments in the redundancy story redundant! That'll show 'em!") or mere /. mod-point induced cluelessness (my money's on that one, of course) but since moderations of "redundant" can't be metamoderated, I felt the need to make this totally-off-topic comment. Seriously, folks, when you have mod points, try to browse at a deliberately-lower level and take some time actually reading/thinking, eh?

    (Go ahead, mod me down for spouting off like this, see if I give a rat's ass.)
    JMR

  • One good backhoe accident, and you suddenly have a bunch of intranets. In theory all of the Tier-One operators peer at multiple points, but in practice they route their own traffic through the same facilities. Likewise, in theory the Tier-Two operators are multiconnected through multiple Tier-One providers but in practice thanks to volume contract terms they are single-homed.
    Below Tier Two, it really doesn't matter.
  • Can you say NSA?

    Not without spraying my monitor :-)

    It's doubtful the NSA needed to ship all traffic to the US. They certainly have unfettered access inside telephone company switching points in every NATO country, and many other US-allied countries. When you work in those buildings, there are always some bits of unidentified kit doing something "important"; the bosses let you know not to touch them, or else your career will be very short.

    crooked politicians

    In the commission, that's redundant. Political lobbying by entrenched businesses is becoming positively American in depth and scope.

    In Europe, never chalk up to conspiracy that which can best be explained by misguided nationalism and greed.

    the AC
  • In the U.S., the only problems are money and a few anti-trust regulations. Interconnects can usually happen wherever someone wants to lay down some cable or fibre, and going from one state to another is no problem. Any start-up with ambition can buy an old telco building and create a NAP, and the customers tend to roll in and don't care about competitors also using the bandwidth.

    But in the rest of the world, there quite often are regulations preventing a company from just running a fibre from one place to another. It is starting to improve, but for the longest time almost 99% of all intra-European traffic passed through the US. Traceroutes from one ISP to another in the same country often went via the US.

    This meant that everyone was relying on a few trans-Atlantic carriers, and the reliability was pathetic. From here in Belgium, all communications to neighboring countries passed through the US. The people in charge of the routers, at the bean-counter, lawyer and politician level, would forbid the engineers to create inter-country routes, in case there was a law somewhere being broken. It doubled the traffic on the trans-Atlantic lines, and the engineers couldn't do much about it.

    Recently a number of peering points and interconnects have sprouted up all over Europe. Economics eventually overrules short-sighted politicians. It feels so good, as an engineer, to be able to route traffic as directly as possible. But there are still problems with NAPs run by telcos, as they have learned two decades of dirty tricks from US telcos, and they have polished up those tricks to hurt competitors. Shit happens.

    The greed factor has also raised its head, as some of the more criminally backed peering points *cough*telehouse*cough* have tried to purchase Europe-wide laws giving them 100% of the market. The argument is that the incumbent telcos are all too greedy, incompetent and biased to run peering points, and that all the peering points should be run by a single, greedy, politically aligned non-incumbent, non-telco operator. Whoops, maybe those last points were raised by all the other NAP operators.

    I feel the internet is coming to a breaking point, where it's being pushed to do what it was never originally designed to do. The original design was for reliable communication, not censorship, business operations, or avoiding national laws. The telephone companies of the world worked out many of these issues in back rooms, with no real public insight into the downside of each policy. The result was a communication system which never worked very efficiently and cost a huge amount more than it should have. Those costs and inefficiencies slowed the growth of telecoms the world over, until the US Justice Department broke up Ma Bell and, unforeseen to them, sparked a revolution in cheap telecoms which is now churning around the world. I remember when a short overseas call cost a week's wages; now I don't even think about chatting for an hour to the US.

    The internet has started to make people aware that unlimited communication has its downsides as well, since not all humans are perfect, good creatures. Because of this realisation, we are seeing a large backlash from the unwired masses who never had a need to communicate, and want others to stop communicating freely. The internet was designed to communicate, and there are no easy (or even complicated) engineering fixes to social problems placing limits on communication.

    the AC
  • One of the problems is that, no matter how much redundancy there is, if a significant amount of traffic falls on a backup route, that route can overload. This in turn results in traffic being failed over to a third route, and so on, and you have a cascade failure.

    Most ISPs' backbones are sufficiently saturated that this is hard to avoid. Add in misconfigured routers causing looping, and one link can take you out.

    As for the "last mile" issue, any half-serious internet service will have full redundancy on this, down to the cable and switch level.
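    A crude sketch of that cascade effect (the capacities and loads are invented; real traffic engineering is far messier): fail one path, dump its load onto the backup, and watch the overload propagate.

        # Crude cascade sketch: three parallel paths, each with a capacity.
        # When a path fails or overloads, its traffic shifts to the next one.
        # The numbers are invented purely to show the failure mode.
        paths = [
            {"name": "primary", "capacity": 10.0, "load": 8.0},
            {"name": "backup1", "capacity": 6.0,  "load": 4.0},
            {"name": "backup2", "capacity": 5.0,  "load": 3.0},
        ]

        def fail(index):
            shifted = paths[index]["load"]
            paths[index]["load"] = 0.0
            print(f"{paths[index]['name']} fails, shifting {shifted} units")
            for p in paths[index + 1:]:
                p["load"] += shifted
                if p["load"] <= p["capacity"]:
                    print(f"  {p['name']} absorbs it ({p['load']}/{p['capacity']})")
                    return
                shifted = p["load"] - p["capacity"]   # overflow cascades onward
                p["load"] = p["capacity"]
                print(f"  {p['name']} saturates, {shifted} units overflow")
            print(f"  nowhere left to go: {shifted} units of traffic are dropped")

        fail(0)   # primary's 8 units overload backup1 (4+8 > 6), then backup2 too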

  • Providers are in business. Nobody in a business wants to be redundant. Hence, the internet model is incompatible with business interests.

  • One of the things we want to do here where I work is to connect our network to multiple ISPs, so that if one of our ISPs goes down, our customers can still reach our servers.

    It seems like there is just not much solid information out there about exactly how to configure such a setup. We have wireless links, ADSL, and a 10 Mbps fibre-optic connection, each to a different ISP here, but actually using them in either a simultaneous or failover fashion seems difficult.

    Presumably, this would require us to publish routes (BGP?) to our IP address space to multiple ISPs, but obtaining our 'own' block of IP addresses that we are truly responsible for - i.e. not allocated by some specific ISP - seems horribly expensive, at least here in New Zealand.

    Does anyone have any links to good documentation on setting up multipath routing - preferably on a Linux/BSD-based router?

  • The Internet, to a large extent, is no longer decentralised. Internet hosts route to their providers, then to backbone providers, across those providers' backbone routers to the far ISP and/or destination host. This is done on what looks like a redundant map at the core level, but on the edges, near the clients, where the problems usually happen, there is no redundancy. Does your local cable provider have more than one connection to the backbone? Probably not. Do they add multiple redundant links to the same provider using multiple routers, or just one or a few big routers? You guess.

    Can you even set up your own redundant links anymore? Not really -- you need a /19 or thereabouts in address space to successfully advertise BGP routes to the Internet at large. We've screwed up the redundancy of the Internet through a lack of shared connections and of routing protocols fast enough to give every individual user multiple egress and ingress points.
  • (Score:2, Redundant)

    That's the most ironic comment rating that I've ever seen.
  • There could be full redundancy, but where do you draw the line?

    Many ISPs have multiple routers to connect to multiple backbones. Two or more connections to a land backbone and one satellite connection should be pretty redundant! The server you are trying to connect to could actually be more than one physical machine, each possibly with more than one network card, maybe plugged into separate hubs or switches for redundancy. Perhaps those switches and hubs even have redundant power supplies (along with the servers and routers) and load-sharing/redundant backplanes. There could be redundancy even among the routers, with one working and another checking the health of the "live" router periodically and then assuming its identity if it dies. And all of this powered by redundant UPSes.

    The redundancy is there, and when it works, you don't know about it. Only when a hosting company or other site is so badly designed that some server cannot be reached does someone ask, "Whatever Happened to Internet Redundancy?"

  • Redundancies have been made redundant.

    Well, uh...that IS sort of their purpose in the first place.
  • Redundant != bad

    Even in writing/language, where it is often criticized and carries a negative connotation, it can be effective. In speaking to a large group, it helps to reiterate one's point a few times. While this is redundant, it helps to emphasize the major purpose of one's argument, and saying one thing a few ways makes it more likely that it has been presented in a fashion that someone will understand.

    In most other (not language) senses, redundancy is always a good thing: RAID, redundant networks as mentioned in this article. Redundancy means security and protection against failures of one thing in a chain. Space Shuttles and other risky ventures have redundant mechanisms so that the failure of one does not immediately constitute a mission- or life-threatening emergency.
  • A team of meteorologists in the UK made the discovery. An article about it is here [heartland.org]
  • Actually, I do give away some of my money. What I don't like is being forced to do so and I don't particularly want the violence of the state to be harnessed to force others to give their money away.

    Nobody really argues against people giving their own money away. The question is whether we ought to be forced to do so at the point of a gun. What a shabby method of charity, forcing it by government action.

    As for Kyoto, it's a sham and a shame.
  • Several times over the last few months, I've been bitten by bad router configs that led to loops inside my provider's network (Telocity). I'm looking for a new provider. I'm glad you have a better one.

    Redundancy measurement would be a great dotcom business idea... wait, we're past that, aren't we?

    DB
  • So for this story, would I get a +1 for Redundant?

  • "Redundancy" has two substantially different problems: How do you initiate connections to outside internet sites, and how do they initiate connections to you. It's pretty easy to handle the outbound problem - most users have some kind of proxy firewall that handles their web and email traffic, and depending on what routing protocols your ISPs use, it's easy enough to find one route that works, especially if the main failure mode you're worried about is the access between you and your ISP's router. You don't need BGP for that, though it can be fun, you just need to know what locations you can reach by what paths, and nobody's bothered by the fact that sometimes your address space is from ISP1 and sometimes from ISP2.


    The harder part is giving other people multiple paths to reach you. One way is to get yourself a routable address block (your local policies will indicate whether this is /19 or longer) and use BGP to advertise yourself to multiple ISPs, who forward those advertisements to the world. You need to be tolerably large to do this. Another way is to use a fancy DNS setup that advertises different routes to you (www.you.co.nz gets advertised as a.a.a.x or b.b.b.x, using load balancing that also detects failures). This isn't perfect, because DNS caches will prevent some outsiders from getting your current address quickly, but it's a good start. Another is to have a server in a hosting center that has multiple highly reliable internet connections; not only can you put your web servers there, where the response time and price of bandwidth are better than hosting them in your home office space, without risking backhoe fade, but you can use that server to forward email and other services to your real IP addresses, whichever ones are working best this minute.


    I can't speak for New Zealand - between physical isolation and occasional entertaining telecom and business regulation laws, there's lots of specialty detail involved. In particular, there may be fewer providers who can get you real paths off the islands, and you have to care a lot more about their service quality, but you still have a lot of flexibility for accessing local sites.
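    A toy sketch of the "fancy DNS" approach: hand out only the addresses that currently answer a health check. The addresses below are placeholders, and as noted above a real deployment has to fight DNS caching and TTLs.

        # Toy health-checked DNS answer selection. Addresses are placeholders
        # (documentation prefixes); a real setup would sit inside an
        # authoritative DNS server and use short TTLs.
        import socket

        ADDRESSES = ["192.0.2.10", "198.51.100.10"]   # one per upstream link
        PORT = 80

        def is_alive(addr, timeout=1.0):
            try:
                with socket.create_connection((addr, PORT), timeout=timeout):
                    return True
            except OSError:
                return False

        def answers():
            """Addresses we are willing to advertise right now."""
            alive = [a for a in ADDRESSES if is_alive(a)]
            return alive or ADDRESSES     # if everything looks dead, return all

        print(answers())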

  • You actually don't need your own block of IP addresses, as long as you've got a good set of providers. You'll need to work with them to get the eBGP peering set up, and your filtering straight... you surely don't want ISP A using your paid-for bandwidth to pass traffic to ISP B, right?

    If your providers can't help you set up your BGP peering, then you probably need to find a different set of providers.

    What you will need to do it correctly, though, is your own Autonomous System Number, commonly known as an ASN. This is the number that actually identifies your organization to the world, and that BGP uses to define "paths".
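    To make the "paths" part concrete, here is a toy version of BGP's shortest-AS-path comparison (the AS numbers and prefix are invented, and real BGP checks local-pref, MED and several other attributes before and after path length):

        # Toy BGP-style route choice: prefer the advertisement with the
        # shortest AS path. ASNs and the prefix are invented examples.
        advertisements = {
            # prefix: AS paths as heard from each peer
            "203.0.113.0/24": [
                (65010, [65010, 64900, 64700]),   # via ISP A: 3 AS hops
                (65020, [65020, 64700]),          # via ISP B: 2 AS hops
            ],
        }

        for prefix, paths in advertisements.items():
            peer, best_path = min(paths, key=lambda p: len(p[1]))
            print(f"{prefix}: best via AS{peer}, path {best_path}")
        # 203.0.113.0/24: best via AS65020, path [65020, 64700]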
  • You can only be redundant to a point (Score:3, Redundant)

    Thank you, moderator, you just made my day!

    (Sorry, T-Bone)

    --

  • The notion that the internet is fully redundant only applies (as it did in its very early stages) if every host is also capable of routing traffic and every network has more than one connection. Neither of these is true anymore (Windows PCs and single leased lines are extremely common).

    This is my understanding, at least.
  • I'd say that number has to be at least 20%, with 20% non-porn media in addition.
  • considering /. has the uptime of an 80-year-old's penis today.
  • At least not as long as I've been on the net. Let's see... I started adminning a UUCP feed back around '89/'90... 'Course back then it was all store-and-forward. If something didn't work right now, you just tried again later...

    TCP/IP networks have never been particularly good at coping with a dropped link, though. Even if you KNOW there are more ways to get to where you want to go, you'll never see the packets take them. I'd love to see more dynamic routing on the net. It'd be nice to be able to keep my traffic off Sprintnet and other backbone providers who got their routers in cereal boxes, for instance...

  • Al Gore said in a CNN interview in March of 1999, "During my service in the United States Congress, I took the initiative in creating the Internet."

    You can read more about it here: http://www.wired.com/news/politics/0,1283,18390,00.html [wired.com]

    Read a newspaper.

    ----
  • ...about where that critical point is. Eventually, sure - most of us only have a single connection. The problem is that sometimes our ISPs only have a single connection too, and sometimes those ISPs have hundreds of thousands of clients.

    Last month, Rogers@Home, the internet-via-cable provider in Ontario, lost connectivity for a day and a half. Not just locally, but for every single client in the province, because of a cut cable in Boston. A cable cut in a different country, for crying out loud.

    The problem isn't about individual connections. It's about states, provinces and possibly entire countries dropping off the net for days or weeks while sabotaged hardware is repaired or replaced.

    We get really upset when countries insist on being able to do this deliberately - how much more upset should we get if countries aren't preventing it from happening inadvertently?

    --

  • by cperciva ( 102828 ) on Thursday April 19, 2001 @12:35PM (#278577) Homepage
    I think much of the blame lies with the routing protocols currently in use on the internet. Due to concern over maximizing performance -- and minimizing overhead -- most routing systems are set to react quite slowly to changing conditions. This helps eliminate route flapping, but has the unfortunate consequence of taking several minutes to route around a downed router or link.

    We can hope that someday we'll have better protocols to deal with this -- don't ask me, I'm no expert on this stuff -- but until the gurus come up with one I guess we just have to suffer.
  • by cperciva ( 102828 ) on Thursday April 19, 2001 @01:23PM (#278578) Homepage
    At last month's IETF in Minneapolis there was a slide during the plenary (which doesn't seem to have made it to the web site yet) that showed the average speed of route convergence. It was on the order of 90% propagation of route changes within 1-2 minutes. That's pretty fuckin' fast.

    Two points to respond to here. First, 90% of route-change propagation occurring within 1-2 minutes doesn't necessarily help much if the remaining 10% take two hours. Yes, I know they don't, but in any case an average statistic would be more useful than a 90th-percentile statistic.

    Second, 1-2 minutes is fast when it comes to switching between working routes. Internet routing works pretty well when it comes to the problem of determining *which route is faster*. However, when it comes to routing around faults, 1-2 minutes is a pretty long time: with ISPs advertising "99.9999% uptime" (i.e., down for at most a few seconds each month), downtime of 1-2 minutes is a Bad Thing.

    What I'd like to see is some mechanism by which updates could be marked as "urgent" if they relate to fault recovery -- that way, the few updates which are necessary for packets to be routed away from downed links could be propagated within a few seconds, while routine "link x is faster/slower than link y" updates could be handled more slowly.
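    A rough sketch of the trade-off here, in the spirit of route-flap dampening (the constants are arbitrary, loosely modelled on the usual penalty/half-life scheme): an "urgent" withdrawal that bypassed this kind of logic would give fast fail-over, but would also let a flapping link whipsaw everyone downstream, which is exactly what dampening is there to prevent.

        # Rough route-flap-dampening sketch: each flap adds a penalty, the
        # penalty decays over time, and routes above the suppress threshold
        # are ignored. All constants are arbitrary illustration values.
        import math

        PENALTY_PER_FLAP = 1000
        SUPPRESS_LIMIT = 2000
        HALF_LIFE = 15 * 60              # seconds

        class RouteState:
            def __init__(self):
                self.penalty = 0.0
                self.last_update = 0.0

            def decay(self, now):
                elapsed = now - self.last_update
                self.penalty *= math.pow(0.5, elapsed / HALF_LIFE)
                self.last_update = now

            def flap(self, now):
                self.decay(now)
                self.penalty += PENALTY_PER_FLAP

            def suppressed(self, now):
                self.decay(now)
                return self.penalty >= SUPPRESS_LIMIT

        route = RouteState()
        for t in (0, 30, 60):            # three flaps in a minute
            route.flap(t)
        print(route.suppressed(61))              # True: the route is held down
        print(route.suppressed(61 + 3 * 3600))   # False: the penalty has decayed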
  • If anything had happened to it, the east and west coasts would have been unable to communicate, even though there were several logical paths between mae-east and mae-west.

    Of course you can't talk to Mae West, she's been dead for more than 20 years!

    --

  • If you're wondering why redundancy is so lacking nowadays, it might be because, well, imagine this.

    The Island of Tonga decides to place any and all circumvention software banned under the US DMCA in its government's archives. Then they put it on the web. Now you have "illegal" software hosted on the site of a government no one else can legally touch.

    Of course, the US Navy could just pound them from offshore, but what US President would want to face the public outrage over little ol' Tonga??

    No, there's a BETTER way to handle this. Pay off an internet backbone to shut off their West Coast link to Tonga. Boom. Problem solved.

    Or is it?
    Redundancy means you can get to Tonga ANOTHER way, maybe by routing through Canada, or via Mae-East to Europe and through Europe to Asia and Asia to Tonga. Now you have the problem of telling everyone out there to cut off Tonga.

    Redundancy is, again, the enemy of dictatorships. They have the greatest motivation of all, in keeping internet redundancy as weak as possible.

    On a side note, don't be surprised if the backbones leading out of the US decide to install caching proxies (what's the official term for these, anyway?) that, like Junkbusters, edit out content from "banned" sites at the backbone level.
    The other thing they can do to defeat redundancy at its foundation is wipe it off the internet registry or DNS, so that you get "no such domain" errors for freedom.to, or something.

    Of course then you can just route to an ANONYMOUS PROXY in Europe or Asia and it'll bypass both problems :-)


    ========================
    63,000 bugs in the code, 63,000 bugs,
    ya get 1 whacked with a service pack,
  • It was also built to move small chunks of text. As soon as we started wanting it to stream video, audio, and 1337 Quake games, we needed high-speed backbones. And that introduces SPOFs.
  • Sorry to interrupt you there, but I can't really draw the line between "resolving hostnames", BIND security problems, and network integrity. I hope you realise that a nameserver and the root nameservers don't have anything to do with the lower levels (like IP and ARP). Well, of course they do to some degree - without them a nameserver will usually not run - but they operate at a higher level (UDP and TCP, to be precise).
    Attacking a nameserver only moves the problem elsewhere. Other nameservers have caching abilities, and there are 13 root nameservers on the internet to serve us with the top-level domains.
    You might want to read some RFCs at http://www.faqs.org [faqs.org].
  • *slams himself on his forehead*
    Yes, you're absolutely correct, I should read some more RFCs also *grin*
  • A recent contribution on this very topic appeared in Physical Review Letters [aps.org] on April 16 2001.

    Breakdown of the Internet under Intentional Attack

    Reuven Cohen, Keren Erez, Daniel ben-Avraham, and Shlomo Havlin

    Volume 86, Issue 16 pp. 3682-3685

    Worth checking out. Pretty readable.
  • Redundancy is great for a routed network.

    But if your entry point to that network is down, you're SOL, regardless of how redundant the network itself is.

    I've frequently found that my local PacBell router is down (or the DSLAM at the CO for my DSL line), and that effectively cuts me off from the net entirely.

    Also, not every network has peering agreements with all other networks; this is business, not pure technology. Even if a packet theoretically *could* traverse a router, in many cases it won't, due to BGP policy and such.

    --

  • by vergil ( 153818 ) <vergilb@@@gmail...com> on Thursday April 19, 2001 @12:35PM (#278607) Journal
    The current edition of the European "Netzkultur" magazine Telepolis features an article [heise.de] discussing the vulnerability of the Dutch Internet infrastructure to a single, well-placed attack, according to a recent report issued by the Dutch Ministry of Traffic and Waterways:

    One well-placed bomb could wreck the entire Dutch Internet, the report states. The physical protection of (fiber optic) cables at critical network and ISP junctions is almost nonexistent, TNO claims. It is very easy to find out where exactly the cables are located, and they can easily be approached. 'For now the chances of a deliberate disruption of the cable network by activists or terrorists are low. But as the importance of the Internet is growing, we fear that criminals, activists or terrorists will see the cable infrastructure or other critical infrastructure as targets in the near future.'

    Sincerely,
    Vergil
    Vergil Bushnell

  • by andyh1978 ( 173377 ) on Thursday April 19, 2001 @12:43PM (#278623) Homepage
    There was the paper (abstract here [aip.org], paper here (PDF) [arxiv.org]) mentioned in the Slashdot article here [slashdot.org] about the resilience of the 'net; crash 99% of the nodes at random and it'll still run. Which isn't bad.

    Problem is of course when you crash the <1% of nodes that actually do the major routing.

    Routing's getting hairier and hairier; it should really get fun once IPv6 kicks off and everyone and their dog have a squillion IP addresses each.
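    On that crash-the-right-1% point, you can play with the random-failure vs. targeted-attack contrast in a few lines. This is purely illustrative: it assumes the third-party networkx library, and a Barabasi-Albert graph is only a crude stand-in for real internet topology (the paper's exact model differs).

        # Random failures vs. targeted attack on a scale-free graph.
        # Illustrative only; assumes the third-party networkx library.
        import random
        import networkx as nx

        def largest_component_fraction(g):
            if g.number_of_nodes() == 0:
                return 0.0
            return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

        def knock_out(g, fraction, targeted):
            g = g.copy()
            n = int(fraction * g.number_of_nodes())
            if targeted:   # remove the best-connected nodes first
                victims = [node for node, deg in sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:n]]
            else:          # remove nodes at random
                victims = random.sample(list(g.nodes), n)
            g.remove_nodes_from(victims)
            return g

        g = nx.barabasi_albert_graph(10000, 2)
        frac_random = largest_component_fraction(knock_out(g, 0.50, targeted=False))
        frac_attack = largest_component_fraction(knock_out(g, 0.05, targeted=True))
        print(frac_random, frac_attack)   # the attack typically hurts far more per node removed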
  • I knew a kid a while back, attending a well-known college, who effectively cut off internet access for most of Bulgaria by setting a computer lab full of Sun workstations to continuously ping several prominent ISPs' servers. I was 12 at the time, so I don't know the details (besides the fact that he was expelled), and I imagine it would be harder today, but it just shows how smaller countries with less-developed infrastructures are extremely easy targets for that kind of thing.
  • I'm not sure, but aren't the redundancy features of the Internet and web sites totally unrelated? Redundancy helps if a node in between the departure and destination nodes goes down. But a web site is often a destination node.

    It's not like my email goes through Yahoo.com as a node on its way to being delivered. Yahoo is an endpoint, not a pathway.

  • I have always had a problem with the redundancy in the internet. Shouldn't every node on the whole damn 'net be mod'ded down: (Score:-1, Redundant)?!
  • by ZanshinWedge ( 193324 ) on Thursday April 19, 2001 @12:50PM (#278632)
    Yup. If I had to sum up the flakiness of the internet in one word, it would be "routing". When every link works as it is supposed to, internet routing is already strained nearly to the breaking point. Screw up a link here or there, or update a routing table or some software and drop a router or two, and poof, major internet cluster fuck. Theoretically your packets are supposed to be routed differently if they can't reach a destination, but in practice that rarely occurs. Most of the time you get the same route sending your packets into the same big brown smelly hole like lemmings. Enjoy!

    The other major contributor to internet flakiness is the organization of the major links and interlinks. There are few uber-high-bandwidth pipes, and they are rarely organized to provide superior routing and redundancy.

    There are a few hopeful signs on the horizon though. IPv6 should make routing a lot easier and give us a lot more operational "breathing room" which we can use for redundancy and robustness. There will also be a lot more high speed fiber optic links from hither and thither, which should help out quite a bit (especially to fix the "backhoe" vulnerability).

  • Simple: use NAT for the humans, give all your servers IPs from each of your access providers, and use DNS to direct the traffic wherever you want it to go. Keep the TTL on the zone low, and you won't be out for more than a couple of minutes.
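    For illustration, the client side of this trick can be as dumb as walking every address the low-TTL zone hands back until one connects (a minimal sketch; "www.example.com" is just a placeholder, not a real setup):

        # Minimal sketch: try every address DNS returns for the name until one
        # connects. The hostname below is a placeholder.
        import socket

        def connect_any(host, port, timeout=5.0):
            last_err = OSError("no addresses returned")
            for family, socktype, proto, _, addr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
                try:
                    s = socket.socket(family, socktype, proto)
                    s.settimeout(timeout)
                    s.connect(addr)      # first reachable provider address wins
                    return s
                except OSError as err:
                    last_err = err       # that provider's link is down; try the next
            raise last_err

        # sock = connect_any("www.example.com", 80)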

    -Nathan


    Care about freedom?
  • I've noticed this problem for some time. I'm starting an organization/open source project that will collect miles of string and old soup cans/empty coffee cans to help alleviate the ever-increasing problem of downed routers. More information will follow.

  • by Scott Hazen Mueller ( 223805 ) <scott@zorch.sf-bay.org> on Thursday April 19, 2001 @02:36PM (#278645) Homepage
    A couple of posters have hit on one of the key points - redundancy has gotten quite hard for a small site to set up. Even back in 1996, it was next to impossible to get routable address space for a small company (e.g. a web commerce/content provider). The smallest allocation has been a /19 for a long time, and if you've got 10 web server systems it's pretty hard to justify that many addresses.

    From the routing standpoint, the alternative is to advertise subnet blocks out a redundant connection. That is, you sign up for provider A and get a /24 block from them (for example). You then sign up for a backup connection from provider B and get them to announce the /24 block from provider A's space for you. This works, but it's considered unfriendly because it undoes route aggregation. Unfortunately, ARIN doesn't really provide any better solution for small sites.
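    For anyone wondering why that more-specific /24 ends up in everyone's tables, here's the longest-prefix-match logic in a nutshell (a toy sketch; the prefixes are documentation placeholders, not anyone's real allocation):

        # Longest-prefix match: the more-specific /24 announced via provider B
        # beats provider A's covering aggregate, so routers have to carry it
        # separately -- which is exactly what "undoes aggregation" means.
        import ipaddress

        routes = {
            ipaddress.ip_network("198.51.100.0/22"): "provider A's aggregate",
            ipaddress.ip_network("198.51.100.0/24"): "the /24 announced via provider B",
        }

        def best_route(dest_ip):
            dest = ipaddress.ip_address(dest_ip)
            matches = [net for net in routes if dest in net]
            return routes[max(matches, key=lambda net: net.prefixlen)]   # longest prefix wins

        print(best_route("198.51.100.7"))   # the /24 announced via provider B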

    At the next level, even if you get redundancy of ISPs, you may very well not have redundancy in your telco facilities. Fiber providers swap the actual fibers back and forth - I'll trade you a pair on my NY-Chicago route in exchange for one on your Chicago-Dallas - so even if you get your Provider A connection from Worlddomination and your Provider B connection from AT&CableTV, there's a measurable chance they're in the same bundle. Even if they aren't in the same bundle, they may well run through the same trench.

    Thirdly, you don't know what providers A and B are doing for redundancy. Are they ordering all of their backbone circuits from diverse providers, and are they ensuring diverse physical routing of the fibers? On top of that, I recall reading on one occasion that telcos sometimes move circuits around, so you can order redundant circuits, have them installed correctly, and then have them moved on you later...

    There's also been a lot of stuff flying around here about NAPs & MAEs. The MAEs and NAPs were quite important a few years ago, but since then the major providers have switched mostly to private peering arrangements, where their interconnect traffic doesn't go over the public peering points. Smaller providers still peer at those points, and some of them probably even peer with some of the big guys, but the major traffic goes via private DS3/OCx connections running off-NAP.

    Lastly, vis-a-vis the redundancy of major backbone networks: it's been ages since I looked at them, but Boardwatch used to have maps of the various Tier 1/Tier 2 NSPs. Even back in 1997/1998, UUNET's US network looked like someone took a map of the US and scribbled all over it. They have a huge bloody lot of connections, and you can bet they've got multiple redundancy out of virtually any city. (Disclaimer: never employed by UUNET or any related firm...) Yeah, I can see that some of the smallest national backbones (are there any left?) might only have one link into some cities, but even those guys set up fallback routing so that their traffic can get in and out.

    Generally speaking, if your favorite site is not reachable, it's most likely something at the site's end of things. Second most likely is that it's at your end, if you're not using a major connectivity provider, or if you're using a DSL provider with known problems...

  • What I love about this is that some brilliant moderator has managed to mark it "redundant." Folks, keep in mind, this is comment number 6!

    Well, it's also comment #7, so redundant seems reasonable...

  • by srichman ( 231122 ) on Thursday April 19, 2001 @03:26PM (#278647)
    Eventually, you will reach a single connection on the path that leads to the machine you are looking for. Many providers have redundant connections to the backbones, but, for example, there is only one connection from them to you.

    Where I work we use two providers. Redundancy in a company's ISPs/backbone connectivity is reasonable and, depending on your needs, essential.

    If you're sitting at home with only one ISP (which is to be expected), then you should just recognize and accept that a single point of failure is a fact of life on the consumer end of the commodity Internet. When I'm sitting at home, my power supply and hard drive and network card are all single points of failure as far as my network access is concerned, but I can live with that.


  • What I see happening is a mixture of crappily assessed networks created by pundits who have zero skills configuring their networks [antioffline.com].

    When companies go out of business, so do their networks, which means that if you're on a node with that connection, somewhere along the line you're bound to have a broken link.

    Sure, there are DoS attacks [antioffline.com], and there are also fixes for them [antioffline.com], so DoS attacks should be third or fourth on the list of suspects when host names fail to resolve.

    Security problems associated with BIND could also be to blame when hostnames won't resolve, in which case you can always try different servers for your nslookups.
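    Something along these lines, for instance (a rough sketch that assumes the third-party dnspython package; the server IPs are placeholders, not a recommendation):

        # Try the same lookup against several resolvers in turn.
        # Assumes dnspython is installed; the server IPs below are placeholders.
        import dns.resolver

        def lookup(name, servers):
            for server in servers:
                resolver = dns.resolver.Resolver(configure=False)
                resolver.nameservers = [server]
                try:
                    return [rr.to_text() for rr in resolver.resolve(name, "A")]
                except Exception:
                    continue    # that server is down or misbehaving; try the next one
            return []

        # print(lookup("slashdot.org", ["192.0.2.1", "192.0.2.2"]))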

    Personally I don't think people envisioned what the Internet would be in a few years when they made those statements.

  • >Just because some punk says something doesn't mean it's true. Especially when said punk has a material interest in people believing it.

    Well, I would not call Dr. Mudge a punk. He is a respected security expert, and at the time L0pht was only known to internet security people and those who hacked systems. He was not as well known then as he is today.

    >>And the Senate is part of Congress.
    OK. Are its hearings subject to the same rules and regulations? I thought they were different.

    >>an attack like you describe would require an awful lot of coordination.

    Yes and no. What I mean by yes is that you are correct that it requires a very detailed timeline. The no part is that you or I could hack systems (via viruses and other tricks) and set up the timeline, or even better, upload the timeline at the last possible moment.

    Taking out a router would require not only huge amounts of bandwidth but also proper use of it. I would definitely use dying packets (packets that have to report back to the sender that they died in transit and require a new packet to be resubmitted); this way I can clog up bandwidth at the same time.

    Anyway, after taking out Newark and White Plains, the rest would be a joke.

    ONEPOINT



    spambait e-mail
    my web site artistcorner.tv hip-hop news
    please help me make it better
  • by onepoint ( 301486 ) on Thursday April 19, 2001 @01:34PM (#278666) Homepage Journal
    Well, Dr. Mudge (the L0pht security guy) mentioned in a Senate (could have been Congress) hearing that his group of guys could take down the entire net in less than 30 minutes. Granted, I think this was back in 1998.

    Now move to the present.

    A well-designed attack on the major routers (and it's not that hard to find them) could reduce traffic to a crawl.

    Hell, all they have to do is hit the following, done in this order:
    Hit the MCI routers for their newly installed OC-192s and the backup OC-48s; take both out in Newark, NJ, and the backup in Weehawken, NJ; then kill the Sprint loop in Weehawken. Kill the OC-3s and OC-12s in Newark and Weehawken.

    Yes, there is a lot of traffic that passes via Newark and Weehawken; the other spots are White Plains and the Bronx. Take out White Plains and that should take out 10% to 30% of the inbound British traffic.

    Hell, while we're at it, let's take out the Aussies: hit them at the Singapore router (that will slow things down a bit), then hit them at the Philippines and kill them off at Sri Lanka.

    But wait, what about the Latin Americans? Easy also: start at Miami, then work over to the Bahamas, then a kill shot at Sao Paulo, Brazil.

    What did you say? I did not mention the Asians? Oh my... so sorry, but I would like to keep my goods at their current cheap prices, so I'll leave them alone.

    All you need is a large number of computers doing these attacks at the same time.


    spambait e-mail
    my web site artistcorner.tv hip-hop news
    please help me make it better
  • because:

    * peering arrangements create static routes
    * problems on dynamic routes are difficult to debug

    Combine these two factors and you can see the problem.
  • I took a course on routing and flow control in grad school. I get the impression that the features people interpret as redundancy are actually examples of distributed processing. For example, no central location keeps the entire routing tree; local nodes don't need to know the global topology; and nodes must find a way to route and keep queues from overflowing without supervision or instruction. That is, each IP gateway and router is expected to be co-operatively autonomous.

    I also got the impression that although the potential for redundancy is included by distributing the authority, there really isn't all that much actual redundancy. For example, there are very few backbones that connect major routers across the country.
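    As a toy illustration of that autonomy (a bare-bones distance-vector update in the spirit of Bellman-Ford/RIP, not any production implementation): each node keeps only the costs learned from its neighbours, never the whole map.

        # Toy distance-vector step: merge a neighbour's advertised costs into our
        # own table. No node ever needs the full topology. Purely illustrative.
        INF = float("inf")

        def merge(my_table, neighbour, advertised, link_cost):
            """my_table maps dest -> (cost, next_hop); advertised maps dest -> cost."""
            changed = False
            for dest, cost in advertised.items():
                via = link_cost + cost
                best_cost, _ = my_table.get(dest, (INF, None))
                if via < best_cost:
                    my_table[dest] = (via, neighbour)   # cheaper to go via this neighbour
                    changed = True
            return changed

        # A only knows itself, plus whatever B tells it.
        a_table = {"A": (0, None)}
        merge(a_table, "B", {"B": 0, "C": 2}, link_cost=1)
        print(a_table)   # {'A': (0, None), 'B': (1, 'B'), 'C': (3, 'B')}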

  • by sllort ( 442574 )
    Hi, I don't know anything about IP, store-and-forward routing, TTL, Dijkstra's algorithm, or the differences between switched and packet-forwarded networks. I don't even know that a majority of packet-forwarded traffic flows over antiquated voice networks configured in fiber-optic rings with 1:1 50ms protection switching, but no packet forwarding protection.

    Hell, I don't even pay attention to the unbridled explosion in consumed bandwidth on the Internet, or the protocols like BGP4 that ISPs use to delineate their peering relationships and shut down unwanted traffic, decreasing network redundancy by entire orders of magnitude.

    But, um, slashdot, I was wondering...

    why can't i get to my porn?

    thanks.
  • by gBonics ( 444932 ) on Thursday April 19, 2001 @12:38PM (#278690) Homepage
    This is a misunderstatement. Al Gore, the inventor of the internet didn't coverationalize the impending ramificacations when he invented the router for Internet world wide web traffic which could systemautomatically handle the dispersement of traffic fluctuating in outer space.

    Aren't you glad you have a Resident who cares?

    Resident George W. Bush
  • rumours that an internet outage a few weeks ago that affected the @home network was the result of vandals

    This was not the work of vandals, it was the work of thieves.

    Unfortunately I have no evidence

    A report at the time of the incident can be found here [cbc.ca].

    However the information in the article is not entirely accurate.

    So far as I know the cops haven't caught the thieves yet, but their ilk has been seen before and their MO is no mystery.

    This is what shakes:

    • utilities lay wire/fibre/cable in the rail beds - usually a couple of feet under and in conduits along the railway lines
    • for servicing purposes, every few klicks they let the conduit/wire come closer to the surface - sometimes it is lying exposed
    • along come the thieves
    • they find 2 exposed spots
    • cut the wire at both ends
    • tie one end to the back of their 4x4
    • haul off a large chunk of pipe.
    • Good thing most criminals are dumb

      Unfortunately for the thieves in the story above, this proved all too true. When they made the first cut they found they were dealing with fibre, which, in the eyes of thieves, is useless, and they left the scene.

      Why would someone want to vandalize an internet line?

      (It would be redundant to say here that these are not vandals but are in fact thieves.) What the thieves were after is good old copper wire. Copper wire theft [google.com] is a problem worldwide. In this case the thieves were after 1/4-inch copper cable, which they can sell for about 75 cents a pound at the junkyard. In other parts of the world thieves go after the thin, colourful wires used in telephony, because they are valued as material for weaving.

      - Vandals are annoying; thieves change the way we live
  • by circletimessquare ( 444983 ) <circletimessquare@@@gmail...com> on Thursday April 19, 2001 @08:51PM (#278698) Homepage Journal
    Hi, I don't know anything about communicating my vast networking know-how to the average slashdot visitor, how to come down out of my ivory tower, being friendly, or the differences between a good honest question for good honest debate and a question I can inflate my ego over by making snide sarcastic fun of. I don't even know that a majority of slashdot visitors don't know as much as I do about packet-forwarding protection.

    Hell, I don't even pay attention to the looks I get when my voice rises in frustration because no one else understands what I'm talking about when I'm in "the zone," or the simple human convention of being nice because I'm too busy plotting to take over the world and educating everyone about my vast knowledge of networking minutiae, decreasing my need to spend hours explaining things that I already know and holding it against other people because they don't know about decreasing network redundancy by entire orders of magnitude.

    But, um, slashdot, I was wondering...

    why can't i get a date?

    thanks.

