Whatever Happened to Internet Redundancy?
blueforce asks: "At one time, there was this really neat concept built into the internet that said there's all this redundancy, like a spider web. If one segment or router went down, the internet would re-route traffic around the faulty segment and keep on chuggin'. So, as I sit here today and can't get to a whole bunch of places on the net, I'm wondering what gives? Where's all the redundancy? I'm not referring to mirrors or co-location. It almost seems like a script-kiddie with some real ambition could bring the world to its knees. What really happens when routers go down, and what goes on when something like a Cable and Wireless pipe or someone else's OC-something backbone goes down?" Redundancies are nice, but not infinite. Planned DoS attacks can take out dozens or hundreds of routers at once, and as the number of downed nodes increases, rerouting becomes increasingly difficult. What are some of the largest problems with the current systems in use today, and are there ways to improve them?
Re:You can only be redundant to a point (Score:1)
first pointed out it was rendundant post!
Re:Hi! (Score:1)
OK, a guy asks what is actually a valid question, and you decide you need to use the opportunity to try to boost your poor self-esteem by mocking him and showing off your (actually seemingly undergraduate-level) knowledge of internet routing? You think that knowing the basics of internet routing makes you better than other people? Get over yourself. BGP has problems of its own, like requiring huge fucking routing tables.
Re:The bean counters took over (Score:2)
Of course, there's that whole other problem of ISPs restricting certain types of traffic (upstream, certain *cough*Napster*cough* ports, etc). I really don't like intentionally degraded service...
--
Connecting nodes (Score:3)
There are a number of obvious reasons why high levels of connectivity don't exist. One is cost -- who wants to pay for multiple connections if you usually only need one? That's also a somewhat psychological problem. Obviously, there are advantages to having multiple connections -- lower ping times and throughput to what would otherwise be `distant' networks, for instance.
Another reason is the fact that routing tables would be extremely complex if that many connections existed. There may be algorithms that can reduce the complexity, but it's definitely not something I really want to think about..
Otherwise, I suppose a lot of people just haven't thought about it.
--
Re:Hi! (Score:1)
--
Forget Napster. Why not really break the law?
Re:The world on its knees (Score:2)
That's not last year, or this year- that was back in 1986. Before the 70s, it was like one or two trillion a year, and then it started to snowball. Finance is by far the biggest customer of communications networks.
Taking out the world's financial networks for one second would impede about $2 grand worth of transactions. A minute of downtime a year would be $165K; an hour, nine million dollars. And that is from the 1986 figures, more than a decade ago. Any guesses on how much of the world's financial transactions go over the net now?
It's true. Or to be more accurate- the world's finances could be sabotaged in this way quite easily. The weird thing is, it's already taking damage just from stuff like Microsoft's irresponsibility- you don't have to have a malicious geek with a trenchcoat to cause billions of dollars of financial damages. Your software vendor can do you that kind of damage without even thinking, charge you for it, and then go set you up for even more.
Re:interesting timing (Score:1)
Re:yes but (Score:1)
Detour Project at U of Washington (Score:1)
The only working link I can find right now is
http://www.cs.washington.edu/homes/savage/paper
Redundancy is a thing of the past (Score:2)
The consolidation within the news-service sector of our economies has assured one thing: there is now only one message to get across. Only one message and soon only one audience, as human languages are dying out (thanks in part to the internet but more because of radio). If there is only one message and one audience, then you no longer have to worry about having multiple pathways. Redundancies have been made redundant.
But the corporatization of the internet is only partially to blame. More of the blame falls on the EU: who would've thought that banding the nations of Europe together in one bureaucratic machine could do so much harm to human civilization? Like the internet, sovereignty was once decentralized and redundant across many pathways. Now, a single marching order can come from Brussels and there'll be a third world war.
But redundancy is a very necessary thing. It's not safe to have just one of something: we must have several. If we are to have a third world war, we must have competing manifestations (WW3a and WW3b, for example), or else how can we possibly determine which was the more effective or more desirable? And what if one were to fizzle out? In the old world order, we'd be covered by grand international rivalries. In the new world order, we can only hope that fleeting petty intracultural differences can take up the slack.
The internet is an incredibly important technological phenomenon, but let's not allow it to blind us to the more pressing drives in humanity (such as competition). Looking solely at the internet as an end product may mask the underlying social and political conditions that created our mess in the first place.
Re:Timely Question (Score:1)
Is that the outage you were thinking of?
Kinda makes you wonder why a lot of data traffic would be going over copper. I thought copper was mostly restricted to the last mile these days.
Or is there so much copper out there that it won't be phased out for years? Anyone?
Re:You can only be redundant to a point (Score:2)
The bean counters took over (Score:4)
Re:BGP (adn IPv6) (Score:1)
Also, IPv6 allows for things like assigning the last 64 bits of your address space statically or dynamically, but
What it means is that currently, with IPv4, if you want 2 redundant links to the internet from 2 providers you have to either:
- get a provider-independent chunk of addresses from your NIC and have both your ISPs add this (small) subnet to their BGP adverts. PI subnets are increasingly difficult to get, because they're running low and because they are a huge overhead on routing at backbones, and hence discouraged; your ISP isn't even required to add your PI to their BGP adverts.
- or get a provider-dependent subnet from one and persuade the other ISP to advertise this chunk (not good).
- or get a PD subnet from both and dual-home all your hosts, which could be a mighty pain in the arse if you have any significant number of hosts.
Instead, with IPv6, you just get 2 chunks of address space, say dead:beaf:: and f00b:a43d::, one from each of your ISPs' address spaces. You assign a unique host ID to each machine, and let them figure out that their full IPv6 address can be either dead:beaf::hostid or f00b:a43d::hostid, either statically or, even better, dynamically from the peers/DHCP server/routers around them.
And v6 DNS supports this fully: you look up an A6 record, and the answer consists of a host portion and a pointer to which records to look up to find the network portion. You look up the network record, concatenate the previously found host portion with this network portion, and you have your IPv6 address.
(i.e. change just that one network record and you've updated the network number in DNS for all your hosts. Cool.)
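The renumbering trick described above (stable host IDs combined with per-ISP prefixes) can be sketched in a few lines of Python. The prefixes and host ID are just the illustrative values from this post, not real allocations:

```python
# Sketch of the IPv6 renumbering idea: each host keeps a stable 64-bit host
# ID, and its full addresses are just (prefix, host ID) pairs, one per
# upstream ISP. Prefixes here are illustrative, not real allocations.
import ipaddress

def addresses_for(host_id: int, prefixes: list[str]) -> list[str]:
    """Combine a 64-bit host ID with each ISP's /64 prefix."""
    result = []
    for p in prefixes:
        net = ipaddress.IPv6Network(p)
        addr = ipaddress.IPv6Address(int(net.network_address) | host_id)
        result.append(str(addr))
    return result

isp_prefixes = ["dead:beaf::/64", "f00b:a43d::/64"]  # the two ISPs above
print(addresses_for(0x1, isp_prefixes))
# → ['dead:beaf::1', 'f00b:a43d::1']
# Renumbering = swap one prefix; every host's addresses follow automatically.
```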
Anyway, sorry I can't be more specific about the IPv6 auto-config stuff, but it is in the specs. They did think about this stuff over the last eight years or so (?) that they've been working towards IPv6.
If people are interested in playing with IPv6, well play with it at home! Eg, linux with the USAGI patches (www.linux-ipv6.org) works perfectly. Then you can get a tunnel from the 6bone (6bone.net), and after that maybe even a
The destination isn't redundant. (Score:1)
LetterJ
Head Geek
You know, it still is. (Score:2)
It still is. If you go into a UUNET HUB and unplug a GW, nothing happens (well, after routing converges again). Same if you take out a TR or XR; I forget the difference. Other big ISPs are similar.
And if you look at the older way it was redundant, take out a long haul link and things route around, well it still works, take out a long haul link and traffic flows along the ones that still exist (even if they are a very different path).
What isn't redundant? Your link to the ISP probably isn't. The router you land on at the ISP's hub probably isn't. With enough money you can buy two links, better yet to two different ISPs. Most ISPs don't have more than two exit routers per hub, so if both go you are screwed. Some hubs only have two exits. I expect some ISPs aren't even that good, but you do get what you pay for: don't buy connectivity from a cut-rate provider and complain that they aren't redundant. What else? Well, whoever you want to talk to might not have redundant connections. Sometimes a whole ISP can do something that screws them (load a Cisco or Juniper code release with a bad bug that didn't show up in their or your testing, or screw up their L2 fabric, or...), but the other ISPs are still alive and kicking. They can all talk to each other while you are dead (unless they don't really have a backbone, but just wholesale for the dead ISP, and only the dead ISP; but again, you get what you pay for).
Still, that's not too bad.
Did you expect it to be better?
redundancy is expensive (Score:1)
fnord.
The 'net & real life (Score:5)
You'll need multiple connections that are all independent. This can be difficult to ensure, as lots of times Company A's fiber link will be in the same trench as Company B's, so the same backhoe will take them both out even though you used two services. You'll need to determine the full path your data will take, and often the salesfolk won't have that information or even understand what you want, particularly if you're not a big commercial account.
Then you'll need a way to route your inbound & outbound traffic dynamically. BGP is the method of choice but it's *not* a friendly thing. For the small-time techie, Zebra & other tools are under development to help with this sort of thing, but it's still tricky stuff full of gotchas.
The same redundancy advice goes for power - you'll need at least two separate services that are well & truly separate, not just the same line coming in the front door as well as the back door. Local generation for backup is also a good idea. You'll need to test everything regularly - systems often fail & a botched hand-off can ruin your whole day.
That said, a buddy set his house up to be always-connected. UPSes on key hardware. BSD on dual laptops using BGP, connected to cable-modem, ADSL, dial-up, digital-cellphone & a ham packet radio rig. He even has a wireless connection to a friend a few blocks away who is on a different part of the grid & central exchange, with a similar setup.
Of course it's still possible for something to break in a big way. One EMP over the Arlington, Virginia area would take out lots of important services, probably causing major disruption in the confusion & resultant instability. Heck, a group with an axe to grind could presumably cut enough critical cables in isolated areas in an hour or two to cause significant traffic problems globally.
This is of course no different than bringing down any number of other services: water, electricity, sewage, roads, gas pipelines; none are particularly hard to shut down if one is nuts enough to try.
You can only be redundant to a point (Score:2)
Eventually, you will reach a single connection on the path that leads to the machine you are looking for. Many providers have redundant connections to the backbones, but, for example, there is only one connection from them to you. And actually, there are many providers who do not have redundant, topologically separate connections to the backbone.
The internet was designed so that if any particular switching point went down, the others could keep up with it. The idea was nice 20 years ago when there were 50 NAPs. There are probably 50 NAPs within 10 miles of me right now. So we're not quite as redundant as intended, but we're still pretty redundant.
Re:ASK YOUR ISP WHO THEIR ISPs (PLURAL) ARE! (Score:2)
Internal vs. External disruption (Score:1)
If the "bad guys" blow up one of your routers, the network can cope. If they can log in and start downloading pictures of Britney Spears and clog your network, there's not much you can do.
Hyperbole (Score:2)
Er, yah. Right. To its knees.
Good god, we're not talking about a nuclear war.
--
Re:The bean counters took over (Score:1)
Right now if you said to the average residential DSL subscriber, "hey you are getting like 90% reliability, for an extra $15/mo I can get that up to 99% reliability" he probably wouldn't care.
Stuart Eichert
Re:BGP (Score:2)
You can also use layer 2/2.5 type technologies, such as SONET Automatic Protection Switching (APS) or MPLS Fast Recovery, which can recover much faster from certain types of failures. However, this won't address the whole issue.
ISPs that serve the business market are adding extra services such as IP VPNs, competing with Frame Relay and ATM, and are having to improve their availability figures - over time, this technology will filter down to the consumer market.
The Internet is already much more reliable and much faster than it was in 1995 - hopefully this will continue...
Re:I found the problem (Score:3)
For example, file/transfer sizes seem to follow what's called a "heavy-tailed" distribution (usually modelled as Pareto). This means, roughly, "most of the files are small; most of the bytes are in big files."
The parameters of the distribution depend on where in the network you take the measurements (inside the client, mid-net proxy, server).
There are some old studies of which low-level protocols appear most on the backbone (UDP vs TCP for picking out "streaming" candidates etc); they're harder to get now that the backbones are commercial instead of research-centric.
As for how much is porn and how much is business, well... I've been involved with some studies that have casually looked at that, too. In one trace I checked out, about 13% of requests included some word that would indicate a site with strong sexual content (that 13% is without trying very hard; it's also worth noting that the percentage of bytes in the responses to those requests was larger, on the order of 20-something IIRC). Unfortunately, it's a little harder to differentiate "business" from "casual/home" with heuristics, so no numbers there.
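The "heavy-tailed" claim above is easy to see with a toy simulation. The shape parameter here is a made-up but plausible value, not one from the studies mentioned:

```python
# Toy illustration of the heavy-tailed claim: draw "file sizes" from a
# Pareto distribution and check where the bytes live. alpha=1.2 is just a
# plausible-looking shape parameter, not a measured one.
import random

random.seed(42)
sizes = sorted(random.paretovariate(1.2) for _ in range(10_000))

total = sum(sizes)
top_decile = sizes[-1_000:]            # the biggest 10% of "files"
share = sum(top_decile) / total

median = sizes[len(sizes) // 2]
mean = total / len(sizes)

print(f"top 10% of files hold {share:.0%} of the bytes")
print(f"median size {median:.2f} vs mean size {mean:.2f}")
# "Most files are small; most bytes are in big files": the mean sits far
# above the median, and a small fraction of files dominates the byte count.
```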
The point is it is redundant for everyone else... (Score:2)
Anyway, let's all remember that the internet was built to serve places that look more like datacenters and colocation gateways than your living room or mine. We individual network subscribers are an afterthought, not the primary design model. Redundancy is expensive, and $20-$40 a month doesn't quite cut it for that kind of expense.
The other thing to bear in mind about redundancy is that it was never meant to ensure your connection to the network no matter what; remember, you don't exist any more because you were vaporized for being at the wrong end of an ICBM's parabola. =P That sort of thing is guaranteed to lower your modem connect speeds, if you catch my meaning... The rest of the network, however, will do just fine without your participation, and that is the redundancy that IP was designed for. I must say, with all of the posts complaining about service interruptions, my network connection was responsive and useful through all of them. I expect it will stay that way... at least until some backhoe/ICBM moves in to complicate things.
Re:Wow.. (Score:1)
What redundancy means (Score:1)
So it was designed for any point to be physically destroyed, and for the whole to continue functioning. They did not, however, worry about an attack via less tangible means, like huge quantities of packets. So, the redundancy that you say is gone, isn't. The net will still function after a military strike or natural disaster, but a well-done DDoS attack can cripple it, and that's fine by the Day One specs.
~Conor (The Odd One)
not just smaller countries (Score:1)
For example, the vast majority of traffic in and out of Poland goes through one link out of Teleglobe's NY pop. That's a country of 40 million people, at least 10% of whom use the Internet through the state telco (almost everyone uses the state telco for Internet). Lose a router and 4 million people are disconnected from the net.
(by the way, if anyone wants to enlighten me of any recent changes in this situation, I'd be willing to listen, but still skeptical)
Re:Someone has an adjenda at work (Score:1)
That's all right, "overrated" is the closest to an accurate negative moderation of one of my posts that I've seen in a while. Usually I end up being "flamebait" or "troll". At least I have posted something in the past which was overrated, so I can consider this to be karmic retribution.
I hope for metamod too (and I like to think I've fixed some things in that phase) but I don't hold out too much hope.
Cost (Score:2)
---
It's a convenience feature. (Score:4)
--
Situation in Israel (Score:1)
For example, here in Israel, the most-used link we have is an optical connection to the US. Nobody cares about connections anywhere else, and even ISPs which have connections to Europe (e.g. Barak ITC [barak.net.il], which represents Global One in Israel) don't offer the European link to common users. About connections to our neighboring countries, there's very little to talk about, since they're both mostly technically undeveloped and not in very friendly diplomatic relationships with us, to say it mildly. So it ends up that we route via the US to reach Turkey or the Far East.
In case of a war, which is sadly something more likely in our region, there would be just one point of failure.
Of course, one of the leading ISPs, NetVision [netvision.net.il], seems to have relatively high-bandwidth satellite links, which might be the solution.
...and probably never will be. (Score:3)
A lot of these business transactions mean that the organisation of the Internet, far from being organised like a spiderweb, is organised more like a tree in many places. So if one node fails, everything downstream loses connectivity.
--
Your funny (Score:1)
First off, we're lucky it works at all. The fact that I can get to slashdot every day is surprising.
On the political front: there is no regulation, there are no rules. Peering is a joke. You can only peer if you're one of the top 10 providers. Everyone else is buying from everyone else.
No one has the power to say your packets will always get from point A to point B. If one ISP is mad at another, it can remove the route through their network.
On the technical front: most of the time, your packets will take the same path every time. If that link goes down, normally it will reroute (eventually), but not in real time. And the path it just rerouted to may be suboptimal (i.e. your packets take a 30-second round trip over an already overloaded link).
Another problem is that everyone is sharing fiber runs. This saves dollars, but one backhoe can (and has) put a huge black hole in the internet.
Anyway, that's my babble. I haven't looked into this stuff in a while, so my statements may be outdated.
-Tripp
PS I didn't proof read this, so don't insult my bad english.
Re:Wow.. (Score:1)
Re:Penalize USA and a free pass to CHINA??? (Score:1)
Re:BGP (Score:1)
Re:The bean counters took over (Score:2)
One, the major backbones are maintained by a small number of companies, especially now as CLECs die like mayflies and regional ISPs and ILECs get gobbled up by nationals and multinationals. (In the ISP arena, from my experience, the bean counters are even willing to risk total pipe saturation rather than pay the expense of the expansion they need to meet sales estimates -- never mind ensure backbone redundancy!) Basically, you have a small number of companies who, though individually expanding their pipes, are on the whole not expanding enough. Not only that, but the complexity (not just technical but administrative and accounting-wise) of multiple pipes from multiple vendors and peers is considered unnecessary when they can just get bigger/more pipes from the same upstream.
Two, the consumer focus on Internet isn't reliability -- it's speed. The popularity of DSL in the face of its gaping unreliability is a sure sign of this. In order to serve customers, ISPs/ILECs only need bigger pipes, not "better" ones. Customers will complain about a day or two's worth of downtime, but in the end rarely is the information or method of communication important enough for there to be a viable market in reliable connectivity over fast connectivity.
Basically, if you want any of the old Internet traits -- reliability, noncommerciality, technical assurance -- you'd be better off making your own Net. (Honestly I don't know why one hasn't sprung up already.)
--
I'd like some redundancy, that's for sure... (Score:2)
(or maybe Rogers@Home is just bad... hmmm)
Routing (Score:2)
I have to agree with all the people who say that much of the problem has to do with the routing protocols in common use on the Internet. IMO part of that problem is that everyone has gone to link-state protocols; protocols in this family have certain desirable properties wrt loop-freedom and optimality, but slow convergence is a known problem with this approach. Personally, I've always been a distance-vector guy.
All of this came back to me recently as I was reading Ad Hoc Networking by Charles Perkins. It's about protocols intended for use in environments where mobile nodes come and go relatively frequently, where the links go up and down as nodes move relative to one another, and where there's no central authority to keep things organized. A lot of this work has been done in a military context - think of a few hundred tanks connected via radio, rolling across a large and bumpy battlefield. It turns out that distance-vector protocols are making a comeback in this environment because of their faster convergence and lower overhead compared to link-state protocols, and researchers have pretty much nailed the loop-formation and other issues. It also turns out that a lot of the techniques that have been developed for this very demanding environment could be useful in the normal statically-wired Internet, not just in terms of robustness but also in terms of giving power over connectivity back to the people instead of centralizing it in huge corporations.
I strongly recommend that people read this book, to see what's happening on the real cutting edge of routing technology. In particular, anyone working or thinking of working on peer-to-peer systems absolutely must read this book, because it describes the state of the art in solving some connectivity/scalability problems that many P2P folks are just stumbling on for the first time. I've seen many of the "solutions" that are being proposed to these problems in the P2P space; I can only say that P2P will not succeed if such stunning and widespread wilful ignorance of a closely related field persists.
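The distance-vector idea discussed above can be sketched as a simple Bellman-Ford-style iteration over a made-up four-node topology. Real protocols in this family (RIP, or the ad hoc protocols in Perkins' book) distribute these updates between routers and add split horizon, sequence numbers, and incremental triggered updates on top; this is only the core relaxation step:

```python
# Minimal distance-vector sketch: every node repeatedly improves its
# distance estimates using its neighbours' current tables (Bellman-Ford),
# iterating until nothing changes. Topology is a made-up 4-node example.
links = {  # symmetric link costs
    ("A", "B"): 1, ("B", "C"): 2, ("C", "D"): 1, ("A", "D"): 5,
}

nodes = {n for edge in links for n in edge}
cost = {}
for (u, v), c in links.items():
    cost[(u, v)] = cost[(v, u)] = c

INF = float("inf")
# dist[u][v] = u's current estimate of its distance to v
dist = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}

changed = True
while changed:                       # iterate to convergence
    changed = False
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            # best estimate via any directly connected neighbour
            best = min(
                (cost[(u, n)] + dist[n][v]
                 for n in nodes if (u, n) in cost),
                default=INF,
            )
            if best < dist[u][v]:
                dist[u][v] = best
                changed = True

print(dist["A"]["C"])   # 3: the A-B-C path beats direct-ish A-D-C (cost 6)
```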
Re:BGP (adn IPv6) (Score:1)
There are a few hopeful signs on the horizon though. IPv6 should make routing a lot easier and give us a lot more operational "breathing room" which we can use for redundancy and robustness.
After poring over things like this [isi.edu] and this [isi.edu], and keeping in mind the recommendations in other RFCs and discussions, I can't find anything that supports this. We certainly get breathing room as far as more address space, but how does this lead to anything but requirements for more routing complexity to keep tabs on it all?
--
It's "an agendum" (Score:1)
Rogers@Home (Score:1)
The first outage was the result of thieves trying to steal copper cabling. They accidentally cut the ONE fibre-optic cable that services our province (located between Toronto and Buffalo). Brilliant, no? Rogers does have redundant servers and connections in place, but chose at the time not to use them because they were so outdated the service would slow to a crawl and crash anyway. So much fun!
The second problem was a server crash in California that brought us down again. Why Ontario's servers are located in California is anyone's guess. Very dumb, IMO, but who am I to tell Rogers what to do? (To be fair, they are currently relocating the servers, but far too late.)
At least the service is decent for the most part, and Rogers has the cable monopoly here, so I can't do much about it but live it out
(I did, however, have a *lot* of fun when they sent out a customer satisfaction survey a couple weeks ago!)
Re:BGP (Score:2)
At last month's IETF in Minneapolis there was a slide during the plenary (which doesn't seem to have made it to the web site yet) that showed the average speed of route convergence. It was on the order of 90% propagation of route changes within 1-2 minutes. That's pretty fuckin fast.
One has to consider the theoretical minimum one expects to see, given the depth of the internet and how fast the links and CPUs on the routers are. There are surely improvements that can be made (some not without major protocol changes), but we're pretty darn close, I think.
The major improvements that BGP needs are not in propagation speed, IMHO, but in general issues of scalability (the size of the table as it relates to the memory and CPU available in a router).
(OT -- moderation comment) (Score:1)
(Go ahead, mod me down for spouting off like this, see if I give a rat's ass.)
JMR
MAE West (Score:2)
Below Tier Two, it really doesn't matter.
Re:Dont forget regulators and petty bureaucrats (Score:2)
Not without spraying my monitor
It's doubtful the NSA needed to ship all traffic to the US. They certainly have unfettered access inside telephone company switching points in every NATO country, and many other US-allied countries. When you work in those buildings, there are always some bits of unidentified kit doing something "important"; the bosses let you know not to touch them, or else your career will be very short.
crooked politicians
In the commission, that's redundant. Political lobbying by entrenched businesses is becoming positively American in depth and scope.
In Europe, never chalk up to conspiracy that which can best be explained by misguided nationalism and greed.
the AC
Dont forget regulators and petty bureaucrats (Score:5)
But in the rest of the world, there quite often are regulations preventing a company from just running a fibre from one place to another. It is starting to improve, but for the longest time, almost 99% of all intra-European traffic passed through the US. Traceroutes from one ISP to another in the same country often went via the US.
This meant that everyone was relying on a few trans-Atlantic carriers, and the reliability was pathetic. From here in Belgium, all communications to neighboring countries passed through the US. The people in charge of the routers, at the bean-counter, lawyer, and politician level, would forbid the engineers to create inter-country routes, in case there was a law somewhere being broken. It doubled the traffic on the trans-Atlantic lines, and engineers couldn't do much about it.
Recently a number of peering points and interconnects have sprouted up all over Europe. Economics eventually overrules short-sighted politicians. It feels so good, as an engineer, to be able to route traffic as directly as possible. But there are still problems with NAPs run by telcos, as they have learned two decades of dirty tricks from US telcos, and have polished up those tricks to hurt competitors. Shit happens.
The greed factor has also raised its head, as some of the more criminally backed peering points *cough*telehouse*cough* have tried to purchase Europe-wide laws giving them 100% of the market. The argument is that the incumbent telcos are all too greedy, incompetent and biased to run peering points, and all the peering points should be run by a single, greedy, politically aligned non-incumbent non-telco operator. Whoops, maybe those last points were raised by all the other NAP operators.
I feel the internet is coming to the breaking point, where it's being pushed to do what it was never originally designed to do. The original design was for reliable communication, not censorship, business operations, or avoiding national laws. The telephone companies of the world worked out many of these issues in back rooms, with no real public insight into the downside of each policy. The result was a communication system which never worked very efficiently and cost a huge amount more than it should have. Those costs and inefficiencies slowed the growth of telecoms the world over, until the US Justice Department broke up Ma Bell and, unforeseen to them, sparked a revolution in cheap telecoms which is now churning around the world. I remember when a short overseas call cost a week's wages; now I don't even think about chatting for an hour to the US.
The internet has started to make people aware that unlimited communication has its downsides as well, since not all humans are perfect, good creatures. Because of this realisation, we are seeing a large backlash from the unwired masses who never had a need to communicate, and want others to stop communicating freely. The internet was designed to communicate, and there are no easy (or even complicated) engineering fixes to social problems placing limits on communication.
the AC
Cascade failure (Score:1)
Most ISP's backbones are sufficiently saturated that this is hard to avoid. Add in misconfigured routers causing looping, and one link can take you out.
As for the "last mile" issue, any half serious internet service will have full redundancy on this, down to the cable and switch level.
Re:Exactly (Score:2)
Redundancy seems difficult to get (Score:2)
It seems like there is just not much solid information out there about exactly how to configure such a setup. We have wireless links, ADSL, and a 10Mbps fibre-optic connection, each to a different ISP here, but actually using them in either a simultaneous or failover fashion seems difficult.
Presumably, this would require us to publish routes (BGP?) to our IP address-space to multiple ISPs, but obtaining our 'own' block of IP addresses, that we are truly responsible for - i.e. not allocated by some specific ISP seems horribly expensive, at least here in New Zealand.
Does anyone have any links to good documentation on setting up multipath routing - preferably on a Linux/BSD-based router?
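For the outbound half of this (which doesn't solve the inbound/BGP side of the question), Linux's iproute2 supports multipath default routes. The gateways and interface names below are placeholders, not a working config for any particular setup:

```shell
# Outbound-only multipath on a Linux router with iproute2. Gateways and
# device names are placeholders for the three uplinks described above; this
# spreads *outgoing* traffic across links but does nothing for inbound
# redundancy, which still needs BGP and your own address space.
ip route replace default scope global \
    nexthop via 192.0.2.1    dev eth0 weight 1 \
    nexthop via 198.51.100.1 dev eth1 weight 1 \
    nexthop via 203.0.113.1  dev eth2 weight 2  # give the 10Mbps fibre more share
```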
No longer decentralised (Score:2)
Can you even set up your own redundant links anymore? Not really -- you need a
Re:You can only be redundant to a point (Score:1)
That's the most ironic comment rating that I've ever seen.
Re:Exactly (Score:1)
Many ISPs have multiple routers connecting to multiple backbones. Two or more connections to a land backbone and one satellite connection should be pretty redundant! The server you are trying to connect to could actually be more than one physical machine, each possibly with more than one network card, maybe plugged into separate hubs or switches for redundancy. Perhaps even those switches and hubs have redundant power supplies (along with the servers and routers) and even load-sharing/redundant backplanes. There could be redundancy even among the routers, with one working and another checking the health of the "live" router periodically and assuming its identity if it dies. And all this, powered by redundant UPSes.
The redundancy is there and when it works, you don't know about it. Only when a hosting co, or other site is badly designed that some server cannot be reached and then someone asks, "Whatever Happened to Internet Redundancy?".
Re:Redundancy is a thing of the past (Score:1)
Well, uh...that IS sort of their purpose in the first place.
Re:Redundancy is a thing of the past (Score:1)
Even in writing/language, where it is often criticized and carries a negative connotation, it can be effective. In speaking to a large group, it helps to reiterate one's point a few times. While this is redundant, it helps to emphasize the major purpose of one's argument, and saying one thing a few ways makes it more likely that it has been presented in a fashion that someone will understand.
In most other (non-language) senses, redundancy is nearly always a good thing: RAID, redundant networks as mentioned in this article. Redundancy means security and protection against the failure of any one link in a chain. Space Shuttles and other risky ventures have redundant mechanisms so that the failure of one does not immediately constitute a mission- or life-threatening emergency.
Re:Kyoto treaty - 11th hour (Score:1)
Re:Penalize USA and a free pass to CHINA??? (Score:1)
Nobody really argues against people giving their own money away. The question is whether we ought to be forced to do so at the point of a gun. What a shabby method of charity, forcing it by government action.
As for Kyoto, it's a sham and a shame.
Re:Exactly (Score:2)
Redundancy measurement would be a great dotcom business idea... wait, we're past that, aren't we?
DB
special case (Score:1)
So for this story, would I get a +1 for Redundant?
Re:Redundancy: Inbound vs. Outbound (Score:2)
The harder part is giving other people multiple paths to reach you. One way is to get yourself a routable address block (your local policies will indicate whether this is
I can't speak for New Zealand - between physical isolation and occasional entertaining telecom and business regulation laws, there's lots of specialty detail involved. In particular, there may be fewer providers who can get you real paths off the islands, and you have to care a lot more about their service quality, but you still have a lot of flexibility for accessing local sites.
Re:Redundancy seems difficult to get (Score:1)
If your providers can't help you setup your BGP peering, then you probably need to find a different set of providers.
What you will need to do it correctly, though, is your own Autonomous System Number, commonly known as an ASN. This is the number that actually identifies your organization to the world, and that BGP uses to define "paths".
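As a toy illustration of how those AS paths get used (everything here is made up; real BGP has a much longer decision process, with local preference, MED, and so on, before it ever compares AS-path lengths):

```python
# Toy model of BGP-style path selection: among several advertised
# routes to the same prefix, prefer the shortest AS path.
# (All ASNs and provider names below are made up for illustration.)

def best_route(advertisements):
    """Pick the advertisement with the shortest AS path.

    advertisements: list of (as_path, next_hop) tuples, where
    as_path is a tuple of the ASNs the announcement traversed.
    """
    # Loop prevention: a real router would also discard any path
    # that already contains its own ASN.
    return min(advertisements, key=lambda adv: len(adv[0]))

# Two providers advertise reachability to the same prefix:
routes = [
    ((64512, 64513, 64514), "provider-A"),  # three AS hops
    ((64515, 64514), "provider-B"),         # two AS hops
]
print(best_route(routes)[1])  # -> provider-B
```

That's also why multihoming needs your own ASN: without it, the rest of the world can't see your site as a distinct endpoint of those paths.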
Re:You can only be redundant to a point (Score:4)
Thank you, moderator, you just made my day!
(Sorry, T-Bone)
--
Only if every host is a router (Score:2)
This is my understanding, at least.
5% porn? You've got to be kidding. (Score:1)
interesting timing (Score:1)
Never has been any... (Score:2)
TCP/IP networks have never been particularly able to stand having a link drop, though. Even if you KNOW there are more ways to get to where you want to go, you'll never see the packets go to where you want them to go. I'd love to see more dynamic routing on the net. It'd be nice to be able to keep my traffic off Sprintnet and other backbone providers who got their routers in cereal boxes, for instance...
Re:ramifically speaking (Score:1)
You can read more about it here: http://www.wired.com/news/politics/0,1283,18390,0
Read a newspaper.
----
You should be careful, though... (Score:1)
Last month, Rogers@Home, the internet-via-cable provider in Ontario, lost connectivity for a day and a half. Not just locally, but every single client in the province, because of a cut cable in Boston. A cable that's cut in a different country, for crying out loud.
The problem isn't about individual connections. It's about states, provinces and possibly entire countries dropping off the net for days or weeks while the sabotaged hardware is repaired or replaced.
We get really upset when countries insist on being able to do this deliberately - how much more upset should we get if countries aren't preventing it from happening inadvertently?
--
BGP (Score:3)
We can hope that someday we'll have better protocols to deal with this -- don't ask me, I'm no expert on this stuff -- but until the gurus come up with one I guess we just have to suffer.
Re:BGP (Score:3)
Two points to respond to here. First, if 90% of route-change propagation occurs within 1-2 minutes, that doesn't necessarily help much if the remaining 10% take two hours. Yes, I know they don't, but in any case an average statistic would be more useful than a 90th-percentile statistic.
Second, 1-2 minutes is fast when it comes to choosing between working routes. Internet routing works pretty well at determining *which route is faster*. However, when it comes to routing around faults, 1-2 minutes is a pretty long time: with ISPs advertising "99.9999% uptime" (i.e., down for at most a few seconds each month), downtime of 1-2 minutes is a Bad Thing.
What I'd like to see is some mechanism by which updates could be marked as "urgent" if they relate to fault-recovery -- that way, the few updates which are necessary in order for packets to be routed away from downed links could be propagated within a few seconds, while routine "link x is faster/slower than link y" updates could be handled more slowly.
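A sketch of that idea (purely hypothetical; this two-queue scheme and the message strings are made up, not anything that exists in BGP today):

```python
from collections import deque

# Hypothetical two-priority update queue: withdrawals caused by a
# link failure jump ahead of routine metric changes, so fault
# recovery propagates first.

class UpdateQueue:
    def __init__(self):
        self.urgent = deque()   # fault-recovery updates
        self.routine = deque()  # "link x is faster/slower" updates

    def push(self, update, urgent=False):
        (self.urgent if urgent else self.routine).append(update)

    def pop(self):
        # Always drain urgent updates before routine ones.
        if self.urgent:
            return self.urgent.popleft()
        return self.routine.popleft()

q = UpdateQueue()
q.push("metric: link-A slower")
q.push("withdraw: link-B down", urgent=True)
q.push("metric: link-C faster")

order = [q.pop() for _ in range(3)]
# The withdrawal comes out first even though it was queued second.
```

The hard part in practice would be stopping everyone from marking all of their updates "urgent", which is a social problem, not a protocol problem.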
Re:The redundancy never was there (Score:2)
If anything had happened to it, the east and west coasts would have been unable to communicate, even though there were several logical paths between mae-east and mae-west.
Of course you can't talk to Mae West, she's been dead for more than 20 years!
--
Redundancy is the BANE of Dictatorships! (Score:2)
Say the island nation of Tonga decides to place all the circumvention software banned under the US DMCA in its government's archives. Then they put it on the web. Now you have "illegal" software hosted on the site of a government no one else can legally touch.
Of course, the US Navy could just pound them from offshore, but what US President would want to face the public outrage over little ol' Tonga??
No, there's a BETTER way to handle this. Pay off an internet backbone to shut off their West Coast link to Tonga. Boom. Problem solved.
Or is it?
Redundancy means you can get to Tonga ANOTHER way - maybe by routing through Canada, or via MAE-East to Europe, through Europe to Asia, and through Asia to Tonga. Now you have the problem of telling everyone out there to cut off Tonga.
Redundancy is, again, the enemy of dictatorships. They have the greatest motivation of all to keep internet redundancy as weak as possible.
On a side note, don't be surprised if the backbones leading out of the US decide to install caching proxies (what's the official term for these, anyway?) that, like Junkbusters, edit out content from "banned" sites at the backbone level.
The other thing they can do to defeat redundancy at its foundation is wipe the site off the internet registry or DNS, so that you get "no such domain: freedom.to" errors or something.
Of course, then you can just route to an ANONYMOUS PROXY in Europe or Asia and bypass both problems.
========================
63,000 bugs in the code, 63,000 bugs,
ya get 1 whacked with a service pack,
Yes, the internet was built to be redundant. (Score:2)
Re:low key packets (Score:2)
Attacking a nameserver only moves the problem elsewhere. Other nameservers cache answers, and there are 13 root nameservers on the internet serving us the top-level domains.
You might want to read some RFCs at http://www.faqs.org [faqs.org].
Re:low key packets (Score:2)
Yes, you're absolutely correct, I should read some more RFCs too *grin*
PRL Article Re: Internet Vulnerablity (Score:2)
Breakdown of the Internet under Intentional Attack
Keren Erez, Daniel ben-Avraham, and Shlomo Havlin
Phys. Rev. Lett., Volume 86, Issue 16, pp. 3682-3685
Worth checking out. Pretty readable.
weakest (nearest) link (Score:2)
but if your entry point to that network is down, you're SOL, regardless of how redundant the network itself is.
I've frequently found that my local pacbell router is down (or the DSLAM at the CO for my DSL line), and that effectively cuts me off the net totally.
also, not every network has peering agreements with every other network. this is business, not pure technology. even if a packet theoretically -could- traverse a router, in many cases it won't, due to BGP policy and such.
--
Timely Question (Score:5)
One well-placed bomb could wreck the entire Dutch Internet, the report states. The physical protection of (fiber-optic) cables at critical network and ISP junctions is almost nonexistent, TNO claims. It is very easy to find out exactly where the cables are located, and they can easily be reached. 'For now the chances of a deliberate disruption of the cable network by activists or terrorists are low. But as the importance of the Internet grows, we fear that criminals, activists or terrorists will see the cable infrastructure or other critical infrastructure as targets in the near future.'
Sincerely,
Vergil
Vergil Bushnell
Depends how you look at it (Score:3)
Problem is of course when you crash the <1% of nodes that actually do the major routing.
Routing's getting hairier and hairier; it should really get fun once IPv6 kicks off and everyone and their dog have a squillion IP addresses each.
Smaller countries are easier targets (Score:2)
Unrelated? (Score:2)
It's not like my email goes through Yahoo.com as a node on its way to being delivered. Yahoo is an endpoint, not a pathway.
Internet Redundancy (Score:2)
Re:BGP (Score:5)
There are a few hopeful signs on the horizon though. IPv6 should make routing a lot easier and give us a lot more operational "breathing room" which we can use for redundancy and robustness. There will also be a lot more high speed fiber optic links from hither and thither, which should help out quite a bit (especially to fix the "backhoe" vulnerability).
Re:Redundancy seems difficult to get (Score:2)
Simple: use NAT for the humans, set up all your servers with IPs from all of your access providers, and use DNS to direct the traffic wherever you want it to go. Keep the TTL on the zone low, and you won't be out for more than a couple of minutes.
-Nathan
Care about freedom?
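The client side of this low-TTL trick can be sketched roughly like this (addresses and port are placeholders; you'd publish one A record per access provider and let clients fall through to the next address when one provider's link is down):

```python
import socket

# Sketch of multi-provider failover from the client's perspective:
# try each published address in order and use the first one that
# accepts a TCP connection. (Addresses and port are placeholders.)

def first_reachable(addresses, port=80, timeout=3):
    for addr in addresses:
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return addr
        except OSError:
            continue  # that provider's link is down; try the next
    raise OSError("no address reachable")
```

The catch is that not all resolvers and clients honor a low TTL, so "a couple of minutes" is the optimistic case.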
Redundancy is no more. (Score:2)
redundancy, &c (Score:4)
From the routing standpoint, the alternative is to advertise subnet blocks out a redundant connection. That is, you sign up for provider A and get a
At the next level, even if you get redundancy of ISPs, you may very well not have redundancy in your telco facilities. Fiber providers swap the actual fibers back and forth - I'll trade you a pair on my NY-Chicago route in exchange for one on your Chicago-Dallas - so even if you get your Provider A connection from Worlddomination and your Provider B connection from AT&CableTV, there's a measurable chance they're in the same bundle. Even if they aren't in the same bundle, they may well run through the same trench.
Thirdly, you don't know what providers A and B are doing for redundancy. Are they ordering all of their backbone circuits from diverse providers, and are they ensuring diverse physical routing of the fibers? On top of that, I recall reading on one occasion that telcos sometimes move circuits around, so you can order redundant circuits, have them installed correctly, and then have them moved on you later...
There's also been a lot of stuff flying around here about NAPs & MAEs. The MAEs and NAPs were quite important a few years ago, but since then the major providers have switched mostly to private peering arrangements, where their interconnect traffic doesn't go over the public peering points. Smaller providers still peer at those points, and some of them probably even peer with some of the big guys, but the major traffic goes via private DS3/OCx connections running off-NAP.
Lastly, vis-a-vis the redundancy of major backbone networks: it's been ages since I looked at them, but Boardwatch used to have maps of the various Tier 1/Tier 2 NSPs. Even back in 1997/1998, UUNET's US network looked like someone took a map of the US and scribbled all over it. They have a huge bloody lot of connections, and you can bet they've got multiple redundancy out of virtually any city. (Disclaimer: never employed by UUNET or any related firm...) Yeah, I can see that some of the smallest national backbones (are there any left?) might only have one link into some cities, but even those guys set up fallback routing so that their traffic can get in and out.
Generally speaking, if your favorite site is not reachable, it's most likely something at the site's end of things. Second most likely is that it's at your end, if you're not using a major connectivity provider, or if you're using a DSL provider with known problems...
Re:(OT -- moderation comment) (Score:2)
Well, it's also comment #7, so redundant seems reasonable...
Re:You can only be redundant to a point (Score:3)
Where I work we use two providers. Redundancy in a company's ISPs/backbone connectivity is reasonable and, depending on your needs, essential.
If you're sitting at home with only one ISP (which is expected), then you should just recognize and accept that having a single point of failure on your end is a fact of life on the consumer end of the commodity Internet. When I'm sitting at home, my power supply and hard drive and network card are all single points of failure as far as my network access is concerned, but I can live with that.
low key packets (Score:2)
What I see happening is a mixture of crappily assessed networks created by pundits with zero skill at configuring their networks [antioffline.com].
When companies go out of business, so do their networks, which means if you're on a node with that connection, somewhere along the line you're bound to have a broken link.
Sure there are DoS attacks [antioffline.com], and there are also fixes for them [antioffline.com], so DoS attacks should be only your third or fourth suspect when hostnames won't resolve.
Security problems with BIND could also be to blame for failures to resolve hostnames, in which case you can always point your nslookups at different servers.
Personally I don't think people envisioned what the Internet would be in a few years when they made those statements.
Re:Hyperbole counterpoint question (Score:2)
Well, I would not call Dr. Mudge a punk. He is a respected security expert, and at the time the L0pht was known only to internet-security people and those who hacked systems. He was not as well known as he is today.
>>And the Senate is part of Congress.
OK. Are its hearings subject to the same rules and regulations? I thought it was different.
>>an attack like you describe would require an awful lot of coordination.
Yes and no. What I mean by yes is that you are correct that it requires a very detailed timeline. The no part is how you or I could hack systems (via viruses and other tricks) and set up the timeline, or even better, upload the timeline at the last possible moment.
Taking out a router would require not only huge amounts of bandwidth but also proper usage of it. I would definitely use dying packets (packets that have to report back to the sender that they died in transit and require a new packet to be resubmitted); this way I can clog up bandwidth at the same time.
Anyway, after taking out Newark and White Plains, the rest would be a joke.
ONEPOINT
spambait e-mail
my web site artistcorner.tv hip-hop news
please help me make it better
Re:Hyperbole counterpoint (Score:3)
Move to the current.
A well-designed attack on the major routers (and it's not that hard to find them) could reduce traffic to a crawl.
Hell all they have to do is hit the
Hit the MCI routers for their newly installed OC-192s and the backup OC-48s; take both out in Newark, NJ, and the backup in Weehawken, NJ; then kill the Sprint loop in Weehawken. Kill the OC-3s and OC-12s in Newark and Weehawken.
Yes, a lot of traffic passes via Newark and Weehawken; the others are White Plains and the Bronx. Take out White Plains and that should take out 10% to 30% of the inbound British traffic.
Hell, while we're at it, let's take out the Aussies: hit them at the Singapore router (that will slow things down a bit), then hit them at the Philippines and kill them off at Sri Lanka.
But wait, how about the Latin Americans? Easy too: start at Miami, then work over to the Bahamas, then kill-shot Sao Paulo, Brazil.
What did you say? I didn't mention the Asians? Oh my... so sorry, but I would like to keep my goods at their current cheap prices, so I'll leave them alone.
All you need is a large number of computers doing these attacks at the same time.
redundancy::reality (Score:2)
* peering arrangements create static routes
* problems on dynamic routes are difficult to debug
Combine these two factors and you can see the problem.
Redundant versus Distributed (Score:2)
I took a course on routing and flow control in grad school. I get the impression that the features people interpret as redundancy are actually examples of distributed processing. For example, no central location keeps the entire routing tree; local nodes don't need to know the global topology; nodes must find a way to route, and to keep queues from overflowing, without supervision or instruction. That is, each IP gateway and router is expected to be co-operatively autonomous.
I also got the impression that although the potential for redundancy is included by distributing the authority, there really isn't all that much actual redundancy. For example, there are very few backbones that connect major routers across the country.
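That co-operative autonomy can be sketched with a toy distance-vector exercise (made-up four-node topology; each node keeps only what its neighbors report, nobody holds the global map, yet routes still re-converge when a link dies):

```python
# Minimal distance-vector routing: each node knows only its own links
# and the distances its neighbors report; nobody stores the topology.

INF = float("inf")

def converge(links, nodes):
    """links: dict {(a, b): cost}, undirected. Returns dist[a][b]."""
    dist = {a: {b: (0 if a == b else INF) for b in nodes} for a in nodes}
    neigh = {a: {} for a in nodes}
    for (a, b), c in links.items():
        neigh[a][b] = c
        neigh[b][a] = c
    changed = True
    while changed:  # exchange vectors until no node learns a shorter path
        changed = False
        for a in nodes:
            for n, c in neigh[a].items():
                for dest in nodes:
                    alt = c + dist[n][dest]  # path to dest via neighbor n
                    if alt < dist[a][dest]:
                        dist[a][dest] = alt
                        changed = True
    return dist

nodes = ["A", "B", "C", "D"]
links = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 2, ("C", "D"): 2}
print(converge(links, nodes)["A"]["D"])  # 2, via B
del links[("B", "D")]                    # the B-D link dies
print(converge(links, nodes)["A"]["D"])  # 4, rerouted via C
```

The rerouting falls out of the distributed computation, but notice it only works here because the made-up topology happens to have a second path: distribution gives you the *potential* for redundancy, not redundancy itself.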
Hi! (Score:2)
Hell, I don't even pay attention to the unbridled explosion in consumed bandwidth on the Internet, or the protocols like BGP4 that ISPs use to delineate their peering relationships and shut down unwanted traffic, decreasing network redundancy by entire orders of magnitude.
But, um, slashdot, I was wondering...
why can't i get to my porn?
thanks.
ramifically speaking (Score:5)
Aren't you glad you have a Resident who cares?
Resident George W. Bush
Re:Timely Question (Score:2)
rumours that an internet outage a few weeks ago that affected the @home network was the result of vandals
This was not the work of vandals, it was the work of thieves.
Unfortunately I have no evidence
A report at the time of the incident can be found here [cbc.ca].
However the information in the article is not entirely accurate.
So far as I know the cops haven't caught the thieves yet, but their ilk has been seen before and their MO is no mystery.
This is what shakes:
Good thing most criminals are dumb
Unfortunately for the thieves in the story above, this proved all too true. When they made the first cut, they found they were dealing with fibre, which, in the eyes of thieves, is useless, and they left the scene.
Why would someone want to vandalize an internet line?
(It would be redundant to say here that these are not vandals but are in fact thieves.) What the thieves were after is good old copper wire. Copper wire theft [google.com] is a problem worldwide. In this case the thieves were after 1/4-inch copper cable, which they can sell for about 75 cents a pound at the junkyard. In other parts of the world, thieves go after the thin, colourful wires used in telephony, because they are valued as material for weaving.
- Vandals are annoying; thieves change the way we live
Re:Hi! (Score:5)
Hell, I don't even pay attention to the looks I get when my voice rises in frustration because no one else understands what I'm talking about when I'm in "the zone," or the simple human convention of being nice because I'm too busy plotting to take over the world and educating everyone about my vast knowledge of networking minutiae, decreasing my need to spend hours explaining things that I already know and holding it against other people because they don't know about decreasing network redundancy by entire orders of magnitude.
But, um, slashdot, I was wondering...
why can't i get a date?
thanks.