Millions in Middle East Lose Internet
Shipwack writes "Tens of millions of internet users across the Middle East and Asia have been left without access to the web after a technical fault cut millions of connections.
The outage, which is being blamed on a fault in a single undersea cable, has severely restricted internet access in countries including India, Egypt and Saudi Arabia and left huge numbers of people struggling to get online.
Observers say that the digital blackout first struck yesterday morning, with Egypt's communications ministry suggesting it was caused by a cut in a major internet pipeline linking it to Europe."
Re:redundancy (Score:3, Interesting)
Also, who actually has the responsibility for the cable? No telling how long the accountant types on each end will bicker. I just hope that it gets restored quicker than electricity in Baghdad.
its a 'web' (Score:3, Interesting)
They must have their own servers; anything going over that cable is just a 'foreign' request.
Those are important, sure, but I would guess they don't make up more than 40% of all requests.
But only some of the routes should be down, and they should still have a very large LAN with DNS, www, email and anything else they host on the spot, and I'm willing to bet that the ISPs there have stuff like that.
IIRC the web wasn't just designed to be foolproof, it was also designed to be autonomous once disconnected from other networks.
Or am I missing something here, and all they have is cables, no other infrastructure?
Re:Information warfare? (Score:5, Interesting)
Clicky clicky: http://www.reuters.com/article/internetNews/idUSHAN1727620070607?feedType=RSS [reuters.com]
*snip*
State-run newspapers said an 11-km (7-mile) section of stolen TVH fibre-optic cable would be replaced at a cost of $5.8 million. It was part of the line that transmits data from Vietnam to Thailand and Hong Kong.
In all, about 43 km (27 miles) of fibre-optic cable is missing, including about 32 km (20 miles) stolen from a cable operated by a Singaporean company.
Re:Old news, but provides a fine example of TCP/IP (Score:5, Interesting)
Guess TCP was able route the packets through alternate gateways after detecting the problem.
1. TCP has nothing to do with routing packets.
2. IP also has nothing to do with selecting an "alternate gateway" after "detecting a problem"; that's the job of routing protocols.
3. If it was down for an hour, then I don't think this had anything to do with magical routing protocols. Human interaction was required to either repair the broken link or set up an alternate path.
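The distinction above can be sketched in code: when a link fails, it is the routing layer that recomputes paths (e.g. shortest-path first, as link-state protocols do), while TCP just retransmits over whatever path IP now uses. A toy illustration with a made-up topology and costs, not the real cable map:

```python
# Hypothetical topology: rerouting around a failed link happens at the
# routing layer, not in TCP. Dijkstra over {node: {neighbor: cost}}.
import heapq

def shortest_path(graph, src, dst):
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if dst != src and dst not in prev:
        return None          # no route at all
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Invented link costs: a short Mediterranean hop and a longer eastern route.
links = {
    'India':  {'Egypt': 1, 'SEAsia': 3},
    'Egypt':  {'India': 1, 'Europe': 1},
    'SEAsia': {'India': 3, 'Europe': 4},
    'Europe': {'Egypt': 1, 'SEAsia': 4},
}

print(shortest_path(links, 'India', 'Europe'))   # via Egypt

# Simulate the cable cut: remove the Egypt-Europe link, recompute.
del links['Egypt']['Europe']
del links['Europe']['Egypt']
print(shortest_path(links, 'India', 'Europe'))   # via SEAsia
```

The point of the sketch is that a new path only appears after a routing recomputation; nothing in TCP's retransmission machinery discovers it.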
According to the article:
"There has been a 50% to 60% cut in bandwidth," Rajesh Charia, president of the Internet Service Providers' Association of India told Reuters.
So it sounds like not every ISP was able to use the alternate path, and the alternate path didn't have sufficient bandwidth for those that could, anyway.
Mind you, the article then comes out with this astonishing "fact":
Is this the new version of the Majestik 12 that runs the world?
I'm guessing this is a reference to [A-M].root-servers.net, but I'm pretty sure none of those are actually a single server, and several have multiple physical locations. Even so, the vast majority of even remotely popular sites will have their nameserver entries cached at a bazillion ISP DNS caches.
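The caching point can be sketched: a resolver keeps an answer for the lifetime of the record's TTL, so even if the upstream (root or TLD) servers were unreachable for a while, lookups for popular names keep succeeding from cache. A toy stub-resolver cache (the name, address, and TTL below are invented for illustration):

```python
import time

class DnsCache:
    # Toy resolver cache: stores (answer, expiry) per name.
    def __init__(self):
        self.store = {}

    def put(self, name, answer, ttl):
        self.store[name] = (answer, time.monotonic() + ttl)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None
        answer, expiry = entry
        if time.monotonic() > expiry:
            del self.store[name]   # expired: only now is an upstream query needed
            return None
        return answer

cache = DnsCache()
cache.put('example.com', '93.184.216.34', ttl=3600)
# The upstream servers can be unreachable for the next hour;
# the cached answer is still served until the TTL runs out.
print(cache.get('example.com'))
```

This is why a root-server outage degrades slowly rather than instantly: only names whose cached records have expired actually need the roots.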
How's the spam? (Score:2, Interesting)
Well, it didn't happen in Israel (Score:3, Interesting)
Lucky us!
Re:redundancy (Score:3, Interesting)
Of course the phone-modem connection isn't useful for any serious download, but I'm never helplessly disconnected from e-mail, news, slashdot etc.
In graph form (Score:3, Interesting)
It was actually two cables - how redundancy works (Score:3, Interesting)
Also, if you look at how internet transmission works: while you obviously want geographical redundancy, that doesn't mean you hold routes in reserve. Carriers are going to make sure they've got enough redundancy for their critical load levels (e.g. the voice network and private-line customers), but if they're doing redundancy at Layer 3 they're going to send traffic across all the routes, because it doesn't make sense to leave capacity idle.
To some extent, if you're doing Dense Wavelength Division Multiplexing, and if you haven't lit up all your wavelengths (because the optics and routers at the end are expensive), you can sometimes divert some wavelengths to the alternate routes. For instance, you'd provision wavelengths A, B, and C on the west route and Z, Y, X on the east route, and if something breaks you can push them onto the other route. But once your cable fills up, you've got less ability to do that until you build more cables or put even more expensive optics on the ends to light them.
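Under the assumptions in that comment, the wavelength diversion might be sketched like this. The route names, wavelength labels, and spare-slot counts are all made up; the point is just that stranded wavelengths can only move onto a surviving route while it has unlit capacity left:

```python
# Toy model of diverting DWDM wavelengths off a failed route onto
# surviving routes, limited by each route's spare (unlit) capacity.
def divert(routes, failed, spare):
    # routes: {route_name: [wavelengths]}; spare: free slots per route
    stranded = routes.pop(failed, [])
    moved, lost = [], []
    for wl in stranded:
        # find any surviving route with a free slot
        dest = next((r for r in routes if spare[r] > 0), None)
        if dest is None:
            lost.append(wl)          # cable is full: that traffic stays down
        else:
            routes[dest].append(wl)
            spare[dest] -= 1
            moved.append((wl, dest))
    return moved, lost

# Provisioning as in the comment: A, B, C west; Z, Y, X east.
routes = {'west': ['A', 'B', 'C'], 'east': ['Z', 'Y', 'X']}
spare = {'west': 0, 'east': 2}       # only two unlit slots left on east

moved, lost = divert(routes, 'west', spare)
print(moved)   # [('A', 'east'), ('B', 'east')]
print(lost)    # ['C'] - once the cable fills up, diversion stops helping
```

The `lost` list is the "once your cable fills up" case from the comment: more fibre or more (expensive) optics is the only way to recover it.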
And sometimes you just get surprised - like the big Taiwan earthquake last year that took out N-1 of the undersea cables between northern and southern Asia, which almost all go between Taiwan and the Philippines since that's what the ocean floor shape makes you do. They were spread out far enough to avoid problems with ship-anchors, but the quake was over a wide area. And there was a quake in the Med a couple of years ago that took out more than two cables as well.