Communicating Even When the Network Is Down 115
coondoggie writes to mention a NetworkWorld article covering efforts to maintain network connectivity even when the network has holes. Building on the needs of the military, the end goal is to create a service which will route around network trouble spots and maintain connectivity for users. From the article: "Researchers at BBN Technologies, of Cambridge, Mass., have begun the second phase of a DTN project, funded by $8.7 million from the Department of Defense's Defense Advanced Research Projects Agency (DARPA). Earlier this year, the researchers simulated a 20-node DTN. With each link available just 20% of the time, the network was able to deliver 100% of the packets transmitted." The article is on five small pages, with no option to see a linkable, printable version.
Wait a minute... (Score:5, Insightful)
Zonk... (Score:3, Insightful)
If you want to make a difference, make a stand, stop linking to sites like these. Send them a quick letter saying you'd be happy to send X thousand happy clickers their way if they'd give a single page, printable version. With their "Slashdot it" link at the bottom of the page, they obviously care.
I can get to a printable version... (Score:2, Insightful)
Yea, except for maybe the link at the bottom of the article that says "Print".
What, AGAIN? (Score:1, Insightful)
US taxpayers already funded this project back in the '70s and '80s. This was the goal of the original ARPANET.
Or maybe BBN is admitting failure, which, in the world of military research contracting, is code for "so you should give us another 8-10 million dollars to do the project again."
and again. and again.
sheesh!
Re:What, AGAIN? (Score:1, Insightful)
-Redundancy? Too expensive! CEOs need Porsches more than you need a second path to slashdot!
-Bandwidth? Bah! We can sucker consumers into buying packages with "up to" 500Mbps speeds, and then only actually provide 128kbit while they're locked into a 50 year contract!
-Best Path Routing? Our routers are the best in the business! And if you don't want to be routed to our customers by way of Kazakhstan, you'll pay up!
The old saying that the internet treats censorship as damage and routes around it might have been true once, but now I'm waiting to see what happens to that pedo site, because once the pedos are gone, the vigilantes will move on to the next-despicable target, and then the next, and then the next...
There is a printable version! (Score:3, Insightful)
Reliable networks with malicious components (Score:3, Insightful)
The original internet design carried the naive assumption that all the devices on the net could be trusted -- all the devices assumed the validity of all control data, responses to protocols, etc. In the original model, devices had two primary states -- "unavailable" and "available" -- where "unavailable" might cover both damaged and overloaded components (a slightly more sophisticated version assesses capacity or latency as gradations between the binary unavailable/available dichotomy). In this one-dimensional two-state model, disruption tolerance means routing around "unavailable" or overloaded components.
Yet the rising threat is from malicious entities that want to subvert the network's functioning, not just disable it. Spam, phishing, click fraud, and extortion depend on twisting a functioning network, not just poking holes in the network -- all the parts remain "available" but their data and responses become deceptive. Thus future fault-tolerant networks will need to distinguish between trustworthy and untrustworthy components. This suggests employing techniques such as cryptographic signatures, polling systems, blacklisting, FOAF, firmware integrity checks, and device-to-device secret questions.
Designing a more robust internet is a laudable task but we need to spend more effort on securing against the true threat of untrustworthy components rather than unavailable components.
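To make the "trustworthy vs. untrustworthy" distinction concrete, here is a minimal sketch of one of the techniques the parent names, cryptographic signatures on control data. The node names and shared keys are purely illustrative, and a real deployment would use public-key signatures and key distribution rather than a hard-coded table:

```python
import hmac
import hashlib

# Hypothetical pre-shared keys for known peers (illustrative only).
PEER_KEYS = {"node-a": b"secret-key-a", "node-b": b"secret-key-b"}

def sign(node_id: str, payload: bytes) -> bytes:
    """Attach an HMAC-SHA256 tag so receivers can verify the sender."""
    return hmac.new(PEER_KEYS[node_id], payload, hashlib.sha256).digest()

def verify(node_id: str, payload: bytes, tag: bytes) -> bool:
    """Reject control data from unknown or tampered-with peers."""
    if node_id not in PEER_KEYS:
        return False  # untrusted component: not merely "unavailable"
    expected = hmac.new(PEER_KEYS[node_id], payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg = b"route-update: prefer link 7"
tag = sign("node-a", msg)
assert verify("node-a", msg, tag)                                # authentic
assert not verify("node-a", b"route-update: prefer link 666", tag)  # forged
```

The point is the shift in failure model: a forged routing update comes from a component that is fully "available" in the old sense, so only an integrity check like this can flag it as a fault to route around.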
Re:This is not simply OSPF, this is a new layer 3 (Score:3, Insightful)
The OSI model and network researchers in general recognize that reliable transport facilities can easily be built on top of unreliable "best-effort" communication networks, whereas it's nigh impossible to create lightweight best-effort services on top of a store-and-forward network. Since both kinds of applications exist, those that need reliable transport and those that need speed, it only makes sense to provide an underlying fast and lightweight network which doesn't provide, and isn't expected to provide, 100% reliability.
Finally, in practice, it actually turns out to be ridiculous to expect 100% reliability from anything, particularly a low-level networking scheme, since in the real world no network is 100% reliable. Life can get very interesting indeed when you're supposed to rely on 100% packet delivery and one of your packets never arrives.
The real problem IMO when dealing with wireless networks is that so many developers try to shoehorn existing land-line applications and methodologies into the wireless world. There's a big difference between a network with an avg latency of 80ms, standard deviation of 2ms and 0.3% packet loss compared to a network with an avg latency of 500ms, a non-standard deviation pattern ranging between 200ms and 6 seconds and 20% packet loss. And that's completely ignoring issues related to moving between coverage zones and maintaining proper routing.
Basically, TCP, FTP, and many of their friends can wind up being very bad deals in such an environment. And things get even *more* interesting when someone tries to "fix" the network to work well with them... (by, for instance, blocking up groups of packets and waiting for a certain data-size to accumulate before sending.)
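A back-of-the-envelope calculation shows why the numbers in the grandparent post matter so much. Assuming independent packet loss with probability p and a simple stop-and-wait sender (an assumption, not a model of real TCP), each packet needs a geometric number of transmissions, 1 / (1 - p) on average:

```python
def expected_sends(loss: float) -> float:
    # With independent loss probability p, a stop-and-wait sender
    # transmits each packet a geometric number of times: E = 1 / (1 - p).
    return 1.0 / (1.0 - loss)

# Wired-ish link: 80 ms RTT, 0.3% loss.
wired = expected_sends(0.003) * 80
# Lossy wireless link: 500 ms RTT, 20% loss.
wireless = expected_sends(0.20) * 500

print(round(wired, 1))     # ~80.2 ms per packet
print(round(wireless, 1))  # 625.0 ms per packet
```

Even before accounting for the 200 ms to 6 s latency swings the parent describes (which wreck RTT estimators and trigger spurious retransmits), the lossy link is paying a retransmission tax on a fifth of its traffic.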
Re:Yea, except... (Score:1, Insightful)
Posted AC because I have worked on this at an unmentioned university.
Re:This time Al Gore is doing it.... (Score:3, Insightful)
If you have two networks that are only intermittently connected, normal routing will drop packets when the connection is down. DTNs will allow the packets to be held until the connection is up.
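The hold-until-connected behavior can be sketched in a few lines. This is a toy illustration of the store-and-forward idea, not the BBN implementation; the class and method names are made up:

```python
from collections import deque

class DTNLink:
    """Toy disruption-tolerant link: packets queue while the link is
    down and drain in order when it comes back up."""

    def __init__(self):
        self.up = False
        self.queue = deque()
        self.delivered = []

    def send(self, packet):
        if self.up:
            self.delivered.append(packet)
        else:
            self.queue.append(packet)  # hold instead of dropping

    def set_up(self, up: bool):
        self.up = up
        # On reconnect, flush everything held during the outage.
        while self.up and self.queue:
            self.delivered.append(self.queue.popleft())

link = DTNLink()
link.send("pkt1")
link.send("pkt2")        # link down: both packets are held, not dropped
link.set_up(True)        # connection restored: queue drains
link.send("pkt3")
print(link.delivered)    # ['pkt1', 'pkt2', 'pkt3']
```

Contrast with normal IP routing, where "pkt1" and "pkt2" would simply have been dropped while the link was down.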