Communicating Even When the Network Is Down

coondoggie writes to mention a NetworkWorld article covering delay-tolerant networking (DTN), an effort to keep traffic flowing even when the network has holes. Built around the needs of the military, the end goal is a service that routes around network trouble spots and maintains connectivity for users. From the article: "Researchers at BBN Technologies, of Cambridge, Mass., have begun the second phase of a DTN project, funded by $8.7 million from the Department of Defense's Defense Advanced Research Projects Agency (DARPA). Earlier this year, the researchers simulated a 20-node DTN. With each link available just 20% of the time, the network was able to deliver 100% of the packets transmitted." The article is on five small pages, with no option to see a linkable, printable version.
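
As a rough illustration of that result, here is a toy simulation in the same spirit -- only the 20-node count and the 20% link availability come from the article; the ring topology, the epidemic store-and-forward scheme, and every other parameter are invented for the sketch:

    import random

    # Toy DTN simulation (illustrative only -- not BBN's experiment).
    # 20 nodes in a ring; each link is up only 20% of the time, yet a
    # store-and-forward scheme still delivers every message, because
    # nodes hold copies until a link to a neighbor becomes available.
    N = 20            # node count, as in the article
    LINK_UP_P = 0.2   # each link available 20% of the time
    random.seed(1)
    links = [(i, (i + 1) % N) for i in range(N)]   # assumed ring topology

    def deliver(src, dst, max_steps=500):
        """Return the step at which dst first holds a copy, or -1."""
        holders = {src}                    # nodes currently storing the message
        for step in range(1, max_steps + 1):
            for a, b in links:
                if random.random() < LINK_UP_P and (a in holders or b in holders):
                    holders |= {a, b}      # link is up: pass the copy along
            if dst in holders:
                return step
        return -1

    results = [deliver(0, 10) for _ in range(100)]
    print("delivered:", sum(r > 0 for r in results), "/ 100")

Storage substitutes for connectivity: delivery approaches 100% even though any single link is down most of the time.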
Comments Filter:
  • Wait a minute... (Score:5, Insightful)

    by J05H ( 5625 ) on Thursday November 16, 2006 @07:17PM (#16877662)
    Wasn't that the point of the original ARPANET? To route around broken parts of the network? BBN was involved in that, too. What, have they been double-billing the DoD this whole time?
  • Zonk... (Score:3, Insightful)

    by Anonymous Coward on Thursday November 16, 2006 @07:17PM (#16877664)
    Baby, darling. I appreciate the warning, but you do realize that, as a janitor at Slashdot, you have a decent amount of power and clout in the nerd world. Even though you're condemning their actions with your comment, you're promoting their site, rewarding their annoying practices with extra ad revenue.

    If you want to make a difference, make a stand: stop linking to sites like these. Send them a quick letter saying you'd be happy to send X thousand happy clickers their way if they'd offer a single-page, printable version. With their "Slashdot it" link at the bottom of the page, they obviously care.
  • by quincunx55555 ( 969721 ) on Thursday November 16, 2006 @07:17PM (#16877666)
    The article is on five small pages, with no option to see a linkable, printable version.

    Yea, except for maybe the link at the bottom of the article that says "Print".
  • What, AGAIN? (Score:1, Insightful)

    by stanwirth ( 621074 ) on Thursday November 16, 2006 @07:18PM (#16877676)
    "Researchers at BBN Technologies, of Cambridge, Mass., have begun the second phase of a DTN project, funded by $8.7 million from the Department of Defense's Defense Advanced Research Projects Agency (DARPA)

    The US taxpayer already fund edthis project back in the 70's and 80's. This was the goal of the original arpanet.

    Or maybe BBN is admitting failure, which, in the world of military research contracting is code for "so you should give us another 8-10 million dollars to do the project again."

    and again. and again.

    sheesh!

  • Re:What, AGAIN? (Score:1, Insightful)

    by Anonymous Coward on Thursday November 16, 2006 @07:32PM (#16877816)
    ARPANET was fairly decent at it. Then capitalism got involved.

    -Redundancy? Too expensive! CEOs need Porsches more than you need a second path to slashdot!
    -Bandwidth? Bah! We can sucker consumers into buying packages with "up to" 500Mbps speeds, and then only actually provide 128kbit while they're locked into a 50 year contract!
    -Best Path Routing? Our routers are the best in the business! And if you don't want to be routed to our customers by way of Kazakhstan, you'll pay up!

    The old saying that the internet treats censorship as damage and routes around it might have been true once; now I'm waiting to see what happens to that pedo site, because once the pedos are gone, the vigilantes will move on to the next-despicable target, and then the next, and then the next...
  • by quincunx55555 ( 969721 ) on Thursday November 16, 2006 @07:49PM (#16878002)
    You must be just as blind as Zonk. The link to the print version is right next to the "Slashdot it" link!
  • by G4from128k ( 686170 ) on Thursday November 16, 2006 @08:14PM (#16878258)
    Although this research is nice, it does not address the worst vulnerabilities of the current internet. Botnets, ARP poisoning, DNS poisoning, and pwned routers seem to pose a more dangerous risk than merely unreliable components. Cyberterrorism and criminal exploitation of the internet mean subverting the system rather than just breaking pieces of it.

    The original internet design carried the naive assumption that all the devices on the net could be trusted -- all the devices assumed the validity of all control data, responses to protocols, etc. In the original model, devices had two primary states -- "unavailable" and "available" -- where "unavailable" might cover both damaged and overloaded components (a slightly more sophisticated version assesses capacity or latency as gradations between the binary unavailable/available dichotomy). In this one-dimensional, two-state model, disruption tolerance means routing around "unavailable" or overloaded components.

    Yet the rising threat is from malicious entities that want to subvert the network's functioning, not just disable it. Spam, phishing, click fraud, and extortion depend on twisting a functioning network, not just poking holes in it -- all the parts remain "available" but their data and responses become deceptive. Thus future fault-tolerant networks will need to distinguish between trustworthy and untrustworthy components. This suggests employing techniques such as cryptographic signatures (sketched below), polling systems, blacklisting, FOAF, firmware integrity checks, and device-to-device secret questions.

    Designing a more robust internet is a laudable task, but we need to spend more effort securing against the true threat of untrustworthy components rather than merely unavailable ones.
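
    As a toy version of the signature idea, the sketch below tags control messages with Python's standard hmac module; the message format and the shared key are hypothetical, and a real deployment would want per-device or public-key credentials rather than one shared secret:

        import hashlib
        import hmac

        SHARED_KEY = b"not-a-real-key"   # hypothetical pre-shared secret

        def sign(update: bytes) -> bytes:
            """Tag a control message so peers can verify its origin."""
            return hmac.new(SHARED_KEY, update, hashlib.sha256).digest()

        def accept(update: bytes, tag: bytes) -> bool:
            # compare_digest runs in constant time, so a subverted node
            # can't forge tags by timing the comparison.
            return hmac.compare_digest(sign(update), tag)

        update = b"route 10.0.0.0/8 via node17"   # invented routing update
        tag = sign(update)
        print(accept(update, tag))                        # True: trustworthy
        print(accept(b"route 10.0.0.0/8 via evil", tag))  # False: reject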
  • by ebyrob ( 165903 ) on Thursday November 16, 2006 @09:26PM (#16879012)
    The problem is, discarding extraneous packets is actually a VERY GOOD THING when it comes to the internet. Several store-and-forward systems pre-dated the current TCP/IP stack, but guess what: they weren't as efficient in terms of required hardware resources or latency. This is because in a store-and-forward network, certain problems (like network cards going nuts and spewing tons of garbage) can cause lots and lots of data to accumulate in the network, and then you have to wait for every single packet to move on before you get to the new and relevant data.

    The OSI model and network researchers in general recognize that reliable transport facilities can easily be built on top of unreliable "best-effort" communication networks, whereas it's nigh impossible to create lightweight best-effort services on top of a store-and-forward network. Since both kinds of applications exist (those that need reliable transport and those that need speed), it only makes sense to provide an underlying fast, lightweight network which doesn't provide, and isn't expected to provide, 100% reliability (a toy sketch of this layering follows at the end of this comment).

    Finally, in practice it turns out to be ridiculous to expect 100% reliability from anything, particularly a low-level networking scheme, since in the real world no network is 100% reliable. Life can get very interesting indeed when you're supposed to rely on 100% packet delivery and one of your packets never arrives.

    The real problem, IMO, when dealing with wireless networks is that so many developers try to shoehorn existing land-line applications and methodologies into the wireless world. There's a big difference between a network with an average latency of 80ms, a standard deviation of 2ms, and 0.3% packet loss, and a network with an average latency of 500ms, erratic latency swings anywhere from 200ms to 6 seconds, and 20% packet loss. And that's completely ignoring issues related to moving between coverage zones and maintaining proper routing.

    Basically, TCP, FTP, and many of their friends can wind up being very bad deals in such an environment. And things get even *more* interesting when someone tries to "fix" the network to work well with them... (by, for instance, batching groups of packets and waiting for a certain data size to accumulate before sending).
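
    To make the "reliable on top of unreliable" layering concrete, here's a toy stop-and-wait ARQ in Python over a simulated lossy channel; the 20% loss rate echoes the wireless numbers above, and the framing is entirely invented:

        import random

        random.seed(7)
        LOSS = 0.2   # illustrative loss rate

        def channel(pkt):
            """Best-effort hop: silently drops ~20% of whatever it carries."""
            return None if random.random() < LOSS else pkt

        def transfer(chunks):
            delivered, expected, sends = [], 0, 0
            for seq, payload in enumerate(chunks):
                acked = False
                while not acked:                       # retransmit until ACKed
                    sends += 1
                    pkt = channel((seq, payload))      # DATA may be lost
                    if pkt is not None and pkt[0] == expected:
                        delivered.append(pkt[1])       # new, in-order data
                        expected += 1
                    ack = channel(expected - 1)        # ACK may be lost too
                    acked = (ack == seq)
            return delivered, sends

        chunks = [("chunk%d" % i).encode() for i in range(10)]
        out, sends = transfer(chunks)
        print(out == chunks, "total sends:", sends)    # True, with retries

    Note how retransmissions push "total sends" well past 10 -- exactly the overhead that gets painful once loss hits 20% and timeouts stretch to seconds.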
  • Re:Yea, except... (Score:1, Insightful)

    by Anonymous Coward on Thursday November 16, 2006 @10:37PM (#16879548)
    Yes, a Delay Tolerant Network functions similarly to SMTP, but as pointed out above, the actual destination address is not resolved before sending the message. More importantly, DTN messages can be sent even when no simultaneous connection is possible between source and destination. The assumption is that different portions of the route will be up from time to time and that the message will be forwarded along the route whenever possible. Today's Internet can't do that, because it generally doesn't use a store-and-forward approach to transport-layer messaging.

    Posted AC because I have worked on this at an unmentioned university.
  • by strstrep ( 879828 ) on Thursday November 16, 2006 @11:39PM (#16879974)
    No. Normal routing works through space. Packets move from node to node, avoiding nodes and links that are down. DTNs can route through space and time, delaying packets until they can be routed further along.

    If you have two networks that are only intermittently connected, normal routing will drop packets when the connection is down. DTNs will allow the packets to be held until the connection is up.
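
    A toy contrast of the two behaviors, with all names invented:

        from collections import deque

        class ConventionalRouter:
            def forward(self, pkt, link_up):
                return [pkt] if link_up else []    # down link: packet lost

        class DTNRouter:
            def __init__(self):
                self.store = deque()               # custody of queued bundles
            def forward(self, pkt, link_up):
                self.store.append(pkt)
                if not link_up:
                    return []                      # hold; route through time
                sent = list(self.store)            # link up: flush the queue
                self.store.clear()
                return sent

        updown = [False, False, True]              # link up only on 3rd chance
        for router in (ConventionalRouter(), DTNRouter()):
            got = [p for up in updown for p in router.forward("pkt", up)]
            print(type(router).__name__, "delivered", len(got), "of 3")
        # ConventionalRouter delivered 1 of 3; DTNRouter delivered 3 of 3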
