The Internet

DARPA Aims to Redo the Internet Protocol 389

Xaleth Nuada writes "The Defense Advanced Research Projects Agency (DARPA) is looking to redo the entire Internet Protocol. With the DoD increasingly adopting network-centric warfare, the shortcomings in the current IP have become resoundingly clear. Everything works fine for static hardwired networks, but not for dynamic wireless ones. The benefits for your average geek? How about REAL wireless networking? Easier network set-up? Better wireless security protocols? Increased reliability in sending information?" Don't forget massive incompatibility and upgrade hassles. :)
This discussion has been archived. No new comments can be posted.

  • by Space cowboy ( 13680 ) * on Friday March 12, 2004 @12:58PM (#8544183) Journal
    Given the scale of the re-work proposals (replacing the von Neumann architecture...), I'd be surprised if there wasn't some effort made to embed snooping and tracing into all packets transmitted. This *is* the DoD after all!

    On the other hand, given how slowly IPv6 is making its way into the wider world, we probably don't have too much to worry about for the time being!

    Simon
    • by Anonymous Coward on Friday March 12, 2004 @01:02PM (#8544250)
      You're right. It's a good thing they weren't involved in setting up our current system.

      Seriously, if they are going to rework it they better do something about the SPAM.
      • Protocols vs Spam (Score:3, Informative)

        by RAMMS+EIN ( 578166 )
        Actually, the cause of spam largely lies in faulty protocols. SMTP doesn't verify who you are, so spammers are very difficult to trace. If this were changed, I think there would be a lot fewer spammers.
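        That gap sketches out directly in code. The handler below is hypothetical, standing in for a real SMTP server (which speaks this dialogue over TCP port 25); it illustrates only the point that the protocol takes the sender's claim at face value:

```python
# Toy sketch: classic SMTP accepts whatever MAIL FROM the client
# claims; nothing in the protocol verifies the sender's identity.
# This handler is hypothetical, not a real SMTP implementation.

def smtp_server(line: str) -> str:
    """Accept any syntactically valid MAIL FROM -- no identity check."""
    if line.startswith("MAIL FROM:<") and line.endswith(">"):
        return "250 OK"  # the sender's claim is taken at face value
    return "500 Syntax error"  # (toy server: only MAIL FROM handled)

assert smtp_server("MAIL FROM:<anyone@example.com>") == "250 OK"
# A forged sender is accepted just the same -- hence untraceable spam:
assert smtp_server("MAIL FROM:<forged@example.mil>") == "250 OK"
```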
      • IP's job is not to know anything about the data it's transmitting. IP specifically disavows any knowledge of what it's carrying, in fact, as its ONLY concern is moving datagrams from one place to another.

        That's the beauty of an n-tier system of protocols. One protocol says "okay, I do this and nothing else - you want something else, it's your responsibility to do it, not mine". For example, IP doesn't care if a datagram gets lost. In fact, IP doesn't even require an ICMP message to go back in the event
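        That separation of concerns is visible in the header format itself. A minimal parser (field layout per the IPv4 header; the addresses and values below are made up for illustration) needs nothing beyond the header, and the payload stays an opaque blob:

```python
import struct

def parse_ipv4_header(datagram: bytes) -> dict:
    """Extract the fields IP itself cares about; the payload is opaque.

    IP hands the payload to whatever protocol number 'protocol' names
    (6 = TCP, 17 = UDP, ...) without ever interpreting it.
    """
    (ver_ihl, _tos, total_len, _ident, _frag,
     ttl, proto, _cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s",
                                                   datagram[:20])
    hlen = (ver_ihl & 0x0F) * 4
    return {
        "version": ver_ihl >> 4,
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
        "payload": datagram[hlen:],  # never inspected by IP
    }

# Hand-built example header: version 4, IHL 5, TTL 64, protocol 17 (UDP)
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 24, 0, 0, 64, 17, 0,
                     bytes([192, 168, 0, 1]), bytes([10, 0, 0, 2]))
pkt = parse_ipv4_header(header + b"data")
```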

    • by spreadthememe ( 443755 ) on Friday March 12, 2004 @01:05PM (#8544299)
      It seems more likely that DARPA would create a protocol free from built-in snooping for fear that such a feature could be used by the enemy.

      While governments in general are guided by the will-to-power, militaries (at least the US military) are fairly well driven by readiness and victory. It doesn't seem likely that they would create such a vulnerable technology.

    • by Dr. Bent ( 533421 ) <<ben> <at> <int.com>> on Friday March 12, 2004 @01:06PM (#8544318) Homepage
      I'd be surprised if there wasn't some effort made to embed snooping and tracing into all packets transmitted.

      If the purpose of this redesign is to better allow the armed forces to communicate on the battlefield, I highly doubt that they will embed snooping and tracing into the protocol. The military takes great pains to ensure that their communications are kept secure, and having a secret backdoor in their entire communication system (no matter who controls it) is not something they would tolerate.

    • by Tassach ( 137772 ) on Friday March 12, 2004 @01:08PM (#8544336)
      Wow, a relevant first post

      It is in the DoD's self interest to make a communications protocol be as resilient and secure as humanly possible. Secure and reliable communications are the cornerstone of the modern military. A built-in insecurity in a comm system can and will be exploited by an adversary just as readily (if not more so) as an unintentional one.

    • by beacher ( 82033 ) on Friday March 12, 2004 @01:08PM (#8544338) Homepage
      Heh.. the article is titled "DARPA Takes aim at IT Sacred Cows"... Love it. Are they rewriting the stack so that India can't connect? Is this the answer to outsourcing?
    • by temojen ( 678985 ) on Friday March 12, 2004 @01:28PM (#8544564) Journal

      It sounds to me more like some general had a brief introduction to computing theory, but didn't relate it to any real current technology.

      The alternative to von Neumann (code and data in the same memory) is to have code and data in separate memory areas. This makes it very difficult to make computers where the code can change. Sure, there are no buffer overflows, but there are no security patches either. It might be fine for embedded devices, but I'll not have it on my desktop. The page (or segment) executable flag of more modern memory management units does the job fine, without all the hassle.

      The OSI model isn't actually used anywhere except as a benchmark to compare proposed network models against; it's way too complex.

      He talks about replacing packet switching so that messages are delivered on time & with certainty. Presumably he means some kind of virtual circuit switching, but he also talks a lot about constantly shifting ad-hoc networks. Circuit switching & ad-hoc networks don't mix well: you have to know what the path is going to be before you can reserve it. It's probably better to just turn on the QoS and AH already implemented in IPv6.

      • Post von Neumann (Score:5, Interesting)

        by ka9dgx ( 72702 ) * on Friday March 12, 2004 @02:20PM (#8545178) Homepage Journal
        Yet another post-von Neumann architecture is to have a computing fabric. Imagine a grid of 1024x1024 single-bit processors, each with its own state table (program) and inputs from each of its neighbors, plus its own previous state. With 32 bits of RAM per cell, you can look up the new state and output it. A grid of this nature, operating at a conservative 1GHz, could do amazing amounts of computation. Computation would become IO-bound for quite a few tasks that bog down even the fastest Intel servers.

        Map the cells' state tables to appear as conventional RAM to the host, and reprogramming becomes as easy as a memory write. Bad cell? Just route around it. The fact that it's all state driven allows you to build an automated rerouter almost trivially.

        Post-von Neumann computers are going to be wicked fast, if they can build IO to keep up with them.

        --Mike--
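        The fabric idea can be downsized into a runnable toy (grid size and update rule invented for illustration; the post imagines 1024x1024 cells): each 1-bit cell looks its next state up in a 32-entry table indexed by itself and its four neighbors, which is exactly the 32 bits of state RAM per cell described above:

```python
# Toy "computing fabric": each 1-bit cell updates from a 32-entry
# lookup table (5-bit index: self + 4 neighbors), i.e. 32 bits of
# "program RAM" per cell. A 4x4 grid stands in for 1024x1024.

def step(grid, table):
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            idx = (grid[y][x] << 4                 # own previous state
                   | grid[(y - 1) % n][x] << 3     # north (edges wrap)
                   | grid[(y + 1) % n][x] << 2     # south
                   | grid[y][(x - 1) % n] << 1     # west
                   | grid[y][(x + 1) % n])         # east
            new[y][x] = table[idx]                 # "RAM lookup"
    return new

# Reprogramming = rewriting the table. Here: parity (XOR) of the 5 inputs.
parity = [bin(i).count("1") & 1 for i in range(32)]
grid = [[0] * 4 for _ in range(4)]
grid[1][2] = 1                  # a single live cell...
grid = step(grid, parity)       # ...spreads to its 4 neighbors
```

        Routing around a bad cell would then amount to editing neighboring tables; since all behavior is state-driven, that is the "automated rerouter" point above.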

    • It's interesting to me that DoD is targeted in this way (i.e., we can't trust DoD, DoD = Big Brother, etc). This is a little OT, but this is just wrongheadedness. Replace DoD with perhaps the CIA, or the administration, or the military-industrial complex (which is the industry that feeds off the DoD teat) and I'd agree. But DoD, and the services especially, are the last place you'd find the neo-fascist attitudes that lead to a big brother world. This is of course my opinion, but having worked in many areas of DoD
  • arf (Score:5, Funny)

    by Renraku ( 518261 ) on Friday March 12, 2004 @01:00PM (#8544211) Homepage
    "Don't forget massive incompatibility and upgrade hassles."

    I read that as:

    "Don't forget about the sudden explosion of extended-temp jobs flooding the market as the Internet decides to change over..."
  • by Anonymous Coward on Friday March 12, 2004 @01:00PM (#8544213)
    Upgraded to IPv6. Sigh.
    • IPv7 (Score:5, Funny)

      by Valdrax ( 32670 ) on Friday March 12, 2004 @01:22PM (#8544505)
      Yes, but the serious question is whether or not this so-called IPv7 will incorporate the Schumann resonance, tap into the collective unconsciousness of mankind, spontaneously create a little girl complete with family, and allow its creator to become some sort of god-like revenant.

      Maybe I'm just watching too much anime...
      • Re:IPv7 (Score:4, Interesting)

        by Wyzard ( 110714 ) on Friday March 12, 2004 @02:20PM (#8545177) Homepage

        The scary thing is, the underlying concept there is actually plausible. Think about the similarity between human social connections and the connections between neurons in the brain. You're not aware of being part of a collective consciousness called humanity, but the individual cells in your head aren't aware of being part of a larger consciousness either.

        You have to wonder how many things we consider "miracles" or extreme luck could really be actions of a larger entity which can influence groups of people as effortlessly as you can flex your fingers.

  • Protocol 7? (Score:4, Funny)

    by Anonymous Coward on Friday March 12, 2004 @01:00PM (#8544215)
    They'd best be careful, or this "Protocol 7" will inadvertently cause data from dead people to leak to the Internet...
  • Don't forget massive incompatibility and upgrade hassles. :)

    Yeah man, but massive incompatibility and upgrade hassles are what keep some of us employed! GO DARPA!
    • by peragrin ( 659227 ) on Friday March 12, 2004 @01:10PM (#8544366)
      Ahh, I see you have your shiny MCSE out on the wall as well.

      You know there's this thing called Linux that will make your life easier. :->. Instead of massive incompatibility and upgrade hassles, you get to spend hours compiling it yourself, but it will work.

      tis a joke people get a life
  • by RevDobbs ( 313888 ) on Friday March 12, 2004 @01:01PM (#8544236) Homepage

    And when will this new Internet Protocol be rolled out...

    shortly after IPv6 adoption?

    I don't see Satan reaching for his winter parka just yet...

    • Satan [go.com] is still in Buffalo [sabres.com]. Considering it's about 25 F and snowing there [weather.com], I'll bet he's reaching for his winter parka.
  • Other key benefits (Score:3, Insightful)

    by Anonymous Coward on Friday March 12, 2004 @01:02PM (#8544243)
    Easier activity tracing, easier monitoring, easing censorship of "bad" websites, easier disabling of internet access to undesirables.

    • by LostCluster ( 625375 ) * on Friday March 12, 2004 @01:15PM (#8544419)
      Easier activity tracing, easier monitoring, easing censorship of "bad" websites, easier disabling of internet access to undesirables.

      That gives as much as it takes. If it's harder to be anonymous online, then that also means it's going to be easier to locate and disable the access of spammers and pedophiles.

      Accountability tools are very good things when properly applied. The hard part is making sure they're not abused.
      • While killing spammers and pedophiles may be a good idea, that isn't the military's mission. That is a job for law enforcement. Even though both wear uniforms, carry guns, and have similar organizational structures, the military is VERY different from law enforcement in what it needs to do its job, and who it's going up against.

        The military wants secure and reliable communications, period. From a military standpoint, it might be nice to monitor your adversaries, but not if it means that your adversa

    • by HTH NE1 ( 675604 ) on Friday March 12, 2004 @01:21PM (#8544498)
      easing censorship of "bad" websites

      "[W]e must absolutely have some mechanism for assigning network capabilities to different users...."

      Which is synonymous with "removing network capabilities from".

      They know they want to restrict certain classes of users from being able to produce services and restore the imbalance of controlled producers and restricted consumers.
  • IPv6 (Score:2, Insightful)

    by RAMMS+EIN ( 578166 )
    A new Internet Protocol? Isn't that called IPv6? They put a lot more security features into it that time; if they need more now, why didn't they get it right back then? And what should convince me that they will this time?

    Now, off to RTFA.
    • Re:IPv6 (Score:3, Interesting)

      by RAMMS+EIN ( 578166 )
      ``Now, off to RTFA.''

      or so I thought, but TFAHBS (The Fine Article Has Been Slashdotted). Anyway, some more thoughts:

      The claim seems to be that IP isn't suitable for mobile (ad hoc?) networks. But how can it not be? Basically, the fields that matter are the destination address and the length. I think that those are necessary and sufficient for communication. The source address could also come in handy if you want to hear back if something went wrong. I don't see how this would be suitable for static networks but no
  • by wed128 ( 722152 ) on Friday March 12, 2004 @01:03PM (#8544259)
    what does it mean by REAL Wireless networking? isn't 802.11 wireless? i'm confused...
    • Isn't 802.11 the "physical" layer of the network? IP is still carried over that.
    • 802.11 is a wireless add-on to IP. What they are talking about here is a protocol that is built with wireless in mind, not an add-on. Dynamically changing where you are connected comes to mind (the signal from this tower/satellite is stronger now), as well as tracking location. Before everyone puts on their tinfoil hats, keep in mind this is the military; they have a legitimate desire to know where their troops are. Which isn't to say that other branches of the government would use it for something differe
      • No, it's not a wireless add-on to IP.

        802.11 is a signaling protocol, and it relates to layers 1 and 2 of the OSI model. IP exists at layer 3.

        As far as 'email' having assured delivery, why would you have to muck with the whole stack to do this? Just write a better email engine and client software.

        The beauty of the OSI model is that you can do whatever the heck you want at any given layer, without having to change the other layers. Each layer has a specific, defined, well known input/output method (
    • by LostCluster ( 625375 ) * on Friday March 12, 2004 @01:27PM (#8544554)
      It's time to go back to basic networking class...

      The OSI Networking Model [freesoft.org] is a 7-layer system in which layers can be used interchangeably and run on top of each other... for example, HTTP specifies that it use TCP, which wraps around IP over any physical protocol. It doesn't care if you're using WiFi or a hardwired connection.

      So, what this is saying is that IPv4, and even IPv6 are protocols that were written with wires and not wireless in mind. There are tweaks that can be made to the next version of the Internet Protocol and maybe even TCP and UDP to make them work better when on wireless without giving too much up when used on a wired physical link. This is the process of figuring out what changes should be made for next time.
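      The layering argument sketches out directly in code (toy string headers for illustration, not real wire formats): each layer wraps the payload from the layer above without inspecting it, so swapping the bottom layer leaves everything above byte-identical:

```python
# Toy encapsulation: each layer wraps the layer above and never looks
# inside it. The "headers" are simplified stand-ins, not wire formats.

def http_request(path):                                  # application
    return f"GET {path} HTTP/1.1\r\n\r\n".encode()

def tcp_segment(payload, sport=49152, dport=80):         # transport
    return f"TCP {sport}>{dport}|".encode() + payload

def ip_packet(payload, src="10.0.0.1", dst="10.0.0.2"):  # network
    return f"IP {src}>{dst}|".encode() + payload

def frame(payload, link="wifi"):                         # link/physical
    return f"{link}|".encode() + payload

msg = http_request("/index.html")
over_wifi = frame(ip_packet(tcp_segment(msg)), link="wifi")
over_wire = frame(ip_packet(tcp_segment(msg)), link="ethernet")

# Everything above the link layer is identical on both media:
assert over_wifi.split(b"|", 1)[1] == over_wire.split(b"|", 1)[1]
```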
    • They could mean ad hoc wireless networking. If they are looking for something that could help them communicate in the field, ad hoc wireless networking has great applications for them--basically, an ad hoc network does not have predefined hosts, access points, or what have you. Every node in the network communicates with the nodes around it (they could be a mixture of some wireless nodes and some wired nodes). There is no predefined leader, but the nodes themselves pick which nodes will act as temporary
    • Try seamlessly switching between access points while maintaining a connection to another server. You can't, because with IP you are assigned an address based on your upstream provider, and it can't float from network to network. If you are using an application protocol like HTTP you don't notice that much, because you open a new connection every time you request a page. But if you are using something like FTP or streaming video, you drop the connection when switching access points and thus IP addresses.

      I can see a
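      Why the connection drops can be sketched with a toy demultiplexing table (made-up addresses): TCP identifies a connection by the exact 4-tuple, so a segment arriving from the new address matches no known connection:

```python
# TCP identifies a connection by (src_ip, src_port, dst_ip, dst_port).
# Roaming to a new access point changes src_ip, so the old state is
# unreachable. Addresses below are illustrative.

connections = {}  # 4-tuple -> connection state

def open_conn(src_ip, src_port, dst_ip, dst_port):
    connections[(src_ip, src_port, dst_ip, dst_port)] = "ESTABLISHED"

def lookup(src_ip, src_port, dst_ip, dst_port):
    # Exact-match demultiplexing, as a TCP stack does
    return connections.get((src_ip, src_port, dst_ip, dst_port))

open_conn("10.0.0.5", 50000, "198.51.100.7", 21)            # FTP via AP 1
via_old_ap = lookup("10.0.0.5", 50000, "198.51.100.7", 21)  # "ESTABLISHED"
via_new_ap = lookup("10.0.1.9", 50000, "198.51.100.7", 21)  # roamed: None
```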
  • by auburnate ( 755235 ) on Friday March 12, 2004 @01:03PM (#8544260)
    DARPA did help lay the foundations for the Internet. They may be in a good position to bring positive innovation to the IP protocol. Just as long as enough of us /.ers can see through any hidden embedded packet sniffing credit card stealing email reading we're watching you protocols, we should be GREAT.
  • by HullBreach ( 607816 ) on Friday March 12, 2004 @01:03PM (#8544268)
    I'm a former Marine myself, and I fondly remember what a nightmare it was just trying to get everyone to have the same crypto loads for existing voice communications hardware. I'm really curious as to how they propose to keep the network secure. On the other hand, the possible benefits are huge. Distributed sensor networks in particular could be revolutionized by this.
  • by Gunfighter ( 1944 ) on Friday March 12, 2004 @01:04PM (#8544275)
    Perhaps they can include, as a side project, a revamp of some of the transport layer protocols. How about something to replace SMTP with a protocol designed to help lessen the widespread proliferation of spam? Perhaps we should all just switch to Jabber and get rid of that whole email thing.

    • by Anonymous Coward on Friday March 12, 2004 @01:09PM (#8544346)
      SMTP is not a transport-layer protocol. TCP and UDP are the most common transport-layer protocols that ride over IP - although many others exist.

      There are certainly some valid arguments for looking at other transport protocols (the lack of mobility features in TCP/UDP, for instance), but SMTP is not one of them since it's an application-layer protocol.
  • by Anonymous Coward on Friday March 12, 2004 @01:04PM (#8544281)
    Let's just all pray the military doesn't call this SKYNET.
  • YAY! (Score:2, Insightful)

    Yay! Sounds like a great idea... get the government involved with solving all the technical problems.

    Watch congress get involved! Watch how the project ends up championed by the "experts" at Microsoft (because they pay the dough and it's the only name the congressdrones know). Watch how the whole project ends up proprietary and billg forces the government to pay $50 per node. Finally.. watch how the whole system ends up unreliable... so we end up with a system that is not free, expensive, and less reliab
  • by blunte ( 183182 ) on Friday March 12, 2004 @01:04PM (#8544285)
    Don't forget massive incompatibility and upgrade hassles. :)

    Yeah, heaven forbid we learn from our previous attempt and start fresh. We should aspire to do like Microsoft - maintain backward compatibility above all other goals. Seems to work for them, right? It certainly makes things more secure...
    • "Yeah, heaven forbid we learn from our previous attempt and start fresh."

      'Starting fresh' is the doom of many a project. When you have a design that basically works, there's a huge amount of carefully-won knowledge inherent in that design which you lose the instant you decide to start again.

      This is probably less true in network design than software projects, but every software project I've worked on where someone decided that it made more sense to 'start fresh' has taken many, many times longer than impro
  • by Jugalator ( 259273 ) on Friday March 12, 2004 @01:05PM (#8544298) Journal
    Well, one of the improvements IPv6 makes is better support for ad-hoc networking. Are they saying we need something even better than that?

    Or are they just talking about IPv6? IPv6 is just that -- Internet Protocol version 6.
  • Article Text (Score:4, Informative)

    by Anonymous Coward on Friday March 12, 2004 @01:05PM (#8544302)
    DARPA takes aim at IT sacred cows

    By Joab Jackson
    GCN Staff

    ANAHEIM, Calif.--Now that the Defense Department is embracing network-driven warfare, it is taking a hard look at radically improving, or discarding altogether, some fundamental computer and network architectures.

    Flaws in the basic building blocks of networking and computer science are hampering reliability, limiting flexibility and creating security vulnerabilities, program managers said this week at the Defense Advanced Research Projects Agency's DARPATech conference.

    Among the IT holy grails that DARPA wants to see revamped are the Internet Protocol, the seven-layer Open Systems Interconnection model--which defines how devices communicate on today's networks--and the von Neumann architecture, the basic design style underpinning almost all computers built today.

    Many military commanders have been slow to adapt IT for critical tasks because they sense the equipment is unreliable, said Col. Tim Gibson. He is a program manager for DARPA's Advanced Technology Office, which is leading efforts to radically redefine computer architecture.

    "You go to Wal-Mart and buy a telephone for less than $10 and you expect it to work," Gibson said. Yet people usually do not expect the same of their computers. "We don't expect computers to work, we expect them to have a problem."

    "If a commander expects a system to have a problem, then how could they rely upon it?" Gibson said.

    Gibson cast some of the blame on the packet-based nature of Internet Protocol, which was not designed for foolproof delivery of messages. The protocol cannot guarantee delivery of e-mail, for instance.

    "The packet network paradigm probably needs to change," Gibson said. "I'm not advocating throwing out the Internet Protocol completely, but we must absolutely have some mechanism for assigning network capabilities to different users and that capability has to scale to large numbers of devices automatically. The commander wants to be able to send a message and have it delivered, completely, accurately and on time."

    Another limitation with the IP approach is the inability to dynamically build networks. The military wants to quickly set up ad hoc networks.

    "Static networks are no good for tomorrow's battlefield, because everything will move around all the time," Gibson said. "What we need is dynamic scalability. Today's networks are stationary and have a static infrastructure that provides service to static end-nodes. Moving the node outside its standard service area requires reconfiguring something. Moving infrastructure always means reconfiguring something."

    As a result, DARPA wants to fund development of new protocols or enhancements to the existing IP that will allow nodes, such as computers, to automatically sign on to networks in their vicinity.

    Another aspect of networking that DARPA wants to revise is the seven-layer OSI stack, long held as the basic foundation for building network protocols.

    The OSI model was not designed for wireless communications devices, said Reggie Brothers, a DARPA program manager.

    "The OSI model served us pretty well for the stable, predictable world of wireline communications," Brothers said. "Mobile networks are nothing like that. They are unpredictable and highly variable. We need to think of different layers of the stack to relate to one another directly, like a mesh, instead of one level up to the next."

    The increased complexity of the network stack would let nodes enter a network quickly and without human intervention, Brothers said.

    The von Neumann architecture will also come under scrutiny from DARPA.

    "It is time to ask the harder questions about the ways of computer architecture we've been using for the past 30 years. Is it time to scrap the von Neumann architecture?" asked Anup Gosh, program officer for the Advanced Technology Office.

    This architecture, which defines the basic essential parts of
  • Hello DOD (Score:3, Insightful)

    by IamGarageGuy 2 ( 687655 ) on Friday March 12, 2004 @01:06PM (#8544310) Journal
    Can somebody try to tell these guys it's a little too late to put the genie back in the bottle. We can't change SMTP to stop spam and they want to change the whole TCP/IP thing. Good luck changing it in the next 30 years.
    • Re:Hello DOD (Score:3, Insightful)

      by Alan Cox ( 27532 )
      It does have "clueless" all over it, from the idea of reliable delivery (little hint - it's provably mathematically impossible even with two-way links) outwards. And the idea of non-packet networks would be fun on a wireless link, to say the least

      Ad-hoc secure networks are an intriguing little problem area and I can see them wanting those to work. You want instant communication between vehicles but you don't want anyone else joining in. Sounds a lot like the mesh-net stuff like locust already does really..

      N
  • TUNNELING! (Score:3, Interesting)

    by mekkab ( 133181 ) * on Friday March 12, 2004 @01:06PM (#8544320) Homepage Journal
    stop complaining- it'll work on the old IP systems via tunneling. Was that really so hard?
  • design goal 1: SNOOPING

    The days of DARPA leading the liberation of humans through information are long gone. With poison like John "Iran-Contra" Poindexter's Total (Big Brother) Information Awareness to their discredit, they're mainly the wedge of the NSA into our lives in the infosphere. Forget "information liberation": your information has been nationalized.
  • Don't forget massive incompatibility and upgrade hassles. :)

    I would imagine the upgrade of civilian equipment would be something like the way they're doing IPv6. Compatibility has been in software for a while now (well, at least BSD and Linux). They're still several years away from upgrading, so I assume that when they do upgrade, if your hardware is older than 5 years, you're fscked. But because it's phased in gradually, how many people are going to actually have problems? Sort of like how USB was

  • by atomly ( 18477 ) on Friday March 12, 2004 @01:08PM (#8544344) Homepage
    just like it has for IPv6.

    People will only upgrade if it's absolutely painless or absolutely necessary, we should've learned this by now. I have friends that still use analog cell phones, just because it's easier not to switch.
  • by jdawson ( 29615 ) * on Friday March 12, 2004 @01:09PM (#8544350)
    DARPA invented the Internet Protocol before, and within a few decades the technology was widely deployed. Unfortunately, this time around, things won't be so easy.

    Before, it was competing against a vacuum. Now, it's competing against ubiquitous IP. They may develop some cool stuff that works on a battlefield, but it will never get widespread usage, commoditization, and economy of scale that IP has. If they come up with new features that work great, somebody will find a way to get similar functionality built on top of good old IP.

    IP isn't perfect, but it's good enough that there's no way to displace it, given its free nature and level of entrenchment.

    • The link is down (Slashdotted, probably) so I haven't read the article. Nonetheless, does DARPA really want to displace IP for the entire Internet, or just for their own purposes? If it's the latter, then it shouldn't be nearly as difficult. It is, after all, the military. I imagine it would be easier to get soldiers to comply with the new standard.
  • Err.. (Score:5, Informative)

    by t0shstah ( 629986 ) on Friday March 12, 2004 @01:13PM (#8544396)
    Gibson cast some of the blame on the packet-based nature of Internet Protocol, which was not designed for foolproof delivery of messages. The protocol cannot guarantee delivery of e-mail, for instance.

    Who is this guy, really? That's not what IP is for - foolproof delivery of packets is handled by connection-oriented protocols like TCP. Sure, it might not get there *right away*, but the flexibility of packet-based routing is something that has made networks as reliable as they are today (despite the huge amount of moaning that people do about them).

    Mind you, as people have pointed out before, IPv6 has been waiting in the wings for a while now, and a military request for change might be the kind of action needed to kick other people into gear.
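    That division of labor can be sketched as a stop-and-wait sender (invented lossy channel and retry count; real TCP is far more elaborate): the datagram layer may silently drop packets, and reliability is layered on top via acknowledgment and retransmission:

```python
# Reliability built ON TOP of an unreliable datagram service, as TCP
# does over IP. The drop pattern and retry limit here are made up.

def lossy_send(packet, delivered, drop_calls=(2,)):
    """Unreliable 'IP-like' hop: silently drops some packets."""
    lossy_send.calls = getattr(lossy_send, "calls", 0) + 1
    if lossy_send.calls in drop_calls:
        return False              # dropped without any error report
    delivered.append(packet)
    return True                   # stands in for the receiver's ACK

def reliable_send(data, delivered, max_tries=5):
    """The transport layer's job: retry each segment until ACKed."""
    for seq, chunk in enumerate(data):
        for _ in range(max_tries):
            if lossy_send((seq, chunk), delivered):
                break
        else:
            raise TimeoutError(f"gave up on segment {seq}")

received = []
reliable_send(["he", "ll", "o!"], received)
# All segments arrive in order despite the dropped transmission:
assert [chunk for _seq, chunk in received] == ["he", "ll", "o!"]
```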
    • Re:Err.. (Score:4, Interesting)

      by Roger Keith Barrett ( 712843 ) on Friday March 12, 2004 @01:25PM (#8544538)
      Obviously the writer of the article and Gibson don't understand how the system works at all... they're with the normal public in thinking that e-mail is being transferred from place to place as some whole document, not understanding the basics of packets or anything in TCP/IP.

      I am not a network engineer... but I am pretty sure that if you wanted to assure the delivery of email you would do it at a HIGH level in the stack, not at the transport level. If they are talking about packets, it has already been done. I am not sure that the Gibson in the article really understands what he wants.

      It's pretty clear they've got the ideas and concepts all screwed up here.
  • by HTH NE1 ( 675604 ) on Friday March 12, 2004 @01:15PM (#8544428)
    we must absolutely have some mechanism for assigning network capabilities to different users

    Sorry, but the network capability of running a web server hasn't been assigned to you. You are blocked at the protocol layer.

    Sounds like they don't want the Internet to be a network of ends anymore and control who can do what with the network. Nice experiment, this unrestricted free speech on the Internet, but we've decided we don't want you to have that. Be consumers, not producers.
    • by Dun Malg ( 230075 ) on Friday March 12, 2004 @01:47PM (#8544777) Homepage
      Sorry, but the network capability of running a web server hasn't been assigned to you. You are blocked at the protocol layer. Sounds like they don't want the Internet to be a network of ends anymore and control who can do what with the network. Nice experiment, this unrestricted free speech on the Internet, but we've decided we don't want you to have that. Be consumers, not producers.

      Sheesh, RTFA. They're talking about a new protocol layer for use by the military. Combat-deployed wireless networks aren't "the Internet".

  • by SparafucileMan ( 544171 ) on Friday March 12, 2004 @01:20PM (#8544482)
    I'm not sure why the von Neumann architecture is such a security problem. I mean, the problem with computers not working isn't how they're built per se--Turing machine, Post machine, hell, use cellular automata--it's that the mathematical theory says "it is impossible to write code (in general) that is guaranteed to be bug free". You could change the von Neumann architecture, sure, but you could just as easily 'write an interpreter' (though with hardware) for the architecture. Either way, if you're writing code, you're going to have bugs.
  • by DarkOx ( 621550 ) on Friday March 12, 2004 @01:22PM (#8544504) Journal
    They blame the packet nature of the network for lots of the problems, but I see no other proposal given. How on earth do you build a network as large as the Internet on a non-packet architecture? I am studying computer science right now at school and, having completed two telecom courses, nobody has ever discussed a connection-oriented technology, or even a connection-oriented concept, that could cope with a network with as many hosts as the Internet. Do any of you in Slashdot land have a clue how they might even start to go about doing this? The other possibility is that it's a new twist on a connectionless network, but how on earth is that possible without some sort of packet architecture to send over it? Otherwise you'd have no way to change paths as conditions change, and changing conditions are UNAVOIDABLE on any network I have ever seen.
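    The adaptability being pointed at here can be sketched with a toy routing recomputation (invented four-node topology; real networks run OSPF-style protocols): packet networks cope with changing conditions because next hops are simply recomputed when a link dies, with no circuit to tear down:

```python
from collections import deque

def next_hops(links, dst):
    """BFS outward from dst: for each node, a neighbor on a shortest path."""
    hop, frontier = {dst: None}, deque([dst])
    while frontier:
        node = frontier.popleft()
        for nbr in links[node]:
            if nbr not in hop:
                hop[nbr] = node       # forward toward dst via 'node'
                frontier.append(nbr)
    return hop

# Made-up square topology: A-B, A-C, B-D, C-D
links = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"}, "D": {"B", "C"}}
assert next_hops(links, "D")["A"] in {"B", "C"}   # two equal-cost routes

# The B-D link fails: remove it and recompute; traffic shifts via C.
links["B"].discard("D")
links["D"].discard("B")
assert next_hops(links, "D")["A"] == "C"
```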
  • Just love.. (Score:3, Insightful)

    by Creepy Crawler ( 680178 ) on Friday March 12, 2004 @01:25PM (#8544539)
    All the US Govt haters. You know, they only DESIGNED the current internet for us. And they give out cool schwag like NSALinux and stuff.

    And USgovt.. Yeah, they at NASA hired ol' Mr. Becker to make our LAN drivers ;-) What would you trust? An NE2K driver by some random Polynesian company, or one by somebody who works on the computers at NASA?

    Understand then decide.
  • by bfree ( 113420 ) on Friday March 12, 2004 @01:37PM (#8544655)

    The article seems to have two different main points. First, that the entire networking model (7 layers) is inappropriate for "reliable" networks. Second, that the entire model for building computers is wrong, and that somehow they need to use hardware to isolate programs.

    The issues they address in the first point were issues which I felt were meant to be addressed by IP6, has/will it fail? I always understood IP6 as being designed to (optionally) have secure connections, qos and an ip address structure to allow for floating nodes. Would IP6 not stand up to delivering messages in network time for the entire US military structure?

    The second issue seems simple to me: yes, it will be much more reliable if you use a separate computer for each task and let them communicate, but can you tolerate the lack of flexibility, and is it even possible to do anything meaningful without adding lots of parts and weight (the more parts, the less reliable)? I can imagine building a chip which actually contains 8 386s and 32M of RAM split into 4M per 386, then have the disk controller map the device in an 8-way split so they can't touch each other's data; a network chip could act as a switch to all the information, providing QoS etc., and expansion buses could be mapped to CPUs. But is it worth it, or are you better off building two different but functionally identical systems, so if one fails the other shouldn't? Also, it's still one machine; as soon as you actually split it out into a meaningful number of machines, weight, size and handling all become a problem. It would be lovely if you could sew tiny bluetooth-enabled CPUs w/mem into all the army gear and have them cluster together into a super CPU which reads the soldier's thumbprinted data device to figure out what to do, but would that actually require any sort of fundamental shift in how computers are made to achieve?

    To me this article simply states that they haven't managed to build a good enough network yet, and want some cash to do it, and that they haven't managed to build a reliable os/app combination to deal with their needs yet either! Just the talk of "One of the limitations inherent in this approach is that when an application malfunctions, it can affect other programs" made me think they need to look harder at their OS. I will be surprised if the end result isn't IP6 (perhaps a modified army version) but you never know! I wonder what OS they'll go with though?

    • The issues they address in the first point were issues which I felt were meant to be addressed by IP6

      Doesn't mean that it does so, or does so in a way that DARPA feels is sufficient. In particular, there's no protocol-layer method to restrict access, which was explicitly mentioned in the article. I think some of the stuff they're asking for (on-time, guaranteed delivery over an inherently unreliable network) is impossible, but it may be that a complete change in the way that you look at the problem can help.
      • Even implementing hardware to prevent execution of non-executable code is insufficient, since all you do then is point at some executable code that can be exploited (e.g. -- buffer overflow to point at system(), and then execute your commands that way).

        You could create separate data and return-address stacks. You could write a very simple OS coupled with a very simple processor to create a much more hardened system. This might not be the highest-performing OS. It would also have to be an RTOS to harden it.
  • by asr_man ( 620632 ) on Friday March 12, 2004 @01:40PM (#8544687)

    Gibson cast some of the blame on the packet-based nature of Internet Protocol, which was not designed for foolproof delivery of messages. The protocol cannot guarantee delivery of e-mail, for instance.

    ...The commander wants to be able to send a message and have it delivered, completely, accurately and on time."

    Uh, ever heard of the two armies problem? [cmu.edu]
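    The impossibility the parent alludes to is easy to make concrete. Here is a toy sketch (the function names are mine, not from any article): for *any* finite protocol over a lossy link, an adversary that drops the final message leaves that message's sender with a local history identical to the case where it was delivered, so no finite exchange of acknowledgements can give both sides certainty.

    ```python
    def adversarial_loss(protocol_length):
        """For any finite protocol of `protocol_length` messages, dropping
        the last one leaves its sender unable to distinguish 'delivered'
        from 'lost' -- the sender's view is the same either way, so no
        finite protocol yields certainty for both generals."""
        assert protocol_length >= 1
        deliveries = [True] * (protocol_length - 1) + [False]
        # The sender of the final message only observes the earlier traffic;
        # its local history is identical whether the last message arrived or not.
        sender_view_if_delivered = deliveries[:-1]
        sender_view_if_lost = deliveries[:-1]
        return sender_view_if_delivered == sender_view_if_lost  # always True
    ```

    Adding more acknowledgement rounds just moves the problem: the new final message has the same property.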

  • by Ato ( 44210 ) on Friday March 12, 2004 @01:54PM (#8544881)
    Oh, the moaning, oh, the bitching.

    Has it occurred to anyone else that DoD might not be out to reform the Internet in any way? They are out to build a network model to serve their own needs, but they have no need to reform the rest of the world.

    Now, if they make this revolutionary new network protocol/infrastructure public, other people might want to adopt it because it's neat. But me being a hardened cynic, this will most likely only find use in privately owned networking ponds... Kinda like a certain version of IP we all know of :)
  • by bellings ( 137948 ) on Friday March 12, 2004 @02:15PM (#8545124)
    Flaws in the basic building blocks of networking and computer science... "It is time to ask the harder questions about the ways of computer architecture we've been using for the past 30 years. Is it time to scrap the von Neumann architecture?"

    This is the only interesting part of the article. I couldn't care less what they do with the OSI layers. As long as someone writes about it as well as Stevens wrote about TCP/IP, it'll take me a month of reading and programming to get it under my belt. We all learned Pascal, then C++, then C++ again when the standard came out, then Java, and Lisp, and Smalltalk, and Perl, and Python, and C#, and a half-dozen more languages as the need came up. Now you have to learn a few new networking layers and protocols. No big deal -- you should be pretty damned familiar with learning different implementations of stuff you already understand.

    But, replacing the von Neumann architecture means changing just about everything I know. That's big. Everything is von Neumann. All the computational models, all the theory, all the basic underpinnings of what I know... it's all pretty much out the window once von Neumann goes. It's not just a dozen evenings at home with a book and reference implementation to relearn all of that stuff, either. It's relearning nearly all the Computer Science I know, and probably learning a whole bunch of new Maths to go with it.

    That's gonna hurt.
    • Babbage (Score:3, Funny)

      Flaws in the basic building blocks of networking and computer science... "It is time to ask the harder questions about the ways of computer architecture we've been using for the past 30 years. Is it time to scrap the von Neumann architecture?"

      Sigh... I guess it's back to building the Analytic Engine... Pass me the lathe, will ya...
  • by RogerRamjet98 ( 444469 ) on Friday March 12, 2004 @02:16PM (#8545134)
    I think most of you are missing the point.

    DARPA and the military aren't interested in rebuilding the internet, they are interested in rebuilding IP.

    They want to rebuild IP because they have a need for a better system. They need secure, reliable, ad hoc networking so that battle groups can communicate with each other.

    These are private WANs. Not the Internet! The Military is not going to send real time battlefield data across the public internet, and real time battlefield data is what this thing is all about. The military launches and rents satellites for that sort of thing, they don't send it across uunet.

    When they create a WAN, they have to have some mechanism to talk. Right now it might be IP, but in the future they want it to be something else. Something better for THEM.

    The US Military couldn't care less if the rest of the world, or the internet itself, started to use whatever they come up with.

    As far as those attacking technical limitations, when they started working on the original internet I'm sure everyone was saying, "Fault tolerant distributed networking with dynamic routing? That's impossible, why are they bothering" The point of DARPA is to do science and advance the field beyond current knowledge.

    They may succeed, and they may fail. But they shouldn't just not try.

  • by tiger99 ( 725715 ) on Friday March 12, 2004 @02:17PM (#8545143)
    ...when he invented the internet?
  • by tiger99 ( 725715 ) on Friday March 12, 2004 @02:24PM (#8545212)
    ....because internet protocols are developed, documented and controlled via the RFC system which works very well and is open to anyone who wants to participate.

    They are of course fully entitled to invent as many protocols as they need for their own use, and it is probably a good thing, but unless it goes through the RFC process, it will never be accepted for general use by the public.

    This is really a big non-event.

  • by szquirrel ( 140575 ) on Friday March 12, 2004 @02:28PM (#8545256) Homepage
    Don't forget massive incompatibility and upgrade hassles.

    Yeah, just like that PCI bus clusterfuck. What a nightmare that was. Was ISA really so bad that we all had to buy new motherboards and expansion cards? Oh wait, yes it was.

    Sometimes if you want to move forward you have to pick up your feet.
  • Ok, here goes (Score:4, Informative)

    by RAMMS+EIN ( 578166 ) on Friday March 12, 2004 @02:30PM (#8545280) Homepage Journal
    Now that I have read the article, I finally concluded it's full of shit. I'll break it down bit by bit:

    ``Among the IT holy grails that DARPA wants to see revamped are ... the seven-layer Open Systems Interconnection model''

    Well, they can't. It's just a model, an abstraction. It's not like networks are actually built by looking at the OSI model and carefully following it. It's more like you build your network infrastructure and protocols, and then the OSI model says that you can call your wires the physical layer, the software that does something with the network the application layer, etc.

    ``Many military commanders have been slow to adapt IT for critical tasks because they sense the equipment is unreliable''

    Well, that's their judgment, but what does it have to do with the Internet protocol?

    ``"We don't expect computers to work, we expect them to have a problem."''

    I guess many people do, but I don't. I buy my computer and expect it to work. If it doesn't, I'll return it and get a working one or my money back.

    ``Gibson cast some of the blame on the packet-based nature of Internet Protocol, which was not designed for foolproof delivery of messages. The protocol cannot guarantee delivery of e-mail, for instance.''

    Right he is. Reliability is in TCP, and this is why most application protocols build on TCP. The unreliability of IP is there on purpose, so we don't have the overhead of TCP when it's not needed, and so that if we come up with a better alternative to TCP, we can use it instead without having to throw away IP. Conversely, we can exchange IPv4 for IPv6 and implement TCP on top of that. It's called modular design, and generally considered a Good Thing.
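    The layering point is worth making concrete. A minimal sketch (names and drop pattern are mine, purely for illustration): reliability is built *above* a best-effort delivery service by retransmitting until acknowledged, which is the essence of what TCP does on top of IP.

    ```python
    def stop_and_wait(packets, drops):
        """Toy stop-and-wait ARQ: retransmit each packet until it gets
        through. `drops` is a set of transmission-attempt indices that are
        lost, modelling IP's best-effort (unreliable) delivery underneath."""
        delivered = []
        attempt = 0
        for pkt in packets:
            while True:
                lost = attempt in drops
                attempt += 1
                if not lost:
                    delivered.append(pkt)  # got through; ack assumed
                    break
        return delivered
    ```

    However lossy the lower layer, everything arrives in order -- the cost is retransmission overhead, which is exactly why you'd skip it when you don't need it.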

    ``"The packet network paradigm probably needs to change," Gibson said. "I'm not advocating throwing out the Internet Protocol completely, but we must absolutely have some mechanism for assigning network capabilities to different users and that capability has to scale to large numbers of devices automatically. The commander wants to be able to send a message and have it delivered, completely, accurately and on time."''

    Ok, fine, so you need a real-time protocol. I can see how that wouldn't work with IP's best-effort (read: unreliable) delivery, without further guarantees. However, there is nothing in IP that says it _has_ to lose packets. If you find a way to guarantee timely delivery of packets (my bet is that you can't), then you can layer IP on top of that. Of course, you don't _have_ to use IP, but if you opt for a different protocol, that doesn't mean that I have to drop IP too.

    ``Another limitation with the IP approach is the inability to dynamically build networks. The military wants to quickly set up ad hoc networks.''

    I don't think that's true. Just like there is nothing in IP that _prevents_ guaranteed delivery, there is nothing in it that prevents building dynamic networks, either.

    ``"... Moving the node outside its standard service area requires reconfiguring something. ..."''

    Yes, necessarily. However, the implication seems to be that IP somehow cannot handle this. Again, there is nothing in IP to prevent this. You could simply broadcast a message to discover nearby access points, and attach to the one with the strongest signal. Periodically, or when the signal gets weak, you broadcast again.
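    The re-attachment policy described above fits in a few lines. A sketch (the threshold value and tuple format are my own assumptions, not anything from the article or a real standard):

    ```python
    def pick_access_point(observations):
        """Given (access_point_name, signal_dbm) observations from a
        broadcast probe, attach to the strongest responder."""
        if not observations:
            return None
        return max(observations, key=lambda o: o[1])[0]

    def should_rescan(current_dbm, threshold_dbm=-75):
        """Re-broadcast the discovery probe when the current link weakens
        past a (hypothetical) threshold."""
        return current_dbm < threshold_dbm
    ```

    Nothing in IP forbids this; it's the sort of logic mobile nodes already run at the link layer.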

    ``As a result, DARPA wants to fund development of new protocols or enhancements to the existing IP that will allow nodes, such as computers, to automatically sign on to networks in their vicinity.''

    Like ZeroConf? That would be a Good Thing. More power to them.

    ``The von Neumann architecture will also come under scrutiny from DARPA.''

    I won't comment on that. I don't know what exactly the Von Neumann architecture is, and besides it is off-topic in my discussion on network protocols.
  • by Anonymous Custard ( 587661 ) on Friday March 12, 2004 @02:50PM (#8545533) Homepage Journal
    Most companies don't even use the full power of their current networks, installed in the late 90's or early 00's. Would they be willing to throw out all the old stuff to get the new stuff? I doubt it...most of them are still hurting from over spending in the first place.
  • by mveloso ( 325617 ) on Friday March 12, 2004 @03:00PM (#8545616)
    Sounds like the DoD has some simple requirements. I thought some of these were taken care of by ip6?

    The main requirement seems to be self-configuring mobile networks and services.

    I suppose nobody wants to renumber IP addresses every time a battleship moves from one theatre to another. Imagine having to move a whole division from one place to another, and having to reconfigure all the appropriate devices. What a nightmare. Plus, you wouldn't be able to find anything anymore.

    They could move to zeroconf/Rendezvous for their network service naming, which is a bit better than a static address/conf file.

    But they still have routing issues. Maybe they should adapt the cell network routing? Cell providers seem to have a better idea about how to dynamically route information to devices that change location often. Phones have a unique address which is tracked by the network...or at least it behaves that way.
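    The cell-network idea above amounts to indirection through a location registry: the identifier stays fixed while the point of attachment changes. A toy sketch (all names are hypothetical):

    ```python
    # Permanent identifier -> current point of attachment, like a cell
    # network's home location register tracking a phone as it roams.
    registry = {}

    def register(device_id, attachment_point):
        """Called whenever a device attaches somewhere new."""
        registry[device_id] = attachment_point

    def route(device_id, payload):
        """Forward traffic to wherever the device last registered."""
        return (registry[device_id], payload)
    ```

    Renumbering goes away because the identifier never changes; only the registry entry does.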

    Then there's the security side. How do you authenticate/authorize someone when they try and join the network? You don't want to lose a laptop then have someone be able to watch your operation. Biometric stuff won't work so well, because they can always cut off a hand and use it without the user attached (ugh).

    Pretty interesting problems, really.
  • by sakshale ( 598643 ) on Friday March 12, 2004 @03:41PM (#8546032) Homepage Journal
    Most people seem to miss the fact that the R in DARPA stands for Research. Research is not done by accepting the status quo. If ARPA had not invested in the original network research, who knows where we would be today!

    TCP/IP is not perfect for every use. If DARPA can find a better set of protocols to slide into layers three and four of the OSI model, more power to them.

    Internet protocol suite [wikipedia.org]
  • Clueless managers (Score:3, Interesting)

    by mwood ( 25379 ) on Friday March 12, 2004 @05:30PM (#8547306)
    Where have these guys *been* for the last, oh, *fifty* years? One guy doesn't know that guaranteed delivery isn't IP's job because that belongs to another layer, and seems to be unaware that adaptive routing has been in the Internet for decades; another apparently never heard of the memory mapping and protection that's been standard in most computers longer than many of today's hotshot programmers have lived. DHCP and the built-in address initialization stuff in IPv6 (cribbed from earlier work in OSI, btw) are apparently unknown at DARPA.

    Did I miss something?
  • by ciphertext ( 633581 ) on Friday March 12, 2004 @06:17PM (#8547800)

    Since this is a DoD project, its primary use will be for military networks. Perhaps there will be a trickle down to an "Internet 4" system through technology sharing. I don't see this changing the internet we currently use anytime soon. What it will change is how battlefield command systems and forward deployed units will communicate with each other. Establishing a network connection via traditional microwave, satellite, wired, and wireless (this is the key....wireless) will now exchange data using the DARPA protocol instead of IP.

    How nice would it be to have a soldier (or any other unit you wish to deem a "node" on your network) be able to "uplink" to the required military network (battlefield or otherwise) simply by broadcasting to the network. No need to configure a DHCP Server (in the case of dynamic allocation) to dish out an IP address...there is no more IP. I think that is what DARPA is attempting to achieve. They want the military to have a secure, easily scalable, and always available network infrastructure. How they plan to accomplish this...who knows, although it would probably be something similar to IPv6 where everything (network accessible device) has its own hardware created identifier. Perhaps like "DNA" for the hardware. Anyone own stock in Motorola? No? Perhaps it's time to buy some.

  • Get rid of ports. (Score:3, Interesting)

    by Peaker ( 72084 ) <gnupeaker@nOSPAM.yahoo.com> on Friday March 12, 2004 @09:29PM (#8549139) Homepage
    IPv4 and IPv6 have a slight ugliness people have come to take for granted. This could be fixed for IPv7.

    The concept of "ports". Ports are actually in-host entity identifiers, while the IP address itself is an in-network entity identifier.
    There should really be only one type of entity identifier, especially when it is 128-bit long.

    The idea is that the last few bits of an IP address would typically serve the function of a "port". This way, a DNS server could translate names to much more specific entities than full hosts. It would allow hosting multiple FTP servers on the same host, for example, without the clients having to connect to different ports. It would dissolve the need for the silly ad-hoc workarounds with virtual web hosts.

    This kind of addressing also allows much simplification of applications that would no longer need to use multiplexing over their connections. Instead, each application could allocate addressable "entities" and the multiplexing can be handled by the network layer.

    Finally, it would eliminate the need for the UDP protocol entirely, as in-host identifying becomes part of the network layer itself.

    TCP-layer becomes simpler as there is no need to handle in-host addressing as well.

    Let's eliminate ports, for a simpler network protocol :-)
