Researchers Scheming to Rebuild Internet From Scratch

BobB writes "Stanford University researchers have launched an initiative called the Clean Slate Design for the Internet. The project aims to make the network more secure, have higher throughput, and support better applications, all by essentially rebuilding the Internet from scratch. From the article: 'Among McKeown's cohorts on the effort is electrical engineering Professor Bernd Girod, a pioneer of Internet multimedia delivery. Vendors such as Cisco, Deutsche Telekom and NEC are also involved. The researchers already have projects underway to support their effort: Flow-level models for the future Internet; clean slate approach to wireless spectrum usage; fast dynamic optical light paths for the Internet core; and a clean slate approach to enterprise network security (Ethane).'"
  • by mikecardii ( 978929 ) * on Thursday March 15, 2007 @02:50PM (#18365829) Homepage
    Gentlemen, we can rebuild it. We have the technology. We can make it better, faster, stronger.
    • Re: (Score:3, Interesting)

      by kaizenfury7 ( 322351 )
      ....and with DRM baked in.
      • Re: (Score:3, Insightful)

        by pipatron ( 966506 )
        Funny, that was exactly what I thought even before I read the summary. I bet there will be no chance to browse anonymously this time.
        • Re: (Score:3, Informative)

          by GringoCroco ( 889095 )
          From the whitepaper PDF:

          It should be:
          1. Robust and available.
          2. Inherently secure.
          3. Support mobile end-hosts.
          4. Economically viable and profitable.
          5. Evolvable.
          6. Predictable.
          7. Support anonymity where prudent, and accountability where necessary.
          • by trianglman ( 1024223 ) on Thursday March 15, 2007 @05:03PM (#18367767) Journal

            7. Support anonymity where prudent, and accountability where necessary.
            Who determines necessity? If left up to any current government, the necessity would be determined by who wants to be anonymous. Senators - sure, they need privacy for their solicitations of pages; Joe Shmoe Public - nah, it's better to keep tabs on him, he could be a terrorist...
    • by cayenne8 ( 626475 ) on Thursday March 15, 2007 @03:23PM (#18366351) Homepage Journal
      "Gentlemen, we can rebuild it. We have the techonology. We can make it better, faster, stronger."

      Unfortunately, I'm afraid they will make it more censorable, more business-oriented vs. regular people, less anonymous, more regulated, govt/UN controlled, politically correct...and as someone mentioned, full DRM support forever.

      Frankly, for all its faults, I like the internet now as it is...kind of the 'wild west' of information. That just has to 'kill' some of those in power around the world.

      I think the last thing we want to do is recreate it, now that those in power know what free flow of information can do...

      • I think the last thing we want to do is recreate it, now that those in power know what free flow of information can do...

        Indeed, the only way to "recreate" it is to make it even more decentralized and unregulated!

      • by westlake ( 615356 ) on Thursday March 15, 2007 @04:30PM (#18367301)
        I like the internet now as it is...kind of the 'wild west' of information.

        The "Wild West" exists (and perhaps always has existed) mostly in fiction.

        In history it begins with the discovery of gold in California in 1848 and ends in 1876 at the Little Big Horn. The Last Stand for the Plains Indians as well as for Custer.

        It's a brief moment in time - and, in some ways, a pattern of settlement unique to the United States.

        It shouldn't surprise anyone if the Internet frontier has its own ending.

      • by sehlat ( 180760 )

        I think the last thing we want to do is recreate it, now that those in power know what free flow of information can do...
        Damn straight. If the Powers That Be had seen what an open network would do, they'd have strangled it in its cradle. Quite possibly, in their "generosity," we might have gotten a centralized "information utility" monster, something like France's "Minitel" system on steroids, with all information filtered, censored, corporatized, and source-trackable. Feh.
  • Damnit (Score:5, Funny)

    by 0racle ( 667029 ) on Thursday March 15, 2007 @02:52PM (#18365843)
    I haven't even upgraded to Internet2 and Web 2.0 and they're already doing work on Internet3.
  • Hmm.. (Score:2, Funny)

    by chowder ( 606127 )
    Is someone going to call Al Gore and get his opinion on this?
  • Sounds great... (Score:2, Insightful)

    by cedricfox ( 228565 )
    ...but the biggest hurdle is convincing people not to connect to these shiny new networks until it's all in place, end-to-end. It seems like this would have to be physically secured while it is being put together.
    • by Tackhead ( 54550 )
      > ...but the biggest hurdle is convincing people not to connect to these shiny new networks until it's all in place, end-to-end. It seems like this would have to be physically secured while it is being put together.

      Oh, that's simple. Don't put any pr0n, MP3z, movies, or warez on it until it goes live. Then, unleash the .torrents of hell.

  • What are the odds (Score:5, Insightful)

    by Lokatana ( 530146 ) on Thursday March 15, 2007 @02:54PM (#18365877) Journal
    What are the odds that, even given a great plan, this has any hope of making it to daylight? IPv6 has been out for how long, yet how much real adoption have we seen in that space?
    • Re:What are the odds (Score:4, Informative)

      by griebels2 ( 998954 ) on Thursday March 15, 2007 @03:26PM (#18366373)
      The problem with IPv6 is that it just doesn't work alongside IPv4. You essentially need to build and maintain two separate networks. Yes, you can share the same equipment, but the amount of configuration involved almost never justifies the effort in corporate environments.

      In my opinion, there are a lot of things that need to be fixed for an "Internet for the future". One of the biggest hurdles of course is the address space shortage of IPv4, but there are a lot of other issues which need to be solved. Just to name a few:
      - More flexible routing of unique identifiers (let's call them IP numbers), so I can take my "identifier" with me (think mobile phones)
      - A solution to the ever growing "global routing table" (BGP4 as it is used today)
      - Better support for quality of service from end-to-end.
      - Better "multicasting" support, also end-to-end. (Let's avoid burning down networks during "cataclysmic" events)
      - Better redundancy. Although dynamic routing protocols should heal these problems, in practice they often fail to do so (especially in cases where connections are semi-dead).
      - A much better built-in protection against DDoSes and other kinds of abuse.

      Unfortunately, IPv6 really fixes none of those problems except the IP number shortage. IPv6 also comes at great cost, since you need to upgrade your whole infrastructure at once, or it isn't really usable.

      So, IPv6 might have been a nice lesson for the next generation "IP protocol". IMHO this next generation should keep the following things in mind:

      - Transition only works if it plays nicely with the legacy stuff during the transition.
      - Transition has either to be cheap or must have so many advantages that you simply cannot refuse.
      - Vendors need to agree upon a single standard, or somebody with a large impact should "dictate" it in the worst scenario.

      Reading TFA, I was quite disappointed, because anything about how the transition to this clean-slate network would work seems to be absent at this time. But it is still a research project, and maybe somebody did learn something from the IPv6 "fiasco".
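
      To make the coexistence pain concrete: the usual host-side workaround is a single IPv6 socket that also accepts IPv4 peers as mapped addresses. A minimal sketch in Python, assuming an OS that supports IPv4-mapped addresses (most Linux and BSD stacks do); the port number is arbitrary:

        import socket

        # One listener for both protocol families, via IPv4-mapped addresses.
        sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        # 0 = also accept IPv4 connections, which appear as ::ffff:a.b.c.d peers.
        sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        sock.bind(("::", 8080))
        sock.listen(5)
        conn, addr = sock.accept()  # addr[0] is "::ffff:192.0.2.1" for an IPv4 peer
        print("connection from", addr[0])

      Even so, this only papers over the host side; the point above about building and maintaining two routed networks still stands.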
      • by mrchaotica ( 681592 ) * on Thursday March 15, 2007 @03:49PM (#18366689)

        The flip side is that some of your suggestions can have detrimental effects too:

        - Better support for quality of service from end-to-end.

        In other words, better support for introducing favoritism between ISPs and content providers, so that (for example) AT&T can extort money from Google and shut down BitTorrent. No thanks; I prefer the "dumb," route-everything-equally, neutral Internet we have now.

        - A much better built-in protection against DDoSes and other kind of abuses.

        And much better protection against free speech, anonymity, etc. Again, no thanks.

        - Vendors need to agree upon a single standard, or somebody with a large impact should "dictate" it in the worst scenario. [emphasis added]

        Yeah, that "somebody" being AT&T or Microsoft, who would undoubtedly screw it up with Treacherous Computing, built-in "micropayment" toll booths, and assorted other bullshit. Still sound like a great idea?

        • Re: (Score:3, Funny)

          by Bozdune ( 68800 )
          Brilliant post.
        • by griebels2 ( 998954 ) on Thursday March 15, 2007 @05:17PM (#18367943)

          In other words, better support for introducing favoritism between ISPs and content providers, so that (for example) AT&T can extort money from Google and shut down BitTorrent. No thanks; I prefer the "dumb," route-everything-equally, neutral Internet we have now.
          Do you really think the Internet is this "neutral" right now? I've worked for several ISPs and know all about routing traffic the cheapest, yet still acceptable, way. In the end, I was always the techie who only wanted to get my traffic to the destination in a way the fewest users would complain about "speed", without violating traffic commitments from our upstreams. This "net neutrality" is mostly a political notion. I'm a big ISP and I want money from Google? I just route all my Google traffic over an already filled-to-the-max transit link and let Google pay for a direct peering with me. The way this works in practice? The ISP's helpdesk gets flooded with complaints and this "upgrade" is undone within a few days, until the next manager comes by with yet another great idea to make some more money.

          Being a somewhat honest ISP, better QoS support from end to end would give me many more possibilities to deliver services to my customers in a more reliable way. I could, for example, keep a customer's line from filling up with BitTorrent while they're using Skype. There is no way of doing this right now. So better end-to-end QoS support is really a cornerstone for reliable services delivered across the Internet, especially for a neutral net.
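
          For reference, the one end-host QoS knob that already exists is DSCP marking in the IP header (RFC 2474). A sketch in Python of tagging a latency-sensitive flow as Expedited Forwarding, with a placeholder peer address; networks are free to ignore or rewrite the bits, which is exactly the end-to-end gap being described:

            import socket

            # Mark a latency-sensitive UDP flow as Expedited Forwarding
            # (DSCP 46, RFC 3246). DSCP sits in the upper six bits of the TOS byte.
            TOS_EF = 46 << 2

            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
            s.sendto(b"voice frame", ("192.0.2.10", 4000))  # placeholder peer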

          And much better protection against free speech, anonymity, etc. Again, no thanks.
          In an Internet without any protection against those kinds of attacks, the one with the biggest botnet wins. There are many ways to build this kind of protection right into the protocol without losing any anonymity: detecting and mitigating DDoSes closer to the source, for example. Also, when I don't want to receive your traffic, why do I have to block it on the receiving end? How anonymous do you think you really are? Everything you do leaves traces. Posting on Slashdot leaves your IP, and your IP can always be traced back to your ISP. Your ISP will probably retain some logfiles: which DSL line it came from, which dialup bank, etc. Public WiFi hotspots or "anonymity services" might give you some anonymity; they would probably do so just as well in a "DDoS-protected" environment.
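
          One concrete form of "mitigating closer to the source" already has a name: ingress filtering (BCP 38), where the access router drops packets whose source address could not legitimately come from that customer port. A sketch of the check, using documentation prefixes as placeholders:

            import ipaddress

            # BCP 38-style ingress check: only pass traffic whose source
            # address belongs to the customer's assigned prefix.
            CUSTOMER_PREFIX = ipaddress.ip_network("198.51.100.0/24")  # illustrative

            def accept_packet(src_ip: str) -> bool:
                """True if the packet may enter the network from this customer port."""
                return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

            assert accept_packet("198.51.100.7")        # legitimate customer source
            assert not accept_packet("203.0.113.99")    # spoofed source, dropped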

          Yeah, that "somebody" being AT&T or Microsoft, who would undoubtedly screw it up with Treacherous Computing, built-in "micropayment" toll booths, and assorted other bullshit. Still sound like a great idea?
          Many of the not-so-evil standards we use today were originally conceived by private or public companies. Sometimes you cannot rely on "standards organisations", because they are just so damn slow and have a tendency to come up with standards that are too much of a compromise. Fortunately, not all companies think they can rule the world alone. As for the remaining companies, let's hope they see their quasi-monopolies erode in the end.
      • Not exactly (Score:5, Informative)

        by mengel ( 13619 ) <mengel@users.sou ... rge.net minus pi> on Thursday March 15, 2007 @05:05PM (#18367805) Homepage Journal
        I couldn't help chuckling as I read the above post, as it outlines all of the things that were presented as benefits of moving to IPv6 when it was initially released. For example:
        • There are several mechanisms for running IPv4 and IPv6 side by side, and that was a major part of the discussion in the IPv6 rollout early on. Medium-sized chunks of the net were running IPv6 [6bone.net] for quite a while, and were routed in and out of fairly seamlessly. Transition mechanisms were designed [tascomm.fi] long before IPv6 was adopted by the IETF (the linked RFC is from 1995).
        • IPv6 designers also put in tools designed to provide for mobile endpoints, although better designs have come out since.
        • IPv6 provides and uses multicast addresses as part of its initial design, and its multicast is being used [cisco.com] successfully.
        You can claim that the implementations provided weren't good enough (although I'd like to see some actual data to back that up), but in fact the folks that did IPv6 did have all of those goals in mind when they put IPv6 together.
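
        As one example of those transition mechanisms, 6to4 (RFC 3056) derives an entire IPv6 /48 from a single public IPv4 address, so a site can reach the IPv6 world without waiting for its provider. A sketch of the address derivation in Python:

          import ipaddress

          # 6to4 (RFC 3056): splice the 32 IPv4 bits into 2002::/16 to get a /48.
          def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
              v4 = int(ipaddress.IPv4Address(ipv4))
              prefix = (0x2002 << 112) | (v4 << 80)
              return ipaddress.IPv6Network((prefix, 48))

          print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48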
      • - More flexible routing of unique identifiers (let's call them IP numbers), so I can take my "identifier" with me (think mobile phones)
        - A solution to the ever growing "global routing table" (BGP4 as it is used today)


        I don't think it's possible to have both at the same time. A solution for a portable unique identifier already exists (DNS), and trying to achieve portability down at layer 3 could get real ugly and computationally expensive. DNS can be distributed very easily and allows leaf nodes to do the UR
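
        The DNS-as-identifier idea in code: the name stays fixed while whatever addresses currently back it are looked up at connection time. A sketch, with example.org standing in for a mobile host's stable name:

          import socket

          # The stable identifier is the name; the layer-3 addresses behind it
          # can change every time the host moves.
          for family, _, _, _, sockaddr in socket.getaddrinfo(
                  "example.org", 80, proto=socket.IPPROTO_TCP):
              print(family.name, sockaddr[0])  # whatever the name maps to right now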
    • That's not really a great comparison. IPv6 has no immediate benefits and has some short-term problems. Using IPv6 is more like paying off the budget deficit. You don't do it because it's not your problem, it's your kid's problem.

      Presumably a non-trivial increase in connection speed would be a much bigger draw to people.
    • by dattaway ( 3088 )
    It does have hope. It has a great business plan. You can bet it's protected by a large army of patents. The internet as we enjoy it does not have such restrictions. They want a new landscape where everything is owned in such a way that the new generation 20 years from now could be covered in patents too. Technology can always evolve and forever be covered in new blankets of patents.
  • No matter how good a set of tools you make, some^H^H^H^H most people will use them incorrectly. I have yet to see a corporate network designed in a way that both makes sense and is secure at any place I've worked or known anything about, despite all the good information available on how to do both.
    • Re:Won't work IMO (Score:5, Insightful)

      by jandrese ( 485 ) <kensama@vt.edu> on Thursday March 15, 2007 @02:59PM (#18365951) Homepage Journal
      Most corporate networks make sense when they were first deployed, but that was back in the 80s and the technology (not to mention corporate layout) has changed enough that it seems crazy today. I know our tech guys here work really hard to keep everything up to date, and for the most part our network is sane, but sometimes there are cases of legacy systems that really look out of place next to everything else.

      I want to know how they're going to avoid the second-system effect with their new internet. One of the big reasons the Internet works is because a lot of effort was spent keeping everything reasonably simple. Time has shown that anything that starts out highly complicated tends to be only very slowly adopted, if at all. IP may have terrible security, but at least it doesn't require 10 man-years to build a fully compliant router.
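
      The simplicity claim is easy to make concrete: the fixed IPv4 header is 20 bytes and parses with a single struct call. A sketch, given the raw bytes of a packet:

        import struct

        # The entire fixed IPv4 header in one unpack (RFC 791).
        def parse_ipv4_header(raw: bytes) -> dict:
            (ver_ihl, tos, total_len, ident, flags_frag,
             ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
            return {
                "version": ver_ihl >> 4,
                "header_len": (ver_ihl & 0x0F) * 4,   # in bytes
                "ttl": ttl,
                "protocol": proto,                    # 6 = TCP, 17 = UDP
                "src": ".".join(map(str, src)),
                "dst": ".".join(map(str, dst)),
            }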
      • I think it would be far simpler to first build new protocols to replace things like SMTP and POP. Kill off FTP and replace it completely with SFTP.

        Once that part is done, moving to better hardware will be easier.
          • But if you're just transferring publicly available files using anonymous accounts, then what is the point of SFTP? I understand the need to get rid of telnet, which I assume is never anonymous login, but things like FTP do have their place. Why not just get rid of HTTP, and use HTTPS all the time?
          • by jandrese ( 485 )
            I think the argument is to use HTTP for all anonymous file transfers. I'm ambivalent on that solution because current HTTP clients don't support resume or directory listing (sometimes you just have a ton of files you want to make publicly accessible with a minimum of fuss) and have no standard way to upload. On the other hand, the way FTP manages the connection (not to mention the confusion over passive vs. non-passive) leaves a lot to be desired on the current internet (FTP hates NAT).
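
            The passive-vs-active confusion in code: passive mode has the client open the data connection too, which is the only arrangement that works from behind NAT. A sketch with Python's ftplib against a placeholder server:

              import ftplib

              # PASV: the client dials out for the data channel (NAT-friendly);
              # active mode has the server connect back in, which NAT breaks.
              ftp = ftplib.FTP("ftp.example.org")  # placeholder host
              ftp.login()                          # anonymous login
              ftp.set_pasv(True)
              ftp.retrlines("LIST")                # listing over the data channel
              ftp.quit()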
  • >a clean slate approach to enterprise network security (Ethane).

    Kinda flammable, and not shiny enough. I suggest we take it one step further and use ethylene.
  • by Red Flayer ( 890720 ) on Thursday March 15, 2007 @02:58PM (#18365931) Journal
    The whitepaper can be found here [stanford.edu]; it is linked to within the first link provided in the summary.

    One of the most interesting criteria for a new internet, to me, was criterion #7:

    Support anonymity where prudent, and accountability where necessary.

    Maybe it's just me, but it seems true anonymity is becoming more and more important, and less and less available, as governments snoop more on the internet.
    • Yeah, I'm not sure how to fix this, but it seems to me that it's the single greatest problem with the internet. If you really know what you're doing, you can stay anonymous when you want to do something nefarious. However, if you're just a standard know-nothing user, all your innocuous activities are recorded all the time.

      That's the exact opposite of what you want. It's not an unusual sort of security problem, and like I said, I don't know how to fix it because how do you distinguish between nefarious a

    • by ScentCone ( 795499 ) on Thursday March 15, 2007 @03:23PM (#18366341)
      Maybe it's just me, but it seems true anonymity is becoming more and more important, and less and less available, as governments snoop more on the internet.

      On the other hand, unless you want this to be a tool only for and by the government, you've got to get businesses comfortable with it. Banks. Retailers. Airlines. Anonymity (of the you-can't-track-my-pr0n-use, or the posting-as-a-troll, or the PRC-can't-ID-the-rebel variety) is antithetical to trustworthy transactions, and without money changing hands, the plumbing is WAY less useful to the huge swaths of the economy that would fund (indirectly) the growth and adoption of such a thing.

      "Where prudent" and "as necessary" etc., are completely subjective. People who like to rip off movies have one set of priorities, and people who administer your payroll or need to transmit your cancer meds prescription are looking at it from a very different perspective.
      • Anonymity (of the you-can't-track-my-pr0n-use, or the posting-as-a-troll, or the PRC-can't-ID-the-rebel variety) is antithetical to trustworthy transactions.

        But that's not to say that they can't happen over the same infrastructure. Even today, you can send an e-mail with a fake address routed through some random SMTP server and it's pretty hard to trace. -or- You can digitally sign and encrypt e-mail traffic. Assuming the infrastructure can support both, it's a question of whether endpoints will accept
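
        The signed half of that point, sketched with the third-party cryptography package (one choice among many; any signature scheme would do): the same untrusted pipe can carry messages whose origin is cryptographically checkable.

          from cryptography.hazmat.primitives.asymmetric import ed25519

          # Sender signs; anyone holding the public key can verify, and
          # verification raises InvalidSignature if the message was altered.
          private_key = ed25519.Ed25519PrivateKey.generate()
          public_key = private_key.public_key()

          message = b"From: alice@example.org\r\n\r\nHello, Bob."
          signature = private_key.sign(message)

          public_key.verify(signature, message)  # no exception: authentic
          print("signature verified")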

  • Besides the obvious, I mean? This is what is wrong with using common words as names for major projects. You can't find them with google!
    • by jim3e8 ( 458859 )
      Generally true, although the highly popular projects such as Gallery [google.com] tend to rise to the top nonetheless.
    • You can't find them with google!
      I typed in (or rather, copied and pasted) "enterprise network security (Ethane)" into google. You'll notice those are the last 4 words in the article summary. My first hit was this website [stanford.edu].

      What's so damned hard about that?
  • by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Thursday March 15, 2007 @02:59PM (#18365965) Homepage
    I think it was called OS/2. Or maybe 68000. Or was it Itanium?
    • Re: (Score:3, Insightful)

      by nine-times ( 778537 )

      Yes, a great many projects that aim to "start from scratch" don't really make it. However, it's often the case that starting from scratch enables people to think about solutions from a fresh perspective, without all their old assumptions. Even if the actual "from scratch" product never really comes about, or if it comes about and is unsuccessful, often the solutions and the fresh insight creep into the old legacy systems' updates.

    • by LWATCDR ( 28044 )
      Actually the 68000 was very successful. It is still found in many embedded systems and sold millions of units.
      OS/2 failed not because it was a clean sheet but because it wasn't. IBM insisted that it run on the 286. Microsoft wanted to drop the 286 and design a version that would be multi-platform and 32-bit, so IBM pushed ahead, with Microsoft's help, on OS/2 2.0, and Microsoft started work on version 3... They later stuck the Windows GUI on it and called it Windows NT.

      Itanium? Who knows. The PentiumPro as lo
      • Itanium is an interesting research project, and it may well fit in very nicely with future systems (I have a few JIT systems in mind that would be better suited to Itanium than any other current CPU). Things often don't end up doing what you expect, however. The predecessor to Itanium was the i860, which eventually found its way into a lot of workstation graphics cards (due to its vector processing ability), but failed completely as a general purpose CPU. The m68K was successful as a microcomputer CPU, b
    • Re: (Score:2, Insightful)

      by kad77 ( 805601 )
      Over a quarter billion 68000-series CPUs (including direct variants) have probably been manufactured to date; that particular design is still very active after 20+ years.

      Its success/failure is not even remotely comparable to OS/2 or the Itanium... get a clue!
  • by Anonymous Coward
    If they make a second Slashdot, I hope it will have a better dupe checker.
  • Who's In Charge? (Score:5, Insightful)

    by adavies42 ( 746183 ) on Thursday March 15, 2007 @03:01PM (#18365983)
    Unless this is being run by the IETF with EFF looking over their shoulder the whole time, I don't trust this to end up as something I want to use.
  • ...sounds so much better than Not Invented Here [wikipedia.org]
  • by wuie ( 884711 ) on Thursday March 15, 2007 @03:04PM (#18366043)
    "There's never time to do it right, but always time to do it over."
    • by starseeker ( 141897 ) on Thursday March 15, 2007 @03:30PM (#18366427) Homepage
      As frustrating as it may seem, there are actually fairly sound reasons for this in some situations. I would argue the internet was one.

      In theory, ten years of computer science research might have produced a better design for the internet than the one we have today, back when it was first being developed. However, we have learned a lot from the scale-up that on a practical level would be fairly hard to duplicate in a research setting. Sometimes you just don't think of the possible consequences until you see them happen, particularly things due to human beings TRYING to bring down the system. Think about how long telnet lasted, for example.

      In all honesty, it's a miracle the world wide web has scaled the way it has - consider the original scope of the military networks and the small amounts of data they were transmitting. The original designs were to Get Something Working and Justify Our Budget - that's how it has to work. I'd say the return on investment for the various stages of the internet has always more than justified even the costs of redoing it. Sometimes you can't wait to figure out how to do it right, because that will take too much time and what you can build NOW is still useful. Think about automobiles - 10 years from now we will undoubtedly be building better ones than we can build today, but the costs of waiting until we know how to do it "right" are much higher than the costs of replacement.

      Now, of course, the question of knowing how to do something right is distinct from doing correctly what we already know how to do - one is a research problem, the other is an implementation problem. I'm inclined to think that the web is more of a research limitation than a "do it right" issue, although I could be wrong - it depends on how much was known in the beginning stages.
      • by jgrahn ( 181062 )
        ["There's never time to do it right, but always time to do it over."]

        As frustrating as it may seem, there are actually fairly sound reasons for this in some situations. I would argue the internet was one.

        I think he was mocking the clean slate scheme, rather than criticizing the original design of the internet. As far as I'm concerned, the internet was done right (which doesn't mean it was finished and carved in stone thirty years ago, but rather the opposite).

        In all honesty, it's a miracle the world wide

  • "With what we know today, if we were to start again with a clean slate, how would we design a global communications infrastructure"

    Get rid of the porn, scam sites and domain squatters - however, this may not be possible.
  • by Kenja ( 541830 ) on Thursday March 15, 2007 @03:06PM (#18366071)
    That's it... I'm gonna make my OWN internet. With blackjack, and hookers. In fact, forget about the blackjack and the internet.
    • by no_pets ( 881013 )
      Hey, I know that you were joking, but if some big redo of the Internet, DRM'd and gov't-controlled, came to be, I would certainly hope that ad hoc wireless "internets" would pop up and connect to each other. Or at least that some encrypted version would run on top of things.
  • Admittedly, this is a quibble and slightly off-topic, but they could use a clean slate for their web design. It doesn't fit in my 1024x768 display.
  • by Ancient_Hacker ( 751168 ) on Thursday March 15, 2007 @03:12PM (#18366159)
    Hmmm, yep, let's get the experts to redesign the best network ever made.

    Let's get the guys that designed all those "wonderful" networks:

    • Morse Code
    • TeleText
    • Telex
    • DECNet
    • IBM's VTAM
    • IBM's CICS
    • IBM's SNA
    • Banyan Vines
    • AppleTalk
    • TELENET
    • CDCNET
    • IBM's LU 6
    • ISO net

    Oh yeah, let's get the "EXPERTS" involved!

    • What's so bad about Morse Code? Considering the technology and equipment it was generally used on, it seems quite effective to me. Just because communications has moved past it doesn't mean that it was bad.
    • Let's get the guys that designed all those "wonderful" networks:

      Morse Code. In general use 1844-1999.
      Trivially easy to adapt to almost any form of signaling, including assistive technology for the disabled.

      TeleText 1970-to date.
      In the U.S. most easily recognizable as Closed Captioning for the Hearing Impaired. But it's the root of the web page and any form of interactive television.

      Telex ca 1935-to date.
      Rugged, reliable and cheap. In Germany alone, more than 400,000 telex lines remain in daily operat

  • I would like to see a similar clean slate approach for Unix as well. For example, I am interested in the question: how would Unix work differently if extended attributes had been available in all Unix filesystems from the beginning? Tradition often holds back innovation, I feel.
    • Check out Plan 9. It was created by the same guys who built the original Unix, to address some of the complaints they had with it.

      What I like about Plan 9 is that it would work with everything. You could install it on a TV to act just as a remote or local display. It doesn't care.

      With Plan 9 the network is just another conduit for passing back data. It doesn't matter what physical resource you are using or where on the network it is located. To the OS it is all the same.
    • Re: (Score:3, Interesting)

      by EvanED ( 569694 )
      For example, I am interested in the question - how would Unix work differently if extended attributes were available in all Unix filesystems from the beginning. Tradition often holds back innovation, I feel

      Fully agreed. For instance, NTFS supports alternate data streams, which are essentially really huge extended attributes. (They're a generalized version of HFS's resource and data forks. A number of other filesystems support similar things now too, such as HFS+, ZFS, and ReiserFS v4 in a slightly differen
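
      For what the extended-attribute model looks like where it does exist: Linux exposes xattrs through os.setxattr and friends (Linux-only calls; the path and attribute here are scratch placeholders):

        import os

        # Tag a file with arbitrary metadata; "user." is the unprivileged
        # attribute namespace on Linux filesystems that support xattrs.
        path = "/tmp/xattr-demo.txt"
        with open(path, "w") as f:
            f.write("payload\n")

        os.setxattr(path, "user.origin", b"downloaded-from:example.org")
        print(os.getxattr(path, "user.origin"))   # b'downloaded-from:example.org'
        print(os.listxattr(path))                 # ['user.origin']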
  • Get the guys (and gals?) with the high multimedia delivery needs in on it from the start - they'll give you more bang for the buck for both conception and practical trialling of the new system.

  • I have already patented scratch. So I am in for a huge stream of royalty payments!
  • by hackus ( 159037 ) on Thursday March 15, 2007 @03:23PM (#18366337) Homepage
    Translation:

    Let's rebuild the internet because it uses too much open source software and we are not making enough money. I know! Let's get all the vendors together and rebuild it using proprietary crud so that it is impossible for any of these "open source" guys to make server platforms that are freely available.

    Let's kill open standards too, because, well... who needs those IETF guys anyway! They are just a bunch of hippies!

    Seriously, though. The internet works better than my cell phone does.

    It doesn't need "fixing".

    It just needs a few upgrades.

    IPv6 would be a nice place to start!

    GAD.

    The thought of Cisco having a hand in anything the future internet could be makes me want to quit my current network manager job and open an Italian restaurant.

    -gc

    -hack
    • by Jeffrey Baker ( 6191 ) on Thursday March 15, 2007 @04:01PM (#18366845)
      I'm with you. These guys are completely on crack. Haven't they ever read "Netheads vs. Bellheads"? You do not want to have intelligence inside the network, ever. Intelligence belongs at the edge. The core should be application-unaware, stupid, unreliable, and as simple as possible. Which is the Internet we have today, and it works great, thank you very much.
  • I'm sure the RIAA and MPAA won't try to force some kind of low-level piracy-monitoring/reporting mechanism into it. No, not at all.

    I see the New Internet joining New Coke in the dustbin of history.
  • This kind of research isn't just occurring at Stanford. The NSF has recently had a big push to fund this kind of research across the country.
  • by TheGratefulNet ( 143330 ) on Thursday March 15, 2007 @03:37PM (#18366515)
    or, rather, no, let's not.

    (and it got about as much attention as IPv6. They both planned for 'big networks', but we all know how popular OSI is in the real world...)

  • by Colin Smith ( 2679 ) on Thursday March 15, 2007 @04:04PM (#18366875)
    Which doesn't talk to anything.

    If it's going to be useful, it has to talk to everything, that's the whole point of the network effect.

  • I would put the odds of this getting implemented at practically nil. If you do not fundamentally redesign most/all of the protocols, you are just refining IPv4/IPv6 to suit your needs. And if in fact you did come up with a "from scratch" design you have the following hurdles to meet:
    -port all known software/libs to use the new protocols
    -get all vendors of networking equipment to issue major firmware upgrades to switches/hubs/firewalls, anything that speaks on the network.
    -rewrite networking code for top 6 most p
  • by Inmatarian ( 814090 ) on Thursday March 15, 2007 @04:26PM (#18367237)
    http://en.wikipedia.org/wiki/Internet_Mail_2000 [wikipedia.org]

    The name is crappy, but the concept is a really good start. It's a shame this never caught on. Basically, an email's subject and body are split: the subject is sent to the receiver, and the body is stored at the sender's server. When the receiver gets the subject notification, they connect to the sender's server and download the body.

    The point of this strange scheme would be to crush spammers under the weight of their own To list, by having millions of incoming connections. The burden of storage goes to the Sender, not the Receiver.

    That should be one of the technologies Web 11.0 should implement. Somebody call up Al Gore and tell him this.
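
    A toy sketch of that pull model, with hypothetical names (Internet Mail 2000 was never standardized, so this only illustrates the economics: the sender stores the body, and every recipient download costs the sender a fetch):

      class SenderStore:
          """The sender's server keeps the body until recipients pull it."""
          def __init__(self):
              self.bodies = {}

          def post(self, msg_id, body):
              self.bodies[msg_id] = body

          def fetch(self, msg_id):
              # Every recipient download hits the *sender's* storage and bandwidth.
              return self.bodies[msg_id]

      store = SenderStore()
      store.post("m1", "This body never left my server until you asked for it.")
      notification = {"subject": "Hello", "from": "alice", "msg_id": "m1"}
      # The receiver decides from the tiny notification alone, then pulls.
      print(store.fetch(notification["msg_id"]))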
  • Sounds fine, as long as you can tunnel it over IP.
  • Content Management (Score:2, Insightful)

    by architimmy ( 727047 )
    How much of this effort do you think is oriented around building content management and DRM-like tools into the internet at the foundation? I say leave it as it is. If people need something better, let them build it for themselves. The internet just isn't so broken that it couldn't be fixed by simple things like... browsers conforming to standards, etc. When you get into all this talk about multimedia content delivery etc., that's just something you build new networks for, which layer functionality on top of the
  • How will this help me look at boobies more efficiently?
  • by Nicopa ( 87617 ) <[moc.liamg] [ta] [reiamthcil.ocin]> on Thursday March 15, 2007 @06:22PM (#18368739)
    The current internet is too egalitarian for them. In their whitepaper they state:

    [...] A related issue is that the current Internet does not provide support for differentiating between different packets on economic grounds. For example, two packets with the same origin and destination will typically be routed on the same path through the network, even if the packets have very different values.

    "Outrageous! The rich treated the same as the poor!" They want an internet in which a porn movie downloaded by a CEO preempts and disturbs a critical communication from a hospital to an investigation center.

    The internet as we have it is an open field. A dumb, simple protocol, so that people can innovate at the edges. This enabled us to be independent of ISPs and to design new protocols (Gnutella, BitTorrent, etc.). Of course, they now say that this "dumbness" produced lack of innovation:

    Resistance to change is compounded by the end-to-end design philosophy that makes the Internet "smart" at the edges and "dumb" in the middle. While a dumb infrastructure led to rapid growth, it doesn't have the flexibility or intelligence to allow new ideas to be tested and deployed. There are many examples of how the dumbness of the network has led to ossification, such as the long time it took to deploy IPv6, multicast, and the very limited deployment of differentiated qualities of service. Deploying these well-known ideas has been hard enough; deploying radically new architectures is unthinkable today.

    It's not clear to me how making the middle of the internet more complex will ease its growth. It seems like the opposite: more complex middleware will be harder to upgrade and set up. In fact, the main reason the current internet has "ossified" *is* dumbness in the middle, but another kind of dumbness: the commercial companies' dumb administrators and dumb managers, who didn't care to provide us multicast, IPv6, mobile IP, IPsec, etc.

    The Internet as we have it could never have happened if it had been left to the private sector. It's too open; private companies don't like standards. See how the classical internet infrastructure froze when commercial companies took over the internet late last century: HTTP, IMAP, POP, HTML, etc. got stuck at their last versions. The Internet needs a strong *public* presence. Companies can exist and provide service, but the Internet needs a strong presence by the people (in the form of the state..? Universities? I don't know...).

    This group is not aiming at a better, utopian internet. They are trying to recapture what they lost when their CCITT (X.25, X.400, X.500) network wrecked.
