Networking Communications Supercomputing The Internet

Grand Challenges in Networks for the Next 15 Years

jameshowison writes "Some of the researchers responsible for the Internet, including Bob Braden of ISI and David D. Clark from MIT, have outlined what they see as the grand challenges for internetworking and computation in the next 10-15 years (PDF). The report from the IRTF's 'End-to-End Research Group' discusses the question, 'How might the computing and communications world be materially different in 10 to 15 years, and how do we get there?' From a universal system for location, to small-area networks, to operation in time of crisis, to software radio and an agenda to reduce the energy required for communications, this document tries to imagine what will be like packet-switching was for the past 15 years."
Comments Filter:
  • by suso ( 153703 ) * on Sunday April 17, 2005 @12:53PM (#12262856) Journal
    Apparently, using HTML for documents is still a major challenge.

    It only takes one person or company to implement things wrong and break protocol, and then you have a mess. That is the grand challenge.
  • by elid ( 672471 ) <eli.ipod@g m a il.com> on Sunday April 17, 2005 @01:02PM (#12262909)
    that this isn't one of those randomly generated MIT papers?
    • haha, 200 years of credibility just got flushed down the drain.
    • ... you aren't posting here just to pimp your FreePSP scheme?
    • I thought the same thing. The language is at least fluffy, if not bizarre. The document is peppered with odd sentences like this:

      "The older members of the data communications research community spent some of their formative years in the time when data communications was being revolutionized by the creation of a new paradigm: packet switching."

      I am a research professional in the areas of data communication and semiconductors, and I find this document very confusing and, well, weird, and perhaps even silly.
  • by Anonymous Coward on Sunday April 17, 2005 @01:10PM (#12262965)
    this document tries to imagine what will be like packet-switching was for the past 15 years.

    I'm trying to imagine what this sentence means.. and it might take me 10-15 years.

    • In the last 15 years there has been a huge switch from circuit-based networks to packet-based networks, driven largely by TCP/IP over Ethernet.
  • by Animats ( 122034 ) on Sunday April 17, 2005 @01:10PM (#12262966) Homepage
    The major Internet applications, by volume, are spam, piracy, and advertising. This trend will continue. By 2020, 98% of all Internet traffic will be illegal in some way.
  • by samael ( 12612 ) <Andrew@Ducker.org.uk> on Sunday April 17, 2005 @01:23PM (#12263049) Homepage
    They claim there isn't an emergency broadcast system - but we have Slashdot! The second anything big goes wrong, there it is!
    • Do you think that when Skynet becomes self-aware, it wouldn't acquire admin rights on /. and lock out all postings about "killer terminator robots seen motorbiking round LA" or "help, the military grid is now sentient"?

      Instead, assuming that the /. audience are the people who stand a chance of stopping it, it would probably distract them all with different postings, like "free video p0rn service", or introduce a special distro of linux which looked like a descendant of debian but turned out to be a node in the s
    • With built-in fail-over redundancy!
  • by Beatbyte ( 163694 ) on Sunday April 17, 2005 @01:23PM (#12263050) Homepage
    Because it's still going to be the gap between LAN and WAN speeds that we'll be working on forever.

    I've got a LAN setup running 200x as fast as the fastest WAN/Internet connection readily available (short of a special-order, uber-expensive DS3). And at the pace we're going, the US is getting slower and slower as far as Internet connections go.

    Right now I can completely rewire my office and home for $5k with state of the art, high end network components and have it done in less than a week. I can't get close to those speeds with my net connection for 4x that price ($20k/year).

    That being said, there is still hope somewhere [utopianet.org]
    • The document focuses on technical challenges, not business or political ones. But you're right that technical innovation is useless unless there is a business and political climate that can foster it.

      • If I look five years ahead, I worry about how to design networks and protocols that are defensible against MPAA, RIAA and generic lawsuits.

        A lot of the ad-hoc stuff in the PDF looks a bit like something that must terrify the {MPA,RIA}A lawyers who would like to make DRM a requirement of all future network topologies and protocols.

        TCP was an implicit political statement. It said "we don't need telcos to make us pay for every second of a virtual circuit", which is the way the OSI architecture was designed.

        Future
      • The document focuses on technical challenges, not business or political ones. But you're right that technical innovation is useless unless there is a business and political climate that can foster it.

        Right. At one point the article mentions "a range of anti-social behavior, including spam, spyware and adware, and phishing." I personally think vigilante-style copyright enforcement should be at the top of the list of anti-social behaviors. DRM issues are probably going to have more impact on network design
    • Right now I can completely rewire my office and home for $5k with state of the art, high end network components and have it done in less than a week. I can't get close to those speeds with my net connection for 4x that price ($20k/year).

      You say that like there's no difference between a LAN and a WAN. The reason your LAN setup is so much cheaper is that none of your cable runs are THREE MILES LONG. You think teh intarweb runs over 100BaseTX Ethernet cable everywhere?

      • Not only the distance, but the latency. Switching 1500-byte packets locally between two computers is trivial.

        Try that with 300,000 subscribers ...

        Tom
      • by Beatbyte ( 163694 ) on Sunday April 17, 2005 @02:22PM (#12263462) Homepage
        That's my point. Your excuse is distance. My point is, I don't care. I want speed. We need to quit putting so much effort into making LANs faster and focus on WAN/Internet connections.

        As far "teh intarweb" you speak of... nope, I don't think it runs on "100BTX Ethernet cable".. I've been in the ISP business for 10 years now and I'm pretty familiar with both ends of the Internet. The first being the provider end. The second, being the customer's end. Considering the customers pay my bills, I'm more worried about providing them with what they want.
  • IPv6 (Score:4, Insightful)

    by Anonymous Coward on Sunday April 17, 2005 @01:23PM (#12263051)
    The biggest challenge will be moving the entire Internet onto IPv6.
    • IPv6 is useful, and at some point we'll need the address space, but basically until Cisco and Juniper make routers that perform well using IPv6, nobody feels motivated to move wholesale - almost half the IPv4 space is still unused. Microsoft is doing a bunch of IPv6 work that'll help the chicken-and-egg problem in a couple of years, but without a killer app, there's no real motivation.

      The big problem I've seen with IPv6 is that its goals not only included bigger address space, which we've been able to s

      • One of the purposes of the huge address space is to divide it up so that there won't be such a glut of small class C's that have to be kept in the routing tables; instead it'll be much more aggregatable.
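
        To put a rough picture on that, here's a small sketch (using Python's standard ipaddress module; the prefixes are made up for illustration) of how adjacent allocations collapse into a single routing-table entry:

        ```python
        # Sketch: adjacent allocations announced separately vs. one aggregate.
        # The prefixes are illustrative, not real allocations.
        import ipaddress

        routes = [
            ipaddress.ip_network("2001:db8:0::/48"),
            ipaddress.ip_network("2001:db8:1::/48"),
            ipaddress.ip_network("2001:db8:2::/48"),
            ipaddress.ip_network("2001:db8:3::/48"),
        ]

        # Four table entries collapse into one covering announcement.
        aggregated = list(ipaddress.collapse_addresses(routes))
        print(aggregated)  # [IPv6Network('2001:db8::/46')]
        ```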
  • Google cache, HTML version [64.233.167.104]

    Would've posted anonymously, but apparently excessive bad posting has occurred from my IP or subnet.
  • Wait a second... (Score:2, Insightful)

    by bungley ( 768242 )
    In 10 years, our communications infrastructure should be based on an architecture that provides a coherent framework for security, robust operation in the face of attack, and a trustworthy environment for services and applications.
    Wasn't that what it was designed for in the first place?
    • No. It was designed to make DARPA research more efficient by improving communication and sharing resources and tools.
    • Actually there was some thought about malicious users in the first place. Basically they thought that malicious users could always be tracked down. This line of thought was something like:
      • If some user behaves too badly, the host operator will disable access for that user.
      • If some host behaves too badly, the local network operator will disable access for that host.
      • If some local network behaves too badly, the network operator will disable access for that local network.
      • If some network behaves too badly, the other
  • We need to improve interpersonal communication via computer internetworking. And until Punch You In The Face over Ethernet (PYITFoE) is widely available, we will only ever scratch the surface of the rich tapestry of human interaction.

  • by Anonymous Coward
    I for one welcome our new visionary, low cost, ubiquitous, location aware, trustworthy, special needs supporting, quantum coherence preserving, intelligent, energy-efficient, cyber-world overlords.
  • In 15 years, the way the world's going (development of EMP bombs etc.), we'll have to fall back to RFC 2549 [faqs.org]

    Not that I'm cynical or anything ;)
  • -identify a person
    -identify a computer
    -move a file (yes an email can be wrapped as a file)
    -non-lossy streaming (IM, telnet,...)
    -lossy streaming (MMedia)
    Now if we could replace the thirty-eleven thingies ending in "p" with the above 5...
  • Location technology (Score:3, Interesting)

    by jessecurry ( 820286 ) <jesse@jessecurry.net> on Sunday April 17, 2005 @01:43PM (#12263161) Homepage Journal
    The little section about location technology was very interesting. I love using my GPS; it has opened up a new sport to me and allows me to do some very interesting things, but I am bothered by the fact that it only works outdoors. Having a GPS-like system that worked everywhere would be very cool, and integration with existing devices might bring the "smart home" I've been hearing about for the past 15 years into reality.
    The one part missing from my home automation system is the ability to autonomously process input. I have to use a remote control for events that aren't based on a repeating schedule. It would be nice to be able to walk into a room and have my wrist watch alert my automation server as to my whereabouts, then have the lighting dynamically adjust to me.
    • GPS receiver technology has improved dramatically in recent years. New receiver designs have greatly improved the sensitivity of GPS devices. The new receivers can acquire the GPS signal in many places that were hopeless with older receiver designs. Much of this research was spurred by the need to identify the location of cellular phone handsets for emergency services. It will take a few years for the new chips to show up in production hardware. When they do, you can expect GPS to work in many more places.
    • Having a GPS-type device which operates everywhere is a great idea; however, the problem with the current system is that the receiver needs to be able to see the satellites (i.e. the signal is line-of-sight only), and buildings do a fairly good job of blocking the satellite signal (a rough link-budget sketch follows the list below).

      There are three solutions to this with the current technology:

      • Pump up the power output from the satellites. Although possible, I doubt that the satellite owners will boost the transmission power.
      • Increase
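
      As promised above, a rough link-budget sketch of why the signal is so weak to begin with (standard free-space path loss formula; the altitude is approximate):

      ```python
      # Sketch: free-space path loss from a GPS satellite at the L1 frequency.
      # FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45
      import math

      d_km = 20200      # approximate GPS orbital altitude
      f_ghz = 1.57542   # GPS L1 carrier frequency

      fspl_db = 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45
      print(f"free-space path loss: {fspl_db:.1f} dB")  # roughly 182 dB
      ```

      Indoor use has to recover additional building-penetration loss on top of that, which is why the improved receiver sensitivity mentioned in the sibling comment matters so much.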
  • Van Jacobson? (Score:1, Interesting)

    by Anonymous Coward
    He's listed as a co-author.

    Wasn't he dead?
  • by jgold03 ( 811521 )
    Switch to IPv6

    Multimedia "over IP" will not become mainstream without virtual circuit technologies. Also, we are being lazy and letting NAT take care of the lack of addressing provided by IPv4.
    • I agree that we are being lazy and letting NAT take care of addressing (as opposed to IPv6).

      I would think that rather than virtual circuits what we need is effective flow control (but maybe that is what you mean....).

      The big problem here in many ways is the unsuitability of UDP for multimedia flows, as it has no built-in congestion control, and apps like Skype abuse this to some degree.

      I am working away with a protocol that is meant to help solve this (DCCP) which is at draft RFC state at pre
      • I would think that rather than virtual circuits what we need is effective flow control (but maybe that is what you mean....).

        Flow control is pretty much the same thing as virtual circuits. When two end hosts need a path with a certain amount of bandwidth across the Internet, the routers have to maintain state information about that connection in order to provide its guarantees. IPv6's proposed solution is to add a flow # to the packet header.
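
        For reference, that flow label is a 20-bit field right at the top of the fixed IPv6 header. A minimal sketch of where it sits (pure bit-packing, no real networking; the label value is made up):

        ```python
        # Sketch: the first 32 bits of an IPv6 header -- version, traffic
        # class, and the 20-bit flow label a sender can use to tag a flow.
        import struct

        version = 6          # 4 bits
        traffic_class = 0    # 8 bits (DSCP + ECN)
        flow_label = 0xBEEF  # 20 bits, illustrative value

        first_word = (version << 28) | (traffic_class << 20) | flow_label
        print(struct.pack("!I", first_word).hex())  # '6000beef'
        ```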
  • Low-latency interactive services (VoIP, video conferencing, games, more esoteric things ...) are not on their radar. Surprising ...
  • by G4from128k ( 686170 ) on Sunday April 17, 2005 @02:56PM (#12263646)
    A few ideas:
    1. QoS: The wider use of quality-of-service metrics to regulate bandwidth/latency/drop-rate will spread from the backbone to the backplane. QoS will be assigned not just to packets or network streams, but extended to applications, processes, and threads (a rough marking sketch follows this list).
    2. authentication-intensive network: Anti-spam, anti-phish, anti-piracy initiatives will deanonymize the network. Expanding liability may force commercial providers of network infrastructure to adopt so-called trusted computing initiatives. Counterfeiting a header may become a crime similar to counterfeiting money, because both crimes degrade the public trust in the system.
    3. militarization of networks: When will network security become so important to the national interest that the government deploys .mil computers to DDoS offending servers? If the economy runs on the net, someone will become the defender of that infrastructure.
    4. physical layer/application layer dichotomy: Currently the value of the network is in the application, but the cost is in the physical layer. This leads to the problem of price wars among infrastructure service providers, or the war over municipal wifi. Perhaps an alternate approach would more closely link the value and costs of networking.
    5. Multiple IPs per device: I wonder if the move to multiple cores will push systems toward multiple IPv6 addys per machine. Technologies such as IBM's cell architecture support the potential for multiple OSes running on a single hardware platform. With such a large IPv6 address space, it may be easier to give each running OS instance its own IP address, rather than try to share an address and try to use a meta OS to share network resources. This, in turn, may lead to a proliferation of addresses that fill the larger space.
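
    On point 1, per-packet QoS marking already exists at the socket level; a rough sketch of what it looks like today (standard setsockopt call; the DSCP value and address are just examples, and whether any router honors the marking is up to the network):

    ```python
    # Sketch: mark a UDP socket's traffic as EF ("expedited forwarding").
    import socket

    EF_DSCP = 46          # expedited forwarding class
    tos = EF_DSCP << 2    # DSCP occupies the top 6 bits of the ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    sock.sendto(b"marked packet", ("192.0.2.1", 9999))  # example address
    ```

    The harder part of the prediction is extending that kind of marking beyond packets to applications, processes, and threads.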
  • for me that's a pretty big assumption
    • It's definitely a radical assumption, which is why it's the kind of problem for an academic "Grand Challenge" rather than incremental private-sector development, either by businesses or typical hobbyists. The issues that make it worth considering
      • Research and development of quantum computers has progressed far enough that they *might* actually become possible and practical in 10-20 years.
      • If they do work, and scale up adequately, they can do amazingly different kinds of computation than conventional machines.
  • by ItWasThem ( 458689 ) on Sunday April 17, 2005 @04:20PM (#12264098)
    The main issue I take with this paper is that it proposes a series of solutions without talking about any relevant application or problem that they will solve, except in an occasionally very generic way ("We need better security," for example).

    That, and the fact that it seems to have been written with the longest, most convoluted sentences possible.

    Major change happens when an intelligent person solves a very real problem in a way that seems obvious once it's completed but that few others would have come up with.

    This paper starts by dissing incremental improvements and then goes on to rehash... wait for it... incremental improvements. How can you compare "better security" to Packet Switching in terms of revolutionary technology?

    In my opinion, major advances in the next 10-15 years will be driven by content-based applications. Technology is cheap and is becoming a commodity. It will not make any more major leaps until there is a content driver and industry to take it there.

    For example, when we can all print flat panels for wall paper what will we have to display on them? An entirely new content and distribution industry will emerge to fill these and other voids and THEN technology will again stride ahead.

    Just my .02
  • I know my grand challenge for the next 15 years is to get laid! Who's with me?
