The Internet

Universities Tapped To Build Secure Net 155

Wes Felter writes "InfoWorld reports that the National Science Foundation (NSF) has enlisted five university computer science departments to develop a secure, decentralized Internet infrastructure. I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing. The article quotes Frans Kaashoek from MIT PDOS, which is working on decentralized software such as Chord."

  • hmm, I wonder what the commercial applications of this are?
  • by SirSlud ( 67381 ) on Wednesday September 25, 2002 @03:21PM (#4330554) Homepage
    > I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.

    The only thing that needs fixing is the spammers. You know, so they can't have kids who take up the family business. We could even have Bob Barker provide the PSA at the end of Price Is Right episodes. ("Remember to have your spammers spayed or neutered.")
  • Agents, Security (Score:3, Insightful)

    by goombah99 ( 560566 ) on Wednesday September 25, 2002 @03:21PM (#4330560)
    If you want a decentralized secure system you have to create a system that does not need an omniscient trusted party. In other words you need an agent-based system where each agent's local utility function is such that by optimizing it, it approximates the global utility function. This does not enforce security, but clever design of the local utility function could make for a bobust system even with "evil" agents.
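    As a toy illustration of that idea (my own sketch of a Kelly-style bandwidth-sharing game, not anything from the article or the NSF projects): each agent greedily optimizes only its local utility against a congestion price, and a simple price update drives the selfish choices toward the global optimum.

      # Toy sketch: decentralized agents sharing a link of capacity C.
      # Each agent i maximizes its LOCAL utility log(x_i) - p*x_i, where p is a
      # congestion price. Iterating the price (dual ascent) makes the selfish
      # optima line up with the GLOBAL optimum of sum_i log(x_i) s.t. sum x_i <= C.
      # All numbers (C, agents, step) are illustrative assumptions.

      C = 10.0        # shared link capacity
      agents = 5      # number of independent, selfish agents
      p = 1.0         # initial congestion price
      step = 0.05     # price adjustment rate

      for _ in range(2000):
          # Each agent's local best response: d/dx [log(x) - p*x] = 0  =>  x = 1/p
          rates = [1.0 / p for _ in range(agents)]
          # Price rises if the link is oversubscribed, falls otherwise.
          p = max(1e-6, p + step * (sum(rates) - C))

      print("per-agent rate:", round(rates[0], 3))    # -> ~2.0, i.e. C / agents
      print("total demand:  ", round(sum(rates), 3))  # -> ~10.0, matches capacity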
    • bobust system... bombust system.. it's really bombastic!
      Cool idea.
    • by Zeinfeld ( 263942 )
      If you want a decentralized secure system you have to create a system that does not need an omniscient trusted party.

      So goes the dogma. The problem is that if you stick to that dogma the systems tend to be full of technology that is there just to get rid of the possibility of a single master party.

      A much better approach in practice is to separate out the logical and infrastructure elements of the problem. For example, the Internet currently depends on there being only one logical service set associated with a particular IP address (convoluted phraseology due to the existence of anycast). That is, you do not want there to be two companies that claim to 'own' the same IP address.

      Some folk want it to be possible for two people to share a DNS name. That is not a good idea either.

      What is a good idea is for services like Google to be able to return multiple listings for the same query.

      In other words, there is a need for unique identifiers which for the sake of convenience we call names and addresses. There is also a need for keyword identifiers that can be shared by many parties.

    • The only portion of the Internet that depends on a central authority, IIRC, is DNS.

      But DNS isn't the Internet.

      DNS is just an extension to the 'Net, added on later to make URLs easier to understand. Besides, who says we OSS'ers can't come up with, and implement, a better system?

      The problem with the Internet that I see, now, is the fact that you need manual effort to fix things like routing issues. Anyone remember about three or four years back when two routers in Florida each thought the other one was the destination for all their incoming connections?

      It wouldn't have been so bad if they hadn't told all the other routers in the world that they were where all connections needed to go.

      Then there's also the fact that most of Michigan loses its internet connection whenever Chicago has problems. The very nature of hubs makes them weak points in the Internet infrastructure.
  • Obviously then... (Score:2, Insightful)

    by Anonymous Coward
    I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.
    Clearly they're working on the "secure" aspect of it.
    • Re:Obviously then... (Score:3, Informative)

      by pe1rxq ( 141710 )
      Or something really decentralized...
      Most of the internet indeed is decentralized, but take out the root servers and the internet is gone...

      Jeroen

  • How so? (Score:5, Informative)

    by YanceyAI ( 192279 ) <IAMYANCEY@yahoo.com> on Wednesday September 25, 2002 @03:26PM (#4330602)
    But what is really exciting is that if we succeed, we could change the world.

    If they do succeed, how exactly have they changed the world? Am I missing the point? Do I just not get it? Won't they just have changed the Internet...and in a way that would be seamless to most users? Isn't the general consensus that we are not all that vulnerable?

  • by Kickstart70 ( 531316 ) on Wednesday September 25, 2002 @03:26PM (#4330611) Homepage
    The internet is horribly vulnerable as it is. It's not so much a problem of pure decentralization as it is one of too many people/requests to handle through too tight a pipe if the other pipe goes down.

    As an example...if one day some serious news happened that caused everyone to get on the net at once (Kyoto Earthquake, OJ Simpson on the freeway, Iraq drops a nuclear bomb), and this coincided with a failure of some large piece of hardware along the western coast (under extreme load), the remaining paths for much of this area would be so bogged down as to be useless. Effectively the internet would break under the pressure.

    What needs to happen to avoid the problem here is have many more paths for the data to flow, which requires better hardware and further decentralization (would love to see everyone's cable modem be a small internet router for people's data to travel through). Barring that, with the increased worldwide participation on the net expect that some days you just won't be able to use it.

    Kickstart
    • If everyone's computer were a router, then password-sniffing would get a LOT easier. What the internet most desperately needed was the new fiberoptic backbones, which were put in a while ago.
    • by shren ( 134692 )

      would love to see everyone's cable modem be a small internet router for people's data to travel through

      Is it just me, or is that statement total technobabble? Say I put a router in my house. Where does the data go through it to?

      • > Is it just me, or is that statement total technobabble? Say I put
        > a router in my house. Where does the data go through it to?

        The OP was probably confused about what cable modems do, but he
        brings up an interesting point...

        With a hierarchical routing system like what TCP/IP uses, it can
        pretty much only go upstream to the backbone. It is possible for
        a network to be designed so that there's no backbone, and the data
        can be routed wherever there are open connections -- so that if you
        have ethernet connections to the people in the houses next door and
        a wireless connection to your relatives across town and another to
        your mobile phone (which connects to your phone service provider)
        and a DSL connection to an ISP, data could be routed in one of
        these connections and out the other.

        Such a system would have higher latency, because it would have
        more hops, but the bandwidth could be okay, if _everybody_ runs
        fiber to the house next door. TCP/IP won't work, because it can't
        do routing in that kind of environment; some kind of routing
        protocol would have to be devised that understood the topology
        of such a network (perhaps by using latitude and longitude as
        metrics for the routing, along with other factors such as "how
        busy is the network in that direction"). The really major problem
        with such a system is, how much do you charge your neighbors to
        route their data, and what about the people whose data your
        neighbors are routing (through you), and so on? Unless everyone
        suddenly becomes a fair player (haha), the network protocols
        (or their implementation) would have to include some kind of
        reciprocal quota system or somesuch, which would add complexity
        and drive the latency up, possibly beyond usefulness.
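
        To make the backbone-less idea concrete, here is a minimal sketch (node names and weights are invented; this is not any real protocol) of picking the cheapest route across an arbitrary neighbor-to-neighbor mesh, where the edge weight stands in for whatever composite metric the poster has in mind (latency, load, rough distance):

          import heapq

          # Toy mesh of neighbor-to-neighbor links with no backbone. Edge weights
          # stand in for a composite metric (latency, load, rough geography).
          # All names and numbers are invented for illustration.
          mesh = {
              "me":         {"neighbor_a": 2, "neighbor_b": 5},
              "neighbor_a": {"me": 2, "cousin": 3},
              "neighbor_b": {"me": 5, "cousin": 1, "isp": 4},
              "cousin":     {"neighbor_a": 3, "neighbor_b": 1, "isp": 2},
              "isp":        {"neighbor_b": 4, "cousin": 2},
          }

          def cheapest_path(graph, src, dst):
              """Plain Dijkstra: return (total cost, hop list) of the cheapest route."""
              queue = [(0, src, [src])]
              seen = set()
              while queue:
                  cost, node, path = heapq.heappop(queue)
                  if node == dst:
                      return cost, path
                  if node in seen:
                      continue
                  seen.add(node)
                  for nxt, weight in graph[node].items():
                      if nxt not in seen:
                          heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
              return float("inf"), []

          print(cheapest_path(mesh, "me", "isp"))
          # -> (7, ['me', 'neighbor_a', 'cousin', 'isp'])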
        • some kind of routing protocol would have to be devised that understood the topology of such a network (perhaps by using latitude and longitude as metrics for the routing,

          That smacks of geolocation to me. People don't want others to know their incoming IP addresses, let alone their real coordinates!

          Distributed routing could work, but I can see a lot of ways for such a decentralized approach to break down.
        • by pyite ( 140350 )
          TCP/IP has nothing to do with it. TCP/IP is a routed (routable) protocol. Routing protocols are what do the routing. TCP/IP is fine, and there are already routing protocols that do most of the things you specify. Latitude / Longitude is a horrible metric as it can't really measure anything useful. We already have protocols such as IGRP and EIGRP which use bandwidth, MTU, reliability, delay, and load to calculate a scalar metric. Once again, TCP/IP has nothing to do with it. PLEASE don't go saying it is the problem when it's not.
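          For reference, a sketch of the commonly documented default EIGRP composite metric (K1=K3=1, K2=K4=K5=0), under which only the slowest-hop bandwidth and the cumulative delay matter; the sample interface figures are assumptions for illustration:

            # Default EIGRP composite metric (K1=K3=1, K2=K4=K5=0), as commonly
            # documented:
            #   metric = 256 * (10**7 / min_bandwidth_kbps + total_delay_usec / 10)
            # Reliability and load only enter when non-default K values are enabled.

            def eigrp_default_metric(min_bandwidth_kbps, total_delay_usec):
                bandwidth_term = 10**7 // min_bandwidth_kbps   # scaled inverse bandwidth
                delay_term = total_delay_usec // 10            # delay in tens of microseconds
                return 256 * (bandwidth_term + delay_term)

            # Illustrative path: a 1544 kbps T1 hop (20000 us) plus a 10 Mbps
            # Ethernet hop (1000 us).
            print(eigrp_default_metric(min_bandwidth_kbps=1544,
                                       total_delay_usec=20000 + 1000))
            # -> 2195456  (256 * (6476 + 2100))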
        • arg..

          Why is this modded as insightful? It's absolute waffle. The poster hasn't a breeze about how inter-domain routing works.

          "With a heirarchical routing system like what TCP/IP uses, it can pretty much only go upstream to the backbone."

          Eh? Since when is TCP/IP hierarchical? For that matter, wtf has TCP got to do with it? (other than that some routing protocols use TCP). Backbone? What backbone? Show me where the internet has a backbone. (hint: it doesn't).

          "It is possible for a network to be designed so that there's no backbone, and the data can be routed wherever there are open connections"

          no sh$t sherlock. What an amazing idea. I wonder if the guys that came up with BGP [lindsay.net] thought of it before you. And I wonder if anyone actually uses it. (hint: the entire internet).

          "Such a system would have higher latency, because it would have more hops"

          oooh.. ok.. why's that then?

          "but the bandwidth could be okay, if _everybody_ runs fiber to the house nextdoor."

          ah... so if everyone used fibre things'd go faster? Damn, you should work for an ISP; mine is still trying to persevere with RFC2549 [faqs.org] links to all their peers.

          "TCP/IP won't work, because it can't
          do routing in that kind of environment;some kind of routing protocol would have to be devised that understood the topology of such a network"


          gosh good point, and may i refer you the BGP link above again?

          "The really major problem with such a system is, how much do you charge your neighbors to
          route their data, and what about the people whose data your neighbors are routing (through you), and so on?"


          Hmm.. tricky one, that. I believe some people are, though, trying their best to solve that one (namely the lawyers who draw up contracts, and the accounts dept. of ISPs). I.e., yes, you pay the people you connect to depending on your comparative standing (i.e. customers and traffic carried). If one is small and the other big, well, the small one generally pays the bigger one. Why, one would almost call the smaller one a customer of the larger one. (There's a thought, you could run a business along these lines!) If the two are of equal comparative standing, and can both agree they are, then they might peer with each other for free. For further discussion on this I really should direct you to the legal and accounting depts. of any decent sized (guess what?) ISP.

          In fairness, what you describe is actually generally how the internet works if you substitute ISPs / v. large organisations for your neighbours; it's just that I'm in a sarcastic mood, and you have a lot of reading up to do. Sorry.
    • You dumb troll, the arpanet was designed exactly to be a self-healing system to survive nuclear attack. Time after time, earthquakes and power failures have not killed the internet. And if everyone got on at the same time it might suck in throughput and packet loss, but it would function, because it has done so before.
      • When arpanet was first designed, I don't think there was any thought that it would have as many users as it currently does. In fact, I'm betting that the absolute ceiling on the expected number of total (not concurrent) users would have been 1,000,000 or so.

        Arpanet's main concern, I think, was forming a network that could go through many pathways -- not a network that could handle an endlessly growing amount of bandwidth usage.

        I myself have experienced occasions in which the ISP's backbone provider had part of their network go down, and the access time became painfully slow...something on the order of 200 bytes per second over a DSL modem.

        I don't know all the details, but they have been able to show that excessive usage can slow down access times over the Net.
      • by Zeinfeld ( 263942 ) on Wednesday September 25, 2002 @04:38PM (#4331189) Homepage
        You dumb troll, the arpanet was designed exactly to be a self healing system to survive nuclear attack

        No, it was not, Vint Cerf has dispelled that myth a number of times.

        The Internet does not employ flood-fill routing or any of the technologies that one would want to have available if you wanted to survive a nuclear attack.

        TCP/IP was actually designed with the idea that networks could be quickly assembled with minimal configuration issues and without the need for every node to have access to a central co-ordination point.

        The Internet does actually have one central coordination point, the A root of the DNS service. However that is decoupled from the minute by minute actions of the Internet hosts so that the A root could in theory go down and come back up without a calamity (but nobody wants to try to find out!).

        • You suggest Vint Cerf dispelled the myth a number of times that the Internet was designed to withstand (in this case, gracefully degrade) under a nuclear attack. I'd be most interested to see a link to somewhere where this is quoted. Most textbooks relating to TCP/IP propagate this alleged myth and I'd be interested to see what exactly Vint said.

          I was always under the impression that the decentralized nature of the original network was a design criterion which arose from the desire to withstand (or, more correctly stated, degrade gracefully in) the event of significant damage to the overall infrastructure. Are you suggesting this is not the case? If so, I'd _really_ like to see the sources you have used to arrive at this conclusion.

          • *sigh* Three seconds with Google and the words "cerf myth nuclear" yields:
            1. http://www.usatoday.com/life/cyber/tech/ctg000.htm [usatoday.com]
              "I think that the old arguments that will come up at the (UCLA) conference and have come up over and over is everybody is claiming responsibility for everything at this point," says [Lawrence] Roberts, who was the designer and developer of ARPANET.

              But one thing all agree on is that the Internet was not conceived as a fail-safe communications tool in case of nuclear war, a much-promulgated myth over the years. The Rand Research Institute was developing a study shortly after ARPANET's birth that has been confused with the research-oriented ARPANET and subsequent developments.

              Nuclear war "wasn't the reason we did anything," Roberts says. "That story is just wrong."
            2. http://www2.aus.us.mids.org/mn/1002/myth.html [mids.org]
              [In 1999], Alex McKenzie (BBN 1967-1976) posted the following:

              While it is true that the design of the ARPANET was not at all influenced by concerns about surviving a nuclear attack, it is also true that the designers of the ARPANET and other ARPA-sponsored networks were always concerned about "robustness", which means the ability to keep operating in spite of failures in individual nodes or the circuits connecting them.
            3. http://www.ibiblio.org/pioneers/ [ibiblio.org]
              The architecture of the ARPANET relied heavily on the ideas of Paul Baran who co-invented a new system known as packet-switching. (A British computer scientist, Donald Davies, independently came up with his own theories of packet-switching.) Baran also suggested that the network be designed as a distributed network. This design, which included a high level of redundancy, would make the network more robust in the case of a nuclear attack. This is probably where the myth that the Internet was created as a communications network for the event of a nuclear war comes from. As a distributed network the ARPANET definitely was robust, and possibly could have withstood a nuclear attack, but the chief goal of its creators was to facilitate normal communications between researchers.

            And that's just the first three hits. Why is it that people are all too willing to tell others to provide links, when it's now just as easy to find them yourself? While it's true that the "burden of proof" usually rests with the party proposing an opinion, when that burden becomes as light as it is with the modern Internet, it's irresponsible and unproductive to just lob "links, please" comments without engaging one's own brain.
          • If so, I'd _really_ like to see the sources you have used to arrive at this conclusion.

            Sorry, I don't know where Vint is at the moment, I spoke with him directly. Also Tom Knight, David Clark, quite a few people.

            Try looking on google, cerf myth nuclear internet

            Hit #1 http://www.ibiblio.org/pioneers/

            However, you don't need to take my word for it; go look at the RFCs describing the design of the Internet. The first to contain the word 'nuclear' is 2731, and it is in a mention of where Homer Simpson works:

            Google- nuclear site:ietf.org

            • Slashdot being what it is and having the diverse population that it does, I'm not about to gainsay your claim. However, you must admit that making a claim about a particular authority having made a particular statement and not providing any sort of reference (nor have I yet seen one, nor found one) does call things into question. And unverifiable (in terms of their actual content) personal conversations certainly aren't very useful to the rest of us.

              But then, this is slashdot and just about anything said here could be entirely true, entirely made up, or anything in between. :)

              Thanks for the additional information.
      • >the arpanet was designed exactly to be a self healing system to survive nuclear attack.

        Actually, you are making two statements.
        The first one is only partially true, and only in the context of the second statement.

        The Internet was designed to facilitate communication between scientists and the military even in the event of a major outage (with a nuclear attack in mind).
        It was not designed to be "self healing"; it was designed to degrade gracefully.

        You are surely aware of the differences between a nuclear attack and DoS (be it voluntary or involuntary).
        Both may require redundancy, but the first one a redundancy of transmission paths, the second one a redundancy of sources.

        Not to mention that the actual Internet and the theory have only the standards in common.
        There are central exchange points that most of the traffic is routed through (London and New York come to mind), most root DNS servers are concentrated in the US, and routing tables are statically set (to accommodate economic/political decisions).

        >Time after time, earthquakes and power failures have not killed the internet.

        Not the Internet as a whole. But the current requirements have changed. Best effort is not good enough anymore. We are no longer happy just being able to communicate somehow in the event of a nuclear attack.
        A degradation of data transfer from Tbit/s to some Mbit/s between two continents can be considered a major breakdown.
    • What needs to happen to avoid the problem here is have many more paths for the data to flow, which requires better hardware and further decentralization

      Problem solved [wired.com]
    • The internet is not that vulnerable... except to a nonrandom hazard. For example, a whole lot of internet traffic would not bring the internet down. It would slow it quite a bit... but taking the internet down by flooding with traffic would be incredibly hard. However, by taking out specific nodes on the internet there would be a dramatic effect. The internet, being an aristocratic complex network of sorts, is bound by the small-worlds theory. The health of a network can be defined by the largest degree of separation between any two points on the network. If the degree of separation is low, the network is considered healthy. A nuclear blast on the west coast would drive the degree of separation up a bit, but I think that the network could handle the stress since the west coast is only one portion of the network.

      "Even with nearly half of all the nodes removed, those that remained were still sewn together into one integrated whole."
      (Nexus,Mark Buchanan,p131)

      -daniel
      • Sorry... submitted before I finished my thoughts. There is a lot to be learned in attempting to secure an aristocratic complex network because of its decentralized nature. It would be in no way an easy task. The research should be quite interesting.
        -daniel
    • If you build excess capacity into the system, the use will expand to fill it. Warezd00dz will just download more stuff.

      Also, there's a tradeoff between efficiency and fault-tolerance. You want more connections, but are you willing to pay for it? Are you willing to pay twice the amount every month that you're currently paying, in order to be able to access Slashdot on the day Iraq lobs a nuke?

      If so, then hey, get cable and DSL and some satellite thingie and anything else you can get, and learn how to configure "gated" on your home firewall/router.

  • by Bookwyrm ( 3535 ) on Wednesday September 25, 2002 @03:27PM (#4330629)
    Neither the DNS system (root servers) nor the allocation/control of IP address(ing) is decentralized -- they may be hierarchical, but both still have a root.

    It will be interesting to see if IPv6 will use geographic hierarchies for routing, or even relaxes the hierarchical assignment scheme at all. If your IPv6 suffix is static/fixed (based on your MAC address, say), and your IPv6 prefix is from the current network/area you are in, that will be an interesting tool to let people track devices as they move around/between networks.
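    To make the tracking concern concrete, here is a sketch of the modified EUI-64 construction (the sample MAC address is made up) that IPv6 stateless autoconfiguration can use to derive the interface identifier from a MAC address; because the result never changes, the low 64 bits follow the device from network to network:

      # Modified EUI-64: take the 48-bit MAC, flip the universal/local bit of the
      # first byte, and insert ff:fe in the middle to get a 64-bit interface ID.
      # The sample MAC below is made up for illustration.

      def eui64_interface_id(mac: str) -> str:
          octets = [int(part, 16) for part in mac.split(":")]
          octets[0] ^= 0x02                       # flip the universal/local bit
          eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
          groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
          return ":".join(groups)

      print(eui64_interface_id("00:0a:95:9d:68:16"))   # -> 20a:95ff:fe9d:6816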
    • I don't know how you can say DNS is centralized... There are many "Root" servers (ROOT-A through ROOT-'n') that are scattered around the globe... Each of these servers is capable of handling requests for any DNS query.

      Now the database load/creation is not decentralized, nor do I think it should be. The failure case for the database creation going down is that new domains do not go online till it is back up; not a horrible failure case (unless you just applied for a new domain name, that is). The failure case for multiple people creating multiple databases is that as they go out of sync, you get VERY different answers for the same query depending on which root server you happen to hit. Same thing goes for IP address allocation... Oh well.

      By the way the last issue on IPv6 address allocation (tracking a device using the lower 64 bits of the IPv6 address) has been talked about for many years in the IPv6 development groups. There are solutions, the end result is most people don't care... Oh well...

      • Memory fades, but -- reportedly -- someone at Network Solutions in Herndon loaded the wrong, or bad, DNS tape a few years ago. So, for the better part of a day, lots of helpless little packets went to the wrong place.

        Anyone know if there's some truth in this, or is it another myth of the Internet?
      Neither the DNS system (root servers) nor the allocation/control of IP address(ing) is decentralized -- they may be hierarchical, but both still have a root.

      This is a fundamental aspect of those systems. You want one domain name to map to one (set of) server, and similarly for IP addresses. If you don't have one authority dictating who gets what address, you'll have disagreements and things become less reliable.

      MAC addresses also have one authority behind them, but typically only the manufacturer has to deal with them. MAC addresses actually could be decentralized, since they only need to be unique on the local network, where the others need to be globally unique.

      Anyway, I think the only way to avoid having one (or a small number of) central authority is to have these decisions be part of the spec, i.e. decide on a scheme ahead of time that's unambiguous in nearly all cases.
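      One way to sketch such an ahead-of-time scheme (my own illustration, assuming statistically unique is good enough): every node hashes locally generated key material into a large identifier space, so nobody hands out names, and the birthday bound keeps collisions negligible.

        import hashlib
        import secrets

        # Registry-free identifier assignment: each node derives its own ID by
        # hashing locally generated key material (a stand-in for a public key)
        # into a 160-bit space. Purely illustrative.

        def self_assigned_id() -> str:
            key_material = secrets.token_bytes(32)
            return hashlib.sha1(key_material).hexdigest()   # 160-bit identifier

        print(self_assigned_id())

        # Rough birthday bound: with n nodes in a 2**160 space, the collision
        # probability is about n**2 / 2**161 -- vanishing even for a billion nodes.
        n = 10**9
        print(f"collision probability ~ {n * n / 2**161:.1e}")   # ~3.4e-31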
      Neither the DNS system (root servers) nor the allocation/control of IP address(ing) is decentralized -- they may be hierarchical, but both still have a root.

      Actually the logical registration is co-ordinated in a single logical database. However the implementation is very highly distributed.

      There are multiple DNS root servers and there are even multiple A root servers, but only one A root is active at any one time and they all use the same IP address.

  • DNS Servers (Score:2, Interesting)

    by cadillactux ( 577893 )
    If you think about it, the DNS servers are a "centralized" system. With the root servers, if I query my DNS server at home and it cannot find www.fubar.com, it queries one of the DNS root servers to find which DNS server has the records I need.

    Now imagine what would happen if one of those root servers went down. The other servers have to take the load of the failed server. Now imagine two went down; however unlikely, that puts loads of extra traffic on the remaining servers. After a while, this will add up. Now, I admit, it is probably very unlikely, but with enough traffic even a root server could be /.ed. Or, in a less extreme case, it could take quite a while for my query for www.fubar.com to pass through.

  • ... use Microsoft Passport!?
  • by CXI ( 46706 )
    I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.

    The Internet is designed to be decentralized but it is built to maximize profit.
  • I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.

    Wouldn't the DNS system count as a point of failure? That's what they would like to fix. It would also be a good argument for developing a decentralized system.

  • They might be referring to IP address assignments, DNS, and related protocols, which are all somewhat centralized right now. The secure part is obvious, but more important when specifically applied to the preceding list. Example: You want a secure system so the decentralized DNS information can be trusted.

    Then again, I could be WAY off. :)
  • Wireless lily pads... viral wireless.

    Definitely decentralized.

  • Decentralisation (Score:1, Insightful)

    by Anonymous Coward
    One of the cool things in the future we'll be seeing is decentralised networking through quanta, i.e. quantum particles. Right now, for the most part, the Internet is point-to-point. Your modem connects to an internet provider, which connects to the backbone. If your link to the host provider is severed, you can't reach any other machines, because you only have one link to the Interweb. A pair of quantum particles can be used to exchange information between two computing machines. So, if you had a nicely sized set of pairs of quantum particles, you could reach any machine on the Internet directly (point-to-point) as long as you and it had a matching set of quanta. This means you don't go through 19-30 hops.
  • Interesting pick of universities that are getting the cash. Compare that list to Usnews' 2003 ranking of CS grad schools: 1. Carnegie Mellon University (PA) Massachusetts Institute of Technology Stanford University (CA) University of California-Berkeley 5. University of Illinois-Urbana-Champaign See for yourself @ http://www.usnews.com/usnews/edu/grad/rankings/phd sci/brief/com_brief.php
    • forgot to preview. here it is more legibly:

      Interesting pick of universities that are getting the cash. Compare that list to Usnews' 2003 ranking of CS grad schools:

      1. Carnegie Mellon University (PA)
      Massachusetts Institute of Technology
      Stanford University (CA)
      University of California-Berkeley
      5. University of Illinois-Urbana-Champaign


      See for yourself @
      http://www.usnews.com/usnews/edu/grad/rankings/phd sci/brief/com_brief.php [slashdot.org]
    • Re:The Chosen (Score:2, Interesting)

      by GoBears ( 86303 )
      This is interesting why? The "chosen" contains (1) MIT PDOS and two schools (NYU and UCB) where MIT PDOS alumni have recently been hired, (2) a network shop (ICSI/ACIRI) and (3) a security shop (Rice). Like many such "picks," it reflects human connections and a fit with someone's agenda more than some abstract notion of organizational merit.
      • Re:The Chosen (Score:2, Insightful)

        by chenzhen ( 532755 )
        That is why it is interesting. I suspect it is not the best arrangement, and therefore exploring why it happened as it did can lead to a better understanding of what is right/wrong in the scientific community. Always room for improvement.
  • by Duderstadt ( 549997 ) on Wednesday September 25, 2002 @03:33PM (#4330678)
    Quoth the poster:

    I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.

    Not quite. The primary vulnerability lies within the Root DNS servers, which contain all DNS information for the entire Internet*. IIRC, there are only eleven or twelve of them. And because each replicates its data set to all other Root servers, catastrophic failure of one would bring down all of the others.

    If that ever happens, you can pretty much say goodbye to the Net, at least temporarily.

    *Actually, I think they hold the addresses of all Local DNS servers, which is basically the same thing.

    • The primary vulnerability lies within the Root DNS servers, which contain all DNS information for the entire Internet*. IIRC, there are only eleven or twelve of them. And because each replicates its data set to all other Root servers, catastrophic failure of one would bring down all of the others.

      That would be a stupid way to run the root servers. My understanding is that the root servers are updated from an offline master; the whole point is that if one fails the others still work and can pick up the load.
    • by glwtta ( 532858 ) on Wednesday September 25, 2002 @04:16PM (#4331016) Homepage
      And because each replicates its data set to all other Root servers, catastrophic failure of one would bring down all of the others.

      Um, very untrue - the primary root server replicates the data to the rest. If a non-primary root server goes down, you don't notice it. If the primary one goes down, the function is moved to any one of the rest (and you still don't notice it). Basically something like 3 or 4 of them have to go out before Joe InternetUser will notice any effect, and even then it would be somewhat inconvenient, not "catastrophic". (This is what I remember from some article on the topic a while back - it's not like I know anything about these things.)

    • because each replicates its data set to all other Root servers, catastrophic failure of one would bring down all of the other

      Nonononono, that would be extremely stupid. If one of the root servers went down, the others would pick up the slack, that is part of the redundancy.

      If that ever happens, you can pretty much say goodbye to the Net, at least temporarily.

      Not exactly. Even if all the root DNS servers were wiped from the face of the earth, the caches of all the local DNS servers would still know the addresses for any sites that were recently visited by their clients. So as long as the IPs of the sites didn't change, it would be OK, as the local DNS servers would still know where to look.
      Now if you made a request to a site that the DNS server has never been to before, it would look up to higher DNS servers. If none of them, all the way back to the root servers, knew the answer, you wouldn't be able to get at those sites.

    • This is informative?

      The "root servers" contain the locations of the "top level domain (TLD) servers". They can answer queries such as "where is the DNS for .com?"

      The TLD servers contain locations of the "next-to-top-level domain servers". They can answer queries such as "where is the DNS for IBM.com?"

      IBM's own DNS can answer the question "where is www.ibm.com?".

      The system is already decentralized to the point that an attacker would have to hit numerous targets to have any significant effect. The only "central point" is the "source files" that feed the upper-level DN servers. Decentralizing those sources would turn the Net into anarchy. "I'm the DNS for .com", "no, I'm the DNS for .com".

      I suppose you *could* decentralize the sources, but you would need to implement a system of trust which would have its own center.
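
      The delegation chain described above can be mocked up in a few lines (all server names and addresses below are fictional; a real resolver sends UDP queries rather than doing dictionary lookups):

        # Mock of iterative resolution over the delegation chain: root server ->
        # TLD server -> the domain's own server. All data here is fictional.

        ROOT = {".com": "tld-com.example"}                      # root knows the TLD servers
        SERVERS = {
            "tld-com.example": {"ibm.com": "ns.ibm.example"},   # .com knows IBM's DNS
            "ns.ibm.example": {"www.ibm.com": "192.0.2.10"},    # IBM's DNS knows the host
        }

        def resolve(name: str) -> str:
            tld = "." + name.rsplit(".", 1)[-1]        # "www.ibm.com" -> ".com"
            domain = ".".join(name.split(".")[-2:])    # "www.ibm.com" -> "ibm.com"
            tld_server = ROOT[tld]                     # step 1: ask a root server
            auth_server = SERVERS[tld_server][domain]  # step 2: ask the TLD server
            return SERVERS[auth_server][name]          # step 3: ask the authoritative server

        print(resolve("www.ibm.com"))   # -> 192.0.2.10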

    • 13 actually. And the replication doesn't quite work the way you claim: the 13 are all actually secondaries to a "hidden" primary.

      The main problem with that system, though, is that one mistake on the hidden primary (which has happened) screws up the entire system. And, yes, many many zones were hosed for a while as Network Solutions tried to figure out what the hell they did. And, of course, there are only 13 machines to DoS before all DNS becomes totally useless.
  • Clarification (Score:3, Insightful)

    by I_am_Rambi ( 536614 ) on Wednesday September 25, 2002 @03:34PM (#4330683) Homepage
    DHT is like having a file cabinet distributed over numerous servers

    Is this DHT going to be decentralized so different servers are throughout the country? If so, would yahoo hold files for google? If it is this way, it sounds like my credit card data would be insecure. (Say a p0rn site is holding data for ebay)

    Or is it more like a backup of the server that is in the same room? If it is this way, don't most organizations that host their own site have more than one server with the same data?

    Or am I just totally confused?
    • would yahoo hold files for google? If it is this way, it sounds like my credit card data would be insecure

      A large part of how a system like this is supposed to work is the observation that having someone hold an encrypted and signed piece of data might help you survive a failure or improve performance, but doesn't do the holder any good whatsoever in terms of inspecting or modifying your data. If you consider the encryption to be secure, then this type of system can be just as secure.
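
      For readers wondering what a DHT such as Chord actually buys you, here is a minimal consistent-hashing sketch (the node names are invented): keys and nodes hash onto the same ring, each key lives on the first node at or past its hash, and any participant can locate data without a central index.

        import bisect
        import hashlib

        # Minimal consistent-hashing sketch of the DHT idea behind systems like
        # Chord: a key is stored on the first node whose hash is >= the key's
        # hash, wrapping around the ring. Node names are invented.

        def h(value: str) -> int:
            return int(hashlib.sha1(value.encode()).hexdigest(), 16)

        class Ring:
            def __init__(self, nodes):
                self.points = sorted((h(node), node) for node in nodes)

            def node_for(self, key: str) -> str:
                hashes = [point for point, _ in self.points]
                i = bisect.bisect_right(hashes, h(key)) % len(self.points)
                return self.points[i][1]

        ring = Ring(["node-berkeley", "node-mit", "node-nyu", "node-rice"])
        for key in ("alice.txt", "bob.txt", "carol.txt"):
            print(key, "->", ring.node_for(key))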

  • NIIIP (Score:3, Informative)

    by Gaggme ( 594298 ) on Wednesday September 25, 2002 @03:35PM (#4330694) Homepage Journal
    The infrastructure of the internet has evolved over the past few decades, yet many key parts are still integral to the existence of the Internet.

    After 9/11 several security consultants met in a Senate hearing and demonstrated, in a simulation, how the removal of a few key segments could cripple internet traffic (granted, some of the plan involved a small amount of urban sabotage).

    The internet, if scaled down, could be comparable to the P2P networks. 90% of content on the internet is provided by less than 10% of the computers connected.

    The people at http://www.niiip.org/ have amazing documents with regard to security and how the infrastructure of the internet works. Well worth a read.

    Another good spot for information, though slightly tainted, is http://www.iisweb.com/. They offer a skewed view of security, as well as some examples of "Worst Case Scenarios".
  • The InfoWorld article describes a secure distributed storage system, not just plain old messaging connectivity. There aren't too many such beasts around; usually it's more of a "distributed, secure, usable - pick two" kind of thing. Some of the projects that approach the goal of combining all three actually seem to be sharing the IRIS award - i.e. OceanStore [berkeley.edu] at Berkeley and various projects [nyu.edu] at NYU. I don't know off the top of my head how ICSI and Rice fit in, but I'm about to go check their sites because I'll bet it's interesting.

    • by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Wednesday September 25, 2002 @03:45PM (#4330779) Homepage Journal

      The Rice connection almost certainly has to do with Peter Druschel and Pastry [rice.edu] (for which the other PI seems to be Antony Rowstron of Microsoft Research, interestingly enough). I'm not totally sure of the ICSI connection, but they seem to be closely affiliated with UCB and I know that Ion Stoica works in these areas. OceanStore, CFS/SFS, Pastry, Kademlia - it's definitely a pretty good collection. A lot of the top people in DHT/DOLR (Distributed Hash Table, Distributed Object Location and Routing) research are involved, and I'd love to know how they plan to converge their various efforts toward a common solution.

    • Thanks; the article was a little unclear about what this project is actually about. Part of it talked about the Internet in general, part of it was about DHTs, and buried in there was a mention of storage.
  • not decentralized (Score:2, Informative)

    by RussRoss ( 74155 )
    The design is meant to be decentralized (except for some databases like DNS) but in practice it isn't nearly as decentralized as it should be.

    I remember an anecdote about some company that installed multiple data feeds from multiple vendors to ensure reliability--redundancy is always good, right? Some construction worker was fixing a pipe and cut a fiber cable and sure enough, the company was offline. The different vendors all shared the same fiber so the redundancy wasn't real.

    Tons of traffic gets jammed through a few key distribution routes. I'll bet the typical internet user sends traffic through many routers with no backups--you could probably shut down my home cable modem service by pulling the plug on any of at least half-a-dozen routers before it gets out of the provider's internal network. Redundancy in the backbone is nice, but useless if the endpoints are vulnerable.

    - Russ
    • This is because of tier 1 providers. We're a tier 2 that only has network within Wisconsin, but we have 4 peers that we only route inter-network traffic for, e.g. from us to Norlight, from Norlight to us, but not from us through Norlight to TDS or vice versa. They don't want to give us access as an "equal" peer, because they'd rather charge us for a connection, even though we can provide them with high-capacity long-distance (OC-3) cross-network routes. Since they won't help us, we certainly won't help them by establishing the cross-net routes on our end for free. If any one of our peers offered to take outbound traffic for us we would allow them traffic through our network to our other peers, no problem. Them not wanting to scratch our back is what causes the lack of a decentralized network.
  • by Ashurbanipal ( 578639 ) on Wednesday September 25, 2002 @03:38PM (#4330719)
    > I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.

    Since every release of BIND ties us more thoroughly to ICANN-dominated centralised name control, I'd guess that DNS would be what they are fixing.

    It used to be easy to use alternative roots in conjunction with the "authoritative" (authoritarian?) roots... but now it's one or the other. Caveat - I haven't tried the BIND alternatives yet, there are only so many hours in the day.

    The namespace of the Internet is hosed, even USENET's namespace.namespace.namespace is more useful. And the geographic separation of the root nameservers doesn't matter much when all change authority is vested in a single entity.

    • root schmoot... if the roots went down, IP addressing still works, and BIND still supports the good old HOSTS file...

      IF all of the authoritative roots were nuked, I bet it would be a matter of hours before small networks bounced back up using HOSTS files, and soon had an alternative in place... and IF all the authoritative roots were targeted and taken out, it's going to be pretty obvious what's going on in the world, and thereby easily worked around.

      It doesn't have to be all automagic, we're still smart people behind these screens.

    • What are you talking about?
      Which part of BIND ties you to ICANN roots?

      You just might be a cracker.
  • by fleabag ( 445654 ) on Wednesday September 25, 2002 @03:38PM (#4330723)
    The idea that just because storage is distributed, then it is secure, is only partially true.

    If your data is distributed, and one server gets taken out, then fine, you still have service, and the downed server can be re-synched.

    If your data is distributed, and someone updates it, then the update is faithfully replicated - even if it is wrong. I work for a company that has its Lotus Notes address database distributed across > 50 locations. One of these would probably survive World War III. Unfortunately, a few years ago, none of them survived a deletion, followed by automatic replication. Took us down for a day, because the tapes were only in 1 location.

    Of course, you could skip the replication. Then you have the non-trivial problem of finding the latest version.
    • If your data is distributed, and someone updates it, then the update is faithfully replicated - even if it is wrong.

      Depends on your definition of "wrong"; if your system supports true deletion and a properly authorized entity deleted something, it should be gone from all replicas. Largely for that reason, many of the systems being developed in this area tend toward an archival model where previous updates are supposed to remain available almost forever and deletion just means "mark it as not being part of the current data set".

      Then you have the non-trivial problem of finding the latest version.

      Yep, it's non-trivial all right, but these are just the kinds of people who might be able to beat the problem into submission.
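
      A toy sketch of that archival model (my illustration, not the projects' actual design): every update appends a new version, deletion is just a tombstone record, and reads return the current version while older ones stay around for recovery.

        import time

        # Append-only, versioned store: updates never overwrite, deletion writes a
        # tombstone, and reads return the current version while honoring tombstones.
        # Illustrative only; a real system would also replicate and sign each entry.

        TOMBSTONE = object()
        store = {}   # key -> list of (timestamp, value) entries, oldest first

        def put(key, value):
            store.setdefault(key, []).append((time.time(), value))

        def delete(key):
            store.setdefault(key, []).append((time.time(), TOMBSTONE))

        def get_latest(key):
            entries = store.get(key, [])
            if not entries:
                return None
            _, value = entries[-1]
            return None if value is TOMBSTONE else value

        put("addressbook", "v1")
        put("addressbook", "v2")
        print(get_latest("addressbook"))   # -> v2
        delete("addressbook")
        print(get_latest("addressbook"))   # -> None, yet v1 and v2 remain on disk
        print(len(store["addressbook"]))   # -> 3 entries retained for recovery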

    • I don't think anyone would make the claim that a distributed database would save you from accidentally hitting the 'Delete' button. That's an interface problem, not a security one.

      If someone ELSE hit the Delete button, then it's a security issue, but a different one. The data itself, though, is fairly safe.
  • The same institutions that are fighting the very things that will rely strongly on a decentralized infrastructure (the P2P networks of today and tomorrow) are also researching ways to improve it.

    Ok, I know universities generally aren't against P2P technology, just what it is being used for.
  • by Merik ( 172436 ) on Wednesday September 25, 2002 @03:44PM (#4330771) Homepage
    "The researchers hope that they can create a robust, distributed network that could essentially act as a secure storage system for the Internet. Governments, institutions and businesses worldwide could theoretically choose to place their data in the secure system, which would minimize the effects of outage or attack."

    This seems like it would reduce an individual entity's loss to an attack, with the idea that everyone loses a little rather than one losing a lot. But it also seems, even though the details in this article are lacking, that physical security of boxes would become more important.

    Should the British government, a university, and whoever else trust a small business in San Diego to house its part of the data?

    The only way this would work from a security standpoint would be to make the information that is spread out over 50 or so computers not accessible from the machine it's hosted on. And it seems this would be pretty much impossible (er... hackerd00ds) from a purely software approach...

    Do you trust me with your data? Um... I don't.

      The only way this would work from a security standpoint would be to make the information that is spread out over 50 or so computers not accessible from the machine it's hosted on

      Isn't this what freenet does, by encrypting all the data that is stored on your machine but not telling you the key to decrypt the data on your machine?
    • Freenet is a software-only system that already stores information with strong encryption. Any individual freenet node cannot be reasonably scanned for certain content, IIRC.

      http://freenetproject.org/

      So, as the tagline goes.....
    • Actually, freenet [freenet.org] does exactly that. When you use freenet, you store someone else's data on your computer. However, it's encrypted so you never have any idea what you're storing. And you also don't have the only copy of it, so if you delete all your partial encrypted data, it doesn't cease to exist.
      Should the British government, a university, and whoever else trust a small business in San Diego to house its part of the data?

      If the data is encrypted and signed, why not? They can't inspect it, they can't modify it, the worst they can do is drop it on the floor and that's exactly equivalent to the sort of failure that other parts of the system are designed to deal with. It gets more difficult when there might be a very large number of "rogue servers" that promise to store copies and then don't, but even that scenario need not be fatal and the basic idea is still sound.
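
      A small sketch of why the untrusted host is tolerable (assuming the third-party Python 'cryptography' package; the scenario is my own illustration, not anything the IRIS projects specify): the owner encrypts and authenticates the blob before shipping it, so the host can at worst lose it, never read it or silently alter it.

        from cryptography.fernet import Fernet, InvalidToken

        # The data owner encrypts + authenticates before handing the blob to an
        # untrusted storage host. Assumes the third-party 'cryptography' package;
        # names and data are illustrative.

        owner_key = Fernet.generate_key()     # stays with the owner, never shipped
        owner = Fernet(owner_key)

        blob = owner.encrypt(b"card 4111-xxxx-xxxx-1111")   # what the host stores

        # The host sees only ciphertext, and any tampering is caught on retrieval.
        tampered = blob[:10] + bytes([blob[10] ^ 1]) + blob[11:]
        try:
            owner.decrypt(tampered)
        except InvalidToken:
            print("tampering detected")

        print(owner.decrypt(blob).decode())   # the owner can always recover the data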

  • if these universities are being tapped how can they be secure??? :)
  • That's what it sounds like to me, redundant storage of DNS info and content
  • by Anonymous Coward
    Back in the days of bang paths. That was a while back. The system was peer-to-peer and designed to withstand the nuking of many but not all nodes.

    Now everything is centralized, with backbone pipes, etc.
  • Sounds like they mean they want to store related information in a redundant way so that if one part of the network goes down you can still access the info. Like a RAID array.
  • by DaoudaW ( 533025 ) on Wednesday September 25, 2002 @04:01PM (#4330901)
    C'mon guys, did you even read the article? NSF is not proposing changing the structure of the web; rather, they are hoping to utilize the structure to make data more secure by storing it in a decentralized fashion. No one server will contain enough data to reconstruct the file; any server can crash and the file will still be available.
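    As a toy illustration of the "no one server can reconstruct the file" half of that (simple n-of-n XOR splitting, my own sketch; surviving crashed servers, the other half, would need erasure coding or a threshold scheme instead):

      import secrets

      # n-of-n XOR splitting: each of n servers gets a share that on its own is
      # indistinguishable from random noise, so no single server can reconstruct
      # the file. (Tolerating crashed servers needs erasure coding or a threshold
      # secret-sharing scheme instead; this only shows the secrecy property.)

      def split(data: bytes, n: int) -> list:
          shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
          final = list(data)
          for share in shares:
              final = [a ^ b for a, b in zip(final, share)]
          return shares + [bytes(final)]

      def combine(shares) -> bytes:
          out = list(shares[0])
          for share in shares[1:]:
              out = [a ^ b for a, b in zip(out, share)]
          return bytes(out)

      secret = b"government payroll database"
      shares = split(secret, 5)              # hand one share to each of 5 servers
      print(combine(shares))                 # all 5 together -> the original
      print(combine(shares[:4]) == secret)   # any 4 of them -> False (looks random)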
  • by TheSHAD0W ( 258774 ) on Wednesday September 25, 2002 @04:06PM (#4330936) Homepage
    The current internet was designed to be decentralized, with no specific backbone required; routers would figure out what paths to send what packets over. Scaling-wise, it's been pretty successful. Redundancy-wise, it has been less so. A bad route typically doesn't result in a smooth transfer to another link unless a lot of work has been done to assure it would happen; instead, packets are dropped and communications are badly disrupted.

    I had a perfect example of that happen with my current ISP; after getting terrible communications errors, I called them. Turns out one of their three routes was out; they reset a router, and everything was copacetic. But the other two routes should have been able to handle the traffic. They didn't.

    With the advent of IPv6, the structure of the net becomes even more convoluted, and errors may become even more difficult to handle. In order to have a nice, stable internet, a system for handling broken routes needs to be integrated into the new spec.
  • The Internet is decentralized. The services required to operate it are not. Central administration is required for domain name resolution and routing tables... I'm sure there are other things, but I'm not an Inet expert.

    Perhaps they are trying to make a self organizing network... automatic rerouting, dynamic topology creation, decentralized name resolution. Similar ideas have been discussed with P2P networks.

    Perhaps they are designing a network using P2P concepts.

    And perhaps I should just read the article. :-)
  • as many U networks are run by students that may not have the knowledge/experience that you would find in the private sector? NOT A TROLL, this is an observation of mine...
  • > I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.

    The DNS is what they are decentralizing, among other things. If someone takes out the root domain server, the internet would be pretty screwed right now. If we had an easy system for routing information that wasn't based on DNS, it would change a lot of systems. Web Sites, Email accounts, Instant Messaging, are all dependent on DNS. If this project works, we may be able to say goodbye to AOL's monopoly on IM.

    Who needs a tag line anyways!
  • This sounds more like some politicos trying to 'make a difference' over something that doesn't need to be dealt with.

    NO ONE relies on the Internet for matters of 'life and death', which is the only reason you would go to the expense/aggravation to make something that fault tolerant (can you hear the drums beating out the old 'we must be safe from everything' rhythm?).

    When people couldn't get all the pretty pictures of the last few disasters we have had online, what did they do? They went to a medium better suited for broad and instantaneous information distribution. Television and Radio! What a concept! An amazing technology that is capable of reaching millions of people within range of any one of hundreds of 'broadcast stations' located all over the planet!

    Of course, because the Internet doesn't work that way, there must be something wrong with it, right?

    This reminds me of the telcos demanding QoS for IP, so they could start using a more familiar revenue model for IP and IP services...

  • Anyone who's dealt with memory or disk allocation knows that performance suffers when a resource (file, data string, etc.) is fragmented over several locations on the same physical unit. This is why smart Oracle DBAs define storage parameters when they create objects, why smart Windows users run "Defrag" on their FAT volumes periodically, etc.

    If I understand the (altogether too brief) article correctly, the "secure net" will work by fragmenting a file across multiple servers, in multiple locations. To get the most recent copy of a file, any given node will have to go out onto the network and retrieve all the pieces that aren't stored locally. This is sure to yield much poorer performance than a purely-local retrieval (not to mention the inherent security risk of transferring data over the network...)

    What am I missing here?
  • Please explain how this is decentralized, not to mention secure:

    This file is made available by InterNIC registration services
    under anonymous FTP as
        file                /domain/named.root
        on server           FTP.RS.INTERNIC.NET
    -OR- under Gopher at    RS.INTERNIC.NET
        under menu          InterNIC Registration Services (NSI)
            submenu         InterNIC Registration Archives
        file                named.root

    last update:  Aug 22, 1997
    related version of root zone:  1997082200

    formerly NS.INTERNIC.NET
    .                        3600000  IN  NS    A.ROOT-SERVERS.NET.
    A.ROOT-SERVERS.NET.      3600000      A     198.41.0.4

    formerly NS1.ISI.EDU
    .                        3600000      NS    B.ROOT-SERVERS.NET.
    B.ROOT-SERVERS.NET.      3600000      A     128.9.0.107

    formerly C.PSI.NET
    .                        3600000      NS    C.ROOT-SERVERS.NET.
    C.ROOT-SERVERS.NET.      3600000      A     192.33.4.12

    formerly TERP.UMD.EDU
    .                        3600000      NS    D.ROOT-SERVERS.NET.
    D.ROOT-SERVERS.NET.      3600000      A     128.8.10.90

    formerly NS.NASA.GOV
    .                        3600000      NS    E.ROOT-SERVERS.NET.
    E.ROOT-SERVERS.NET.      3600000      A     192.203.230.10

    formerly NS.ISC.ORG
    .                        3600000      NS    F.ROOT-SERVERS.NET.
    F.ROOT-SERVERS.NET.      3600000      A     192.5.5.241

    formerly NS.NIC.DDN.MIL
    .                        3600000      NS    G.ROOT-SERVERS.NET.
    G.ROOT-SERVERS.NET.      3600000      A     192.112.36.4

    formerly AOS.ARL.ARMY.MIL
    .                        3600000      NS    H.ROOT-SERVERS.NET.
    H.ROOT-SERVERS.NET.      3600000      A     128.63.2.53

    formerly NIC.NORDU.NET
    .                        3600000      NS    I.ROOT-SERVERS.NET.
    I.ROOT-SERVERS.NET.      3600000      A     192.36.148.17

    temporarily housed at NSI (InterNIC)
    .                        3600000      NS    J.ROOT-SERVERS.NET.
    J.ROOT-SERVERS.NET.      3600000      A     198.41.0.10

    housed in LINX, operated by RIPE NCC
    .                        3600000      NS    K.ROOT-SERVERS.NET.
    K.ROOT-SERVERS.NET.      3600000      A     193.0.14.129

    temporarily housed at ISI (IANA)
    .                        3600000      NS    L.ROOT-SERVERS.NET.
    L.ROOT-SERVERS.NET.      3600000      A     198.32.64.12

    housed in Japan, operated by WIDE
    .                        3600000      NS    M.ROOT-SERVERS.NET.
    M.ROOT-SERVERS.NET.      3600000      A     202.12.27.33

    End of File
  • Actually one aspect of the 'Net -- network access points -- is remarkably centralised. I've read that anywhere from 40% to 80% of traffic in North America passes through UUNet's network. If UUNet goes down, anywhere from 2/5 to 4/5 of traffic in North America would, if not grind to a halt, be slowed down tremendously. And that's a scary thought.
