Universities Tapped To Build Secure Net 155
Wes Felter writes "InfoWorld reports that the National Science Foundation (NSF) has enlisted five university computer science departments to develop a secure, decentralized Internet infrastructure. I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing. The article quotes Frans Kaashoek from MIT PDOS, which is working on decentralized software such as Chord."
wow, interesting (Score:1)
fix the spammers (Score:5, Funny)
The only thing that needs fixing is the spammers. You know, so they can't have kids who take up the family business. We could even have Bob Barker provide the PSA at the end of Price Is Right episodes. ("Remember to have your spammers spayed or neutered.")
Agents, Security (Score:3, Insightful)
Re:Agents, Security (Score:1)
Cool idea.
Re:Agents, Security (Score:1)
[though "bobust" is pretty cool isn't it. ought to be a word. maybe it means a Bozo-proof robust system. I better patent it now.]
Re:Agents, Security (Score:3, Insightful)
So goes the dogma. The problem is that if you stick to that dogma, the systems tend to be full of technology that is there just to get rid of the possibility of a single master party.
A much better approach in practice is to separate out the logical and infrastructure elements of the problem. For example, the Internet currently depends on there being only one logical service set associated with a particular IP address (convoluted phraseology due to the existence of anycast). That is, you do not want two companies claiming to 'own' the same IP address.
Some folk want it to be possible for two people to share a DNS name. That is not a good idea either.
What is a good idea is for services like Google to be able to return multiple listings for the same query.
In other words, there is a need for unique identifiers which for the sake of convenience we call names and addresses. There is also a need for keyword identifiers that can be shared by many parties.
DNS isn't the Internet. (Score:2)
But DNS isn't the Internet.
DNS is just an extension to the 'Net, added on later to make URLs easier to understand. Besides, who says we OSS'ers can't come up with, and implement, a better system?
The problem I see with the Internet now is that fixing things like routing issues takes manual effort. Anyone remember about three or four years back when two routers in Florida each thought the other one was the destination for all their incoming connections?
It wouldn't have been so bad if they hadn't told all the other routers in the world that they were where all connections needed to go.
Then there's also the fact that most of Michigan loses its internet connection whenever Chicago has problems. The very nature of hubs makes them weak points in the Internet infrastructure.
Obviously then... (Score:2, Insightful)
Re:Obviously then... (Score:3, Informative)
Most of the internet indeed is decentralized, but take out the root servers and the internet is gone...
Jeroen
How so? (Score:5, Informative)
If they do succeed, how exactly have they changed the world? Am I missing the point? Do I just not get it? Won't they just have changed the Internet...and in a way that would be seamless to most users? Isn't the general consensus that we are not all that vulnerable?
Re:How so? (Score:1)
-s
The broken internet (Score:4, Insightful)
As an example...if one day some serious news happened that caused everyone to get on the net at once (Kyoto Earthquake, OJ Simpson on the freeway, Iraq drops a nuclear bomb), and this coincided with a failure of some large piece of hardware along the western coast (under extreme load), the remaining paths for much of this area would be so bogged down as to be useless. Effectively the internet would break under the pressure.
What needs to happen to avoid the problem here is to have many more paths for the data to flow, which requires better hardware and further decentralization (I would love to see everyone's cable modem be a small internet router for people's data to travel through). Barring that, with the increased worldwide participation on the net, expect that some days you just won't be able to use it.
Kickstart
Re:The broken internet (Score:1)
Re:The broken internet (Score:3, Insightful)
would love to see everyone's cable modem be a small internet router for people's data to travel through
Is it just me, or is that statement total technobabble? Say I put a router in my house. Where does the data go through it to?
Re:The broken internet (Score:3, Insightful)
> a router in my house. Where does the data go through it to?
The OP was probably confused about what cable modems do, but he brings up an interesting point...

With a hierarchical routing system like what TCP/IP uses, it can pretty much only go upstream to the backbone. It is possible for a network to be designed so that there's no backbone, and the data can be routed wherever there are open connections -- so that if you have ethernet connections to the people in the houses next door, a wireless connection to your relatives across town, another to your mobile phone (which connects to your phone service provider), and a DSL connection to an ISP, data could be routed in one of these connections and out the other.

Such a system would have higher latency, because it would have more hops, but the bandwidth could be okay, if _everybody_ runs fiber to the house next door. TCP/IP won't work, because it can't do routing in that kind of environment; some kind of routing protocol would have to be devised that understood the topology of such a network (perhaps by using latitude and longitude as metrics for the routing, along with other factors such as "how busy is the network in that direction"). The really major problem with such a system is: how much do you charge your neighbors to route their data, and what about the people whose data your neighbors are routing (through you), and so on? Unless everyone suddenly becomes a fair player (haha), the network protocols (or their implementation) would have to include some kind of reciprocal quota system or somesuch, which would add complexity and drive the latency up, possibly beyond usefulness.
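A rough sketch of the kind of metric described above (purely hypothetical, not any deployed protocol): score each candidate next hop by how much closer it gets the packet to the destination's coordinates, penalized by how busy that link currently is.

import math

def distance_km(a, b):
    # Great-circle distance between two (latitude, longitude) points given in degrees.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def pick_next_hop(dest, neighbors):
    # neighbors: list of ((lat, lon), link_utilization in 0..1), one per directly
    # connected node. Lowest combined cost of remaining distance and congestion wins.
    def cost(neighbor):
        coords, utilization = neighbor
        return distance_km(coords, dest) * (1 + 4 * utilization)
    return min(neighbors, key=cost)

# Example: destination near Chicago; a busy link toward Detroit loses to an idle one toward St. Louis.
print(pick_next_hop((41.9, -87.6), [((42.3, -83.0), 0.9), ((38.6, -90.2), 0.1)]))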
Hmmm.... (Score:2)
That smacks of geolocation to me. People don't want others to know their incoming IP addresses, let alone their real coordinates!
Distributed routing could work, but I can see a lot of ways for such a decentralized approach to break down.
Re:The broken internet (Score:2, Informative)
Re:The broken internet (Score:2)
Why is this modded as insightful? It's absolute waffle. The poster hasn't a breeze about how inter-domain routing works.
"With a heirarchical routing system like what TCP/IP uses, it can pretty much only go upstream to the backbone."
Eh? Since when is TCP/IP hierarchical? For that matter, wtf has TCP got to do with it? (other than that some routing protocols use TCP). Backbone? What backbone? Show me where the internet has a backbone. (Hint: it doesn't.)
"It is possible for a network to be designed so that there's no backbone, and the data can be routed wherever there are open connections"
no sh$t sherlock. What an amazing idea. I wonder if the guys that came up with BGP [lindsay.net] thought of it before you. And I wonder if anyone actually uses it. (hint: the entire internet).
"Such a system would have higher latency, because it would have more hops"
oooh.. ok.. why's that then?
"but the bandwidth could be okay, if _everybody_ runs fiber to the house nextdoor."
ah... so if everyone used fibre things'd go faster? Damn you should work for an ISP, mine are still trying to persevere with RFC2549 [faqs.org] links to all their peers.
"TCP/IP won't work, because it can't
do routing in that kind of environment;some kind of routing protocol would have to be devised that understood the topology of such a network"
gosh good point, and may i refer you the BGP link above again?
"The really major problem with such a system is, how much do you charge your neighbors to
route their data, and what about the people whose data your neighbors are routing (through you), and so on?"
Hmm... tricky one, that. I believe some people are, though, trying their best to solve that one (namely the lawyers who draw up contracts, and the accounts dept. of ISPs). I.e., yes, you pay the people you connect to depending on your comparative standing (i.e. customers and traffic carried). If one is small and the other big, well, the small one generally pays the bigger one; why, one would almost call the smaller one a customer of the larger one (there's a thought, you could run a business along these lines!). If the two are of equal comparative standing, and can both agree they are, then they might peer with each other for free. For further discussion on this I really should direct you to the legal and accounting depts. of any decent-sized (guess what?) ISP.
In fairness, what you describe is actually generally how the internet works if you substitute ISPs / v. large organisations for your neighbours; it's just that I'm in a sarcastic mood, and you have a lot of reading up to do. Sorry.
You dont know what you are talking about (Score:2, Insightful)
Re:You dont know what you are talking about (Score:2)
Arpanet's main concern, I think, was forming a network that could go through many pathways -- not a network that could handle an endlessly growing amount of bandwidth usage.
I myself have experienced occasions in which the ISP's backbone provider had part of their network go down, and the access time became painfully slow...something on the order of 200 bytes per second over a DSL modem.
I don't know all the details, but they have been able to show that excessive usage can slow down access times over the Net.
Re:You dont know what you are talking about (Score:4, Interesting)
No, it was not; Vint Cerf has dispelled that myth a number of times.
The Internet does not employ flood-fill routing or any of the technologies that one would want to have available if you wanted to survive a nuclear attack.
TCP/IP was actually designed with the idea that networks could be quickly assembled with minimal configuration issues and without the need for every node to have access to a central co-ordination point.
The Internet does actually have one central coordination point, the A root of the DNS service. However that is decoupled from the minute by minute actions of the Internet hosts so that the A root could in theory go down and come back up without a calamity (but nobody wants to try to find out!).
Re:You dont know what you are talking about (Score:3, Interesting)
I was always under the impression that the decentralized nature of the original network was a design criterion which arose from the desire to withstand (or, more correctly stated, degrade gracefully in the event of) significant damage to the overall infrastructure. Are you suggesting this is not the case? If so, I'd _really_ like to see the sources you have used to arrive at this conclusion.
Re:You dont know what you are talking about (Score:2)
"I think that the old arguments that will come up at the (UCLA) conference and have come up over and over is everybody is claiming responsibility for everything at this point," says [Lawrence] Roberts, who was the designer and developer of ARPANET.
But one thing all agree on is that the Internet was not conceived as a fail-safe communications tool in case of nuclear war, a much-promulgated myth over the years. The Rand Research Institute was developing a study shortly after ARPANET's birth that has been confused with the research-oriented ARPANET and subsequent developments.
Nuclear war "wasn't the reason we did anything," Roberts says. "That story is just wrong."
While it is true that the design of the ARPANET was not at all influenced by concerns about surviving a nuclear attack, it is also true that the designers of the ARPANET and other ARPA-sponsored networks were always concerned about "robustness", which means the ability to keep operating in spite of failures in individual nodes or the circuits connecting them.
The architecture of the ARPANET relied heavily on the ideas of Paul Baran, who co-invented a new system known as packet-switching. (A British computer scientist, Donald Davies, independently came up with his own theories of packet-switching.) Baran also suggested that the network be designed as a distributed network. This design, which included a high level of redundancy, would make the network more robust in the case of a nuclear attack. This is probably where the myth that the Internet was created as a communications network for the event of a nuclear war comes from. As a distributed network the ARPANET definitely was robust, and possibly could have withstood a nuclear attack, but the chief goal of its creators was to facilitate normal communications between researchers.
And that's just the first three hits. Why is it that people are all too willing to tell others to provide links, when it's now just as easy to find them yourself? While it's true that the "burden of proof" usually rests with the party proposing an opinion, when that burden becomes as light as it is with the modern Internet, it's irresponsible and unproductive to just lob "links, please" comments without engaging one's own brain.
Re:You dont know what you are talking about (Score:2)
A link that does discuss the quote (Score:2)
Note he does mention that being Defense-funded, it did have to display some potential for some military usage. So I would agree that it wasn't developed "to survive a nuclear war" but it was likely funded because it could serve a military purpose (command and control capability enhancement).
Re:You dont know what you are talking about (Score:2)
Sorry, I don't know where Vint is at the moment, I spoke with him directly. Also Tom Knight, David Clark, quite a few people.
Try looking on google, cerf myth nuclear internet
Hit #1 http://www.ibiblio.org/pioneers/
However, you don't need to take my word for it; go look at the RFCs describing the design of the Internet. The first to contain the word 'nuclear' is RFC 2731, and it is in a mention of where Homer Simpson works:
Google- nuclear site:ietf.org
Re:You dont know what you are talking about (Score:2)
But then, this is slashdot and just about anything said here could be entirely true, entirely made up, or anything in between.
Thanks for the additional information.
Re:You dont know what you are talking about (Score:2)
Actually, you are making two statements.
The first one is only partially true, and only in the context of the second statement.
The Internet was designed to facilitate the communication between scientists and military even in the event of a major outage (a nuclear attack in mind).
It was not designed to be "self healing", it was designed to degrade gracefully.
You are surely aware of the differences between a nuclear attack and a DoS (be it voluntary or involuntary).
Both may require redundancy, but the first requires a redundancy of transmission paths, the second a redundancy of sources.
Not to mention that the actual Internet and the theory have only the standards in common.
There are central exchange points through which most of the traffic is routed (London and New York come to mind), most root DNS servers are concentrated in the US, and routing tables are statically set (to accommodate economic/political decisions).
>Time after time, earthquakes and power failures have not killed the internet.
Not the Internet as a whole. But the current requirements have changed. Best Effort is not good enough anymore. We are no longer happy just being able to communicate somehow in the event of a nuclear attack.
A degradation of data transfer from Tbit/s to some Mbit/s between two continents can be considered as a major breakdown.
Re:The broken internet (Score:1)
Problem solved [wired.com]
Re:The broken internet (Score:1)
"Even with nearly half of all the nodes removed, those that remained were still sewn together into one integrated whole."
(Nexus,Mark Buchanan,p131)
-daniel
Re:The broken internet (Score:1)
-daniel
Re:The broken internet (Score:2)
Also, there's a tradeoff between efficiency and fault-tolerance. You want more connections, but are you willing to pay for it? Are you willing to pay twice the amount every month that you're currently paying, in order to be able to access Slashdot on the day Iraq lobs a nuke?
If so, then hey, get cable and DSL and some satellite thingie and anything else you can get, and learn how to configure "gated" on your home firewall/router.
Re:The broken internet (Score:1)
But imagine this...instead of having one connection into your home, you have two (split the bandwidth on your cable line), which connects you in a mesh topology with other cable internet users. Do this on a grand scale, with the millions (billions?) of people who will one day be on the net. When you connect to somewhere, rather than absolutely HAVING to go through a central point to a bit pipe, you enter a cloud and get your data across it.
Now I realize this pretty much requires last-mile fiber, but it would be a hell of a lot more decentralized and less prone to failure than the current internet.
Kickstart
Re:The broken internet (Score:1)
No matter how many connections I have to neighbors, if MY access is down I cannot hit them, and if the gateway goes down then everyone on my network sharing that gateway is also trapped. If the backbone goes down and you need to get across the country, you need direct (non-internet) connections for every hop along the way (that is a whole lot of "we are connecting to neighbors", even in a broad sense).
DNS and IP allocation not decentralized (Score:5, Informative)
It will be interesting to see if IPv6 will use geographic hierarchies for routing, or even relaxes the hierarchical assignment scheme at all. If your IPv6 suffix is static/fixed (based on your MAC address, say), and your IPv6 prefix is from the current network/area you are in, that will be an interesting tool to let people track devices as they move around/between networks.
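For reference, the "suffix based on your MAC address" idea is essentially what the EUI-64 interface identifier in stateless autoconfiguration does: the 48-bit MAC is stretched to 64 bits by inserting ff:fe in the middle and flipping the universal/local bit, so the same device produces the same suffix on whatever network it visits. A minimal sketch of the derivation:

def eui64_from_mac(mac):
    # Derive the EUI-64 interface identifier (the lower 64 bits of an
    # autoconfigured IPv6 address) from a 48-bit MAC like '00:0c:29:3a:21:5f'.
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                                # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe between the halves
    return ":".join("%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2))

print(eui64_from_mac("00:0c:29:3a:21:5f"))   # -> 020c:29ff:fe3a:215f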
Re:DNS and IP allocation not decentralized (Score:1)
Now the database load/creation is not decentralized, nor do I think it should be. The failure case for the database creation going down is that new domains do not go online till it is back up, which is not a horrible failure case (unless you just applied for a new domain name, that is). The failure case for multiple people creating multiple databases is that, as they go out of sync, you get VERY different answers for the same query depending on which root server you happen to hit. Same thing goes for IP address allocation... Oh well.
By the way, the last issue on IPv6 address allocation (tracking a device using the lower 64 bits of the IPv6 address) has been talked about for many years in the IPv6 development groups. There are solutions; the end result is most people don't care... Oh well...
Botched DNS "Tape Load"? (Score:2)
Anyone know if there's some truth in this, or is it another myth of the Internet?
Re:DNS and IP allocation not decentralized (Score:2)
This is a fundamental aspect of those systems. You want one domain name to map to one (set of) server, and similarly for IP addresses. If you don't have one authority dictating who gets what address, you'll have disagreements and things become less reliable.
MAC addresses also have one authority behind them, but typically only the manufacturer has to deal with them. MAC addresses actually could be decentralized, since they only need to be unique on the local network, whereas the others need to be globally unique.
Anyway, I think the only way to avoid having one (or a small number of) central authority is to have these decisions part of the spec, ie decide on a scheme ahead of time that's unambiguous in nearly all cases.
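As a small illustration of the "decide on a scheme ahead of time" point (hypothetical, not something any spec obliges you to do): because MAC addresses only need to be unique on the local network, a node could even pick one at random, as long as it sets the locally administered bit and clears the multicast bit, and rely on collisions being vanishingly unlikely on any single LAN.

import random

def random_local_mac():
    # Generate a locally administered, unicast MAC address: set bit 1
    # (locally administered) and clear bit 0 (multicast) of the first octet.
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & ~0x01
    return ":".join("%02x" % o for o in octets)

print(random_local_mac())   # e.g. '6e:3f:a0:12:9c:d4'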
Re:DNS and IP allocation not decentralized (Score:2)
Actually the logical registration is co-ordinated in a single logical database. However the implementation is very highly distributed.
There are multiple DNS root servers and there are even multiple A root servers, but only one A root is active at any one time and they all use the same IP address.
DNS Servers (Score:2, Interesting)
Now imagine, what if one of those root servers went down? The other servers have to take the load of the failed server. Now imagine two went down, however unlikely, but that puts loads of extra traffic on the remaining servers. After a while, this will add up. Now, I admit, it is probably very unlikely, but with enough traffic, even a root server could be
Why don't they just... (Score:1, Funny)
Theory vs Implementation (Score:2, Insightful)
The Internet is designed to be decentralized but it is built to maximize profit.
DNS comes to mind (Score:1)
Wouldn't the DNS system count as a point of failure that they would like to fix? That would also be a good argument for developing a decentralized system.
Decentralized (Score:1)
Then again, I could be WAY off.
How About. a lilly pad (Score:1)
Definitely decentralized.
Re:How About. a lilly pad (Score:1)
Re:How About. a lilly pad (Score:1)
It is decentralized, though. But the frequency can too easily be disrupted. And that's not very secure.
Decentralisation (Score:1, Insightful)
Re:Decentralisation (Score:1)
The Chosen (Score:1)
sorry bout the formatting (Score:1)
Interesting pick of universities that are getting the cash. Compare that list to US News' 2003 ranking of CS grad schools:
1. Carnegie Mellon University (PA)
Massachusetts Institute of Technology
Stanford University (CA)
University of California-Berkeley
5. University of Illinois-Urbana-Champaign
See for yourself @
http://www.usnews.com/usnews/edu/grad/rankings/ph
Re:sorry bout the formatting (Score:1)
Re:The Chosen (Score:2, Interesting)
Re:The Chosen (Score:2, Insightful)
Current Internet not *that* decentralized (Score:3, Informative)
I thought the Internet was already decentralized, so I'm curious about what exactly they're fixing.
Not quite. The primary vulnerability lies within the Root DNS servers, which contain all DNS information for the entire Internet*. IIRC, there are only eleven or twelve of them. And because each replicates its data set to all other Root servers, catastrophic failure of one would bring down all of the others.
If that ever happens, you can pretty much say goodbye to the Net, at least temporarily.
*Actually, I think they hold the addresses of all Local DNS servers, which is basically the same thing.
Re:Current Internet not *that* decentralized (Score:2)
That would be a stupid way to run the root servers. My understanding is that the root servers are updated from an offline master; the whole point is that if one fails the others still work and can pick up the load.
Re:Current Internet not *that* decentralized (Score:5, Interesting)
Um, very untrue - the primary root server replicates the data to the rest. If a non-primary root server goes down, you don't notice it. If the primary one goes down, the function is moved to any one of the rest (and you still don't notice it). Basically something like 3 or 4 of them have to go out before Joe InternetUser will notice any effect, and even then it would be somewhat inconvenient, not "catastrophic". (This is what I remember from some article on the topic a while back - it's not like I know anything about these things.)
Re:Current Internet not *that* decentralized (Score:2)
Nonononono, that would be extremely stupid. If one of the root servers went down, the others would pick up the slack, that is part of the redundancy.
If that ever happens, you can pretty much say goodbye to the Net, at least temporarily.
Not exactly. Even if all the root DNS servers were wiped from the face of the earth, the caches of all the local DNS servers would still know the addresses for any sites that were recently visited by their clients. So as long as the IPs of the sites didn't change, it would be OK, as the local DNS servers would still know where to look.
Now if you made a request to a site that the DNS server has never been to before, it would look up to higher DNS servers. If none of them, all the way back to the root servers, knew the answer, you wouldn't be able to get at those sites.
Re:Current Internet not *that* decentralized (Score:3, Informative)
The "root servers" contain the locations of the "top level domain (TLD) servers". They can answer queries such as "where is the DNS for
The TLD servers contain locations of the "next-to-top level domain" servers. They can answer queries such as "where is the DNS for IBM.com?"
IBM's own DNS can answer the question "where is www.ibm.com?".
The system is already decentralized to the point that an attacker would have to hit numerous targets to have any significant effect. The only "central point" is the "source files" that feed the upper-level DNS servers. Decentralizing those sources would turn the Net into anarchy. "I'm the DNS for
I suppose you *could* decentralize the sources, but you would need to implement a system of trust which would have its own center.
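To make the hierarchy above concrete, here is a toy model of the delegation chain (all the server names and the address below are made up for illustration): resolution walks down from the root one level at a time, and each level only knows whom to ask next.

# Toy model of the root -> TLD -> domain delegation described above.
ROOT = {"com": "tld-com.example"}                          # root servers know the TLD servers
TLD = {"tld-com.example": {"ibm.com": "ns.ibm.example"}}   # TLD servers know each domain's DNS
AUTH = {"ns.ibm.example": {"www.ibm.com": "192.0.2.10"}}   # the domain's DNS knows its hosts

def resolve(hostname):
    tld = hostname.rsplit(".", 1)[-1]              # e.g. "com"
    domain = ".".join(hostname.split(".")[-2:])    # e.g. "ibm.com"
    tld_server = ROOT[tld]                         # ask a root server
    auth_server = TLD[tld_server][domain]          # ask the TLD server
    return AUTH[auth_server][hostname]             # ask the domain's own server

print(resolve("www.ibm.com"))   # -> 192.0.2.10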
Re:Current Internet not *that* decentralized (Score:3, Informative)
The main problem with that system, though, is that one mistake on the hidden primary (which has happened) screws up the entire system. And, yes, many many zones were hosed for a while as Network Solutions tried to figure out what the hell they did. And, of course, there's only 13 machines to DoS before all DNS becomes totally useless.
Clarification (Score:3, Insightful)
Is this DHT going to be decentralized so different servers are throughout the country? If so, would yahoo hold files for google? If it is this way, it sounds like my credit card data would be insecure. (Say a p0rn site is holding data for ebay)
Or is it more like a backup of the server that is in the same room? If it is this way, don't most organizations that host their own site have more than one server with the same data?
Or am I just totally confused?
Re:Clarification (Score:2)
A large part of how a system like this is supposed to work is the observation that having someone hold an encrypted and signed piece of data might help you survive a failure or improve performance, but doesn't do the holder any good whatsoever in terms of inspecting or modifying your data. If you consider the encryption to be secure, then this type of system can be just as secure.
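A minimal sketch of that idea, assuming Python's third-party 'cryptography' package (my choice, the article doesn't name a library). Fernet provides authenticated symmetric encryption, so the host stores only an opaque blob and any modification is detected on retrieval; a real system would add a public-key signature so third parties could verify authorship, but the core point is the same.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()      # stays with the data's owner, never with the host
owner = Fernet(key)

blob = owner.encrypt(b"my credit card data")   # this is all the untrusted host ever sees

print(owner.decrypt(blob))                     # the owner gets the plaintext back
try:
    owner.decrypt(blob[:-1] + b"X")            # a tampered blob fails verification
except InvalidToken:
    print("blob was tampered with or corrupted")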
NIIIP (Score:3, Informative)
After 9/11, several security consultants met in a Senate hearing and demonstrated in a simulation how the removal of a few key segments could cripple internet traffic (granted, some of the plan involved a small amount of urban sabotage).
The internet, if scaled down, could be comparable to the P2P networks. 90% of content on the internet is provided by less than 10% of the computers connected.
The people at http://www.niiip.org/ have amazing documents with regard to security and how the infrastructure of the internet works. Well worth a read.
Another good spot for information, though slightly tainted, is http://www.iisweb.com/. They offer a skewed view of security, as well as some examples of "Worst Case Scenarios"
What's new about it (Score:2)
The InfoWorld article describes a secure distributed storage system, not just plain old messaging connectivity. There aren't too many such beasts around; usually it's more of a "distributed, secure, usable - pick two" kind of thing. Some of the projects that approach the goal of combining all three actually seem to be sharing the IRIS award - i.e. OceanStore [berkeley.edu] at Berkeley and various projects [nyu.edu] at NYU. I don't know off the top of my head how ICSI and Rice fit in, but I'm about to go check their sites because I'll bet it's interesting.
Re:What's new about it (Score:5, Informative)
The Rice connection almost certainly has to do with Peter Druschel and Pastry [rice.edu] (for which the other PI seems to be Antony Rowstron of Microsoft Research, interestingly enough). I'm not totally sure of the ICSI connection, but they seem to be closely affiliated with UCB and I know that Ion Stoica works in these areas. OceanStore, CFS/SFS, Pastry, Kademlia - it's definitely a pretty good collection. A lot of the top people in DHT/DOLR (Distributed Hash Table, Distributed Object Location and Routing) research are involved, and I'd love to know how they plan to converge their various efforts toward a common solution.
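For readers who have not met DHTs before, the common core of Chord, Pastry, and Kademlia is that node IDs and keys are hashed into the same identifier space and each key is stored on the node whose ID is "closest" to it. A bare-bones sketch of the Chord-style successor rule (toy-sized ID space, ignoring finger tables and node churn):

import hashlib
from bisect import bisect_left

M = 2 ** 16          # toy identifier space; real systems use 128- or 160-bit IDs

def node_id(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(key, ring):
    # The node responsible for `key` is the first node clockwise from the
    # key's hash, wrapping around the ring.
    k = node_id(key)
    i = bisect_left(ring, k)
    return ring[i % len(ring)]

ring = sorted(node_id("node-%d" % i) for i in range(8))
print(successor("www.example.com", ring))   # ID of the node that stores this key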
Could have been clearer (Score:2)
not decentralized (Score:2, Informative)
I remember an anecdote about some company that installed multiple data feeds from multiple vendors to ensure reliability--redundancy is always good, right? Some construction worker was fixing a pipe and cut a fiber cable and sure enough, the company was offline. The different vendors all shared the same fiber so the redundancy wasn't real.
Tons of traffic gets jammed through a few key distribution routes. I'll bet the typical internet user sends traffic through many routers with no backups--you could probably shut down my home cable modem service by pulling the plug on any of at least half-a-dozen routers before it gets out of the provider's internal network. Redundancy in the backbone is nice, but useless if the endpoints are vulnerable.
- Russ
Re:not decentralized (Score:2)
No longer decentralized. (Score:3, Insightful)
Since every release of BIND ties us more thoroughly to ICANN-dominated centralised name control, I'd guess that DNS would be what they are fixing.
It used to be easy to use alternative roots in conjunction with the "authoritative" (authoritarian?) roots... but now it's one or the other. Caveat - I haven't tried the BIND alternatives yet, there are only so many hours in the day.
The namespace of the Internet is hosed, even USENET's namespace.namespace.namespace is more useful. And the geographic separation of the root nameservers doesn't matter much when all change authority is vested in a single entity.
Re:No longer decentralized. (Score:2)
IF all of the authoritative roots were nuked, I bet it would be a matter of hours before small networks bounced back up using HOSTS files, and soon had an alternative in place... and IF all the authoritative roots were targeted and taken out, it's going to be pretty obvious what's going on in the world, and thereby easily worked around.
It doesn't have to be all automagic, we're still smart people behind these screens.
Re:No longer decentralized. (Score:1)
which part of BIND ties you to ICANN roots ?
you just might be a cracker
Replication has its own dangers (Score:3, Informative)
If your data is distributed, and one server gets taken out, then fine, you still have service, and the downed server can be re-synched.
If your data is distributed, and someone updates it, then the update is faithfully replicated - even if it is wrong. I work for a company that has its Lotus Notes address database distributed across > 50 locations. One of these would probably survive World War III. Unfortunately, a few years ago, none of them survived a deletion, followed by automatic replication. Took us down for a day, because the tapes were only in one location.
Of course, you could skip the replication. Then you have the non-trivial problem of finding the latest version.
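One crude answer, sketched purely as an illustration (not anything the parent's Notes setup does): tag each replica's copy with a version counter and take the highest version among whichever replicas answer. The comment in the sketch notes why even this isn't a complete solution.

from dataclasses import dataclass

@dataclass
class Copy:
    replica: str
    version: int      # incremented on every accepted write
    data: bytes

def latest(copies):
    # Highest version wins, ties broken arbitrarily. Concurrent writes on
    # partitioned replicas can yield equal versions with different data,
    # which is exactly why "find the latest version" is non-trivial.
    return max(copies, key=lambda c: c.version)

copies = [Copy("site-a", 7, b"old"), Copy("site-b", 9, b"new"), Copy("site-c", 8, b"stale")]
print(latest(copies).replica)   # -> site-b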
Re:Replication has its own dangers (Score:2)
Depends on your definition of "wrong"; if your system supports true deletion and a properly authorized entity deleted something, it should be gone from all replicas. Largely for that reason, many of the systems being developed in this area tend toward an archival model where previous updates are supposed to remain available almost forever and deletion just means "mark it as not being part of the current data set".
Yep, it's non-trivial all right, but these are just the kinds of people who might be able to beat the problem into submission.
Re:Replication has its own dangers (Score:1)
If someone ELSE hit the Delete button, then it's a security issue, but a different one. The data itself, though, is fairly safe.
Ironic (Score:1)
Ok, I know universities generally aren't against P2P technology, just what it is being used for.
insert RIAA joke here (Score:4, Insightful)
This seems like it would reduce an individual entity's loss to an attack, with the idea that everyone loses a little rather than one losing a lot. But it also seems, even though the details in this article are lacking, that physical security of boxes would become more important.
Should the British government, a university, and whoever else trust a small business in San Diego to house part of its data?
The only way this would work from a security standpoint would be to make the information that is spread out over 50 or so computers not accessible from the machine it's hosted on, and it seems this would be pretty much impossible (er... hackerd00ds) from a purely software approach....
Do you trust me with your data? Um... I don't.
Re:insert RIAA joke here (Score:1)
Isn't this what Freenet does by encrypting all the data that is stored on your machine but not telling you the key to decrypt the data on your machine?
Re:insert RIAA joke here (Score:1)
http://freenetproject.org/
So, as the tagline goes.....
Re:insert RIAA joke here (Score:2, Informative)
Re:insert RIAA joke here (Score:2)
If the data is encrypted and signed, why not? They can't inspect it, they can't modify it, the worst they can do is drop it on the floor and that's exactly equivalent to the sort of failure that other parts of the system are designed to deal with. It gets more difficult when there might be a very large number of "rogue servers" that promise to store copies and then don't, but even that scenario need not be fatal and the basic idea is still sound.
tapped? (Score:1)
Raid Array of Servers (Score:1)
the internet USED to be decentralized (Score:1, Insightful)
Now everything is centralized, with backbone pipes, etc.
Sounds like (Score:2)
Its the storage stupid! (Score:4, Insightful)
Re:Its the storage stupid! (Score:1)
Re:Its the storage stupid! (Score:2, Informative)
Doesn't this sound like the freenet project [freenetproject.org]? An encrypted and decentralized system where everything is P2P, no-one can re-construct your data, and everyone trusts everyone else?
real decentralization is needed (Score:4, Interesting)
I had a perfect example of that happen to my current ISP; after getting terrible communications errors, I called them. Turns out one of three of their routes was out; they reset a router, and everything was copacetic. But the other two routes should have been able to handle the traffic. They didn't.
With the advent of IPv6, the structure of the net becomes even more convoluted, and errors may become even more difficult to handle. In order to have a nice, stable internet, a system of handling broken routes needs to be integrated into the new spec.
What They Might Be Fixing (Score:2)
Perhaps they are trying to make a self organizing network... automatic rerouting, dynamic topology creation, decentralized name resolution. Similar ideas have been discussed with P2P networks.
Perhaps they are designing a network using P2P concepts.
And perhaps I should just read the article.
aren't universities usually more insecure? (Score:1)
Domain Decentralization (Score:1)
The DNS is what they are decentralizing, among other things. If someone takes out the root domain server, the internet would be pretty screwed right now. If we had an easy system for routing information that wasn't based on DNS, it would change a lot of systems. Web Sites, Email accounts, Instant Messaging, are all dependent on DNS. If this project works, we may be able to say goodbye to AOL's monopoly on IM.
Who needs a tag line anyways!
Re:Domain Decentralization (Score:1)
Why this doesn't matter (Score:2, Insightful)
NO ONE relies on the Internet for matters of 'life and death', which is the only reason you would go to the expense/aggravation to make something that fault tolerant (can you hear the drums beating out the old 'we must be safe from everything' rhythm?).
When people couldn't get all the pretty pictures on the last few disasters we have had online, what did they do? They went to a medium better suited for broad and instantaneous information distribution. Television and Radio! What a concept! An amazing technology that is capable of reaching millions of people within range of any one of hundreds of 'broadcast stations' located all over the planet!
Of course, because the Internet doesn't work that way, there must be something wrong with it, right?
This reminds me of the telcos demanding QoS for IP, so they could start using a more familiar revenue model for IP and IP services...
I'm not clear on the concept (Score:2)
Anyone who's dealt with memory or disk allocation knows that performance suffers when a resource (file, data string, etc.) is fragmented over several locations on the same physical unit. This is why smart Oracle DBAs define storage parameters when they create objects, why smart Windows users run "Defrag" on their FAT volumes periodically, etc.
If I understand the (altogether too brief) article correctly, the "secure net" will work by fragmenting a file across multiple servers, in multiple locations. To get the most recent copy of a file, any given node will have to go out onto the network and retrieve all the pieces that aren't stored locally. This is sure to yield much poorer performance than a purely-local retrieval (not to mention the inherent security risk of transferring data over the network...)
What am I missing here?
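For what it's worth, systems like this usually soften the retrieval cost the parent worries about by content-addressing the pieces (so any holder can serve a chunk and the reader can verify it) and fetching them in parallel. A toy sketch of the split-and-reassemble step; the chunk size and hash choice here are mine, not the article's.

import hashlib

CHUNK = 64 * 1024   # illustrative chunk size

def split(data):
    # Split a blob into fixed-size chunks keyed by their SHA-256 digest.
    # Returns the ordered manifest of hashes plus the chunk store; in a real
    # system each chunk would live on a different node.
    manifest, store = [], {}
    for i in range(0, len(data), CHUNK):
        piece = data[i:i + CHUNK]
        digest = hashlib.sha256(piece).hexdigest()
        manifest.append(digest)
        store[digest] = piece
    return manifest, store

def reassemble(manifest, store):
    # Fetch each chunk (from wherever it lives), verify it against its hash,
    # and join the pieces back into the original blob.
    pieces = []
    for digest in manifest:
        piece = store[digest]
        assert hashlib.sha256(piece).hexdigest() == digest, "corrupt or forged chunk"
        pieces.append(piece)
    return b"".join(pieces)

manifest, store = split(b"x" * 200000)
assert reassemble(manifest, store) == b"x" * 200000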
already decentralized? (Score:2)
This file is made available by InterNIC registration services
under anonymous FTP as
file
on server FTP.RS.INTERNIC.NET -OR- under Gopher at RS.INTERNIC.NET
under menu InterNIC Registration Services (NSI)
submenu InterNIC Registration Archives
file named.root
last update: Aug 22, 1997
related version of root zone: 1997082200
formerly NS.INTERNIC.NET
. 3600000 IN NS A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4
formerly NS1.ISI.EDU
. 3600000 NS B.ROOT-SERVERS.NET.
B.ROOT-SERVERS.NET. 3600000 A 128.9.0.107
formerly C.PSI.NET
. 3600000 NS C.ROOT-SERVERS.NET.
C.ROOT-SERVERS.NET. 3600000 A 192.33.4.12
formerly TERP.UMD.EDU
. 3600000 NS D.ROOT-SERVERS.NET.
D.ROOT-SERVERS.NET. 3600000 A 128.8.10.90
formerly NS.NASA.GOV
. 3600000 NS E.ROOT-SERVERS.NET.
E.ROOT-SERVERS.NET. 3600000 A 192.203.230.10
formerly NS.ISC.ORG
. 3600000 NS F.ROOT-SERVERS.NET.
F.ROOT-SERVERS.NET. 3600000 A 192.5.5.241
formerly NS.NIC.DDN.MIL
. 3600000 NS G.ROOT-SERVERS.NET.
G.ROOT-SERVERS.NET. 3600000 A 192.112.36.4
formerly AOS.ARL.ARMY.MIL
. 3600000 NS H.ROOT-SERVERS.NET.
H.ROOT-SERVERS.NET. 3600000 A 128.63.2.53
formerly NIC.NORDU.NET
. 3600000 NS I.ROOT-SERVERS.NET.
I.ROOT-SERVERS.NET. 3600000 A 192.36.148.17
temporarily housed at NSI (InterNIC)
. 3600000 NS J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET. 3600000 A 198.41.0.10
housed in LINX, operated by RIPE NCC
. 3600000 NS K.ROOT-SERVERS.NET.
K.ROOT-SERVERS.NET. 3600000 A 193.0.14.129
temporarily housed at ISI (IANA)
. 3600000 NS L.ROOT-SERVERS.NET.
L.ROOT-SERVERS.NET. 3600000 A 198.32.64.12
housed in Japan, operated by WIDE
. 3600000 NS M.ROOT-SERVERS.NET.
M.ROOT-SERVERS.NET. 3600000 A 202.12.27.33
End of File
NAP's (Score:2)
Re:Freenet without the freedom? (Score:1, Interesting)
From the document at http://iris.lcs.mit.edu/proposal.html
Re:Security != secure hosts, encryption of traffic (Score:2)
Re:freenet? (Score:2)
From the webpage: "Freenet is a large-scale peer-to-peer network which pools the power of member computers around the world to create a massive virtual information store open to anyone to freely publish or view information of all kinds."
How does this shit get moderated down?! (Score:2)
that was damn-near brilliant!
LADIES and GENTLEMEN, Alan Thicke has left the building!