
Sanity's Journal: Response to Peacefire's "Distributed Cloud" paper
After our conversation, I discovered Bennett's 2001 paper entitled "Why a 'distributed cloud' of peer-to-peer systems will not be able to circumvent Internet blocking in the long run". Freenet is, of course, the leading example of a "distributed cloud" architecture.
Needless to say, I didn't entirely agree with his conclusions, so here is an email I sent to his "Circumventor Design" mailing list. I am still awaiting a response, either from Bennett or from someone else on the list.
Thanks for subscribing me, Bennett.
After our interesting off-list conversation, I have read your paper "Why a 'distributed cloud' can't work" and have given the matter some thought. Here are some preliminary observations, along with self-serving explanations of how this relates to Freenet ;-)
While I agree that this "spidering" attack is theoretically possible, I don't believe it would be anywhere near practical against a well-designed architecture, even for a very well-funded and motivated government. I further suspect that this attack will always be a theoretical possibility for any censorship-circumvention technology that relies on IP and is sufficiently usable to gain wide acceptance in countries like China (of course, I would love somebody to contradict me by describing an easy-to-use architecture that is not vulnerable ;-)
This is not to say that there aren't strategies which maximize the cost of such an attack; I think Freenet is a good example of this. If you have a situation where an attacker can identify nodes and shut them down, it is important to do the following:
- Make any kind of "directed harvesting" difficult or impossible
By this I mean that the Chinese government cannot easily direct their address-harvesting efforts at the nodes they can block; rather, they are forced to wade through a potentially large number of nodes in order to find the ones susceptible to blocking. This is pretty much the case with Freenet: nodes have little control over what nodes wind up in their datastore, so a censor would have to collect node addresses passively, which is a slow process. Further, if the censor started to kill every node its own node was seeing, that node would rapidly become isolated (much like a cop who killed all of his informants). It is an oft-abused and rather questionable saying that the "Internet routes around censorship", but in Freenet's case there is much truth to it.
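To get a feel for why passive harvesting is slow, here is a deliberately simplified toy model (my own illustration, not Freenet's actual protocol): suppose each request the censor's node handles reveals one roughly random node address. Learning the whole network this way is the classic coupon-collector problem, which takes on the order of N·ln(N) observations:

```python
# Toy model of passive address harvesting (illustrative only, not Freenet code).
# Each observation reveals one node address chosen uniformly at random; we
# count how many observations it takes to see every node at least once.
import math
import random

random.seed(42)
N = 1000                      # hypothetical network size
seen = set()
observations = 0
while len(seen) < N:
    seen.add(random.randrange(N))
    observations += 1

print(f"{observations} observations needed to harvest all {N} nodes")
print(f"coupon-collector estimate N*ln(N): {round(N * math.log(N))}")
```

The point is that even in this censor-friendly model (one fresh address per observation, no isolation penalty), the cost grows faster than the network does; in practice a censor that blocks what it sees also cuts off its own supply of new addresses.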
A corollary of this is that the mechanism through which new nodes are added to the network should not provide a shortcut for censors to identify fresh nodes; it must therefore be as decentralized as possible.
While Freenet is typically distributed from our web site, we also have a mechanism, which we call our distribution servlet, that facilitates "viral" distribution of Freenet. Basically, a user can configure his Freenet node to make a web page available from his computer, from which other people can download a copy of Freenet "seeded" with the nodes in the "parent" node's routing table. These are made available for a limited time at a randomly generated URL such as:
http://80-192-4-36.cable.ubr09.na.blueyonder.co.uk:8889/MM9L2lTOmNI/
which that user can then send to their friends. Note that there is nothing in this URL that would make it easy for an automated email-monitoring tool to spot. Through this mechanism, Freenet can self-procreate without any reliance on a centralized download source or seeding mechanism.
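The idea can be sketched in a few lines (names, token format, and TTL are my own illustrative choices, not Freenet's implementation): publish the seeded installer at a hard-to-guess random path that expires after a while.

```python
# Sketch of a "distribution servlet" (illustrative only, not Freenet's code):
# a seeded download is published at a random, short-lived path.
import secrets
import string
import time

ALPHABET = string.ascii_letters + string.digits
active = {}   # token -> (expiry timestamp, seed node addresses)

def publish_installer(seed_nodes, ttl_seconds=3600):
    """Make a seeded installer available at a random, short-lived path."""
    token = "".join(secrets.choice(ALPHABET) for _ in range(11))
    active[token] = (time.time() + ttl_seconds, list(seed_nodes))
    return f"/{token}/"

def fetch(token):
    """Serve the seed list only if the token exists and has not expired."""
    entry = active.get(token)
    if entry is None or time.time() > entry[0]:
        return None
    return entry[1]

path = publish_installer(["node-a:8889", "node-b:8889"])
print("share this path with friends:", path)
```

Because the path is random and short-lived, there is no fixed string a keyword filter can match on, and a harvested URL stops working once it expires.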
- Minimize the effect of shutting down any given node by making the network fault tolerant and spread reliance evenly across the nodes in the network
Freenet achieves this: in simulations we could shut down up to 30% of the nodes in the Freenet network, all at the same moment, without any significant degradation in performance. Further, we could shut down the busiest 20% of the nodes in the network without seeing significant problems (see page 9 of [1]).
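This kind of robustness is easy to demonstrate on a toy random graph (my own illustration; real Freenet topologies differ): remove 30% of the nodes at random and check how much of the surviving network remains mutually reachable.

```python
# Toy robustness check (illustrative only, not a real Freenet topology):
# build a random graph, remove 30% of nodes, and measure the largest
# connected component among the survivors.
import random
from collections import deque

random.seed(1)
N, DEGREE = 500, 6
edges = {n: set() for n in range(N)}
for n in range(N):
    for peer in random.sample([m for m in range(N) if m != n], DEGREE):
        edges[n].add(peer)
        edges[peer].add(n)

def largest_component(alive):
    """Size of the largest connected component among the 'alive' nodes."""
    best, visited = 0, set()
    for start in alive:
        if start in visited:
            continue
        size, queue = 0, deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nb in edges[node]:
                if nb in alive and nb not in visited:
                    visited.add(nb)
                    queue.append(nb)
        best = max(best, size)
    return best

alive = set(range(N))
alive -= set(random.sample(sorted(alive), int(0.3 * N)))
print(f"largest component: {largest_component(alive)} of {len(alive)} survivors")
```

With enough redundancy in the links, random removal of 30% of nodes leaves almost all survivors in one connected component, which is the intuition behind the simulation result cited above.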
It is worth saying that the goal of evenly distributing load across the network conflicts with the desire to take advantage of resources where they are available. I think we've reached a good compromise between these two goals in Freenet, but it is an area of ongoing development.
I'm not saying this is a comprehensive list of guidelines when defending against this type of threat, but it's all I can think of right now.
Another issue which Bennett and I discussed is that it is likely to be easier for a censor to restrict access to servers outside their country than to restrict traffic between computers inside the country. Personally, I think that even if an architecture could not support direct communication with servers outside the repressive country, this certainly does not mean it isn't useful. In fact, I think that giving a voice to people inside the repressive country is more valuable than just letting them hear what we have to say. Further, it would take only one unrestricted line of communication between the outside world and the internal censorship-resistant network to give people inside the country access to external information.
On a different note, it is well known that Freenet does have latency issues, although these have been steadily improving as development continues. We are currently working on a concept we call "Next Generation Routing", which we hope will lead to a dramatic improvement in Freenet's latency. I'm currently working on an article that describes this, but if anyone would like to learn more, sign up to the Freenet development mailing list, where it is currently a topic of discussion.
All the best,
Ian.
[1] http://freenetproject.org/papers/freenet-ieee.pdf