Towards an Internet-Scale Operating System
gschoder writes: "Two Berkeley computer scientists (including David P. Anderson of SETI@home) envision an Internet-scale operating system to harness the processing power, networking efficiency, and storage capacity of everyone's computers. Scientific American has their proposal."
Why buy a computer? (Score:1, Interesting)
Why should I want my computer doing others' work? (Score:2, Interesting)
It's one thing when a person volunteers to participate (I run SETI@home), but this proposal sounds like a standard forced upon consumers.
High latency? (Score:4, Interesting)
The OS will never be fully "functional" as OSes are considered today, because people will lie and cheat and steal. IMO (read: opinion pulled from my ass), the only practical use of this would be the equivalent of a kernel patch that could claim a slice of disk, a slice of memory, and a slice of bandwidth, and then run SETI@home, or whatever code it was instructed to run by the "master".
If it were not run on public machines, I could imagine something akin to Beowulf from the ground up: an OS designed for premeditated clustering. That's not Internet-sized, though...
P2P makes the inroad more acceptable (Score:3, Interesting)
Five years ago, I'd have said no way, this is unfeasible, people would not contribute their storage space and CPU cycles to someone else.
But now, with server-obfuscated peer-to-peer systems like AudioGalaxy, it could be possible. Imagine selling people on the idea of a 'universal public hard drive', where all you do is search for a file, then copy it over locally without actually knowing where or whom it came from. I doubt there'd be any objections, given how convenient and 'anonymous' it would be. Sacrificing a share of your own hard drive space for caching files you might not be interested in would be a small price to pay for that. That's one resource down; do the same thing for CPU cycles (provided we have a killer-app reason for people to need more cycles, given today's high-speed processors) and other computing resources, and the rest will fall into place.
I doubt it'll go as far as this proposal, at least not for a LONG time, but the unthinkable is already becoming the thinkable in some areas.
hmmm (Score:4, Interesting)
"Consider Mary's movie, being uploaded in fragments from perhaps 200 hosts. Each host may be a PC connected to the Internet by an antiquated 56k modem--far too slow to show a high-quality video--but combined they could deliver 10 megabits a second, better than a cable modem."
Ok, that's nice, but how do they propose Mary receive 10Mbps? Get 12 DSL lines? What about the people on dial-up? While people around the world gain access to the Internet, those of us with the uber-connections will just leech off them? They talk about the "digital divide", but that is just plain vicious. I'd rather be stickin' it to The Man than to Uncle Sven in Stockholm. So then what, everyone gets a fast connection -> backbone upgrade -> AT&T, MCI, Earthlink, Sprint, etc. spend the money that Amgen would save.
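A quick sanity check of the quoted arithmetic (note that a real V.90 "56k" modem uploads at closer to 33.6 kbit/s, so the article's figure is optimistic):

```python
# Back-of-the-envelope check of the quoted scenario: 200 hosts,
# each uploading over a 56 kbit/s modem (kilo = 1000 for line rates).
hosts = 200
per_host_bps = 56_000          # article's assumed per-host upstream

aggregate_bps = hosts * per_host_bps
print(aggregate_bps / 1e6)     # ~11.2 Mbit/s, roughly the article's "10 megabits"
```

The aggregate side works out; the open question, as above, is the receiving side.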
Also: how would individuals choose who can use their computer's resources, given their ethical or moral convictions? While I would gladly donate my CPU and disks to cancer research or finding larger prime numbers, I don't want the DoD using them to think up new ways to kill people.
Re:Whats in it for me? (Score:2, Interesting)
Sell computers at or just above cost to consumers in a package that provides all the necessary hardware and software. The end user signs an agreement that provides the DSL/cable line and the computer at a reduced cost. They must also agree (stated within the terms of service) that their computer should remain on whenever reasonable, and that when it is not being used it is subject to use by my company (we'll call it MyCo).
Now, to offset the reduced price of the computers and connectivity, MyCo can sell access to a larger corporation that is interested in large-scale computing without having to purchase a supercomputer. For those of you familiar with the supercomputer environment, it isn't uncommon to lease out cycles on a large machine to other entities to help offset its cost. By leasing out the number-crunching abilities of the distributed network, MyCo could cover the cost of the subsidized consumer packages while letting largish companies harness a distributed number-crunching system.
Like I said, this is all very preliminary and more of just a thought than anything, but I think that something like this might attract more than just the "geek novelty" users. It would allow consumers to benefit, and would allow other companies to piggy-back on the system without having to make the large investment into a "supercomputer."
Just wait.... (Score:3, Interesting)
Communist, Schmommunist... (Score:2, Interesting)
Perhaps if you set up your computer service like a secret society this would work. Then you'd have to know all the users, and would be able to track everything. It would be like the Masons, only with computers.
distributed backup is the killer app (Score:4, Interesting)
Consider a distributed backup program which works roughly as follows.
This type of application would provide at least three important benefits for backup. First, it's relatively cheap: if you want to back up more data, just buy more local disk space and trade files with more computers. This seems much easier (at least for a home user) than setting up a tape backup system, making sure the tapes get replaced, making sure the tapes get put someplace safe, etc. Second, it's much safer than pretty much any backup system you could buy commercially today, since your data is literally spread all over the world. Finally, the backup system isn't controlled by any large corporation.
Obviously there are still some details left to be worked out, such as how to let computers that want to trade files find each other (both centralized and distributed options exist, analogous to Napster and Gnutella), how to prevent cheating (having your computer periodically ask its partners for hashes of the data they are backing up should work), and how to control redundancy most efficiently (error-correcting codes like Reed-Solomon or Tornado codes would probably be smarter than just repeating data).
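The hash-based cheating check described above can be sketched in a few lines; the function names and nonce scheme here are illustrative assumptions, not an existing protocol:

```python
import hashlib
import os

def challenge(nonce: bytes, stored_block: bytes) -> str:
    """What the remote partner computes over the block it claims to hold."""
    return hashlib.sha256(nonce + stored_block).hexdigest()

def verify(nonce: bytes, local_copy: bytes, partner_response: str) -> bool:
    """The owner recomputes the same digest from its own copy and compares."""
    return challenge(nonce, local_copy) == partner_response

block = b"backup fragment contents"
nonce = os.urandom(16)                      # a fresh nonce for every challenge
resp = challenge(nonce, block)              # honest partner answers correctly
assert verify(nonce, block, resp)
assert not verify(nonce, block, challenge(nonce, b"garbage"))  # cheater caught
```

The fresh nonce matters: without it, a cheating partner could compute the hash once, discard the data, and replay the old answer forever.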
If you're looking for a great distributed open source project that will make the world a better place, I encourage you to develop prototypes for distributed backup. I plan to develop my own prototype one day, but currently I'm pretty busy with graduate school.
-Emin
Re:i don't know.. (Score:1, Interesting)
Trusted data (Score:3, Interesting)
The more stock and importance you put in something, the more likely people will use it as a means of abuse. I can envision a world where people who are against a particular scientific task (for whatever reason: ethical, on principle, or whatever) use this Internet OS and join particular distributed apps simply to throw noise into the upstream.
Re:i don't know.. (Score:3, Interesting)
What if the computer you bought for US$2000 was largely subsidized by a coalition of entities that wanted to use your CPU and mass storage when you weren't, so that it only cost you US$1000 or even US$500? Would you participate then? Even if you wouldn't, could you see how someone else might?
Error correcting codes (Score:2, Interesting)
Should I guess the missing 40% from the available 60%?
Yes! Error-correcting codes make it possible to reconstruct the whole file from fragments that add up to 50% of the encoded data (the exact threshold depends on the code's rate). Mojo Nation [mojonation.net] already does this.
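As a minimal illustration of the erasure-coding idea (real systems use Reed-Solomon or Tornado codes for arbitrary k-of-n recovery), a single XOR parity fragment already lets any one of three fragments be lost:

```python
# Tiny erasure-coding demo: 2 data fragments plus 1 XOR parity fragment.
# Any 2 of the 3 fragments suffice to rebuild the data; this is the
# simplest instance of the k-of-n recovery the codes above generalize.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1 = b"half one", b"half two"    # equal-length data fragments
parity = xor_bytes(d0, d1)           # the redundant third fragment

# Suppose fragment d0 is lost: recover it from d1 and the parity.
recovered = xor_bytes(parity, d1)
assert recovered == d0
```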
can anyone say... (Score:4, Interesting)
How many people do you know that are too scared to purchase anything online because they're afraid that some crazy cracker will intercept vital financial information? I know quite a few. We have to keep in mind that a relatively small portion of the overall population will actually see the benefit of this technology; and even fewer will trust it.
Things that should be considered:
Storage (Score:3, Interesting)
Add to that the fact that when you start dealing with serious amounts of data (~1TB), making backups to tape or any other media starts to get really difficult. If the free disk space on people's computers (I've got around 30 or 40GB free on my home machines) could be put to use to store backups, I'm sure businesses would be willing to pay a significant amount of money for it.
-Esme
How does one control what one's PC is used for? (Score:3, Interesting)
Probably not.
Re:Scary... (Score:4, Interesting)
It can be a lot more scary than you think.
I/O Bound (Score:3, Interesting)
Processors faster than 2GHz are dirt cheap today. High-bandwidth connections aren't cheap, and connections to home users are 3 orders of magnitude slower than an internal disk drive channel.
This kind of thing only seems to make sense for the most geek-oriented scientific types of calculations, and of those only the jobs that are trivially parallelized, like SETI. I don't see everyone changing their OS to support it.
a couple of issues (Score:3, Interesting)
even if we have lots of unused processor time (which I'm sure we do), pumping the data into and out of a remote procedure call can consume a lot of bandwidth and result in huge lag. Many problems don't distribute well even when you have relatively high-bandwidth connections to send the data over (like multi-gigabit memory busses), so the problem only gets worse when you use a measly network pipe or modem line. (Processor memory-bus bandwidth tends to be in the 5-10 gigabit range; even the best home Internet access is only 10-100 megabits.)
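The gap above can be put in concrete terms; the numbers below (5 Gbit/s bus, 10 Mbit/s home link, 1 GB of job input) are illustrative, not measurements:

```python
# Rough lag comparison for shipping 1 GB of input data to a remote worker.
data_bits = 8 * 10**9          # 1 GB expressed in bits

bus_bps  = 5 * 10**9           # low end of the memory-bus range quoted above
home_bps = 10 * 10**6          # good home Internet access

t_bus = data_bits / bus_bps    # time to move the data locally
t_home = data_bits / home_bps  # time just to get it to the remote machine

print(t_bus)                   # 1.6 seconds
print(t_home)                  # 800 seconds, i.e. over 13 minutes of pure transfer
```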
the steady state of a hard drive is full. There just isn't going to be enough spare, online storage space on folks' desktops to give out any appreciable amount to share. If you have to deal with the bloat of a self-healing encoding, the problem only gets worse.
Consider the case of N users, each with one hard drive of size X. They share out half of their hard drive space, but a file takes three times as much space to store on the distributed system as it does purely locally (because of the self-healing encoding). The total usable space for the group is now N*X/2 + (1/3)*N*X/2 = (2/3)*N*X, or two-thirds of the actual total space on the network. The average space available to any single user is that total divided by the number of users: two-thirds of the space on the user's own hard drive.
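The same arithmetic, parameterized so other sharing fractions and encoding overheads can be tried (a sketch, not from the article):

```python
# Effective capacity of the group: unshared space counts at face value,
# shared space shrinks by the self-healing encoding's overhead factor.
def effective_capacity(n_users, drive_size, share=0.5, overhead=3):
    raw = n_users * drive_size
    local = raw * (1 - share)            # unshared space, usable as-is
    shared = raw * share / overhead      # shared space after encoding bloat
    return local + shared

n, x = 100, 100e9                        # e.g. 100 users with 100 GB drives
fraction = effective_capacity(n, x) / (n * x)
print(fraction)                          # 2/3 of the raw space, as computed above
```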
That doesn't sound like too good a deal to me. Admittedly, I would be getting some extra reliability, but given how few home users back up their data on a regular basis, I don't think reliability is worth much (at least to home users).
At first blush, it sounds like a nice idea, but I don't think the economics are going to support it. It will always be easier and cheaper for the folk that actually need more storage or processing power to just go out and buy it, especially while Moore's law is in effect. For anyone else, it just doesn't matter.
Been there, done that (Score:3, Interesting)
While this is not directly mentioned by David Anderson in his article, I know for a fact that this is something United Devices is interested in, because late last year Mojo Nation was in discussions with UD to provide just this sort of service to its users.
This sort of distributed backup is what the current private branch of the Mojo Nation codebase does, with a little taskbar app that sits in the background and distributes backed-up files to peers within the enterprise. One major benefit your post missed is that the majority of the data stored on hard drives within an enterprise is redundant (e.g. multiple copies of MS Word, etc.); with a distributed backup system you only need to keep a few copies of such files around for restores. You can back up 99% of your data while only needing 10-15% of the available space on individual PCs.
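That redundancy argument is essentially content-addressed deduplication; the sketch below is illustrative, not Mojo Nation's actual design:

```python
import hashlib

# Content-addressed store: identical files (the thousandth copy of a
# shared binary) hash to the same key and are physically stored once.
store = {}      # digest -> file contents (one physical copy per unique blob)
catalog = {}    # path -> digest, enough metadata to restore every file

def backup(path: str, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)       # store the blob only if unseen
    catalog[path] = digest

for i in range(1000):
    backup(f"/pc{i}/winword.exe", b"identical binary")   # 1000 logical copies
backup("/pc0/report.doc", b"unique document")

print(len(catalog), len(store))          # 1001 logical files, 2 stored blobs
```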
In what is turning out to be one of life's interesting ironies, the company most interested in this UD/Mojo Nation pairing was Enron's bandwidth-trading group (mostly for storing medical imaging data and distributed corporate backups). When Skilling left Enron, just before the whole accounting scandal started to blow up, the Enron guys became "unavailable", so things never moved forward; but you can be certain that this sort of distributed data storage and backup system will appear again.
Jim
Intended use... (Score:2, Interesting)
I'm not so worried about the technical side of things, but more along the lines of intended use...
Could someone queue a job to crack an encrypted password file, or a document stolen from the government? I imagine that with 150 million computers contributing their spare cycles, this job could be done with relative ease. This is definitely an issue the authors have failed to address in their proposal.
The legal ramifications alone make this prohibitive. Is a person whose computer did 0.1% of an illegal activity just as liable as someone who did 10%, 25%, or 50%, or as liable as the person who submitted the job? Can you even fully control what kinds of jobs your system is doing under this proposed infrastructure?
It may be a great idea for, say, X machines inside a large corporation, but there are already some alternatives to fill that need. I just don't see how they can work out the logistics of issues such as the one I present above, when they also have to worry about the technical and financial issues that such a system would bring with it.
Re:It's been done, and no one uses it (Score:3, Interesting)
Hm. So we have a set of "theoretical" problems, for which it's doubtful that solutions exist. Except that you say they've already been solved...and apparently they're not just theoretical either. Truly, you have a dizzying intellect.
Cheaper, yes. More robust? For what value of "robust"? Are we talking about data that only exists in one place, or in multiple places? Which one's more resistant to the type of failure that takes out a whole site? Please provide a definition by which something that exists only on your machine (whose mere existence is only known locally) is more robust than something that exists in multiple places.
Irrelevant. In any but the most stupidly designed distributed data stores, most data would be served out of a local cache under most conditions. In many, the next step would be to serve it out of another geographically-local machine over a fast LAN connection. Just because you personally can't think of a distributed-storage architecture any better than traversing the globe for every datum doesn't mean that better architectures don't exist.
Really? Ever try to do mmap-style I/O over Napster? How about plain old open/read/write over Gnutella? Byte-range locking within a Freenet file? Hmmm. If you want to talk about solved problems, how about ideas like VFS layers and network-protocol abstractions? To provide generalized, transparent access to data, on a par semantically with the sort of access that you get with a local filesystem, your "user-level process" isn't going to cut it. Not by a long shot. That's like going back to the days when every application needed its own library just to get keyboard input or draw stuff on the screen. This kind of thing belongs, at least partially, inside the operating system so that all applications can use all equivalent protocols without special linkage; see my file-sharing manifesto [platypus.ro] for a fuller explanation.
Re:A question of trust (Score:3, Interesting)
The purported purpose of many redistributive taxes is to either offer a "temporary" relief against hardship of some sort, or, more insidious, offer investment capital for some venture which is expected to generate wealth in the future.
Historically, private charity (when not starved of dollars that go toward taxes instead of the charity) does a better job of taking care of the poor and destitute than government does.
As for "investment capital", if the venture were worthy of funding, private investors would do so, for a share of the expected gains.
Sometimes, of course, the government wins, or at least had a minuscule investment in something that wins big (think "Al Gore's" Internet). And I've seen many a slashdotter argue where government should "invest" -- NASA being a favorite "charity" (because they do cool stuff, I suppose). So we slashdotters, as a group, are not immune to the lure of redistributed tax dollars. The big problem here is that no matter how small the "government's" (i.e. taxpayers') investment, they claim ownership lock, stock, and barrel, citing that "it wouldn't exist if not for Uncle Sam [substitute your government as appropriate]".
Perhaps not as soon, but worthwhile things do get tended to by the private sector "when the time is right" (yes, in expectation of profit, of course). The private sector tends to be far more responsive as well, especially in innovative new technologies exploited by startups.
So, no, I am not any friend of government redistributive taxation, but I do think we should have strong counter arguments for all the "justifications" for it.