
Comment Re:Federal vs. local decision (Re:I like...) (Score 1) 643

And probably unconstitutional, if any state had had the pills to take it to the Supreme Court. I seem to remember this bit about separation of powers, and tying it to special treatment of states in the disbursement of commonly raised tax money is openly, transparently, an attempt to circumvent that bit and hide the fact that it is basically passing a federal law that applies to a domain reserved to the states. And while I think the camera is a good idea as well, and only the tip of the iceberg of electronics needed to safeguard civil liberties (just think of doing precisely the same thing within the prison system), I don't at all like the idea of illegally arm-twisting states into compliance through the threat of differential access to federal funds. The states are not constitutionally bound to do what the federal government wants them to do on states' rights issues in order to be eligible for federal funds returned to the states on an absolutely equal basis.

Of course, this is only one of many, many places where the federal government exceeds its mandate both in the collection of taxes and the return of those taxes to the states in a substantially inequitable way. We should either scrap this part of the constitution and eliminate states altogether or else do some pretty serious cleanup to try to put Our Government in some vague approximation of compliance with both the letter of and the clear intent of the founding fathers in the constitution.

rgb

Comment ...because giving them a hardwired unique ID... (Score 1) 465

...that enables the thief to be arrested and the phone returned to its actual owner the first time the miscreant tries to connect it to a service provider -- that would be, I dunno, undemocratic. Un-amurrican. Besides, it would undercut the important corporate businesses that insure phones, make new phones, and sell you upgraded phones, and they employ a lot of people. If we actually arranged it so that phone theft is impossible because stolen phones could always be traced the first time the non-owner tried to register to use them anywhere in the world, how would poor people and unemployed teenagers ever get smartphones?

No, it makes much more sense to completely rearrange it so that the phones can automatically be turned off when they are stolen (or whenever some official wants to violate your civil liberties without a warrant) and not even try to arrest the criminals. Our police are too busy busting pot smokers and underage beer drinkers and giving out citations for expired boat trailer license plates -- y'know, keeping those streets safe -- to bother to run down actual theft, even when it is impossible to use the stolen device without connecting it to a network that can locate it to within a meter or so almost anywhere in the world at will.

This makes complete sense. Go California!

rgb

Comment Re:Gotcha covered... (Score 2) 259

Oh, and what about the graphic dimensions and hidden dimensions? Just because your working physical-space dimensionality fits in 640K -- at least, if you have a backing store with a few megadimensions to spare -- doesn't mean that you don't need someplace for God to hang out and run things, or dimensions needed for your inner spiritual eye to be able to visualize the projective results of the stuff in the 640K.

Now, for just 2^{640!} dollars, I'd be happy to sell you an expansion space with an extra 400K dimensions, to let you offload God into a meta-space of Its own and still have sufficient dimensional resolution to be able to achieve satori or visualize the cosmic whole in some sort of projection. And it comes with both serial and parallel dimensional portals, not to mention a built-in communication channel connecting your working dimensionality with God-space. It also permits you to expand your paltry 640K dimensional mother-Universe to a proper full-scale Universe with all 2^{1024} dimensions that the underlying physics can use -- with indirect dimensional addressing -- accessible.

For the first time, your matter assemblers and compilers will have the dimensions that they need to work. Inflation will be tremendously accelerated. You can cut the time required for a full-scale big bang reboot to end up with Intelligent Life from 14 billion years to a mere 20! Just think of what you can evolve after that!

rgb

Comment Re:Derp (Score 2, Informative) 168

Surely you must be joking. There have been Explorer bugs that went unpatched for six months. No operating system is immune, and security flaws arising from bugs in code are an inevitable accompaniment to having code in the first place, especially complex code with lots of moving parts (some of them infrequently tested/visited), but Microsoft has historically been Macrosquishy when it comes to security and patches. LOTS of holes, and many of them (in the historical past) have taken a truly absurd amount of time to be patched, resulting in truly monumental penetration of trojans and viruses via suppurating wounds like Outlook. I still get an average of one email message a day that makes it through my filters purporting to be from a correctly named friend or a relative and encouraging me to click on a misspelled link. You think those messages are arising from successful data-scraping via Linux malware or Apple malware or FreeBSD malware?

Perhaps, driven by the need to actually compete with Apple and Linux (including Android) instead of resting on their monopolistic laurels, they have cleaned up their act somewhat over the last few releases of Windows. But on average over the last 10 or 15 years, certainly since the widespread adoption of apt and yum to auto-maintain Linux, the mean lifetime of a security hole in a Linux-based system all the way out to user desktops has been around 24 hours -- a few hours to patch it and push it to the master distro servers, mirror it, and pull it with the next update. Microsoft hasn't even been able to acknowledge that a bug exists on that kind of time frame, let alone find the problem in the code, fix it, test it, and push it.

If they are doing better now, good for them! However, look at the relative penetration of malware even today. Linux malware has a very hard time getting any sort of traction. Apple malware has a very hard time getting any sort of traction. Windows? It's all too easy to whine that it gets penetrated all the time because it is so popular and ubiquitous, except that nowadays it is neither.

rgb

Comment Few alternatives? (Score 1) 89

While that may seem slow, people in remote areas may have few alternatives.

Other than:

Solar power, at roughly $1/watt (and then "free" for 10-20 years), price falling on a nearly Moore's Law trajectory.
Wind power -- expensive and unreliable, but simple technology (and humidity isn't exactly reliable either).
The entire panoply of standard sources -- coal, oil, gasoline, nuclear, hydroelectric, alcohol, diesel, methane... which we can deliver a variety of ways including simply delivering a small generator and fuel.

I would truly be amazed if a new, patented technology of this sort were within an order of magnitude -- or even two -- of the cost of a solar source that is superior in nearly every way, and there are very few places where the humidity is high, temperatures are reasonable, and yet the sun does not provide enough light for solar to work. This is truly an edge technology unless they make it astoundingly cheap.
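For the curious, here is a quick back-of-the-envelope on that $1/watt figure. The capacity factor, panel lifetime, and the $1/watt number itself are round assumptions for illustration, not data from the article:

```python
# Back-of-the-envelope cost of energy for the ~$1/watt solar figure above.
# Capacity factor, lifetime, and the $1/watt number are assumed round values.
capex_per_watt = 1.00        # dollars per peak watt (the figure quoted above)
lifetime_years = 20          # assumed panel lifetime
capacity_factor = 0.18       # assumed average output as a fraction of peak

hours = lifetime_years * 365 * 24
kwh_per_peak_watt = hours * capacity_factor / 1000.0   # lifetime kWh per installed watt
cost_per_kwh = capex_per_watt / kwh_per_peak_watt

print(f"Lifetime output per peak watt: {kwh_per_peak_watt:.1f} kWh")
print(f"Implied cost of energy: ${cost_per_kwh:.3f}/kWh")
```

With those assumptions it comes out to a few cents per kWh, which is the bar the humidity gadget has to clear.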

rgb

Comment Re:Ingredients for water? (Score 1) 190

The interesting question is, I suppose, whether or not this source of "water" is responsible for the oceans, or if they came about from e.g. cometary impacts post-crust formation (before the crust formed they don't really count as "cometary impacts", it was all just part of the formation process). This has a significant impact on the probability of finding water on extrasolar planets and hence on the CO_2/O_2/H_2O/N_2 life cycle establishing itself. There is of course evidence in the form of e.g. Europa and Titan that there is abundant water out there that COULD form seas on planetoid objects in our own solar system if the temperature/atmosphere composition range were right, but I'm not sure that we have a compelling, evidence supported picture of the details of the Earth's early evolution and how much of it was a comparatively rare accident, how much is commonplace in planetary formation. If we built a really, really big telescope at e.g. one of the Lagrange points -- maybe something with a 100 meter or even a kilometer primary mirror and similar scales for the optical paths -- we might be able to "see" extrasolar planets at a level of detail sufficient to resolve the chemistry and maybe more of smaller planets and planetary objects, not just the ones with orbits and mass parameters sufficient to make the current cut. And see a lot of other really cool stuff as well, of course -- such an eye in the sky could look across time to the big bang and immediate aftermath a lot more effectively than the Hubble.

Let's see, a primary mirror with a diameter of d = 1000 meters, \alpha = 1.22 \lambda/d, visible light is roughly 1 micron, so the diffraction-limited resolution would be order of 10^{-9} radians. Nearish stars are order 10^16 meters away, so we could barely resolve details 10^7 meters in size. Darn, that's right around the size of the Earth. We could actually photograph Jupiter-sized planets, but Earth-like planets would still just be a (fat) dot. Of course in the UV spectrum we could get one more order of magnitude out of ordinary optics, so we could possibly see continent-sized features and oceans in the UV (and resolve an Earth as more than just a dot). And people might find a way to cheat resolution a bit more than that -- build a coherent array of smaller telescopes, whatever. It would need damn good optics, as well.
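Running the same arithmetic in a few lines of Python (the kilometer aperture and the 10^16 meter distance are just the round numbers from above):

```python
# alpha ~ 1.22 * lambda / d, applied to the round numbers above: a 1000 m primary
# and a "nearish" star of order 10^16 m away.
aperture = 1000.0                 # m
distance = 1e16                   # m
earth_diameter = 1.27e7           # m

for label, lam in (("visible (~1 micron)", 1e-6), ("UV (~0.1 micron)", 1e-7)):
    alpha = 1.22 * lam / aperture             # diffraction-limited angle, radians
    feature = alpha * distance                # smallest resolvable detail, meters
    print(f"{label}: alpha ~ {alpha:.2e} rad, resolves ~{feature:.2e} m "
          f"({feature / earth_diameter:.2f} Earth diameters)")
```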

One can dream, right? The Big Eye. Crowdfunding, anyone? If everybody on the planet contributed a dollar a year, we could build it inside a decade. Or maybe two. I might even live to see the first pictures come back. But probably not.

rgb

Comment Re:"Simplest explanation" (Score 5, Informative) 105

Damn, I had to give up modding this to answer, but I can't leave this.

One cannot "capture" a body the size of the moon by any two-body elastic (e.g. gravitational) interaction. Up to irrelevant perturbations such as gravitational wave radiation (presuming such a thing to exist), energy is conserved, and if it starts out unbound to the Earth it will end up unbound to the Earth.

One can capture in a three (or more) body interaction, but in that case the missing energy has to go someplace, and we are talking about a LOT of energy in the case of an orbiting moon. Enough energy to basically melt the moon and the earth and then some. One would expect to see some sort of orbital remnants of such a many-body event, and all of the other bodies in the solar system are a bit too far away to be good candidates in terms of the forces needed, and show none of the orbital perturbation one would expect as a consequence.
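To put a rough number on "a LOT of energy": even the most gentle capture, from a barely unbound approach into the present orbit, has to dump at least the binding energy of that orbit somewhere. A quick sketch (the heat-of-fusion figure for rock is an assumed round number):

```python
# Minimum energy budget for a "gentle" capture into the present lunar orbit,
# versus melting a Moon's worth of rock.  The heat of fusion is an assumed
# round figure (~4e5 J/kg for silicates); everything else is standard.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_earth = 5.97e24      # kg
M_moon = 7.35e22       # kg
a_moon = 3.84e8        # m, present orbital radius

# A body arriving from far away is at best marginally unbound, so to end up
# bound it must shed at least the binding energy of the final orbit.
binding_energy = G * M_earth * M_moon / (2 * a_moon)

heat_of_fusion = 4e5                      # J/kg, assumed
melt_moon = M_moon * heat_of_fusion

print(f"Energy that must be shed:  ~{binding_energy:.1e} J")
print(f"Energy to melt the Moon:   ~{melt_moon:.1e} J")
print(f"Ratio: {binding_energy / melt_moon:.1f}")
```

And that is the most charitable case; a real capture from a hyperbolic approach at a much smaller radius has far more energy to get rid of.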

That leaves inelastic events. Tidal interaction is inelastic over time, but to make it strong enough to mediate a "capture" it would damn near be a collision anyway, brushing up on the Roche limit (look that up). That too would leave the nascent moon in an orbit much closer in than the inferred initial radius of its orbit. It also wouldn't explain the apparent deficit of heavier elements and of an iron core in the moon (thought to have been literally blown out of the incoming body in the collision and either ejected altogether to carry away the missing energy and momentum needed to leave the remnant in orbit or absorbed into the Earth), and a bunch of other things.

So really, the collision hypothesis makes "enough" sense and is consistent with enough data that it is AFAIK the "accepted" explanation of the moon's origin, with the usual caveat that contrary evidence or a better argument in the future might change that as we cannot easily be certain about events 4.5 billion years ago.

rgb

Comment Re:Gnome; Mate; Cinnamon; Unity; Xfce4...Save Me (Score 2) 24

It is, indeed, sad. Gnome 2 was a perfectly usable desktop. Gnome 3 as you say looks like and acts like a "kinda" tablet interface, but it doesn't make it on tablets and truly sucks on desktops and laptops compared to G2. My own solution has been to use a release that still supports G2. It is an imperfect solution, but I work at the interface level, and the imperfections and risks are all occult or fixable with care, where nothing can "fix" G3 but snipping the entire fork and pretending that it never existed. In the meantime, I have my six hot keyed desktops, my keystroke-cyclable windows, and can work (or play) for hours in ten or twenty windows and never touch my mousepad or take my fingers off of the home keys.

That's the real problem with G3. Tablets are lovely in the Macintosh/Apple sense -- you can learn to use their interface in a day, and pay for that knowledge for a lifetime in reduced productivity compared to what you could realize with a more complex interface with more configurable options and ways of doing things. If they had kept G2's keyboard/mouse-driven structure and general function and customizability and merely added support for a user-selectable touchscreen swipe mode, I'd a) never have noticed, and b) if I ever found myself trying to run G3 on a tablet and DID notice, have been pleasantly surprised to find that it had a tablet-savvy mode that otherwise preserved my desktop setup (as well as possible given the differing screen sizes).

The other sad thing about not only Gnome but most of the rest of the desktops is that no progress was made in places where it would have been GOOD to make it. For example, I work on at least three or four different (all Linux-based) systems. They have different screen resolutions, different sized hard disks, different speed CPUs, different capacity memories. Yet Gnome is still too stupid for me to be able to clone my home directory across those systems -- or e.g. NFS mount a single home directory from a server on all of those systems -- and have it just work, fixing the font sizes, default window sizes, and so on. I've written my own highly custom startup scripts in the past that do things like determine the architecture etc. and then do the right thing by literally overwriting some of the core startup data or following complex conditional branches when logging in, but this sucks and is a pain to maintain. Yet nobody even tries to do better, at the right level (that is, within e.g. gconf or the gnome configuration manager itself). Linux has actually gotten worse as a client-server, shared home directory architecture, compared to what it was when it was closer to e.g. SunOS and so on back in the 90's (and it wasn't completely great back then).
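The kind of fix-up those startup scripts do by hand is not rocket science -- something like the following sketch, where the hostnames, font sizes, and geometries are made-up placeholders rather than anything Gnome or gconf actually provides:

```python
#!/usr/bin/env python3
# The sort of per-host fix-up described above, done explicitly: pick font and
# window defaults based on which machine you actually logged into, instead of
# hardwiring one set in a shared home directory.  Hostnames and values are
# made-up placeholders, not anything Gnome/gconf provides.
import socket

HOST_PROFILES = {
    "laptop":  {"font_size": 10, "terminal_geometry": "80x24"},
    "desktop": {"font_size": 12, "terminal_geometry": "120x40"},
}
DEFAULT_PROFILE = {"font_size": 11, "terminal_geometry": "100x30"}

def profile_for_this_host():
    """Return the override block for the current host, falling back to defaults."""
    host = socket.gethostname().split(".")[0]
    return HOST_PROFILES.get(host, DEFAULT_PROFILE)

if __name__ == "__main__":
    print(f"{socket.gethostname()}: using {profile_for_this_host()}")
```

The point is that the desktop environment itself, not every user's login scripts, should be doing this.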

Whine, whine, I know. If I were a good open source human and wanted this fix, I'd participate. And if I were a frickin' robot who didn't need sleep or if I weren't I but we, me and my ten clones, I would. But one lifetime isn't enough time to do all that I'm doing already. So all I can do is pray that somebody, somewhere, keeps G2 alive or forks it out and develops it in ways that do NOT break the features that I rely on most to support my daily work activities. G3 was actually the deal breaker with me and Fedora -- I stuck with it until then and even thrived, but to get G2 I had to go back to Centos 6 (and overlay the good parts of Fedora, namely Centos ports of the key Fedora add-on software). If Fedora re-embraced a functional G2 fork or clone (that worked and was well-maintained) I'd be perfectly happy to go back to it. I never minded having bleeding edge software handy, even at the moderate expense of stability -- as long as they don't break Fedora Core to the point where it interferes with workflow, that is, at the desktop provisioning level.

rgb

Comment Re:I wish they'd make up their minds... (Score 1) 76

Fair enough (and yeah, I know about Gamma Ray bursts:-). The Sun could hiccup tomorrow and wipe out most of the life on the planet in an event hardly noticeable from light years away (Larry Niven wrote a lovely short story based on this theme) -- it wouldn't even take a full gamma ray burst. But the point is -- why do they assume that planets orbiting a Red Dwarf will not have a magnetic field? Indeed, I would expect the opposite -- if it has a nickel-iron core like the Earth does, one would expect magnetic protection like the Earth has. If it were a super-Earth like the one discussed on /. yesterday, with a density, size and mass all larger than Earth's, you'd even have additional gravity to bind the atmosphere.

But Red Dwarfs are the last place to look as a source of "real radiation", even if you are closer to the sun -- if one has to BE closer to the sun, see comments on greenhouse effect -- in order to stay warm. Venus has atmosphere to a fault. Put Venus near a Red Dwarf and wait a billion years or so and maybe its atmosphere will thin enough and alter chemistry enough to support Earth-like temperatures and free water. Make Mars the size of the Earth -- but keep the oceans and a pronounced CO_2 bias in the atmosphere -- and maybe Mars would have sustainable temperatures and equatorial oceans, at least. It's doubtful that the "habitable zone" is as narrow as we might think it is using the Earth and our own solar system as the N=1 sample. And then, it is a big Universe.

rgb

Comment I wish they'd make up their minds... (Score 1) 76

...about greenhouse gases. We are told that high concentrations will make a Venus out of Mars, that in spite of the young sun being substantially "cooler" than the sun is now, the Earth's high GHG concentration over most of the last 600 million years is responsible for it being substantially warmer than it is now, etc. Surely there are atmospheric chemistries that would keep iron-core, magnetic-field-equipped, water-bearing planets nice and toasty at a good safe distance from a red dwarf. Given the temperature, life will (probably) find a way...

Of course if it is really the case that temperature is mostly determined by net insolation and perhaps things like the presence of a vast water ocean covering 70% of the surface, with GHGs only contributing an easily saturable "blanket effect" good for a few tens of degrees absolute, well then, I could see that there could be a problem.

Also, it is worth remembering that water is a great radiation barrier. We obviously want to find "land life" because of our occupational bias, but as long as the planet has liquid water oceans, who really cares if the atmosphere is too radioactive for genetic stability? First of all, one can still imagine all sorts of ways that animals or plants could evolve to protect their genetic inheritance and re-stabilize a speciation process -- a half-dozen sexes, for example, with some sort of majority rule on the chromosome slots, using information redundancy to combat entropy as it were (or evolving more advanced stuff -- genetic "checksum" correction of some sort). Red Dwarfs have much longer lifetimes than the sun, and given ten or twenty billion years, who knows what evolution will kick out? It could be that all of the really old, stable, wise life forms in the Universe evolved around Red Dwarfs because mutation rates (and consequently rolls of the evolutionary dice) are high. We don't completely understand genetic optimization as employed in actual evolution, any more than we completely understand how the brain's neural networks avoid some of the no-free-lunch theorems and empirically demonstrated flaws in e.g. classification by even the most sophisticated networks we can yet build. I'm not asserting that there is any "mystery" there, but there are damn sure a lot of scientific questions yet unanswered, and speculating about what we might find living in orbit around a Red Dwarf -- publicly and with much fanfare -- when we cannot reasonably go and find out is science fiction masquerading as science, not the real thing.
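The "majority rule on the chromosome slots" idea is just error correction by redundancy, and it works embarrassingly well even in a toy model (the copy count and mutation rate below are arbitrary illustration values, nothing biological):

```python
import random

# Toy model of "majority rule on the chromosome slots": keep several noisy copies
# of the same genetic "message" and take a per-position majority vote.
random.seed(1)
BASES = "ACGT"

def mutate(seq, rate):
    """Each base is independently replaced by a random base (possibly the same) with probability rate."""
    return "".join(random.choice(BASES) if random.random() < rate else b for b in seq)

def majority_vote(copies):
    """Per-position majority over the redundant copies (ties broken arbitrarily)."""
    return "".join(max(BASES, key=col.count) for col in zip(*copies))

original = "".join(random.choice(BASES) for _ in range(200))
copies = [mutate(original, rate=0.10) for _ in range(6)]   # six "sexes"/copies, 10% noise

errors_single = sum(a != b for a, b in zip(original, copies[0]))
errors_voted = sum(a != b for a, b in zip(original, majority_vote(copies)))
print(f"Errors in one copy: {errors_single} / 200")
print(f"Errors after majority vote over 6 copies: {errors_voted} / 200")
```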

rgb

Comment Re:Science Writers: Stop Causing Us Intellectual P (Score 1) 147

The real problem (or interesting thing about this if you don't like "problem") with this is scaling. 2.3^3 = 12.2. If this mystery planet is 2.3 times the size of Earth, one would expect it to have 12.2 (give or take a hair) times the mass of Earth, presuming that it has a similar core structure. It is almost half again more massive than that. This in turn suggests that the mantle is proportionally less of the total volume of the sphere, or rather, that it has a disproportionately larger core (nickel-iron core densities are 2-3 times the density of the mantle). At a guess, the core alone -- if it is nickel-iron, as seems at least moderately reasonable -- is at least half again larger than the size of the Earth. Alternatively, its core could contain an admixture of much heavier/denser stuff -- tungsten, lead, gold -- and not be so disproportionate.
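In code form, with the reported mass filled in as roughly 17 Earth masses (that number is my assumption, chosen to match "almost half again more massive"; the article's exact figure may differ):

```python
# Scaling check: mass should go as radius cubed if the mean density matched Earth's.
# The 17-Earth-mass figure is filled in here as an assumption.
radius_ratio = 2.3
mass_if_earth_density = radius_ratio ** 3      # Earth masses
mass_reported = 17.0                           # Earth masses, assumed for illustration

print(f"Earth-density prediction:      {mass_if_earth_density:.1f} Earth masses")
print(f"Assumed reported mass:         {mass_reported:.1f} Earth masses")
print(f"Implied mean density vs Earth: {mass_reported / mass_if_earth_density:.2f}x")
```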

rgb

Comment Re:This research should receive enormous funding. (Score 2) 202

Please excuse my absolute ignorance, but I was under the impression that a classical information channel was only required to transmit one of the entangled photons. If one of the entangled photons (or whatever it is that is entangled) was transported elsewhere (truck, fiber optics, what-not), the two entangled particles would still maintain the same state (spin etc) and information could then be transmitted faster than light by changing the state of one and reading the state of the other.

Information cannot be transmitted faster than light as far as we know in standard physics today (barring extreme relativistic things like white or black holes and I doubt even those unless/until experiment verifies any claim that they can).

Quantum theory doesn't get around it. You cannot choose the direction in which to "collapse" or "change the state" of one of the two entangled spins, because the instant you measure it, it "collapses". You might now be able to predict the state of the other end of the channel, but the person there can't, because he doesn't know what you measured, so if he measures up or down when he tries (again, supposedly "collapsing the wavefunction") he won't know what you measured at your end or (since the two spins are no longer entangled as soon as a measurement is made at either end) what you do to it subsequently.
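You can see the "his end looks like coin flips no matter what" point even in a purely classical stand-in that reproduces only the same-axis anti-correlation (it is not a simulation of the full quantum statistics, just of the marginals):

```python
import random

# Classical stand-in for the same-axis singlet correlations: each run, Alice's
# outcome is a fair coin flip and Bob's is forced to be the opposite.  The joint
# record is perfectly (anti-)correlated, but either column alone is 50/50 noise,
# so nothing Alice does is visible in Bob's local statistics.
random.seed(0)
N = 100_000
alice = [random.choice([+1, -1]) for _ in range(N)]
bob = [-a for a in alice]

print("Alice fraction +1:", sum(x == +1 for x in alice) / N)             # ~0.5
print("Bob   fraction +1:", sum(x == +1 for x in bob) / N)               # ~0.5
print("Correlation <ab>: ", sum(a * b for a, b in zip(alice, bob)) / N)  # -1.0
```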

But the real problem (the "paradox" bit of EPR) is much worse than that. Suppose the two "entangled" electrons are separated by some distance D. Non-relativistic naive stupid quantum theory states that when one of the two electrons is measured, the wavefunction of the whole thing collapses. But suppose that D is nice and large -- in gedanken experiments we can make it a light year, why not? In the "rest frame of the Universe" (the frame in which the cosmic microwave background has on average no directional doppler shift) experimenters on both ends simultaneously perform a measurement of the spin state of the two electrons. This (simultaneity) is a perfectly valid concept in any given frame but is not a frame invariant concept. Neither is temporal ordering a universally valid concept. But given a simultaneous measurement of the two spins, which measurement causes the wavefunction to collapse and determines the global final state, given that the entropy of their measuring apparatus (which is responsible for the random phase shifts that supposedly break the entanglement, see Nakajima-Zwanzig equation and the Generalized Master Equation) is supposedly completely separable and independent?

By making D nice and large, we have a further problem. I said that the measurements were simultaneous in "the rest frame" (and even gave you a prescription for determining what frame I mean), but that means that if we boost that coordinate frame along one direction or the other, we can make either measurement occur first! That is, suppose the spins are in a singlet spin state so that if one is measured up (along some axis) the other must be measured down. Suppose that in frame A, spin 1 interacts with its local measuring apparatus first and is filtered into spin down. This interaction with its local entropy pool -- exchanging information with it via strictly retarded e.g. electromagnetic interactions -- supposedly "transluminally", that is to say instantaneously in frame A -- "causes" (whatever you want that word to mean) spin 2 in frame A to collapse into a non-entangled quantum state in which the probability of measuring its spin up in that frame some time later than the time of measurement in frame A is unity. In frame B, however, it is spin 2's measurement that is performed first, and as the electron interacts with its entropy pool you have a serious problem. If you follow any of the quantum approaches to measurement -- most of them random phase approximation or master equation projections that assume that the filter forces a final state on the basis of its local entropy and unknown/unspecified state -- it cannot independently conclude that the spin of this electron is down -- the measurement will definitely be up -- because in frame A the measurement of spin 1 has already happened. In no possible sense can the measurement of spin 2 in frame B in the up state "cause" spin 1 to be in a state that -- independent of the state of its measurement apparatus -- will definitely be measured as spin down. Otherwise you have (in frame A) to accept the truth of the statement that a future measurement of the state of spin 2 is what determines the outcome of the present measurement of the state of spin 1. Oooo, bad.
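The order-flipping under a boost is trivial to check numerically. In units where c = 1, with the two measurement events simultaneous in the chosen rest frame at x = 0 and x = 1 light year:

```python
import math

# Two spacelike-separated measurement events, simultaneous in the chosen rest
# frame: event 1 (spin 1) at x = 0, event 2 (spin 2) at x = 1 light year, both
# at t = 0.  Units c = 1, distances in light years, times in years.
def boosted_time(t, x, v):
    """Time coordinate of event (t, x) seen from a frame moving at velocity v."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x)

for v in (+0.5, -0.5):
    t1 = boosted_time(0.0, 0.0, v)
    t2 = boosted_time(0.0, 1.0, v)
    first = "spin 1" if t1 < t2 else "spin 2"
    print(f"boost v = {v:+.1f}c: t1' = {t1:+.3f} yr, t2' = {t2:+.3f} yr -> {first} is measured first")
```

Boost one way and spin 1 is measured first; boost the other way and spin 2 is. So "which measurement collapsed the wavefunction" has no frame-invariant answer.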

The problem, as you can see, is that relativity theory puts some very stringent limits on what we can possibly mean by the word "cause". They pretty much completely exclude any possible way that the statement "measuring spin 1 causes the 1-2 entangled wavefunction to collapse" can have frame-invariant meaning, and meaning that isn't inertial-frame invariant in a relativistic universe isn't meaning at all -- that is, it is meaningless. We can only conclude that the correlated outcomes of the measurements were not determined by the local entropy state of the measurement apparatus at the time of the measurements.

Fortunately, we have one more tool to help us understand the outcome. Physics is symmetric in time. Indeed, our insistence on using retarded vs advanced or stationary (Dirac) Green's functions to describe causal interactions is entirely due to our psychological/perceptual experience of an entropic arrow of time, where entropy is strictly speaking the log of the missing/lost/neglected information in any macroscopic description of a microscopically reversible problem. That's the reason the Generalized Master Equation approach is so enormously informative. It starts with the entire, microscopically reversible Universe, which is all in a definite quantum entangled state with nothing outside of it to cause it to "collapse". In this "God's Eye" description, there is just a Universal wavefunction or density operator for a few gazillion particles with completely determined phases, evolving completely reversibly in time, with zero entropy. One then takes some subsystem -- say, an innocent pair of unsuspecting electrons -- and forms the submatrix (4x4, for a pair of spins) describing their mutually coupled state. Note well that both spins are coupled to every other particle in the Universe at all times -- this submatrix is "identified", not really created or derived, within the larger universal density matrix, and things like rows and columns can be permuted to (without loss of generality) bring it to the upper left hand corner where it becomes the "system". The submatrix for everything else (not including coupling to the spins) is similarly identified.
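The "identification" step is just the standard reduced density matrix. For the two-spin singlet it looks like this in numpy -- a sketch of the bookkeeping, not of the full Nakajima-Zwanzig machinery:

```python
import numpy as np

# The two-spin singlet as a pure 4x4 density matrix, and the reduced ("identified")
# density matrix of spin 1 obtained by tracing out spin 2.  All of the definiteness
# lives in the correlations; spin 1 by itself is maximally mixed.
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])

singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2.0)
rho = np.outer(singlet, singlet)                       # 4x4, pure, zero entropy

# Partial trace over spin 2: index as (i1, i2, j1, j2) and sum over i2 = j2.
rho_spin1 = np.einsum("ikjk->ij", rho.reshape(2, 2, 2, 2))

print("Two-spin density matrix:\n", np.round(rho, 3))
print("Reduced density matrix of spin 1:\n", np.round(rho_spin1, 3))
```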

The Nakajima-Zwanzig construction treats this second submatrix statistically because we cannot know or measure the general state of the Universe and have a hard enough time measuring/knowing the state of the 4x4 submatrix we've identified as an "entangled system". It projects the entirety of "everything else" into diagonal probabilities (by e.g. a random phase approximation, making the entropy of the rest of the Universe classical entropy) and then treats the interaction of these diagonal objects with the spins as being weak enough to be ignored, usually, except of course when it is not. It is not when e.g. the spins emit or absorb photons from the rest of the Universe (virtual or otherwise) while interacting with a measuring apparatus or the apparatus that prepared the spins. Because we cannot track the actual fully entangled phases of all the interactions within this enormous submatrix, and between it and the system, the best we can manage is this semiclassical interaction that takes entropy from "the bath" (everything else) and bleeds it statistically into "the system".

In this picture (which should again be geometrically relativistic) there was never any question as to the outcome of the "measurement" of the entangled spin state by the remotely separated apparati, and furthermore, while the NZ equation is not reversible, we can fully appreciate the fact that if we time reverse the actual density matrix it approximates, the two electrons will leap out of the measuring apparatus, propagate backwards in time, and form the original supposedly quantum entangled state because it never left it -- it was/is/will be entangled with every particle that makes up the measuring apparatus that would eventually "collapse" its wavefunction over the entire span of time.

Note that in this description there is no such thing as wavefunction collapse, not really. That whole idea is neither microreversible nor frame invariant. It describes the classical process of measurement of a quantum object, where the measuring apparatus is not treated either relativistically correctly or as a fully coupled quantum system in a collectively definite state in its own right. It isn't surprising that it leads to paradoxes and hence silly statements that don't really describe what is going on.

This is a more detailed discussion of the very apropos comment above that similarly resolves Schrodinger's Cat -- the cat cannot be in a quantum superposition of alive and dead because every particle in the cat and the quantum decaying nucleus that triggers the infernal device is never isolated from every other particle in the Universe. While it is alive, the cat gives off thermal radiation that exchanges information and entropy with the walls of the death chamber, which interact thermally with the outside. The instant the cat dies, there is a retarded propagation of the altered trajectories of all of its particles communicated to the outside Universe of coupled particles, which were in turn communicating/interacting with all of the particles that make up "the cat" and with the nucleus itself and with the detector and with the poisoning device both before, during, and after all changes. The changes never occur in the "isolation" we approximate and imagine in order to simplify the problem.

Hope this helps.

rgb

Comment Re:Nice try cloud guys (Score 1) 339

Although I don't want to get into the specific definition of "cloud" vs "cluster" vs "virtualized service server" etc -- with the understanding that perhaps it is a definition in flux along with the underlying supporting software and virtualization layers and hence will be hard to pin down and hence easy to argue fruitlessly about -- I agree with all of this. A major point of certain kinds of clustering software from Condor on down has been maintaining a high duty cycle on otherwise fallow resources that you've paid for already, that have to be plugged in all the time to be available for critical work anyway, that burn some (usually substantial) fraction of their load energy in idle mode waiting for work, and that depreciate and eventually are phased out by e.g. Moore's Law after 3-5 years in many cases even though they aren't broken and are perfectly capable of doing work. Software like Condor lets even desktops be part of a local "cloud" that can be running background jobs that don't really interfere with interactive response time much but that keep the duty cycle of the hardware very close to 100% instead of the 5-8% a mostly-idle desktop might be (while still burning half or even 3/4 of the energy it burns when loaded).
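To put rough numbers on why duty cycle dominates the economics (the power draws, hardware price, and electricity rate below are assumed round figures, not measurements of any particular machine):

```python
# Rough duty-cycle arithmetic behind the cycle-scavenging argument above.
hardware_cost = 1000.0        # dollars, amortized over the box's useful life
lifetime_years = 4
power_idle = 60.0             # watts, assumed near-idle draw
power_loaded = 120.0          # watts, assumed fully loaded draw
electricity = 0.10            # dollars per kWh, assumed

hours = lifetime_years * 365 * 24

def cost_per_busy_hour(duty_cycle):
    """Total cost of ownership divided by the hours the box does useful work."""
    busy = hours * duty_cycle
    energy_kwh = (busy * power_loaded + (hours - busy) * power_idle) / 1000.0
    total = hardware_cost + energy_kwh * electricity
    return total / busy

for duty in (0.07, 0.95):
    print(f"duty cycle {duty:>4.0%}: ~${cost_per_busy_hour(duty):.2f} per useful compute-hour")
```

With those assumptions, the mostly-idle desktop costs roughly an order of magnitude more per useful compute-hour than the same box kept busy by something like Condor.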

So it really isn't all about carbon (except insofar as energy (carbon based or not) costs money). It's about money, and some of the money is linked to the use of carbon. High duty cycle utilization of resources is economically much more efficient. That's why businesses like to use it. It's often cheaper to scavenge free cycles from resources you already have than it is to build dedicated resources that might end up sitting idle much of the time.

The catch, however, is systems management. In many cases, the biggest single cost of setting up ANY sort of distributed computing environment is human. A single sysadmin capable of setting up serious clustering and managing virtualized resources could easily be six figures per year, and that could easily exceed the cost of the resources themselves (including the energy cost) for a small to medium sized company. All too often, the systems management that is available is of questionable competence, as well, which further complicates things. Virtualization in the cloud can at least help address some of these issues too, as one shares high end systems management people and high end software resources across a large body of users and hence get much better scale economy IF you can afford enough competence locally to get your tasks out there into the cloud in the first place and still satisfy corporate rules for due diligence, data integrity and security, and so on.

However, be aware that for all of the advantages of distributed computing, there are forces, market and otherwise, that push against it. I buy a license for some piece of mission critical (say accounting) software, and that license usually restricts it to run on a single machine. If I put it on a virtual machine and run it on many pieces of hardware (but on only one machine at a time) I'm probably violating the letter of the law, and the company that sold the software has at least some incentive to hold me to the letter so they can sell me a license for every piece of hardware I might end up running a virtualized instance upon. Correctly licensing stuff one plans to run "in the cloud", then, is a bit of a nightmare -- if you care about that sort of thing. If one is a business, this can be a real (due diligence sort of) issue.

Which brings us full circle back to the top article. There are ever so many things that would be vastly more efficient "in the cloud", or just "run from the internet and distributed servers" as a more general version of the same thing. Netflix, sure, but how about paper newspapers? Every day, they require literally tons of paper per locality, cubic meters of ink, enough electricity to power a small manufactory, transportation fuel for the workers who cut the trees, fuel for the trees as they go to the paper mill, fuel to carry the paper to the newspapers, fuel to deliver the newspaper to the houses that receive it, and, as the final insult, the fuel needed to pick up the mostly unread newsprint and cart it off to "recycle" (which may save energy compared to cutting trees, but costs energy compared to not having newspapers at all).

Compare that to the marginal cost of storing an image of the same informational content on a server with sufficient capacity and distributing that replicated image to a household. The newspaper costs order of a dollar a day to deliver. The image of the newspaper costs such a thin fraction of a single cent to deliver that the only reason to charge for an online paper or news service at all is to pay the actual reporters and editors that assemble the image.
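As a sanity check on "a thin fraction of a single cent" (the file size and bandwidth price here are assumed round numbers):

```python
# Order-of-magnitude check: cost of pushing one newspaper-sized file over the network.
issue_size_mb = 10.0              # assumed size of one day's paper as a compressed image/PDF
bandwidth_price_per_gb = 0.05     # dollars per GB delivered, assumed round figure
paper_delivery_cost = 1.00        # dollars per day, the figure used above

digital_cost = (issue_size_mb / 1024.0) * bandwidth_price_per_gb
print(f"Digital delivery: ~${digital_cost:.4f} per issue "
      f"({digital_cost / paper_delivery_cost:.2%} of the paper cost)")
```

That works out to a few hundredths of a cent per issue, versus order of a dollar for the physical product.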

Compare the cost of delivering paper mail to email. Compare the cost of driving out to "shop" vs shopping online. The world hasn't even begun to realize the full economic benefits of the ongoing informational/communication revolution. And sure, some of the benefit can be measured in terms of "saving fuel/energy resources" (including ones based on carbon -- but even if the electricity I use, or that is used in steps that are streamlined or eliminated, comes from a nuclear power plant, it costs money just the same).

Personally, I don't worry as much about "carbon" utilization reduction as I do about poverty and improved standards of living worldwide (which I think is by far the more important priority) but network based efficiencies accomplish both nicely.

rgb
