
Comment Re:Sigh. Or rather Sci...Fi (Score 1) 153

Precisely! In fact, I'm thinking of rewriting Plato's Republic except replacing all instances of Philosophers with Science Fiction Writers. Think of the advantages! Instead of neurosing over healthcare and global warming we can have replacement organs, dinosaurs and space aliens! We can build our own space habitat! The Stars are Ours! No longer will mankind be limited by silly little things like physical law and economics, not with SF writers in control.

Best of all, SF writers tend to be pretty nerdy and (if we carefully exclude the horror contingent and zombie squad) inclined towards epic-heroic monumental happy endings. Life could never be boring with them in charge.

On to the asteroids! Don't worry about cost or whether or not the risks are worth the benefits! Damn the space torpedos! So what if another million or two of small children die of easily preventable causes this year! It helps reduce the rate of population growth, and how can that be a bad thing?

rgb

Comment Sigh. Or rather Sci...Fi (Score 3, Informative) 153

Science fiction authors have totally solved this problem a zillion different ways. They all share certain features. First, you go to the asteroid. Second, you set up some sort of mass driver or ion drive on the asteroid, ideally one that uses solar electricity or heat and not imported fuel, but if you don't mind a bit of radioactivity, propulsion by nuke is OK (Orion).

Depends on the mass of the asteroid as well, and how long you want to wait to get it home, and how much of it you want to have left when you get there. If you don't mind waiting a VERY long time, you could even use an angled light sail for propulsion. Third, you drive it home, or rather, have your fully automated computer tools do it for you. Fourth, you get it into Earth Orbit and then use it to threaten the hegemony running Earth, insisting that they send you dancing girls and exotic foods or you'll drop it on their heads -- it makes you way more money than actually selling the metal.

Optionally, you can have your robots smelt the asteroid in place first, using large mirrors to concentrate solar energy to melt the asteroid rock into slag plus metal, perhaps even collecting the slag (with a thin metal coating) to use in your linear accelerator or solar heated rocket as reaction mass. Some asteroids are really comet heads and might be covered with solid gases and ice and might support making real fuel on the spot as well. And fusion would no doubt shift the plan a bit as well.

But the final stage is always to drop them on Earth, not use them for good. Otherwise there isn't any real plot. Sometimes they don't even bother dropping them per se, they just fall by accident. But nobody can resist an umpty teraton-of-TNT explosion: not invading space aliens, not Dr. Evil, not the asteroid mining company's board of directors, not even the grizzled old asteroid miner whose sainted mother was put out onto the street to starve during the housing riots of 2057.

rgb

Robotics

Robots Put To Work On E-Waste 39

aesoteric writes: Australian researchers have programmed industrial robots to tackle the vast array of e-waste thrown out every year. The research shows robots can learn and memorize how various electronic products — such as LCD screens — are designed, enabling those products to be disassembled for recycling faster and faster. The end goal is less than five minutes to dismantle a product.

Comment Re:Fucking magnets, how do they work? (Score 3, Informative) 26

You mean, as in "read a physics textbook"?

Seriously. Depending on how much physics you've already studied, the right place to start will vary. A passable (free) intro is in my free online physics textbook http://www.phy.duke.edu/~rgb/C..., or wikipedia articles. A good intermediate treatment might be Griffiths' Introduction to Electrodynamics. If you want the pure quill uncut stuff, J. D. Jackson's Classical Electrodynamics is excellent, but it is not for the faint of heart or the wussy of PDE-fu.

In a nutshell: parallel currents of electric charge attract; antiparallel currents repel; changing currents radiate electromagnetic energy; and there are electrostatic forces happening in there somewhere too, in the cases where the currents are produced by unbalanced moving charge. Oh, and there is a fair bit of twistiness to the magnetic fields (called "curl") and forces, and the currents in question in "magnets" (or the general magnetic susceptibility of materials) tend to be non-dissipative (quantum) nuclear, atomic, or molecular circulations of charge, not Ohm's law type currents in a resistor. Ferromagnets in particular are what is usually being referred to, and they are characterized by long range order and a "permanent" magnetization in the absence of an external field below a certain temperature.
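If you want to poke at the first rule numerically, the textbook force per unit length between two long straight parallel currents is mu_0*I_1*I_2/(2*pi*d). A few lines of Python; the 1 A / 1 m numbers are just the standard SI illustration, not anything from this thread:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    """Magnitude of the magnetic force per unit length (N/m) between two
    long straight parallel currents i1, i2 (in A) separated by d (in m).
    Currents in the same direction attract; antiparallel currents repel."""
    return MU0 * i1 * i2 / (2 * math.pi * d)

# The classic case: 1 A in each wire, 1 m apart
print(force_per_length(1.0, 1.0, 1.0))  # 2e-07 N/m
```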

Hope this fucking helps:-)

rgb

Comment Re:Not exactly (Score 3, Interesting) 161

Besides, the invention of accelerators order of 12" in size is very, very old news. The Betatron:

http://physics.illinois.edu/hi...

is, as one can see, order of a foot in diameter and could produce electrons at order of 6 MeV in 1940. Yes, that is actually before the US entered WWII, and years before the invention of the synchrotron. That is gamma ~12, or v ~ 0.997 c. So if the top presentation were at all relevant to TFA it would actually be boring. One might safely conclude that it is wrong and boring.
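The gamma ~12 figure is a napkin calculation with standard relativistic kinematics (nothing betatron-specific assumed):

```python
import math

M_E_C2_MEV = 0.511  # electron rest energy in MeV

def gamma_beta(kinetic_mev):
    """Lorentz factor and speed (as a fraction of c) for an electron
    with the given kinetic energy in MeV: gamma = 1 + KE/(m_e c^2)."""
    gamma = 1.0 + kinetic_mev / M_E_C2_MEV
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma, beta

g, b = gamma_beta(6.0)
print(f"gamma ~ {g:.1f}, v ~ {b:.3f} c")  # gamma ~ 12.7, v ~ 0.997 c
```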

The betatron was damn near the first particle accelerator truly worthy of the name, and was just about exactly 12" in diameter (a bit larger than that including the frame for the magnets etc) as one can clearly see in the second photo on this page if not the first.

rgb

Comment Re: How about we hackers? (Score 4, Insightful) 863

Yeah, I've done a fair bit of time as sysadmin of several networks AND enjoy the cool stuff that comes with change and improvement in hardware and software over time.

Systemd no doubt will have growing pains associated with it, but I still remember the "growing pains" associated with kernel 2.0 (the first multiprocessor kernel) and issues with resource locking and ever so much more. Anybody want to assert that this wasn't worth it, that "single core/single processor systems were good enough for my granddad, so they are good enough for me"? Server environment or not?

Decisions like this are always about cost/benefit, risk, and long term ROI. And the risks are highly exaggerated. I'm pretty certain that one will be able to squeeze a system down to a slowly varying or unvarying configuration that is very conservative and stable as a rock, even with systemd. I -- mostly -- managed it with kernels that "could" decide to deadlock on some resource, and when the few mission critical exceptions to this appeared, they were aggressively resolved on the kernel lists and fixes rapidly appeared in the community. The main thing is the locking down of server configurations to avoid the higher risk stuff, and aggressive pursuit of problems that arise anyway, which is really no different than with init, or with Microsoft, or with Apple, or with BSD, or...

But look at the far-side benefits! Never having to give up a favorite app as long as some stable version of it once existed? That is awesome. Dynamical provisioning, possibly even across completely different operating systems? The death of the virtual machine as a standalone, resource-wasteful appliance? Sure, there may well be a world of pain between here and there, although I doubt it -- humans will almost certainly keep the pain within "tolerable" thresholds as the idea is developed, just as they did with all of the other major advances in all of the other major releases of all of the major operating systems. Change is pain, but changes that "wreck everything" are actually rare. That's what alpha/beta/early implementation are for, and we know how to use them to confine this level of pain to a select group of hacker masochists who thrive on it.

On that day, maybe just maybe, systemd will save their ass, keep them from having to replace some treasured piece of software and still be able to run on the latest hardware with up to date kernels and so on.

I've been doing Unix (with init) for a very long time at this point. I have multiple books on the Unix architecture and how to use systems commands to write fully complex software, and have written a fair pile of software using this interface. It had the advantage of simplicity and scalability. It had the disadvantage of simplicity and scalability, as the systems it runs on grew ever more complex.

Everybody is worried about "too much complexity", but Unix in general and linux in particular long, long ago passed the threshold of "insanely complex". Linux (collectively) is arguably one of the most complex things ever built by the human species. The real question is whether the integrated intelligence of the linux community is up to the task of taming the idea of systemd to where it is a benefit, not a cost -- to where it enables (eventually) the transparent execution of any binary from any system on a systemd-based system, with fully automated provisioning of the libraries as needed in real time, as long as they are not encumbered legally and are available securely from the net.

We deal with that now, of course, and it is so bloody complex and limiting that it totally sucks. People are constantly forced to choose between upgrading the OS/release/whatever and losing a favorite app or (shudder) figuring out how to rebuild it, in place, on the new release -- if that is even possible.

I'll suffer a bit -- differently, of course -- now in the mere hope that in five years I can run "anything" on whatever system I happen to be using and have it -- just work.

rgb

Comment Why bother... (Score 1) 272

a) It's already done, and is called "wikipedia". The problem of accessing wikipedia after the solar flare in a few days wipes out human technological civilization is left as an exercise for the reader.

b) OK, so it's not really done, and is going to be even less done as paper books more or less disappear from the world and people stop learning how to read because their personal digital implant delivers content directly into their cortex in full sensory mode, all of which goes away when a nuclear war followed by a space alien invasion reduces humans to a marginal species living in abandoned mines and sewage tunnels and subsisting on rats. Brevity is then the soul of wit. We need three things:

1) How to make and blow glass.
2) How to turn glass into lenses and lenses into microscopes and telescopes.

These two things are already sufficient. They extend human senses into the microscopic and macroscopic, otherwise hidden, Universe, and nothing but common sense and observation is required from that point on. However,

3) How to build a printing press.

is also good, provided that people can still read.

Oh, you want to rebuild civilization QUICKLY? Either we're restarting from a partial, not full, reboot (that is, we still have easy access to things like unburned oil and coal, iron, maybe a few undamaged nuclear power plants with the engineers to run them) or it's just not happening!

The problem, you see, is easy access to those resources. The more we deplete the Earth's crust of readily minable resources, the harder it is to reboot civilization after a collapse. We just don't have a lot of places where oil still comes oozing up to the surface of the Earth, for example, so why and how exactly are people going to go looking for it a kilometer or two down? How easy is it going to be to find any? Steel requires iron (still fairly plentiful, granted) and coal. Hmmm, easy coal isn't so easy any more. Easy copper, not so much. Easy aluminum? No such thing; it needs massive amounts of electricity (although ore is still plentiful enough). Even making chemical reagents like sulphuric or nitric or hydrochloric acid (key to building nearly anything interesting) requires sulphur, salt, electricity.

This is what is going to be tough. Bootstrapping directly from type 0 pre-civilization to type 2 civilization is going to be very difficult if we've depleted all of the easy pathways to 2 while we are type 1, even if we preserve usable copies of wikipedia, the CRC handbook, the library of congress science section, the entire proceedings of the IEEE, and a complete copy of all patents ever filed in the US patent office (and have people who can read them, and who have managed to learn calculus and build stuff). Hydroelectric power, maybe. Alcohol can drive simple motors. But going straight to nuclear or photovoltaics is going to be pretty much impossible, and going the coal/oil route we've followed the first time is going to be much, much harder.

The best thing, therefore, is to take care of the civilization we've got...

rgb

Comment Re:please no (Score 1) 423

I not only have seen spectrographs of the atmospheric radiative effect, I actually own a copy of Grant Petty's book "A First Course in Atmospheric Radiation", and have taught both undergrad and grad electrodynamics for over 30 years. Precisely what does this have to do with my statements above? I'm not "denying" that the greenhouse effect exists -- there is direct spectroscopic evidence for it. I can derive one simple model for it (a complete absorber model leading to 1.19x warming) on a piece of paper in three minutes. I regularly argue with people who want to claim that it doesn't exist at all, or that it violates the second law. Both are absurd -- of course it exists, and no, it doesn't violate the laws of thermodynamics; it is a direct consequence of them (although the actual atmospheric radiation effect is a great deal more complex than simple single layer models!)

All of this is completely irrelevant to my statements above. Let me explain the null hypothesis, since the terminology apparently eludes you. It is this: Suppose we increase atmospheric CO_2 from 300 to 600 ppm in the very simplest model planet we can imagine -- one where the only change we permit is this. One can work through the arguments for the greenhouse warming one should expect -- they involve looking at the measured spectrum of CO_2, doing a bit of work with the relevant Beer-Lambert formula, and thinking a bit about the lapse rate -- but in the end most people who do the calculation end up with somewhere between a 1 C and 1.5 C warming.
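For the curious, here's the usual back-of-envelope version of that calculation, using the stock logarithmic forcing fit dF = 5.35*ln(C/C0) W/m^2 and a Planck-only response of about 0.3 K per W/m^2 -- both standard textbook numbers, not anything derived in this comment:

```python
import math

def no_feedback_warming(c_new, c_old, k=5.35, planck=0.3):
    """No-feedback warming estimate (degrees C) for a CO2 change of
    c_old -> c_new ppm. k is the standard logarithmic forcing
    coefficient (W/m^2 per e-folding); planck is the assumed
    feedback-free climate response (K per W/m^2)."""
    forcing = k * math.log(c_new / c_old)  # W/m^2
    return planck * forcing

dt = no_feedback_warming(600.0, 300.0)
print(f"{dt:.2f} C")  # ~1.11 C -- inside the 1 to 1.5 C range quoted
```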

At this point one invokes the principle of ignorance -- we don't really know how the entire Earth system will respond nonlinearly to this. Nor do we have any plausible means with which to measure it -- we have no experimental Earths with similar structure, e.g. a 70% saltwater ocean confined in a complex pattern of continents, on which to do controlled observations -- and we already know that the establishment of particular circulation patterns of the confined ocean and atmosphere is the sole really plausible explanation for the Pleistocene ice age, which started when the Isthmus of Panama closed between 3 and 4 million years ago.

We also know one important point from linear stability analysis. For the Earth's climate system to be stable at all, it has to respond to perturbations in forcing by opposing the change, not augmenting it. That is, at a stable point, the response to all perturbations has to be to push the system back to the stable point, not away from it, or the point isn't stable, it is unstable, like balancing a pencil on its point. This principle is taught in introductory first year physics, so presumably you are familiar with it.

The Earth's top of atmosphere "forcing" varies by roughly 90 W/m^2 every year, simply from the eccentricity of the Earth's orbit. It varies by order of a percent from fluctuations in albedo (mostly due to clouds, but also due to shrinking and expanding ice and snow fields) on a much shorter time scale, as short as days. The climate is if anything remarkably stable, at least on a short time scale (and we have the devil's own time explaining any of the longer time scale variations observed in the paleo record or the much shorter thermometric record, where the stable point itself exhibits considerable climate "drift" even while remaining sufficiently locally stable to be still considered "climate"). There is little evidence of any sort of runaway nonlinear instability from this natural variation in forcing. Quite the opposite, in fact, right up to the point where factors we do not yet really understand and cannot compute or predict seem to cause transitions like the advent of glaciation in the current, continuing, Pleistocene ice age.

Given a lack of knowledge of how the enormously complex system will respond to a small, linear variation in forcing on top of the annual periodic variation in forcing that is well over an order of magnitude larger and incidentally is in counterphase with the associated annual variation in global average temperature (just so you can see how non-intuitive and complex the Earth as a planetary climate really is), the null hypothesis is that it will simply shift the equilibrium, linearly, by the base estimate above. That is, doubling CO_2 will most likely increase the planet's mean temperature by roughly 1.25 C, call it 2 whole degrees F. This is of the same order as the temperature change associated with the Little Ice Age (descent into, emergence from) or the natural variation in global temperature that has been proceeding over the entire Holocene interglacial. It is unlikely to be catastrophic. It isn't even out of proportion to the warming we might have observed without the help of CO_2, or the warming we did observe over the first 2/3 of the thermometric record where CO_2 was an irrelevant factor.

This null hypothesis -- that the warming we should most likely expect from doubling CO_2 is the direct warming from the CO_2 itself, neither augmented nor diminished by nonlinear feedbacks we cannot compute, justify, or directly observe, however much people love to argue about them -- is the assertion that has to be disproved by temperature observations over time spans that are, according to the climate people themselves, in excess of (say) 25 years. Most climate people also seem to agree that CO_2 was an ignorable factor in climate forcing before post-WWII industrialization (in particular, that it was irrelevant to the substantial warming that occurred in the first half of the 20th century, even though that is all rolled into one convenient "hockey stick" in presentations without ever acknowledging that subtle point). So, start at 1940 (to avoid picking any "particular" start date, you can look at any date around 1950) and what do we see:

http://www.woodfortrees.org/pl...

There is one single visible episode of warming in this entire record. It is confined to a stretch of time that is not as long as 25 years -- it is pointless to try to pick endpoints of linear trends in this obviously nonlinear trended timeseries, but the eye can clearly see that the warming is pretty much confined to the stretch between a start somewhere between 1975 and 1985 and an end somewhere between 1995 and 2000. If one uses the most optimistic set of assumptions possible, this stretch is a "climate shift" across 1975 to 2000, barely making 25 years. But this really is cherrypicking in the extreme, especially when the big bumps at the beginning and end can be tied to discrete unforced climate events -- ENSO -- and the warming stretch itself coincides with the warming phase of the Pacific Decadal Oscillation, and hence some fraction of it is probably natural.

So the big question is, does this graph falsify the null hypothesis, that the observed warming over the entire stretch of ~65 years is due to some mix of unknown, and really uncomputable, natural variation due to all of the internal coupled feedbacks that otherwise conspire to leave the system pretty stable (except when it isn't) plus the linear forcing due to CO_2 only?

Well, the warming observed is somewhere between 0.4 and 0.5 C over (say) 65 years as CO_2 has gone from roughly 300 ppm to roughly 400 ppm. Let's be pessimistic: \Delta T = 0.5 C. Beer-Lambert etc. suggest a (natural) logarithmic warming response to atmospheric forcing, so we might expect to see roughly half of the warming in the first third of the increase. Which is (and pay attention, as this is important!) exactly what is observed.
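That "roughly half in the first third" claim is pure logarithm arithmetic, and takes two lines to sanity-check:

```python
import math

# Fraction of the full 300 -> 600 ppm logarithmic warming already
# "delivered" by the 300 -> 400 ppm rise (the first third of the increase).
fraction = math.log(400 / 300) / math.log(600 / 300)
print(round(fraction, 2))  # 0.42 -- "roughly half"

# Scaled by a 1.25 C per-doubling estimate, that is about 0.52 C,
# consistent with the observed 0.4-0.5 C.
print(round(1.25 * fraction, 2))  # 0.52
```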

So forget non-computable natural variation. Forget assertions of runaway warming due to non-computable presumptions of positive feedback from water vapor in a system that is manifestly stable against annual variations in forcing over 20x greater than the total additional forcing expected upon a doubling of CO_2. Any sort of sane stability analysis would conclude, before even examining the issue in any detail, that the most likely sign of any forcing feedback is negative (see remarks above), and that it would more likely than not reduce the observed warming, not increase it -- although in a non-computable, nonlinear, chaotic, damped, driven macroscopic system of this sort, simple glib assumptions could easily be wrong in either direction, which is why we prefer to rely on what nature tells us, not what we think might be the case a priori. If we admit our ignorance and ask the simple question -- "Do we need to worry about feedbacks increasing the warming that "should" result from doubling CO_2 alone?" -- the answer is unambiguously No!

Not from my opinion, not from any real computation, just from a back of the envelope computation compared to observation. Well, back of the envelope given the results of any of the many papers estimating or measuring the expected CO_2-only forcing. Indeed, if anything the data suggest that we are surprisingly close to this expected rate of total warming over the era where CO_2 has increased by roughly 1/3.

The big question is: why should anybody believe that we need additional stuff to explain this variation? And that's attributing 100% of the observed warming to CO_2 only, and using the most optimistic of heavily processed thermometric data "adjusted" over and over again to increase the "instrumental" warming (but curiously, never decreasing it, although one would ordinarily expect the probability of errors in measurement to be distributed without bias, at least until one thinks about the obvious UHI warming bias that is not removed in the HADCRUT4 data presented in the graph above).

Note that no end points were cherrypicked in this. No trends, linear or nonlinear, were fit. We just take two numbers -- \Delta T and \Delta P_CO_2/P_CO_2 -- and connect them from almost anywhere in the vicinity of 1950 to almost anywhere in the vicinity of the present, and we conclude that the warming observed can be completely explained without invoking any sort of feedback, and without spending a small fortune doing computations that we have no good reason to think have any predictive value at all, that do not fit the data particularly well anywhere outside of their reference period, which was (inexplicably!) chosen to be the single stretch of visible warming in the second half of the twentieth century, punctuated by (and probably at least partially caused by!) ENSO events.

So by all means, assert that since I disagree with the experts I must be wrong. Assert that since I said that the GHE doesn't exist (straw man -- I said nothing of the sort) or that CO_2 isn't part of it (ditto) I must not even have ever looked at a spectrograph, even though I explicitly said I did. Hmm, it's getting hard to count here -- a bit of ad hominem (basically free, in arguments of this sort) plus assertions of my dishonesty and incompetence devoid of any sort of factual support. To me, it seems that you are perfectly happy to argue using logical fallacy instead of addressing what I say, to the point where I'm tempted indeed to get out a logical fallacy bingo card and see if I've already got a two or three paragraph winner.

Or, you could address the actual points I make, learning about null hypotheses in hypothesis testing (and perhaps Ockham's razor and a few other related principles) along the way if you need to in order to keep up.

Here's the very simplest picture possible of the point I'm making. Consider a mass on a spring in a damping fluid, being driven not particularly near resonance by a force that consists of two pieces:

F_tot = F_0 + A\cos(\omega t)

where A is roughly 0.07 F_0 (and time is measured in years). Wait for the system to arrive at equilibrium. When it does, it will be oscillating around a displaced equilibrium (displaced by F_0), with an amplitude determined by the need to balance total energy added to the system by A\cos(\omega t) against the total energy removed by the damping force.

Now change one thing: Make F_0 = F_0 + 0.01A, that is, add roughly one part in a thousand of F_0 to F_0. Without redoing everything, estimate the change in the solution. You basically have three choices:

a) The equilibrium shifts by 0.01A/k (where k is the effective spring constant of the oscillator) and nothing else happens.
b) The equilibrium shifts by 0.02A/k to 0.05A/k.
c) The whole system races out of control, with the amplitude varying wildly higher until the spring breaks.

a) is what happens for a linear response model. b) is possible only if there are nonlinear terms large enough to double (or worse) a linear response to what is a tiny perturbation. Be prepared to carefully justify your Taylor series and prove the existence of the nonlinear terms in the actual trajectory observed before the shift (where the oscillation obviously samples them). c) is what happens if the system is nonlinear and is on the threshold of chaos. Damping, in general, shifts the system towards linear stability -- indeed, b) is basically asserting highly nonlinear damping (or a highly nonlinear spring), but that sort of damping is already contradicted by the observed stability of the oscillator with A \approx 0.07 F_0. If nothing else, it is a lot harder to imagine an integrated response of 2 to 5 times the usual linear response without a most peculiar damping behavior, one that I think is overwhelmingly inconsistent with the data.
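For the linear oscillator the steady state can be written down in closed form, so the three choices above can be checked directly. A quick sketch (the m, c, k, omega values below are purely illustrative, not fitted to anything):

```python
import math

def steady_state(F0, A, m=1.0, c=0.5, k=1.0, omega=2.0):
    """Closed-form steady state of the linear driven damped oscillator
    m x'' + c x' + k x = F0 + A cos(omega t): returns (mean displacement,
    oscillation amplitude)."""
    mean = F0 / k
    amp = A / math.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)
    return mean, amp

F0, A = 1.0, 0.07                        # A ~ 0.07 F0, as in the text
m0, a0 = steady_state(F0, A)
m1, a1 = steady_state(F0 + 0.01 * A, A)  # perturb the constant force

print(m1 - m0)   # shift = 0.01*A/k -- choice (a), the linear response
print(a1 - a0)   # 0.0 -- the oscillation amplitude is untouched
```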

To conclude, the simplest estimate for the warming expected from doubling CO_2 is 1 C. This estimate is entirely consistent with observations, and is if anything in almost too good agreement with them. There is absolutely no doubt that it is well within any sort of reasonable error bars, given that it is near the middle with very little error left to be explained even by natural variation and noise. One cannot defend assertions of catastrophic climate change by any sort of simplistic argument such as "doubling CO_2 is expected to cause 2 to 5 C warming by 2100" as if this result is somehow obvious or supported by the data -- it is not. It relies on an entire tower of shaky assumptions and attempts to compute something that is probably not computable (and is definitely not measurable) against noise and natural variation an easy 1-2 orders of magnitude larger. It is inconsistent with experimental observations of non-catastrophic warming resulting from increasing CO_2. We are, in fact, dead on the expected linear response track, empirically, from 1950 on, and can reasonably expect to see another 0.3 to 0.4 C as we go from 400 ppm CO_2 to 500 ppm CO_2, and the remaining 0.1-0.3 C as we go from 500 ppm to 600 ppm -- if, and it is a big if, natural variations of the same order do not trump this one way or another, or net negative natural feedbacks kick in to further limit the observed warming, or chaos asserts itself in the underlying nonlinear chaotic system and kicks us into runaway warming or the next glacial episode.

rgb

Comment Re:please no (Score 1) 423

Yes, you have. You missed, for example, the entire bit about the null hypothesis. You also missed the fact that I am not asserting that the Earth isn't warming, or that CO_2 increases are not a factor in the warming we have experienced. I can actually read a spectrograph and have a decent understanding of the GHE from the basic physics on up. I'm only pointing out that the trivial model you suggest is precisely why we should doubt that TCS is over 2 C! That is, the null hypothesis is around 1 to 1.5 C total warming from CO_2 alone, which is all we have even weak direct evidence for. Everything else is built on a shaky tower of model assumptions, physics toy models, and an attempt to solve a probably unsolvably difficult problem in a particular way to put some sort of stamp of authenticity on a conclusion that is both unfounded and, so far, contradicted pretty strongly by observational fact.

rgb
