
Comment Re:Sigh. Or rather Sci...Fi (Score 1) 153

Precisely! In fact, I'm thinking of rewriting Plato's Republic except replacing all instances of Philosophers with Science Fiction Writers. Think of the advantages! Instead of neurosing over healthcare and global warming we can have replacement organs, dinosaurs and space aliens! We can build our own space habitat! The Stars are Ours! No longer will mankind be limited by silly little things like physical law and economics, not with SF writers in control.

Best of all, SF writers tend to be pretty nerdy and (if we carefully exclude the horror contingent and zombie squad) inclined towards epic-heroic monumental happy endings. Life could never be boring with them in charge.

On to the asteroids! Don't worry about cost or whether or not the risks are worth the benefits! Damn the space torpedoes! So what if another million or two small children die of easily preventable causes this year! It helps reduce the rate of population growth, and how can that be a bad thing?

rgb

Comment Sigh. Or rather Sci...Fi (Score 3, Informative) 153

Science fiction authors have totally solved this problem a zillion different ways. They all share certain features. First, you go to the asteroid. Second, you set up some sort of mass driver or ion drive on the asteroid, ideally one that uses solar electricity or heat rather than imported fuel -- though if you don't mind a bit of radioactivity, propulsion by nuke is OK (Orion).

It depends on the mass of the asteroid as well, and how long you want to wait to get it home, and how much of it you want to have left when you get there. If you don't mind waiting a VERY long time, you could even use an angled light sail for propulsion. Third, you drive it home, or rather, have your fully automated computer tools do it for you. Fourth, you get it into Earth orbit and then use it to threaten the hegemony running Earth, insisting that they send you dancing girls and exotic foods or you'll drop it on their heads -- it makes you way more money than actually selling the metal.

Optionally, you can have your robots smelt the asteroid in place first, using large mirrors to concentrate solar energy to melt the asteroid rock into slag plus metal, perhaps even collecting the slag (with a thin metal coating) to use in your linear accelerator or solar heated rocket as reaction mass. Some asteroids are really comet heads and might be covered with solid gases and ice and might support making real fuel on the spot as well. And fusion would no doubt shift the plan a bit as well.

But the final stage is always to drop them on Earth, not use them for good. Otherwise there isn't any real plot. Sometimes they don't even bother dropping them per se, they just fall by accident. But nobody can resist an umpty teraton-of-TNT explosion: not invading space aliens, not Dr. Evil, not the asteroid mining company's board of directors, not even the grizzled old asteroid miner whose sainted mother was put out onto the street to starve during the housing riots of 2057.

rgb

Comment Re:Fucking magnets, how do they work? (Score 3, Informative) 26

You mean, as in "read a physics textbook"?

Seriously. Depending on how much physics you've already studied, the right place to start will vary. A passable (free) intro is in my free online physics textbook http://www.phy.duke.edu/~rgb/C..., or Wikipedia articles. A good intermediate treatment might be Griffiths' Introduction to Electrodynamics. If you want the pure quill uncut stuff, J. D. Jackson's Classical Electrodynamics is excellent, but it is not for the faint of heart or the wussy of PDE-fu.

In a nutshell: parallel currents of electric charge attract; antiparallel currents repel; changing currents radiate electromagnetic energy; and there are electrostatic forces happening in there somewhere too, in the cases where the currents are produced by unbalanced moving charge. Oh, and there is a fair bit of twistiness to the magnetic fields (called "curl") and forces, and the currents in question in "magnets" (or the general magnetic susceptibility of materials) tend to be non-dissipative (quantum) nuclear, atomic, or molecular circulations of charge, not Ohm's-law-type currents in a resistor. Ferromagnets in particular are what most people mean by "magnets", and they are characterized by long-range order and a "permanent" magnetization in the absence of an external field, below a certain temperature.
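
For the parallel/antiparallel current rule, here's a minimal back-of-the-envelope sketch (plain Python; the 10 A / 1 cm numbers are purely illustrative assumptions, not anything from the thread):

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def force_per_length(i1, i2, d):
    """Force per unit length (N/m) between two long parallel wires a distance d apart.

    Positive (currents in the same direction) means attraction;
    negative (antiparallel currents) means repulsion.
    """
    return MU0 * i1 * i2 / (2 * math.pi * d)

# Two 10 A currents, 1 cm apart:
print(force_per_length(10, 10, 0.01))    # parallel: +2.0e-3 N/m (attract)
print(force_per_length(10, -10, 0.01))   # antiparallel: -2.0e-3 N/m (repel)
```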

Hope this fucking helps :-)

rgb

Comment Re:Not exactly (Score 3, Interesting) 161

Besides, the invention of accelerators of order 12" in size is very, very old news. The Betatron:

http://physics.illinois.edu/hi...

is, as one can see, order of a foot in diameter and could produce electrons at order of 6 MeV in 1940. Yes, that is actually before the US entered WWII, and long before the invention of the synchrotron. That is gamma ~ 12, or v ~ 0.997 c. So if the top presentation were at all relevant to TFA, it would actually be boring. One might safely conclude that it is wrong and boring.
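
The kinematics are a one-line check (a sketch; the only input is the electron rest energy, 0.511 MeV):

```python
import math

M_E_C2 = 0.511  # electron rest energy, MeV

def gamma_and_beta(kinetic_mev):
    """Relativistic gamma and v/c for an electron with the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / M_E_C2   # total energy / rest energy
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return gamma, beta

g, b = gamma_and_beta(6.0)
print(f"gamma = {g:.1f}, v = {b:.4f} c")   # gamma ~ 12.7, v ~ 0.9969 c
```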

The betatron was damn near the first particle accelerator truly worthy of the name, and was just about exactly 12" in diameter (a bit larger than that including the frame for the magnets, etc.), as one can clearly see in the second photo on this page, if not the first.

rgb

Comment Re: How about we hackers? (Score 4, Insightful) 863

Yeah, I've done a fair bit of time as sysadmin of several networks AND enjoy the cool stuff that comes with change and improvement in hardware and software over time.

Systemd no doubt will have growing pains associated with it, but I still remember the "growing pains" associated with kernel 2.0 (the first multiprocessor kernel) and issues with resource locking and ever so much more. Anybody want to assert that this wasn't worth it, that "single core/single processor systems were good enough for my granddad, so they are good enough for me"? Server environment or not?

Decisions like this are always about cost/benefit, risk, long term ROI. And the risks are highly exaggerated. I'm pretty certain that one will be able to squeeze a system down to a slowly varying or unvarying configuration that is very conservative and stable as a rock, even with systemd. I -- mostly -- managed it with kernels that "could" decide to deadlock on some resource, and when the few mission critical exceptions to this appeared, they were aggressively resolved on the kernel lists and the fixes rapidly appeared in the community. The main thing is the locking down of the server configurations to avoid the higher risk stuff, and aggressive pursuit of problems that arise anyway, which is really no different than with init, or with Microsoft, or with Apple, or with BSD, or...

But look at the far-side benefits! Never having to give up a favorite app as long as some stable version of it once existed? That is awesome. Dynamical provisioning, possibly even across completely different operating systems? The death of the virtual machine as a standalone, resource-wasteful appliance? Sure, there may well be a world of pain between here and there, although I doubt it -- humans will almost certainly keep the pain within "tolerable" thresholds as the idea is developed, just as they did with all of the other major advances in all of the other major releases of all of the major operating systems. Change is pain, but changes that "wreck everything" are actually rare. That's what alpha/beta/early implementation are for, and we know how to use them to confine this level of pain to a select group of hacker masochists who thrive on it.

On that day, maybe, just maybe, systemd will save their ass, keeping them from having to replace some treasured piece of software while still being able to run on the latest hardware with up to date kernels and so on.

I've been doing Unix (with init) for a very long time at this point. I have multiple books on the Unix architecture and how to use system calls to write genuinely complex software, and have written a fair pile of software using this interface. It had the advantage of simplicity and scalability. It had the disadvantage of simplicity and scalability, as the systems it runs on grew ever more complex.

Everybody is worried about "too much complexity", but Unix in general and linux in particular long, long ago passed the threshold of "insanely complex". Linux (collectively) is arguably one of the most complex things ever built by the human species. The real question is whether the integrated intelligence of the linux community is up to the task of taming the idea of systemd to where it is a benefit, not a cost -- to where it enables (eventually) the transparent execution of any binary from any system on a systemd-based system, with fully automated provisioning of the libraries as needed in real time, as long as they are not encumbered legally and are available securely from the net.

We deal with that now, of course, and it is so bloody complex and limiting that it totally sucks. People are constantly forced to choose between upgrading the OS/release/whatever and losing a favorite app or (shudder) figuring out how to rebuild it, in place, on the new release -- if that is even possible.

I'll suffer a bit -- differently, of course -- now in the mere hope that in five years I can run "anything" on whatever system I happen to be using and have it -- just work.

rgb

Comment Why bother... (Score 1) 272

a) It's already done, and is called "wikipedia". The problem of accessing wikipedia after the solar flare in a few days wipes out human technological civilization is left as an exercise for the reader.

b) OK, so it's not really done, and is going to be even less done as paper books more or less disappear from the world and people stop learning how to read because their personal digital implant delivers content directly into their cortex in full sensory mode, all of which goes away when a nuclear war followed by a space alien invasion reduces humans to a marginal species living in abandoned mines and sewage tunnels and surviving on rats. Brevity is then the soul of wit. We need three things:

1) How to make and blow glass.
2) How to turn glass into lenses and lenses into microscopes and telescopes.

These two things are already sufficient. They extend human senses into the microscopic and macroscopic, otherwise hidden, Universe, and nothing but common sense and observation is required from that point on. However,

3) How to build a printing press.

is also good, provided that people can still read.

Oh, you want to rebuild civilization QUICKLY? Either we're restarting from a partial, not full, reboot (that is, we still have easy access to things like unburned oil and coal, iron, maybe a few undamaged nuclear power plants with the engineers to run them) or it's just not happening!

The problem, you see, is easy access to those resources. The more we deplete the Earth's crust of readily minable resources, the harder it is to reboot civilization after a collapse. We just don't have a lot of places where oil still comes oozing up to the surface of the Earth, for example, so why and how exactly are people going to go looking for it a kilometer or two down? How easy is it going to be to find any? Steel requires iron (still fairly plentiful, granted) and coal. Hmmm, easy coal isn't so easy any more. Easy copper, not so much. Easy aluminum? No such thing; it needs massive amounts of electricity (although ore is still plentiful enough). Even making chemical reagents like sulphuric or nitric or hydrochloric acid (key to building nearly anything interesting) requires sulphur, salt, electricity.

This is what is going to be tough. Bootstrapping directly from type 0 pre-civilization to type 2 civilization is going to be very difficult if we've depleted all of the easy pathways to 2 while we are type 1, even if we preserve usable copies of wikipedia, the CRC handbook, the library of congress science section, the entire proceedings of the IEEE, and a complete copy of all patents ever filed in the US patent office (and have people who can read them, and who have managed to learn calculus and build stuff). Hydroelectric power, maybe. Alcohol can drive simple motors. But going straight to nuclear or photovoltaics is going to be pretty much impossible, and going the coal/oil route we've followed the first time is going to be much, much harder.

The best thing, therefore, is to take care of the civilization we've got...

rgb

Comment Re:please no (Score 1) 423

I not only have seen spectrographs of the atmospheric radiative effect, I actually own a copy of Grant Petty's book "A First Course in Atmospheric Radiation", and have taught both undergrad and grad electrodynamics for over 30 years. Precisely what does this have to do with my statements above? I'm not "denying" that the greenhouse effect exists -- there is direct spectroscopic evidence for it. I can derive one simple model (a complete absorber model leading to 1.19x warming) for it on a piece of paper in three minutes. I regularly argue with people who want to claim that it doesn't exist at all, or that it violates the second law. Both are absurd -- of course it exists, and no, it doesn't violate the laws of thermodynamics; it is a direct consequence of them (although the actual atmospheric radiation effect is a great deal more complex than simple single layer models!).

All of this is completely irrelevant to my statements above. Let me explain the null hypothesis, since the terminology apparently eludes you. It is this: Suppose we increase atmospheric CO_2 from 300 to 600 ppm in the very simplest model planet we can imagine -- one where the only change we permit is this. One can work through the arguments for the greenhouse warming one should expect -- they involve looking at the measured spectrum of CO_2, doing a bit of work with the relevant Beer-Lambert formula, and thinking a bit about the lapse rate -- but in the end most people who do the calculation end up with somewhere between a 1 C and 1.5 C warming.
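
A minimal sketch of that back-of-the-envelope calculation (Python; the 5.35 W/m^2-per-e-folding forcing fit and the ~0.3 K per W/m^2 no-feedback Planck response are commonly quoted textbook values, assumed here rather than derived):

```python
import math

ALPHA = 5.35    # W/m^2 per e-folding of CO2 concentration (common logarithmic fit)
LAMBDA0 = 0.3   # K per W/m^2, no-feedback (Planck) response

def no_feedback_warming(c_final, c_initial=300.0):
    """Expected no-feedback warming (kelvin) for a CO2 change, log forcing model."""
    forcing = ALPHA * math.log(c_final / c_initial)   # radiative forcing, W/m^2
    return LAMBDA0 * forcing

print(no_feedback_warming(600.0))   # doubling 300 -> 600 ppm: ~1.1 C
```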

At this point one invokes the principle of ignorance -- we don't really know how the entire Earth system will respond nonlinearly to this. Nor do we have any plausible means with which to measure it -- we have no experimental Earths of similar structure (e.g. a 70% saltwater ocean confined in a complex pattern of continents) on which to do controlled observations -- and we already know that the establishment of particular circulation patterns of the confined ocean and atmosphere was the sole really plausible explanation for the Pleistocene ice age, which started when the Isthmus of Panama closed between 3 and 4 million years ago.

We also know one important point from linear stability analysis. For the Earth's climate system to be stable at all, it has to respond to perturbations in forcing by opposing the change, not augmenting it. That is, at a stable point, the response to all perturbations has to be to push the system back to the stable point, not away from it, or the point isn't stable, it is unstable, like balancing a pencil on its point. This principle is taught in introductory first year physics, so presumably you are familiar with it.

The Earth's top of atmosphere "forcing" varies by roughly 90 W/m^2 every year, simply from the eccentricity of the Earth's orbit. It varies by order of a percent from fluctuations in albedo (mostly due to clouds, but also due to shrinking and expanding ice and snow fields) on a much shorter time scale, as short as days. The climate is if anything remarkably stable, at least on a short time scale (and we have the devil's own time explaining any of the longer time scale variations observed in the paleo record or the much shorter thermometric record, where the stable point itself exhibits considerable climate "drift" even while remaining sufficiently locally stable to be still considered "climate"). There is little evidence of any sort of runaway nonlinear instability from this natural variation in forcing. Quite the opposite, in fact, right up to the point where factors we do not yet really understand and cannot compute or predict seem to cause transitions like the advent of glaciation in the current, continuing, Pleistocene ice age.
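
The ~90 W/m^2 figure follows from the orbit's eccentricity alone; a quick sketch (standard textbook values for the solar constant and eccentricity assumed):

```python
S0 = 1361.0   # mean solar constant at 1 AU, W/m^2
ECC = 0.0167  # Earth's orbital eccentricity

# Insolation scales as 1/r^2, and r swings between (1 - e) and (1 + e) AU.
s_perihelion = S0 / (1.0 - ECC)**2
s_aphelion = S0 / (1.0 + ECC)**2
print(s_perihelion - s_aphelion)   # ~90 W/m^2 annual swing at top of atmosphere
```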

Given a lack of knowledge of how the enormously complex system will respond to a small, linear variation in forcing on top of the annual periodic variation in forcing that is roughly two orders of magnitude larger and incidentally is in counterphase with the annual associated variation in global average temperature (just so you can see how non-intuitive and complex the Earth as a planetary climate really is), the null hypothesis is that it will simply shift the equilibrium, linearly, by the base estimate above. That is, doubling CO_2 will most likely increase the planet's mean temperature by roughly 1.25 C, call it 2 whole degrees F. This is of the same order as the temperature change associated with the Little Ice Age (descent into, emergence from) or the natural variation in global temperature that has been proceeding over the entire Holocene interglacial. It is unlikely to be catastrophic. It isn't even out of proportion to the warming we might have observed without the help of CO_2, or the warming we did observe over the first 2/3 of the thermometric record where CO_2 was an irrelevant factor.

This null hypothesis -- that the warming we should most likely expect due to doubling CO_2 is the direct warming from the CO_2 itself, neither augmented nor diminished by nonlinear feedbacks we cannot compute, justify, or directly observe, however much people do love to argue about them -- is the assertion that has to be disproved by temperature observations over, according to all the climate people themselves, time spans in excess of (say) 25 years. Most climate people also seem to agree that CO_2 was an ignorable factor in climate forcing before the post-WWII industrialization (in particular, that it was irrelevant to the substantial warming that occurred in the first half of the 20th century, even though that is all rolled into one convenient "hockey stick" in presentations without ever acknowledging that subtle point). So, start at 1940 (to avoid picking any "particular" start date, you can look at any date "around" 1950) and what do we see:

http://www.woodfortrees.org/pl...

There is one single visible episode of warming in this entire record. It is confined to a stretch of time that is not as long as 25 years -- it is pointless to try to pick endpoints of linear trends in this obviously nonlinearly trended time series, but the eye can clearly see that the warming is pretty much confined to the stretch between a start somewhere between 1975 and 1985 and an end somewhere between 1995 and 2000. If one uses the most optimistic set of assumptions possible, this stretch is a "climate shift" across 1975 to 2000, to barely make 25 years. But this really is cherrypicking in the extreme, especially when the big bumps at the beginning and end can be tied to discrete non-driven-climate events -- ENSO -- and the warming stretch itself coincides with the warming phase of the Pacific Decadal Oscillation, so some fraction of it is probably natural.

So the big question is, does this graph falsify the null hypothesis, that the observed warming over the entire stretch of ~65 years is due to some mix of unknown, and really uncomputable, natural variation due to all of the internal coupled feedbacks that otherwise conspire to leave the system pretty stable (except when it isn't) plus the linear forcing due to CO_2 only?

Well, the warming observed is somewhere between 0.4 and 0.5 C over (say) 65 years as CO_2 has gone from roughly 300 ppm to roughly 400 ppm. Let's be pessimistic: \Delta T = 0.5 C. Beer-Lambert etc. suggest a (natural) logarithmic warming response to atmospheric CO_2 concentration, so we might expect to see roughly half of the warming in the first third of the increase. Which is (and pay attention, as this is important!) exactly what is observed.
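
The "half the warming in the first third of the increase" claim is just a ratio of logarithms; a sketch (the 1.25 C doubling figure is the mid-range estimate quoted above):

```python
import math

fraction = math.log(400.0 / 300.0) / math.log(600.0 / 300.0)
print(fraction)          # ~0.42: fraction of full-doubling warming expected by 400 ppm
print(fraction * 1.25)   # ~0.52 C expected, vs. the observed 0.4-0.5 C
```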

So forget non-computable natural variation. Forget assertions of runaway warming due to non-computable presumptions of positive feedback from water vapor in a system that is manifestly stable against annual variations in forcing almost 100x greater than the total additional forcing expected upon a doubling of CO_2. Any sort of sane stability analysis would conclude, before even examining the issue in any detail, that the most likely sign of any sort of forcing feedback is negative (see remarks above), and that it would more likely than not reduce the observed warming, not increase it -- although in a non-computable, nonlinear, chaotic, damped, driven macroscopic system of this sort, simple glib assumptions could easily be wrong in either direction, which is why we prefer to rely on what nature tells us, not on what we think might be the case a priori. If we admit our ignorance and ask the simple question -- "Do we need to worry about feedbacks increasing the warming that 'should' result from doubling CO_2 alone?" -- the answer is unambiguously no!

Not from my opinion, not from any real computation, just from a back of the envelope computation compared to observation. Well, back of the envelope given the results of any of the many papers estimating or measuring the expected CO_2-only forcing. Indeed, if anything the data suggest that we are surprisingly close to this expected rate of total warming over the era where CO_2 has increased by roughly 1/3.

The big question is: why should anybody believe that we need additional stuff to explain this variation? And that's attributing 100% of the observed warming to CO_2 only, and using the most optimistic of heavily processed thermometric data "adjusted" over and over again to increase the "instrumental" warming (but curiously, never decreasing it, although one would ordinarily expect the probability of errors in measurement to be distributed without bias, at least until one thinks about the obvious UHI warming bias that is not removed in the HADCRUT4 data presented in the graph above).

Note that no end points were cherrypicked in this. No trends, linear or nonlinear, were fit. We just take two numbers -- \Delta T and \Delta P_CO_2/P_CO_2 -- and connect them from almost anywhere in the vicinity of 1950 to almost anywhere in the vicinity of the present, and we conclude that the warming observed can be completely explained without invoking any sort of feedback, and without spending a small fortune doing computations that we have no good reason to think have any predictive value at all, that do not fit the data particularly well anywhere outside of their reference period, which was (inexplicably!) chosen to be the single stretch of visible warming in the second half of the twentieth century, punctuated by (and probably at least partially caused by!) ENSO events.

So by all means, assert that since I disagree with the experts I must be wrong. Assert that since I said that the GHE doesn't exist (straw man -- I said nothing of the sort) or that CO_2 isn't part of it (ditto) I must not even have ever looked at a spectrograph, even though I explicitly said I did. Hmm, it's getting hard to count here: a bit of ad hominem (basically free, in arguments of this sort) plus assertions of my dishonesty and incompetence devoid of any sort of factual support. To me, it seems that you are perfectly happy to argue using logical fallacy instead of addressing what I say, to the point where I'm tempted indeed to get out a logical fallacy bingo card and see if I've already got a two or three paragraph winner.

Or, you could address the actual points I make, learning about null hypotheses in hypothesis testing (and perhaps Ockham's razor and a few other related principles) along the way, if you need to, to keep up.

Here's the very simplest picture possible of the point I'm making. Consider a mass on a spring in a damping fluid, being driven not particularly near resonance by a force that consists of two pieces:

F_tot = F_0 + A\cos(\omega t)

where A is roughly 0.07 F_0 (and time is measured in years). Wait for the system to arrive at equilibrium. When it does, it will be oscillating around a displaced equilibrium (displaced by F_0/k), with an amplitude determined by the need to balance the total energy added to the system by A\cos(\omega t) against the total energy removed by the damping force.

Now change one thing: Make F_0 = F_0 + 0.01A, that is, add roughly one part in a thousand of F_0 to F_0. Without redoing everything, estimate the change in the solution. You basically have three choices:

a) The equilibrium shifts by 0.01A/k (where k is the effective spring constant of the oscillator) and nothing else happens.
b) The equilibrium shifts by 0.02A/k to 0.05A/k.
c) The whole system races out of control, with the amplitude varying wildly higher until the spring breaks.

a) is what happens for a linear response model. b) is possible only if there are nonlinear terms large enough to double (or worse) a linear response to what is a tiny perturbation. Be prepared to carefully justify your Taylor series and prove the existence of the nonlinear terms in the actual trajectory observed before the shift (where the oscillation obviously samples them). c) is what happens if the system is nonlinear and is on the threshold of chaos. Damping, in general, shifts the system towards linear stability -- indeed, b) is basically asserting highly nonlinear damping (or a highly nonlinear spring), but that sort of damping is already contradicted by the observed stability of the oscillator with A approx 0.07 F_0. If nothing else, it is a lot harder to imagine an integrated response of 2 to 5 times the usual linear response without a most peculiar damping behavior, one that I think is overwhelmingly inconsistent with the data.
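
Option a) is easy to check numerically. A sketch of the linear case (all parameters are arbitrary illustrative choices, not fits to anything):

```python
import numpy as np

# Damped driven linear oscillator: m x'' + c x' + k x = F0 + A cos(w t)
m, c, k = 1.0, 0.5, 4.0
F0, w = 10.0, 1.0
A = 0.07 * F0

def settled_mean(f0, t_max=400.0, dt=0.001):
    """Integrate (semi-implicit Euler), return mean displacement over the last drive cycle."""
    x, v, t = 0.0, 0.0, 0.0
    xs, ts = [], []
    while t < t_max:
        a = (f0 + A * np.cos(w * t) - c * v - k * x) / m
        v += a * dt
        x += v * dt
        t += dt
        xs.append(x)
        ts.append(t)
    tail = np.array(ts) > t_max - 2.0 * np.pi / w   # one full drive period
    return np.array(xs)[tail].mean()

shift = settled_mean(F0 + 0.01 * A) - settled_mean(F0)
print(shift, 0.01 * A / k)   # both ~1.75e-3: the equilibrium shifts by 0.01 A/k, nothing else
```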

To conclude, the simplest estimate for the warming expected from doubling CO_2 is 1 C. This estimate is entirely consistent with observations, and is if anything in almost too good agreement with them. There is absolutely no doubt that it is well within any sort of reasonable error bars, given that it is near the middle with very little error left to be explained even by natural variation and noise. One cannot defend assertions of catastrophic climate change by any sort of simplistic argument such as "doubling CO_2 is expected to cause 2 to 5 C warming by 2100" as if this result is somehow obvious or supported by the data -- it is not. It relies on an entire tower of shaky assumptions and attempts to compute something that is probably not computable (and is definitely not measurable) against noise and natural variation an easy 1-2 orders of magnitude larger. It is inconsistent with experimental observations of non-catastrophic warming resulting from doubling CO_2. We are, in fact, dead on the expected linear response track, empirically, from 1950 on, and can reasonably expect to see another 0.3 to 0.4 C as we go from 400 ppm CO_2 to 500 ppm CO_2, and the remaining 0.1-0.3 C as we go from 500 ppm to 600 ppm, if -- and it is a big if -- natural variations of the same order do not trump this one way or another, or net negative natural feedbacks kick in to further limit the observed warming, or chaos asserts itself in the underlying nonlinear chaotic system and kicks us into runaway warming or the next glacial episode.

rgb

Comment Re:please no (Score 1) 423

Yes, you have. You missed, for example, the entire bit about the null hypothesis. You also missed the fact that I am not asserting that the Earth isn't warming, or that CO_2 increases are probably not a factor in the warming we have experienced. I can actually read a spectrograph and have a decent understanding of the GHE from the basic physics on up. I'm only pointing out that the trivial model you suggest is precisely why we should doubt that TCS is over 2 C! That is, the null hypothesis is around 1 to 1.5 C total warming from CO_2 alone, which is all we have even weak direct evidence for. Everything else is built on a shaky tower of model assumptions, physics toy models, and an attempt to solve a probably unsolvably difficult problem in a particular way, to put some sort of stamp of authenticity on a conclusion that is both unfounded and, so far, contradicted pretty strongly by observational fact.

rgb

Comment Re:please no (Score 2) 423

One knows this because one studies nonlinear chaotic systems (in systems with far simpler coupled DEs), learns about things like the Kolmogorov scale, turbulence, Lyapunov exponents, and monkeys about with solving nonlinear coupled ODEs with both adequate and inadequate integration stepsize. From this one learns that the climate models are arguably some 30 orders of magnitude shy of a spatiotemporal step that one might reasonably expect to be able to integrate over some significant time to get an actual solution.
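
A sketch of the stepsize point with the textbook Lorenz '63 system (not a climate model, obviously; forward Euler is deliberately crude here to make the point):

```python
import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 system."""
    x, y, z = state
    return np.array([
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    ])

def integrate(dt, t_end=10.0):
    state = np.array([1.0, 1.0, 1.0])
    for _ in range(int(t_end / dt)):
        state = lorenz_step(state, dt)
    return state

print(integrate(0.0001))  # fine step
print(integrate(0.01))    # coarse step: a completely different point in phase space
```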

This gap is bridged two ways. One of the two ways is to make pure assertions about the physics in between the Kolmogorov scale and the scale we can afford to integrate. For example, forget the local dynamics of thunderstorms -- thunderstorms are phenomena that are basically invisible on a 100x100x1 km grid. Assume that one can use some sort of probability distribution of thunder-storminess in the dynamics, and that this is adequate to describe all of the violent and rapid heat transport, vertical and lateral, in thunderstorms with sizes distributed on length scales of 2 to 10 km and with time scales of significant variation of a minute or longer (the time required to get out of your car and reach the house, of course). Do this repeatedly, with everything -- tornadoes (and other small scale velocity fields with nonzero curl) -- gone, replaced with an assertion regarding averages. Don't worry about the fact that none of these assertions can be formally derived, and that we know perfectly well we won't get the right answer if we do this for any other chaotic system studied by mankind thus far (for example, try this for a simple damped driven rigid oscillator: replace the driving force with an average of almost any sort and see what happens), but don't forget to shout that the models are based on physics if anybody dares to point this out.

The other is even better. When the models are run, they are still nonlinear iterated maps, even if they are integrated with approximated dynamics and an enormous spatiotemporal step, so they still exhibit chaos and make lots of nifty patterns that "look like" weather (and even are a theoretically and empirically defensible approximation to weather, for integration periods of a week or so from reasonably well-known initial conditions, before the chaotic trajectories diverge to fill phase space and render them worthless for weather prediction any more). One gets, from even tiny perturbations of the initial conditions and/or physical parameters, butterfly-effect divergences that create an entire bundle of "possible microtrajectories" for the model system being solved -- which is, note well, not even arguably the actual equation of motion for the coupled Earth-Sun-Atmosphere-Ocean system; it is a pure toy model that nobody sane would expect to actually work. And of course it empirically does not work, not even close. The microtrajectories produced, which generally only work across a reference period (trial data) by carefully choosing large, cancelling forcing terms in the approximated dynamics, end up having far too much variance (compared to the actual climate), the wrong autocorrelation spectrum (direct evidence of the wrong physics, but who's counting), and range from (for CMIP5 models) a handful that actually cool over very long time scales to some that go sky high.

The actual Earth, of course, only has one trajectory and it doesn't look anything like any of these model trajectories. So now comes the best part. The "ensemble" of microtrajectories is actually averaged and used as a prediction for the trajectory.

Words fail me. Again, to fall back on a trivial example: imagine taking a damped driven rigid rod oscillator operating in the chaotic regime, starting it from an "ensemble" of slightly perturbed initial conditions, integrating it on so coarse a timestep that one gets chaos -- but perhaps chaos that is not even qualitatively similar to the chaos observed with an adequate timestep -- and then taking the numerical average over the trajectories one obtains and asserting that this is a good approximation to the long time behavior of the system!
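
Even the one-line logistic map shows how meaningless the ensemble average is as a "trajectory" (a sketch; the ensemble mean flattens toward the invariant-density mean, which no individual chaotic trajectory resembles):

```python
import numpy as np

R = 4.0   # fully chaotic regime of the logistic map x -> R x (1 - x)

# "Ensemble" of slightly perturbed initial conditions.
x = np.random.uniform(0.4, 0.6, size=1000)

means = []
for _ in range(50):
    x = R * x * (1.0 - x)
    means.append(x.mean())

print(means[-5:])   # ensemble mean hovers near ~0.5 after a few steps
print(x[:3])        # individual trajectories still bounce chaotically over (0, 1)
```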

And this is before one does something even more striking. Linearize the driving force in some way, and predict that the derivative of this average of many chaotic trajectories is a valid predictor of some property of the actual, single trajectory of the actual chaotic system.

And don't forget, it is physics based. Or was, sort of (but not really), before you did the averaging. Now I don't have any idea what the basis or justification is for the result that is obtained. Trivial counterexamples demonstrate that the entire approach is so unbelievably flawed that it would literally take a numerical miracle for the result at any given integration scale to have the slightest relevance to any actually observed trajectory of the actual system being modelled.

But of course, they are still not done. After doing this averaging over some unspecified number of microtrajectories (well, they are specified, but not anywhere where the models are collectively presented, such as in Chapter 9 of AR5, lest it cause people to call into serious doubt the statistical treatment of the model results singly and collectively), the per-model average trajectories still have too much variance, the wrong autocorrelation and spectrum, produce utterly nonphysical distributions of atmospheric heat (tropical tropospheric hot spot, anyone?), and spend far, far too much time with temperatures higher than the observed temperature rather than below it, everywhere outside of the reference period (training set data), for the last 165 years of thermometric data -- even after 32 adjustments have been made that spectacularly increased the warming of the present relative to the past 31 times and left it unchanged 1 time (odds 1 in 4 billion, at least if one assumes errors from the past are at worst unbiased; odds absolutely astronomical if one considers the UHI effect, which is ignored in the evaluation of e.g. HADCRUT4, and a correction for which somehow fails to cool the present relative to the past even in GISS, where they claim to have one).

So they average all of the models in CMIP5 together and call that the best prediction -- oops, I mean "projection", because predictions can be falsified and predictions have to be at least arguably physics based. This superaverage of averages of individually badly failed microtrajectories -- from individual models that are not even approximately mutually independent, that each have very different numbers of contributing microtrajectories and so are not even equally weighted in that regard, that use different spatiotemporal grid sizes and entirely different ways of treating the ocean, and that have to balance things like the radiative balance between CO_2 and aerosols and water vapor feedback in different ways to fit the reference period -- is most definitely not a prediction. In fact, as far as I can tell, it is a mere statistical abomination. But don't forget! Somewhere back in there there is actual physics!

The wondrous virtue of this is that one can plot the envelope of the average of the individual model microtrajectories (not the actual microtrajectories themselves, or their actual variance singly or collectively, as that would instantly reveal this for the nonsense that it is) and pretend that this variance is somehow a normal predictor according to the central limit theorem, so that as long as the bottom of this range doesn't get too far above the actual observed trajectory, it doesn't falsify any of the contributing, non-independent, incorrectly weighted individual models with their structurally absurd microtrajectories!

Finally, one can ignore the fact that this average of averages of failed individual model microtrajectories visibly spends roughly 90% of its time warmer than the aforementioned e.g. HADCRUT4 everywhere outside of the reference period, both in the past and in the future of that period (and that the underlying single-model average trajectories are visibly oscillating all over the place with far too great a variance even after being averaged), and then write the Summary for Policy Makers. In this summary, not one tiny bit of this enormous tower of unproven assumptions, questionable methods, outright worrisome intermediate results, and erasure of any vestige of connection to actual physics is ever mentioned. Instead its results are used to state with high confidence that post-1950 warming was more than half due to CO_2, in spite of the fact that almost all of that warming was confined to a single time span of roughly 15 years (certainly no more than 20) out of the almost 65 years post 1950, and that almost as much warming was observed from 1920 to 1950 without much help from CO_2 -- warming that the superaverage of all of the models skates straight over, as one can see in figure 9.8a of AR5.

Indeed, I defy anyone to provide a quantitatively defensible definition of the term "confidence" as used in the SPM of AR5 for any of the assertions made therein about global average temperature or the consequences thereof. The term "confidence" is used in this document in the human sense: the writers of the section themselves strongly believe that their statements are true. However, this is a summary of supposedly scientific results, and any reader is naturally going to assume that the assertions of confidence are defensible, as they are anywhere else in science where this sort of terminology is used, from approving new drugs to the confidence one has that a new aerodynamic design will work as predicted if one invests a billion dollars to build it -- rather than the moral equivalent of drug companies telling the FDA and NIH that they sincerely believe a new drug is safe and effective, in spite of having used absolutely indefensible steps from start to finish in the statistical analysis that is their sole basis for any sort of belief at all.

That's how one knows it. It's also why climate researchers are falling over one another to come up with explanations for this failure (see e.g. Box 9.2 in AR5, with a total of over 50 distinct hypothesized but obviously unproven explanations in the peer reviewed literature so far), why people are finally thinking that it is time to lose the worst of the CMIP5 models before they backfire and cost the entire discipline all credibility, and why estimates for total climate sensitivity are in freefall -- already under the 2 C by 2100 limit that all of the expensive measures to ameliorate carbon dioxide emissions were supposed to have achieved, even if we dropped those emissions so fast that it caused the collapse of western civilization as just one of many side effects along the way. Good news! We're there already, even if CO_2 rises to 600 ppm by 2100, according to most of the latest results, and we might be as low as 1 C -- hardly even noticeable, and arguably net beneficial!

1 C is what one expects from CO_2 forcing alone, with no net feedbacks. It is what one expects as the null hypothesis from the very simplest of linearized physical models -- one where the current temperature is the result of a crossover in feedback, so that any warming produces net cooling and any cooling produces net warming. This sort of crossover is key to stabilizing a linearized physical model (like a harmonic oscillator) -- small perturbations have to push one back towards equilibrium, and the net displacement from equilibrium is strictly due to the linear response to the additional driving force. We use this all of the time in introductory physics to show how the only effect of solving a vertical harmonic oscillator in an external, uniform gravitational field is to shift the equilibrium down by \Delta y = mg/k. Precisely the same sort of computation, applied to the climate, suggests that \Delta T \approx 1 C at 600 ppm relative to 300 ppm.
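
For completeness, the textbook computation alluded to, in the same notation (nothing assumed beyond Hooke's law and uniform gravity):

```latex
m\ddot{y} = -ky - mg
\quad\xrightarrow{\; y \,=\, u - mg/k \;}\quad
m\ddot{u} = -ku ,
% i.e. the identical oscillator about a new equilibrium,
% displaced downward by
\qquad \Delta y = \frac{mg}{k}.
```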

That's right, folks. Climate is what happens over 30+ years of weather, but Hansen and indeed the entire climate research establishment never bothered to falsify the null hypothesis of simple linear response before building enormously complex and unwieldy climate models. They built strong positive feedback into those models from the beginning, worked tirelessly to "explain" -- badly -- the single stretch of only twenty years of warming in the second half of the 20th century by balancing the strong feedbacks with a term that was and remains poorly known (aerosols), and asserted that this would be a reliable predictor of future climate.

I personally would argue that historical climate data manifestly a) fail to falsify the null hypothesis; b) strongly support the assertion that the climate is highly naturally variable, as a chaotic nonlinear highly multivariate system is expected to be; and c) show that at this point we have excellent reason to believe that the climate problem is non-computable -- quite probably non-computable with any reasonable allocation of computational resources the human species is likely to be able to engineer or afford, even with Moore's Law, anytime in the next few decades, if Moore's Law itself doesn't fail in the meantime. 30 orders of magnitude is 100 doublings -- at least half a century. Even then we will face the difficulty of initializing the computation, as we are not going to be able to afford to measure the Earth's microstate on this scale, and we will need theorems in the theory of nonlinear ODEs that I do not believe have yet been proven before we have any good reason to think that some sort of interpolatory approximation scheme will succeed in the meantime.

rgb

Comment Re:The last sentence in the summary... (Score 1) 232

Was that to me? Sorry, I have physics classes to teach and am insanely busy teaching them, and there is no point in posting a short answer to a difficult or subtle question. I had time to answer this morning and did so. Not that I expect my reply to make any difference in your beliefs. If you wish to accept the word of the climate "oracles" as god-descended truth instead of something that, well, could easily be doubted on multiple grounds, then I doubt that pointing those grounds out will change your beliefs. I'll merely point out that actual statisticians often make fun of climate scientists (see e.g. William Briggs' blog and his patient, detailed posts on the subject), and for pretty good reasons. Making reliable inferences from computational models of this class is something I've done a fair bit of work in, and it is very, very difficult. This isn't computing the trajectory of a baseball.

rgb

Comment Re:The last sentence in the summary... (Score 1) 232

I was replying to "Here is a graph...". It states that it LOOKS like SLR is already happening (duh!) and that the rise is accelerating.

As to whether or not the future projections are based on physics: how, exactly? Do you mean that there is physics in the models used to make those projections? No argument. Are the models capable of using the physics that is in them to make a prediction of future SLR that can be falsified? Not in any possible way. Hence one integrates the models (contingent on the assumptions that go into the "physics" inside, which is often in the form of semi-empirical formulae that kind-of-work for short-run weather forecasting in the models from whence the GCMs are descended, until chaos makes those predictions worthless) and observes a staggering range of possible future climates. One assumes further that in this case -- more or less uniquely in the general class of problems "like" this in mathematics and physics -- it's OK to solve the problem on a spatiotemporal granularity close to 10^30 times larger than the Kolmogorov scale. One assumes further that even though the resulting bundle of trajectories is so broad as to be nearly useless, and each one is merely a "possible" future history of the climate, the mean of this bundle is a number that is somehow relevant to the future behavior of the actual climate -- a single realization drawn from a space of possibilities that is almost certainly far larger than the model space, given the coarse graining and smoothing. Then one goes one step beyond that and averages over many models that aren't even independent, and -- what? Prays to a benign deity that these are good numbers on which to bet trillions of dollars and millions of lives right now to (perhaps) avoid a catastrophe later?

What part of this makes sense?

rgb
