Re:please no
One knows this because one studies nonlinear chaotic systems (in systems with far simpler coupled DEs), learns about things like the Kolmogorov scale, turbulence, and Lyapunov exponents, and monkeys about with solving nonlinear coupled ODEs with both adequate and inadequate integration stepsizes. From this one learns that the climate models are arguably some 30 orders of magnitude shy of a spatiotemporal step that one might reasonably expect to be able to integrate over some significant time to get an actual solution.
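A minimal sketch of the stepsize problem, using the Lorenz-63 system as a stand-in (the climate equations are vastly larger, but the lesson is the same). Both runs below start from the same initial condition and both look plausibly chaotic, but the coarse-step trajectory bears no relation to the resolved one:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Lorenz-63: three coupled nonlinear ODEs, the classic chaotic toy.
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def euler_run(dt, t_final, s0):
    # Forward Euler, deliberately the crudest integrator available.
    s = np.array(s0, dtype=float)
    for _ in range(int(t_final / dt)):
        s = s + dt * lorenz(s)
    return s

s0 = (1.0, 1.0, 1.0)
print("coarse step:", euler_run(0.01, 20.0, s0))    # the "affordable" stepsize
print("fine step:  ", euler_run(0.0001, 20.0, s0))  # an "adequate" stepsize
# Same equations, same start; after 20 time units the two endpoints are
# completely unrelated -- the coarse run is chaos, just not *the* chaos.
```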
This gap is bridged in two ways. One of the two ways is to make pure assertions about the physics in between the Kolmogorov scale and the scale we can afford to integrate. For example, forget the local dynamics of thunderstorms -- thunderstorms are phenomena that are basically invisible on a 100x100x1 km grid. Assume that one can use some sort of probability distribution of thunder-storminess in the dynamics, and that this is adequate to describe all of the violent and rapid heat transport, vertical and lateral, in thunderstorms with sizes distributed on length scales of 2 to 10 km and with time scales of significant variation of a minute or longer (the time required to get out of your car and reach the house, of course). Do this repeatedly, with everything -- tornadoes (and other small scale velocity fields with nonzero curl) -- gone, replaced with an assertion regarding averages. Don't worry about the fact that none of these assertions can be formally derived, or that we know perfectly well that we won't get the right answer if we do this for any other chaotic system studied by mankind thus far (for example, try it on a simple damped driven rigid oscillator: replace the driving force with an average of almost any sort and see what happens, as in the sketch below), but don't forget to shout that the models are based on physics if anybody dares to point this out.
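Here is that sketch, for the damped driven pendulum (theta'' = -b theta' - sin(theta) + F cos(wt)), with the textbook chaotic-regime parameters -- nothing climate-specific. Replace the drive by its time average (zero, here) and the dynamics are not "averaged", they simply vanish:

```python
import numpy as np

def simulate(drive, b=0.5, t_final=100.0, dt=0.001):
    # Damped driven pendulum: theta'' = -b*theta' - sin(theta) + drive(t),
    # integrated with a semi-implicit Euler step.
    theta, omega = 0.2, 0.0
    for i in range(int(t_final / dt)):
        theta += dt * omega
        omega += dt * (-b * omega - np.sin(theta) + drive(i * dt))
    return theta, omega

F, w = 1.2, 2.0 / 3.0                        # textbook chaotic-regime drive
full = simulate(lambda t: F * np.cos(w * t))
avgd = simulate(lambda t: 0.0)               # the time average of F*cos(w*t)
print("full drive:    ", full)               # still swinging chaotically
print("averaged drive:", avgd)               # has decayed to the rest state
```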
The other is even better. When the models are run, they are still nonlinear iterated maps, even if they are integrated with approximated dynamics and an enormous spatiotemporal step, so they still exhibit chaos and make lots of nifty patterns that "look like" weather (and even are a theoretically and empirically defensible approximation to weather, for integration periods of a week or so from reasonably well-known initial conditions, before the chaotic trajectories diverge to fill phase space and render them worthless for weather prediction). One gets, from even tiny perturbations of the initial conditions and/or physical parameters, butterfly-effect divergences that create an entire bundle of "possible microtrajectories" for the model system being solved -- which is, note well, not even arguably the actual equation of motion for the coupled Earth-Sun-Atmosphere-Ocean system; it is a pure toy model that nobody sane would expect to actually work. And of course it empirically does not work, not even close. The microtrajectories produced, which generally only work across a reference period (training data) by carefully choosing large, cancelling forcing terms in the approximated dynamics, end up having far too much variance (compared to the actual climate), the wrong autocorrelation spectrum (direct evidence of the wrong physics, but who is counting), and range from (for CMIP5 models) a handful that actually cool over very long time scales to some that go sky high.
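The "bundle of possible microtrajectories" is easy to reproduce in the same toy setting: perturb the initial conditions of the Lorenz system at the 10^{-10} level and watch the ensemble spread grow exponentially until it saturates at the size of the attractor (a sketch, again, not a climate model):

```python
import numpy as np

def lorenz(S, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Vectorized Lorenz-63 right-hand side for an ensemble of states.
    x, y, z = S[:, 0], S[:, 1], S[:, 2]
    return np.stack([sigma * (y - x), x * (rho - z) - y, x * y - beta * z], axis=1)

rng = np.random.default_rng(0)
dt = 0.001
# 20 ensemble members, identical to within one part in 10^10.
S = np.array([1.0, 1.0, 1.0]) + 1e-10 * rng.standard_normal((20, 3))
for i in range(int(40.0 / dt)):
    if i % int(5.0 / dt) == 0:
        print(f"t = {i * dt:5.1f}   ensemble spread in x: {S[:, 0].std():.3e}")
    S = S + dt * lorenz(S)
# The spread grows roughly as exp(lambda*t) until it saturates at the size
# of the attractor, after which the members are mutually unrelated,
# weather-like trajectories.
```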
The actual Earth, of course, only has one trajectory and it doesn't look anything like any of these model trajectories. So now comes the best part. The "ensemble" of microtrajectories is actually averaged and used as a prediction for the trajectory.
Words fail me. Again to fall back on a trivial example, imagine taking a damped driven rigid rod oscillator operating in the chaotic regime, starting it from an "ensemble" of slightly perturbed initial conditions, integrating it on so coarse a timestep that one gets chaos -- but perhaps chaos that is not even qualitatively similar to the chaos observed with an adequate timestep -- and then taking the numerical average over the trajectories one obtains and asserting that this is a good approximation to the long time behavior of the system!
And this is before one does something even more striking. Linearize the driving force in some way, and predict that the derivative of this average of many chaotic trajectories is a valid predictor of some property of the actual, single trajectory of the actual chaotic system.
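Both steps are trivial to demonstrate with the same toy pendulum as above: run an ensemble from slightly perturbed initial conditions, average the trajectories, and fit a linear trend to the average. The ensemble mean is far smoother than any member once the trajectories decorrelate, and its fitted slope has nothing to do with the slope of any single realization (a sketch under the same textbook parameters, not any published methodology):

```python
import numpy as np

def run(theta0, b=0.5, F=1.2, w=2.0 / 3.0, t_final=100.0, dt=0.001, keep=100):
    # One pendulum microtrajectory; record the (bounded) angular velocity.
    theta, omega, out = theta0, 0.0, []
    for i in range(int(t_final / dt)):
        theta += dt * omega
        omega += dt * (-b * omega - np.sin(theta) + F * np.cos(w * i * dt))
        if i % keep == 0:
            out.append(omega)
    return np.array(out)

rng = np.random.default_rng(1)
# 20 members with perturbations large enough to decorrelate within the run.
ens = np.array([run(0.2 + 0.1 * rng.standard_normal()) for _ in range(20)])
mean = ens.mean(axis=0)                    # the "ensemble average" step
t = np.arange(mean.size) * 0.1
print("typical member variability:", ens.std(axis=1).mean())
print("ensemble-mean variability: ", mean.std())  # a much smoother curve
print("trend of the mean:", np.polyfit(t, mean, 1)[0])
print("trend of member 0:", np.polyfit(t, ens[0], 1)[0])  # an unrelated number
```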
And don't forget, it is physics based. Or was, sort of (but not really), before you did the averaging. Now I don't have any idea what the basis or justification is for the result that is obtained. Trivial counterexamples demonstrate that the entire approach is so unbelievably flawed that it would literally take a numerical miracle for the result at any given integration scale to have the slightest relevance to any actually observed trajectory of the actual system being modelled.
But of course, they are still not done. After doing this averaging over some unspecified number of microtrajectories (well, they are specified, but not anywhere where the models are collectively presented, such as in Chapter 9 of AR5, lest it cause people to call into serious doubt the statistical treatment of the model results singly and collectively), the per-model average trajectories still have too much variance and the wrong autocorrelation and spectrum, produce utterly nonphysical distributions of atmospheric heat (tropical tropospheric hot spot, anyone?), and spend far, far too much time above the observed temperature rather than below it everywhere outside of the reference period (training set data) for the last 165 years of thermometric data -- even after 32 adjustments have been made that spectacularly increased the warming of the present relative to the past 31 times and left it unchanged 1 time (odds 1 in 4 billion, at least if one assumes errors from the past are at worst unbiased; odds absolutely astronomical if one considers the UHI effect, ignored in the evaluation of e.g. HADCRUT4, and a UHI correction that somehow fails to cool the present relative to the past even in GISS, where they claim to have applied one).
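For what it is worth, the quoted odds can be reconstructed as a simple sign test: if each of the 32 adjustments were a priori equally likely to warm or to cool the trend (the "at worst unbiased" assumption), then the probability that not one of them cooled it is (1/2)^{32} \approx 2.3 \times 10^{-10}, i.e. about 1 in 4.3 billion.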
So they average all of the models in CMIP5 together and call that the best prediction -- oops, I mean "projection", because predictions can be falsified and have to be at least arguably physics based. This superaverage of averages of individually badly failed microtrajectories of individual models -- models that are not even approximately mutually independent, that each have very different numbers of contributing microtrajectories and so are not even equally weighted in that regard, that use different spatiotemporal grid sizes and entirely different ways of treating the ocean, and that have to balance things like the radiative balance between CO_2, aerosols, and water vapor feedback in different ways to fit the reference period -- is most definitely not a prediction. In fact, as far as I can tell, it is a mere statistical abomination. But don't forget! Somewhere back in there there is actual physics!
The wondrous virtue of this is that one can plot the envelope of the averages of the individual model microtrajectories (not the actual microtrajectories themselves, or their actual variance singly or collectively, as that would instantly reveal this for the nonsense that it is) and pretend that this variance is somehow a normal predictor according to the central limit theorem, so that as long as the bottom of this range doesn't get too far above the actual observed trajectory it doesn't falsify any of the contributing, non-independent, incorrectly weighted individual models with their structurally absurd microtrajectories!
Finally, one can ignore the fact that this average of averages of failed individual model microtrajectories visibly spends roughly 90% of its time warmer than the aforementioned e.g. HADCRUT4 everywhere outside of the reference period, both in the past and in the future of that period (and that the underlying single-model average trajectories are visibly oscillating all over the place with far too great a variance even after being averaged), and then write the Summary for Policy Makers. In this summary, not one tiny bit of this enormous stack of unproven assumptions, questionable methods, outright worrisome intermediate results, and erasure of any vestige of connection to actual physics is ever mentioned. Instead its results are used to state with high confidence that post-1950 warming was more than half due to CO_2, in spite of the fact that almost all of that warming was confined to a single time span of roughly 15 years (certainly no more than 20) out of the almost 65 years post 1950, and that almost as much warming was observed from 1920 to 1950 without much help from CO_2 -- warming that the superaverage of all of the models skates straight over, as one can see in figure 9.8a of AR5.
Indeed, I defy anyone to provide a quantitatively defensible definition of the term "confidence" as used in the SPM of AR5 for any of the assertions made therein about global average temperature or the consequences thereof. The term "confidence" is used in this document in the human sense, as in: the writers of the section themselves strongly believe that their statements are true. However, this is a summary of supposedly scientific results, and any reader is naturally going to assume that the assertions of confidence are defensible, as they are anywhere else in science where this sort of terminology is used, from approving new drugs to the confidence one has that a new aerodynamic design will work as predicted if one invests a billion dollars to build it -- rather than the moral equivalent of drug companies telling the FDA and NIH that they sincerely believe a new drug is safe and effective, in spite of having used absolutely indefensible statistical steps, from start to finish, in the analysis that is their sole basis for any sort of belief at all.
That's how one knows it. It's also why climate researchers are falling over one another to come up with explanations for this failure (see e.g. Box 9.2 in AR5, with a total of over 50 distinct hypothesized but obviously unproven explanations in the peer reviewed literature so far), why people are finally thinking that it is time to lose the worst of the CMIP5 models before they backfire and cost the entire discipline all credibility, and why estimates for total climate sensitivity are in freefall, already under the 2 C by 2100 limit that all of the expensive measures being taken to ameliorate carbon dioxide were supposed to have produced -- if we dropped CO_2 emissions so fast that it caused the collapse of western civilization as just one of many side effects along the way. Good news! We're there already, even if CO_2 rises to 600 ppm by 2100, according to most of the latest results, and we might be as low as 1 C, hardly even noticeable and arguably net beneficial!
1 C is what one expects from CO_2 forcing alone, with no net feedbacks. It is what one expects as the null hypothesis from the very simplest of linearized physical models -- one where the current temperature is the result of a crossover in feedback, so that any warming produces net cooling and any cooling produces net warming. This sort of crossover is key to stabilizing a linearized physical model (like a harmonic oscillator) -- small perturbations have to push one back towards equilibrium, and the net displacement from equilibrium is strictly due to the linear response to the additional driving force. We use this all of the time in introductory physics to show how the only effect of solving a vertical harmonic oscillator in an external, uniform gravitational field is to shift the equilibrium down by \Delta y = mg/k. Precisely the same sort of computation, applied to the climate, suggests that \Delta T \approx 1 C at 600 ppm relative to 300 ppm.
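For the record, a back-of-the-envelope version of that computation, using the standard logarithmic forcing formula and the no-feedback Planck response (textbook values, nothing taken from the models): \Delta F = 5.35 \ln(600/300) \approx 3.7 W/m^2, and \Delta T \approx \Delta F / (4 \sigma T^3) evaluated at the effective radiating temperature T \approx 255 K gives \Delta T \approx 3.7/3.76 \approx 1 C.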
That's right folks. Climate is what happens over 30+ years of weather, but Hansen and indeed the entire climate research establishment never bothered to falsify the null hypothesis of simple linear response before building enormously complex and unwieldy climate models. They built strong positive feedback into those models from the beginning, worked tirelessly (and badly) to "explain" the single stretch of only twenty years of warming in the second half of the 20th century by balancing the strong feedbacks against a term that was and remains poorly known (aerosols), and asserted that this would be a reliable predictor of future climate.
I personally would argue that historical climate data manifestly a) fail to falsify the null hypothesis; b) strongly support the assertion that the climate is highly naturally variable, as a chaotic nonlinear highly multivariate system is expected to be; and c) that at this point, we have excellent reason to believe that the climate problem is non-computable -- quite probably non-computable with any reasonable allocation of computational resources the human species is likely to be able to engineer or afford, even with Moore's Law, anytime in the next few decades, if Moore's Law itself doesn't fail in the meantime. 30 orders of magnitude is about 100 doublings (2^{100} \approx 10^{30}) -- at least half a century, even at Moore's Law rates. Even then we will face the difficulty of initializing the computation, as we are not going to be able to afford to measure the Earth's microstate on this scale, and we will need theorems in the theory of nonlinear ODEs that I do not believe have yet been proven before we have any good reason to think that some sort of interpolatory approximation scheme will succeed in the meantime.
rgb