Comment Re:ok ... (Score 1) 248

I'm guessing it's not remotely relevant to you this is happening after they've been born?

Yes, you wanting to control women's bodies to please your bad reading of the writings of your almighty sky fairy makes you an extremist.

And the 'writings of a sky fairy' as the main argument against abortion is an extreme strawman.

The rational end of the debate is about when a fetus is granted/promoted to the status of a human being with human rights. It's possible to disagree on this without hatred of women or murder of infants being the underlying motivation. It also means the uncomfortable prospect of wrestling with the question without the comfort of the absolutes that make your pre-chosen correct answer unassailable.

Comment You say tomato (Score 1) 445

You seem to be copy-pasting this claim all over the thread, but the actual journal article's predictions are for climate sensitivity. They also aren't models that have run fifty years of accurate predictions, because they aren't models at all in that sense, but rather a series of mathematical estimations of climate sensitivity. It's nice and all that you can find a paper from 50 years ago with a climate sensitivity similar to current best estimates, but it doesn't match your claim at all.

To be blunt, you are misrepresenting things to the point of lying. "Over a run of fifty years, it turns out that Manabe (and Wetherald)'s models were pretty good at predicting..." might sound better than saying that someone 50 years ago estimated the same climate sensitivity as modern estimates, but it is also dishonest. That's the opposite of how science is supposed to be disseminated.

Comment Because Models Never were predictive (Score 1) 445

Climate models have never been able to simulate the energy imbalance accurately enough to be predictive of it. Given that it is THE underlying driver, this latest result is unsurprising and just another iterative step among many on the road to getting better.

As per the IPCC AR5 assessment of climate models:

For instance, maintaining the global mean top of the atmosphere (TOA) energy balance in a simulation of pre-industrial climate is essential to prevent the climate system from drifting to an unrealistic state. The models used in this report almost universally contain adjustments to parameters in their treatment of clouds to fulfil this important constraint of the climate system (Watanabe et al., 2010; Donner et al., 2011; Gent et al., 2011; Golaz et al., 2011; Martin et al., 2011; Hazeleger et al., 2012; Mauritsen et al., 2012; Hourdin et al., 2013).

As parent mentioned, the cloud contribution to the energy imbalance was modelled so poorly that they had to manually adjust it to get a realistic state. The problem is still not solved. This is only a problem for the crowd that was banking on the models being gospel truth for their crusades rather than, you know, science.

Comment The scientific evidence refuting it (Score 1) 201

...provide the scientific evidence that refutes it...

As requested, I'll post one of the journal articles that was used as a source for the xkcd comic. Apologies for copying my response from elsewhere in the thread:

Yeah, I know from my title alone everybody is queuing up articles refuting the common stupidity people invoke to make that claim.

Allow me to pre-empt that by referring exclusively to Michael Mann's (of original hockey stick fame) own follow-up work and very quickly presenting his own findings in his own words. Having laid that out, let me point out the uncertainties.

Following up his original, much-publicized 'hockey stick' article, Mann released the following paper in 2008. He basically extended his approach to more data and explored additional methods of analysis, as you would expect. On calibrating and assessing the results of his old and new methods against an expanded data set, he notes:
The CPS and EIV methods (Dataset S2 and Dataset S3) are both observed to yield reconstructions that, in general, agree with the withheld segment of the instrumental record within estimated uncertainties based on both the early (1850–1949) calibration/late (1950–1995) validation and late (1896–1995) calibration/early (1850–1895) validation. However, in the case of the early calibration/late validation CPS reconstruction with the full screened network (Fig. 2A), we observed evidence for a systematic bias in the underestimation of recent warming. This bias increases for earlier centuries where the reconstruction is based on increasingly sparse networks of proxy data. In this case, the observed warming rises above the error bounds of the estimates during the 1980s decade, consistent with the known “divergence problem”

And on looking specifically at his new "EIV" method:
Interestingly, although the elimination of all tree-ring data from the proxy dataset yields a substantially smaller divergence bias, it does not eliminate the problem altogether (Fig. 2B). This latter finding suggests that the divergence problem is not limited purely to tree-ring data, but instead may extend to other proxy records. Interestingly, the problem is greatly diminished (although not absent—particularly in the older networks where a decline is observed after 1980) with the EIV method, whether or not tree-ring data are used (Fig. 2 C and D). We interpret this finding as consistent with the ability of the EIV approach to make use of nonlocal and non-temperature-related proxy information

And I'm not sure how to insert a graphic here, but to summarize: he later has a graph of the new, old, and other peer-reviewed temperature reconstructions all put together. It's in Fig. 3 of the link above, so don't trust my interpretation; go ahead and verify for yourself. The graph has two interesting aspects I want to highlight.
1. The EIV temperature reconstruction, which Mann acknowledges above as superior, has by far the highest peak temperatures of any reconstruction, at 4 or 5 points in the past exceeding even the highest reconstructed temperatures since 1900.
2. The instrumental record is tacked onto the end of the graph, as in Mann's previous work, creating the same hockey stick in the same way as before. That is, the hockey stick does NOT exist within the reconstructed temperatures, but only in the combination of attaching the instrumental record to the reconstruction (see the toy sketch below).
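
To make that second point concrete, here is a toy sketch in Python (entirely made-up numbers, NOT Mann's data or methods) of how appending an instrumental-style series onto a flat proxy-style series produces a stick shape that neither dataset contains on its own:

    # Toy illustration with invented numbers: a flat, noisy "reconstruction"
    # with a trending "instrumental" series appended at the end yields the
    # familiar stick shape, even though the reconstruction itself never
    # captures comparable warming.
    import random

    random.seed(0)

    # Hypothetical proxy reconstruction, 1000-1949: flat with noise.
    reconstruction = [(year, random.gauss(0.0, 0.15)) for year in range(1000, 1950)]

    # Hypothetical instrumental record, 1950-2000: a steady warming trend.
    instrumental = [(year, 0.02 * (year - 1950)) for year in range(1950, 2001)]

    # The "splice": plotting both as one curve gives a hockey stick whose
    # blade exists only in the appended instrumental segment.
    spliced = reconstruction + instrumental

    print(max(t for _, t in reconstruction))  # proxy peak stays near noise level
    print(max(t for _, t in instrumental))    # instrumental end reaches ~1.0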

Alright, aside from my illustrative sketch, 100% of what I have above is either Mann's own words or direct observation of the graph he published. From this follow a few key conclusions that seem extremely straightforward, and I'd love for someone to approach them honestly and tell me if anything I've got is badly incorrect.

Now for my own observations:

1. The known "divergence problem" is acknowledged by Mann: calibrating reconstruction methods to earlier periods and then trying to recreate current warming fails, producing a divergence. One thing this could indicate is that the data or methods are insensitive to signals like the current warming, and thus would not recreate them even if they did occur historically.
2. Trimming the data to the 'better' data doesn't eliminate the problem, but the newer EIV method at least shows considerable improvement.
3. None of the reconstructions produce current warming as warm as the historic warming shown by the EIV method.

These observations together lead me to the conclusion that there remains a tremendous amount of uncertainty in the reconstructions of historic temperature. Moreover, I am left with the very strong impression that attaching the visual of the instrumental record to the end of the graph badly undermines the overall science of the reconstructions and only serves to highlight their inability to recreate it. The only positive reason to append it is public opinion, because it looks scary. This, IMO, is dishonest, lazy, and a fast path to public distrust of scientists, which we can ill afford.

That all is to say that the underlying "science" of an historically unprecedented current warming is not nearly so dramatic as presented.

Comment Climate hockey stick fares poorly (Score 1) 201

Yeah, I know from my title alone everybody is queuing up articles refuting the common stupidity people invoke to make that claim.

Allow me to pre-empt that by referring exclusively to Michael Mann's (of original hockey stick fame) own follow-up work and very quickly presenting his own findings in his own words. Having laid that out, let me point out the uncertainties.

Following up his original, much-publicized 'hockey stick' article, Mann released the following paper in 2008. He basically extended his approach to more data and explored additional methods of analysis, as you would expect. On calibrating and assessing the results of his old and new methods against an expanded data set, he notes:
The CPS and EIV methods (Dataset S2 and Dataset S3) are both observed to yield reconstructions that, in general, agree with the withheld segment of the instrumental record within estimated uncertainties based on both the early (1850–1949) calibration/late (1950–1995) validation and late (1896–1995) calibration/early (1850–1895) validation. However, in the case of the early calibration/late validation CPS reconstruction with the full screened network (Fig. 2A), we observed evidence for a systematic bias in the underestimation of recent warming. This bias increases for earlier centuries where the reconstruction is based on increasingly sparse networks of proxy data. In this case, the observed warming rises above the error bounds of the estimates during the 1980s decade, consistent with the known “divergence problem”

And on looking specifically at his new "EIV" method:
Interestingly, although the elimination of all tree-ring data from the proxy dataset yields a substantially smaller divergence bias, it does not eliminate the problem altogether (Fig. 2B). This latter finding suggests that the divergence problem is not limited purely to tree-ring data, but instead may extend to other proxy records. Interestingly, the problem is greatly diminished (although not absent—particularly in the older networks where a decline is observed after 1980) with the EIV method, whether or not tree-ring data are used (Fig. 2 C and D). We interpret this finding as consistent with the ability of the EIV approach to make use of nonlocal and non-temperature-related proxy information

And I'm not sure how to insert a graphic here, but to summarize: he later has a graph of the new, old, and other peer-reviewed temperature reconstructions all put together. It's in Fig. 3 of the link above, so don't trust my interpretation; go ahead and verify for yourself. The graph has two interesting aspects I want to highlight.
1. The EIV temperature reconstruction, which Mann acknowledges above as superior, has by far the highest peak temperatures of any reconstruction, at 4 or 5 points in the past exceeding even the highest reconstructed temperatures since 1900.
2. The instrumental record is tacked onto the end of the graph, as in Mann's previous work, creating the same hockey stick in the same way as before. That is, the hockey stick does NOT exist within the reconstructed temperatures, but only in the combination of attaching the instrumental record to the reconstruction.

Alright, 100% of what I have above is either Mann's own words or direct observation of the graph he published. From this follow a few key conclusions that seem extremely straightforward, and I'd love for someone to approach them honestly and tell me if anything I've got is badly incorrect.

Now for my own observations:

1. The known "divergence problem" is acknowledged by Mann: calibrating reconstruction methods to earlier periods and then trying to recreate current warming fails, producing a divergence. One thing this could indicate is that the data or methods are insensitive to signals like the current warming, and thus would not recreate them even if they did occur historically.
2. Trimming the data to the 'better' data doesn't eliminate the problem, but the newer EIV method at least shows considerable improvement.
3. None of the reconstructions produce current warming as warm as the historic warming shown by the EIV method.

These observations together lead me to the conclusion that there remains a tremendous amount of uncertainty in the reconstructions of historic temperature. Moreover, I am left with the very strong impression that attaching the visual of the instrumental record to the end of the graph badly undermines the overall science of the reconstructions and only serves to highlight their inability to recreate it. The only positive reason to append it is public opinion, because it looks scary. This, IMO, is dishonest, lazy, and a fast path to public distrust of scientists, which we can ill afford.

That all is to say that the underlying "science" of an historically unprecedented current warming is not nearly so dramatic as presented.

Comment Re:Don't overstate modelling uncertainty (Score 1) 294

Yes, I know how GCMs work.

I think we're talking past each other. You're repeatedly saying "parameter X has uncertainty!" and I'm saying "we don't need to know X to calculate the change in climate, we only need to know the change in X."

I can say that over and over again in different words, but I'm ending up just saying the same thing.

Let's try examples then. Let's pretend we are tuning our own climate model. One of the parameters in our model is the conversion rate from water vapor to precipitation. Moreover, we have a very wide range of possible values, because real-world measurements still leave a wide valid range to pick from. Like virtually all modellers, we've initially set all our parameters to our best estimates, including this one. Unfortunately, that left us with a positive energy imbalance over an annual test run. This is all pretty much status quo. The next step is to adjust our water vapor conversion parameter to increase the conversion of water vapor, expecting that to reduce the energy imbalance. We run things again and, lo and behold, the energy balances out. In practice you just do that many, many times for many, many variables, but that's the general idea. Is that example something we can agree upon as a starting point?
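
As a sketch of that single tuning step (toy numbers throughout; the linear toy_annual_imbalance function is a hypothetical stand-in for a real year-long model run), the loop is essentially a root find on the unknown parameter:

    # Minimal sketch of single-parameter tuning. Everything here is a toy:
    # a real GCM run would replace toy_annual_imbalance.

    def toy_annual_imbalance(conversion_rate):
        """Stand-in for a year-long model run: returns the net TOA energy
        imbalance in W/m^2 for a given vapor->precipitation conversion rate."""
        return 10.0 - 25.0 * conversion_rate  # invented linear response

    def tune(lo, hi, tol=1e-3):
        """Bisect the parameter range until the imbalance is ~zero."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if toy_annual_imbalance(mid) > 0:
                lo = mid   # still gaining energy: convert more vapor
            else:
                hi = mid
        return (lo + hi) / 2

    rate = tune(0.0, 1.0)
    print(rate, toy_annual_imbalance(rate))  # ~0.4, imbalance near zero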

I'll proceed hoping that's straightforward and agreeable. If the net energy imbalance our tuning fixed was, say, +10 W/m², how much can we rely on our model to test the energy imbalance from increasing CO2 concentrations to where they introduce an imbalance of around 3 W/m²? Given our example, and that we only needed to tune one variable, maybe we can learn a lot.

Let's walk back though, because tuning is not done exclusively for a net energy balance of zero. Everyone modelling also KNOWS that today's energy imbalance with today's CO2 concentrations should be 0.6 W/m². Being competent modellers, this gives us more data to make our model more accurate. So now we run our model with increased CO2 and find it produces a lower energy imbalance than expected. We then walk back and tune our water vapor parameter up a little bit. We run a few more tests and settle on a middle ground where our parameter now gives the 'right results' for energy balance in both scenarios.
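
A sketch of that second step, again with invented linear responses: one parameter now has to serve two known targets at once, so the tuning settles on a least-squares compromise, which is exactly how the known answers get baked in:

    # Toy two-target compromise (hypothetical numbers): one parameter must
    # satisfy a pre-industrial run (target 0 W/m^2) and a present-CO2 run
    # (target +0.6 W/m^2), so we settle for the least combined error.

    def imbalance_preindustrial(rate):
        return 10.0 - 25.0 * rate           # invented response, target 0.0

    def imbalance_present_co2(rate):
        return 12.2 - 25.0 * rate           # invented response, target +0.6

    def combined_error(rate):
        e1 = imbalance_preindustrial(rate) - 0.0
        e2 = imbalance_present_co2(rate) - 0.6
        return e1 ** 2 + e2 ** 2

    # Crude grid search over the plausible range of the parameter.
    best = min((combined_error(r / 1000), r / 1000) for r in range(0, 1001))
    print(best)  # the compromise value "bakes in" the known answers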

Now that we've completed that tuning step, can we step back, run our model, introduce CO2 forcing in 1900, and, when it reproduces a static pre-1900 climate and realistic post-1900 warming, declare our model independent proof that CO2 is the driver? The answer is of course no.

The unfortunate reality is that we are still stuck choosing between forcing in known results or a model with unrealistic energy imbalances. We need a lot more research and iterative improvement before we can build models that have accurate energy imbalances without forcing them in, and thus forcing in the known answers we'd ideally like to get independent results for.

Comment Re:Don't overstate modelling uncertainty (Score 1) 294

...Likewise. I don't need to know the baseline to calculate the change in temperature. I just need to know the change in forcing. And I don't need to know the cumulative errors. I only need to know the change in cumulative errors.

I think you're missing that with climate simulations, changing one variable changes the behavior of everything else as well.

Of course. That's called "feedback." The fact that it is a highly-coupled system with multiple feedback loops is why global circulation models are run on supercomputers, and not on your laptop.

But again: you don't have to calculate the effects ab initio. You set the baseline from observation (you can call this "fine tuning the model" if you like, but it is really no more than setting the initial parameters to the real Earth climate), and all you need to actually calculate is the change in feedback.

No, climate models have long since moved past the naive collection of feedback loops you describe.

General Circulation Models use actual physics modelling to simulate what the atmosphere is physically doing. The globe is broken down into a series of cells, and the average state of a pile of parameters representing everything from temperature to particulates is calculated for each cell. Then a series of calculations is run each cycle, simulating the collective underlying physics within and between each cell. There is no singular 'clouds = −17 W/m²' variable or setting; it is instead a collective, emergent observation of the state of the simulated atmosphere over a given range of iterations. No matter what you do, everything is changing with every single step you advance the simulation.
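
For illustration only, here is a drastically simplified sketch of that cell structure: an 8x8 grid with placeholder diffusion physics, nothing like a real GCM's radiative and dynamical cores:

    # Drastically simplified sketch of the grid-of-cells structure. The
    # "physics" is a placeholder diffusion step; a real GCM runs full
    # radiative, dynamical, and moist physics per cell per timestep.

    GRID = 8  # 8x8 cells standing in for a global grid

    # Each cell carries a pile of state; here just temperature and humidity.
    cells = [[{"T": 288.0 + i - j, "q": 0.01} for j in range(GRID)]
             for i in range(GRID)]

    def step(cells, dt=1.0, k=0.05):
        """One timestep: every cell exchanges heat with its neighbors, so
        changing anything anywhere changes everything on the next step."""
        new = [[dict(c) for c in row] for row in cells]
        for i in range(GRID):
            for j in range(GRID):
                nbrs = [cells[(i + di) % GRID][(j + dj) % GRID]["T"]
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                flux = k * (sum(nbrs) / 4 - cells[i][j]["T"])
                new[i][j]["T"] += dt * flux
        return new

    for _ in range(100):
        cells = step(cells)

    # Diagnostics like "cloud radiative effect" would be emergent averages
    # over cells, not a single dial you can set.
    print(sum(c["T"] for row in cells for c in row) / GRID**2)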

Now, within the simulation of these cells we still have parameters where we don't know the best 'real' values, so those are used for 'tuning'. The primary purpose of tuning currently is to adjust the unknowns so that when the simulation is run over, say, a year, the net energy balance averages to zero. This is necessary currently so we can simulate the known parameters in a state where the energy balance is reasonable, and we can learn lots from that. However, one thing we can't easily take away is the interaction with the net energy balance, because we've already knowingly altered it directly within the simulation. That said, it's still a necessary step, and hopefully we'll gain enough knowledge from these current runs to reduce the unknowns to small enough values that we no longer have to tune net energy manually. Regrettably, we also have to be fair and acknowledge that even the current observed values have uncertainties higher than the known contribution from CO2 :(. Hard problems don't go away just because it'd be easier to shut up people who want to do nothing, or who deny anything is happening at all.

Comment Re:Don't overstate modelling uncertainty (Score 1) 294

You're talking about uncertainty in the modeled baseline, but what was being asked about is uncertainty in the modeled change.

They are different things.

The models I'm referencing aren't 'baseline' or 'change'; they are physics-based models.

Let me try to explain this more clearly.

If I walk upstairs from the front door to my second-floor office, I can say that I have changed my elevation by 4 meters with a precision of plus or minus one meter.

But you tell me "No, you don't know your distance from the center of the Earth to within plus or minus fifty meters! You can't possibly have plus or minus one meter accuracy on the distance you climbed!"

But I don't need to calculate my distance from the center of the Earth with 1 meter accuracy to be able to calculate my change in elevation to 1 meter accuracy.

Likewise. I don't need to know the baseline to calculate the change in temperature. I just need to know the change in forcing. And I don't need to know the cumulative errors. I only need to know the change in cumulative errors.

I think you're missing that with climate simulations, changing one variable changes the behavior of everything else as well. Instantaneously moving CO2 concentrations up in a simulation will have a fairly predictable delta to the net global energy balance in timeslice 1. Run the simulation for a few months or years, though, and all the other factors and their contributions to the global energy imbalance will have changed. Predicting future trends is necessarily dependent on the whole, and we simply can't declare that knowing CO2's ability to capture radiation is enough to understand and predict its impact on future temperatures. We cannot just dismiss the impact of the unknowns and the limitations of our current knowledge like that.
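
A toy way to see this (hypothetical numbers; the single feedback parameter stands in for the whole coupled system): even a perfectly known CO2 forcing gives very different temperature changes depending on uncertain properties of everything else:

    # Toy sketch: the *change* from a known forcing still depends on
    # uncertain parameters of the whole system, unlike the staircase analogy.

    def equilibrium_dT(forcing, feedback):
        """Tiny energy-balance toy: dT = forcing / lambda, where the feedback
        parameter lambda is itself uncertain (clouds, water vapor, ...)."""
        return forcing / feedback

    co2_forcing = 2.6  # W/m^2, roughly the known CO2 contribution

    # The same well-known forcing yields very different warming deltas
    # across plausible-looking toy values of the coupled-system parameter:
    for feedback in (0.8, 1.2, 1.8):  # W/m^2 per K, invented values
        print(feedback, equilibrium_dT(co2_forcing, feedback))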

Making things even more challenging, the cumulative errors and unknowns are GREATER than the known CO2 contribution.

Comment Maybe this is more succinct (Score 1) 294

...

In fact, the natural factors, including the natural greenhouse effect, are much much larger than the human generated greenhouse effect. But the uncertainty in the part that isn't varying is not relevant to the discussion of changes in climate. It is only tuning of the effects of input to the part of the model that changes which matters to our understanding of change.

Qualitatively your reasoning makes sense. We've measured an enormous number of factors that drive climate changes. The observed CO2 concentration increases paired with warming and an absence of other measured variables that should cause warming tell us CO2 is clearly the driver.

Quantitatively though, you can't simplify the climate models that way. Simulations are not a vacuum; everything interacts with and affects everything else. You can make CO2 the only thing you change, but by changing it you necessarily change the behaviour of everything else as well. Fundamentally, the accuracy of simulating that change will be no greater than the accuracy of the overall model. Finally, as I've repeatedly pointed out, that accuracy is much coarser than the known singular impact of CO2 concentration increases.

Comment Re:Don't overstate modelling uncertainty (Score 1) 294

You're talking about uncertainty in the modeled baseline, but what was being asked about is uncertainty in the modeled change.

They are different things.

The models I'm referencing aren't 'baseline' or 'change'; they are physics-based models. As you pointed out, we understand the basic physics very well. Most fundamentally, we know that the earth is not a closed system: energy comes in from the sun and bleeds out to space. We know with zero doubt that if more energy comes in than goes out, that means warming. We know with zero doubt that if more energy leaves than comes in, that means cooling. Furthermore, we know that in the long term, heating and cooling ONLY happen through this mechanism.

Without net energy entering or leaving our atmosphere, air temperatures will still go up and down, but that will be the result of energy coming from somewhere else, like the oceans, which will correspondingly cool.

The trouble we have with predicting change is that the modelling of the energy imbalance is still not nearly precise enough to resolve the very small imbalance our CO2 concentration increases are KNOWN to cause. All the other cumulative errors, unknowns, and outright mistakes in our current climate models cause a greater deviation from observed energy imbalances than the signal we are interested in predicting.
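
That bookkeeping is just a zero-dimensional energy balance. A minimal sketch, assuming a rough mixed-layer heat capacity (an illustrative value, not a tuned one):

    # Zero-dimensional sketch of the energy bookkeeping described above:
    # temperature can only trend up or down via a net top-of-atmosphere
    # imbalance. All numbers are illustrative.

    HEAT_CAPACITY = 4.2e8       # J/m^2/K, rough ocean mixed-layer value (assumption)
    SECONDS_PER_YEAR = 3.15e7

    def integrate(imbalance_w_m2, years, T0=288.0):
        """March temperature forward under a constant net imbalance."""
        T = T0
        for _ in range(years):
            T += imbalance_w_m2 * SECONDS_PER_YEAR / HEAT_CAPACITY
        return T

    print(integrate(0.0, 100))   # no imbalance: no long-term trend
    print(integrate(0.6, 100))   # small positive imbalance: steady warming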

You said the modeling is done and there is no other explanation than CO2 that fits the data. You are correct that CO2 is the only known reasonable explanation. You are incorrect though to suggest that modelling has shown that CO2 alone fits the data, because in reality, the precision required in the energy imbalance to make that declaration is still way out of reach.

Comment Re:Don't overstate modelling uncertainty (Score 1) 294

What you're pointing out is true, and even insightful, but does not actually address the point that we have pretty good understanding of the fact that the human greenhouse gas emissions are responsible for the current warming that we see, and that natural variations are not enough to account for them.

You are correct, reminding people that there is uncertainty in the modelling is very important. That uncertainty is about plus or minus 50% in the climate sensitivity (how much the average temperature changes with a fractional change in CO2.) But even at the low end, the human generated greenhouse gas emissions are the dominant effect in the change in climate.

(And I will snarkily point out that most of the "it's not a problem" faction tend to ignore that the uncertainty goes both directions: the effect of our emissions could, in fact, be much larger than the current best estimate value.)

The kinds of uncertainty you are talking about are mostly irrelevant to understanding climate change-- for this, we need to understand the uncertainty in changes in forcing factors.

In fact, the natural factors, including the natural greenhouse effect, are much much larger than the human generated greenhouse effect. But the uncertainty in the part that isn't varying is not relevant to the discussion of changes in climate. It is only tuning of the effects of input to the part of the model that changes which matters to our understanding of change.

I think the uncertainty in modelling is important, though, for determining future action. Climate models are one of our best tools for projecting what change CO2 concentrations will drive a century forward. The IPCC emission scenario projections are exactly the thing we need for policy planning, but the uncertainties in the models used to create them must also be understood.

The trouble is that the current state of the art in climate models still can't simulate the global energy imbalance precisely enough for CO2 contributions to matter without essentially making ad-hoc adjustments by hand to get the known correct answer. Worse still, even the directly observed (not simulated) measurements of the global energy imbalance have an uncertainty range almost twice as large as the contribution from the CO2 we've added.

Climate models have come a very, very long way, and lots of very good new work is being done to improve them further. The trouble is that for the purpose of predicting the future impact of CO2 concentrations, the change in the global energy imbalance is the singular key. Currently, the tuning adjustments we are forced to make are bigger than the signal we are trying to predict. Worse still, the real-world measurements we have to compare simulations against are themselves hardly precise enough for the task we are demanding of them. Tempting as it may be to point at models simulating the last 100 years of warming as proof they are 'good enough', the energy imbalances say otherwise. If you read between the lines of the comments I quoted from the modellers above, they agree.

First and foremost, we need a lot more $$ put into satellite and ocean observations for tracking the global energy imbalance and improving our precision there. Secondly, we need to give modelling teams a lot more time and resources before expecting them to provide accurate predictive models.

Comment Re:Don't overstate modelling uncertainty (Score 1) 294

https://skepticalscience.com/clouds-negative-feedback-intermediate.htm

You are confusing feedback with radiative effect. Neither my post nor any of the journal articles linked had anything to say about cloud feedbacks.

In models, radiative effect is the immediate/instantaneous impact on radiation coming in/out of our atmosphere. Feedback is the change in radiative effect of a process in response to temperature changes.

The −17 W/m² radiative cooling effect of clouds is based on observed values. That is the reference made in the journal articles I linked. That number tells us nothing about whether clouds are a negative, positive, or neutral feedback mechanism. If, as temperature goes up, clouds cool less, that would be a positive feedback.
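
A quick sketch of the distinction (the cloud response function here is hypothetical, invented purely to show the two quantities): the radiative effect is the value of the function today; the feedback is its slope with temperature:

    # Toy function, made-up numbers: radiative effect is the W/m^2 a process
    # contributes *now*; feedback is how that effect changes with temperature.

    def cloud_radiative_effect(T):
        """Hypothetical: clouds cool by ~17 W/m^2 today, cooling slightly
        less as T rises (which would make this a positive feedback)."""
        return -17.0 + 0.5 * (T - 288.0)

    T, dT = 288.0, 1.0
    effect_now = cloud_radiative_effect(T)                         # radiative effect
    feedback = (cloud_radiative_effect(T + dT) - effect_now) / dT  # W/m^2 per K

    print(effect_now)  # -17.0: tells you nothing about the feedback sign
    print(feedback)    # +0.5: less cooling per degree => positive feedback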

Discussion of positive/negative feedback is entirely beside the point of anything I posted. The heart of what I shared from the modelling community the IPCC relies upon is quickly re-summarized in the facts below:
Observed radiative effect of clouds is −17 W/m².
Uncertainty of the overall net global energy imbalance is ±4 W/m².
Contribution of increased CO2 concentrations is 2.6 W/m².

This highlights a couple of critical challenges climate modelling teams are working to overcome. Worst is that the observed uncertainty in the energy imbalance is greater than the imbalance from CO2 causing our current warming. Next is that one of the key processes we understand least in modelling terms (clouds) has an order of magnitude greater impact on the energy budget than the imbalance causing current warming.

If you read the rest of the journal articles I linked, you'll see that the 'tuning' discussed essentially compensates for unknown or poorly modelled processes by tuning them not to be more correct, but instead to get the global energy imbalance correct. This is a necessary step until we iteratively improve and refine all the processes and get the errors and unknowns small enough that we can simulate things accurately without tuning.

Finally, the last quote I gave basically points out that knowing the 'correct' energy balance to tune to, net-zero imbalance for pre-1900 CO2 levels and a net imbalance matching today's observed imbalance for current CO2 concentrations, inherently forces the 'right' answer onto our models. Simply put, any errors and unknowns in the processes we use for tuning 'could' also contribute to warming, but to the models it would look the same, because EVERYBODY is tuning to make certain that CO2 is responsible for the full net change in energy balance. That's why the authors of the paper are proposing making the tuning process more automated and less a hand-picked function, so we can reduce the problem of influencing our own results more than we'd like.

Comment Don't overstate modelling uncertainty (Score 3, Informative) 294

It turns out that yes, we are pretty certain that human-generated trace gasses, primarily CO2, are responsible for warming. The theory is well understood, the modelling has been done by dozens of independent groups on five different continents (with open source code that thousands of people have been scrutinizing for errors), and there simply isn't an alternate theory that fits the measured data and explains the temperatures on Earth (and also on the other planets and moons of the solar system with atmospheres -- you do know that Earth isn't the only planet that we analyze). We have very very good measurements in the 21st century. We KNOW the inputs to the climate. We measure the solar variability. We know the infrared absorption of carbon dioxide and other trace gasses. There simply is no other input that is large enough to account for the present warming.

The climate models you discuss have a lot more unknowns in them than you let on:

...clouds remain one of the largest source of uncertainties in climate predictions from general circulation models (GCMs). Globally, clouds cool the planet by 17.1 W/m² [Loeb et al., 2009]. This cooling results from a partial cancelation between two opposing contributions: cooling in the shortwave (46.6 W/m²) and warming in the longwave (29.5 W/m²). To put these numbers in perspective, the radiative impact of the increase in long-lived greenhouse gases since 1750 is estimated to be 2.63 ± 0.26 W/m² [Forster et al., 2007, Table 2.12]. It should therefore not come as a surprise that uncertainties in the representation of clouds can have considerable impact on the simulated climate.

You can read the full article, by one of the IPCC model teams, here. Before you call that a lone wolf, the IPCC's last report references at least three other teams, all corroborating the linked article. For reference, that includes 100% of the model authors that discussed their tuning procedures and methods...

So, that is to say that clouds, which we admittedly model very poorly, impact the energy imbalance by an order of magnitude more than human CO2.

One of the other journal articles the IPCC references on climate model tuning is here.

Within it, they note the extremely challenging problem modellers are faced with:
The observations correspond to the Clouds and the Earth's Radiant Energy System (CERES)–Energy Balanced and Filled (EBAF) L3b product from Loeb et al. (2009). The height of the gray rectangle in (a) and thickness of the gray curves in (b) and (c) correspond to an observation uncertainty of ±4 W/m².

So, we still have a limitation on the actual observed energy imbalance of ±4 W/m², against a reasonably accurately known contribution of 2.6 W/m² from increased CO2 concentrations.
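
For clarity, the arithmetic being argued from is just this (numbers taken from the quotes above):

    # The comparison at the heart of the argument, numbers from the quotes:
    cloud_effect = -17.1     # W/m^2, observed cloud radiative effect
    obs_uncertainty = 4.0    # W/m^2, CERES-EBAF imbalance uncertainty (+/-)
    co2_forcing = 2.63       # W/m^2, increase in GHG forcing since 1750

    print(obs_uncertainty > co2_forcing)    # True: the noise exceeds the signal
    print(abs(cloud_effect) / co2_forcing)  # ~6.5x: cloud effect dwarfs CO2 forcing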

The same article later notes:
The often-deployed paradigm of climate change projection is that climate models are developed using theory and present-day observations, whereas ECS is an emergent property of the model and the matching of the twentieth-century warming constituting an a posteriori model evaluation. Some modeling groups claim not to tune their models against twentieth-century warming; however, even for model developers, it is difficult to ensure that this is absolutely true in practice because of the complexity and historical dimension of model development.

The reality of this paradigm is questioned by findings of Kiehl (2007), who discovered the existence of an anticorrelation between the total radiative forcing and climate sensitivity in a model ensemble; high-sensitivity models were found to have a smaller total forcing and low-sensitivity models were found to have a larger forcing, yielding less cross-ensemble variation of historical warming than otherwise to be expected. Even if alternate explanations have been proposed and even if the results were not so straightforward for CMIP5 (cf. Forster et al. 2013), it could suggest that some models may have been inadvertently or intentionally tuned to the twentieth-century warming.

Put more succinctly, there is evidence that hand tuning of poorly understood components in climate models has led our models to 'agree' on the impacts of factors other than CO2, as a consequence of us already knowing the correct answer.

In summation, climate modelling is hard, and there are still a number of unknowns and poorly understood processes that impact the energy balance more than CO2 does. You aren't wrong that the consensus is still that none of the other variables have a reason to systematically bias warm over the last century. The trouble is that pointing at the models as 'proof' is still circular at this point.

Comment It's not Left/Right (Score 1) 294

No worries but I won't sign up for global wealth redistribution.

Strangely, people with a rightward political orientation don't seem to have an alternate plan.

Why is that?

Remind me again which political orientation has been opposing nuclear power?

You can make yourself feel better by belittling large groups of people, or you could reach out and try to get cooperation on solutions...

The climate isn't impacted overmuch by how we distribute our wealth; how we generate power certainly is.

Work on getting a consensus for mass conversion to nuclear power, ASAP, or you can entirely give up on trying to convince anyone you actually believe we are facing an imminent crisis. If we've got 'time' to wait a few decades for solar, wind, and storage to fill in for fossil fuels during down times, maybe it's premature to implement punitive wealth redistribution schemes.

Comment The actual "Hockey Stick" data (Score 1) 172

Everyone knows the hockey stick is bogus manipulation of data

Nope, the hockey stick graph has been confirmed by several studies. You can find plenty of references in the wikipedia page above. And if you dismiss all of the data, then what are you going to use to show that "we are still coming out of an ice age" as GP tried to claim?

Try looking at the actual findings in the actual studies, though. The distinctive hockey stick shape in Michael Mann's original graph came about by showing two disparate datasets on the same graph: the reconstructed temperatures for the last couple of thousand years, and then the instrumental record appended on the end. The immediate deviation from the trend of the past millennia doesn't just correspond to the start of the industrial era; it corresponds to a change in datasets. You don't get a more obvious red flag than that.

Now, absolutely, follow-up studies have been done since, and they have largely confirmed the flat/static trend from Mann's original work. They've also recreated the hockey stick at the end the same way, by introducing the instrumental record.

Fine, you'll then say that if this is all as sketchy as it sounds, you'd expect the proxy reconstructions to have trouble recreating recent warming, right? Here's an updated study by the same Michael Mann of first hockey stick fame. If you just read the overall conclusions, Mann mostly says that with new data and new methods they largely validate their previous work. However, if you look closely, the article also notes:

However, in the case of the early calibration/late validation CPS reconstruction with the full screened network (Fig. 2A), we observed evidence for a systematic bias in the underestimation of recent warming. This bias increases for earlier centuries where the reconstruction is based on increasingly sparse networks of proxy data. In this case, the observed warming rises above the error bounds of the estimates during the 1980s decade, consistent with the known "divergence problem" (e.g., ref. 37), wherein the temperature sensitivity of some temperature-sensitive tree-ring data appears to have declined in the most recent decades. Interestingly, although the elimination of all tree-ring data from the proxy dataset yields a substantially smaller divergence bias, it does not eliminate the problem altogether (Fig. 2B). This latter finding suggests that the divergence problem is not limited purely to tree-ring data, but instead may extend to other proxy records. Interestingly, the problem is greatly diminished (although not absent—particularly in the older networks where a decline is observed after 1980) with the EIV method...

If you then look down to Fig. 3, you'll notice that the EIV reconstruction doesn't just do a better job tracking recent warming; it is also by far the warmest reconstruction, with historic peaks exceeding anything but the big red instrumental record tacked on again.

So to summarize, your declaration that "the hockey stick graph has been confirmed by several studies" is true in the sense that they've recreated the historic trend many times. However, even the original author (Mann), as linked above, notes that the methodology that recreates a fairly flat/cool historic reconstruction also fails to reconstruct current warming, so much so as to fall outside the "error bars". He even mentions that this is the "known divergence problem", meaning it is well known that without attaching the instrumental record on the end, you don't get your nice hockey stick graph.
