In other words you completely missed the point.
> I will believe these pig fuckers when they can accurately predict what will happen next year. Otherwise, why would I believe what they say will happen 10 years from now. Computer models are only as accurate as the assumptions programmed into them.
And this got voted insightful?!
Let me give you an analogy. I am going to write a simple computer model that predicts how many sixes you'll get if you roll a die 1000 times. In fact, here's the source code (Python 2 compatible):
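(The original code isn't preserved in this copy; what follows is a minimal sketch of what such a model might look like, written to run under both Python 2 and 3.)

```python
import random

def predict_sixes(n_rolls):
    # The model's prediction: each face is equally likely,
    # so we expect n_rolls / 6 sixes on average.
    return n_rolls / 6.0

def simulate_sixes(n_rolls, seed=None):
    # One "observed" experiment: actually roll the die n_rolls times
    # and count the sixes.
    rng = random.Random(seed)
    return sum(1 for _ in range(n_rolls) if rng.randint(1, 6) == 6)

if __name__ == "__main__":
    n = 1000
    print("Model predicts %.1f sixes in %d rolls" % (predict_sixes(n), n))
    print("One simulated run gave %d sixes" % simulate_sixes(n))
```

The model is accurate about the aggregate (roughly 167 sixes per 1000 rolls) while being completely unable to say what the next individual roll will be.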
By your argument, this is wrong because it can't tell me whether the next roll of the die will be a six.
RockDoctor: I think you are misunderstanding my comment. What I was saying was that I *don't* believe the CO2 effect is saturated and am sceptical about those who claim that it is. This is not because I have expert knowledge but because I don't believe in conspiracies. Apologies for not being clear.
If it were so trivially obvious that the CO2 effect is saturated, it follows that lots of scientists are either too stupid to understand this or they understand it but are deliberately ignoring it or 'covering it up'. Alternatively, maybe there actually is a bit more to it than there seems. As a sceptic, my instinct is to favour that last possibility.
This article explains it quite nicely and even goes into the history of why some of the 'obvious' conclusions turned out to be faulty.
>An example of how the alarmists seize on every theoretical positive feedback for their models and ignore negative feedbacks which the observable long term stability of the climate demonstrates must exist, otherwise the first really big volcano eruptions would have charbroiled the planet.
You are assuming that a net positive feedback implies instability rather than being something that might drive the system to a new equilibrium. There is plenty of historic evidence of positive feedback - insolation changes alone are not sufficient to account for past changes in climate - and the climate has been stable long term only in the sense that it has not got into a runaway feedback. Its 'stability' has encompassed a very broad range of conditions, many of which would be very uncomfortable for human civilisation as we know it.
I don't think you are correct that negative feedbacks are ignored. There is plenty of discussion in the literature of which way cloud feedbacks go and it is widely acknowledged to be a complex area. Nevertheless, I am not aware of any compelling evidence of strong negative feedback from clouds.
Let's turn things around. We know that there *are* feedbacks - they are irrefutable from basic physical arguments. Are you saying that it just happens that these all balance each other out?
? So you agree that your assertion was innumerate.
Anyway, I see you've moved on to some different zombie arguments.
> And the biggest of those is water vapor, by an overwhelming margin.
It's not an 'overwhelming' margin. Yes, water vapour is a stronger GH gas than CO2, but its concentration is limited by its saturation vapour pressure in the atmosphere, which in turn is a function of temperature. Hence, as you know, water vapour can amplify the GH effect of CO2.
> Some even question whether there's *any* IR left over in the proper bands for C02 to make a difference.
Who questions this and are they credible?
> Mind you, some intuitively ridiculous things are in fact true.
Perhaps you should have taken more heed of this possibility before embarrassing yourself.
If you had engaged your brain you might have realised that your assertion about percentages makes sense only if the entire atmosphere comprises greenhouse gases rather than mainly consisting of gases that are unaffected by IR. It is the change in greenhouse gases that is important.
Is there something wrong with this chart? https://en.wikipedia.org/wiki/File:Global_Temperature_Anomaly_1880-2012.svg
I don't know. Do you think there is? For sure, it doesn't lend any support to the claim, made by an AC up there, that things have 'levelled off'. Unless of course they meant that little bit at the right. If they did, then they must be pretty thick, as they have not noticed that similar features appear at various points earlier in the graph but that there is nevertheless a clear upward trend. Certainly there does not appear to be sufficient information to draw any conclusion whatsoever from that little bit at the right.
There is some information on this site that gives an overview of the adjustments that have been made to the USHCN data and provides links to further detailed references. I am no expert but my impression is that the adjustments have been made for sound and fairly standard reasons such as time of observation. Furthermore, and the whole point of that page, a different method of adjustment has been applied that yields very similar results. This would tend to suggest that both methods are robust.
It is a standard 'skeptic' tactic to complain vaguely about 'adjustments' to data as if adjustments are intrinsically wrong or suspicious, whereas in fact it is rare in science for raw data not to need some pre-processing before robust conclusions may be drawn from it. However, I will give you the benefit of the doubt. Unlike me, you might very well be an expert on this topic, so I'd be interested if you could explain specifically what you think is wrong with the adjustments.
> And where exactly is this being 'vilified'?
Slashdot (the headline of this post, for one) felt the need to counter Bloomberg's summary with the NY Times' summary.
I see. The problem is that you don't know what vilified means. Fair enough.
> All I see is a study that accepts mainstream climate science and offers another data point about climate sensitivity.
You must be talking about something else then. The study claims the data show a plateau since 2000. It doesn't directly conclude that the previous conclusions (that GW is anthropogenic) were wrong. But it does provide evidence to support investigating such a possibility.
You seem to be a bit confused. The whole point of the study is to estimate climate sensitivity - how much the atmosphere warms for a doubling of CO2. How on earth does that provide evidence against AGW?
> And why are the climate change alarmists vilifying this study?
And where exactly is this being 'vilified'? All I see is a study that accepts mainstream climate science and offers another data point about climate sensitivity. It's at the lower end of the range accumulated from previous studies but nevertheless consistent with that range. It remains to be seen whether this is more accurate than the 2.5 degrees often assumed as the most likely climate sensitivity value. If it is, then that's a bit of good news but we're not off the hook by any means.
Can't say I've ever wanted to perform a set difference. But if I did, there'd be a method difference in the class Set, and it would take the second set as a parameter and spit out the result.
The whole point about generic algorithms is that you only have to write them once and can then use them with all sorts of containers, including ones that might not have been written yet, as long as the containers satisfy the minimal requirements of the algorithm. So for example, the 'set' in set_difference does not refer to the container type - it is a description of what the algorithm does. The algorithm does not demand a set; you can equally apply it to a sorted vector. Furthermore, the two input sequences to set_difference do not even have to be the same type as long as their elements are compatible, so I can apply it to a set of strings and a sorted vector of strings if I want to. By your argument, I would have to have a set class with a difference method, and a sorted vector class with a difference method. And then if I wanted set's difference method to work with sorted vectors and other compatible sequences, how would that work? I would have to write it as some sort of generic member function anyway.
Same with sort: the class would have a sort function. I would reluctantly not bounce using the sort function of the STL since it's so useful, but it's still not the right way of doing things. And it's much more complex than it should be, since the calling code has to worry about things like passing in comparators, when that should really be the job of the sort function.
So what you are saying is that instead of having a sort algorithm implemented once, I need to reimplement that algorithm in every class that I might want to sort. So either I guess that I might want to sort it at the time of writing it or, if I didn't get that right, I have to go back and modify the class. Compare that with the non-intrusive sort algorithm. How is what you are proposing good software engineering practice by any stretch of the imagination? And I don't understand your point about comparators. In most cases a type you want to sort probably defines a less-than operator, which is all you need and you don't need to provide an explicit comparator. It's only when you need to do something special that you need a comparator. How would the sort member function be better?
Here's a hint: go and look up the word 'orthogonal'. It's a key concept in understanding the STL.
> So where are the reviews that actually challenge the hypothesis?
Presumably you have studied the scientific evidence of alternative hypotheses in order to arrive at your conclusions so it seems a bit odd that you are asking this question. Shouldn't you be pointing us to them?