The previous 1.5-2 ppm peaks would be just as invisible as the current one.
Even more so, the chart seems to have less high frequency noise the further back in time it goes. I wonder why. My initial guess is diffusion of stuff trapped in the ice.
The graph might also be an average from many ice cores, and the increasing difficulty of dating deeper samples accurately would smear the average further. It also seems to me that the number of samples per unit of time gets smaller towards more ancient times. Perhaps the scientists, aware of the aging and diffusion problems, chose to take larger samples from deeper down. That would have bought them better resolution near the surface (science must also obey the limits of finite resources).
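To make the resolution point concrete, here's a toy sketch (the numbers are invented, not real ice-core data): a short-lived spike in a CO2 record simply vanishes when each sample averages over a wide enough time window, which is effectively what diffusion and coarse sampling do.

```python
# Toy illustration (invented numbers, not real ice-core data): a narrow
# CO2 spike disappears when samples average over wide time windows.

def sample_mean(series, window):
    """Average consecutive values in blocks of `window` years."""
    return [sum(series[i:i + window]) / window
            for i in range(0, len(series), window)]

# 1000 years of a flat 280 ppm baseline with a 50-year, +2 ppm spike.
co2 = [280.0] * 1000
for year in range(500, 550):
    co2[year] += 2.0

fine = sample_mean(co2, 10)     # one sample per decade
coarse = sample_mean(co2, 500)  # one sample per 500 years

print(max(fine) - 280.0)    # spike still fully visible: 2.0
print(max(coarse) - 280.0)  # spike smeared down to ~0.2
```

The 2 ppm spike survives decade-scale sampling but shrinks by a factor of ten under 500-year averaging, which is the sense in which "the previous 1.5-2 ppm peaks would be invisible" only if they were short-lived.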
So, you're saying that there's no evidence of any problems.
The previous 1.5-2 ppm peaks would be just as invisible as the current one.
Only if the previous peaks were very short-lived. No reason to believe there ever were very narrow high peaks.
Also, whether or not it happened before is a moot point. Sure, mass extinctions happened before. We still have reasons not to have one more of those happen now.
No scientist, ever, anywhere, thinks that the Antarctic is going to melt completely. Ice mass and permafrost that happens to be in a sufficiently cold place (central Antarctic continent being the most obvious location) will stay frozen. The exact amount of permafrost today must necessarily be a delicate balance, so some warming must melt some permafrost (well, given that some landmass does exist at intermediate latitudes).
The increase of methane must be both the result, and a partial cause, of any warming. Causation can and does go both ways. No, it's not a runaway chain reaction, but it's a settling-to-some-new-balance, which might be disastrously different from the pre-industrial balance (for debatable values of "disastrous").
ATI once bricked my Radeon laptop by releasing drivers that couldn't draw a single pixel on the mobile 9600. Okay, so maybe it was a bug, but they weren't in a hurry to fix it. Yes, I could have installed an older driver, but on Linux that would have also meant installing an old distribution with an old kernel, and I needed new features and programs. And even while the ATI driver initially worked, it didn't support everything (dual-screen in particular was hacky).
I'd be very happy to see AMD make stable and feature-complete drivers (consistently) for Linux. But given their well-earned reputation and my personal hardships with Radeons, I'm not buying another ATI/AMD graphics card unless I see several years of flawless drivers from them (at least the kind of flawlessness that Nvidia offers).
So everybody "knows" that Nvidia is better for Linux, and not many people are left to find out whether the drivers have improved. Too bad. They're in the grave they dug themselves.
Most people are not scientists. I voted "believe it" because a lot of people that I consider to be a lot smarter than me have stated that it's true. Neutral observers only count when money isn't involved.
Don't ever, ever, ever say something must be true because someone smarter than you said it must be true. If the smart people can't convince you by presenting their methods, data, and conclusions, then they're not that smart and don't really understand what they are trying to say.
Well, that also depends greatly on *you*. Are you a physicist? No? Then you have no way of really appreciating their elaborate research and evidence. And if a climate scientist thinks you're an uneducated bigot, they quite correctly choose to concentrate on their research instead of trying to convince you. Actually, even if they don't think that, they make that choice. A great researcher is not often a great educator, and the few who are educate people in universities.

Scientists are not politicians. It is the politicians' job to try to convince you. That doesn't always go well, because they're not scientists. But hey, not everything comes on a golden plate.

So I have a suggestion. If you actually read what scientists write about climate change, you may be convinced for the right reasons. The IPCC 2007 documents are a goldmine of scientific data, argumentation, and analysis, and there have been vast amounts of new scientific publications on the matter since 2007. For convincing arguments, read university textbooks and talk to students and professors. Or better yet, study the matter yourself.
When evidence doesn't fit the model, *fix* *the* *model*.
This doesn't make sense to me. If, when any contradictory data arises, you simply change the test conditions (the model), how is anyone ever to prove the hypothesis invalid? If you keep moving the goalpost by altering the model to better portray the conclusion you wish to see, how would an opponent find any contradiction? It seems to be the scientific equivalent of fudge factors. Couldn't someone who wants to see a different outcome just devise a new model with different fudge factors that portray an entirely different picture?
Well, suppose for a moment that Newton had come up with F = A*m^3.4 - m^2 + B*ma - a^2 instead of F = ma. He would then make a few measurements to fix the fudge factors A and B and to support the equation. Someone else would make other measurements and get contradictory data. That would invalidate Newton's original formula. Simple as that. Now, if Newton came up with another formula, they would start again, and this time it might be the right one.
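That invalidation process can be sketched in a few lines. The force law below is made up (simpler than the one above, so the arithmetic is easy to follow), and all "measurements" are simulated:

```python
# Toy sketch: fit the fudge factor of a wrong force law to one batch of
# measurements, then watch fresh measurements contradict it.

def true_force(m, a):
    return m * a  # what nature actually does: F = ma

def wrong_model(m, a, A):
    # A deliberately wrong hypothesis: force proportional to mass
    # squared, with a fudge factor A to be fixed from measurements.
    return A * m ** 2 * a

# Experiment 1: fixed mass of 2 kg, varying acceleration.
# Choosing A = 0.5 makes the wrong model match every measurement exactly.
A = 0.5
for a in (1.0, 2.0, 3.0):
    assert abs(wrong_model(2.0, a, A) - true_force(2.0, a)) < 1e-9

# Experiment 2: someone else repeats the measurement with a 4 kg mass.
print(wrong_model(4.0, 1.0, A))  # the model predicts 8.0 N
print(true_force(4.0, 1.0))      # the measurement gives 4.0 N
```

The fitted model survives the first experiment perfectly and is falsified by the second, which is exactly the goalpost the earlier poster was asking about: contradictory data kills the formula, fudge factors and all.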
What hypothesis are you talking about? The greenhouse effect? Well, prove any of its parts invalid. You might prove that CO2 doesn't absorb EM radiation in the way we thought, or that it doesn't emit EM radiation in the way we thought, or that the Earth surface doesn't radiate heat the way we thought. But all these things are studied very much in the lab. The foundations are solid.
Prove anthropogenic greenhouse effect invalid? Simple, just prove that humans don't, after all, release CO2. Well..
Prove invalid the conclusion that the net effect of anthropogenic greenhouse effect is heating? Well, make very careful calculations about that and release the results amongst other scientists to see if they find errors in your logic. If they don't, then probably your study is valuable. This has indeed been done over and over again, and if one avoids deliberate fudge factors, the result always is that AGW is a fact.
Prove invalid the concept of computer models? Sorry. Computer models are nothing more than a way to make many calculations efficiently. So maybe you can go prove that mathematics is actually flawed. Good luck with that.
You could however prove invalid some specific numerical methods applied in the models. These have been studied in depth too, and people have found lots of ugly pitfalls, but these can be avoided. You could of course find out that some model falls into some pitfall, and thus the numerics are invalid.
Models are nothing more than a collection of equations of so-called "laws of nature", used to see what the laws of nature mean in practice. They don't contain deliberate fudge factors. They do contain "parametrizations" - if something is not known from a law of nature, scientists go out and try to measure the value in nature (or rather read studies from other scientists who have done that). These are questionable, but usually it's found that changing these values a bit doesn't affect the outcome very much.
The model is not the "test conditions". The model is a tool of calculation. Other calculations not done with models (but pen and paper) agree with the calculations using models. So, in short, one could prove model results invalid by showing 1) our understanding of laws of nature is mistaken, or 2) these laws are not applied properly in the models, or 3) the model result is caused by unrealistic assumptions.
I hope this answers the question although I'm not sure I understood it correctly.
Here is my dilemma. I don't understand how two well-educated scientists can look at the exact same data and come up with two completely opposite results. How does that happen? My guess is that the wildcard is the assumptions built into the models. Maybe a model with zero assumptions would help, but I doubt that is possible.
Does that happen? I have never seen this happen, given two well educated scientists who both work in their area of expertise. The "scientists" that I've heard come up with opposite results from climate data have not been climate scientists.
"They" were not predicting an ice age. Google it up. In 1975, there was a single lunatic predicting an ice age, while the scientific consensus thought quite otherwise. You're perpetuating a myth.
The sun's power output cannot explain why 1980-2010 was warmer than 1950-1980 or 1920-1950. It may play a part in why 2000-2010 didn't see a large trend.
We don't know the temperatures of other planets. What, we have one working thermometer on Mars? You can't seriously claim we could put one thermometer on Earth and deduce the planet's mean temperature from that.
As I said, a good level of accuracy for short-term forecasts. Though any forecast more than 10 or 14 days ahead is mostly useless. It's a known accuracy.
Hm? We have a workable model of inputs and outputs. The inaccuracies in weather forecasts come mainly from two sources: the computational resolution is awful (aye, still something like 100 square kilometers lumped into one value), and we're feeding rubbish into them. The latter is a subtle point. If we know the laws of physics and the state of some system, we can predict the future states of the system. However, we *don't* know the state of the atmosphere. Not now, not yesterday. This is the problem. It's equivalent to calculating the orbit of some asteroid when we only know its location, velocity, and mass very approximately.
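Here's a minimal sketch of that "feeding rubbish in" problem, using the logistic map as a stand-in for the atmosphere (the map and the numbers are just illustrative): a tiny error in the measured initial state grows with every step until long-range prediction is hopeless.

```python
# Sensitivity to initial conditions: perfect laws, imperfect input.

def evolve(x, steps):
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)  # logistic map in its chaotic regime
    return x

true_state = 0.400000
measured = 0.400001  # our best measurement, off by one part per million

for steps in (5, 20, 50):
    gap = abs(evolve(true_state, steps) - evolve(measured, steps))
    print(steps, gap)
# The gap starts negligible and typically grows by many orders of
# magnitude: short forecasts are usable, long ones are not.
```

The laws inside `evolve` are exact and the arithmetic is flawless; only the input is slightly wrong, and that alone wrecks the long forecast, just like the poorly-known asteroid orbit.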
Forecasting weather is like taking a group of teenagers and coming up with their supposed incomes 20 years later. Forecasting climate is like taking the group and coming up with the group's average income 20 years later. The latter problem is simpler.
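The teenager analogy can be put in code (a made-up random-walk model, nothing to do with real incomes or climate): individual trajectories scatter unpredictably, while the ensemble average is easy to pin down.

```python
# Individual paths (weather) vs. the ensemble average (climate).
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def walk(steps):
    x = 0.0
    for _ in range(steps):
        x += random.gauss(0.1, 1.0)  # small systematic drift, big noise
    return x

walks = [walk(100) for _ in range(10000)]
mean = sum(walks) / len(walks)

print(min(walks), max(walks))  # individual outcomes vary wildly
print(mean)                    # the average lands near 100 * 0.1 = 10
```

Predicting any single walk 100 steps out is a lost cause, but the group average is tightly constrained by the drift term alone, which is why the second problem is simpler.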