
Comment Re:Universe is too Strange! (Score 1) 164

Instead, what they're doing is the same crap science we see so much of these days; gather a bunch of data and look at it for all kinds of things after the fact. There's value to that, because it can tell you what you should look for next time; but it should never be confused with science.

I think that's the stupidest thing I've read on Slashdot.

Where do you think new theories and discoveries come from, anyway? Scientists do experiments, not knowing ahead of time what they're going to find, and find something new. There would be no point in doing experiments that can only confirm what we already expect to find.

Gravity can be explained by these equations but I don't know how it works or why. It's useful, but it's inaccurate and it's not science.

This is equally clueless. Gravity is one of the most accurate theories we have, and the fact that you don't understand it has no bearing on its scientific validity.

Comment Re:Maybe they'll finally explain it (Score 2) 66

Of course it's going up, NASA has confirmed that with satellite information as well as several other sources all showing quite clearly that the temperature is rising.

News flash: the satellite and surface station temperature records closely agree.

Basing models on data that is at least 1/3 bogus is fucking stupid

News flash: "data that shows cooling" != "bogus data". Parts of the Earth do cool from time to time, you know (and are expected to, even with the enhanced greenhouse effect). The satellite data shows this as well.

NOAA puts a LOT of weight on the land-based temperature data in their models.

Climate models usually don't use temperature data at all (i.e., it's not an input to the model). They're run freely using only the forcing data (greenhouse gases, solar variations, aerosol loadings, etc.) and allowed to predict their own temperatures, without reference to any temperature observations.

When they are initialized with temperature data (in "data assimilation" mode), land gets exactly the weight it should: about 30% of the Earth's surface.

Temperature observations are used for testing the predictions of climate models, but the above remains true: land data gets exactly as much weight as its area average. And models are compared to a variety of temperature records (surface and satellite), not that it matters much, since (as noted above), they all agree pretty closely.
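To make "land gets exactly as much weight as its area average" concrete, here's a minimal sketch (the anomaly numbers are made up; only the weighting logic is the point):

```python
# Illustrative only: a global mean is an area-weighted average, so land
# data contributes ~29% of the total no matter how many land stations exist.
LAND_FRACTION = 0.29            # land covers roughly 29% of Earth's surface
OCEAN_FRACTION = 1.0 - LAND_FRACTION

def global_mean_anomaly(land_anomaly, ocean_anomaly):
    """Area-weighted average: land gets ~29% weight, no more."""
    return LAND_FRACTION * land_anomaly + OCEAN_FRACTION * ocean_anomaly

# Even a large land-only error moves the global mean by only ~29% of itself.
print(global_mean_anomaly(1.0, 0.5))   # 0.29*1.0 + 0.71*0.5 = 0.645
```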

I would like nothing more than accurate climate models but we'll never get them until people admit that the data we have is shit.

As amply demonstrated above, you have no idea what you're talking about.

Comment Re:Why is this such a bad thing? (Score 5, Informative) 584

This basically makes 3rd-party software - like you get from Fink, for example - non-existent, as far as a Mac user is concerned, because all software for Macs will have to be retrieved from this "app store".

You're spreading FUD.

Software for Macs will NOT have to be retrieved from the app store only. This does not kill 3rd-party software or Fink. This announcement ONLY applies to applications that are voluntarily listed in the app store by their developers. Developers do not have to use the app store to distribute their apps.

It is possible that Apple may someday require all apps go through the app store, as you suggest, but that's not what this announcement is about.

Comment Re:Climate models are even more wrong? (Score 1) 676

The key is that the further into the future they look, the more uncertain the results of modeling chaotic system become, whether you average them or not. All averaging does is abstract away the underlying behavior, and in the absence of any additional information (variance for instance) is essentially useless for drawing conclusions from the model.

The previous poster is correct: chaos limits your ability to predict the exact state of the system (e.g., the weather over Sydney in 2093), but it doesn't necessarily limit your ability to predict statistical averages (such as the global surface temperature). Certainly there is wide uncertainty in predicting global average quantities, but this is generally not related to chaos. It's more a function of model structural errors and input parameter uncertainties.
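A toy illustration of that distinction, using the chaotic logistic map (a standard chaotic system, not a climate model): two trajectories from nearly identical initial conditions diverge completely within a few dozen steps, yet their long-run averages agree closely.

```python
# Chaotic logistic map: x_{n+1} = 4 x (1 - x).
# Pointwise ("weather") prediction fails fast; the long-run statistics
# (the "climate") remain predictable.

def trajectory(x0, n):
    xs = []
    x = x0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        xs.append(x)
    return xs

a = trajectory(0.3, 100_000)
b = trajectory(0.3 + 1e-10, 100_000)   # tiny perturbation of the initial state

# The trajectories decorrelate within ~40 steps...
max_early_gap = max(abs(x - y) for x, y in zip(a[:200], b[:200]))

# ...but the long-run means are nearly identical.
mean_a = sum(a) / len(a)
mean_b = sum(b) / len(b)
print(max_early_gap > 0.5, abs(mean_a - mean_b) < 0.01)
```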

Now I'm sure climate scientists are publishing a little more complete statistical analysis of the results of their modeling experiments. However, when they communicate their findings to policy makers and the general public, they seem to have some difficulty expressing the full scope of such an analysis, and instead point to the average or possibly a most likely outcome without the benefit of the additional information which is necessary to properly contextualize the single number.

A full uncertainty analysis of climate models has been difficult because of their complexity. (You see the same problems in many other fields that rely on large computer models.) But there are statistical uncertainty analyses of climate models, and the IPCC has been continually adding more discussion of uncertainties, error bars, etc. to its reports including summaries for policymakers.

The only way to validate them is to continue to tune them against the historical record.

Climate models are generally not tuned to the historical record, in the sense of fitting them to a historical temperature time series or something. They are tuned to data, however. This is a bit subtle, so let me elaborate:

Typically, climate modelers don't try to tune the entire model at once. They isolate subcomponents of the model, such as the cloud parameterization, and tune that. And they usually don't tune it to time series data or trends. Rather, they try to tune the submodel to the mean climate state over some period of time (e.g., to reproduce the average cloud cover in the 1990s).

This does have potential for overfitting, but by tuning subcomponents individually, they reduce the potential for compensating errors between components, and by tuning to base climate instead of climate trends, they try to keep the tuning independent of the human changes or "forcings" which occur over longer periods of time. It also allows for "independent" validation on later periods of time beyond the period of baseline climatology.

In addition, climate models can be "validated" against completely different periods of time that were not used in any tuning exercise, such as to reproduce the climate of the Last Glacial Maximum, although this is only approximately a validation due to data and input uncertainties.

In short, no, you can't truly validate a climate model's predictions for the next century without just waiting a century. (You also can't avoid tuning the model, which will always have unknown effective parameters that can't be calculated from first principles.) But you can build some confidence in the model physics through weak tuning, separation of concerns, and testing of subcomponents on independent data.
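The tuning-then-validation workflow above can be sketched with a toy one-parameter "submodel" (all numbers invented; only the workflow is the point): fit the parameter to the mean state of a baseline period, then check the tuned model against a later period never used in tuning.

```python
# Toy one-parameter "cloud submodel": cloud = k * humidity (entirely made up).
# Tune k to the MEAN state of a baseline period, not to trends.

def cloud_model(humidity, k):
    return k * humidity

baseline_humidity = [0.6, 0.7, 0.8]     # fake "1990s" inputs
baseline_obs_mean = 0.49                # fake observed mean cloud cover

# Tuning: pick k so the modeled baseline mean matches the observed mean.
mean_h = sum(baseline_humidity) / len(baseline_humidity)   # 0.7
k = baseline_obs_mean / mean_h                             # 0.7

# "Independent" validation on a later period never used in tuning.
later_humidity = [0.65, 0.75]
later_pred_mean = sum(cloud_model(h, k) for h in later_humidity) / len(later_humidity)
print(round(k, 3), round(later_pred_mean, 3))
```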

Comment Re:Climate models are even more wrong? (Score 1) 676

Chaos isn't the problem being discussed in the article, nor is it a problem for long-term climate predictions. The problem with both the geophysical models discussed in the article, and climate models, is that historical data aren't sufficient to eliminate the uncertainty in model parameters.

In extreme cases (non-identifiable models), you lose all predictive skill. In milder cases, you simply get wide uncertainties. For example, in climate models the parameter identifiability problems mean that the climate sensitivity (predicted response to doubling atmospheric CO2) is uncertain by a factor of 2 (the "canonical" range of this parameter has been 3 +/- 1.5 degrees C). This uncertainty will go down over time, as more observations are made, but to some extent it is an unavoidable limitation of finite data.
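A toy illustration of that non-identifiability (hypothetical numbers, two-parameter linear "model"): a high-sensitivity/strong-aerosol-cooling combination and a low-sensitivity/weak-aerosol combination reproduce the same historical warming, but diverge once the forcings change.

```python
# Toy linear "climate": warming = sensitivity * (ghg - aerosol_cooling * aerosols).
# All numbers are invented; only the identifiability point matters.

def warming(sens, aer_cooling, ghg_forcing, aerosol_level):
    return sens * (ghg_forcing - aer_cooling * aerosol_level)

# Two very different parameter pairs...
high = dict(sens=1.5, aer_cooling=0.8)   # sensitive, strong aerosol cooling
low  = dict(sens=1.0, aer_cooling=0.2)   # insensitive, weak aerosol cooling

# ...fit the "historical record" (ghg=2.0, aerosols=1.0) identically:
hist_high = warming(ghg_forcing=2.0, aerosol_level=1.0, **high)   # ~1.8
hist_low  = warming(ghg_forcing=2.0, aerosol_level=1.0, **low)    # ~1.8

# ...but diverge in a "future" with more GHGs and cleaned-up aerosols:
fut_high = warming(ghg_forcing=4.0, aerosol_level=0.0, **high)    # ~6.0
fut_low  = warming(ghg_forcing=4.0, aerosol_level=0.0, **low)     # ~4.0
print(hist_high, hist_low, fut_high, fut_low)
```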

Comment Re:Wow (Score 1) 676

I really don't see anything insightful in the article; it looks a lot like circular reasoning - that models built to fit events X Y and Z will fit X Y and Z well. This is fine and dandy till A B and C come along.

The point of the article is commonly known, but slightly subtler than your interpretation: events X, Y, and Z may be uninformative about A, B, and C even if the model is perfect and could make perfect predictions given the right inputs. The problem is that X, Y, and Z don't let you infer those inputs.

Comment Re:Wow (Score 1) 676

So small changes in inputs can produce big, unpredictable changes in the output of complex systems?

The article is actually about the exact opposite: when big changes in inputs produce similar outputs (and therefore you can't use the output to infer what the inputs were).

Comment Re:Nothing to do with chaos theory (Score 1) 676

Yes, situation (ii) is the one I tend to encounter in practice. It may also be the situation that TFA is describing, which could be this paper. In that study, they find a very multimodal objective function (analogous to the log-likelihood function). But whether a likelihood is multimodal is a function of the data. It can happen that if you accumulate enough data, the likelihood concentrates about one of its former modes. But in practice, that could require far more data than are available for training and validation.
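As a concrete (and entirely synthetic) illustration of a multimodal objective: least-squares fitting of a sinusoid's frequency gives a classic multimodal error surface, with the global minimum at the true frequency surrounded by spurious local minima. This is the generic phenomenon, not the paper's actual model.

```python
import numpy as np

# Synthetic multimodal objective: least-squares fit of a sinusoid's frequency.
t = np.linspace(0.0, 10.0, 200)
y = np.sin(2.0 * t)                     # noise-free "data", true frequency 2.0

omegas = np.arange(0.5, 4.0, 0.01)      # grid of candidate frequencies
sse = np.array([np.sum((y - np.sin(w * t)) ** 2) for w in omegas])

# Count interior local minima of the objective along the grid.
local_minima = [i for i in range(1, len(sse) - 1)
                if sse[i] < sse[i - 1] and sse[i] < sse[i + 1]]

best = omegas[np.argmin(sse)]
print(len(local_minima), best)          # several local minima; best ~ 2.0
```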
