Comment Re:Numbers game. (Score 1) 365

No. Here is a definition of unthinkable:

Too unlikely or undesirable to be considered a possibility

Therefore the word doesn't necessarily say anything about how much thought has been put into a contingency. Next time you feel the urge to write a pedantic nitpick post that adds nothing to the discussion, you ought to get your facts straight first.

To avoid being found guilty of exactly what I just accused you of: both GP and GPP are wrong. 'X is a cliche excuse for Y' is itself a cliche excuse for not confronting the truth that the world is a complicated place with few right answers and even fewer absolute truths.

Comment Re:Close your blog. Start a Journal. (Score 1) 353

Nice Strawman. The issue isn't whether the government can abridge the freedom of the press, but rather what activities ought to be protected under 'freedom of the press'. In other words, since the Constitution doesn't explicitly define what 'the press' is, there's room for discussion about whether new activities, such as blogging, ought to be included.

Comment Re:wow, a guy made a mistake (Score 1) 234

GP:

you pay money to the government in return for the privilege of living and working in your nation.

Parent:

You don't pay the government for the privilege of living, and reading that actually makes me a little sick to my stomach.

A beautifully subtle and clever troll. Who will notice if you omit a tiny preposition that sits a couple of words away from the verb it's linked to? It still deserves a Troll mod, though. Too bad my points ran out earlier today.

Comment Re:Seriously (Score 2) 363

You're giving advice to someone you've never met about what his girlfriend, whom you've also never met, will like to do on vacation.

This comment sums up Slashdot so well: Complete arrogance married to utter ignorance.

Comment Re:Good god... (Score 1) 676

Seeing how that data for those years has yet to be recorded, he cannot possibly compare the forecasted values to the actual values to test the accuracy.

Of course he can. His data comes from a model, so he can produce data for any 'years' he wants to.

It's entirely relevant to the accuracy of the model. You cannot use fictional data to try to understand the causes and variation in a real world variable.

The article doesn't address understanding the causes and variation of a real-world variable; it addresses the ability of a model to track the underlying model it was built from. That the underlying model was just another model rather than reality ought, if anything, to make the task easier.

I called out his "calibration" as bullshit.

Your work, as you've described it, is no different than what the article claims does not work.

Care to tell the class what you do for a living and your educational background?

A Master's in Computer Science--not that my credentials are relevant to the strength or weakness of my argument.

We've already established that I have reason (both educationally and career-wise) to know WTF I'm talking about

First, your education and career may indicate that you ought to know what you're talking about, but the arguments you put forth show that you don't.

Second, this is perhaps the crux of the matter addressed by the article. You, and I assume many other economists, think you know what you're doing, but you don't. You're practicing cargo cult science--going through the motions of statistical analysis without understanding what you're doing or why, and consequently producing garbage models that don't predict anything other than the 10 years of data you tested against.

Comment Re:Good god... (Score 2) 676

He had two models. The first model produced hypothetical historical data (analogously, the data from 1950 to 2010). He then created a second model, built on part of the 'historical data' (1950-1999) and tested on the remainder (2000-2010). He then used the first model to produce another segment of data (2011-2020), and found that the second model did not predict this 'new' data at all.

You and he are doing the same thing; the fact that your 'historical' data comes from reality and his comes from a model is irrelevant as far as the second model is concerned. The phenomenon described by the article is known as 'overfitting the training data' in Machine Learning circles, and is widely known in other fields, according to the comments. Perhaps if you pursued a degree in something a little less soft than 'Applied Economics' you might understand the math you're blindly applying.
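To make the two-model setup concrete, here's a toy sketch of my own (an illustration, not the article's actual models): 'model 1' is a noisy sine wave standing in for the data-generating process, and 'model 2' is a high-degree polynomial calibrated to the 1950-1999 slice. The polynomial matches its calibration data almost perfectly, then fails badly on the segment generated afterwards:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# 'Model 1': the underlying data-generating process ('years' 1950-2020).
years = np.arange(1950, 2021)
truth = np.sin((years - 1950) / 8.0) + 0.05 * rng.standard_normal(years.size)

train = years <= 1999   # calibration slice, 1950-1999
future = years >= 2011  # the 'new' segment, 2011-2020

# 'Model 2': a degree-20 polynomial calibrated to the historical slice.
# (Polynomial.fit rescales the x-range internally for numerical stability.)
model2 = Polynomial.fit(years[train], truth[train], deg=20)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("train RMSE :", rmse(model2(years[train]), truth[train]))    # small
print("future RMSE:", rmse(model2(years[future]), truth[future]))  # huge
```

The degree-20 fit has enough free parameters to chase the noise in the calibration slice--which is exactly the overfitting the article describes--so once it is extrapolated past the fitted range, its error explodes.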

Comment Re:Good god... (Score 1) 676

If you've done a good job collecting a large enough data set and including the necessary variables, you'll have some pretty damn good predictions for the first part of your time series.

Did you even read the article? Allow me to quote the relevant portion:

The problem, of course, is that while these different versions of the model might all match the historical data, they would in general generate different predictions going forward

Regardless of your spurious claims about 'not calibrating models', the point remains that in any complex system, multiple different models can be found that fit the small slice of time you use for accuracy testing without fitting the unseen future data.
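Here's a minimal sketch of the quoted point (my own illustration, not from the article): two polynomial models of different degree both 'calibrate' well against the same historical slice, yet generate wildly different predictions going forward:

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)

# Synthetic 'historical' series: a noisy sine wave over 1950-2020.
years = np.arange(1950, 2021)
truth = np.sin((years - 1950) / 8.0) + 0.05 * rng.standard_normal(years.size)

train = years < 2000    # the slice both models are calibrated against
future = years >= 2000  # 'going forward'

# Two candidate models with different functional complexity.
low = Polynomial.fit(years[train], truth[train], deg=8)
high = Polynomial.fit(years[train], truth[train], deg=16)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Both match the historical data...
print(rmse(low(years[train]), truth[train]))   # small
print(rmse(high(years[train]), truth[train]))  # small
# ...but their forward predictions diverge sharply.
print(rmse(low(years[future]), high(years[future])))  # large
```

In-sample agreement tells you almost nothing about which model, if either, will track the system outside the calibration window--which is why a good fit to a 10-year test slice is such weak evidence.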
