>If you want to talk about science, then show me a tested climate model that has been subjected to an empirical test of its validity. It isn't that hard guys. We have a lot of very accurate historical data. Feed in past climate data and see if your climate model can predict the past or the present accurately. The first model that can do that which isn't just a collection of plug variables is something worth taking seriously.
What? No.
You have it completely backwards. All serious models are trained on and tested using historical data. If they can't even predict the past, what use are they?
But - here's the key point - predicting the past is *worthless* other than as a sanity check. As Garrison Cottrell told me, predicting the past is easy (even trivial). It's predicting the future that is hard.
The only way to really know for sure if a model works is to test it moving forward. And the IPCC doesn't have a great track record at that.
>STL sucks, I still have to do single character input and output from files, so much for getline BS
Gah, I/O in C++ is so horrible. In just the last month I've come across the following:
1) No platform independent way to do non-blocking I/O.
2) No iostream-compatible way of doing dup() or dup2(). You can change the buffers on iostreams, but this is not the same thing.
3) Just how shitty iostreams are at processing input files in a fault-tolerant manner. On any major project, I always seem to just drop down to reading files one character at a time.
>Also, some problems can't be done in parallel, but we won't know how many can until we start trying....and then try for a few decades.
Right, but there's also a grey area between completely serial and embarrassingly parallel, in which methods like this will allow scaling algorithms up from "a few" computation nodes to "many", with the optimal numbers depending on the specific algorithms.
The biggest problems are still the same ones that existed when I got my Master's over a decade ago. Language support for parallelism isn't very good (I personally used MPI, which was awkwardly bolted on top of C++), it requires a certain amount of specialized knowledge to write parallel code that doesn't break or deadlock your machine (and writing optimized code is a bit more advanced than that), and library calls aren't all threadsafe. On the plus side, a lot of frameworks and libraries are now multithreaded by default, which nicely isolates the problems of parallel computing away from people who haven't been trained in it, and gives the benefits of parallel computing with only the downside of having to use a framework. =)
Ungar's idea (http://highscalability.com/blog/2012/3/6/ask-for-forgiveness-programming-or-how-well-program-1000-cor.html) is a good one, but it's also not new. My Master's is in CS/high performance computing, and I wrote about it back around the turn of the millennium. It's often much better to have asymptotically or probabilistically correct code rather than perfectly correct code when perfectly correct code requires barriers or other synchronizing mechanisms, which are the bane of all things parallel.
In a lot of solvers that iterate over a massive array, only small changes are made at one time. So what if you execute out of turn and update your temperature field before a -0.001 C change comes in from a neighboring node? You're going to be close anyway, and the next few iterations will smooth out those errors. You'll get far more work done, in a far more scalable fashion, than if you maintain rigor where it is not strictly needed.
>The 5 largest movie theater chains refused to show the movie out of fear, not Sony. Why can't anyone understand this?
Here's a response from the owner of a small cinema, one George R. R. Martin:
grrm.livejournal.com/397388.html
"I've seen it. It's rubbish." -- Marvin the Paranoid Android