The real problem lies in the input data fed to these models. As TFA says, they calibrated the correlation to the value that would reproduce the observed market prices of CDSs.
This is fairly common in the financial business: you assume that "the market" already takes all known facts into account when setting those prices, so you can calibrate a single parameter (the correlation, in this case) to stand in for all that assumed knowledge. If your model doesn't explain all the data, you simply add more parameters.
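That calibration step amounts to one-dimensional root finding: invert the pricing model to find the parameter value that reproduces the quoted price. A minimal sketch, where the pricing function and all numbers are made-up stand-ins (not the actual Gaussian copula):

```python
# Toy "implied parameter" calibration. price() is a hypothetical pricing
# model, assumed monotone in its correlation parameter rho on [0, 1].

def price(rho):
    # Stand-in model: strictly increasing in rho, so bisection works.
    return 100.0 * (0.2 + 0.6 * rho)

def implied_rho(market_price, lo=0.0, hi=1.0, tol=1e-9):
    # Bisection: assumes market_price is bracketed by price(lo)..price(hi).
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if price(mid) < market_price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

rho = implied_rho(50.0)  # the "implied correlation" for a quote of 50.0
```

Note that nothing here checks whether rho corresponds to anything in the real world; it is whatever number makes the model agree with the quote.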
I see two problems with this.
The first is that "the market" often doesn't actually know the future, so your calibrated parameters reflect a collective subjective opinion rather than any tangible reality.
The second is that, as markets mature, many players end up using the same models. This not only creates a single failure mode for the whole market, but can also produce a free-floating (unstable) feedback system when no tangible (real-world) inputs are at hand: you price things with model M using parameters that were themselves deduced from prices via the same model M. When everyone does this without looking at the surrounding reality at all, there is no "collective market knowledge" left, just a bunch of lemmings running towards the cliff edge.
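The instability can be illustrated with a toy fixed-point iteration; the dynamics and gain values below are assumptions for illustration, not a real market model:

```python
# If the next round of prices is marked to a model that was itself
# calibrated to the last round of prices, the loop has a gain. With some
# external anchor (gain < 1) shocks die out; pure model-on-model feedback
# (gain > 1) amplifies them.

def step(price, gain):
    fair_value = 100.0  # hypothetical "real world" value
    return fair_value + gain * (price - fair_value)

def simulate(gain, shock=0.01, steps=50):
    p = 100.0 + shock
    for _ in range(steps):
        p = step(p, gain)
    return abs(p - 100.0)  # how far prices drift from fair value

anchored = simulate(gain=0.9)    # real-world inputs damp the shock
unanchored = simulate(gain=1.1)  # feedback alone amplifies it ~100x
```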
At my work, we maintain several Windows clusters for financial derivatives valuation.
We can't really move all of them to Linux (no matter how much we would like to), because some of the calculations were implemented using MS-only technologies, like ActiveX (yes, you read that right) and
When we recently needed to upgrade the Windows clusters, we had to move from 2-CPU-1-core to 2-CPU-4-core machines, since that was what was being sold. What we've observed is that Windows (Server 2003) is unable to share CPU time fairly when there are more active threads than available cores. Because of this, we see a lot of variance in the overall calculation time when the clusters are heavily loaded.
The same tests run on SLES 9 (yes, 9) Linux clusters with similar hardware did not suffer from this problem: CPU time was divided equally among all threads. And that is with a four-year-old kernel that doesn't even have the newer Completely Fair Scheduler.
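A minimal sketch of how that variance could be quantified, assuming you log per-thread CPU time for one batch of identical tasks; the numbers below are hypothetical, chosen to mimic the fair and unfair cases:

```python
# Coefficient of variation of per-thread CPU times for identical tasks:
# near zero means the scheduler shared CPU time fairly, large means some
# threads were starved while others ran ahead.

def fairness_cv(cpu_times):
    n = len(cpu_times)
    mean = sum(cpu_times) / n
    var = sum((t - mean) ** 2 for t in cpu_times) / n
    return (var ** 0.5) / mean

# Hypothetical CPU seconds for 8 identical tasks, 8 worker threads:
fair = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]    # Linux-like
unfair = [14.0, 6.5, 12.8, 7.1, 13.5, 6.9, 12.2, 7.0]   # 2003-like
```

Since the batch only finishes when the slowest thread does, a high CV translates directly into the wall-clock variance described above.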