This would be an interesting explanation of the economic success of Switzerland if it didn't contradict the facts, such as: 1. Switzerland is *not* surrounded by EFTA members (in fact, Switzerland is surrounded exclusively by countries that are not members of EFTA) and 2. Switzerland *is* one of the four members of EFTA. Actually, EFTA was established as an alternative to the EEC, which has since become the EU. See Wikipedia: http://en.wikipedia.org/wiki/European_Free_Trade_Association.
Also, Switzerland, although not a member of the EU, has strong ties to the EU. For example, it is a member of the Schengen area, thus there are no border controls between Switzerland and neighboring (EU-member) countries. While proposals to formally join the EU have been rejected by Swiss voters, most of EU law has been adopted by Switzerland through bilateral treaties, thus Switzerland behaves like an EEA member for most practical purposes.
I think you will have to look for other reasons to explain the outstanding performance of the Swiss economy. By the way, the GDP per capita (PPP) of Switzerland is $44,864 according to the IMF. Countries with GDP per capita (PPP) over $40,000 include Austria, the Netherlands, Ireland and Sweden, all EU members. So it doesn't seem to be the case that "the GDP there is double anywhere else in the rest of Europe". (Actually Norway, a non-EU EEA member, has higher GDP per capita than Switzerland.)
Yes, yes, yes, all the things that happened here are so incredibly unlikely to happen... but then again, the universe is incredibly large, and here the law of large numbers fits perfectly: NO matter how insignificantly unlikely something is, if there is ONE case where it is true and your sample size is (nearly) infinitely large, the chance to find another case is 1.
This is not what the law of large numbers states. I find it interesting that people cite the law of large numbers so often without knowing exactly what this theorem in probability theory is about.
There are actually two versions of the law of large numbers, a weak one and a strong one. Both state that if you have an (infinite) sequence of independent and identically distributed random variables with finite expected value, then the sample average (X1+X2+...+Xn)/n converges to the expected value. The difference is that according to the weak version the convergence is in probability, whereas the strong law states almost sure convergence.
I do not see how this theorem can be applied to the case you describe. You could argue that we could assign Bernoulli random variables to planets: 1 if they have a moon they don't "deserve" (or if they are habitable, etc.) and 0 if they don't. But obviously these are not independent and identically distributed random variables, so the LLN cannot be applied.
For more explanation, see http://en.wikipedia.org/wiki/Law_of_large_numbers
This is one of the reasons that GCC sucks compared to ICC and VC++.
Let me give you the facts as they are today. In isolation, both the shift instructions and the multiply instructions have the same latency and throughput, and are also performed on the same execution units.
If this were the entire story, then they would be equal. But it's not the entire story.
The shift instructions only modify some of the flags in the flags register. Essentially, the shift instructions must do a read/modify/write on the flags. The multiplication instructions, however, alter the entire flags register, so only perform a write.
"But Rockoon.. they are the same latency anyways, right?"
Essentially, one of the inputs to the shift instruction is the flags register, so all prior operations that modify the flags register must complete first, and no instruction following the shift that also partially modifies the flags register can complete until that shift is done.
In some code it won't make any discernible difference, but in other code it will make a big difference.
As far as that GCC compiler output goes... that code is horrible, and not just because it's AT&T syntax.
There are two alternatives for multiplying by 4 that should be in competition here, and neither uses a shift.
One is a straight multiplication (MASM syntax, CDECL):
mov edx, [esp + 4] ; 32-bit version, so +4 skips the return address
imul eax, edx, 4
The other is leveraging the LEA instruction (MASM syntax, CDECL):
mov eax, [esp + 4] ; 32-bit version, so +4 skips the return address
lea eax, [eax * 4]
The alternative LEA version is slower in isolation on some processors (the P4, for instance).
GCC is great at folding constants and such, and even calculates constant loops at compile time... but it's a big-time fail at code generation. GCC is one of the compilers that one optimization expert struggled with because he was trying to turn a series of shifts and adds into a single, far more efficient multiplication... the compiler converted it back into a series of shifts and adds on him. Fucking fail.
You guys are so geographically challenged, I cannot believe it!
Swiss is a country, located between France, Italy and Germany, where chicks are definitely not hot (unless you are into Big Berthas).
(emphasis mine) Let me guess: do you live in American or English, Mr. Geography Expert?