Did he factor in the costs of the reduced IO performance?
What's more ridiculous is that you insist on using terms that contain the word "fuck" and then use the substitution "fsck". If "fuck" is too impolite for you to use, why not express yourself differently?
I will never forgive the SyFy channel for perverting the spelling of "Sci-Fi".
Not to mention killing off Stargate... or any decent show for that matter. We're now stuck with rubbish like Eureka.
Maybe they've done some surveys and decided that their target audience should actually be a bunch of retards.
Because HIV infected individuals have a large glowing neon sign attached to their foreheads saying "I HAVE HIV!"
How does polarization help?
Even the phase relationship doesn't really help...
Don't you need to measure the time difference between the "first" neutrinos arriving and the first light arriving?
How could you chronologically differentiate all subsequent neutrinos and light?
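(I suppose one statistical way, sketched below in Python as a toy of the general idea rather than any experiment's actual method: don't pair up individual particles at all; bin both arrival-time streams and find the lag that best lines up their burst profiles.)

    import math
    import random

    random.seed(1)

    # Toy data: "light" and "neutrino" events drawn from the same burst
    # profile, with the neutrino stream offset by a true lag of 3.0 s.
    # The profile, counts, and lag are all made up for illustration.
    TRUE_LAG = 3.0
    light = [random.gauss(10.0, 0.5) for _ in range(5000)]
    neutrinos = [random.gauss(10.0 + TRUE_LAG, 0.5) for _ in range(1000)]

    def binned(times, t0, t1, n_bins):
        # Histogram arrival times onto a fixed time grid.
        counts = [0] * n_bins
        width = (t1 - t0) / n_bins
        for t in times:
            i = math.floor((t - t0) / width)
            if 0 <= i < n_bins:
                counts[i] += 1
        return counts

    T0, T1, NBINS = 0.0, 20.0, 200        # 0.1 s bins
    a = binned(light, T0, T1, NBINS)
    b = binned(neutrinos, T0, T1, NBINS)
    bin_w = (T1 - T0) / NBINS

    def xcorr(a, b, shift):
        # Correlation of a[i] with b[i + shift].
        if shift >= 0:
            return sum(x * y for x, y in zip(a, b[shift:]))
        return sum(x * y for x, y in zip(a[-shift:], b))

    # A positive best shift means the neutrino stream arrived after the light.
    best = max(range(-60, 61), key=lambda s: xcorr(a, b, s))
    print(f"estimated lag: {best * bin_w:.1f} s (true lag: {TRUE_LAG} s)")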
The comparison is based on one FPGA vs. one CPU core of an Intel Xeon E5430 at 2.66 GHz.
More details:
http://www.xilinx.com/publications/archives/xcell/issue74/FPGAs-speed-computation-complex-credit-derivatives.pdf
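For a sense of the workload being accelerated: the paper is about Monte Carlo pricing of credit derivatives, which boils down to an enormous, embarrassingly parallel inner loop. A minimal sketch of that kind of loop (a one-factor Gaussian copula for a CDO tranche, in Python for readability; every parameter value here is illustrative, not taken from the paper):

    import math
    import random
    from statistics import NormalDist

    def tranche_expected_loss(n_names=100, default_prob=0.02, correlation=0.3,
                              attach=0.03, detach=0.07, recovery=0.4,
                              n_paths=20_000, seed=42):
        rng = random.Random(seed)
        threshold = NormalDist().inv_cdf(default_prob)  # copula default threshold
        beta = math.sqrt(correlation)
        resid = math.sqrt(1.0 - correlation)
        loss_per_name = (1.0 - recovery) / n_names
        width = detach - attach
        total = 0.0
        for _ in range(n_paths):
            m = rng.gauss(0.0, 1.0)                     # common market factor
            loss = 0.0
            for _ in range(n_names):                    # the hot inner loop an
                z = beta * m + resid * rng.gauss(0.0, 1.0)  # FPGA would pipeline
                if z < threshold:                       # this name defaults
                    loss += loss_per_name
            # Loss absorbed by the [attach, detach] tranche on this path.
            total += min(max(loss - attach, 0.0), width)
        return total / (n_paths * width)                # expected fractional loss

    print(f"expected tranche loss: {tranche_expected_loss():.2%}")

It's that kind of tight, fixed-structure inner loop that maps so well onto a deep hardware pipeline.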
I think you're underestimating the bank.
The cost of this solution might have been low enough that the immediate performance gains justified it.
The lock-in you describe might not exist, as the algorithms and the accelerated bits are a small portion of the entire code-base (but take 99% of the run-time).
It is very likely that the cost of not going with this solution is far greater than the cost of going with it.
GPUs are much more power-hungry than FPGAs and provide a fraction of the performance.
At the end of the day, GPUs are designed for gaming machines... the whole GPGPU thing is a sideshow for the graphics market. The GPU is just not optimized in any way for this sort of computation. There's little money to be made building supercomputers compared to selling gaming machines.
An FPGA, however, can be completely customized to suit your exact needs, so you make efficient use of the entire chip. It won't be a mere coincidence (as in the GPU case) that the chip happens to be usable for a computation you need: the FPGA is customized directly to fit the algorithm. This efficiency is where the speed gains are made.
It seems people put a lot of effort into making their software compatible with GPUs and changing their algorithms to fit the GPU model... this is a distorted view of reality: it is the computer that can and should change to suit the problem, not the other way around.
One Virtex-6 SX475T could give you about 1 billion SHA-256 hashes/second clocked at 200 MHz. It will use about 20% of the power of the ATI GPU, but will cost about 4 times as much.
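A quick sanity check on those figures (in Python; the only inputs are the numbers claimed above, and since the GPU's hash rate isn't stated, the FPGA-to-GPU throughput ratio k is left as a free variable):

    # Back-of-the-envelope check on the figures above: 1 GH/s at 200 MHz,
    # ~20% of the GPU's power, ~4x its cost. Everything derived is division.
    fpga_hash_rate = 1e9                  # SHA-256 hashes/second (claimed)
    fpga_clock_hz = 200e6                 # 200 MHz (claimed)

    hashes_per_cycle = fpga_hash_rate / fpga_clock_hz
    print(f"hashes per clock: {hashes_per_cycle:.0f}")
    # -> 5: consistent with roughly five fully pipelined SHA-256 cores, each
    #    retiring one hash per cycle once its pipeline fills.

    power_ratio = 0.20                    # FPGA power / GPU power (claimed)
    cost_ratio = 4.0                      # FPGA cost / GPU cost (claimed)
    # If the FPGA also out-hashes the GPU by a factor k (not stated above,
    # so treated as a free variable), then:
    for k in (1.0, 2.0, 5.0):
        print(f"k={k:.0f}: perf/watt advantage {k / power_ratio:.0f}x, "
              f"perf/dollar advantage {k / cost_ratio:.2f}x")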
Check out: http://www.maxeler.com/
They've been getting some pretty crazy results. If I understand correctly, they've got a completely innovative workflow, tool-chain, and abstraction. I think they've even created their own simulation tools that give you cycle-accurate results 1000x faster than ModelSim.
I think Google did the right thing: get the critical mass first, then use the leverage.
see:
http://www.engadget.com/2011/03/31/google-tightening-control-of-android-insisting-licensees-abide/
You get the choice of buying Android from a different carrier/manufacturer.
With Apple, you're stuck with iTunes... no other option.
Since when is Apple a carrier?
Last time I checked, carriers make money through call tariffs, and barely anything from accessories or ad revenue.
Apple makes its money from selling hardware. Motorola, Samsung, HTC, etc. also make their money from selling hardware.
Some carriers see greater value in Android because they can fill it up with their rubbish content and try to make even more money beyond the call charges.
Thankfully, Android is open and you can just rip away all the garbage some carriers push into it.
If all else fails, lower your standards.