I doubt that.
TSMC 20nm will be ready for GPUs a lot sooner than their 16nm process. The only reason there are no 20nm GPUs yet is because the initial ramp was fully booked out by Apple.
Meanwhile, a comparison of Apple's 20nm A8 density versus 14nm Core M indicates that Intel's 14nm may not have as large a density advantage as they claim: https://www.semiwiki.com/forum...
I doubt that.
The explanation is quite simple: more beer!
If you buy the 16GB instead of the 64GB, you have an extra $100 left over to spend on beer!
The GTX 970 is as fast as AMD's flagship R9 290X, much more power efficient, and $170 cheaper. This means AMD will have to knock down prices by a huge amount, and they are sort of depending on graphics revenue to break even because of falling CPU market share.
That's more expensive than the total cost of many tablets.
Previously, Nvidia said that it would license its Kepler GPU cores to third parties. Semiaccurate maintains that this licensing program was in fact bogus and was conceived purely to justify future patent trolling. Semiaccurate also claims that
Nvidia tried to "shake down" Apple with the same patents, and that Apple subsequently gave the Mac Pro GPU contract to AMD as punishment.
I must say ISIS took a turn that no one was expecting: after much success as a post-metal band and releasing 4 albums, they decided to re-emerge as an Islamic terrorist group in Iraq.
If it were intuitive, the basic commands would at least be accessible from the menu. For instance, cutting a track can only be done via a secret shortcut.
Blender is good for video editing, but there's no way on earth you could call it intuitive. The quirky UI has a steep learning curve.
Yeah, there are a lot of garbage apps, for instance apps that just display a single JPEG, and repackaged game-engine demos. These never get rejected. It's pretty obvious that Apple is using an automated app approval system.
It has much lower power consumption than the R9 280, though. It would be more interesting as a laptop version.
From the original paper www.cs.wisc.edu/vertical/papers/2013/hpca13-isa-power-struggles.pdf (which ExtremeTech does not link to):
Technology scaling and projections:
Since the i7 processor is 32nm and the Cortex-A8 is 65nm, we use technology node characteristics from the 2007 ITRS tables to normalize to the 45nm technology node in two results where we factor out technology; we do not account for device type (LOP, HP, LSTP). For our 45nm projections, the A8's power is scaled by 0.8× and the i7's power by 1.3×. In some results, we scale frequency to 1 GHz, accounting for DVFS impact on voltage using the mappings disclosed for Intel SCC. When frequency scaling, we assume that 20% of the i7's power is static and does not scale with frequency; all other cores are assumed to have negligible static power. When frequency scaling, A8's power is scaled by 1.2×, Atom's power by 0.8×, and i7's power by 0.6×. We acknowledge that this scaling introduces some error to our technology-scaled power comparison, but feel it is a reasonable strategy and doesn't affect our primary findings (see Table 4).
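The normalization the authors describe is simple arithmetic. Here's a minimal Python sketch of how I read the quote; only the 0.8×/1.3× node factors and the 1.2×/0.8×/0.6× frequency factors come from the paper, while the example wattage and the order of composing the two scalings are my assumptions:

```python
# Sketch of the paper's power normalization as I read the quote above.
# The scaling factors are the ones the authors give; the measured
# wattage below is a placeholder, not a number from the paper.

TECH_45NM = {"A8": 0.8, "i7": 1.3}                # normalize to a 45nm node
FREQ_1GHZ = {"A8": 1.2, "Atom": 0.8, "i7": 0.6}   # DVFS scaling to 1 GHz

def normalized_power(core: str, measured_w: float) -> float:
    """Apply node normalization first, then frequency scaling.

    The quoted 0.6x factor for the i7 presumably already folds in the
    20%-static-power assumption, so static power is not modeled
    separately here.
    """
    w = measured_w * TECH_45NM.get(core, 1.0)     # no node factor quoted for Atom
    return w * FREQ_1GHZ.get(core, 1.0)

# e.g. an i7 core measured at a hypothetical 20 W: 20 * 1.3 * 0.6 ≈ 15.6 W
print(normalized_power("i7", 20.0))
```

Under this reading, the "ISA power struggle" comparison is dominated by these two fixed multipliers, which is exactly the error source the authors acknowledge.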
If you look at the graph "raw average energy normalised", you see that the ARM A9 core has the lowest energy score. That clearly shows ARM being the most efficient, and hence the conclusion is completely wrong.
Still, the test is very interesting. I would like to see it updated with the latest CPUs.
CPU performance is worse than Haswell and the IGP is better - that's what I said. Motherboards are about the same price.
They don't anymore. Kaveri is about the same price as a Haswell Core i3/Pentium, but with more power draw and less performance. Where AMD wins is the IGP, which has always been better.
I'm glad to see AMD is using their development budget wisely and not wasting it on other stuff, like making their x86 cores competitive with Intel's.