Intel's 10nm Cannon Lake CPUs Won't Arrive in Mass Quantities Until 2019, Company Says (pcgamer.com) 116
Intel said this week that it is once again delaying the mass production of its 10-nanometer "Cannon Lake" chips. The company insists that it is already building the chips in low volumes, but said it "now expects 10-nanometer volume production to shift to 2019 [rather than the end of 2018]." From a report: Intel is on solid footing, in other words, though pesky challenges remain in manufacturing its next-generation 10nm parts. CEO Brian Krzanich acknowledged as much during an earnings call, attributing the delay to difficulties in getting 10nm yields to where they need to be. So rather than push to ship 10nm in volume this year, Intel is giving itself some additional time to sort things out.
Re: Thanks but no thanks, Intel (Score:2, Interesting)
You can't VMotion running VMs between Intel and AMD ESXi hosts. So it's not like I can just drop an AMD server into the cluster even if I wanted to. So, I'm kinda stuck with staying with Intel.
Re: (Score:1, Insightful)
Isn't it interesting how the solutions relying on the Microsoft ecosystem all seem to lock you into Intel hardware too, when none of the other virtualization technologies seem to have that problem. I wonder if you've learned your lesson yet.
Re: Thanks but no thanks, Intel (Score:1)
ESXi is VMWare not MICROSOFT.
Re: (Score:2)
https://partnerweb.vmware.com/programs/guestOS/guest-os-customization-matrix.pdf
Re: (Score:1)
What the FUCK about VMware requires Windows??????????
Comment removed (Score:5, Informative)
Re: (Score:1)
Mod this fella up, +1 Informative. Wish I had mod points.
Re: (Score:3)
This isn't a Microsoft-specific issue. If a process running on Windows or Linux is using platform-specific processor instructions (which it may have dynamically queried support for at process startup), how is that process supposed to keep running on another platform where those instructions don't exist?
Re: (Score:1)
VMware ESXi isn't part of the Microsoft ecosystem. It isn't based on Microsoft software.
I hope you learned your lesson - keep your mouth shut when adults are talking.
Re: (Score:2)
Either you are an idiot, a troll, or someone incapable of reading simple text. What I wrote was:
VMware ESXi isn't part of the Microsoft ecosystem.
It isn't based on Microsoft software.
Which means it has nothing to do with Microsoft _WHICH_WAS_THE_FREAKING_POINT_. It was a _CORRECTION_ of the post I responded to.
Suppose I shouldn't complain. Someone moderated me troll for the same post. Correcting an idiot poster by stating facts is now trolling...
Re: (Score:2)
You can if you fuck around with unsupported shit. I wouldn't trust it to work, however.
Of course, other hypervisors have no issue with this.
You're only stuck with Intel if you cannot afford brief downtime of a single VM. If you cannot afford that, you're in a precarious place regardless.
Re: (Score:3)
Xen is the only other one I know about:
https://support.citrix.com/article/CTX115813
Q: Does XenMotion support live relocation of virtual machines between Intel-based and AMD-based host systems?
A: No, XenMotion supports live relocation of virtual machines between systems with the same type and manufacturer of processor.
Re: (Score:3)
KVM does it.
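(A sketch of how this typically works with libvirt-managed KVM guests; the qemu64 baseline model here is one common choice, not the only one.) Pinning the guest to a lowest-common-denominator vCPU model keeps it from depending on Intel- or AMD-only features, which is what makes cross-vendor live migration feasible:

```xml
<!-- libvirt domain XML fragment: expose a vendor-neutral baseline CPU
     to the guest so live migration between Intel and AMD hosts doesn't
     trip over vendor-specific CPUID features. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>qemu64</model>
</cpu>
```

The trade-off is that the guest loses newer ISA extensions (AVX and friends), since anything a workload probes via CPUID at startup has to exist on every host it might land on.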
Re: (Score:2)
That's cool, I use KVM quite a lot but I didn't know they supported that!
Re: (Score:2)
Re: Thanks but no thanks, Intel (Score:2)
"You can't VMotion running VMs between Intel and AMD ESXi hosts. So it's not like I can just drop an AMD server into the cluster even if I wanted too."
Well, yes, you would need to create a separate cluster (managed by the same vCenter server), and shut VMs down to migrate between clusters. Oh the horror.
"So, I'm kinda stuck with staying with Intel."
If you can't afford rebooting VMs one at a time, you have bigger problems.
Intel in full damage control mode. (Score:2)
"No really guys. Don't buy that AMD chip yet. We promise that the next-gen chip we're making that will be so much faster than theirs really exists! We only need about 4 more quarters worth of earnings to prove it..."
Re: (Score:2)
Re:Intel in full damage control mode. (Score:5, Informative)
. I'm looking forward to trying out AMD for the first time,
You won't be disappointed with AMD this go around. Take a look at the specs for the new 2700X.
https://www.cpubenchmark.net/c... [cpubenchmark.net]
That is a $329 chip and has better performance than the closest Intel chip in that class, the 8700K, which is also $30 more. Sure, there are more powerful Intel chips, but those are in the $1000+ range.
You can also find AMD chips in that range too, but if you are going to do a bitch'n build and not break the bank, the 2700X seems to be the way to go.
Re: (Score:2)
if you are going to do a bitch'n build and not break the bank the 2700X seems to be the way to go.
If quiet is your thing, the 2700 is also a great choice. 65 watt envelope and even better value.
Re: (Score:1)
(Pro tip: try it with Linux!)
Re: (Score:3)
(Pro tip: try it with Linux!)
Okay. Let's see: I've been running Linux exclusively on AMD since the '486DX-133. Since that time I have run some variation of Linux on every class of AMD processor. I have been running CentOS on an AMD FX-8350 since 2012. In all that time I have had exactly zero issues with Linux and AMD processors.
Re:Intel in full damage control mode. (Score:4, Informative)
There was the segfault issue with early Ryzen production, requiring an RMA. Inconvenient, but AMD handled it with good style. Then there was the soft-lockup-at-idle issue, apparently resolved by the new "typical power" option in recent BIOS updates. Otherwise, Ryzen has been really sweet, including for virtualization. It is fair to say that the Ryzen introduction was a little bumpy, but the overall experience is so positive (massive parallel throughput, decent single-core, great power efficiency) that the user community is happy to cut AMD some slack. It's a bit early to say, but I think my Ryzen systems are now in that "golden uptime" zen state. I certainly had that with my Piledriver + Radeon system: uptime measured in months, typically limited only by something like a power outage or a kernel update.
Windows users never noticed either of the above Ryzen issues; it's not clear why. Maybe they just never put their systems under enough load to trigger the segfault, or it's hard to distinguish those segfaults from normal life in Windows land anyway. For the idle power issue, maybe AMD quietly supplied a fix to Microsoft months ago, ahead of users noticing it. Don't know. But it's water under the bridge now; I see no compelling argument to build an Intel box now or in the foreseeable future. With the Ryzen 12nm refresh already landed and 7nm parts scheduled to sample around the same time as Intel's roughly equivalent 10nm parts, it's clear that Intel has lost its process edge to TSMC and GloFo.
Of course the real elephant in the room at the moment is Meltdown. Intel does not have a credible answer, while AMD just designed their parts right in the first place. For the moment, if you want a system that is not just one gaping security hole, plus performs decently, AMD is the only game in town.
Re: (Score:3)
Thanks for bringing me up to speed. I was sniffing around Ryzen to replace my aging FX-8350 in my Centos box. I'm thinking that a 2700X would fit the bill.
Re: (Score:2)
Really? Wow. That's interesting! Please detail 3 of these bizarre compatibility issues.
Re: (Score:2)
Re:Intel in full damage control mode. (Score:5, Informative)
First, these are Cannon Lake chips. Remember Cannon Lake, due late 2016? Delayed until late 2017? Delayed until late in the first half of 2018? Yup, that Cannon Lake. Among other things, Cannon Lake was scheduled to introduce LPDDR4, so would be the first Intel mobile chips that could manage 32GB of RAM without using a huge power budget. If you think it's bad for Intel now, wait for the Apple fanboys to notice...
Second, one of the features that people have been waiting for since it was originally announced in 2016 and was expected to debut with Cannon Lake is Intel's Control-flow Enforcement Technology. This comprises two parts. The first is a set of magic nops that indicate a valid branch target and protect forward control flow arcs (any jump that isn't to a designated landing pad will trap in code marked as supporting the feature). The second is a secure stack. Every call instruction pushes the return address onto the main stack, but also onto a second stack (which is not readable or writeable by normal instructions). Each ret instruction checks the top of both stacks and traps if they disagree. Sounds great? That's what everyone thought last year, but unfortunately it is incompatible with the retpoline Spectre mitigation that is now fairly widely deployed, so CET is now impossible to deploy in the presence of code using retpolines (e.g. Chrome) and so needs to be redesigned very late in the schedule or skipped entirely.
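The shadow-stack half is easy to illustrate with a toy model (a pure Python simulation with made-up addresses; real CET keeps the shadow stack in hardware-protected memory, which this obviously doesn't):

```python
# Toy model of a CET-style shadow stack: every "call" pushes the return
# address onto both the normal stack and a shadow stack, and every "ret"
# traps if the two disagree (e.g. after a stack-smashing overwrite).

class ControlFlowViolation(Exception):
    pass

class ToyCPU:
    def __init__(self):
        self.stack = []    # normal stack: attacker-writable
        self.shadow = []   # shadow stack: not writable by ordinary code

    def call(self, return_addr):
        self.stack.append(return_addr)
        self.shadow.append(return_addr)

    def ret(self):
        addr = self.stack.pop()
        expected = self.shadow.pop()
        if addr != expected:
            raise ControlFlowViolation(
                f"ret to {addr:#x}, shadow stack says {expected:#x}")
        return addr

cpu = ToyCPU()
cpu.call(0x401000)
print(hex(cpu.ret()))          # normal return: both stacks agree

cpu.call(0x401000)
cpu.stack[-1] = 0xdeadbeef     # simulate a ROP-style return overwrite
try:
    cpu.ret()
except ControlFlowViolation as e:
    print("trap:", e)
```

Retpoline is the awkward case here: it deliberately overwrites a return address on the stack (to trap speculation in a harmless loop), which is exactly the call/ret mismatch a shadow stack is designed to fault on.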
Re: (Score:2, Interesting)
Quad-core and desktop Cannon Lake were cancelled loooong ago, so now it's strictly a dual-core / quad-thread CPU for low power. Thus, while this LPDDR4 support is great news, it won't be suitable for the MacBook Pro. When Cannon Lake production ramps up, though, it's suitable for the Apple "MacBook".
Yes, I would like to see a high-end Apple netbook with 32GB RAM lol; after all, I always run out of RAM way more easily than I run out of CPU.
I bet the PCIe SSD has enough I/O to run slashdot with all its users.
Interesting will b
Re: (Score:2)
The 8th gen Kaby Lake Refresh processors support 32 GB. Business oriented ultrabooks with 8th gen processors often have 32 GB as an option. Laptops like Thinkpads, Dell Inspiron, HP EliteBook, etc. It's just an option on customized orders. No one's mass producing laptops with that much built in, as the demand isn't there.
Re: (Score:2)
Re: (Score:2)
I don't know of any coin with any traction that uses CPUs; Bitcoin, Litecoin, Dash, etc. are all ASIC-mined and the Ethereum clones use GPUs. None of these will affect the sale of CPUs.
Re: (Score:2)
I don't know of any coin with any traction that uses CPUs; Bitcoin, Litecoin, Dash, etc. are all ASIC-mined and the Ethereum clones use GPUs.
Monero is #12 at Coinmarketcap, and it remains mine-worthy on CPUs. GPUs might be marginally more efficient, but they are within the same order of magnitude.
I'm not exactly vouching for Monero, as it's heavy and slow to use, even compared to its Cryptonote siblings, but it seems to have a lot of backing from the big boys. Also recently, ASICs suitable for Monero were released, and the Monero team reacted in a whack-a-mole fashion by changing some parameters in the proof-of-work algo. So not very professi
Re: (Score:2)
Or maybe they're telling us that Apple has paid in advance for all the Cannon Lake CPUs Intel is able to produce right now?
Updated Mac mini and/or MacBook Air "soon"?
Re: (Score:1)
Apple is in the middle of a full transition to their own ARM chips. And I doubt there will ever be another Mac mini. It's too much like a real, upgradeable computer.
Re: (Score:2)
They took care of that with the 2014 "update".
Re: (Score:3)
Citation needed. People have been talking about Apple switching to ARM for at least 5 years.
Re: (Score:2, Insightful)
I have no citation yet (Apple would try not to leak until the press event), but the momentum keeps building. The newest iMac Pro has an A10 Fusion chip for some functions (the MacBook Pro runs the Touch Bar on ARM). Before, I think they were mostly driving that speculation themselves (and saving it as a backup plan) to keep Intel pricing under control. Now that AMD is more competitive, Intel will be looking to its major buyers to keep their profits up. Apple introduced bitcode to the OS X App Store in 2015,
Re: (Score:2)
In other words you got nothing.
Re: Fabbing 10nm hasn't been easy (Score:1)
A 7 year old article, nice.
Re: (Score:2)
A switch away from Intel will kill the performance users.
You'll lose them all to the Surface Books.
Re: (Score:2)
Perhaps you want to enlighten us: in which cases is an Intel CPU faster than an ARM?
Re: (Score:2)
The A11, in battery-powered devices that need to keep power requirements low and heat dissipation to a minimum, bests some of the lower-end Intel Core i3 desktop chips that are plugged into the wall with massive heatsinks and fans.
We have no idea what kind of A11-style CPUs Apple has in its labs. For all we know, they have "A20" CPUs that can rival quad-core Intel i7.
Re: (Score:2)
Re: (Score:2)
Apple hasn't released a true performance computer for a while - even their newest Mac Pro fails to keep up with the times.
The single-core Geekbench scores of the Xeon E5 in the current Mac Pro are not far off those of the A10X (which has a TDP of only 8W). If Apple did a 10-12 core desktop chip with their A11X, they really could have the performance there.
Re: (Score:2)
This line of thinking is oversimplified: one can't just add a lot of stuff and expect everything to scale.
The interconnect between cores in a low-power design with a few cores is different from that in a "manycore"* design intended for intermediate to reasonably high power (say 40-90W). To reduce communication latency one will burn more power; however, Apple may try to keep a simple design and use NUMA-type techniques in software to reduce longer-latency communications.
Memory controllers would consume more, the GPU
Re: (Score:2)
It was really to give an idea of performance scale, not literally using the same cores. They absolutely have been working on prototypes, even if just as insurance against Intel pricing. The threat of being able to change architectures at the drop of a hat has served them well so far. But an educated guess still makes it likely that they will switch to ARM soon. There have been a lot of outwardly visible pointers.
Is "sort things out" a euphemism? (Score:3)
Re: (Score:2)
Is "sort things out" a euphemism for trying to patch gaping security holes?
They have been sorting this platform out long before the gaping holes were first discovered.
Re: (Score:2)
Marketing speak for "damage control".
Re: (Score:3)
Disclosure: I work in the semiconductor industry
Re: (Score:2)
That, and trying to work EUV into the mix, really nasty stuff. Without EUV, multipatterning is a serious bottleneck.
Re: (Score:2)
Ah, and Intel seems to be way behind AMD in multi-die tech. Strategic blunder.
Alles Klar Kommissar (Score:3)
Re: (Score:3)
Re:Alles Klar Kommissar (Score:4, Insightful)
Re:Alles Klar Kommissar (Score:5, Interesting)
The best way to describe Spectre is that it's fundamental to how all out-of-order processors work. All out-of-order processors will suffer from Spectre.
Getting rid of Spectre would require a return to in-order execution at a MASSIVE performance penalty: more than a 50% and probably closer to a 75% drop in compute power. It's mitigatable, but there are going to be hundreds of edge cases found for years, so it's going to take a long time (years) and a lot of rewriting in the fundamental parts of OSes to negate Spectre-based attacks.
Spectre is fundamental to the design assumptions of all modern processors; as I like to say, it's the bug that's going to give and give and give. They probably won't have found most of the edge cases until after 2020, so we should expect yearly/quarterly patches for Spectre-like attacks for a long time.
One thing that's not mentioned in a lot of the articles: the timing-based attacks that comprise the Spectre attacks were discovered years ago. It took several years for someone to find and demonstrate the first version of these attacks, but most experts think this is just the beginning and that we're looking at years of these types of attacks on all aspects of operating systems and CPUs.
In other words, Spectre was just the first timing attack; there will be more, probably a lot more, now that there is an actual example of how to do them.
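To make the class of attack concrete, here's a toy Python simulation of the cache side channel at the heart of Spectre (no real speculation or timing; the cache, the victim, and the secret are all made up, and the point is only the information flow):

```python
# Toy cache side channel: a "speculatively" executed load leaves a
# footprint in the cache even though its architectural result is thrown
# away, and the attacker recovers the secret by checking which probe
# line is already warm.

CACHE = set()

def load(addr):
    hit = addr in CACHE   # True = fast (cached), False = slow (memory)
    CACHE.add(addr)
    return hit

def victim(secret):
    # Stands in for a bounds-check bypass: the branch was mispredicted,
    # the load ran anyway, and only the cache footprint survives.
    load(("probe", secret))

def attacker():
    victim(secret=42)     # trigger the speculative leak
    # Probe all 256 possible lines; only the one the victim touched
    # was already cached before we probed it.
    return [b for b in range(256) if load(("probe", b))]

leaked = attacker()
print(leaked)             # prints [42]
```

A real attack replaces the `hit` boolean with a timing measurement on real cache lines; everything else is the same shape.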
Re: (Score:2)
Getting rid of Spectre would require a return to in-order execution at a MASSIVE performance penalty: more than a 50% and probably closer to a 75% drop in compute power.
Probably some dumb questions, but anyway:
(1) Does out-of-order necessarily imply speculative execution?
(2) Is in-order really that bad, considering all the other advances in processor design?
(3) What about efficiency? If in-order means the CPU is doing less work in a given time, is it also consuming less power? I.e. is the 50%..75% reduction in absolute computing power, or also efficiency?
(2) is related to some anecdotal experience that hyperthreading works well on in-order Atom processors. In my u
Re: (Score:1)
Fair questions.
(1) No. For instance, if you multiply two registers and then load a constant (two operations that each have only one outcome), you can run them both in parallel and finish the second one first.
However, OoO really comes into its own when coupled with speculative execution, because a lot of instructions can trigger faults. Anything to do with memory, for a start. That's when it becomes really useful to be able to speculatively execute past the possibly troublesome instruction, expecting there won't be a p
Re: (Score:2)
(1) Does out-of-order necessarily imply speculative execution?
No, though typically you move to speculative execution before you move to superscalar. Speculative execution is required to get decent performance from any pipelined processor. The difference between the speculative execution in a superscalar and in an in-order pipeline is one of degree, rather than kind. There's a fairly common heuristic that you have a branch roughly every 7 instructions in code compiled from vaguely C-like languages (code that is often quite misleadingly called 'general-purpose' code). I
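That heuristic makes the cost of never speculating easy to ballpark (toy numbers: the branch frequency comes from the post above, the 5-cycle branch-resolution bubble is an assumption):

```python
# Rough model of a pipeline that stalls at every branch instead of
# predicting: with a branch every ~7 instructions and an assumed
# 5-cycle bubble while each branch resolves, a big chunk of the
# machine's throughput evaporates.

branch_every = 7   # instructions per branch (heuristic above)
bubble = 5         # assumed cycles lost per unresolved branch

cycles_per_block = branch_every + bubble
throughput = branch_every / cycles_per_block

print(f"fraction of peak throughput without prediction: {throughput:.2f}")
# prints 0.58
```

Deeper pipelines make the bubble larger, which is why even simple in-order cores predict branches and execute speculatively down the predicted path.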
Re: (Score:3)
"Addresses in hardware" will mean "performance-fucking changes to built-in microcode" for at least another full generation.
Actual hardware fixes will also result in performance drawbacks. The whole issue is that they're executing and caching results before a simple security check. They've either got to stop that speculative execution, add a delay / wipe that prevents speculative results from being cached, or never let it hit the cache in the first place. All options incur a significant performance penalt
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Meanwhile... (Score:1)
14nm has become 70% more compact since the first 14nm products, which translates into whatever mix of power saving or performance increase you use it for.
Things have not stood still. 10nm will be another incremental step relative to 14nm.
The Xnm description of processes has become a tool of obfuscation. Gates per square micron might be better.
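As a back-of-the-envelope illustration of why density is a better yardstick than the "Xnm" label (the baseline density here is an assumed round number, not Intel's published figure, and "70% more compact" is read as 1.7x the density):

```python
# Convert a "70% more compact" claim into transistor density and the
# equivalent linear shrink. All numbers are illustrative.

base_density = 37.5    # assumed MTr/mm^2 for the first 14nm products
improvement = 0.70     # "70% more compact" read as 1.7x density

new_density = base_density * (1 + improvement)
linear_scale = (1 + improvement) ** -0.5   # area goes as length squared

print(f"density: {base_density} -> {new_density:.2f} MTr/mm^2")
print(f"equivalent linear scale factor: {linear_scale:.2f}")
# i.e. a 1.7x density gain is like shrinking every dimension to ~0.77x,
# with no change at all to the "14nm" marketing name.
```

Which is exactly why gates (or transistors) per unit area says more than the node name does.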
Do I hear 2020 (Score:2)