Unfortunately, it seems as if these 6-core and 4-core Ryzen 5 CPUs are only going to be eight-core Ryzen 7 CPUs with cores disabled in both compute complexes.
According to what? TDP numbers?
I would hope the 4 core was just one CCX.
For 6-core I don't know if I think 3+3 or 4+2 is better; for gaming I'd take 4+2, and then the +2 could be used for other threads.
The split cache isn't 100% useless. Assuming the same bandwidth to it (not a fact, or even something I necessarily believe), having just half the cores use each slice could improve performance; the very weak connectivity between the CCXes, however, is an issue. In reality the latency within the same CCX is lower than on Intel, but the average across all cores is worse, and going from one CCX to the other (and vice versa) is even worse.
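To see why the average ends up worse even though intra-CCX latency is good, here's a quick back-of-the-envelope sketch. The latency numbers are made-up illustrative values (roughly the ballpark reviews reported, but not measured by me); the 4+4 core layout is the real 1800X topology.

```python
from itertools import combinations

# Hypothetical round-trip core-to-core latencies in ns; illustrative only.
INTRA_CCX_NS = 40    # assumed latency between two cores in the same CCX
CROSS_CCX_NS = 130   # assumed latency when crossing to the other CCX

cores = range(8)             # 4+4 layout: cores 0-3 in CCX0, cores 4-7 in CCX1
ccx = lambda c: c // 4       # which CCX a core belongs to

pairs = list(combinations(cores, 2))
lats = [INTRA_CCX_NS if ccx(a) == ccx(b) else CROSS_CCX_NS for a, b in pairs]

# 12 of the 28 core pairs are intra-CCX, 16 are cross-CCX, so the
# slow path dominates the all-pairs average.
print(len(pairs), "pairs, average latency:", round(sum(lats) / len(lats), 1), "ns")
```

With these assumed numbers the all-pairs average lands around 91 ns, i.e. much closer to the slow cross-CCX figure than the fast intra-CCX one, simply because more core pairs straddle the two CCXes than sit inside one.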
Seems like I should have read your whole comment .. ;D
The R5 1600X and 1600 are going to have one core disabled per compute-complex (CCX): 3+3. This was expected.
Too bad. Still 16 MB of L3 cache, or is some disabled so one gets the worst of everything? :D
However, surprisingly, AMD has told Anandtech and Ars Technica that the R5 1500X and likely also the R5 1400 are going to have two cores disabled per CCX: giving it a 2+2 config.
I wonder if that's why they went with Overwatch, which only uses 2 cores:
It's still a good processor for the price though. For many tasks it will be more powerful than the competing i5 7600K thanks to the 6 cores. Vs an overclocked one it will likely be a bit slower in some of today's games; non-overclocked maybe not / not by much, but it still has those extra cores for background tasks and streaming, and possibly a future advantage. Windows 10 game mode will also dedicate some cores to gaming and prevent threads from moving around, so that will bring some more performance.
and there is an interconnect between the CCXes' L3 caches which, while slower than a single shared L3 cache, is somewhat faster than going to main memory
I read before that it was on the same bus as the PCI Express lanes and the DDR4 RAM. Correct or not?
I've also read it ran at half the RAM's rated speed (in clock?), but it seems weird that the bus / cache would vary in speed with what RAM one uses. However, I've seen different L3 latency test numbers in different memory configurations; though that could be the benchmark software happening to access some regular RAM too, I suppose.
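The "half the RAM speed" figure lines up with how DDR naming works: a DDR4-3200 kit does 3200 MT/s, but the actual memory clock is half that, 1600 MHz, since DDR transfers twice per clock. AMD has said the data fabric runs at the memory clock, which would explain why a faster RAM kit speeds up the interconnect. A minimal sketch of that arithmetic (assuming the 1:1 fabric-to-memory-clock relationship holds):

```python
def fabric_clock_mhz(ddr4_rating_mts: int) -> int:
    """Data fabric clock, assuming it runs 1:1 with the memory clock,
    which is half the DDR transfer rate ("double data rate")."""
    return ddr4_rating_mts // 2

# Common DDR4 kit ratings and the fabric clock they would imply.
for kit in (2133, 2400, 2933, 3200):
    print(f"DDR4-{kit}: MEMCLK and fabric at {fabric_clock_mhz(kit)} MHz")
```

So the different L3/cross-CCX latency numbers seen with different memory configurations wouldn't have to be benchmark noise; the interconnect itself would genuinely be clocked differently.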
This means that the 1500X and 1400 are going to be slower on many workloads than a hypothetical Zen CPU with a single four-core CCX.
Yeah. I would prefer that, and it would feel better if they just made 1-CCX chips for that purpose .. ... then again, maybe they'll let people unlock functional cores?! ;D (but fuck up overclocking when doing so? ;D .. reason to wait? ;D)
It is believed that this bottleneck is the reason behind the relatively low Ryzen 1800X/1700X/1700 scores in many games compared to Intel (even when clock speed and IPC have been taken into account).
Both cache and memory performance are slightly worse. It still has just 2 AGUs per core, just like FX before it, whereas Intel has 4; I don't know how much that matters or affects things. (It's also only capable of 2x 128-bit FMAC vs Intel's 2x 256-bit FMA and hence is inferior there too; I don't know how much that matters either.)

To me it would make sense if software which relied heavily on floating-point performance used the GPU instead (though double-precision performance on consumer GPUs is most often much slower than single-precision, so maybe consumer graphics cards aren't better there than an Intel HEDT/Xeon processor anyway?), but I also assume that generic PC software which just happens to do some floating-point calculations here and there just does them in x86 code rather than on the graphics card. I would be very interested in getting some actual knowledge there from someone who knows, though!

Like, Arma 3 runs like shit on Ryzen; does it use floating-point calculations on the CPU to a large extent for simulation purposes? Some claim it just uses 1 core or thread, but on a hyper-threaded quad-core Intel chip the load is very even, so I don't see how that makes any sense whatsoever, unless what made the load even was the software used to record the game and the GAME actually just loaded one (or two?) logical (why is it called logical? How is it logical? Virtual? :D) cores.
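For what the 2x 128-bit vs 2x 256-bit difference means on paper: theoretical peak FLOPs per cycle per core is (FMA units) x (lanes per vector) x 2, since a fused multiply-add counts as two floating-point ops. A small sketch of that arithmetic (peak numbers only; real code rarely sustains them, and this ignores the non-FMA FP pipes on both chips):

```python
def flops_per_cycle(fma_units: int, vector_bits: int, precision_bits: int) -> int:
    """Theoretical peak FLOPs per cycle per core.
    Each FMA counts as 2 ops (multiply + add) per vector lane."""
    return fma_units * (vector_bits // precision_bits) * 2

# Zen 1: 2x 128-bit FMAC per core; Skylake: 2x 256-bit FMA per core.
print("Zen, single precision:    ", flops_per_cycle(2, 128, 32))  # 16
print("Skylake, single precision:", flops_per_cycle(2, 256, 32))  # 32
print("Zen, double precision:    ", flops_per_cycle(2, 128, 64))  # 8
print("Skylake, double precision:", flops_per_cycle(2, 256, 64))  # 16
```

So per clock, per core, Intel's wider units give exactly 2x the peak FMA throughput, but that only shows up in code that's actually vectorized with 256-bit AVX; scalar floating-point code here and there wouldn't see the difference.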
(Curiously enough, this is also a known issue among programmers for the Xbox One and PS4, both having AMD CPUs with a similar setup, but apparently it didn't really occur to game programmers that AMD would have a go at retaking the desktop?)
Some guy made a claim which sounded like AMD made all the parts separately and then kinda "soldered" them together into a full chip, so that they could test the parts beforehand and so on, rather than doing the whole chip at once. Is that a thing? (If so, why don't they just use 1 CCX?)
I assume the 2-CCX design otherwise is there to save space by not having to connect everything to everything else. (Before I knew about the disadvantages I also assumed it just gave better bandwidth to the cache and had an advantage there, but now I know it comes with disadvantages too.)