
Comment Re:Probably a good investment (Score 2) 138

I think an example is in order to highlight this:

Fresh fish is sent by rail through Sweden, along the coastal rail route on the east, from the northern parts of Norway to Oslo in the south, because that's faster than doing it along Norway's own railways or highways.

Also because "why do it yourself when you can have a Swede do it?" / all the rich Norwegians. ;D

Comment Inferior communist technology (Score 1) 62

For just 80 times the cost (A US Ally Shot Down a $200 Drone With a $3 Million Patriot Missile), superior US military technology can shoot the drone down at up to 160 times the distance! Wikipedia: MIM-104 Patriot.

Superior Patriot technology lets you protect against drones over an area 160² = 25,600 times as large, and at only 80 times the price of this anti-drone rifle that works out to a cost-effectiveness of 320:1!
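As a quick sanity check of that arithmetic (a sketch only; the 160x range ratio and 80x cost ratio are the rough figures from the comment, not official numbers, and the rifle's assumed cost is derived from them):

```python
# Back-of-the-envelope cost-effectiveness of a Patriot shot vs. an
# anti-drone rifle. All figures are assumptions taken from the comment
# above, not official specifications.
patriot_cost = 3_000_000          # cost of one Patriot shot per the story
rifle_cost = patriot_cost / 80    # comment says the Patriot costs ~80x the rifle

rifle_range_km = 1.0              # assumed effective range of the rifle
patriot_range_km = 160.0          # "up to 160 times the distance"

# Covered area scales with the square of the range.
area_ratio = (patriot_range_km / rifle_range_km) ** 2
cost_ratio = patriot_cost / rifle_cost

print(f"area ratio: {area_ratio:,.0f}x")                      # 25,600x
print(f"area per dollar vs. the rifle: {area_ratio / cost_ratio:.0f}:1")  # 320:1
```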

Comment Re:Crippled Ryzen 7 (Score 1) 173

Unfortunately, it seems as if these 6-core and 4-core Ryzen 5 CPUs are only going to be eight-core Ryzen 7 CPUs with cores disabled in both compute-complexes.

According to what? TDP numbers?
I would hope the 4-core was just one CCX.
For the 6-core I don't know whether 3+3 or 4+2 is better; for gaming I'd take 4+2, and then the +2 could be used for other threads.
The split cache isn't 100% useless. Assuming the same bandwidth to it (not a given), having just half the cores use each half could improve performance; the very weak connectivity between the two halves, however, is an issue. In reality the latency within the same CCX is lower than on Intel, but the average across all cores is worse, and from one CCX to the other (and vice versa) it's even worse.
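A toy model of what the split topology does to average core-to-core latency; the nanosecond figures here are made-up illustrative assumptions, not measurements:

```python
# Toy model: average latency over all ordered pairs of distinct cores,
# given a fast intra-CCX hop and a slow cross-CCX hop. The 40 ns and
# 140 ns figures below are illustrative assumptions, not benchmarks.
def avg_latency(ccx_sizes, intra_ns, inter_ns):
    """Average core-to-core latency for a chip split into the given CCXes."""
    cores = [(ccx, i) for ccx, n in enumerate(ccx_sizes) for i in range(n)]
    pairs = [(a, b) for a in cores for b in cores if a != b]
    total = sum(intra_ns if a[0] == b[0] else inter_ns for a, b in pairs)
    return total / len(pairs)

# Dual-CCX 4+4: the average gets pulled up by the many cross-CCX pairs.
print(avg_latency([4, 4], 40, 140))   # ~97 ns with these assumptions
# Hypothetical monolithic 8-core: every pair is an intra-cluster hop.
print(avg_latency([8], 40, 140))      # 40 ns
```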

Seems like I should have read your whole comment .. ;D

The R5 1600X and 1600 are going to have one core disabled per compute-complex (CCX): 3+3. This was expected.

Too bad. Is it still 16 MB of L3 cache, or is some of it disabled so one gets the worst of everything? :D

However, surprisingly, AMD has told Anandtech and Ars Technica that the R5 1500X and likely also the R5 1400 are going to have two cores disabled per CCX: giving it a 2+2 config.

I wonder if that's why they went with Overwatch, which only uses 2 cores:
It's still a good processor for the price though. For many tasks, having the 6 cores will make it more powerful than the competing i5 7600K. Versus an overclocked 7600K it will likely be a bit slower in some of today's games (non-overclocked, maybe not, or not by much), but it still has those extra cores for background tasks and streaming, and possibly a future advantage. Windows 10's Game Mode will also dedicate some cores to the game and prevent threads from moving around, so that will bring some more performance.
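For illustration, here's a minimal sketch of pinning a process to a fixed set of cores so its threads stop migrating, in the same spirit as Game Mode's core dedication. This uses the Linux-only `os.sched_setaffinity` call (Game Mode itself works differently, inside the Windows scheduler), and the "first 4 cores = one CCX" mapping is an assumption:

```python
import os

# Sketch: pin the current process to one CCX's worth of cores so its
# threads stop migrating across the chip. os.sched_setaffinity is
# Linux-only; assuming the first 4 OS core IDs map to one CCX.
if hasattr(os, "sched_setaffinity"):
    available = os.sched_getaffinity(0)         # cores we may run on now
    game_cores = set(sorted(available)[:4])     # assumed: first 4 = one CCX
    os.sched_setaffinity(0, game_cores)         # 0 = the current process
    print("pinned to cores:", sorted(game_cores))
```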

and there is an interconnect between the CCX'es L3 caches which while being slower than a single shared L3 cache is somewhat faster than going to main memory

I read before that it was on the same bus as the PCI Express lanes and the DDR4 RAM. Correct or not?
I've also read that it runs at half the RAM speed (in clock terms?), but it seems weird that the bus / cache speed would vary with what RAM one uses. However, I've seen different L3 latency test numbers under different memory configurations; then again, that could be the benchmark software happening to access some regular RAM too, I suppose.
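If the reports that the cross-CCX fabric runs at the memory clock (which is half the DDR4 transfer rate, since "DDR" means two transfers per clock) are right, the implied clocks for common DDR4 speeds work out like this:

```python
# If the fabric clock equals the memory clock, it is half the DDR4
# transfer rate, because DDR transfers twice per clock cycle.
# This is a sanity check of that reported relationship, not a measurement.
for ddr4_rate in (2133, 2400, 2666, 3200):   # transfer rates in MT/s
    memclk = ddr4_rate / 2                   # actual memory clock in MHz
    print(f"DDR4-{ddr4_rate}: fabric would run at ~{memclk:.0f} MHz")
```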

This means that the 1500X and 1400 are going to be slower on many workloads than a hypothetical Zen CPU with one single four-core CCX.

Yeah. I would prefer that, and it would feel better if they just made 1-CCX chips for that purpose .. ... then again, maybe they'll let people unlock functional cores?! ;D (but does that fuck up overclocking? ;D .. reason to wait? ;D)

It is believed that this bottleneck is the reason behind relatively low Ryzen 1800X/1700X/1700 scores in many games - compared to Intel (even when clock speed and IPC have been taken into account).

Both cache and memory performance are slightly worse. Zen still has just 2 AGUs per core, like FX before it, whereas Intel has 4; I don't know how much that matters. It's also only capable of 2x 128-bit FMAC vs Intel's 2x 256-bit FMAC, and hence is inferior there too; again, I don't know how much that matters in practice.

To me it would make sense for software that relies heavily on floating-point performance to use the GPU instead (though double-precision performance on consumer GPUs is most often much slower than single-precision, so maybe consumer graphics cards aren't better there than an Intel HEDT/Xeon processor anyway?). But I also assume that generic PC software which just happens to do some floating-point calculations here and there does them in x86 code rather than on the graphics card. I would be very interested in some actual knowledge there from someone who knows, though!

Like, Arma 3 runs like shit on Ryzen; does it use floating-point calculations on the CPU to a large extent for simulation purposes? Some claim it just uses 1 core or thread, but on a hyper-threaded quad-core Intel chip the load is very even, so I don't see how that makes any sense whatsoever, unless what made the load even was the software used to record the game, and the game actually just loaded one (or two?) logical cores. (Why is it called "logical"? How is it logical? Virtual? :D)
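The FMAC-width difference can be turned into peak per-core throughput numbers. This sketch assumes the unit counts above are right and counts an FMA (a*b+c) as 2 floating-point operations per vector lane:

```python
# Peak floating-point throughput per core per cycle, from the FMA unit
# widths in the comment: Ryzen (Zen) has 2x 128-bit FMA units, Intel's
# Broadwell-E has 2x 256-bit. An FMA counts as 2 FLOPs per lane.
def peak_flops_per_cycle(fma_units, vector_bits, element_bits=32):
    lanes = vector_bits // element_bits      # vector lanes per unit
    return fma_units * lanes * 2             # 2 ops (multiply + add) per FMA

zen = peak_flops_per_cycle(2, 128)           # 16 single-precision FLOPs/cycle
broadwell = peak_flops_per_cycle(2, 256)     # 32 single-precision FLOPs/cycle
print(zen, broadwell)                        # Intel's peak is 2x per core per clock
```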

(Curiously enough, this is also a known issue among programmers for the Xbox One and PS4, both having AMD CPUs with a similar setup, but apparently it didn't really occur to game programmers that AMD would have a go at retaking the desktop?)

Some guy made a claim which sounded like AMD makes all the parts separately and then kind of "solders" them together into a full chip, so they can test the parts beforehand and so on, rather than fabricating the whole chip at once. Is that a thing? (If so, why don't they just use 1 CCX?)
I assume the 2-CCX design otherwise is there to save space by not having to connect everything to everything else. (Before I knew about the disadvantages, I also assumed it just gave better bandwidth to the cache and had an advantage there; now I know it comes with disadvantages too.)

Comment Re:TDP? (Score 2) 173

Worth noting: the 95-watt Ryzen 7 1700X and 1800X increase their power draw more when going from idle to fully loaded than the 140-watt i7 6900K does, and the whole platform also uses slightly more power for the full system (numbers below).
So take the TDP values with a grain of salt; they don't tell the whole story. I've heard the same claimed for AMD vs Nvidia graphics cards, but there with the advantage for AMD.

Ryzen 7 1700X and 1800X idle, whole system at the wall: 45 watt
i7 6900K idle, whole system: 60 watt

1700X Blender & x264 benchmark: 155 & 160 watt
1800X Blender & x264 benchmark: 165 & 170 watt
i7 6900K Blender & x264 benchmark: 150 & 150 watt
That's still for the full system. (Intel Ark lists the i7 6900K at 140 watt TDP.)

i7 6950X: 60/145/170 watt idle/Blender/x264 (the 10-core is also rated 140 watt)
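Working the idle-to-load swing out from the full-system numbers above (Blender load):

```python
# Idle-to-load swing at the wall, from the full-system figures listed
# above (Blender run). Despite their lower 95 W TDP rating, the Ryzen
# parts swing more from idle to load than the 140 W-rated i7 6900K.
idle_w = {"1700X": 45, "1800X": 45, "6900K": 60}
blender_w = {"1700X": 155, "1800X": 165, "6900K": 150}

for chip in idle_w:
    delta = blender_w[chip] - idle_w[chip]
    print(f"{chip}: +{delta} W from idle to Blender load")
```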
