Submission + - Zen and the Art of CPU Design (pcper.com)

JoshMST writes: This in-depth editorial covers the major AMD releases of the past 20+ years and compares them to what Zen is expected to encounter when it launches this week. It traces AMD's path from the pre-K5 processors and the company's entry into the CPU business through the releases that not only matched Intel's products but at times exceeded them.

Submission + - Animation Explains Multi-GPU Load-Balancing Tasks and Memory

Scott Michaud writes: While DirectX 12, Mantle, and Vulkan let developers enumerate every GPU in a system and communicate with each one individually, in DirectX 11 and OpenGL that job fell to Crossfire and SLI. Apart from the very early implementations, which interleaved monitor scanlines (or otherwise cut up a single frame) between devices, those systems used the Alternate Frame Rendering (AFR) algorithm to divide work. Because neighbouring frames require roughly the same amount of work, and the old APIs submit work through restrictive interfaces, memory was mirrored across GPUs and, except for AMD's Hybrid Crossfire and the LucidLogix HYDRA Engine, the GPUs needed to be roughly identical. The new APIs open a dialogue between software and hardware, but the load-balancing algorithms themselves have their own limitations.
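
As a rough illustration of the enumeration model the submission describes (a sketch of my own, not taken from the article or the animation), the snippet below uses Vulkan to list every physical GPU in a system; DirectX 12 exposes the same idea through DXGI adapter enumeration. Error handling is trimmed and the Vulkan SDK headers and loader are assumed to be available.

    // List every GPU Vulkan can see; how work and memory are split between
    // them is now the engine's responsibility rather than the driver's.
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    int main() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) return 1;

        // Query the count first, then fetch the full device list.
        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> gpus(count);
        vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (VkPhysicalDevice gpu : gpus) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpu, &props);
            std::printf("GPU: %s\n", props.deviceName);
        }

        vkDestroyInstance(instance, nullptr);
        return 0;
    }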

Submission + - New GTX Titan X from NVIDIA Integrates Pascal, Costs $1200

Vigile writes: Benchmarks are finally live for the updated NVIDIA Titan X graphics card, which uses a brand new GP102 GPU based on the Pascal architecture and TSMC's 16nm process. Announced just a week or so ago, the card has now been through PC Perspective's full suite of performance tests, which find that this beast of a GPU, with 3,584 CUDA cores running at over 1600 MHz in practice, races past the GeForce GTX 1080 by 30-40%. That makes the new Titan X the highest-performing GPU on the market. Users of the GeForce GTX 980 Ti will see 50-80% performance gains with the Titan X, and AMD's highest-performing graphics card, the Fury X, gets bested by 80-120% in all games but one! Clearly the Titan X is the GPU king, but it is going to cost you: NVIDIA is selling it directly for $1200.

Submission + - GeForce GTX 1060 battles RX 480 with new GP106 Pascal GPU

Vigile writes: Twelve days ago, NVIDIA announced its competitor to the AMD Radeon RX 480, the GeForce GTX 1060, based on a new Pascal GPU, GP106. Though that announcement offered only a brief preview of the product and a pictorial of the GTX 1060 Founders Edition card we were initially sent, it set the community ablaze with discussion about which mainstream enthusiast card would be the best for gamers this summer. The new GP106 GPU offers nearly identical performance to the GeForce GTX 980 but at a $249 starting price point; the GTX 980 launched at $549 in late 2014. Testing shows that the GTX 1060 performs better than the RX 480 in 5 of the 7 games PC Perspective tested, though the two in which the RX 480 prevailed are quite interesting: in applications where asynchronous compute is clearly in play, AMD's GCN architecture and Polaris GPU pose a more significant threat.
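
Since "asynchronous compute" in the new APIs is generally exposed through separate hardware queues, here is a small, hypothetical Vulkan sketch (my own illustration, not from the article) of how an engine might check whether a GPU advertises a compute-only queue family. Whether that work actually overlaps with graphics is still up to the hardware and driver, which is where GCN tends to do well.

    #include <vulkan/vulkan.h>
    #include <vector>

    // Assumes a VkPhysicalDevice handle obtained via vkEnumeratePhysicalDevices.
    // Returns true if the GPU exposes a queue family that supports compute but
    // not graphics, the usual way engines submit "async compute" work.
    bool HasDedicatedComputeQueue(VkPhysicalDevice gpu) {
        uint32_t count = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
        std::vector<VkQueueFamilyProperties> families(count);
        vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

        for (const VkQueueFamilyProperties& f : families) {
            const bool compute  = (f.queueFlags & VK_QUEUE_COMPUTE_BIT) != 0;
            const bool graphics = (f.queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
            if (compute && !graphics) return true;  // dedicated compute family found
        }
        return false;
    }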

Submission + - AMD RX 480 offers Best-in-class performance for $199/$239

Vigile writes: It's been a terribly long news cycle, but today reviews and sales finally begin for the new AMD Radeon RX 480 graphics card, based on the company's latest Polaris architecture and built on 14nm FinFET process technology. With a starting price tag of $199 for the 4GB model and $239 for the 8GB, the RX 480 has some interesting performance characteristics. Compared to the GeForce GTX 970, currently selling for around $280, the RX 480 performs within 5-10% in DX11 games, but PC Perspective found it as much as 40% faster in DX12 titles like Gears of War, Hitman and Rise of the Tomb Raider. Compared to previous AMD products, the RX 480 is as fast as a Radeon R9 390 but uses just 150 watts versus 275 watts for the previous generation! Chances are that NVIDIA will have a competing product based on Pascal available sometime in July, so AMD's advantage may be short-lived; but in the meantime the Radeon RX 480 is clearly the best GPU for $200.

Submission + - GeForce GTX 1070 Offers 980 Ti Performance at 100 watts lower power (pcper.com)

Vigile writes: NVIDIA has now released the second of its two new Pascal architecture GeForce cards, this one with a lower price and slightly lower performance than the GTX 1080: the GTX 1070. Based on the same GP104 GPU, but with 1920 CUDA cores rather than 2560 and slightly lower base and Boost clock speeds, the GTX 1070 offers performance above the GeForce GTX 980 Ti, a card that launched at $649. PC Perspective tested it at both 1920x1080 and 2560x1440 to address the segments that the $379 MSRP and $449 Founders Edition prices target, and found it 20% faster than the GTX 980 Ti, 30-50% faster than the GTX 970, and much faster than AMD's R9 Nano or R9 390X. Even more impressive, it matches or beats the GTX 980 Ti with a TDP 100 watts lower! Cards are expected to go on sale June 10th, and if you can find one at the $379 price point, it looks to be the best enthusiast gaming card for those of us without a GTX 1080 budget.

Submission + - GeForce GTX 1080 new fastest GPU, and not by a little

Vigile writes: Though details have been trickling out from NVIDIA for over a week, today is the day reviews of the new GeForce GTX 1080 finally hit. The results are pretty stunning: the GTX 1080 is the new fastest GPU in the world, beating out the likes of NVIDIA's own GTX 980 Ti and AMD's Fiji XT GPU in the R9 Fury X. The GP104 GPU has 2560 CUDA cores (25% more than the GTX 980) and runs at a base clock of 1607 MHz (42% higher than the GTX 980), while introducing the first GDDR5X memory interface, running at 10 Gbps. The result is performance 25-40% higher than the GTX 980 Ti and 65-100% faster than the GTX 980! Better yet, it does this while drawing only about 10 watts more than the GM204 part used in the GTX 980, thanks to the new 16nm FinFET process tech. NVIDIA also adds new features that improve VR performance, a new screenshot tool that is actually pretty impressive, and more. The starting price will eventually be $599, but the "Founders Edition" parts launching on May 27th will be $699.
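
For context on the memory claim, a quick back-of-envelope sketch of my own: 10 Gbps GDDR5X across the GTX 1080's 256-bit bus (the bus width is NVIDIA's published spec, not stated in the submission above) works out to roughly 320 GB/s of peak bandwidth.

    #include <cstdio>

    int main() {
        const double data_rate_gbps = 10.0;   // per-pin GDDR5X data rate from the submission
        const double bus_width_bits = 256.0;  // GTX 1080 bus width (published spec, assumed here)
        // Total bits per second across the bus, divided by 8 for bytes.
        const double bandwidth_gb_per_s = data_rate_gbps * bus_width_bits / 8.0;
        std::printf("Peak memory bandwidth: %.0f GB/s\n", bandwidth_gb_per_s);  // prints 320 GB/s
        return 0;
    }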

Submission + - Details of NVIDIA GP100 GPU Sneak Out, 3840 CUDA Cores (pcper.com)

Vigile writes: While NVIDIA is talking to the world about the benefits of Pascal for high-performance computing, PC Perspective dove into the information with more of an ear towards what it might mean for GeForce products and PC gaming later in the year. The specifications alone are impressively insane: 16GB of HBM2 on a 4096-bit memory bus, 3840 CUDA processing cores, and 15.3 billion transistors. It will be built on the 16nm FinFET process with a massive die size of 610 mm^2, and that is without the memory. The GP100 configuration that was announced runs at up to 1480 MHz with boost, a significant jump over a Maxwell-based card like the Titan, which hit just over 1000 MHz. It's unknown what benefits, if any, NVLink technology will provide for gamers, though direct GPU-to-GPU communication might help with the hurdles of SLI scaling. There are still plenty of open questions; even HBM2 may not make it to flagship GeForce products, which could instead stick with GDDR5 or GDDR5X based on cost. NVIDIA will also likely dial power consumption and performance down somewhat for consumer parts, as the Tesla P100 has a TDP of 300 watts. If current rumors from both AMD and NVIDIA turn out to be correct, it appears we may have a "one big chip (NVIDIA) versus many little chips (AMD)" battle coming this summer.
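
As a back-of-envelope check on those numbers (my arithmetic, not PC Perspective's), 3840 CUDA cores at the 1480 MHz boost clock put theoretical peak FP32 throughput around 11.4 TFLOPS, assuming one fused multiply-add per core per clock; shipping Tesla P100 boards enable fewer of those cores, so NVIDIA's official figure is somewhat lower.

    #include <cstdio>

    int main() {
        const double cuda_cores      = 3840.0;  // full GP100 core count from the announcement
        const double boost_clock_ghz = 1.480;   // quoted boost clock
        const double flops_per_clock = 2.0;     // one fused multiply-add = 2 FLOPs per core per clock
        const double tflops = cuda_cores * boost_clock_ghz * flops_per_clock / 1000.0;
        std::printf("Theoretical peak FP32: %.1f TFLOPS\n", tflops);  // ~11.4 TFLOPS
        return 0;
    }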

Submission + - New tool offers look at performance of UWP games on Windows

Vigile writes: One of the concerns surrounding the recent debate over the Universal Windows Platform and the games being released on it, such as the recent Gears of War Ultimate Edition, was that media, consumers, and even entry-level developers could not properly profile the performance of those applications. All of the standard testing tools like Fraps, FCAT and other overlays are locked out of UWP games. An Intel graphics engineer released a tool called PresentMon on GitHub yesterday that accesses event timers in Windows to monitor Present commands in any API, including DX11, DX12, and Vulkan, as well as games built on the Windows Store platform. Using this data, PC Perspective was able to profile the performance of the new Gears of War on PC, comparing frame time variability between the two flagship parts from NVIDIA and AMD. While it's not a perfect utility yet, there is hope that this open-source code will allow performance metrics on any and all gaming titles.
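
To make the "frame time variability" idea concrete, here is a hypothetical sketch (not part of PresentMon itself) of the kind of metric a per-frame present log enables: the average frame time plus a 99th-percentile figure, a common proxy for stutter.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        // Made-up sample data in milliseconds; a real run would derive these
        // per-frame times from the present timestamps a tool like PresentMon logs.
        std::vector<double> frame_times_ms = {16.7, 16.9, 16.5, 33.4, 16.8, 17.0, 16.6, 16.7};

        double sum = 0.0;
        for (double t : frame_times_ms) sum += t;
        const double average = sum / frame_times_ms.size();

        // Sort a copy and pick the value 99% of the way up the distribution;
        // a high 99th percentile relative to the average signals stutter.
        std::vector<double> sorted = frame_times_ms;
        std::sort(sorted.begin(), sorted.end());
        const size_t idx = static_cast<size_t>(0.99 * (sorted.size() - 1));
        const double p99 = sorted[idx];

        std::printf("avg frame time: %.1f ms, 99th percentile: %.1f ms\n", average, p99);
        return 0;
    }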

Submission + - Microsoft Losing Ground on Windows Store and UWP for Gaming

Vigile writes: Microsoft has big plans to merge the Xbox One and Windows gaming experiences, but pushback from the community and from major developers and personalities is mounting. Earlier this week PC Perspective posted a story detailing the controversy around DX12 performance analysis without an exclusive fullscreen mode, changes to multi-GPU configurations, and even compatibility issues with variable refresh that crop up in games from the Windows Store. Microsoft's only official response so far has been that it is listening to feedback and plans to address it with upcoming changes. Now Epic's Tim Sweeney has posted an editorial at The Guardian with an even more dramatic tone, saying that UWP (the Universal Windows Platform) "can, should, must and will, die..." Clearly stakes are being planted in the ground, and even damage control from Phil Spencer on Twitter isn't likely to hold back angry PC users.
