
Submission + - Fable Legends DX12 Benchmark Stressing High End GPUs

Vigile writes: In preparation for the release of the free-to-play Fable Legends game on both Xbox One and PC this winter, Microsoft and Lionhead Studios released a benchmark today that allows users to test the performance of their PC hardware configuration with a DirectX 12-based game engine that pushes the boundaries of render quality. Based on a modified UE4 engine, Fable Legends includes support for asynchronous compute shaders, manual resource barrier tracking and explicit memory management, all new to the DX12 API. Unlike the previous DX12 benchmark, Ashes of the Singularity, which focused mainly on high draw call counts and massive quantities of on-screen units, Fable Legends takes a more standard approach, attempting to improve image quality and shadow reproduction with the new API. PC Perspective has done some performance analysis with the new benchmark and a range of graphics cards, finding that while NVIDIA still holds the lead at the top (GTX 980 Ti vs. Fury X), AMD's mid-range Radeon products offer better performance (and better value) than the comparable GeForce parts.
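For readers wondering what "manual resource barrier tracking" means in practice, here is a minimal, generic D3D12 sketch; the function and variable names are illustrative and this is not code from the Fable Legends engine. In DX11 the driver inserted these state transitions automatically, while in DX12 the engine must declare them explicitly:

    // Generic D3D12 transition barrier; names are illustrative, not from
    // the Fable Legends engine. The engine tells the API that a resource
    // is switching from being written as a depth target to being sampled.
    #include <d3d12.h>

    void TransitionShadowMapForSampling(ID3D12GraphicsCommandList* cmdList,
                                        ID3D12Resource* shadowMap)
    {
        D3D12_RESOURCE_BARRIER barrier = {};
        barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
        barrier.Transition.pResource   = shadowMap;
        barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
        barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_DEPTH_WRITE;           // just rendered
        barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE; // about to be sampled
        cmdList->ResourceBarrier(1, &barrier);
    }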

Submission + - The AMD Radeon R9 Nano - Flagship Performance at 6 Inches

Vigile writes: Back when AMD announced it would be producing an even smaller graphics card than the Fury X, but based on the same full-sized Fiji GPU, many people wondered just how the company would pull it off. With 4096 stream processors, a 4096-bit memory bus with 4GB of HBM (high bandwidth memory) and a clock speed rated "up to" 1000 MHz, the new AMD Radeon R9 Nano looked to be an impressive card. Today PC Perspective has a review of the R9 Nano, and though there are some quirks, including pronounced coil whine and a hefty $650 price tag, it offers nearly the same performance as the non-X Radeon R9 Fury card at 100 watts lower TDP! It manages that by dynamically adjusting the clock speed between roughly 830 MHz and 1000 MHz depending on the workload, always holding peak power draw to just 175 watts. All of this is packed onto a 6-inch PCB, smaller than any other enthusiast-class GPU to date, making it a perfect pairing for SFF cases that demand smaller components. The R9 Nano is expensive, though, carrying the same asking price as AMD's own R9 Fury X and the GeForce GTX 980 Ti.
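As context for how a fixed power cap turns into a variable clock, here is a rough, purely illustrative sketch of a power-limited boost loop. The constants mirror the figures quoted above, but the control logic is a simplification and not AMD's actual PowerTune implementation:

    // Purely illustrative power-capped boost loop; the constants match the
    // figures quoted above, but this is NOT AMD's actual PowerTune logic.
    const double kPowerCapWatts = 175.0;   // R9 Nano board power limit
    const double kMaxClockMHz   = 1000.0;  // rated "up to" clock
    const double kMinClockMHz   = 830.0;   // approximate observed floor
    const double kStepMHz       = 10.0;    // arbitrary adjustment step

    double NextClockMHz(double currentMHz, double measuredWatts)
    {
        if (measuredWatts > kPowerCapWatts)
            currentMHz -= kStepMHz;        // over budget: back the clock off
        else
            currentMHz += kStepMHz;        // headroom left: boost toward max
        if (currentMHz > kMaxClockMHz) currentMHz = kMaxClockMHz;
        if (currentMHz < kMinClockMHz) currentMHz = kMinClockMHz;
        return currentMHz;
    }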

Submission + - DirectX 12 Performance Tested in Ashes of the Singularity

Vigile writes: The future of graphics APIs lies in DirectX 12 and Vulkan, both built to target GPU hardware at a lower level than was previously available. The advantages are better performance, better efficiency on all hardware, and more control for the developer who is willing to put in the time and effort to understand the hardware in question. Until today we had only heard or seen theoretical "peak" performance claims for DX12 compared to DX11. PC Perspective just posted an article that uses a pre-beta version of Ashes of the Singularity, an upcoming RTS built on the Oxide Games Nitrous engine, to evaluate and compare DX12's performance claims and gains against DX11. In the story we find five different processor platforms tested with two different GPUs at two different resolutions. The results are interesting and show that DX12 levels the playing field for AMD, with the R9 390X gaining enough ground under DX12 to overcome the significant deficit to the GTX 980 that it shows under DX11.
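One concrete reason the lower-level APIs help on the CPU side is that draw calls can be recorded on several threads at once and submitted in a single batch, where DX11 submission is largely serialized inside the driver. The sketch below is generic D3D12 with illustrative names, not code from the Nitrous engine:

    // Generic D3D12 sketch (not Nitrous engine code): each worker thread
    // records its share of the scene into its own command list, then the
    // main thread submits everything to the queue in one call.
    #include <d3d12.h>
    #include <thread>
    #include <vector>

    void RecordSlice(ID3D12GraphicsCommandList* cmdList)
    {
        // ... record this thread's share of the scene's draw calls here ...
        cmdList->Close();
    }

    void SubmitFrame(ID3D12CommandQueue* queue,
                     std::vector<ID3D12GraphicsCommandList*>& lists)
    {
        std::vector<std::thread> workers;
        for (ID3D12GraphicsCommandList* list : lists)
            workers.emplace_back(RecordSlice, list);
        for (std::thread& t : workers)
            t.join();

        // Upcast to the base interface and submit the whole batch at once.
        std::vector<ID3D12CommandList*> batch(lists.begin(), lists.end());
        queue->ExecuteCommandLists(static_cast<UINT>(batch.size()), batch.data());
    }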

Submission + - Intel's Skylake Architecture Comes to Enthusiasts First, Reviewed

Vigile writes: The Intel Skylake architecture has been on our radar for quite a long time as Intel's next big step in CPU design. We know at least a handful of details: DDR4 memory support, a 14nm process, modest IPC gains and impressive GPU improvements. But how the "tock" of Skylake on that 14nm process will differ from Broadwell and Haswell has remained a mystery. That changes today with the official release of the "K" SKUs of Skylake, the unlocked, enthusiast-class parts for DIY PC builders. PC Perspective has a full review of the Core i7-6700K with benchmarks as well as discrete GPU and gaming testing that shows Skylake is an impressive part. IPC gains over Haswell are modest but noticeable, and IGP performance is as much as 50% higher than Devil's Canyon. Based on the discrete GPU testing, all those users still on Nehalem and Sandy Bridge might finally have a reason to upgrade to Skylake.

Submission + - AMD Fury X with Fiji and HBM Falls Behind GTX 980 Ti

Vigile writes: Even with months of build-up and hype, culminating last week in a pair of E3 press conferences announcing it, reviews of the AMD Radeon R9 Fury X are finally here. Built on the new Fiji GPU, AMD's Fury X has 4,096 stream processors and a 4,096-bit memory bus that runs at just 500 MHz. That High Bandwidth Memory (HBM) implementation results in a total memory bandwidth of 512 GB/s, much higher than the GTX 980 Ti or R9 290X/390X. The Fury X is also the first single-GPU reference card to ship with an integrated, self-contained water cooler, keeping the GPU at around 55C while gaming, a very impressive feat that no doubt adds to the GPU's measured efficiency. But in PC Perspective's testing, the Fury X isn't able to overcome the performance of the GeForce GTX 980 Ti in more than a couple of specific tests, leaving NVIDIA's flagship as the leader in the clubhouse. So even though it's great to see AMD back in the saddle and competing in the high-end space, this $650 graphics card needs a little more work to be a dominant competitor.
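The 512 GB/s figure falls straight out of the bus width and memory clock, assuming HBM's two transfers per clock:

    // Back-of-the-envelope check of the Fury X memory bandwidth figure.
    constexpr double bus_width_bits   = 4096.0;
    constexpr double memory_clock_mhz = 500.0;
    constexpr double ddr_factor       = 2.0;   // HBM transfers on both clock edges

    // 4096 bits * 500 MHz * 2 transfers/clock = 4,096,000 Mbit/s
    // = 512,000 MB/s = 512 GB/s
    constexpr double bandwidth_gbs =
        bus_width_bits * memory_clock_mhz * ddr_factor / 8.0 / 1000.0;
    static_assert(bandwidth_gbs == 512.0, "matches the quoted 512 GB/s");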

Submission + - NVIDIA GTX 980 Ti Offers Titan X Performance for $350 Less

Vigile writes: Today at the start of Computex in Taipei, NVIDIA is officially unveiling the GeForce GTX 980 Ti graphics card, a new offering based on the same GM200 Maxwell GPU as the GTX Titan X released in March. Though the Titan X sells today for more than $1000, the GTX 980 Ti will start at $650 while offering performance parity with the more expensive option. The GTX 980 Ti has 6GB of memory (versus 12GB for the GTX Titan X), but PC Perspective's review shows no negative side effects from the drop. This implementation of the GM200 GPU uses 2,816 CUDA cores rather than the 3,072 cores of the Titan X, but thanks to higher average Boost clocks, performance between the two cards is identical. Enthusiasts who were considering the Titan X for high-end PC gaming should definitely reconsider with NVIDIA's latest offering. You can read the full review and technical breakdown over at PC Perspective.
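As a quick sanity check on how fewer cores can still reach parity: shader throughput scales roughly with cores multiplied by clock, so the 980 Ti only needs its average Boost clock to run about 9% higher than the Titan X's. No measured clocks are assumed here; the actual clocks each card sustains vary with workload and cooling:

    // Rough sanity check: shader throughput ~ cores x clock. Both cards carry
    // the same rated clocks, so parity implies the 980 Ti sustains a higher
    // average Boost clock in practice.
    constexpr double titan_x_cores  = 3072.0;
    constexpr double gtx980ti_cores = 2816.0;

    // ~1.09: the 980 Ti needs roughly 9% more clock to match the Titan X.
    constexpr double required_clock_ratio = titan_x_cores / gtx980ti_cores;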

Submission + - Khronos Group Announces Vulkan to Compete Against DirectX 12

Phopojijo writes: The Khronos Group has announced the Vulkan API for compute and graphics. Its goal is to compete against DirectX 12. It has some interesting features, such as queuing work to multiple GPUs and an intermediate bytecode for its shading language that removes the need for a full shading-language compiler in the graphics drivers. Also, the API allows graphics card vendors to support Vulkan with drivers back to Windows XP "and beyond".
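The intermediate bytecode referenced here ultimately shipped as SPIR-V. As a minimal sketch of what "no shading-language compiler in the driver" looks like from the application side, this is how released Vulkan 1.0 accepts precompiled bytecode (the API was not yet public when this submission was written, and error checking is omitted):

    // Minimal sketch of handing precompiled shader bytecode (SPIR-V) to the
    // driver in Vulkan 1.0. The driver only consumes this intermediate
    // representation; the high-level shader compilation happened offline.
    #include <vulkan/vulkan.h>
    #include <cstdint>
    #include <vector>

    VkShaderModule CreateShaderModule(VkDevice device,
                                      const std::vector<uint32_t>& spirv)
    {
        VkShaderModuleCreateInfo info = {};
        info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
        info.codeSize = spirv.size() * sizeof(uint32_t);  // size in bytes
        info.pCode    = spirv.data();

        VkShaderModule module = VK_NULL_HANDLE;
        vkCreateShaderModule(device, &info, nullptr, &module);
        return module;
    }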

Comment Re:Wut? (Score 1) 42

1. That is a false claim - Gamenab didn't even cite the correct FPGA model when he made that DRM claim.
2. G-Sync is actually good down to 1 FPS - it adaptively inserts additional redraws between frames at rates below 30, so as to minimize the possibility of judder (an incoming frame arriving during an already started panel refresh pass); a rough sketch of that idea follows below this list. FreeSync (in its most recently demoed form) reverts back to the VSYNC setting at the low end. Further, you are basing the high end of G-Sync only on the currently released panels. Nothing says the G-Sync FPGA tops out at 144.
3. I use the word 'experience' because it is 'my experience' - I have personally witnessed most currently shipping G-Sync panels as well as the FreeSync demo at this past CES. I have also performed many tests with G-Sync. Source: I have written several articles about this, including the one linked in this post.
5. I believe the reason it is not yet released is that Nvidia wants to be able to correctly cover more of the range (including the low range / what happens when the game engine hitches).
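A rough sketch of the low-frame-rate redraw idea from point 2, purely illustrative and not NVIDIA's actual module logic:

    // Illustrative low-frame-rate handling: when the game's frame interval
    // exceeds the panel's maximum hold time, scan the previous frame out
    // again rather than letting the panel sit past its refresh limit.
    // NOT NVIDIA's actual G-Sync module firmware.
    const double kPanelMinRateHz = 30.0;                  // panel's lowest native refresh
    const double kMaxHoldSeconds = 1.0 / kPanelMinRateHz; // longest a frame may be held

    bool ShouldRepeatFrame(double secondsSinceLastScanout)
    {
        return secondsSinceLastScanout >= kMaxHoldSeconds;
    }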

Comment Re:its Nvidia FREESYNC (Score 1) 42

Gamenab stumbled across the leaked driver and tried to use it to spread a bunch of conspiracy-theory FUD. I hope most people here can correctly apply Occam's razor, as opposed to accepting the alternative: that he designed those changes himself, that they then made their way into an internal driver build that was inadvertently leaked, and that the leak happened to apply to the exact laptop he already owned.

ExtremeTech picked apart his BS in more detail:

Comment Re:Wut? (Score 1) 42

1. The FPGA *was* required for the tech to work on the desktop panels it was installed in.
2. FreeSync (as I've witnessed it so far), as well as the most recent adaptive sync, cannot achieve the same result across as wide a refresh rate range as G-Sync currently can.
3. Nvidia could 'make it work', but it would not be the same experience as can be had with a G-Sync module, even with an adaptive sync panel (as evidenced by how the adaptive sync panel in this laptop intermittently blanks out at 30 FPS or when a game hitches).
4. ...
5. The driver was not a release driver, and the experience it gives was never meant to be called 'G-Sync'. It was meant to be internal.

Conclusion - Adaptive sync alone is not the same experience you can currently get with a real G-Sync panel, which is why any possible future G-Sync that does not need a module is not yet a real thing.

Submission + - NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained

Vigile writes: Over the weekend NVIDIA sent out its first official response to the claims of hampered performance on the GTX 970 and a potential lack of access to 1/8th of its on-board memory. Today NVIDIA has clarified the situation again, this time with some important changes to the specifications of the GPU. First, the ROP count and L2 cache capacity of the GTX 970 were incorrectly reported at launch (last September): the GTX 970 has 56 ROPs and 1792 KB of L2 cache, compared to 64 ROPs and 2048 KB of L2 cache on the GTX 980; previously both GPUs were listed with identical specs. Because of this change, one of the 32-bit memory channels is accessed differently, forcing NVIDIA to split the memory into 3.5GB and 0.5GB pools to improve overall performance for the majority of use cases. The smaller, 500MB pool operates at 1/7th the speed of the 3.5GB pool and thus lowers total graphics system performance by 4-6% when it is brought into the memory system. That only occurs when games request more than 3.5GB of memory, which happens only in extreme combinations of resolution and anti-aliasing. Still, the jury is out on whether NVIDIA has answered enough questions to temper the fire from consumers.
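Some rough numbers behind those two pools, derived from the GTX 970's published 224 GB/s total bandwidth and the 1/7th ratio quoted above. These are peak figures only; the real-world hit depends on how often a game actually touches the slow segment:

    // Peak-bandwidth arithmetic for the two pools, assuming the published
    // 224 GB/s total (256-bit bus at 7 Gbps) and the 1/7th ratio quoted above.
    constexpr double total_bandwidth_gbs = 224.0;
    constexpr double fast_pool_gbs = total_bandwidth_gbs * 7.0 / 8.0;  // 3.5GB pool: 196 GB/s
    constexpr double slow_pool_gbs = fast_pool_gbs / 7.0;              // 0.5GB pool:  28 GB/s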

Submission + - NVIDIA Responds to GTX 970 Memory Issue

Vigile writes: Over the past week or so, owners of the GeForce GTX 970 have found several instances where the GPU was unable or unwilling to address memory capacities over 3.5GB despite having 4GB of on-board frame buffer. Specific benchmarks were written to demonstrate the issue, and users even found ways to configure games to utilize more than 3.5GB of memory using DSR and high levels of MSAA. While the GTX 980 can access its full 4GB of memory, the GTX 970 appeared to be far less likely to do so and would see a dramatic performance hit when it did. NVIDIA responded today, saying that the GTX 970 has "fewer crossbar resources to the memory system" as a result of disabled groups of cores called SMMs. NVIDIA states that "to optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section" and that the GPU has "higher priority" access to the larger pool. The question that remains is whether this should affect gamers' view of the GTX 970. If performance metrics already take the different memory configuration into account, then I don't see the GTX 970 declining in popularity.

Submission + - How we'll know whether BICEP2 was right about gravitational waves

StartsWithABang writes: The Big Bang takes us back to very early times, but not the earliest. It tells us the Universe was once in a hot, dense state, where even forming neutral atoms was impossible due to the incredible energies of the Universe at that time. The patterns of fluctuations that are left over from that time give us insight into the primordial density fluctuations our Universe was born with. But there's an additional signature encoded in this radiation, one that's much more difficult to extract: polarization. While most of the polarization signal that's present will be due to the density fluctuations themselves, there's a way to extract even more information about an even earlier phenomenon: gravitational waves that were present from the epoch of cosmic inflation! Here's the physics of how that works, and how we'll find out whether BICEP2 was right or not.

Submission + - Using naval logbooks to reconstruct past weather—and predict future climate

Lasrick writes: What a great idea. The Old Weather Project uses old logbooks to study the weather patterns of long ago, providing a trove of archival data to scientists who are trying to fill in the details of our knowledge about the atmosphere and the changing climate. 'Pity the poor navigator who fell asleep on watch and failed to update his ship’s logbook every four hours with details about its geographic position, time, date, wind direction, barometric readings, temperatures, ocean currents, and weather conditions.' As Clive Wilkinson of the UK's National Maritime Museum adds, 'Anything you read in a logbook, you can be sure that it is a true and faithful account.'

The Old Weather Project uses citizen scientists to transcribe and digitize observations that were scrupulously recorded on a clockwork-like basis, and it is one of several projects climate scientists are drawing on to create 'a three-dimensional computer simulation that will provide a continuous, century-and-a-half-long profile of the entire planet's climate over time': the 20th Century Reanalysis Project. Data is checked and rechecked by three different people before entry into the database, and the logbook measurements are especially valuable because they were taken at sea. Great story.
