Vigile writes: Samsung just released its first non-OEM, consumer-level NVMe SSD, the 950 Pro series. This drive ships in an M.2 form factor rather than the 2.5-in size that is the standard for users today, allowing installation into notebooks, small form factor PCs and desktop PCs that have at least one M.2 slot on-board. It peaks at 512GB capacity today but Samsung promises a 1TB version using 48-layer VNAND in 2016. The NVMe protocol allows much better performance directly over the PCIe bus, without the overhead of the AHCI protocol used by hard drives and previous SSDs. PC Perspective's review has performance breaking the 2.5GB/s read speed mark while also introducing an entirely new type of performance evaluation for SSDs, centered on the latency distribution of IOs. By measuring how long each IO takes, rather than reporting only an average, the performance of an SSD can be determined on a per-workload basis and drives can be compared in an entirely new light. There is a lot of detail to read over and digest, but once again the new NVMe Samsung 950 Pro impresses.
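The latency-distribution idea can be illustrated with a short sketch (this is not PC Perspective's actual methodology, and the sample latencies are invented): two drives with nearly identical average latency can have very different tails, which is exactly what an average-only benchmark hides.

```python
# Minimal sketch of latency-distribution analysis: instead of one average,
# sort every IO's completion time and report selected percentiles.
from statistics import mean

def latency_profile(latencies_us, percentiles=(50, 90, 99, 99.9)):
    """Return the average plus selected percentiles from per-IO latencies (microseconds)."""
    data = sorted(latencies_us)
    n = len(data)
    result = {"avg_us": mean(data)}
    for p in percentiles:
        # nearest-rank percentile method
        idx = min(n - 1, max(0, int(round(p / 100 * n)) - 1))
        result[f"p{p}_us"] = data[idx]
    return result

# Two hypothetical drives with almost the same average but different tails:
steady = [100] * 990 + [120] * 10   # avg ~100.2 us, worst case 120 us
spiky  = [80] * 990 + [2000] * 10   # avg ~99.2 us, worst case 2000 us
print(latency_profile(steady))
print(latency_profile(spiky))
```

The spiky drive looks marginally faster on average, but its 99.9th-percentile IO is over 16x slower, the kind of difference per-IO measurement is meant to expose.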
Vigile writes: In preparation for the release of the free-to-play Fable Legends game on both Xbox One and PC this winter, Microsoft and Lionhead Studios released a benchmark today that allows users to test the performance of their PC hardware with a DirectX 12 based game engine that pushes the boundaries of render quality. Based on a modified UE4 engine, Fable Legends includes support for asynchronous compute shaders, manual resource barrier tracking and explicit memory management, all new to the DX12 API. Unlike the previous DX12 benchmark, Ashes of the Singularity, which focused mainly on high draw call counts and mass quantities of on-screen units, Fable Legends takes a more standard approach, attempting to improve image quality and shadow reproduction with the new API. PC Perspective has done some performance analysis with the new benchmark and a range of graphics cards, finding that while NVIDIA still holds the lead at the top spot (GTX 980 Ti vs Fury X), the AMD Radeon mid-range products offer better performance (and better value) than the comparable GeForce parts.
Vigile writes: Back when AMD announced it would be producing an even smaller graphics card than the Fury X, but based on the same full-sized Fiji GPU, many people wondered just how it would pull that off. With 4096 stream processors, a 4096-bit memory bus with 4GB of HBM (high bandwidth memory) and a clock speed rated "up to" 1000 MHz, the new AMD Radeon R9 Nano looked to be an impressive card. Today PC Perspective has a review of the R9 Nano, and though there are some quirks, including pronounced coil whine, it offers nearly the same performance as the non-X Radeon R9 Fury card at 100 watts lower TDP! It does that by dynamically adjusting the clock speed from ~830 MHz to 1000 MHz depending on the workload, always maintaining a peak power draw of just 175 watts. All of this is packed onto a 6-inch PCB — smaller than any other enthusiast-class GPU to date, making it a perfect pairing for SFF cases that demand smaller components. The R9 Nano is expensive, though, with the same $650 asking price as AMD's own R9 Fury X and the GeForce GTX 980 Ti.
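The power-capped clocking behavior can be sketched with a toy model (the watts-per-MHz costs below are invented for illustration; this is not AMD's actual PowerTune algorithm): pick the highest clock whose estimated draw fits under the 175 W cap, clamped to the card's roughly 830 to 1000 MHz range.

```python
# Toy model of power-capped boost clocking. The watts_per_mhz values
# are hypothetical workload intensities, not measured figures.
def capped_clock(watts_per_mhz, power_limit_w=175.0,
                 f_min_mhz=830, f_max_mhz=1000):
    """Pick the highest clock whose estimated power draw stays under the cap."""
    f = power_limit_w / watts_per_mhz          # clock the power budget allows
    return max(f_min_mhz, min(f_max_mhz, f))   # clamp to the card's range

print(capped_clock(0.15))  # light workload: budget allows >1000, runs at 1000 MHz
print(capped_clock(0.20))  # heavy workload: throttles to 875 MHz
print(capped_clock(0.30))  # pathological load: pinned at the 830 MHz floor
```

The point of the model is that the 175 W figure is the fixed quantity and the clock speed is the derived one, which is why the Nano's performance varies with workload while its power draw does not.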
Vigile writes: The future of graphics APIs lies in DirectX 12 and Vulkan, both built to target GPU hardware at a lower level than previously possible. The advantages are better performance, better efficiency on all hardware and more control for developers willing to put in the time and effort to understand the hardware in question. Until today we had only heard theoretical "peak" performance claims for DX12 compared to DX11. PC Perspective just posted an article that uses a pre-beta version of Ashes of the Singularity, an upcoming RTS built on the Oxide Games Nitrous engine, to evaluate DX12's claimed gains against DX11. The story tests five different processor platforms with two different GPUs at two different resolutions. The results are interesting and show that DX12 levels the playing field for AMD, with its R9 390X gaining enough ground under DX12 to overcome the significant performance deficit it shows against the GTX 980 under DX11.
Vigile writes: The Intel Skylake architecture has been on our radar for quite a long time as Intel's next big step in CPU design. We know at least a handful of details: DDR4 memory support, 14nm process technology, modest IPC gains and impressive GPU improvements. But how the "tock" of Skylake on the 14nm process will differ from Broadwell and Haswell has remained a mystery. That changes today with the official release of the "K" SKUs of Skylake — the unlocked, enthusiast-class parts for DIY PC builders. PC Perspective has a full review of the Core i7-6700K with benchmarks as well as discrete GPU and gaming testing that shows Skylake is an impressive part. IPC gains on Skylake over Haswell are modest but noticeable, and IGP performance is as much as 50% higher than Devil's Canyon. Based on that discrete GPU testing, all those users still on Nehalem and Sandy Bridge might finally have a reason to upgrade.
Vigile writes: After months of build-up and hype, culminating last week in a pair of E3 press conference announcements, reviews of the AMD Radeon R9 Fury X are finally available. Built on the new Fiji GPU, AMD's Fury X has 4,096 stream processors and a 4,096-bit memory bus that runs at just 500 MHz. That High Bandwidth Memory (HBM) implementation results in a total memory bandwidth of 512 GB/s, much higher than the GTX 980 Ti or R9 290X/390X. The Fury X is also the first single-GPU reference card to ship with an integrated self-contained water cooler, keeping the GPU at around 55C while gaming — a very impressive feat that no doubt adds to the GPU's measured efficiency. But in PC Perspective's testing, the Fury X isn't able to overcome the performance of the GeForce GTX 980 Ti in more than a couple of specific tests, leaving NVIDIA's flagship as the leader in the clubhouse. So even though it's great to see AMD back in the saddle and competing in the high-end space, this $650 graphics card needs a little more work to be a dominant competitor.
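The quoted 512 GB/s figure follows directly from the HBM configuration in the summary above: a 4096-bit bus at 500 MHz with two transfers per clock (double data rate), counted in decimal gigabytes.

```python
# Sanity check of the Fury X memory bandwidth figure.
bus_width_bits = 4096
clock_hz = 500e6             # 500 MHz memory clock
transfers_per_clock = 2      # double data rate
bandwidth_gbs = bus_width_bits / 8 * clock_hz * transfers_per_clock / 1e9
print(bandwidth_gbs)         # -> 512.0
```

That is roughly 50% more than the 336 GB/s a 384-bit GDDR5 bus at 7 Gbps delivers, despite the far lower memory clock, which is the whole appeal of the very wide HBM interface.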
Vigile writes: Today at the beginning of Computex in Taipei, NVIDIA is officially unveiling the GeForce GTX 980 Ti graphics card, a new offering based on the same GM200 Maxwell architecture GPU as the GTX Titan X released in March. Though the Titan X sells today for more than $1000, the GTX 980 Ti will start at $650 while offering performance parity with the more expensive option. The GTX 980 Ti has 6GB of memory (versus 12GB for the GTX Titan X) but PC Perspective's review shows no negative side effects from the drop. This implementation of the GM200 GPU uses 2,816 CUDA cores rather than the 3,072 cores of the Titan X, but thanks to higher average Boost clocks, performance between the two cards is essentially identical. Enthusiasts who were considering the Titan X for high-end PC gaming should definitely reconsider with NVIDIA's latest offering. You can read the full review and technical breakdown over at PC Perspective.
Vigile writes: Over the weekend NVIDIA sent out its first official response to the claims of hampered performance on the GTX 970 and a potential lack of access to 1/8th of the on-board memory. Today NVIDIA has clarified the situation again, this time with some important changes to the specifications of the GPU. First, the ROP count and L2 cache capacity of the GTX 970 were incorrectly reported at launch (last September). The GTX 970 has 56 ROPs and 1792 KB of L2 cache compared to the GTX 980's 64 ROPs and 2048 KB of L2 cache; previously both GPUs were listed with identical specs. Because of this change, one of the 32-bit memory channels is accessed differently, forcing NVIDIA to split the memory into 3.5GB and 0.5GB pools to improve overall performance for the majority of use cases. The smaller, 500MB pool operates at 1/7th the speed of the 3.5GB pool and thus lowers total graphics system performance by 4-6% when it comes into play. That only occurs when games request MORE than 3.5GB of memory allocation, which happens in extreme combinations of resolution and anti-aliasing. Still, the jury is out on whether NVIDIA has answered enough questions to temper the fire from consumers.
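As a back-of-envelope model of the split pool (the 196 GB/s fast-pool figure is an assumption based on seven full-speed 32-bit GDDR5 channels at 7 Gbps; NVIDIA's 4-6% number is a game-level measurement, not this raw worst case), the effective bandwidth of a pass touching both pools can be estimated with harmonic weighting, since the slow pool dominates total transfer time:

```python
# Worst-case model: a sequential pass over both memory pools.
# Real workloads mostly hit the fast pool, so real impact is far smaller.
def effective_bandwidth(fast_gb, slow_gb, fast_gbs, slow_ratio=1/7):
    """Average bandwidth for one pass over both pools (time-weighted)."""
    total_gb = fast_gb + slow_gb
    seconds = fast_gb / fast_gbs + slow_gb / (fast_gbs * slow_ratio)
    return total_gb / seconds

# GTX 970: 3.5 GB fast pool (~196 GB/s assumed) + 0.5 GB at 1/7th speed
print(effective_bandwidth(3.5, 0.5, 196.0))  # -> 112.0
```

Note that the 0.5 GB at 1/7th speed takes exactly as long to traverse as the entire 3.5 GB fast pool, which is why keeping allocations under 3.5GB matters so much.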
Vigile writes: Over the past week or so, owners of the GeForce GTX 970 have found several instances where the GPU was unable or unwilling to address memory capacities over 3.5GB despite having a 4GB on-board frame buffer. Specific benchmarks were written to demonstrate the issue, and users even found ways to configure games to utilize more than 3.5GB of memory using DSR and high levels of MSAA. While the GTX 980 can access its full 4GB of memory, the GTX 970 appeared far less likely to do so and would see a dramatic performance hit when it did. NVIDIA responded today, saying that the GTX 970 has "fewer crossbar resources to the memory system" as a result of disabled groups of cores called SMMs. NVIDIA states that "to optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section" and that the GPU gives "higher priority" to the larger pool. The question that remains: should this affect gamers' view of the GTX 970? If performance metrics already take the different memory configuration into account, then I don't see the GTX 970 declining in popularity.
Vigile writes: Earlier this month Micron announced a technology called Dynamic Write Acceleration in the new M600 SSD models that can switch NAND flash between MLC and SLC (multi- and single-level cell) modes on the fly in order to improve performance in low-cost solid state drive implementations. In short, a new and empty M600 SSD will have all of its dies in SLC mode. While the SSD will appear to the user at its rated capacity, the actual flash capacity in SLC mode is half of what it would be if all dies were in MLC mode. As the SSD is filled past 50% capacity, the controller intelligently switches dies from SLC to MLC, shuffling data around in the background as necessary to briefly empty a given die before switching its mode. In PC Perspective's testing though, the hardware was very inconsistent in write speeds, even at the same capacity fill levels, and would often run at a much lower throughput level than expected. Read speeds are not affected by the DWA feature. While interesting in theory, it appears the dynamic flipping technology needs a bit more work.
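The capacity bookkeeping behind that mode switching can be sketched as a toy model (die counts and per-die capacities below are invented, not the M600's actual layout): each die stores some capacity C in MLC mode but only C/2 in SLC mode, so the controller must flip dies to MLC as user data grows.

```python
# Toy model of Dynamic Write Acceleration capacity accounting.
# A hypothetical 256GB drive: 16 dies of 16 GB (MLC) / 8 GB (SLC) each.
def dies_needed_in_mlc(user_data_gb, dies=16, mlc_gb_per_die=16):
    """Fewest dies that must run in slower MLC mode to hold the user's data."""
    slc_gb = mlc_gb_per_die / 2
    for mlc_dies in range(dies + 1):
        capacity = mlc_dies * mlc_gb_per_die + (dies - mlc_dies) * slc_gb
        if capacity >= user_data_gb:
            return mlc_dies
    raise ValueError("data exceeds drive capacity")

print(dies_needed_in_mlc(100))  # under half full: every die can stay in fast SLC mode
print(dies_needed_in_mlc(200))  # ~78% full: 9 of 16 dies must flip to MLC
print(dies_needed_in_mlc(256))  # completely full: all 16 dies in MLC mode
```

The model also hints at why write speeds were inconsistent in testing: near the 50% boundary, the controller must empty and flip dies in the background, and that shuffling competes with incoming writes.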
Vigile writes: Launching today is a new GPU from NVIDIA along with two new graphics cards that utilize it. GM204, the second chip based on the Maxwell architecture, brings an incredibly high level of power efficiency to high-end enthusiast-level graphics cards. The GeForce GTX 980, reviewed by PC Perspective, with 2048 CUDA cores, a 256-bit memory bus, 4GB of GDDR5 running at 7.0 GHz and a base clock over 1100 MHz, is able to outperform cards like the GeForce GTX 780 Ti and the AMD Radeon R9 290X, and will sell for $549. Maybe most impressive is the power draw difference — the GTX 980 uses 130 watts LESS POWER than the R9 290X under full load. The GTX 970, with 1664 CUDA cores, the same memory configuration and a base clock of 1050 MHz, runs at even lower power, outperforming the Radeon R9 290 while using 80 watts less, and has an MSRP of just $329. Faster GPUs using less power — it's pretty impressive. New features of the GTX 900 series include MFAA (multi-frame AA), Dynamic Super Resolution and full DX12 feature set support. And the fact that we were able to overclock the GTX 980 to nearly 1500 MHz doesn't hurt either.
Vigile writes: AMD looks to continue addressing the mainstream PC enthusiast and gamer with releases in two different component categories. First, today marks the launch of the Radeon R9 285 graphics card, a $250 option based on a brand new piece of silicon dubbed Tonga. This GPU offers nearly identical performance to the R9 280 that came before it, but adds support for XDMA PCIe CrossFire and TrueAudio DSP technology, and is FreeSync capable (AMD's response to NVIDIA G-Sync). On the CPU side AMD has refreshed its FX product line with three new models (FX-8370, FX-8370e and FX-8320e) with lower TDPs and supposedly better efficiency. The problem, of course, is that while Intel is already sampling 14nm parts, these Vishera-based CPUs continue to be manufactured on GlobalFoundries' 32nm process. The result is less-than-expected performance boosts and efficiency gains.
Vigile writes: Today Intel released its updated E-class enthusiast platform based on Haswell, known previously as just Haswell-E. The Core i7-5960X Extreme Edition CPU is an 8-core processor (addressing 16 threads with HyperThreading) that doubles the core count of mainstream Haswell parts and jumps past the 6-core parts of previous E-class platforms. That not only translates into dramatic performance increases in highly threaded applications like rendering and encoding, but Haswell-E is also the first consumer platform to integrate a quad-channel DDR4 memory controller, with frequencies starting at 2133 MHz. The top two tiers of Haswell-E processors also include 40 lanes of PCI Express 3.0, while the lower-cost Core i7-5820K will be limited to six cores and 28 lanes of PCIe. New motherboards based on the new X99 chipset are required as well and include additional connectivity like 14 USB ports and 10 SATA 6.0 Gbps channels. Clearly this is the fastest consumer platform tested, but as with all E-class releases, the cost is higher. The Core i7-5960X will set you back $999, and expect to pay at least $500 for a motherboard and 4 DIMMs of the new DDR4 as well.
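The headline benefit of the quad-channel DDR4-2133 controller is easy to quantify: four 64-bit channels at 2133 MT/s work out to roughly 68 GB/s of theoretical peak bandwidth, double what a dual-channel mainstream Haswell board offers at the same memory speed.

```python
# Theoretical peak bandwidth of quad-channel DDR4-2133 (decimal GB).
channels = 4
bytes_per_transfer = 64 // 8   # each channel is 64 bits wide
mts = 2133e6                   # mega-transfers per second per channel
peak_gbs = channels * bytes_per_transfer * mts / 1e9
print(round(peak_gbs, 1))      # -> 68.3
```

This is a bus ceiling, not a sustained figure; real-world copy benchmarks land well below it, but the 2x scaling over dual-channel holds.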
Vigile writes: Intel continues to plug along with new processor architectures and new process technologies in an effort to stay ahead in the consumer and enterprise markets (against AMD) as well as gain ground in the mobile space against the likes of Qualcomm and Samsung. The new 14nm process technology, being detailed for the first time, results in a 0.65x area scaling rate, an improvement over previous generational shifts. Yield appears to be slightly behind where 22nm was at this point in its life cycle, but Intel sees it catching up rather quickly before products ship late this winter. Also detailed was information on Broadwell-Y, the low-power version of the Broadwell microarchitecture. With a die size of just 80 mm^2 (compared to the 130 mm^2 of Haswell-Y) and some changes to the packaging of the dies themselves, Intel is enabling much smaller form factors (as thin as 7mm) with fanless designs. A feature called Duty Cycle Control enables lower "effective" clock speeds than would otherwise be possible given the process's current voltage minimums, reducing power consumption for low-demand tasks. PC Perspective covered the released information on both the 14nm process technology and the Broadwell CPU/GPU changes, and it looks like Intel could be dramatically reinventing itself once again.
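The Duty Cycle Control idea can be reduced to one line of arithmetic (the clock figures below are invented examples, not Intel's specs): rather than lowering voltage below the process minimum, the core runs at its minimum stable clock but is gated off for part of each period, so the effective clock software observes is the duty fraction times the real clock.

```python
# Sketch of the Duty Cycle Control concept with hypothetical numbers.
def effective_clock_mhz(real_clock_mhz, duty_fraction):
    """Average clock seen by software when the core is active duty_fraction of the time."""
    return real_clock_mhz * duty_fraction

# e.g. a core that can run no slower than 800 MHz at minimum voltage,
# gated on only 25% of the time, behaves like a 200 MHz part:
print(effective_clock_mhz(800, 0.25))  # -> 200.0
```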
Vigile writes: NVIDIA G-Sync, though announced back in October of 2013, is finally getting its first wave of releases in the consumer market. The ASUS ROG Swift PG278Q combines a 144 Hz refresh rate on a 2560x1440 27-in TN panel with NVIDIA G-Sync support. PC Perspective tested the variable refresh technology, which updates the monitor at a rate set by the GPU rather than by the display, allowing games to be played without the stutter often seen with V-Sync enabled and without the horizontal tearing seen with V-Sync disabled. The monitor's TN panel limits viewing angles somewhat, but less than traditional TN panel users might anticipate, and it offers one of the fastest response times of any 2560x1440 monitor. Unfortunately, connectivity is limited to DisplayPort on the PG278Q, as that is a requirement of G-Sync, but other features like an integrated USB 3.0 hub and Ultra Low Motion Blur / LightBoost support help justify the rather high $799 price tag.
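The stutter that variable refresh eliminates can be shown with a simplified timing model (not NVIDIA's implementation): with V-Sync on a fixed-refresh panel, a frame is shown only at the next refresh tick, so a frame that narrowly misses a tick stalls until the following one; with a variable-refresh display, the frame appears as soon as the GPU finishes it.

```python
import math

# Simplified frame-delivery model. refresh_hz=None models a variable
# refresh (G-Sync style) display; a number models fixed-refresh V-Sync.
def display_times(frame_done_times, refresh_hz=None):
    """Return when each frame appears on screen, in seconds."""
    if refresh_hz is None:
        return list(frame_done_times)          # shown as soon as rendered
    period = 1.0 / refresh_hz
    # fixed refresh: wait for the next tick after the frame is done
    return [math.ceil(t / period) * period for t in frame_done_times]

# A frame finishing at 20 ms misses the 16.7 ms tick of a 60 Hz panel
# and stalls to the 33.3 ms tick; variable refresh shows it at 20 ms.
print(display_times([0.020], refresh_hz=60))
print(display_times([0.020]))
```

That 13 ms of added, quantized delay is exactly the judder variable refresh removes for frame rates below the panel's maximum.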