Submission + - AMD Fury X with Fiji and HBM Falls Behind GTX 980 Ti (pcper.com)

Vigile writes: Even with months of build-up and hype, culminating last week in a pair of press conferences at E3 to announce it, the reviews of the AMD Radeon R9 Fury X are finally here. Built on the new Fiji GPU, AMD's Fury X has 4,096 stream processors and a 4,096-bit memory bus that runs at just 500 MHz. That High Bandwidth Memory (HBM) implementation results in a total memory bandwidth of 512 GB/s, much higher than the GTX 980 Ti or R9 290X/390X. The Fury X is also the first single-GPU reference card to ship with an integrated self-contained water cooler, keeping the GPU at around 55C while gaming — a very impressive feat that no doubt contributes to the GPU's measured efficiency. But in PC Perspective's testing, the Fury X isn't able to overcome the performance of the GeForce GTX 980 Ti in more than a couple of specific tests, leaving NVIDIA's flagship as the leader in the clubhouse. So even though it's great to see AMD back in the saddle and competing in the high-end space, this $650 graphics card needs a little more work to be a dominant competitor.
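For those checking the math, the 512 GB/s figure falls straight out of the bus width and clock; a quick back-of-envelope sketch (the factor of 2 is HBM's double data rate, and the GTX 980 Ti line is included for scale):

```python
# Back-of-envelope peak memory bandwidth for the Fury X's HBM vs. GDDR5.
# Bandwidth = (bus_width_bits / 8) bytes * effective transfers per second.

def bandwidth_gbps(bus_width_bits, clock_mhz, transfers_per_clock):
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * clock_mhz * 1e6 * transfers_per_clock / 1e9

# Fury X HBM: 4096-bit bus, 500 MHz, double data rate (2 transfers/clock).
print(bandwidth_gbps(4096, 500, 2))    # 512.0 GB/s

# GTX 980 Ti GDDR5 for comparison: 384-bit bus at 7 GHz effective
# (1750 MHz base clock x 4 transfers/clock).
print(bandwidth_gbps(384, 1750, 4))    # 336.0 GB/s
```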

Submission + - NVIDIA GTX 980 Ti Offers Titan X Performance for $350 Less (pcper.com)

Vigile writes: Today at the beginning of Computex in Taipei, NVIDIA is officially unveiling the GeForce GTX 980 Ti graphics card, a new offering based on the same GM200 Maxwell architecture GPU as the GTX Titan X released in March. Though the Titan X sells today for more than $1000, the GTX 980 Ti will start at $650 while offering performance parity with the more expensive option. The GTX 980 Ti has 6GB of memory (versus 12GB for the GTX Titan X), but PC Perspective's review shows no negative side effects from the drop. This implementation of the GM200 GPU uses 2,816 CUDA cores rather than the 3,072 cores of the Titan X, but thanks to higher average Boost clocks, performance between the two cards is identical. Enthusiasts who were considering the Titan X for high-end PC gaming should definitely reconsider with NVIDIA's latest offering. You can read the full review and technical breakdown over at PC Perspective.
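To sanity-check the parity claim: peak shader throughput is just cores x 2 (for fused multiply-add) x clock, so the 980 Ti needs roughly 9% higher average clocks to match. A sketch using an assumed Titan X boost clock, purely for illustration (real-world boost clocks vary card to card):

```python
# How much higher must the GTX 980 Ti boost to match Titan X shader throughput?
# Peak FP32 FLOPS = cuda_cores * 2 (fused multiply-add) * clock.

def peak_tflops(cores, clock_mhz):
    return cores * 2 * clock_mhz * 1e6 / 1e12

titan_x_cores, ti_cores = 3072, 2816

# Required clock ratio for parity:
print(titan_x_cores / ti_cores)          # ~1.09: the 980 Ti needs ~9% higher clocks

# With an assumed 1075 MHz Titan X boost (illustrative, not a measured figure):
print(peak_tflops(3072, 1075))           # ~6.60 TFLOPS
print(peak_tflops(2816, 1075 * 1.09))    # ~6.60 TFLOPS at ~1172 MHz
```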

Submission + - Khronos Group Announces Vulkan to Compete Against DirectX 12

Phopojijo writes: The Khronos Group has announced the Vulkan API for compute and graphics. Its goal is to compete against DirectX 12. It has some interesting features, such as queuing work to multiple GPUs and a bytecode intermediate representation (SPIR-V) for its shading language, which removes the need for a full shading-language compiler in the graphics drivers. Also, the API allows graphics card vendors to support Vulkan with drivers back to Windows XP "and beyond".

Comment Re:Wut? (Score 1) 42

1. That is a false claim - Gamenab didn't even cite the correct FPGA model when he made that DRM claim.
2. G-Sync is actually good down to 1 FPS - it adaptively inserts additional redraws between frames at rates below 30 (sketched below), so as to minimize the possibility of judder (an incoming frame arriving during an already-started panel refresh pass). FreeSync (in its most recently demoed form) reverts back to the VSYNC setting at the low end. Further, you are basing the high end of G-Sync only on the currently released panels. Nothing says the G-Sync FPGA tops out at 144 Hz.
3. I use the word 'experience' because it is 'my experience' - I have personally witnessed most currently shipping G-Sync panels as well as the FreeSync demo at this past CES. I have also performed many tests with G-Sync. Source: I have written several articles about this, including the one linked in this post.
5. I believe the reason it is not yet released is that Nvidia wants to be able to correctly cover more of the range (including the low range / what happens when the game engine hitches).
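On point 2: Nvidia hasn't published the module's algorithm, so the following is purely a conceptual sketch of the frame-redraw behavior described above — when the game's frame rate drops below what the panel can hold, re-scan the previous frame at a multiple of the frame rate. The thresholds are illustrative:

```python
# Conceptual sketch of low-framerate frame redoubling as described above.
# The actual G-Sync module logic is not public; numbers are illustrative.

PANEL_MIN_HZ = 30    # below this the panel cannot hold a frame without decay
PANEL_MAX_HZ = 144

def refresh_plan(game_fps):
    """Return (panel_refresh_hz, redraws_per_frame) keeping the panel in range."""
    redraws = 1
    while game_fps * redraws < PANEL_MIN_HZ:
        redraws += 1             # re-scan the same frame once more
    return min(game_fps * redraws, PANEL_MAX_HZ), redraws

print(refresh_plan(60))   # (60, 1)  -- normal variable refresh
print(refresh_plan(22))   # (44, 2)  -- each frame scanned twice
print(refresh_plan(9))    # (36, 4)  -- each frame scanned four times
```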

Comment Re:its Nvidia FREESYNC (Score 1) 42

Gamenab stumbled across the leaked driver and tried to use it to spread a bunch of conspiracy-theory FUD. I hope most people here can correctly apply Occam's razor, as opposed to the alternative: that he designed those changes himself, that they made it into an internal driver build that was inadvertently leaked, and that the leak happened to apply to the exact laptop he already owned.

ExtremeTech picked apart his BS in more detail: http://www.extremetech.com/ext...

Comment Re:Wut? (Score 1) 42

1. The FPGA *was* required for the tech to work on the desktop panels it was installed in.
2. FreeSync (as I've witnessed so far) as well as the most recent adaptive sync cannot achieve the same result across as wide a refresh rate range as G-Sync currently can.
3. Nvidia could 'make it work', but it would not be the same experience as can be had with a G-Sync module, even with an adaptive sync panel (as evidenced by how the adaptive sync panel in this laptop intermittently blanks out at 30 FPS or when a game hitches).
4. ...
5. The driver was not a release driver, and the experience it gives was never meant to be called 'G-Sync'. It was meant to be internal.

Conclusion - Adaptive sync alone is not the same experience you can currently get with a real G-Sync panel, which is why a possible future G-Sync that does not need a module is not yet a real thing.

Submission + - NVIDIA GTX 970 Specifications Corrected, Memory Pools Explained (pcper.com)

Vigile writes: Over the weekend NVIDIA sent out its first official response to the claims of hampered performance on the GTX 970 and a potential lack of access to 1/8th of the on-board memory. Today NVIDIA has clarified the situation again, this time with some important changes to the specifications of the GPU. First, the ROP count and L2 cache capacity of the GTX 970 were incorrectly reported at launch (last September). The GTX 970 has 56 ROPs and 1792 KB of L2 cache, compared to the GTX 980's 64 ROPs and 2048 KB of L2 cache; previously both GPUs were listed with identical specs. Because of this change, one of the 32-bit memory channels is accessed differently, forcing NVIDIA to divide the memory into a 3.5GB pool and a 0.5GB pool to improve overall performance for the majority of use cases. The smaller, 500MB pool operates at 1/7th the speed of the 3.5GB pool and thus will lower total graphics system performance by 4-6% when it comes into play. That only occurs when a game requests MORE than 3.5GB of memory, though, which happens only at extreme combinations of resolution and anti-aliasing. Still, the jury is out on whether NVIDIA has answered enough questions to temper the fire from consumers.
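To put rough numbers on the two pools: the GTX 970's 256-bit bus at 7 GHz effective gives 224 GB/s total; with one 32-bit channel partitioned off, the fast pool sees 7/8ths of that, and the slow pool runs at 1/7th the fast pool's speed, as described above. A back-of-envelope sketch (the blended figure is a naive simplification; the real impact depends on what the driver steers into the slow pool):

```python
# Rough numbers for the GTX 970's segmented memory, per NVIDIA's description.

total_bw = 256 / 8 * 7e9 / 1e9      # 256-bit bus at 7 GHz effective = 224 GB/s

fast_pool_bw = total_bw * 7 / 8     # 3.5GB pool: 7 of 8 channels = 196 GB/s
slow_pool_bw = fast_pool_bw / 7     # 0.5GB pool: 1/7th of the fast pool = 28 GB/s

print(fast_pool_bw, slow_pool_bw)   # 196.0 28.0

# Naive blended bandwidth if a 3.75GB working set spans both pools:
used_fast, used_slow = 3.5, 0.25
blend = (used_fast + used_slow) / (used_fast / fast_pool_bw
                                   + used_slow / slow_pool_bw)
print(blend)                        # ~140 GB/s -- why spilling past 3.5GB hurts
```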

Submission + - NVIDIA Responds to GTX 970 Memory Issue (pcper.com)

Vigile writes: Over the past week or so, owners of the GeForce GTX 970 have found several instances where the GPU was unable or unwilling to address memory capacities over 3.5GB despite having 4GB of on-board frame buffer. Specific benchmarks were written to demonstrate the issue, and users even found ways to configure games to utilize more than 3.5GB of memory using DSR and high levels of MSAA. While the GTX 980 can access all 4GB of its memory, the GTX 970 appeared to be less likely to do so and would see a dramatic performance hit when it did. NVIDIA responded today, saying that the GTX 970 has "fewer crossbar resources to the memory system" as a result of disabled groups of cores called SMMs. NVIDIA states that "to optimally manage memory traffic in this configuration, we segment graphics memory into a 3.5GB section and a 0.5GB section" and that the GPU gives "higher priority" to the larger pool. The question that remains is whether this should affect gamers' view of the GTX 970. If performance metrics already take the different memory configuration into account, then I don't see the GTX 970 declining in popularity.

Submission + - How we'll know whether BICEP2 was right about gravitational waves

StartsWithABang writes: The Big Bang takes us back to very early times, but not the earliest. It tells us the Universe was in a hot, dense state, where neutral atoms could not yet form due to the incredible energies of the Universe at that time. The patterns of fluctuations that are left over from that time give us insight into the primordial density fluctuations that our Universe was born with. But there's an additional signature encoded in this radiation, one that's much more difficult to extract: polarization. While most of the polarization signal that's present will be due to the density fluctuations themselves, there's a way to extract even more information about an even earlier phenomenon: gravitational waves that were present from the epoch of cosmic inflation! Here's the physics of how that works, and how we'll find out whether BICEP2 was right.
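For the mathematically inclined, a brief sketch of the underlying idea in standard notation: the CMB polarization field decomposes into curl-free "E-modes" and divergence-free "B-modes"; density (scalar) perturbations source only E-modes at linear order, so a primordial B-mode signal points to tensor perturbations, i.e., gravitational waves, whose strength is quoted as the tensor-to-scalar ratio:

```latex
% E/B decomposition of CMB polarization (standard convention):
%   scalar (density) perturbations -> E-modes only (at linear order)
%   tensor (gravitational-wave) perturbations -> both E- and B-modes
%
% The gravitational-wave amplitude is quoted as the tensor-to-scalar ratio
\[
  r \equiv \frac{\Delta_t^2(k_*)}{\Delta_s^2(k_*)},
\]
% the ratio of tensor to scalar primordial power at a pivot scale $k_*$.
% BICEP2 claimed r ~ 0.2; the test is whether the B-mode signal survives
% once polarized galactic dust foregrounds are subtracted.
```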

Submission + - Using naval logbooks to reconstruct past weather—and predict future climate (thebulletin.org) 1

Lasrick writes: What a great idea. The Old Weather Project uses old logbooks to study the weather patterns of long ago, providing a trove of archival data to scientists who are trying to fill in the details of our knowledge about the atmosphere and the changing climate. 'Pity the poor navigator who fell asleep on watch and failed to update his ship’s logbook every four hours with details about its geographic position, time, date, wind direction, barometric readings, temperatures, ocean currents, and weather conditions.' As Clive Wilkinson of the UK's National Maritime Museum adds, 'Anything you read in a logbook, you can be sure that it is a true and faithful account.'

The Old Weather Project uses citizen scientists to transcribe and digitize observations that were scrupulously recorded on a clockwork-like basis, and it is one of several projects that climate scientists are using to create 'a three-dimensional computer simulation that will provide a continuous, century-and-a-half-long profile of the entire planet's climate over time'--the 20th Century Reanalysis Project. Data is checked and rechecked by three different people before entry into the database, and the logbook measurements are especially valuable because they were compiled at sea. Great story.

Submission + - Interviews: Ask Warren Ellis a Question

samzenpus writes: Warren Ellis is an acclaimed British author of comics, novels, and television who is well known for his sociocultural commentary. The movies Red and Iron Man 3 are based on his graphic novels. In addition to numerous other comic titles, he created a personal favorite, Transmetropolitan. Ellis has written for Vice, Wired UK and Reuters on technological and cultural matters, and is co-writing a video project called Wastelanders with Joss Whedon. Warren has agreed to give us some of his time to answer any questions you may have. As usual, ask as many as you'd like, but please, one per post.

Submission + - Micron SSDs with MLC/SLC Conversion Technology Tested

Vigile writes: Earlier this month Micron announced a technology called Dynamic Write Acceleration in its new M600 SSD models, which can switch NAND flash dies between MLC and SLC (multi- and single-level cell) modes on the fly in order to improve performance in low-cost solid state drive implementations. In short, a new and empty M600 SSD will have all of its dies in SLC mode. While the SSD will appear to the user at its rated capacity, the actual flash capacity in SLC mode is half of what it would be if all dies were in MLC mode. As the SSD is filled past 50% capacity, the controller intelligently switches dies from SLC to MLC, shuffling data around as necessary in the background to briefly empty a given die before switching its mode. In PC Perspective's testing, though, write speeds were very inconsistent, even at the same capacity fill levels, and would often run at a much lower throughput than expected. Read speeds are not affected by the DWA feature. While interesting in theory, it appears the dynamic flipping technology needs a bit more work.
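The capacity bookkeeping behind such a scheme can be modeled with a toy sketch (hypothetical controller logic; Micron has not published the M600's actual algorithm, and the die counts and capacities below are made up for illustration):

```python
# Toy model of Dynamic Write Acceleration-style capacity management.
# Hypothetical controller logic; Micron's real firmware behavior may differ.

DIES = 16
DIE_MLC_GB = 16                  # per-die capacity in MLC mode (SLC = half)
RATED_GB = DIES * DIE_MLC_GB     # a 256GB drive, as advertised

def dies_needed_in_mlc(used_gb):
    """Fewest dies that must flip to MLC so physical capacity covers the data.

    With m dies in MLC and the rest in SLC (half capacity):
      capacity = m * DIE_MLC_GB + (DIES - m) * DIE_MLC_GB / 2
    """
    for m in range(DIES + 1):
        if m * DIE_MLC_GB + (DIES - m) * DIE_MLC_GB / 2 >= used_gb:
            return m
    return DIES

print(dies_needed_in_mlc(100))   # 0  -- under 50% full, everything stays SLC
print(dies_needed_in_mlc(160))   # 4  -- controller starts flipping dies to MLC
print(dies_needed_in_mlc(256))   # 16 -- completely full: all dies in MLC
```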

Submission + - GeForce GTX 980 and GTX 970 Bring Unseen Power Efficiency (pcper.com)

Vigile writes: Launching today is a new GPU from NVIDIA along with two new graphics cards that utilize it. GM204, the second chip released based on the Maxwell architecture, brings an incredibly high level of power efficiency to high-end enthusiast graphics cards. The GeForce GTX 980, reviewed by PC Perspective, with 2048 CUDA cores, a 256-bit memory bus, 4GB of GDDR5 running at 7.0 GHz and a base clock over 1100 MHz, is able to outperform cards like the GeForce GTX 780 Ti and the AMD Radeon R9 290X, and will sell for $549. Maybe most impressive is the power draw difference — the GTX 980 uses 130 watts LESS POWER than the R9 290X under full load. The GTX 970, with 1664 CUDA cores, the same memory configuration and a base clock of 1050 MHz, runs at even lower power, outperforming the Radeon R9 290 while using 80 watts less power, and has an MSRP of just $329. Faster GPUs using less power — it's pretty impressive. New features of the GTX 900 series include MFAA (multi-frame AA), Dynamic Super Resolution and full DX12 feature set support. And the fact that we were able to overclock the GTX 980 to nearly 1500 MHz doesn't hurt either.
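To put the 130-watt gap in perspective, a quick back-of-envelope on running costs (the usage hours and electricity price are assumptions, not figures from the review):

```python
# What a 130W load-power difference means over a year of gaming.
# Usage and electricity price are illustrative assumptions.

watts_saved = 130
hours_per_day = 3
price_per_kwh = 0.12           # USD, illustrative

kwh_per_year = watts_saved / 1000 * hours_per_day * 365
print(kwh_per_year)                     # ~142 kWh
print(kwh_per_year * price_per_kwh)     # ~$17/year, plus less heat and fan noise
```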

Submission + - AMD Releases New Tonga GPU, Lowers 8-Core CPU to $229

Vigile writes: AMD looks to continue addressing the mainstream PC enthusiast and gamer with a set of releases in two different component categories. First, today marks the launch of the Radeon R9 285 graphics card, a $250 option based on a brand new piece of silicon dubbed Tonga. This GPU has nearly identical performance to the R9 280 that came before it, but adds support for XDMA PCIe CrossFire and the TrueAudio DSP technology, and is FreeSync capable (AMD's response to NVIDIA G-Sync). On the CPU side, AMD has refreshed its FX product line with three new models (FX-8370, FX-8370e and FX-8320e) with lower TDPs and supposedly better efficiency. The problem, of course, is that while Intel is already sampling 14nm parts, these Vishera-based CPUs continue to be manufactured on GlobalFoundries' 32nm process. The result is smaller-than-expected performance and efficiency gains.
