An anonymous reader writes: Earlier this month AMD released the air-cooled Radeon R9 Fury graphics card with Fury X-like performance, but the big caveat is that the bold performance is only to be found on Windows. Testing the R9 Fury on Linux revealed that the Catalyst driver delivers devastatingly low performance for this graphics card. With OpenGL Linux games, the R9 Fury performed between the speed of a GeForce GTX 960 and a GTX 970, with the GTX 960 retailing for around $200 while the GTX 970 is $350. The only workloads where the AMD R9 Fury performed as expected under Linux were the Unigine Valley tech demo and OpenCL compute tests. There is also no open-source driver support yet for the AMD R9 Fury.
An anonymous reader writes: An NVIDIA SHIELD Android TV modified to run Ubuntu Linux is providing interesting data on how NVIDIA's latest "Tegra X1" 64-bit ARM big.LITTLE SoC compares to various Intel/AMD/MIPS systems of varying form factors. Tegra X1 benchmarks on Ubuntu show strong performance from the X1 SoC in this $200 Android TV device, beating out low-power Intel Atom/Celeron Bay Trail SoCs and AMD AM1 APUs, and in some workloads even getting close to an Intel Core i3 "Broadwell" NUC. The Tegra X1 features Maxwell "GM20B" graphics, and total power consumption is less than 10 Watts.
An anonymous reader writes: The upcoming Linux 4.2 kernel will premiere the new "AMDGPU" kernel driver, the successor to the "Radeon" DRM kernel driver and part of AMD's long-talked-about new Linux driver architecture for supporting the very latest and all future GPUs. Unfortunately for AMD customers, there's still much waiting. The new open-source AMDGPU Linux code works for Tonga/Carrizo GPUs, but it doesn't yet support the latest R9 Fury "Fiji" GPUs, lacks re-clocking/DPM for Tonga GPUs (leading to low performance), and has stability issues under high-load OpenGL apps/games. There's also the matter that Linux users currently need to jump through hoops to get the code into a working state, with the latest kernel plus forked versions of Mesa and libdrm, new proprietary microcode files, and the new xf86-video-amdgpu user-space driver.
New submitter samtuke writes: AMD processors get rated and reviewed based on performance. It is in our self-interest to make things work really, really fast on AMD hardware. AMD engineers contribute to LibreOffice, for good reason. Think about what happens behind a spreadsheet calculation: there can be a huge amount of math. Writing software to take advantage of a Graphics Processing Unit (GPU) for general-purpose computing is non-trivial. We know how to do it. AMD engineers wrote OpenCL kernels and contributed them to the open-source code base. Turning on the OpenCL option to enable GPU compute resulted in a 500X+ speedup: about ¼ second vs. 2 minutes, 21 seconds. Those measurements come specifically from the ground-water use sample in this set of LibreOffice spreadsheets.
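The quoted times do support the headline claim; a quick sanity check of the arithmetic, using only the figures from the summary above:

```python
# Sanity check of the quoted LibreOffice OpenCL speedup
# (times taken from the summary above).
cpu_time_s = 2 * 60 + 21   # 2 minutes, 21 seconds on the CPU path
gpu_time_s = 0.25          # about a quarter second with OpenCL enabled

speedup = cpu_time_s / gpu_time_s
print(f"speedup: {speedup:.0f}x")   # 564x, consistent with the "500X+" claim
```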
An anonymous reader writes: In past years the AMD Catalyst Linux driver has yielded better performance if an executable was named "doom3.x86" or "compiz" (among other choices), but these days this application-profile concept is made more absurd by more games coming to Linux while AMD fails to maintain its Linux application profile database well. The latest example: ~40% better performance from renaming Counter-Strike: Global Offensive's binary on Linux. If the "csgo_linux" binary is renamed to "hl2_linux" (Half-Life 2's binary) within Steam, frame-rates suddenly increase across the board. This is with the latest Catalyst 15.7 Linux driver, even though CS:GO has been on Linux for nearly a year. Should driver developers re-evaluate their optimization practices for Linux?
MojoKid writes: When AMD launched the liquid-cooled Radeon Fury X, it was obvious the company was willing to commit to a new architecture and bleeding-edge technologies (Fiji and High-Bandwidth Memory, respectively). However, it fell shy of the mark enthusiasts hoped it would achieve, unable to quite deliver a definitive victory against NVIDIA's GeForce GTX 980 Ti. Now AMD has launched the Radeon R9 Fury (no "X," and sometimes referred to as "Fury Air"), a graphics card that brings a more compelling value proposition to the table. It's the Fury release that should give AMD a competitive edge against NVIDIA in the $500+ graphics card bracket. The Radeon R9 Fury's basic specs are mostly identical to those of the liquid-cooled flagship Fury X, with two important distinctions: a 50MHz reduction in GPU clock speed to 1000MHz, and 512 fewer stream processors, for a total of 3584. Here's the interesting news the benchmark results demonstrate: in price the Fury veers closer to the NVIDIA GeForce GTX 980, but in performance it sneaks in awfully close to the GTX 980 Ti.
Deathspawner writes: Following up on the release of 12GB and 16GB FirePro compute cards last fall, AMD has just announced a brand-new top-end card: the 32GB FirePro S9170. Targeted at DGEMM computation, the S9170 sets a new record for GPU memory on a single card, and does so without a dual-GPU design. Architecturally, the S9170 is similar to the S9150, but is clocked a bit faster, and is set to cost about the same as well, between $3,000 and $4,000. While AMD's recent desktop Radeon launch might have left a bit to be desired, the company has proven with the S9170 that it's still able to push boundaries.
nateman1352 links to an article at Tom's Hardware which makes the interesting point that chip-maker AMD will offer Intel -- rather than AMD -- CPUs in their upcoming high-end gaming PC. (High-end for being based on integrated components, at least.) From the article: Recently, AMD showed off its plans for its Fiji based graphics products, among which was Project Quantum – a small form factor PC that packs not one, but two Fiji graphics processors. Since the announcement, KitGuru picked up on something, noticing that the system packs an Intel Core i7-4790K "Devil's Canyon" CPU. We hardly need to point out that it is rather intriguing to see AMD use its largest competitor's CPU in its own product, when AMD is a CPU maker itself.
MojoKid writes: AMD officially launched the Radeon R9 Fury X, based on their next-generation Fiji GPU and HBM 3D-stacked DRAM memory. Fiji is manufactured using TSMC's 28nm process. At its reference clocks of 1050MHz (GPU) and 500MHz (HBM), Fiji and the Radeon R9 Fury X offer peak compute performance of 8.6 TFLOPs, up to 268.8 GT/s of texture fill-rate, 67.2 GP/s of pixel fill-rate, and a whopping 512GB/s of memory bandwidth, thanks to HBM. Its compute performance, memory bandwidth, and texture fill-rate are huge upgrades over the previous-generation AMD Hawaii GPU and even outpace NVIDIA's GM200, which powers the GeForce Titan X and 980 Ti. To keep the entire assembly cool, AMD strapped a closed-loop liquid cooler onto the Fury X. There's a reason AMD went that route on this card, and it's not because they had to: there will be air-cooled Fury and Fury Nano cards coming in a few weeks that feature fully-functional Fiji GPUs. What the high-powered liquid cooler on the Fury X does is allow the use of an ultra-quiet fan, with the side benefit of keeping the GPU very cool under both idle and load conditions (around 60C max under load and 30C at idle), which helps reduce overall power consumption by limiting leakage current. The AMD Radeon R9 Fury X performed very well in the benchmarks and remained competitive with a similarly priced, reference NVIDIA GeForce GTX 980 Ti, but it wasn't a clear win. Generally speaking, the Fury X was the faster of the two cards at 2560x1440. With the resolution cranked up to 3840x2160, however, the Fury X and 980 Ti trade victories.
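Those peak figures all fall out of Fiji's unit counts and reference clocks. A back-of-envelope check (note: the 256 TMUs, 64 ROPs, and 4096-bit aggregate HBM bus width are taken from AMD's published Fiji specifications, not stated in the summary above):

```python
# Back-of-envelope check of the Radeon R9 Fury X peak figures quoted above.
# Unit counts (4096 shaders, 256 TMUs, 64 ROPs, 4096-bit HBM bus) are from
# AMD's published Fiji specifications, not from the summary itself.
gpu_clock_ghz = 1.05   # 1050 MHz reference GPU clock
hbm_clock_mhz = 500    # 500 MHz HBM clock, double data rate

tflops     = 4096 * 2 * gpu_clock_ghz / 1000      # 2 FLOPs/shader/cycle (FMA)
texel_rate = 256 * gpu_clock_ghz                  # GT/s
pixel_rate = 64 * gpu_clock_ghz                   # GP/s
bandwidth  = 4096 * hbm_clock_mhz * 2 / 8 / 1000  # GB/s, four 1024-bit stacks

print(tflops, texel_rate, pixel_rate, bandwidth)
# ≈ 8.6 TFLOPs, 268.8 GT/s, 67.2 GP/s, 512.0 GB/s -- matching the quoted specs
```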
MojoKid writes: AMD announced the new Radeon R9 and R7 300 series of graphics cards earlier this week, and while they are interesting, they're not nearly as impressive as AMD's upcoming flagship GPU, codenamed Fiji. Fiji will find its way into three products this summer: the Radeon R9 Nano, Radeon R9 Fury, and the range-topping (and water-cooled) Radeon R9 Fury X. Other upcoming variants, like AMD's dual-Fiji board, were teased at E3 but are still under wraps. However, while full reviews are still under embargo, the official specifications of the Radeon R9 Fury X have been revealed, along with an array of benchmark scores comparing the GPU to NVIDIA's GeForce GTX 980 Ti. Should the numbers AMD has released jibe with independent testing, the Radeon R9 Fury X looks strong and possibly faster than NVIDIA's GeForce GTX 980 Ti.
MojoKid writes: Today AMD announced new graphics solutions ranging from the bottom to the top ($99 on up to $649). First up is a new range of R7 300 Series cards aimed squarely at gamers who AMD says are typically running at 1080p. For gamers who want a little more power, there's the new R9 300 Series (think of them as R9 280s with higher clocks and 8GB of memory). Finally, AMD unveiled its Fiji graphics cards, which feature onboard High Bandwidth Memory (HBM) offering 3x the performance-per-watt of GDDR5. Fiji has 1.5x the performance-per-watt of the R9 290X and was built with a focus on 4K gaming. The chip itself features 4096 stream processors and comprises 8.9 billion transistors. It has a graphics core clock of 1050MHz and is rated at 8.6 TFLOPs. AMD says there will also be plenty of overhead for overclocking. Lastly, AMD took the opportunity to showcase "Project Quantum," a small form-factor PC that manages to cram two Fiji GPUs inside. The processor, GPUs, and all other hardware are incorporated into the bottom of the chassis, while the cooling solution is built into the top of the case.
MojoKid writes: A fresh alleged leak of next-gen AMD Fiji graphics info has just hit the web, with an abundance of supposedly confirmed specifications for what will be AMD's most powerful graphics card to date. Fiji will initially be available in both Pro and XT variants, with the Fiji Pro dubbed "Fury" and the Fiji XT dubbed "Fury X." The garden-variety Fury touts single-precision floating point (SPFP) performance of 7.2 TFLOPS, compared to 5.6 TFLOPS for a bone-stock Radeon R9 290X, which is roughly a 29-percent improvement. The Fury X, with its 4096 stream processors, 64 compute units, and 256 texture mapping units, manages to deliver 8.6 TFLOPS, a 54-percent increase over the Radeon R9 290X. The star of the show, however, will be AMD's High Bandwidth Memory (HBM) interface. Unlike traditional GDDR5 memory, HBM is stacked vertically, decreasing the PCB footprint required. It's also integrated directly into the same package as the GPU/SoC, leading to further efficiencies, reduced latency, and a blistering 100GB/sec of bandwidth per stack (4 stacks per card). On average, HBM is said to deliver three times the performance-per-watt of GDDR5 memory. That said, the listed specs are by no means confirmed by AMD yet. We shall find out soon enough at AMD's E3 press conference scheduled for June 16.
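The claimed percentage gains check out against the quoted TFLOPS numbers:

```python
# Verifying the leaked TFLOPS deltas quoted above against the R9 290X baseline.
r9_290x = 5.6   # TFLOPS, stock Radeon R9 290X
fury    = 7.2   # TFLOPS, Fiji Pro "Fury"
fury_x  = 8.6   # TFLOPS, Fiji XT "Fury X"

print(f"Fury:   +{(fury / r9_290x - 1) * 100:.0f}%")    # +29%
print(f"Fury X: +{(fury_x / r9_290x - 1) * 100:.0f}%")  # +54%
```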
itwbennett writes: Steve Casselman at Seeking Alpha was among the first to suggest that Xilinx should buy AMD because, among other reasons, it 'would let Xilinx get in on the x86 + FPGA fabric tsunami.' The trouble with this, however, is that 'AMD's server position is minuscule.... While x86 has 73% of the server market, Intel owns virtually all of it,' writes Andy Patrizio. At the same time, 'once Intel is in possession of the Altera product line, it will be able to cheaply produce the chip and drop the price, drastically undercutting Xilinx,' says Patrizio. And, he adds, buying AMD wouldn't give Xilinx the same sort of advantage 'since AMD is fabless.'
An anonymous reader writes: In trying to offer a unique look at how Intel x86 CPU performance has evolved since their start, Phoronix celebrated their 11th birthday by comparing modern CPUs to old Socket 478 CPUs with the NetBurst Celeron and Pentium 4C on an Intel 875P+ICH5R motherboard. These old NetBurst processors were compared to modern Core and Atom processors from Haswell, Broadwell, Bay Trail and other generations. There were also some AMD CPUs and the NVIDIA Tegra K1 ARM processor. Surprisingly, in a few Linux tests the NetBurst CPUs performed better than AMD E-Series APUs and an Atom Bay Trail. However, for most workloads, the 45+ other CPUs tested ended up being multiple times faster; for the systems where the power consumption was monitored, the power efficiency was obviously multiple times better.
An anonymous reader writes: Intel has often been portrayed as the golden child within the Linux community and among those desiring a fully-free system with a fully-supported open-source driver and no binary blobs tainting their kernel. Over the years, the Intel Linux graphics driver hasn't required any firmware blobs for acceleration, in contrast to AMD's open-source driver with its many binary-only microcode files, and to Nouveau, which also needs blobs — including firmware files that NVIDIA still hasn't released for their latest GPUs. However, beginning with Intel Skylake and Broxton CPUs, Intel's open-source driver will now require closed-source firmware too. The required "GuC" and "DMC" firmware files handle the new hardware's workload-scheduling engine and display microcontroller, respectively. These firmware files are explicitly closed-source licensed and forbid any reverse-engineering. What choices are left for those wanting a fully-free, de-blobbed system while having a usable desktop?
MojoKid writes: AMD had previously only teased bits of detail regarding its forthcoming 6th Generation A-Series APU, codenamed "Carrizo," as far back as CES 2015 in January and more recently with AMD's HSA (Heterogeneous System Architecture) 1.0 spec roll-out in March. Today, however, the company has officially launched the product and lifted the veil on all aspects of its new highly integrated notebook APU. Carrizo has been optimized for the 15 Watt TDP envelope that comprises the bulk of the current thin-and-light notebook market, and it brings a couple of firsts to integrated notebook chip designs. AMD's Carrizo APU is the first SoC architecture to fully support the HSA 1.0 specification, allowing full memory coherency of a shared memory space, up to 32GB, for both CPU and GPU. It's also the first integrated chip to include full hardware support for H.265/HEVC HD video decoding. Finally, Carrizo is the first AMD APU to have a fully integrated, in-silicon Southbridge controller block. So, with its CPU, GPU, memory controller, Northbridge, Southbridge, and PCIe 3.0 links, Carrizo is truly a fully integrated System on a Chip. The company is claiming a 39% CPU performance lift (a combination of clock speed and IPC) and up to a 65% lift in graphics versus the previous-generation Kaveri APU. AMD notes laptops from major vendors will begin shipping in the next few weeks.
edxwelch writes: Intel has finally released their Broadwell desktop processors. Featuring Iris Pro Graphics 6200, they take the integrated graphics crown from AMD (albeit costing three times as much). However, they are not as fast as current Haswell flagship processors and they will be soon superseded by Skylake, to be released later this year. Tom's Hardware and Anandtech have the first reviews of the Core i7-5775C and i5-5675C.
Deathspawner writes: In advance of the rumored pending launch of AMD's next-generation Radeon graphics cards, NVIDIA has decided to pull no punches and release a seriously tempting GTX 980 Ti at $649. It's tempting both because the extra $150 it costs over the GTX 980 is more than made up for in performance gained, and because despite coming really close to the performance of the TITAN X, it costs $350 less. AMD's job might just have become a bit harder. Vigile adds: The GTX 980 Ti has 6GB of memory (versus 12GB for the GTX Titan X), but PC Perspective's review shows no negative side effects from the drop. This implementation of the GM200 GPU uses 2,816 CUDA cores rather than the 3,072 cores of the Titan X, but thanks to higher average Boost clocks, performance between the two cards is identical. And at Hot Hardware, there's another equally positive, benchmark-laden review.
MojoKid writes: Recently, a few details of AMD's next-generation Radeon 300-series graphics cards have trickled out. Today, AMD publicly disclosed new info regarding the High Bandwidth Memory (HBM) technology that will be used on some Radeon 300-series and APU products. Currently, a relatively large number of GDDR5 chips are necessary to offer sufficient capacity and bandwidth for modern GPUs, which means significant PCB real estate is consumed. On-chip integration is not ideal for DRAM, because it is not size- or cost-effective on a logic-optimized GPU or CPU manufacturing process. HBM, however, brings the DRAM as close as possible to the logic die (GPU). AMD partnered with Hynix and a number of other companies to help define the HBM specification and design a new type of memory chip with low power consumption and an ultra-wide bus width, which was eventually adopted by JEDEC in 2013. They also developed a DRAM interconnect called an "interposer," along with ASE, Amkor, and UMC. The interposer allows DRAM to be brought into close proximity with the GPU and simplifies communication and clocking. HBM DRAM chips are stacked vertically, and "through-silicon vias" (TSVs) and "bumps" are used to connect one DRAM chip to the next, then to a logic interface die, and ultimately to the interposer. The end result is a single package on which both the GPU/SoC and the High Bandwidth Memory reside. 1GB of GDDR5 memory (four 256MB chips) requires roughly 672mm2 of board area; because HBM is vertically stacked, that same 1GB requires only about 35mm2. The bus width on an HBM chip is 1024 bits, versus 32 bits on a GDDR5 chip. As a result, the High Bandwidth Memory interface can be clocked much lower yet still offer more than 100GB/s per HBM stack, versus 25GB/s with a GDDR5 chip. HBM also requires significantly less voltage, which equates to lower power consumption.
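The per-chip bandwidth gap follows from the bus widths and per-pin data rates. A sketch of that arithmetic (the GDDR5 per-pin rate of 6.25 Gbps is an assumption chosen to reproduce the quoted 25GB/s figure; shipping GDDR5 parts range roughly from 5 to 7 Gbps):

```python
# Per-chip bandwidth comparison implied by the bus widths quoted above.
# The GDDR5 per-pin data rate (6.25 Gbps) is an assumption that reproduces
# the quoted 25GB/s; real parts range roughly from 5 to 7 Gbps.
def chip_bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak bandwidth of one memory chip/stack in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

hbm   = chip_bandwidth_gbs(1024, 1.0)   # 500 MHz DDR = 1 Gbps per pin
gddr5 = chip_bandwidth_gbs(32, 6.25)

print(hbm, gddr5)   # 128.0 GB/s per HBM stack vs 25.0 GB/s per GDDR5 chip
```

Note that 128GB/s per stack is consistent with the "more than 100GB/s" claim above, and four stacks give the Fury X's 512GB/s aggregate despite the far lower clock.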