Vigile writes: A new system for testing graphics card performance, more than a year in the making, is being fully unveiled today: Frame Rating. This technology uses hardware-based capture to record the output directly from the graphics card, then uses post-processing to measure performance and the experience as the user would actually see it, rather than relying on software logs recorded on the gaming system itself. This much more accurate representation of performance has already revealed some interesting highs and some unfortunate lows for graphics vendors. AMD's CrossFire and Eyefinity technologies take the brunt of the damage: Frame Rating shows that in many games, adding a second GPU to your system results in essentially zero improvement in performance, frame rate or animation smoothness. PC Perspective has detailed the new testing methodology and posted the first sets of data across several PC titles.
Vigile writes: A big shift in the way graphics cards and gaming performance are tested has been occurring over the last few months, with many review sites now using frame times rather than just average frame rates to compare products. PC Perspective has now debuted another unique testing methodology, called Frame Rating, that uses video capture equipment capable of recording uncompressed high resolution output direct from the graphics card, a colored bar overlay system, and post-processing of that recorded video to evaluate performance as it is seen by the end user. The benefit is that there is literally no software interference between the data points and what the user sees, making it as close to an "experience metric" as any developed. Interestingly, multi-GPU solutions like SLI and CrossFire show VERY different results when viewed in this light, with AMD's offering clearly presenting poorer, more stuttery animation.
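The colored-bar-overlay idea can be sketched in a few lines: if every rendered frame is tagged with its own bar color before it is sent to the display, the scanline colors in the captured video tell you exactly how long each frame actually stayed on screen. The function, the 60 Hz/1080-line capture parameters, and the sample data below are purely illustrative assumptions, not PC Perspective's actual tooling.

```python
# Hypothetical sketch of the Frame Rating idea: each rendered frame carries a
# solid overlay-bar color, so the captured video's scanlines reveal which
# frame was on screen at every moment of scan-out.

def frame_times_from_bars(scanline_colors, scanlines_per_capture_frame=1080,
                          capture_fps=60):
    """Given the overlay-bar color of every captured scanline (in display
    order), return how long each rendered frame was visible, in ms."""
    time_per_scanline_ms = 1000.0 / (capture_fps * scanlines_per_capture_frame)
    times = []
    current, run = None, 0
    for color in scanline_colors:
        if color == current:
            run += 1
        else:
            if current is not None:
                times.append(run * time_per_scanline_ms)
            current, run = color, 1
    if current is not None:
        times.append(run * time_per_scanline_ms)
    return times

# Example: three frames tagged R/G/B with uneven on-screen durations,
# roughly 8.3, 16.7 and 25.0 ms. The uneven runs are exactly the stutter
# that a software-side FRAPS log would never show.
colors = ["R"] * 540 + ["G"] * 1080 + ["B"] * 1620
print(frame_times_from_bars(colors))
```

In the real pipeline this scan also catches "runt" frames, ones displayed for only a handful of scanlines, which inflate a software frame-rate counter while contributing nothing visible to the animation.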
Vigile writes: Details of the new NVIDIA GeForce GTX TITAN graphics card based on GK110 are already known, including the 7.1 billion transistor GPU, 6GB of on-board frame buffer and full speed double precision compute power, but gaming benchmarks and performance weren't revealed until today. PC Perspective has tested the TITAN against the best graphics cards on the market and found that the new NVIDIA flagship is easily the best single-GPU solution available, though it does fall behind the dual-GK104 based GTX 690 in most cases. Where TITAN really shines is in multi-display, 5760x1080 resolutions. Interestingly, testing of CrossFire configurations of the Radeon HD 7970 was omitted from the article due to concerns about current FRAPS-based testing methods, and an interesting new capture solution for performance analysis is discussed.
Vigile writes: "NVIDIA's new GeForce GTX TITAN graphics card is being announced today, utilizing the GK110 GPU first unveiled in May of 2012 for the HPC and supercomputing markets. The GPU touts 4.5 TFLOPS of computing horsepower from its 2,688 single precision cores and 896 double precision cores, along with a 384-bit memory bus and 6GB of on-board memory, double the frame buffer of AMD's Radeon HD 7970. At 7.1 billion transistors and a 551 mm^2 die size, GK110 is very close to the reticle limit for current lithography technology! The GTX TITAN introduces a new GPU Boost revision based on real-time temperature monitoring, plus support for monitor refresh rate overclocking that will entice gamers, and with a $999 price tag the card could also be one of the best GPGPU options on the market."
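The 4.5 TFLOPS figure is easy to sanity-check with back-of-the-envelope arithmetic: each CUDA core can retire one fused multiply-add (2 FLOPs) per clock. The 837 MHz base clock used below is TITAN's published spec, not stated in the summary above.

```python
# Peak throughput = 2 FLOPs (one FMA) x cores x clock.
sp_cores = 2688            # single precision CUDA cores
dp_cores = 896             # double precision cores (1/3 of SP on GK110)
base_clock_ghz = 0.837     # TITAN base clock, 837 MHz

sp_tflops = 2 * sp_cores * base_clock_ghz / 1000.0
dp_tflops = 2 * dp_cores * base_clock_ghz / 1000.0

print(f"{sp_tflops:.2f} TFLOPS single precision")  # ~4.50
print(f"{dp_tflops:.2f} TFLOPS double precision")  # ~1.50
```

The ~1.5 TFLOPS double precision figure, a 1:3 ratio rather than the heavily cut-down rates of consumer GeForce parts, is what makes the card interesting for GPGPU work despite the gaming branding.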
Vigile writes: NVIDIA is finally releasing the GeForce GTX 660 Ti to gamers today after much speculation and swirling rumors. Interestingly, the new card is based on the exact same GPU configuration as the GeForce GTX 670 but narrows the memory bus from 256-bit to 192-bit, a 25% drop in available memory bandwidth. Even with the same number of shaders and the same clock speeds, the GeForce GTX 660 Ti falls about 15-20% behind its bigger brother in bandwidth-limited games, though at $299 it is more than capable of competing with AMD's Radeon HD 7950 3GB card that sells for $50 more. This version of GK104 retains the GPU Boost technology, Adaptive VSync and Frame Rate Target features, and was overclocked to 1215 MHz in PC Perspective's review. Oh, and you get a free copy of Borderlands 2 with the GeForce GTX 660 Ti as well. Not a bad addition!
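The bandwidth cut follows directly from the narrower bus, since both cards keep their GDDR5 at 6 Gbps per pin; a minimal sketch of the arithmetic (the per-pin rate is the published spec, assumed equal for both cards):

```python
# Memory bandwidth (GB/s) = bus width in bits x effective per-pin rate / 8.
def bandwidth_gbs(bus_bits, gbps_per_pin=6):
    return bus_bits * gbps_per_pin / 8.0

gtx670 = bandwidth_gbs(256)    # 192 GB/s on the 256-bit bus
gtx660ti = bandwidth_gbs(192)  # 144 GB/s on the 192-bit bus

print(gtx670, gtx660ti, f"{1 - gtx660ti / gtx670:.0%} drop")
```

Since shader count and clocks are unchanged, this bandwidth gap is the whole explanation for the performance delta between the two cards.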
Vigile writes: NVIDIA today announced a new technology partnership with Gaikai, an on-demand gaming company that competes with OnLive, bringing GeForce GRID to the cloud gaming ecosystem. GRID aims to increase both the visual quality and the user experience of cloud gaming by decreasing the latencies involved in the process, the biggest hindrance to consumer acceptance. NVIDIA claims to have cut the time for game stream capture and encode by a factor of three by handling the process completely on the GPU, while also decreasing the "game time" with the power of the Kepler GPU. NVIDIA hopes to help both gamers and cloud streaming companies by offering 4x the density currently available at just 75 watts per game stream. The question remains: will the mainstream adopt the on-demand games market as they have the on-demand video market?
Vigile writes: "NVIDIA has been rolling out the new Kepler GPU in various graphics cards since March, starting with the GeForce GTX 680 and then the GTX 690 dual-GPU card with a steep $999 price tag. Today's release of the GTX 670 offers some compelling arguments for being the BEST graphics card on the market. Based on the exact same die as the GTX 680 and 690, but with a single SMX disabled bringing the core count from 1536 to 1344, the GTX 670 still includes 2GB of frame buffer running at 6 Gbps on a 256-bit memory bus. Performance of the card actually rivals the $80-100 more expensive AMD Radeon HD 7970, a card that was the fastest GPU available just a few short months ago. With a price tag of $399, the GTX 670 isn't a mid-range card by any stretch, but it may just be the best card for power and dollar efficiency. Of course, NVIDIA first needs to fight through the massive availability issues they are having with this generation."
Vigile writes: When NVIDIA launched the GTX 680 last month, it was the fastest single GPU graphics card on the market, surpassing the Radeon HD 7970 card released in January. NVIDIA was late to this generation of GPU but is definitely targeting the high-end gamer by releasing the GeForce GTX 690 today, a dual-GPU variant based on the same GK104 chip as the GTX 680. This card features a total of 3072 shader processors, 4GB of GDDR5 memory running at 6 Gbps, and a cooler made of magnesium alloy with trivalent chromium plating. While the price tag is $999, the performance of the card simply blows away anything else on the market, including the dual-GPU GTX 590 and HD 6990 cards.
Vigile writes: NVIDIA's GeForce GTX 560 Ti has been pretty popular since its release but the new GTX 560 Ti 448-core model isn't even based on the same GPU. Instead, this limited edition part is built around the same GF110 die as the GTX 570 and GTX 580 but with one additional processor cluster disabled. In terms of performance, this puts the GTX 560 Ti 448 about 15% ahead of the original GTX 560 Ti and 5% slower than the GTX 570. Pricing and availability are a mixed bag though — with an MSRP of just $289 it becomes a solid value for the money though NVIDIA only expects availability for 6-8 weeks. PC Perspective has a review that looks at two of the retail units based on this part (MSI and EVGA), both of which are overclocked out of the box.
Vigile writes: "It seems that every year some graphics card vendor steps up its game and produces a card that puts the others to shame. While AMD and NVIDIA pushed out the HD 6990 and GTX 590 earlier in the year, ASUS has designed another dual-GPU offering, dubbed the "MARS II," that combines two full GTX 580 cores for a 25% boost in gaming performance over either of those previous bests. As you might expect, in PC Perspective's testing of the new card the power draw is incredibly high, but temperature and noise are kept minimal thanks to a custom cooler built for the task. Oh, and that price: how does $1300 sound?"
Vigile writes: "While the wildly expensive graphics cards tend to get all of the headlines, it is really the ~$150 market that gets the most sales. Recently NVIDIA released the GeForce GTX 550 Ti that brought the Fermi architecture to this market segment and AMD was depending on last generation's model to hold its own. Now, with the Radeon HD 6790, AMD has a new design that takes all of the features of the HD 6800-series and offers them (though slightly slower) at the $150 price point. Performance results show that the new card is much faster than the GTX 550 Ti while also being more power efficient in the process. The aging GTX 460 card from NVIDIA offers another alternative, but it looks like AMD has the best card for this value segment for the time being."
Vigile writes: "Both NVIDIA and AMD have recently released new extreme-high-end graphics cards with dual-GPU configurations, and PC Perspective has compared them to each other, with some standard SLI/CrossFire comparisons for good measure. The GTX 590 pairs two 512 shader processor GF110 GPUs, which had the potential to be the fastest combination available, but the clock speeds were lowered to such a level that it has trouble keeping up with AMD's Radeon HD 6990. Sound levels were noticeably better on NVIDIA's option, though the Radeon card provided better frame rates at the highest resolutions. So, while the $700 video card market just got a pair of new competitors, the best investment for that money might still be two less expensive Radeon or GeForce single-GPU cards."
Vigile writes: NVIDIA's Tegra SoC made a big splash at CES 2011 with its inclusion in many upcoming cell phones and tablet devices such as the Motorola Xoom and Atrix. The company isn't standing still though; with competition coming from Texas Instruments and Qualcomm, it is already talking about quad-core variants, known as Tegra 3, as soon as this year. As the performance of ARM-based platforms increases, it also seems likely that the upcoming generation of gaming consoles will use ARM designs to promote the cross-platform gaming initiatives both Sony and Microsoft are pushing for.
Vigile writes: For many years NVIDIA has been rumored to be entering the highly competitive general-purpose processor market, and while the speculation about x86 never panned out, at CES yesterday NVIDIA did something nearly as dramatic. Shortly before Microsoft's announcement of a Windows operating system running on the ARM architecture, NVIDIA CEO Jen-Hsun Huang took the stage and discussed Project Denver, the NVIDIA initiative to create high performance ARM processors suitable for desktop and server use at standard wattages and TDPs. This signals a fundamental shift in computing, one where companies like NVIDIA can compete with Intel without the often-litigated x86 license.