Catch up on stories from the past week (and beyond) at the Slashdot story archive

 




Submission + - NVIDIA's $329 GeForce RTX 3060 With 12GB Of RAM Benchmarked (hothardware.com)

MojoKid writes: NVIDIA is launching its latest mainstream Ampere-based GPU today, the GeForce RTX 3060, targeting the $329 sweet spot of the PC gaming market. All of the GeForce RTX 3060 cards hitting store shelves today come by way of NVIDIA's board partners, however; there is no Founders Edition reference version. EVGA's GeForce RTX 3060 XC Black Gaming card was put through its paces at HotHardware. The GPU comprises 3584 CUDA cores with a 1777MHz boost clock and 12GB of GDDR6 memory at 7501MHz, along with 28 second-generation RT cores for ray tracing workloads. In the benchmarks, this translates to a GeForce RTX 3060 that lands roughly on par with, or slightly ahead of, NVIDIA's previous-generation RTX 2060 Super cards, offering solid 1440p gaming performance with and without ray tracing enabled. So, if you haven't upgraded for a few generations and are still running anything below a GTX 1660-class GPU, the GeForce RTX 3060 will offer a big performance and feature boost. However, if you've already got an RTX 20-series card, the RTX 3060 will be less appealing, since it trades blows with the GeForce RTX 2060 Super. Spending a few extra dollars on an RTX 3060 Ti (when street prices come back to reality) will net significantly better performance.
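
For anyone who wants to sanity-check the paper spec, here is a minimal back-of-the-envelope sketch (Python) of the standard peak-FP32 arithmetic, using only the figures quoted above plus the usual convention that each CUDA core retires one fused multiply-add (2 FLOPs) per clock:

    def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
        # Peak FP32 = cores x 2 FLOPs (one FMA) per clock cycle
        return cuda_cores * 2 * boost_mhz / 1e6

    print(f"{fp32_tflops(3584, 1777):.1f} TFLOPS")  # ~12.7 TFLOPS peak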

Submission + - NVIDIA GeForce RTX 30-Series Laptops Put To The Test (hothardware.com)

MojoKid writes: This morning, NVIDIA lifted its embargo on the performance of new GeForce RTX 30 Series-powered gaming laptops. Thinner, higher-performance form factors aren't the only features NVIDIA is touting with this launch. A number of new laptops will also sport 1440p, high-refresh-rate IPS displays, like the MSI GS66 Stealth with a GeForce RTX 3080 mobile GPU tested at HotHardware. This machine features a 15.6-inch 1440p IPS panel with a 240Hz refresh rate and G-Sync support. However, the biggest difference between these new laptop GeForce RTX 30 series GPUs and their desktop counterparts is their core counts. Desktop GeForce RTX 3080, 3070, and 3060 series GPUs have 8,704, 5,888, and 3,584 CUDA cores, respectively, whereas the new laptop offerings have 6,144, 5,120, and 3,840; only the RTX 3060 laptop GPU has more cores than its similarly branded desktop counterpart. In the benchmarks, a retail-ready Alienware m15 R4 gaming laptop powered by a GeForce RTX 3070 mobile GPU offered sizable performance gains of 15-25% over the previous-generation RTX 20 series mobile offerings, and an even stronger lift with ray tracing enabled, sometimes in excess of 40%. NVIDIA GeForce RTX 30 Series laptops are in production now and will be available in the coming weeks from major OEMs like Alienware, ASUS, MSI, Gigabyte, and others.
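
The laptop-versus-desktop core counts are easier to compare as ratios; a quick sketch using only the counts quoted above:

    desktop = {"RTX 3080": 8704, "RTX 3070": 5888, "RTX 3060": 3584}
    laptop = {"RTX 3080": 6144, "RTX 3070": 5120, "RTX 3060": 3840}
    for gpu in desktop:
        ratio = laptop[gpu] / desktop[gpu]
        print(f"{gpu}: laptop has {ratio:.0%} of the desktop part's cores")
    # RTX 3080: 71%, RTX 3070: 87%, RTX 3060: 107% (the lone laptop part that's ahead)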

Submission + - Intel Unveils New Core H-Series Laptop And 11th Gen Desktop Processors At CES (hothardware.com)

MojoKid writes: At its virtual CES 2021 event today, Intel EVP Gregory Bryant unveiled an array of new processors and technologies targeting virtually every market, from affordable Chromebooks to enthusiast-class gaming laptops and high-end desktops. Intel's 11th Gen Core vPro platform was announced, featuring new Intel Hardware Shield technology with AI-enabled ransomware and crypto-mining malware detection. In addition, the Rocket Lake-S based Core i9-11900K 8-core CPU was revealed, offering up to a 19% improvement in IPC and the ability to outpace AMD's Ryzen 9 5900X 12-core CPU in some workloads, like gaming. Also, a new high-end hybrid processor, code-named Alder Lake, was previewed. Alder Lake packs both high-performance cores and high-efficiency cores in a single product, for what Intel calls its "most power-scalable system-on-chip" ever. Alder Lake will be manufactured using an enhanced version of 10nm SuperFin technology with improved power and thermal characteristics, and targets both desktop and mobile form factors when it arrives later this year. Finally, Intel launched its new 11th Gen Core H-Series Tiger Lake H35 parts, which will appear in high-performance laptops as thin as 16mm. At the top of the 11th Gen H-Series stack is the Intel Core i7-11375H Special Edition, a 35W quad-core processor (8 threads) that turbos up to 5GHz, supports PCI Express 4.0, and is targeted at ultraportable gaming notebooks. Intel is claiming single-threaded performance improvements in the neighborhood of 15% over previous-gen architectures and a greater-than-40% improvement in multi-threaded workloads. Intel's Bryant also announced an 8-core mobile processor variant leveraging the same architecture as the 11th Gen H-Series, slated to start shipping a bit later this quarter, hitting 5GHz on multiple cores with 20 lanes of PCIe Gen 4 connectivity.
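
For context on what an IPC claim means in practice, single-thread performance is conventionally modeled as IPC times clock frequency. A minimal illustrative sketch: the 19% IPC figure is Intel's, while the clock deltas below are hypothetical examples, not announced specs:

    def perf_uplift(ipc_gain: float, clock_gain: float) -> float:
        # Combined single-thread uplift when IPC and clock scale independently
        return (1 + ipc_gain) * (1 + clock_gain) - 1

    print(f"{perf_uplift(0.19, 0.00):.0%}")   # 19% at identical clocks
    print(f"{perf_uplift(0.19, -0.03):.0%}")  # ~15% if clocks dipped 3%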

Submission + - Boston Dynamics Robots Bust Freakishly Good Moves On The Dance Floor (hothardware.com)

MojoKid writes: Boston Dynamics made news recently when 80% of the company was acquired by Hyundai. The company's family of robots is always impressive, and now it appears they're having some fun to celebrate the close of 2020. Boston Dynamics' robot dog Spot and its humanoid Atlas were joined by their oddball sibling Handle to shake their booties on the dance floor to "Do You Love Me" by The Contours. The video starts off impressively enough, with a single Atlas showing incredible dexterity while busting out sweet moves that would leave even the late Patrick Swayze envious. However, as the routine progresses, the camera pulls back to reveal a second Atlas dancing along with the first as the pair show off their synchronized, fresh rug-cutting ways. As this robotic soul train rolls on, Spot saunters in to join the fun with the distinct flair that only a robot dog can bring. The entire three-minute clip is a marvel to behold, and maybe even slightly unsettling for those who might not fully welcome our robot overlords.

Submission + - AMD Launches Radeon RX 6900 XT To Compete With NVIDIA's GeForce RTX 3090 (hothardware.com)

MojoKid writes: AMD launched its highest-end Radeon gaming graphics card today, dubbed the Radeon RX 6900 XT. At a $999 MSRP, the product is a competitive answer to NVIDIA's pricey $1499 GeForce RTX 3090 at the very top end of the GPU market for PC gamers. The Radeon RX 6900 XT is built around a fully enabled AMD Navi 21 RDNA 2-based GPU, manufactured on TSMC's 7nm process node. The GPU packs roughly 26.8 billion transistors and sports 80 Radeon Compute Units (CUs) and 80 ray accelerators with a 2250MHz core boost clock. Like other members of the Radeon RX 6000 family, the 6900 XT has 16GB of GDDR6 memory on board, and it maintains the same 300-watt board power rating as the lower-end Radeon RX 6800 XT. In the benchmarks, with traditional rasterization, the Radeon RX 6900 XT and GeForce RTX 3090 are fairly nip and tuck, trading victories depending on game title, though the RTX 3090 has the overall edge. Turn on AMD's Rage Mode overclocking and things tighten up a bit. With ray tracing enabled, however, NVIDIA's high-end GeForce RTX 30 series cards currently have a clear advantage.

Submission + - NVIDIA Launches GeForce RTX 3060 Ti Setting A New Gaming Performance Bar At $399 (hothardware.com)

MojoKid writes: NVIDIA expanded its line-up of Ampere-based graphics cards today with the new, lower-cost GeForce RTX 3060 Ti. As its name suggests, the new $399 NVIDIA GPU supplants the previous-gen GeForce RTX 2060 / RTX 2060 Super, and slots in just behind the recently released GeForce RTX 3070. The GeForce RTX 3060 Ti features 38 SMs, each with 128 CUDA cores (4,864 total) and 4 third-gen Tensor cores (152 total), plus one second-gen RT core per SM (38 total). The GPU has a typical boost clock of 1,665MHz and is linked to 8GB of standard GDDR6 memory (not the GDDR6X of the RTX 3080/3090) via a 256-bit memory interface that offers up to 448GB/s of peak bandwidth. In terms of overall performance, the RTX 3060 Ti lands in the neighborhood of the GeForce RTX 2080 Super, and well ahead of cards like AMD's Radeon RX 5700 XT. The GeForce RTX 3060 Ti's 8GB frame buffer may give some users pause, but for 1080p and 1440p gaming it shouldn't be a problem for the overwhelming majority of titles, and it's par for the course in this $399 price band. Cards are reported to be shipping at retail tomorrow.
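
The quoted bandwidth figure follows directly from the bus width; a quick sketch backing out the implied per-pin GDDR6 data rate from the two numbers above:

    bus_bytes = 256 / 8             # 256-bit interface = 32 bytes per transfer
    implied_gbps = 448 / bus_bytes  # peak GB/s divided by bus width in bytes
    print(implied_gbps)             # 14.0 Gbps effective per pin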

Submission + - Radeon RX 6800 And 6800 XT Performance Marks AMD's Return To High-End Graphics (hothardware.com)

MojoKid writes: AMD officially launched its Radeon RX 6800 and Radeon RX 6800 XT graphics cards today, previously known in the PC gaming community as Big Navi. The company claimed these high-end GPUs would compete with NVIDIA's best GeForce RTX 30 series cards, and it appears AMD made good on its claims. The new Radeon RX 6800 XT and Radeon RX 6800 are based on the company's RDNA 2 GPU architecture, with the former sporting 72 Compute Units (CUs) and a 2250MHz boost clock, while the RX 6800 sports 60 CUs at a 2105MHz boost clock. Both cards come equipped with 16GB of GDDR6 memory and 128MB of on-die cache AMD calls Infinity Cache, which improves bandwidth and latency by sitting in front of the GPU's 256-bit GDDR6 memory interface. In the benchmarks, it is fair to say the Radeon RX 6800 is typically faster than an NVIDIA GeForce RTX 3070, just as AMD suggested. Things are not as cut and dried for the Radeon RX 6800 XT, though: the GeForce RTX 3080 and Radeon RX 6800 XT trade victories depending on the game title or workload, but the RTX 3080 has an edge overall. In DXR ray tracing performance, NVIDIA has a distinct advantage at the high end. Though the Radeon RX 6800 wasn't too far behind an RTX 3070, neither the Radeon RX 6800 XT nor the RX 6800 came close to the GeForce RTX 3080. Pricing is set at $649 and $579 for the Radeon RX 6800 XT and Radeon RX 6800, respectively, and the cards are on sale as of today. However, demand is likely to be fierce, as this new crop of high-end graphics cards from both companies has been quickly evaporating from retail shelves.
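
As a rough illustration of why a large on-die cache raises effective bandwidth, a common simplification treats it as a hit-rate-weighted average of cache and DRAM bandwidth. The hit rate and bandwidth figures below are hypothetical placeholders for illustration, not AMD's published numbers:

    def effective_bw(hit_rate: float, cache_gbs: float, dram_gbs: float) -> float:
        # Hits are served at cache speed; misses fall through to GDDR6
        return hit_rate * cache_gbs + (1 - hit_rate) * dram_gbs

    # e.g. a 50% hit rate against a cache roughly 4x faster than the DRAM subsystem:
    print(effective_bw(0.5, 2000, 512))  # 1256.0 GB/s effective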

Submission + - GeForce RTX 3090 Launched: NVIDIA's Biggest, Fastest Gaming GPU Tested (hothardware.com)

MojoKid writes: NVIDIA's GeForce RTX 3090, which just launched this morning, is currently the single most powerful graphics card money can (almost) buy. It sits at the top of NVIDIA's product stack and, according to the company, enables new experiences like smooth 8K gaming and seamless processing of massive content-creation workloads, thanks in part to its 24GB of on-board GDDR6X memory. A graphics card like the GeForce RTX 3090 isn't for everyone, however. Though its asking price is about $1,000 lower than its previous-gen, Turing-based Titan RTX counterpart, it is still out of reach for most gamers. That said, content creation and workstation rendering professionals can more easily justify its cost. In performance testing fresh off the NDA lift, versus the GeForce RTX 3080 that arrived last week, the more powerful RTX 3090's gains range from about 4% to 20%. Versus the more expensive previous-generation Titan RTX, though, the GeForce RTX 3090's advantages increase to approximately 6% to 40%. Factor in complex creator workloads that can leverage the GeForce RTX 3090's additional resources and memory, and it can be many times faster than either the RTX 3080 or Titan RTX. The GeForce RTX 3090 will be available in limited quantities today, but the company pledges to make more available directly and through OEM board partners as soon as possible.

Submission + - First Intel Tiger Lake Benchmarks Show Big CPU And Graphics Performance Gains (hothardware.com)

MojoKid writes: Intel formally announced its 11th Gen Core mobile processor family, code-named Tiger Lake, a few weeks back and made some bold performance claims for it as well. The company even compared its quad-core variant to AMD's 8-core Ryzen 7 4800U in gaming and content creation. Today, Intel lifted the benchmark embargo on its Core i7-1185G7 Tiger Lake CPU with on-board Iris Xe graphics, and there's no question Tiger Lake is impressive. Intel indeed achieved single-threaded performance gains north of 20%, with even larger deltas for multi-threaded throughput in some cases as well. In addition, Tiger Lake's integrated Iris Xe graphics put up over 2X the gaming performance of the company's 10th Gen Ice Lake processors, and it looks to be the fastest integrated graphics solution for laptops on the market currently, besting AMD's Ryzen 4000 series as well. Battery life measurements are still out, however, as retail-ready products have yet to hit the channel. Intel notes Tiger Lake-powered laptops from OEM partners should be available in the next month or so.

Submission + - NVIDIA GeForce RTX 3080 Tested: A Huge Leap Forward In Gaming Performance (hothardware.com)

MojoKid writes: NVIDIA CEO Jensen Huang officially unveiled the GeForce RTX 30 series, based on the company's new Ampere architecture, a couple of weeks back. According to Huang, the GeForce RTX 30 series represents the greatest generational leap in the company's history, and he claimed the GeForce RTX 3080 would offer double the performance of its predecessor. The embargo for GeForce RTX 3080 reviews just lifted, and it seems NVIDIA was intent on making good on its claims. The GeForce RTX 3080 is the fastest GPU released to date, across the board, regardless of the game, application, or benchmark used. Throughout testing, the GeForce RTX 3080 often put up scores more than doubling the performance of AMD's current flagship Radeon RX 5700 XT. The RTX 3080 even skunked the NVIDIA Titan RTX and GeForce RTX 2080 Ti by relatively large margins, even though it will retail for almost half the price of a 2080 Ti (at least currently). The bottom line is, NVIDIA has an absolutely stellar-performing GPU on its hands, and the GeForce RTX 3080 isn't even the best Ampere has to offer, with the RTX 3090 waiting in the wings. GeForce RTX 3080 cards will be available from NVIDIA and third-party board partners on 9/17 at an entry-level MSRP of $699.

Submission + - NVIDIA Unveils GeForce RTX 30 Cards With Much Better Performance And Pricing (hothardware.com)

MojoKid writes: NVIDIA just finished hosting a GeForce webcast on Twitch highlighting its new, oft-leaked Ampere-based GeForce RTX 30 series, which will be available soon, in addition to a number of technologies for gamers and creators. The GeForce RTX 3090 will be NVIDIA's full-fat Ampere card, and it's a beast, delivering 36 Shader-TFLOPS, 69 RT-TFLOPS for ray tracing, and 285 Tensor-TFLOPS for DLSS processing. At the high end of the stack, NVIDIA claims Ampere offers over 2X the performance and throughput of its Turing GPUs in these areas, though the RTX 3090 will retail for a hefty, Titan-like $1499. However, this time around NVIDIA was more aggressive with its mainstream GPU pricing: the GeForce RTX 3080, which the company claims delivers twice the performance of an RTX 2080, and the GeForce RTX 3070, which is claimed to be faster than a GeForce RTX 2080 Ti, will retail for $699 and $499, respectively. When you consider that GeForce RTX 2080 Ti cards still retail for around $1200, the GeForce RTX 3070 value proposition looks especially solid. Another interesting aspect is the new RTX 30 series cooler design, with front- and rear-mounted push/pull dual axial fans for better cooling and warm-air exhaust out of the card's I/O backplate. NVIDIA also announced RTX IO, which promises to connect fast PC SSD storage directly to GPU memory via Microsoft's DirectStorage API in DirectX for "instantaneous" game loading. NVIDIA also unveiled a number of new software features and tools, like Omniverse Machinima, for AI-enhanced content creation and animation in games from user webcam input; NVIDIA Reflex, which reduces latency in fast-twitch competitive gaming; and NVIDIA Broadcast, which uses RTX cards to enhance video and audio quality for game streamers and web conferencing applications. NVIDIA's GeForce RTX 3080 will be available at retail on 9/17, the RTX 3090 will follow on 9/24, and the GeForce RTX 3070 will arrive in October.

Submission + - Galaxy Note 20 And Note 20 Ultra Reviews Are Live And Samsung Impresses Again

MojoKid writes: Samsung took the wraps off Galaxy Note 20 series product reviews today, along with first performance benchmarks and camera results. Fans of Samsung's Note series will appreciate the Galaxy Note 20 Ultra's large 120Hz, 6.9-inch OLED display and expandable storage, while the standard Note 20 is limited to a 6.7-inch 60Hz screen and no microSD card slot. Both of the new Galaxy Note 20s are equipped with Qualcomm's most powerful mobile platform, the Snapdragon 865+ 5G, which should boost performance over the older Galaxy Note 10 quite a bit. The Snapdragon 865+ is more strictly binned from the best silicon, and the chips are able to sustain higher CPU and GPU frequencies, resulting in a roughly 10% performance boost over the most recent crop of Snapdragon 865-based phones. Both phones also support Ultra-Wideband technology for high-speed point-to-point file sharing between devices, and new S Pen technology that Samsung claims brings a 40% reduction in latency and better accuracy. In addition, both phones have the same 10MP selfie camera with an f/2.2 aperture and 80-degree field of view. The triple rear camera arrays on the backs of the devices are quite different, though. The Note 20 Ultra is outfitted with a 12MP ultra-wide camera (f/2.2, 120-degree FOV), a 12MP telephoto camera (f/3.0, 20-degree FOV), and a 108MP wide-angle camera (f/1.8, 79-degree FOV) with a large 1/1.33" image sensor, phase-detect AF, OIS, and a new laser AF sensor for faster, more accurate autofocus in a variety of lighting conditions. All told, the Galaxy Note 20 Ultra and standard Note 20 are significant upgrades versus the previous-gen Note 10. However, as usual, these premium phones also sport premium pricing, starting at $999 for the standard Note 20 and $1,200 for the Note 20 Ultra, both of which will be available on August 21.

Submission + - Intel Details New 10nm SuperFin, Tiger Lake And Xe GPU Tech At Architecture Day (hothardware.com)

MojoKid writes: Intel held a virtual Architecture Day event this week, disclosing a number of details regarding its Six Pillars of technology innovation: Process, Architecture, Memory, Interconnect, Security, and Software. Many new details were revealed regarding Intel's upcoming Tiger Lake, Willow Cove, Xe Graphics, and hybrid Alder Lake architectures. In addition, its new 10nm SuperFin technology was unveiled, along with an array of packaging technologies, including the next evolution of the company's Foveros 3D stacked silicon packaging. Intel's 10nm SuperFin technology builds upon the company's enhanced FinFET transistors but adds a new super metal-insulator-metal capacitor, or Super MIM for short. This enhanced transistor tech gives Tiger Lake a much wider dynamic power and clock speed curve versus Ice Lake, which results in significant performance and power advantages. In addition, Tiger Lake mobile CPUs will be based on Intel's new Willow Cove architecture, which brings a dual ring bus with a 2X increase in coherent fabric bandwidth and big gains in memory bandwidth. Intel claims Tiger Lake will deliver more than a generational increase in performance over Ice Lake and up to a 2X performance lift in its Gen 12 Xe graphics engine, now with up to 96 execution units versus 64 in the previous gen. Many Intel Xe GPU details were revealed as well, including its new Xe-HPG architecture, which will feature hardware-accelerated ray tracing and target enthusiast gamers.

Submission + - AMD Launches Ryzen 3000XT Series CPUs At Higher Clock Speeds To Battle Intel (hothardware.com)

MojoKid writes: Last month, AMD made its Ryzen 3000XT series processors official after weeks of leaks and speculation. Ryzen 3000XT series processors are tweaked versions of the original 3000X series products, with higher clocks and the ability to maintain turbo frequencies longer. Launching today, AMD's new Ryzen 5 3600XT is a 6-core / 12-thread processor with a 3.8GHz base clock and a 4.5GHz max boost clock; that's a 100MHz increase over the 3600X. The Ryzen 7 3800XT is an 8-core / 16-thread processor with a base clock of 3.9GHz and a max boost clock of 4.7GHz, which is 200MHz higher than the original 3800X. Finally, the Ryzen 9 3900XT is a 12-core / 24-thread processor with a base clock of 3.8GHz and a max boost clock of 4.7GHz, a 100MHz increase over the original Ryzen 9 3900X. AMD also notes these new processors can maintain boost frequencies for somewhat longer durations, which should offer an additional performance uplift, thanks to refinements made to the chips' 7nm manufacturing process. In testing, the new CPUs offer small performance gains over their non-XT namesakes, with the 100-200MHz increases in boost clocks resulting in roughly 2-5% gains in both single- and multi-threaded performance in most workloads. Those frequency increases come at the expense of slightly higher peak power consumption, of course. The best news may be that AMD's original Ryzen 5 3600X, Ryzen 7 3800X, and Ryzen 9 3900X will remain in the line-up for the time being with their prices slashed a bit, while the new Ryzen 5 3600XT, Ryzen 7 3800XT, and Ryzen 9 3900XT arrive at the same $249, $399, and $499 introductory prices as the originals.
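
Those 2-5% gains track the clock bumps almost exactly; a quick check using only the boost clocks quoted above:

    parts = {  # name: (XT max boost in GHz, bump over the non-XT part in GHz)
        "Ryzen 5 3600XT": (4.5, 0.1),
        "Ryzen 7 3800XT": (4.7, 0.2),
        "Ryzen 9 3900XT": (4.7, 0.1),
    }
    for name, (boost, bump) in parts.items():
        print(f"{name}: +{bump / (boost - bump):.1%} boost clock vs. non-XT")
    # +2.3%, +4.4%, +2.2% -- right in line with the measured uplifts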

Submission + - A Look At AI Benchmarking For Mobile Devices In A Rapidly Evolving Ecosystem (hothardware.com)

MojoKid writes: AI and machine learning benchmarks have been well explored in the data center, but they are fairly new and unestablished for edge devices like smartphones. While AI implementations on phones are typically limited to inferencing tasks like speech-to-text transcription and camera image optimization, real-world neural network models are employed on mobile devices and accelerated by their dedicated processing engines. A deep-dive look at HotHardware into three popular AI benchmarking apps for Android shows not only that not all platforms are created equal, but also that performance results can vary wildly depending on the app used for benchmarking. It all hinges on which neural networks the benchmarks run and which numeric precision is tested and weighted. Most mobile apps that currently employ some level of AI make use of INT8 (quantized) models. While INT8 offers less precision than FP16 (16-bit floating point), it is more power-efficient and offers enough precision for most consumer applications. Generally speaking, Qualcomm Snapdragon 865-powered devices offer the best INT8 performance, while Huawei's Kirin 990 in the P40 Pro 5G offers superior FP16 performance. Since INT8 precision is more common in today's mobile apps, it could be said that Qualcomm has the upper hand, but the landscape in this area is ever-evolving, to be sure.
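
To make the INT8-versus-FP16 tradeoff concrete, here is a minimal sketch of generic symmetric, per-tensor INT8 quantization, the general technique the summary alludes to rather than any specific benchmark app's implementation:

    import numpy as np

    def quantize_int8(x: np.ndarray):
        # Map FP32 values onto int8 with a single per-tensor scale factor
        scale = np.abs(x).max() / 127.0
        q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
        return q, scale

    def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
        return q.astype(np.float32) * scale

    weights = np.random.randn(4, 4).astype(np.float32)
    q, scale = quantize_int8(weights)
    print(f"max round-trip error: {np.abs(weights - dequantize(q, scale)).max():.4f}")
    # Small but nonzero -- the precision traded away for lower power and faster math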
