
Submission + - First Intel Tiger Lake Benchmarks Show Big CPU And Graphics Performance Gains (hothardware.com)

MojoKid writes: Intel formally announced its 11th Gen Core mobile processor family, known by the code name Tiger Lake, a few weeks back and made some bold performance claims for it as well. The company even compared its quad-core variant to AMD's 8-core Ryzen 7 4800U in gaming and content creation. Today Intel lifted the embargo on benchmarks of its Core i7-1185G7 Tiger Lake CPU with on-board Iris Xe graphics, and there's no question Tiger Lake is impressive. Intel indeed achieved single-threaded performance gains north of 20%, with even larger deltas for multithreaded throughput in some cases. In addition, Tiger Lake's integrated Iris Xe graphics put up over 2X the gaming performance of the company's 10th Gen Ice Lake processors, and it looks to be the fastest integrated graphics solution for laptops currently on the market, besting AMD's Ryzen 4000 series as well. Battery life measurements are still out, however, as retail-ready products have yet to hit the channel. Intel notes Tiger Lake-powered laptops from OEM partners should be available in the next month or so.

Submission + - NVIDIA GeForce RTX 3080 Tested: A Huge Leap Forward In Gaming Performance (hothardware.com)

MojoKid writes: NVIDIA CEO Jensen Huang officially unveiled the GeForce RTX 30 series based on the company's new Ampere architecture a couple of weeks back. According to Huang, the GeForce RTX 30 series represents the greatest generational leap in the company's history, and he claimed the GeForce RTX 3080 would offer double the performance of its predecessor. The embargo for GeForce RTX 3080 reviews just lifted, and it seems NVIDIA was intent on making good on its claims. The GeForce RTX 3080 is the fastest GPU released to date, across the board, regardless of the game, application, or benchmark used. Throughout testing, the GeForce RTX 3080 often put up scores more than double those of AMD's current flagship Radeon RX 5700 XT. The RTX 3080 even skunked the NVIDIA Titan RTX and GeForce RTX 2080 Ti by relatively large margins, even though it will retail for almost half the price of a 2080 Ti (at least currently). The bottom line is, NVIDIA's got an absolutely stellar-performing GPU on its hands, and the GeForce RTX 3080 isn't even the best Ampere has to offer, with the RTX 3090 waiting in the wings. GeForce RTX 3080 cards will be available from NVIDIA and third-party board partners on 9/17 at an entry-level MSRP of $699.

Submission + - NVIDIA Unveils GeForce RTX 30 Cards With Much Better Performance And Pricing (hothardware.com)

MojoKid writes: NVIDIA just finished hosting a GeForce webcast on Twitch highlighting its new, oft-leaked Ampere-based GeForce RTX 30 series that will be available soon, in addition to a number of technologies for gamers and creators. The GeForce RTX 3090 will be NVIDIA's full-fat Ampere card and it's a beast, delivering 36 Shader-TFLOPS, 69 RT-TFLOPS for ray tracing, and 285 Tensor-TFLOPS for DLSS processing. At the high end of the stack, NVIDIA claims Ampere offers over 2X the performance and throughput of NVIDIA's Turing GPUs in these areas, though the RTX 3090 will retail for a hefty $1499 Titan-like price tag. However, this time around NVIDIA was more aggressive with its mainstream GPU pricing: the GeForce RTX 3080, which the company claims delivers two times the performance of an RTX 2080, and the GeForce RTX 3070, which is claimed to be faster than a GeForce RTX 2080 Ti, will retail for $699 and $499 respectively. When you consider GeForce RTX 2080 Ti cards still retail for around $1200, the GeForce RTX 3070 value proposition looks especially solid. Other interesting aspects include the new RTX 30 series cooler design, with front and rear-mounted push/pull dual axial fans for better cooling and warm air exhaust out the card's IO backplate. NVIDIA also announced RTX IO, which promises to connect fast PC SSD storage directly to GPU graphics memory via Microsoft's DirectStorage API in DirectX for "instantaneous" game loading. NVIDIA also unveiled a number of new software features and tools, including Omniverse Machinima, for AI-enhanced content creation and animation in games from user webcam input; NVIDIA Reflex, to reduce latency in fast-twitch competitive gaming; and NVIDIA Broadcast, which uses RTX cards to enhance video and audio quality for game streamers and web conferencing applications. NVIDIA's GeForce RTX 3080 will be available at retail on 9/17, the RTX 3090 will follow on 9/24, and the GeForce RTX 3070 will arrive in October.
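
As a rough sanity check on where a figure like 36 Shader-TFLOPS comes from, here is a minimal sketch of the standard theoretical-throughput math. The core count and boost clock below are assumptions based on widely reported RTX 3090 specs (10,496 FP32 CUDA cores, roughly 1.70GHz boost), not figures taken from NVIDIA's webcast:

    # Theoretical FP32 throughput: each shader core can retire one
    # fused multiply-add (2 FLOPs) per clock cycle.
    def peak_fp32_tflops(cuda_cores: int, boost_clock_ghz: float) -> float:
        flops_per_core_per_clock = 2  # one FMA counts as 2 floating-point ops
        return cuda_cores * boost_clock_ghz * flops_per_core_per_clock / 1000.0

    # Assumed (publicly reported) RTX 3090 configuration, hypothetical inputs.
    print(f"~{peak_fp32_tflops(10496, 1.70):.1f} TFLOPS")  # ~35.7, i.e. the quoted ~36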

Submission + - SPAM: Galaxy Note 20 And Note 20 Ultra Reviews Are Live And Samsung Impresses Again

MojoKid writes: Samsung took the wraps off Galaxy Note 20 series product reviews today, along with first performance benchmarks and camera results. Fans of Samsung's Note series will appreciate the Galaxy Note 20 Ultra's large 120Hz 6.9-inch OLED display and expandable storage, while the standard Note 20 is limited to a 6.7-inch 60Hz screen and no microSD card slot. Both of the new Galaxy Note 20s are equipped with Qualcomm's most powerful Snapdragon 865+ 5G mobile platform, which should boost performance over the older Galaxy Note 10 quite a bit. The Snapdragon 865+ is more strictly binned from the best silicon, and the chips are able to sustain higher CPU and GPU frequencies, resulting in a 10% boost in performance over the most recent crop of Snapdragon 865-based phones. Both phones also support Ultra Wide-Band technology for high-speed point-to-point file sharing between devices, and new Samsung S-Pen technology that Samsung claims brings a 40% reduction in latency and better accuracy. In addition, both phones have the same 10MP selfie camera with f2.2 aperture and 80-degree FOV. The triple rear camera arrays on the back of the devices are quite different, though. The Note 20 Ultra is outfitted with a 12MP Ultra Wide Camera (f2.2, FOV: 120), a 12MP Telephoto Camera (f3.0, FOV: 20), and a 108MP Wide-Angle Camera (f1.8, FOV: 79) with a 1/1.33" image sensor, phase-detect AF, OIS, and a new laser AF sensor for faster and more accurate auto-focus in a variety of lighting conditions. All told, the Galaxy Note 20 Ultra and standard Note 20 are significant upgrades versus the previous-gen Note 10. However, as usual, these premium phones also sport premium pricing, starting at $999 for the standard Note 20 and $1200 for the Note 20 Ultra, both of which will be available on August 21.

Submission + - Intel Details New 10nm SuperFin, Tiger Lake And Xe GPU Tech At Architecture Day (hothardware.com)

MojoKid writes: Intel held a virtual Architecture Day event this week, and the company disclosed a number of details regarding its Six Pillars of technology innovation: Process, Architecture, Memory, Interconnect, Security and Software. Many new details were revealed regarding Intel's upcoming Tiger Lake, Willow Cove, Xe Graphics, and hybrid Alder Lake architectures. In addition, its new 10nm SuperFin technology was unveiled, and an array of packaging technologies were disclosed as well, including the next evolution of the company's Foveros 3D stacked silicon packaging. Intel's 10nm SuperFin technology builds upon the company's enhanced FinFET transistors, but adds a new super metal-insulator-metal capacitor, or Super MIM for short. This enhanced transistor tech enables Tiger Lake to have a much wider dynamic power and clock speed curve versus Ice Lake, which results in significant performance and power advantages. In addition, Tiger Lake mobile CPUs will be based on Intel's new Willow Cove architecture, which brings a dual ring bus with a 2X increase in coherent fabric bandwidth and big gains in memory bandwidth. Intel claims Tiger Lake will deliver more than a generational increase in performance over Ice Lake and up to a 2X performance lift in its Gen 12 Xe graphics engine, now with up to 96 execution units versus 64 in the previous gen. Many Intel Xe GPU details were revealed as well, including its new Xe-HPG architecture, which will feature hardware-accelerated ray tracing and target enthusiast gamers.
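
As a quick back-of-the-envelope check on that graphics claim, the execution-unit increase alone accounts for only part of the 2X figure; the rest would have to come from clocks and per-EU architectural gains. A minimal sketch using the EU counts quoted above (the residual factor is inferred arithmetic, not an Intel disclosure):

    # EU counts quoted above: Gen 12 Xe in Tiger Lake vs. the previous gen in Ice Lake.
    gen12_eus, gen11_eus = 96, 64
    claimed_uplift = 2.0  # Intel's "up to 2X" graphics performance claim

    eu_scaling = gen12_eus / gen11_eus      # 1.5x from the wider GPU alone
    residual = claimed_uplift / eu_scaling  # ~1.33x left for clock and per-EU gains
    print(f"EU scaling: {eu_scaling:.2f}x, implied per-EU/clock gain: ~{residual:.2f}x")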

Submission + - NVIDIA Reportedly Could Be Pursuing Arm In Disruptive Acquisition Move (hothardware.com)

MojoKid writes: Word across a number of business and tech press publications tonight is that NVIDIA is reportedly pursuing a possible acquisition of Arm, the chip IP juggernaut whose designs currently power everything from virtually every smartphone on the planet (including iPhones) to a myriad of devices in the IoT and embedded spaces, as well as supercomputers and the datacenter. NVIDIA has risen in the ranks over the past few years to become a force in the chip industry, and more recently has even been trading places with Intel as the most valuable chipmaker in the United States, with a current market cap of $256 billion. NVIDIA has found major success in the consumer and pro graphics, data center, artificial intelligence/machine learning and automotive sectors in recent years, and CEO Jensen Huang has expressed a desire to further branch out into the growing Internet of Things (IoT) market, where Arm chip designs flourish. However, Arm's current parent company, SoftBank, is looking for a hefty return on its investment, and Arm reportedly could be valued at around $44 billion if it were to go public. A deal with NVIDIA, however, would short-circuit those IPO plans and potentially send shockwaves through the semiconductor market.

Submission + - AMD Launches Ryzen 4000 Renoir Processors With Integrated Graphics For Desktops (hothardware.com)

MojoKid writes: Today AMD took the wraps off a new line of desktop processors based on its Zen 2 architecture but also with integrated Radeon graphics, to compete better against Intel with OEM system builders. These new AMD Ryzen 4000 socket AM4 desktop processors are essentially juiced-up versions of AMD's already announced Ryzen 4000 laptop CPUs, but with faster base and boost clocks, as well as faster GPU clocks, for desktop PCs. There are two distinct AMD Ryzen 4000 families. The first is a trio of 65-watt processors that includes the Ryzen 3 4300G (4-core/8-thread), Ryzen 5 4600G (6-core/12-thread), and the flagship Ryzen 7 4700G, offering 8 cores/16 threads, base/boost clocks of 3.6GHz/4.4GHz, 12MB of cache, and 8 Radeon Vega cores clocked at 2100MHz. AMD is also offering three 35-watt processors, the Ryzen 3 4300GE, Ryzen 5 4600GE, and Ryzen 7 4700GE, which share the same base hardware configurations as the "G" models but with slightly lower CPU/GPU clocks to reduce power consumption. In addition, AMD announced its Ryzen Pro 4000 series for business desktops, which also includes a dedicated security processor and support for AMD Memory Guard full system memory encryption. As you might expect, specs (core/cache counts, CPU/GPU clocks) for the Ryzen Pro 4000G (65W) and Ryzen Pro 4000GE (35W) parts largely line up with their consumer desktop counterparts.

Submission + - AMD Launches Ryzen 3000XT Series CPUs At Higher Clock Speeds To Battle Intel (hothardware.com)

MojoKid writes: Last month, AMD made its Ryzen 3000XT series processors official, after weeks of leaks and speculation. Ryzen 3000XT series processors are tweaked versions of the original 3000X series products, but with higher clocks and the ability to maintain turbo frequencies longer. Launching today, AMD's new Ryzen 5 3600XT is a 6-core/12-thread processor with a 3.8GHz base clock and a 4.5GHz max boost clock. That's a 100MHz increase over the 3600X. The Ryzen 7 3800XT is an 8-core/16-thread processor with a base clock of 3.9GHz and a max boost clock of 4.7GHz, which is 200MHz higher than the original 3800X. Finally, the Ryzen 9 3900XT is a 12-core/24-thread processor with a base clock of 3.8GHz and a max boost clock of 4.7GHz, a 100MHz increase over the original Ryzen 9 3900X. AMD also notes these new processors can maintain boost frequencies for somewhat longer durations as well, which should offer an additional performance uplift, based on refinements made to the chip's 7nm manufacturing process. In testing, the new CPUs offer small performance gains over their "non-XT" namesakes, with the 100MHz to 200MHz increases in boost clocks resulting in roughly 2% to 5% increases in both single- and multi-threaded performance in most workloads. Those frequency increases come at the expense of slightly higher peak power consumption as well, of course. The best news may be that AMD's original Ryzen 5 3600X, Ryzen 7 3800X, and Ryzen 9 3900X will remain in the line-up for the time being, but their prices will be slashed a bit, with the new Ryzen 5 3600XT, Ryzen 7 3800XT, and Ryzen 9 3900XT arriving at the same $249, $399, and $499 introductory prices as the originals.
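
As a quick back-of-the-envelope check on why those gains land in the low single digits, the boost-clock bumps themselves only amount to a few percent. A minimal sketch, using the boost clocks quoted above:

    # Boost clocks (GHz) for the XT refresh vs. the original X parts,
    # derived from the figures quoted above.
    pairs = {
        "Ryzen 5 3600XT vs 3600X": (4.5, 4.4),
        "Ryzen 7 3800XT vs 3800X": (4.7, 4.5),
        "Ryzen 9 3900XT vs 3900X": (4.7, 4.6),
    }
    for name, (new_clock, old_clock) in pairs.items():
        gain = (new_clock / old_clock - 1) * 100
        print(f"{name}: +{gain:.1f}% boost clock")
    # Roughly +2.3%, +4.4%, and +2.2%, in line with the observed 2% to 5% deltas.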

Submission + - New Dell XPS 17 Laptop Now On Sale, Impresses In First Reviews (hothardware.com)

MojoKid writes: Dell's premium line of XPS laptops has garnered more than its fair share of awards over the years, but the XPS line-up hasn't featured a larger 17-inch machine. However, Dell recently announced the all-new XPS 17, and as the name implies, this new laptop features a bright, spacious 17-inch 4K display along with Dell's updated design language, introduced with the company's redesigned 2020 XPS line-up. The design has the thinnest of bezels on all four sides of its display, even on the bottom (with a webcam on top, where it should be), and the result is essentially 17 inches of screen real estate in a package that's similar to the average 15-inch notebook form factor. The machine is powered by Intel's 10th Gen Core processors with up to 8 cores, and up to an NVIDIA GeForce RTX 2060 Max-Q GPU with 6GB of GDDR6. Toss in up to 64GB of RAM and a fast Samsung NVMe SSD, and the machine really tears through the benchmarks and even some top game titles. Finally, with up to a 97 Wh battery on board, the 5.5-pound XPS 17 delivers solid battery life as well. It's not cheap, starting at $1399 and weighing in at $2999 for the config tested at HotHardware, but the new Dell XPS 17 just may be as close to perfect as a 17-inch laptop can get these days.

Comment Re:Bullshit! (Score 2) 10

And that would be completely inaccurate. Neural networks and inference libraries are employed in multiple real-world applications on phones today. On-device speech-to-text doesn't happen unless the machine is listening to your spoken words, inferring what you're saying, and translating it to text. This is done with anything from tensor cores, which do exist in modern smartphone SoCs like Snapdragon chips, to on-chip DSP complexes. Machine vision is also used to improve image capture and processing, detecting actual objects in a scene and determining what color/contrast, sharpening, and other balance adjustments need to be applied. Even silly TikTok and Instagram image/video manipulation uses AI on phones these days. So don't call BS unless you really understand what you're speaking of. The benchmarks are relative to the capabilities of a given handset to perform tasks just like these. Obviously workstation and server AI is on another level, and that's often where AI training is done, rather than just inference.
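
To make the on-device inference point concrete, here is a minimal sketch of running a quantized image-classification model through TensorFlow Lite, the kind of runtime that hands this work off to a phone's NPU/DSP via a hardware delegate. The model file name is a hypothetical placeholder, and the input is a dummy frame standing in for a camera capture:

    import numpy as np
    import tensorflow as tf  # provides tf.lite.Interpreter; phones would use the TFLite runtime directly

    # Hypothetical pre-quantized model; on a handset, a delegate (NNAPI, Hexagon DSP,
    # GPU) would route these ops to the SoC's accelerator instead of the CPU.
    interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_int8.tflite")
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Dummy frame matching the model's expected input shape (assumes a uint8 input tensor).
    frame = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()

    scores = interpreter.get_tensor(out["index"])[0]
    print("Top class index:", int(np.argmax(scores)))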

Submission + - A Look At AI Benchmarking For Mobile Devices In A Rapidly Evolving Ecosystem (hothardware.com)

MojoKid writes: AI and machine learning performance benchmarks have been well explored in the data center, but are fairly new and unestablished for edge devices like smartphones. While AI implementations on phones are typically limited to inferencing tasks like speech-to-text transcription and camera image optimization, there are real-world neural network models employed on mobile devices and accelerated by their dedicated processing engines. A deep-dive look at HotHardware into three popular AI benchmarking apps for Android shows that not all platforms are created equal, and that performance results can vary wildly depending on the app used for benchmarking. Generally speaking, it all hinges on which neural networks the benchmarks are testing and what precision is being tested and weighted. Most mobile apps that currently employ some level of AI make use of INT8 (quantized) models. While INT8 offers less precision than FP16 (floating point), it's also more power-efficient and offers enough precision for most consumer applications. Qualcomm Snapdragon 865-powered devices generally offer the best INT8 performance, while Huawei's Kirin 990 in the P40 Pro 5G offers superior FP16 performance. Since INT8 precision for NN processing is more common in today's mobile apps, it could be said that Qualcomm has the upper hand, but the landscape in this area is ever-evolving to be sure.
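
To illustrate the INT8 vs. FP16 trade-off these benchmarks weight differently, here is a minimal sketch of the affine quantization scheme most mobile inference frameworks use; the tensor and its range are illustrative, not taken from any particular benchmark:

    import numpy as np

    def quantize_int8(x: np.ndarray):
        """Affine quantization: q = round(x / scale) + zero_point, clipped to int8."""
        scale = (x.max() - x.min()) / 255.0  # spread the observed range over 256 levels
        zero_point = int(round(-x.min() / scale)) - 128
        q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
        return q, scale, zero_point

    def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
        return (q.astype(np.float32) - zero_point) * scale

    weights = np.random.randn(1000).astype(np.float32)  # stand-in for a layer's weights
    q, scale, zp = quantize_int8(weights)
    max_err = np.abs(dequantize(q, scale, zp) - weights).max()
    print(f"max round-trip error: {max_err:.5f} (scale = {scale:.5f})")
    # INT8 keeps only 256 levels across the tensor's range, coarser than FP16,
    # but cheaper to store and compute, which is why most mobile apps favor it.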
