MojoKid writes: Qualcomm is lifting the veil on performance benchmark numbers for its new Snapdragon 835 processor today (or Mobile Platform, as Qualcomm now refers to it), and it's looking like a notable improvement over the Snapdragon 820 and 821. The Snapdragon 835 is expected to provide up to 11 hours of 4K video playback and hours of VR gaming on a single charge, along with performance increases of up to 25 percent in both CPU and graphics/gaming workloads. The Snapdragon 835 SoC is built on 10nm FinFET technology, which results in significantly lower power consumption, though Qualcomm notes that most of the efficiency gains were realized in the 8-core CPU block, an ARM big.LITTLE design. Four larger, semi-custom Kryo 280 cores are clocked at 2.45GHz, while the four smaller, lower-power cores are clocked at up to 1.9GHz. Though proving the battery life claims will have to wait until retail smartphones employing the new Snapdragon SoC ship in market, benchmark numbers taken from Qualcomm prototype devices show impressive results. Gains north of 20 percent were observed in gaming and graphics tests, with more modest gains of 10 to 15 percent in CPU-centric tests.
MojoKid writes: Intel unveiled its first SSD product to leverage 3D XPoint memory technology, the new Optane SSD DC P4800X. The Intel SSD DC P4800X resembles some of Intel's previous enterprise storage products, but this product is all new, from its controller to its 3D XPoint storage media, which was co-developed with Micron. The drive's sequential throughput isn't impressive versus other high-end, enterprise NVMe storage products, but the Intel Optane SSD DC P4800X shines at very low queue depths with high random 4KB IO throughput, where NAND flash-based storage products tend to falter. The drive's endurance is also exceptionally high, rated for 30 drive writes per day or 12.3 petabytes written. Intel provided some performance data comparing its NAND-based SSD DC P3700 to the Optane SSD DC P4800X in a few different scenarios. One test shows read IO latency with the drive under load, and not only is the P4800X's read IO latency significantly lower, but it is very consistent regardless of load. With a 70/30 mixed read/write workload, the Optane SSD DC P4800X also offers between 5x and 8x better performance versus standard NVMe drives. The 375GB Intel Optane SSD DC P4800X add-in card will be priced at $1520, which is roughly three times the cost per gigabyte of Intel's high-end SSD DC P3700. In the short term, expect Intel Optane solid state drives to command a premium. As availability ramps, however, prices will likely come down.
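The cost-per-gigabyte comparison is easy to sanity-check. A quick bit of arithmetic, using only the article's figures (the 3x multiple comes from the article; the implied P3700 price is derived from it, not an independently verified street price):

```python
# Cost-per-gigabyte arithmetic from the article's figures.
p4800x_price_usd = 1520    # 375GB Optane SSD DC P4800X add-in card
p4800x_capacity_gb = 375

cost_per_gb = p4800x_price_usd / p4800x_capacity_gb
print(f"Optane P4800X: ${cost_per_gb:.2f}/GB")            # $4.05/GB

# The article says this is roughly 3x the P3700's cost per gigabyte,
# which implies the NAND-based P3700 sells for about:
implied_p3700_per_gb = cost_per_gb / 3
print(f"Implied P3700: ${implied_p3700_per_gb:.2f}/GB")   # $1.35/GB
```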
MojoKid writes: NVIDIA is officially launching its most powerful gaming graphics card today, the GeForce GTX 1080 Ti. It was announced last week at the Game Developers Conference and pre-orders began shortly thereafter. However, the cards begin shipping today, and NVIDIA has lifted the veil on performance reviews. Though its memory complement and a few blocks within the GPU are reduced versus NVIDIA's previous top-end card, the Titan X, the GeForce GTX 1080 Ti makes up for its shortcomings with a combination of refinement and brute force: higher memory clocks based on new and improved Micron GDDR5X memory, faster core clocks, and an improved cooler. For gamers, the good news is that the 1080 Ti retails for $699, versus $1200 for the Titan X, and it is in fact faster, for the most part. Throughout a battery of game tests and benchmarks, regardless of the resolution or settings used, the GeForce GTX 1080 Ti performed on par with or slightly faster than the NVIDIA Titan X, and roughly 30 to 35 percent better than a standard GeForce GTX 1080 Founders Edition. Versus AMD's current flagship GPU, the Radeon R9 Fury X, there is no competition; the GeForce GTX 1080 Ti was nearly 2x faster than the Fury X in some cases.
MojoKid writes: AMD has finally lifted the veil on independent reviews of its new Ryzen series of desktop processors, which bring the company's CPU architecture back onto competitive footing versus its rival Intel's Core series. The initial family of Ryzen processors consists of three 8-core chips: the Ryzen 7 1800X at 3.6GHz with boost to 4.1GHz, the Ryzen 7 1700X at 3.4GHz with boost to 3.8GHz, and the Ryzen 7 1700 at 3GHz with boost to 3.7GHz. Each supports 2 threads per core, for a total of 16 threads, with 16MB of L3 cache on board, 512KB of L2 cache per core, and TDPs that range from 65 watts for the Ryzen 7 1700 at the low end up to 95 watts for the 1700X and 1800X. In comparison to AMD's long-standing A-series APUs and FX-series processors, the new architecture is significantly more efficient and performant than any of AMD's previous desktop offerings. AMD designed the Zen microarchitecture at the heart of Ryzen with performance, throughput, and efficiency in mind. Initially, AMD had reported a 40% target for IPC (instructions per clock) improvement with Zen, but actually realized about a 52% lift in overall performance. In general compute workloads, rendering, and clock-for-clock comparisons, the Ryzen 7 1800X either outperformed Intel's much more expensive Core i7-6900K or gave it a run for its money. The lower clock speeds of the Ryzen 7 1700X and 1700 obviously resulted in performance a notch behind the flagship 1800X, but those processors also performed quite well. Ryzen was especially strong in heavily threaded workloads like 3D rendering and ray tracing, but even in less strenuous tests like PCMark, the Ryzen 7 series competed favorably. It's not all good news, though. With some older code, audio encoding, lower-res gaming, and platform-level tests, Ryzen trailed Intel, sometimes by a wide margin. There's obviously still optimization work to be done, by both AMD and software developers.
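To see how a 40% IPC target and a 52% overall lift can both be true, note that single-thread performance scales roughly as IPC times clock frequency. A minimal sketch, where only the 40% target and the ~52% result come from the article and the 8.5% clock figure is purely illustrative:

```python
# Back-of-envelope model: overall single-thread performance scales
# roughly as IPC (instructions per clock) x clock frequency.
def perf_lift(ipc_gain, clock_gain=0.0):
    """Combined speedup (as a fraction) from an IPC gain and a clock gain."""
    return (1 + ipc_gain) * (1 + clock_gain) - 1

# AMD's original target: +40% IPC at the same clock.
print(f"{perf_lift(0.40):.0%}")          # 40%

# The reported ~52% overall lift could come from a +40% IPC gain
# combined with an illustrative +8.5% clock gain:
print(f"{perf_lift(0.40, 0.085):.1%}")   # 51.9%
```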
MojoKid writes: LG unveiled the new G6 smartphone today, going completely back to the drawing board versus its predecessor, the not-so-well-received G5. In its place is a very compact aluminum unibody design and a large 5.7-inch QHD+ display with a 2880x1440 resolution. That display is the main focal point of the G6, and it has a rather unorthodox 18:9 aspect ratio, which LG says allows the smartphone to better fit in your hand. LG also notes that the aspect ratio is being adopted as a universal format by the likes of film studios and content providers like Netflix. Its thin bezels also give the LG G6 an 80 percent screen-to-body ratio. The handset is powered by a Qualcomm Snapdragon 821 processor along with 4GB of RAM, 32GB of internal storage, and a microSD slot that can accommodate up to an additional 2TB of storage. LG also outfitted the G6 with dual 13MP rear cameras: a wide-angle (F2.4 / 125) shooter and a standard camera (F1.8 / 71) with optical image stabilization. The LG G6 launches next month and will be available in Ice Platinum, Mystic White, and Astro Black color options. Pricing is TBD.
MojoKid writes: Yet another AMD Ryzen leak is making the rounds, one that details an extensive lineup of 17 processors. It's a continuation of a previous leak supposedly outing AMD's top-to-bottom retail launch lineup, only now with individual part numbers and TDP ratings for every SKU. The leaked chart lists all 17 Ryzen SKUs, a dozen of which sport 65W TDP ratings, while the remaining five are listed with a 95W TDP. Eight of the Ryzen chips are quad-core parts, four are six-core CPUs, and five are eight-core processors. AMD will allegedly bundle an updated Wraith cooler, codenamed HS81, with its Black Edition Ryzen processors that have a 95W TDP. They include the Ryzen 7 1800X, Ryzen 7 1700X, and Ryzen 5 1600X. New information on one of AMD's top-end chips, the Ryzen 7 1700X, has also surfaced, claiming a $389 price tag and performance on par with Intel's Core i7-6900K Broadwell-E 8-core chip, which retails for over $1K.
MojoKid writes: Microsoft may have discontinued its Lumia family of smartphones, but that doesn't mean that the company has given up on handsets altogether. A new patent filing reveals that Microsoft could still have a few more tricks up its sleeve; in this case, a folding smartphone. If such a design were to make it to production, it would likely adopt Surface branding, joining the likes of the flexible and convertible Surface Pro, Surface Book and Surface Studio. Entitled "Mobile Computing Device Having A Flexible Hinge Structure", the patent shows a smartphone with a side-mounted hinge that opens up to reveal an uninterrupted, large display surface more fitting for tablet duty. And just like patent filings leaked the Surface Studio months before its official unveil, this could be a precursor to a future Microsoft product. Of course, there are no guarantees when it comes to patent filings, as Microsoft has patented many design innovations without acting on them with a shipping product.
MojoKid writes: When Microsoft first launched Windows 10, it was generally well-received but also came saddled with a number of privacy concerns. It has taken quite a while for Microsoft to respond to these concerns in a meaningful way, but the company is finally proving that it's taking things seriously by detailing some enhanced privacy features coming to a future Windows 10 build. Microsoft is launching what it calls a web-based privacy dashboard, which lets you configure anything and everything about the information that might be sent back to the mothership. You can turn all tracking off, or pick and choose if certain criteria, like location or health activity, don't concern you too much. Also, for fresh installs, you'll be given more specific privacy options so that you can feel confident from the get-go about the information you're sending Redmond's way. If you do decide to send data, the company promises that it won't use your information for the sake of targeted advertising.
MojoKid writes: AMD is announcing a new series of Radeon-branded products today, targeted at machine intelligence and deep learning enterprise applications, called Radeon Instinct. As its name suggests, the new Radeon Instinct line comprises GPU-based solutions for deep learning, inference, and training. The new GPUs are complemented by a free, open-source library and framework for GPU accelerators, dubbed MIOpen. MIOpen is architected for high-performance machine intelligence applications and is optimized for the deep learning frameworks in AMD's ROCm software suite. The first products in the lineup consist of the Radeon Instinct MI6, the MI8, and the MI25. The 150W Radeon Instinct MI6 accelerator is powered by a Polaris-based GPU, packs 16GB of memory (224GB/s peak bandwidth), and will offer up to 5.7 TFLOPS of peak FP16 performance. Next up in the stack is the Fiji-based Radeon Instinct MI8. Like the Radeon R9 Nano, the Radeon Instinct MI8 features 4GB of High-Bandwidth Memory (HBM) with peak bandwidth of 512GB/s. The MI8 will offer up to 8.2 TFLOPS of peak FP16 compute performance, with a board power that typically falls below 175W. The Radeon Instinct MI25 accelerator will leverage AMD's next-generation Vega GPU architecture and has a board power of approximately 300W. All of the Radeon Instinct accelerators are passively cooled, but when installed into a server chassis, you can bet there will be plenty of airflow. Like the recently released Radeon Pro WX series of professional graphics cards for workstations, Radeon Instinct accelerators will be built by AMD. All of the Radeon Instinct cards will also support AMD's MxGPU hardware virtualization technology.
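The quoted TFLOPS numbers follow from a simple peak-throughput formula. A sketch, with the caveat that the shader counts and clocks below are assumptions based on the Polaris and Fiji (R9 Nano-class) parts the cards are said to derive from; only the TFLOPS figures come from the article:

```python
# Peak throughput for a GCN-era GPU: shaders x clock x 2 FLOPs per
# cycle (fused multiply-add). On these parts, peak FP16 rate matches
# the FP32 rate. Shader counts/clocks below are assumptions.
def peak_tflops(shaders, clock_ghz, flops_per_cycle=2):
    return shaders * clock_ghz * flops_per_cycle / 1000.0

# Radeon Instinct MI8 (Fiji, like the R9 Nano): 4096 shaders @ ~1.0GHz
print(f"MI8: {peak_tflops(4096, 1.0):.1f} TFLOPS")    # 8.2 TFLOPS

# Radeon Instinct MI6 (Polaris): assumed 2304 shaders @ ~1.24GHz
print(f"MI6: {peak_tflops(2304, 1.237):.1f} TFLOPS")  # 5.7 TFLOPS
```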
MojoKid writes: Intel is laying out its roadmap to advance artificial intelligence performance across the board. Nervana Systems, a company Intel acquired just a few months ago, will play a pivotal role in the company's efforts to make waves in an industry dominated by GPU-based solutions. Intel's Nervana chips incorporate a fully-optimized software and hardware stack specially tasked with reducing the amount of time required to train deep learning models. Nervana hardware will initially be available as an add-in card that plugs into a PCIe slot, which is the quickest way for Intel to get the technology to customers. The first Nervana silicon, codenamed Lake Crest, will make its way to select Intel customers in H1 2017. Intel is also talking about Knights Mill, the next generation of the Xeon Phi processor family. The company claims that Knights Mill will deliver a 4x increase in deep learning performance compared to existing Xeon Phi processors, and that the combined solution with Nervana will offer orders-of-magnitude gains in deep learning performance.
MojoKid writes: It has been twenty years since the release of Diablo, and Blizzard is celebrating with some very special new content. The team is recreating the original Diablo inside Diablo 3: Reaper of Souls with its "The Darkening of Tristram" update. The Darkening of Tristram will offer a sixteen-level dungeon with the four main bosses from Diablo. The names of the bosses have not been confirmed yet. There is speculation, however, that they will be the Butcher (Level 2), King Leoric (Level 3), Archbishop Lazarus (a secret lair adjacent to Level 15), and Diablo, the "Lord of Terror," himself. The art style is reminiscent of the original game and comes with visual filters that make the game look pixelated and grainy. Frank Pearce, Blizzard's chief development officer, remarked, "We call it 'glorious retrovision.'" He also stated that the best way to experience the update is to start the game with a fresh character, although the content will be available to all characters. The Darkening of Tristram will also appear on Diablo 3's Public Test Realm next week, though a formal release date has not been set.
MojoKid writes: We weren't expecting to see Intel release Kaby Lake desktop SKUs until a couple of months from now, but a new leak has the enthusiast community buzzing with a look at what one of those chips will offer in terms of performance compared to its immediate predecessor. The Intel Core i5-7600K, which is reportedly the top-end SKU in the Kaby Lake Core i5 family, is still based on 14nm FinFET technology (or rather, 14nm Plus) and has a base frequency of 3.8GHz that can Turbo Boost to 4.2GHz. It has 6MB of L3 cache and a rated TDP of 91W. For comparison, the Skylake-based Core i5-6600K, which the Core i5-7600K will replace, has a base frequency of 3.5GHz and can Turbo Boost to 3.9GHz. Like its successor, it too has 6MB of L3 cache and a TDP of 91W. On the graphics front, the new Kaby Lake processor has an integrated Intel HD 630 graphics core, which is clocked slightly higher than the HD 530 core found in its Skylake counterpart. While operating within the same power envelope as the Core i5-6600K, the Core i5-7600K offers roughly a 7 to 10 percent performance advantage over its predecessor, which can mainly be attributed to the increase in clock speeds.
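The "mainly clock speed" attribution checks out against the quoted frequencies; the deltas between the two chips line up neatly with the observed 7 to 10 percent gain:

```python
# Clock-speed deltas between the two SKUs, using the article's figures.
base_6600k, boost_6600k = 3.5, 3.9   # Skylake Core i5-6600K (GHz)
base_7600k, boost_7600k = 3.8, 4.2   # Kaby Lake Core i5-7600K (GHz)

base_gain = base_7600k / base_6600k - 1
boost_gain = boost_7600k / boost_6600k - 1
print(f"base clock:  +{base_gain:.1%}")   # +8.6%
print(f"boost clock: +{boost_gain:.1%}")  # +7.7%
```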
MojoKid writes: Consumer WiFi router products are classified by three major performance characteristics: overall throughput or bandwidth, multi-client performance, and range. Although throughput and multi-client bandwidth have scaled up over the years, range hasn't improved quite as robustly. Even the most powerful WiFi routers, with active antennas, can still leave dead spots in large home or office installations. That's where the recent crop of mesh router products, from startups like Eero and from Google with Google WiFi, are making significant advancements. By spreading multiple, interconnected router access points across a WiFi network, you blanket the area with a stronger, more contiguous signal. If you need to go the distance, mesh WiFi routers are the new way to go, and Netgear is now entering the fray with a 3Gbps tri-band setup called Orbi. Where the Orbi differs from recent mesh networking products is its 5GHz, 1733Mbps backhaul connection between its satellite and the base router. A combined two-unit system offers clients a 2x2, 866Mbps, 5GHz AC connection and a 2x2, 400Mbps, 2.4GHz link. In between, serving those clients as well as Gig-E wired devices you can plug into a satellite, there's a dedicated 4x4, 5GHz backhaul link that lets client connections stretch their legs. Tested against a powerful standard AC5300 router, the Orbi mesh setup delivered consistent performance well north of 130Mbps through multiple floors, and upwards of 300Mbps at longer distances, up to 4,000 square feet, with the Orbi satellite on the same level as the client PC.
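The 866Mbps client link and 1733Mbps backhaul figures are just spatial-stream multiples of the standard 802.11ac per-stream rate. A quick check, assuming the usual VHT MCS 9 rate for one stream in an 80MHz channel:

```python
# 802.11ac link rates scale linearly with spatial streams; one 80MHz,
# 256-QAM (VHT MCS 9) stream tops out at ~433.3Mbps.
PER_STREAM_80MHZ = 433.3  # Mbps, 1 spatial stream

print(f"2x2 client link: {2 * PER_STREAM_80MHZ:.0f} Mbps")  # ~867 (quoted as 866)
print(f"4x4 backhaul:    {4 * PER_STREAM_80MHZ:.0f} Mbps")  # 1733
```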
MojoKid writes: Whether you use Linux at home or manage a Linux server, you should waste no time in making sure your OS is completely up-to-date. An exploit called "Dirty COW" has now been revealed, and while it's not the most dangerous one ever released, the fact that it's been around for nine years is causing alarm throughout the Linux community. Dirty COW might sound like an awfully bizarre name for an exploit, but it's named as such because the Linux function it affects is "copy-on-write." COW happens when more than one system call references the same data. To optimize the amount of space that data uses, pointers are used (as with data deduplication). If one call needs to modify the data, that's when the data is copied entirely. As a privilege escalation exploit, code execution could happen after this bug is exploited. Imagine, for example, if someone gains access to a system via SQL injection, but lands as a normal user. With this exploit, the equivalent of root access could be gained, at which point the OS is at the mercy of its attacker.
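The copy-on-write behavior itself is easy to observe from user space on Linux. A minimal sketch (this only illustrates benign COW semantics via a private file mapping; it is not the exploit, which raced the kernel code performing the copy):

```python
# A user-space view of copy-on-write: a MAP_PRIVATE mapping shares the
# file's pages until you write to them, at which point the kernel gives
# the process its own private copy and the file on disk stays untouched.
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"original")

# Private (copy-on-write) mapping of the file (Unix-only flags).
m = mmap.mmap(fd, 8, flags=mmap.MAP_PRIVATE)
m[0:8] = b"modified"           # triggers the copy; only our view changes
print(m[0:8])                  # b'modified'

with open(path, "rb") as f:
    print(f.read())            # b'original' -- the file is unchanged

m.close()
os.close(fd)
os.unlink(path)
```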