MojoKid writes: Intel took the wraps off its Optane Memory devices for client PCs today, and the products look just like current-generation 80mm M.2 "gumstick" solid state drives. However, Intel Optane Memory is based on the company's 3D XPoint memory technology and is meant to be used as an accelerator for systems with relatively slow storage devices, like hard drives. Intel Optane Memory products and their associated software are designed to cache the most frequently accessed bits of data on a compatible system, which can significantly increase the performance and responsiveness of slower drives. The SSD can be paired with any standard SATA hard drive, regardless of capacity. The Optane Memory module serves as a high-speed repository for the most commonly accessed data blocks (not necessarily complete files): usage patterns on the hard drive are monitored, and the most frequently accessed bits of data are copied from the hard drive to the Optane drive. Because it is used as a cache, it is not presented to the end user as a separate volume. The first products in Intel's Optane Memory line-up will be M.2 NVMe SSDs in 16GB and 32GB capacities. Note that Intel Optane Memory will work only on Windows 10 64-bit systems with Intel 7th Gen Kaby Lake processors and 200-series chipsets, or newer. The 16GB and 32GB Intel Optane Memory modules will be available initially through retailers, with MSRPs of $44 for the 16GB part and $77 for the 32GB model. There are already over 130 compatible motherboards on the market, and systems featuring the technology will be available soon from all of the major players, including Dell, Lenovo, HP and others.
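The caching scheme described above, where the hottest blocks are transparently promoted to a small fast tier, can be sketched as a toy frequency-based cache. This is illustrative only; Intel's actual caching policy is proprietary, and the class and method names here are invented for the example.

```python
from collections import Counter

class BlockCache:
    """Toy model of frequency-based block caching, loosely in the spirit
    of Intel's Optane Memory software (illustrative only). The hottest
    blocks are promoted to a small fast tier; everything else is served
    from the slow backing drive."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks   # blocks the fast tier can hold
        self.hits = Counter()             # access counts per block address
        self.fast_tier = set()            # blocks currently cached

    def access(self, block):
        self.hits[block] += 1
        if block in self.fast_tier:
            return "fast"                 # served from the Optane-like tier
        # Re-rank: keep only the most frequently accessed blocks cached
        self.fast_tier = {b for b, _ in self.hits.most_common(self.capacity)}
        return "fast" if block in self.fast_tier else "slow"

cache = BlockCache(capacity_blocks=2)
for _ in range(3):
    cache.access(7)        # block 7 becomes the hottest block
print(cache.access(7))     # -> fast: now served from the fast tier
```

The key property mirrored here is that the cache is invisible to the caller: the same `access()` call is made either way, and only the latency (the return value in this toy) differs.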
MojoKid writes: Qualcomm is lifting the veil on performance benchmark numbers for its new Snapdragon 835 processor today (or Mobile Platform, as Qualcomm now refers to it), and it's looking like a notable improvement over the Snapdragon 820 and 821. The Snapdragon 835 is expected to provide up to 11 hours of 4K video playback and serve up hours of VR gaming on a single charge, along with performance increases of up to 25 percent in both CPU and graphics/gaming workloads. The Snapdragon 835 SoC is built on 10nm FinFET technology, which results in significantly lower power consumption, though Qualcomm notes most of the efficiency gains were realized in the 8-core CPU block, an ARM big.LITTLE design. Four larger, semi-custom Kryo 280 cores are clocked at 2.45GHz, while the four smaller, lower-power cores are clocked at up to 1.9GHz. Though proving the battery life claims will have to wait until retail smartphones employing the new Snapdragon SoC ship, benchmark numbers taken from Qualcomm prototype devices show impressive results. Gains north of 20 percent were observed in gaming and graphics tests, with more modest gains of 10 to 15 percent in CPU-centric tests.
MojoKid writes: Intel unveiled its first SSD product to leverage 3D XPoint memory technology, the new Optane SSD DC P4800X. The Intel SSD DC P4800X resembles some of Intel's previous enterprise storage products, but this product is all new, from its controller to its 3D XPoint storage media, which was co-developed with Micron. The drive's sequential throughput isn't impressive versus other high-end enterprise NVMe storage products, but the Intel Optane SSD DC P4800X shines at very low queue depths with high random 4KB IO throughput, where NAND flash-based storage products tend to falter. The drive's endurance is also exceptionally high, rated for 30 drive writes per day, or 12.3 petabytes written. Intel provided some performance data comparing its NAND-based SSD DC P3700 to the Optane SSD DC P4800X in a few different scenarios. One test shows read IO latency with the drive under load; not only is the P4800X's read IO latency significantly lower, it is also very consistent regardless of load. With a 70/30 mixed read/write workload, the Optane SSD DC P4800X also offers between 5x and 8x better performance versus standard NVMe drives. The 375GB Intel Optane SSD DC P4800X add-in card will be priced at $1520, which is roughly three times the cost per gigabyte of Intel's high-end SSD DC P3700. In the short term, expect Intel Optane solid state drives to command a premium. As availability ramps, however, prices will likely come down.
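The emphasis on low-queue-depth performance follows from Little's Law, which ties sustained IOPS to the number of outstanding requests divided by mean latency. The latency figures below are illustrative ballpark values, not Intel's official numbers:

```python
def iops(queue_depth, latency_s):
    """Little's Law: sustained IOPS = outstanding requests / mean latency."""
    return queue_depth / latency_s

# Illustrative numbers only: at queue depth 1, a drive with ~10us read
# latency can sustain far more IOPS than one with ~90us latency, which
# is why low-latency media shines where NAND falters.
optane_like = iops(queue_depth=1, latency_s=10e-6)
nand_like = iops(queue_depth=1, latency_s=90e-6)
print(f"{optane_like:,.0f} vs {nand_like:,.0f} IOPS at QD1")
```

At high queue depths, NAND drives can hide their latency behind parallelism, which is why the headline sequential and deep-queue numbers look less dramatic than the QD1 comparison.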
MojoKid writes: NVIDIA is officially launching its most powerful gaming graphics card today, the GeForce GTX 1080 Ti. It was announced last week at the Game Developers Conference, and pre-orders began shortly thereafter. However, the cards begin shipping today, and NVIDIA has lifted the veil on performance reviews. Though its memory complement and a few blocks within the GPU are reduced versus NVIDIA's previous top-end card, the Titan X, the GeForce GTX 1080 Ti makes up for those shortcomings with a combination of refinement and brute force: higher memory clocks based on new and improved Micron GDDR5X memory, faster core clocks and an improved cooler. For gamers, the good news is that the 1080 Ti retails for $699, versus $1200 for the Titan X, and it is in fact faster, for the most part. Throughout a battery of game tests and benchmarks, regardless of the resolution or settings used, the GeForce GTX 1080 Ti performed on par with or slightly faster than the NVIDIA Titan X, and roughly 30% to 35% better than a standard GeForce GTX 1080 Founders Edition. Versus AMD's current flagship GPU, the Radeon R9 Fury X, there is no competition; the GeForce GTX 1080 Ti was nearly 2x faster than the Fury X in some cases.
MojoKid writes: AMD has finally lifted the veil on independent reviews of its new Ryzen series of desktop processors, which bring the company's CPU architecture back onto competitive footing versus its rival Intel's Core series. The initial family of Ryzen processors consists of three 8-core chips: the Ryzen 7 1800X at 3.6GHz with boost to 4.1GHz, the Ryzen 7 1700X at 3.4GHz with boost to 3.8GHz, and the Ryzen 7 1700 at 3GHz with boost to 3.7GHz. Each supports 2 threads per core, for a total of 16 threads, with 16MB of L3 cache on board, 512KB of L2 per core, and TDPs that range from 65 watts for the Ryzen 7 1700 at the low end up to 95 watts for the 1700X and 1800X. In comparison to AMD's long-standing A-series APUs and FX-series processors, the new architecture is significantly more efficient and performant than any of AMD's previous desktop processor offerings. AMD designed the Zen microarchitecture at the heart of Ryzen with performance, throughput, and efficiency in mind. Initially, AMD had reported a 40% target for IPC (instructions per clock) improvement with Zen, but actually realized about a 52% lift in overall performance. In general compute workloads, rendering, and clock-for-clock comparisons, the Ryzen 7 1800X either outperformed Intel's much more expensive Core i7-6900K or gave it a run for its money. The lower clock speeds of the Ryzen 7 1700X and 1700 obviously resulted in performance a notch behind the flagship 1800X, but those processors also performed quite well. Ryzen was especially strong in heavily threaded workloads like 3D rendering and ray tracing, but even in less strenuous tests like PCMark, the Ryzen 7 series competed favorably. It's not all good news, though. With some older code, audio encoding, lower-res gaming, and platform-level tests, Ryzen trailed Intel, sometimes by a wide margin. There's obviously still optimization work to be done, from both AMD and software developers.
MojoKid writes: LG unveiled the new G6 smartphone today, going completely back to the drawing board versus its predecessor, the not-so-well-received G5. In its place is a very compact aluminum unibody design and a large 5.7-inch QHD+ display with a 2880x1440 resolution. That display is the main focal point of the G6, and it has a rather unorthodox 18:9 aspect ratio, which LG says allows the smartphone to better fit in your hand. LG also notes that the aspect ratio is being adopted as a universal format by the likes of film studios and content providers like Netflix. Its thin bezels also give the LG G6 an 80 percent screen-to-body ratio. The handset is powered by a Qualcomm Snapdragon 821 processor along with 4GB of RAM, 32GB of internal storage and a microSD slot that can accommodate up to an additional 2TB of storage. LG also outfitted the G6 with dual 13MP rear cameras: a wide-angle (f/2.4, 125-degree) shooter and a standard camera (f/1.8, 71-degree) with optical image stabilization. The LG G6 launches next month and will be available in Ice Platinum, Mystic White and Astro Black color options. Pricing is TBD.
MojoKid writes: AMD CEO Dr. Lisa Su took to the stage at AMD's Ryzen tech day yesterday and opened the event with official speeds, feeds, pricing, and benchmark scores for the company's upcoming Ryzen series processors. AMD's goal with Ryzen, which is based on its Zen microarchitecture, was a 40% IPC (instructions per clock) uplift. As it turns out, AMD was actually able to increase IPC by approximately 52% in the final shipping product, sometimes more depending on workload type. Dr. Su also showed the first die shot of an 8-core Ryzen processor, disclosing that it consists of approximately 4.8 billion transistors. AMD's flagship Ryzen 7 1800X 8-core/16-thread CPU will have a base clock speed of 3.6GHz, a boost clock of 4.0GHz, and a 95 watt TDP. AMD claims the Ryzen 7 1800X will be the fastest 8-core desktop processor on the market when it arrives. The next member of the line-up is the Ryzen 7 1700X, with a base clock of 3.4GHz and a boost clock of 3.8GHz, also with 8 cores and a 95 watt TDP. Finally, the Ryzen 7 1700, sans X, is also an 8-core/16-thread CPU, but it has lower 3.0GHz base and 3.7GHz boost clocks, along with a lower 65 watt TDP. AMD took the opportunity to demo the Ryzen 7 1800X, and it was approximately 9% faster than the Core i7-6900K in Cinebench R15's multi-threaded test, at about half the cost. In another comparison, Dr. Su put the 8-core Ryzen 7 1700 up against the quad-core Core i7-7700K converting a 4K 60 FPS video down to 1080p, and the Ryzen CPU outpaced the Core i7 by 10 full seconds. Pricing for the three initial Ryzen 7 series processors will undercut competing Intel processors significantly: the Ryzen 7 1800X will arrive at $499, the Ryzen 7 1700X at $399, and the Ryzen 7 1700 at $329. Based on current street prices, Ryzen will be priced between 20% and 50% lower, and AMD is claiming performance that's better than Intel's at those price points.
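The value claim above can be sanity-checked with simple performance-per-dollar arithmetic. The normalized Cinebench score and the Core i7-6900K street price below are assumptions for illustration; the Ryzen price is the MSRP quoted in the story:

```python
# Quick sanity check on the price/performance claims.
# Scores are normalized to the Core i7-6900K (the story's ~9% Cinebench
# R15 advantage for the 1800X); the ~$1,050 i7-6900K street price is an
# assumption, while $499 is the quoted Ryzen 7 1800X MSRP.
ryzen_1800x = {"price": 499, "score": 1.09}
core_6900k = {"price": 1050, "score": 1.00}

perf_per_dollar_ratio = (
    (ryzen_1800x["score"] / ryzen_1800x["price"])
    / (core_6900k["score"] / core_6900k["price"])
)
print(f"Ryzen 7 1800X offers ~{perf_per_dollar_ratio:.1f}x the "
      f"performance per dollar of the Core i7-6900K")
```

Under these assumed numbers, slightly higher performance at roughly half the price works out to a bit over 2x the performance per dollar, which is the crux of AMD's pitch.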
MojoKid writes: Yet another AMD Ryzen leak is making the rounds, one that details an extensive lineup of 17 processors. It's a continuation of a previous leak supposedly outing AMD's top-to-bottom retail launch lineup, only now with individual part numbers and TDP ratings for every SKU. The leaked chart lists all 17 Ryzen SKUs, a dozen of which sport 65W TDP ratings, with the remaining five listed at a 95W TDP. Eight of the Ryzen chips are quad-core parts, four are six-core CPUs, and five are eight-core processors. AMD will allegedly bundle an updated Wraith cooler, codenamed HS81, with its Black Edition Ryzen processors that have a 95W TDP. These include the Ryzen 7 1800X, Ryzen 7 1700X, and Ryzen 5 1600X. Also, new information on one of AMD's top-end chips, the Ryzen 7 1700X, has surfaced as well, claiming a $389 price tag and performance on par with Intel's Core i7-6900K Broadwell-E 8-core chip, which retails for over $1K.
bigwophh writes: Until very recently, Intel's lowest-powered Kaby Lake variant, known as Kaby Lake-Y, hadn't made much of an appearance in the market. Kaby Lake-Y is the 4.5 to 7 watt series of Intel's 7th gen processor family, intended for thin-and-light, fanless 2-in-1 devices. What's interesting is that Intel has done away almost completely with the "Core m" moniker with Kaby Lake, choosing instead to denote the series in the root of the model number, as it does with the more common U-series chips employed in full-featured ultrabooks. However, the company does list lower-end Core m3 7th gen variants of Kaby Lake as such, while higher-end i5 and i7 SKUs are only distinguishable by the Y in the root of the model number. Clear as mud, right? There may be a reason for this, as early introductions of Intel's Core m were met with mixed reviews. However, with early Kaby Lake-Y systems, like Dell's XPS 13 2-in-1, performance of higher-end SKUs like the Core i7-7Y75, with aggressive TDP-up tuning, is more in line with lower-end Core i5 variants of Intel's previous-gen Skylake U series that was so prevalent in last year's full-fledged notebook designs. With boost clock speeds as high as 3.6GHz for some Kaby Lake-Y chips, this isn't the same Core m kind of performance of previous generations from Intel.
MojoKid writes: Microsoft may have discontinued its Lumia family of smartphones, but that doesn't mean that the company has given up on handsets altogether. A new patent filing reveals that Microsoft could still have a few more tricks up its sleeve; in this case, a folding smartphone. If such a design were to make it to production, it would likely adopt Surface branding, joining the likes of the convertible Surface Pro, Surface Book and Surface Studio. Entitled "Mobile Computing Device Having A Flexible Hinge Structure", the patent shows a smartphone with a side-mounted hinge that opens up to reveal an uninterrupted, large display surface more fitting for tablet duty. And just like patent filings leaked the Surface Studio months before its official unveiling, this could be a precursor to a future Microsoft product. Of course, there are no guarantees when it comes to patent filings, as Microsoft has patented many design innovations without acting on them with a shipping product.
MojoKid writes: When Microsoft first launched Windows 10, it was generally well-received but also came saddled with a number of privacy concerns. It has taken quite a while for Microsoft to respond to these concerns in a meaningful way, but the company is finally proving that it's taking things seriously by detailing some enhanced privacy features coming to a future Windows 10 build. Microsoft is launching what it calls a (web-based) privacy dashboard, which lets you configure anything and everything about the information that might be sent back to the mothership. You can turn all tracking off, or pick and choose if certain criteria, like location or health activity, don't concern you too much. Also, for fresh installs, you'll be given more specific privacy options so that you can feel confident from the get-go about the information you're sending Redmond's way. If you do decide to send information Microsoft's way, the company promises that it won't use your information for targeted advertising.
MojoKid writes: AMD lifted the veil on its next-generation GPU architecture, codenamed Vega, this morning. One of the driving forces behind Vega's design is that conventional GPU architectures have not been scaling well for diverse data types. Gaming and graphics workloads have shown steady progress, but today GPUs are used for much more than just graphics. In addition, while the compute capability of GPUs has been increasing at a good pace, memory capacity has not kept up. Vega aims to improve both compute performance and addressable memory capacity through new technologies not available in any previous-gen architecture. First, Vega has the most scalable GPU memory architecture built to date, with 512TB of address space. It also has a new geometry pipeline tuned for more performance and better efficiency, with over 2x peak throughput per clock, a new Compute Unit design, and a revamped pixel engine. The pixel engine features a new Draw Stream Binning Rasterizer, which reportedly improves performance and saves power. All told, Vega should offer significant improvements in performance and efficiency when products based on the architecture begin shipping in a few months.
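The 512TB figure corresponds to a 49-bit virtual address space, which is easy to verify:

```python
# Vega's quoted 512TB of addressable memory corresponds to 49-bit
# addressing: 2^49 bytes = 512 * 2^40 bytes = 512TB.
address_bits = 49
terabytes = 2 ** address_bits // 2 ** 40
print(f"{address_bits}-bit addressing -> {terabytes}TB")
```

For comparison, a 40-bit space would cap out at 1TB, so the jump to 49 bits is what lets the architecture address far more memory than any card physically carries.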
MojoKid writes: AMD is announcing a new series of Radeon-branded products today targeted at machine intelligence and deep learning enterprise applications, called Radeon Instinct. As its name suggests, the new Radeon Instinct line comprises GPU-based solutions for deep learning, inference and training. The new GPUs are complemented by a free, open-source library and framework for GPU accelerators, dubbed MIOpen. MIOpen is architected for high-performance machine intelligence applications and is optimized for the deep learning frameworks in AMD's ROCm software suite. The first products in the lineup consist of the Radeon Instinct MI6, the MI8, and the MI25. The 150W Radeon Instinct MI6 accelerator is powered by a Polaris-based GPU, packs 16GB of memory (224GB/s peak bandwidth), and will offer up to 5.7 TFLOPS of peak FP16 performance. Next up in the stack is the Fiji-based Radeon Instinct MI8. Like the Radeon R9 Nano, the Radeon Instinct MI8 features 4GB of High Bandwidth Memory (HBM) with peak bandwidth of 512GB/s. The MI8 will offer up to 8.2 TFLOPS of peak FP16 compute performance, with a board power that typically falls below 175W. The Radeon Instinct MI25 accelerator will leverage AMD's next-generation Vega GPU architecture and has a board power of approximately 300W. All of the Radeon Instinct accelerators are passively cooled, but when installed in a server chassis, you can bet there will be plenty of airflow. Like the recently released Radeon Pro WX series of professional graphics cards for workstations, Radeon Instinct accelerators will be built by AMD. All of the Radeon Instinct cards will also support AMD MultiUser GPU (MxGPU) hardware virtualization technology.
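Peak TFLOPS figures like the MI6's quoted 5.7 follow from a standard formula: stream processors x 2 ops per clock (a fused multiply-add counts as two operations) x clock speed. The 2304-shader, roughly 1.24GHz Polaris 10-class configuration assumed below is a guess that happens to land near the quoted number; note that on Polaris-class GPUs, peak FP16 throughput matches the FP32 rate:

```python
def peak_tflops(stream_processors, clock_ghz, ops_per_clock=2):
    """Peak throughput = shaders x ops-per-clock (FMA = 2) x clock (GHz),
    expressed in TFLOPS."""
    return stream_processors * ops_per_clock * clock_ghz / 1000

# Assumed Polaris 10-class configuration: 2304 stream processors at
# ~1.24GHz, which lands near the MI6's quoted 5.7 TFLOPS peak.
print(f"{peak_tflops(2304, 1.24):.1f} TFLOPS")
```

The same arithmetic explains why the Fiji-based MI8 is faster despite its age: Fiji's 4096 shaders at around 1GHz yield roughly the 8.2 TFLOPS quoted for that card.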
MojoKid writes: Intel is laying out its roadmap to advance artificial intelligence performance across the board. Nervana Systems, a company that Intel acquired just a few months ago, will play a pivotal role in the company's efforts to make waves in an industry dominated by GPU-based solutions. Intel's Nervana chips incorporate technology (which involves a fully-optimized software and hardware stack) that is specially tasked with reducing the amount of time required to train deep learning models. Nervana hardware will initially be available as an add-in card that plugs into a PCIe slot, which is the quickest way for Intel to get this technology to customers. The first Nervana silicon, codenamed Lake Crest, will make its way to select Intel customers in H1 2017. Intel is also talking about Knights Mill, which is the next generation of the Xeon Phi processor family. The company claims that Knights Mill will deliver a 4x increase in deep learning performance compared to existing Xeon Phi processors and the combined solution with Nervana will offer orders of magnitude gains in deep learning performance.