146123672
submission
MojoKid writes:
In January, Intel officially announced its Tiger Lake-H mobile platform, but today it disclosed full details on the new, higher-end variant of Tiger Lake, manufactured on 10nm SuperFin technology, which brings a few significant platform enhancements beyond just its clock speed and core count boosts. Intel is refreshing the lineup with higher-power, higher-performance Tiger Lake-H45 processors with up to 8 physical cores (16 threads). In addition, the CPUs feature 20 reconfigurable PCI Express 4.0 lanes attached directly to the processor, which enable PCIe 4.0 NVMe RAID, a first for any mobile platform. The platform also features all of the latest I/O and connectivity technologies, like Killer Wi-Fi 6 / 6E, Thunderbolt 4, and support for Resizable BAR. An array of consumer and commercial Tiger Lake-H based 11th Gen Intel Core H-series processors is coming down the pipeline. The top-end consumer SKU is the Core i9-11980HK, an 8-core / 16-thread processor with a base clock of 2.6GHz and a maximum turbo clock of 5GHz on one or two cores. What also makes this particular processor interesting is that it is fully unlocked and overclockable via Intel's XTU utility. Intel has already shipped millions of units to laptop OEMs and expects laptops from all of the major manufacturers to be in market this month.
139342250
submission
MojoKid writes:
NVIDIA expanded its line-up of Ampere-based graphics cards today with a new, lower-cost GeForce RTX 3060 Ti. As its name suggests, the new $399 NVIDIA GPU supplants the previous-gen GeForce RTX 2060 / RTX 2060 Super and slots in just behind the recently released GeForce RTX 3070. The GeForce RTX 3060 Ti features 128 CUDA cores per SM for a total of 4,864, four third-gen Tensor cores per SM (152 total), and 38 second-gen RT cores. The GPU has a typical boost clock of 1,665MHz, and it is linked to 8GB of standard GDDR6 memory (not the GDDR6X of the RTX 3080/3090) via a 256-bit memory interface that offers up to 448GB/s of peak bandwidth. In terms of overall performance, the RTX 3060 Ti lands in the neighborhood of the GeForce RTX 2080 Super, and well ahead of cards like AMD's Radeon RX 5700 XT. The GeForce RTX 3060 Ti's 8GB frame buffer may give some users pause, but for 1080p and 1440p gaming it shouldn't be a problem for the overwhelming majority of titles, and it's par for the course in this $399 price band. Cards are reported to be shipping at retail tomorrow.
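As a sanity check, the 448GB/s peak-bandwidth figure falls straight out of the bus width and per-pin data rate; the 14Gbps GDDR6 rate below is inferred from the quoted numbers rather than stated in the text:

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes transferred per cycle times per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# RTX 3060 Ti: 256-bit interface, GDDR6 at an assumed 14Gbps per pin
print(peak_bandwidth_gbs(256, 14))  # 448.0, matching the quoted 448GB/s
```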
89380549
submission
MojoKid writes:
AMD CEO Dr. Lisa Su took to the stage at AMD's Ryzen tech day yesterday and opened the event with official speeds, feeds, pricing, and benchmark scores for the company's upcoming Ryzen series processors. AMD's goal with Ryzen, which is based on its Zen microarchitecture, was a 40% IPC (instructions per clock) uplift. As it turns out, AMD was actually able to increase IPC by approximately 52% with the final shipping product, sometimes more depending on workload type. Dr. Su also showed the first die shot of an 8-core Ryzen processor, disclosing that it consists of approximately 4.8 billion transistors. AMD's flagship Ryzen 7 1800X 8-core / 16-thread CPU will have a base clock speed of 3.6GHz, a boost clock of 4.0GHz, and a 95 watt TDP. AMD claims the Ryzen 7 1800X will be the fastest 8-core desktop processor on the market when it arrives. The next member of the line-up is the Ryzen 7 1700X, with a base clock of 3.4GHz and a boost clock of 3.8GHz, also with 8 cores and a 95 watt TDP. Finally, the Ryzen 7 1700 (sans X) is also an 8-core / 16-thread CPU, but it has lower 3.0GHz base and 3.7GHz boost clocks, along with a lower 65 watt TDP. AMD took the opportunity to demo the Ryzen 7 1800X, and it was approximately 9% faster than the Core i7-6900K running Cinebench R15's multi-threaded test, at about half the cost. In another comparison, Dr. Su put the 8-core Ryzen 7 1700 up against the quad-core Core i7-7700K converting a 4K 60 FPS video down to 1080p, and the Ryzen CPU outpaced the Core i7 by a full 10 seconds. Pricing for the three initial Ryzen 7 series processors will undercut competing Intel processors significantly. AMD's Ryzen 7 1800X will arrive at $499, the Ryzen 7 1700X at $399, and the Ryzen 7 1700 at $329. Based on current street prices, Ryzen will be priced between 20% and 50% lower, but AMD is claiming performance that's better than Intel's at those price points.
83450843
submission
MojoKid writes:
A history buff's favorite activity is often analyzing the historical accuracy of modern media. The YouTube channel The Great War recently published a video critiquing the historical accuracy of the Battlefield 1 trailer, which was released on May 6th, 2016. According to Indy Neidell, the voice behind the video, the trailer is a hodgepodge of accuracy and inaccuracy. Some of the most spectacular moments, such as tanks bursting into trenches or giant, ominous zeppelins hovering overhead, are actually historically accurate. The trailer, however, often shows soldiers wearing custom armor or carrying weapons from the opposing side. Neidell also claims that some of the armor shown is "ridiculous"; materials available at the time would have made armor like this incredibly heavy and impractical. Some Germans did have "lobster" armor, but it was reserved for sentries and considered unsuitable for actual battle. Neidell admits that many of these idiosyncrasies could be due to multiplayer customization options. For example, it is possible that once you have killed an enemy, you could loot their weapons. Obviously, the upcoming Battlefield 1 is ultimately just a game, but it's fun to see just how closely DICE tried to portray the era.
82285575
submission
MojoKid writes:
The OLO 3D Printer was first announced in October at the World Maker Faire in New York, where it earned an Editor's Choice award and accolades. The developers behind OLO call it a "smartphone 3D printer" because it requires a smartphone to operate. Designs can either be downloaded from the Internet directly on the device, or copied over from a computer once created. When placed on a desk, the OLO looks like an inconspicuous little box, but inside it can craft items up to 400 cm3 in volume. Its developers call the OLO "portable," and it has the specs to match at 1.7lbs and a physical size of only 6.8" x 4.5" x 5.8". OLO is a unique printer not only because of its small form factor and low price point ($99), but also because of how it operates. Once the 3D model is loaded, the bottom section of OLO is placed on top of your phone, and the resin of your choice is poured inside that structure. You then place the top half of OLO on top and wait a few hours for it to do its thing. The resin hardens under the light emitted by the smartphone it sits on, generated by the OLO app.
82181343
submission
MojoKid writes:
This week at GDC 2016, the team at Unity revealed the stable release of the Unity 5.3.4 game engine, along with a beta of Unity 5.4. There are a number of upgrades included with Unity 5.4, including in-editor artist workflow improvements, VR rendering pipeline optimizations, improved multithreaded rendering, customizable particles that can use Light Probe Proxy Volumes (LPPV) to add more realistic lighting models, and the ability to drop in textures from tools like Quixel DDo Painter. For a jaw-dropping look at what's possible with the Unity 5.4 engine, check out the short film "Adam" that Unity developed to demo it. The film showcases Unity 5.4's effects and gives a great look at what to expect from Unity-based games coming in 2016. Unity will showcase the full film at Unite Europe 2016 in Amsterdam. Perhaps what's most impressive about Adam is that Unity says it all runs in real time at 1440p resolution on just an upper-midrange GeForce GTX 980 card.
81312853
submission
MojoKid writes:
Intel and Micron have been tag-teaming various storage and memory technologies, and word on the web is that one fruit of that partnership, a 10-terabyte SSD, is right around the corner. The largest SSD in Intel's stable at the moment is 4TB, which is itself pretty large. However, both Micron and Intel are of the opinion that typical planar NAND flash memory has gone about as far as it can go, and that 3D stacked flash memory is the future. They've also developed a "floating gate cell" design, a first for 3D stacked memory, resulting in 256Gb multi-level cell (MLC) and 384Gb triple-level cell (TLC) die that fit inside a standard package. The two companies are targeting gumstick-sized SSDs reaching 3.5TB and regular 2.5-inch SSDs hitting (and even surpassing) 10TB. Apparently that's about to become a reality.
72968945
submission
MojoKid writes:
Recently, a few details of AMD's next-generation Radeon 300-series graphics cards have trickled out. Today, AMD publicly disclosed new info regarding the High Bandwidth Memory (HBM) technology that will be used on some Radeon 300-series and APU products. Currently, a relatively large number of GDDR5 chips are necessary to offer sufficient capacity and bandwidth for modern GPUs, which means significant PCB real estate is consumed. On-chip integration is not ideal for DRAM either, because it is not size- or cost-effective with a logic-optimized GPU or CPU manufacturing process. HBM, however, brings the DRAM as close as possible to the logic die (GPU). AMD partnered with Hynix and a number of other companies to help define the HBM specification and design a new type of memory chip with low power consumption and an ultra-wide bus width, which was eventually adopted by JEDEC in 2013. They also developed a DRAM interconnect called an "interposer," along with ASE, Amkor, and UMC. The interposer allows DRAM to be brought into close proximity with the GPU and simplifies communication and clocking. HBM DRAM chips are stacked vertically, and "through-silicon vias" (TSVs) and "bumps" are used to connect one DRAM chip to the next, then to a logic interface die, and ultimately to the interposer. The end result is a single package on which the GPU/SoC and High Bandwidth Memory both reside. 1GB of GDDR5 memory (four 256MB chips) requires roughly 672mm2 of PCB area. Because HBM is vertically stacked, that same 1GB requires only about 35mm2. The bus width on an HBM chip is 1024 bits, versus 32 bits on a GDDR5 chip. As a result, the High Bandwidth Memory interface can be clocked much lower but still offer more than 100GB/s for HBM versus 25GB/s with GDDR5. HBM also requires significantly less voltage, which equates to lower power consumption.
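The numbers above can be plugged into the simple peak-bandwidth formula (bus width in bytes times per-pin data rate) to show the trade-off; the per-pin rates here (about 7Gbps effective for GDDR5, about 1Gbps for first-gen HBM) are assumptions consistent with the quoted figures, not values from the text:

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# GDDR5: 32-bit bus per chip at ~7Gbps effective per pin
print(bandwidth_gbs(32, 7))    # 28.0 GB/s per chip, in line with the ~25GB/s quoted
# HBM: 1024-bit bus per stack at a far lower ~1Gbps effective per pin
print(bandwidth_gbs(1024, 1))  # 128.0 GB/s per stack, i.e. "more than 100GB/s"

# PCB-area figures from the text: 1GB of GDDR5 vs 1GB of stacked HBM
print(672 / 35)                # roughly a 19x reduction in board area
```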
70968363
submission
MojoKid writes:
Lenovo just revamped the ThinkPad X1 Carbon, and in this third generation of the machine they've adopted Intel's latest 5th generation Core series Broadwell processors, along with a few other updates. In addition, they've retooled the keyboard and trackpad area, returning to more traditional roots versus the second-generation machine, which was met with some criticism due to its adaptive function key row and over-simplified, buttonless trackpad. Notable upgrades to this 3rd-gen model are a faster Core i5-5300U processor and a self-encrypting, Opal2-compliant SSD. Performance-wise, the new ThinkPad offers up some of the best numbers among current ultrabooks. Battery life is a bit middle of the road, but the machine can still last over 8 hours under light, web-driven workloads.
67436265
submission
MojoKid writes:
To say that BioWare has something to prove with Dragon Age: Inquisition is an understatement. The first game, Dragon Age: Origins, was a colossal, sprawling, unabashed throwback to classic RPGs. Conversely, Dragon Age: Inquisition doesn't just tell an epic story; it evolves in a way that leaves you, as the Inquisitor, leading an army. Creating that sense of scope required a fundamentally different approach to gameplay. Neither Dragon Age: Origins nor Dragon Age 2 had a true "open" world in the sense that Skyrim is an open world. Instead, players clicked on a location and auto-traveled across the map from Point A to Point B. Thus, a village might be contained within a single map, while a major city might have 10-12 different locations to explore. Inquisition keeps the concept of maps as opposed to a completely open world, but it blows those maps up to gargantuan sizes. Instead of simply consisting of a single town or a bit of wilderness, the new maps in Dragon Age: Inquisition are chock-full of areas to explore, side quests, crafting materials to gather, and caves, dungeons, mountain peaks, flowing rivers, and roving bands of monsters. And Inquisition doesn't forget the small stuff: the companion quests, the fleshed-out NPCs, or the rich storytelling. It just seeks to put those events in a much larger context across a broad geographical area. Dragon Age: Inquisition is one of the best RPGs to come along in a long time. Never has a game tried to straddle both the large-scale, 10,000-foot master plan and the small-scale, intimate adventure and hit both so well. In terms of graphics performance, you might be surprised to learn that a Radeon R9 290X has better frame delivery than a GeForce GTX 980, despite the similarity in the overall frame rate. The worst frame time for a Radeon R9 290X is just 38.5ms, or 26 FPS, while a GeForce GTX 980 is at 46.7ms, or 21 FPS.
AMD takes home an overall win in Dragon Age: Inquisition currently, though Mantle support isn't really ready for prime time.
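The FPS figures in the frame-time comparison above are just the reciprocal of the worst frame time:

```python
def fps_from_frame_time(frame_time_ms: float) -> float:
    """Convert a frame time in milliseconds to instantaneous frames per second."""
    return 1000.0 / frame_time_ms

print(round(fps_from_frame_time(38.5)))  # 26 -> Radeon R9 290X worst case
print(round(fps_from_frame_time(46.7)))  # 21 -> GeForce GTX 980 worst case
```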
66578905
submission
MojoKid writes:
One of the disadvantages of buying an Apple system is that it generally means less upgrade flexibility than a system from a traditional PC OEM. Over the last few years, Apple has introduced features and adopted standards that made using third-party hardware progressively more difficult. Now, with OS X 10.10 Yosemite, the company has taken another step down the path toward total vendor lock-in and effectively disabled support for third-party SSDs. We say "effectively" because while third-party SSDs will still work, they'll no longer perform the TRIM garbage collection command. Being able to perform TRIM and clean the SSD while it's sitting idle is vital to keeping the drive at maximum performance; without it, an SSD's real-world performance will steadily degrade over time. What Apple did with OS X 10.10 is introduce KEXT (Kernel EXTension) driver signing. KEXT signing means that at boot, the OS checks to ensure that all drivers are approved and enabled by Apple. It's conceptually similar to the device driver checks that Windows performs at boot. However, with OS X, if a third-party SSD is detected, Yosemite will treat it as non-approved hardware and refuse to load the appropriate TRIM-enabled driver for it.
64615693
submission
MojoKid writes:
NVIDIA has launched two new high-end graphics cards based on its latest Maxwell architecture. The GeForce GTX 980 and GTX 970 replace NVIDIA's current high-end offerings, the GeForce GTX 780 Ti, GTX 780, and GTX 770. The two cards are somewhat similar in that they share the same 4GB frame buffer and GM204 GPU, but the GTX 970's GPU is clocked a bit lower and features fewer active Streaming Multiprocessors and CUDA cores. The GeForce GTX 980's GM204 GPU has all of its functional blocks enabled, with a base clock of 1126MHz and a boost clock of 1216MHz. The GTX 970 clocks in with a base clock of 1050MHz and a boost clock of 1178MHz. The 4GB of video memory on both cards is clocked at a blisteringly fast 7GHz (effective GDDR5 data rate). NVIDIA also optimized GM204's power efficiency by tweaking virtually every part of the GPU, and claims that Maxwell's SMs (Streaming Multiprocessors) offer double the performance of GK104's, and double the performance per watt as well. NVIDIA has also added support for new features, namely Dynamic Super Resolution (DSR), Multi-Frame Sampled Anti-Aliasing (MFAA), and Voxel Global Illumination (VXGI). Performance-wise, the GeForce GTX 980 is the fastest single-GPU powered graphics card ever tested. The GeForce GTX 970 isn't as dominant overall, but its performance was impressive nonetheless; it typically performed about on par with a GeForce GTX Titan and traded blows with the Radeon R9 290X.
61749941
submission
MojoKid writes:
Ever since Edward Snowden leaked details on how the government had forced various IT companies to disclose information (or secured their willing cooperation), companies like Google, Facebook, and Microsoft have been desperate to regain their users' trust. Six months ago, Microsoft announced that it would re-engineer its products and services to provide a much higher level of security; today, the company revealed that it has reached an important milestone in that process. As of now, Outlook.com uses TLS (Transport Layer Security) to encrypt inbound and outbound email in transit, assuming that the provider on the other end also uses TLS. The TLS standard has been in the news fairly recently after the discovery of a major security flaw in one popular package (GnuTLS), but Microsoft notes that it worked with multiple international companies to secure its implementation of the standard. In addition, OneDrive now uses Perfect Forward Secrecy (PFS). Microsoft refers to this as a type of encryption, but PFS isn't a standard like AES or 3DES; instead, it's a method of ensuring that an attacker who intercepts a particular key cannot use that information to break the entire key sequence. Even if an attacker manages to gain access to one file or folder, in other words, that information can't be used to compromise the entire account.
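The forward-secrecy idea can be illustrated with a toy ephemeral Diffie-Hellman exchange. Everything below is purely illustrative, with a deliberately tiny, insecure prime; it is not how Outlook.com or OneDrive actually implement PFS (real deployments use 2048-bit-plus groups or elliptic curves):

```python
import secrets

# Toy ephemeral Diffie-Hellman, purely to illustrate forward secrecy:
# each session derives its key from fresh throwaway secrets, so no
# long-lived key can be stolen later and used to unlock past traffic.
P = 7727  # small safe prime, INSECURE; real systems use 2048-bit+ groups
G = 2

def session_key() -> int:
    a = secrets.randbelow(P - 2) + 1    # client's ephemeral secret
    b = secrets.randbelow(P - 2) + 1    # server's ephemeral secret
    A, B = pow(G, a, P), pow(G, b, P)   # public values sent over the wire
    k_client, k_server = pow(B, a, P), pow(A, b, P)
    assert k_client == k_server         # both sides agree on the session key
    # a and b are discarded on return, so the key cannot be re-derived later,
    # even by someone who recorded A and B and then compromised the server.
    return k_client

# Two independent sessions: compromising one key says nothing about the other.
print(session_key(), session_key())
```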
52764365
submission
MojoKid writes:
One of the hallmark features of Google's Nexus 5 flagship smartphone, built by LG, isn't its bodaciously big 5-inch HD display, its 8MP camera, or its "OK Google" voice commands. That has all been done before. What does stand out about the Nexus 5 is Google's new Android 4.4 KitKat OS and LG's SoC (System on Chip) of choice, Qualcomm's quad-core Snapdragon 800. Qualcomm is known for licensing ARM core technology and making it its own, and the latest quad-core Krait 400 CPU and Adreno 330 GPU that comprise the Snapdragon 800 make it a powerful beast. Google has also taken the scalpel to KitKat in all the right places, whittling down the overall footprint of the OS so it's more efficient on lower-end devices and also offers faster multitasking. Specifically, memory usage has been optimized in a number of areas. Couple these OS tweaks with Qualcomm's Snapdragon 800 and you end up with a smartphone that hugs the corners and lights 'em up on the straights. Putting the Nexus 5 through its paces, preliminary figures are promising. In fact, the Nexus 5 was able to surpass the iPhone 5s, with Apple's 64-bit A7 processor, in a few tests, and it goes toe to toe with it in gaming and graphics.
48494485
submission
MojoKid writes:
big.LITTLE is ARM's solution to a particularly nasty problem: smaller and smaller process nodes no longer deliver the kind of overall power consumption improvements they did years ago. Before the 90nm node, semiconductor firms could count on new chips being smaller, faster, and drawing less power at a given frequency. Eventually, that stopped being true. Tighter process geometries still pack more transistors per square millimeter, but the improvements to power consumption and maximum frequency have been shrinking with each smaller node. Rising defect densities have created a situation where, for the first time ever, 20nm chips won't be cheaper than the 28nm chips they're supposed to replace. This is a critical problem for the mobile market, where low power consumption is absolutely vital. big.LITTLE is ARM's answer to this problem. The strategy requires manufacturers to implement two sets of cores; the Cortex-A7 and Cortex-A15 are the current match-up. The idea is for the little cores to handle the bulk of the device's work, with the big cores used for occasional heavy lifting. ARM's argument is that this approach is superior to dynamic voltage and frequency scaling (DVFS) alone, because it's impossible for a single CPU architecture to maintain a linear performance/power curve across its entire frequency range. This is the same argument NVIDIA made when it built the Companion Core in Tegra 3.
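The scheduling idea ARM describes can be sketched as follows. The power curves here are invented for illustration (they are not real Cortex-A7/A15 figures): the little core is cheaper at low load but tops out, and past a crossover point the big core wins even where both could do the job.

```python
# Hypothetical big.LITTLE core selection; power models are invented
# for illustration only, not taken from ARM documentation.
LITTLE_MAX = 0.4  # little core tops out at 40% of the big core's peak perf

def power_watts(core: str, perf: float):
    """Modeled power draw (W) to deliver 'perf' (fraction of peak, 0.0-1.0)."""
    if core == "little":
        if perf > LITTLE_MAX:
            return None                   # beyond the little core's reach
        return 0.05 + 1.0 * (perf / LITTLE_MAX) ** 2
    return 0.3 + 3.0 * perf ** 2          # big core: higher base, steeper curve

def pick_core(perf_needed: float) -> str:
    """Choose whichever core type can deliver the load for the least power."""
    options = {c: power_watts(c, perf_needed) for c in ("little", "big")}
    feasible = {c: w for c, w in options.items() if w is not None}
    return min(feasible, key=feasible.get)

print(pick_core(0.2))  # background work stays on the little core
print(pick_core(0.8))  # heavy lifting migrates to the big core
```

No single curve is cheapest everywhere, which is the core of ARM's argument against relying on DVFS alone.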