Desktops (Apple)

Low Power Mode for Mac Laptops: Making the Case Again (marco.org) 58

In light of this week's rumor that a Pro Mode -- which will supposedly boost performance on Macs running macOS Catalina -- may be coming, longtime developer and Apple commentator Marco Arment makes the case for a Low Power Mode on macOS. He writes: Modern hardware constantly pushes thermal and power limits, trying to strike a balance that minimizes noise and heat while maximizing performance and battery life. Software also plays a role, trying to keep everything background-updated, content-indexed, and photo-analyzed so it's ready for us when we want it, but not so aggressively that we notice any cost to performance or battery life. Apple's customers don't usually have control over these balances, and they're usually fixed at design time with little opportunity to adapt to changing circumstances or customer priorities.

The sole exception, Low Power Mode on iOS, seems to be a huge hit: by offering a single toggle that chooses a different balance, people are able to greatly extend their battery life when they know they'll need it. Mac laptops need Low Power Mode, too. I believe so strongly in its potential because I've been using it on my laptops (in a way) for years, and it's fantastic. I've been disabling Intel Turbo Boost on my laptops with Turbo Boost Switcher Pro most of the time since 2015. In 2018, I first argued for Low Power Mode on macOS with a list of possible tweaks, concluding that disabling Turbo Boost was still the best bang-for-the-buck tweak to improve battery life without a noticeable performance cost in most tasks.

Recently, as Intel has crammed more cores and higher clocks into smaller form factors and pushed thermal limits to new extremes, the gains have become even more significant. [...] With Turbo Boost disabled, peak CPU power consumption drops by 62%, with a correspondingly huge reduction in temperature. This has two massive benefits: The fans never audibly spin up. [...] It runs significantly cooler. Turbo Boost lets laptops get too hot to comfortably hold in your lap, and so much heat radiates out that it can make hands sweaty. Disable it, and the laptop only gets moderately warm, not hot, and hands stay comfortably dry. I haven't done formal battery testing on the 16-inch, since it's so difficult and time-consuming to do in a controlled way that's actually useful to people, but anecdotally, I'm seeing battery gains from disabling Turbo Boost similar to those I've seen with previous laptops: significantly longer battery life that I'd estimate at between 30% and 50%.
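
For the curious, the switch behind tools like Turbo Boost Switcher is a model-specific register. Below is a minimal sketch of the idea for a Linux machine, assuming the msr kernel module is loaded and root privileges; the register number and bit position follow Intel's documented IA32_MISC_ENABLE layout ("Turbo Mode Disable," bit 38). macOS exposes no /dev/cpu interface, which is why the macOS tools do the equivalent write from a kernel extension.

    /* turbo_off.c -- hedged sketch: disable Intel Turbo Boost by setting
     * bit 38 ("Turbo Mode Disable") of IA32_MISC_ENABLE (MSR 0x1A0).
     * Assumes Linux, the msr module (modprobe msr), and root.
     * Build: cc -O2 -o turbo_off turbo_off.c */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define IA32_MISC_ENABLE 0x1A0
    #define TURBO_DISABLE    (1ULL << 38)

    int main(int argc, char **argv) {
        const char *dev = argc > 1 ? argv[1] : "/dev/cpu/0/msr";
        int fd = open(dev, O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        uint64_t val;
        if (pread(fd, &val, sizeof(val), IA32_MISC_ENABLE) != sizeof(val)) {
            perror("pread"); return 1;
        }
        val |= TURBO_DISABLE;  /* cores now stay at or below base clock */
        if (pwrite(fd, &val, sizeof(val), IA32_MISC_ENABLE) != sizeof(val)) {
            perror("pwrite"); return 1;
        }
        close(fd);
        puts("Turbo Boost disabled until reboot (clear bit 38 to re-enable).");
        return 0;
    }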

Programming

'We're Approaching the Limits of Computer Power -- We Need New Programmers Now' (theguardian.com) 306

Ever-faster processors led to bloated software, but physical limits may force a return to the concise code of the past. John Naughton: Moore's law is just a statement of an empirical correlation observed over a particular period in history and we are reaching the limits of its application. In 2010, Moore himself predicted that the laws of physics would call a halt to the exponential increases. "In terms of size of transistor," he said, "you can see that we're approaching the size of atoms, which is a fundamental barrier, but it'll be two or three generations before we get that far -- but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit." We've now reached 2020 and so the certainty that we will always have sufficiently powerful computing hardware for our expanding needs is beginning to look complacent. Since this has been obvious for decades to those in the business, there's been lots of research into ingenious ways of packing more computing power into machines, for example using multi-core architectures in which a CPU has two or more separate processing units called "cores" -- in the hope of postponing the awful day when the silicon chip finally runs out of road. (The new Apple Mac Pro, for example, is powered by a 28-core Intel Xeon processor.) And of course there is also a good deal of frenzied research into quantum computing, which could, in principle, be an epochal development.

But computing involves a combination of hardware and software and one of the predictable consequences of Moore's law is that it made programmers lazier. Writing software is a craft and some people are better at it than others. They write code that is more elegant and, more importantly, leaner, so that it executes faster. In the early days, when the hardware was relatively primitive, craftsmanship really mattered. When Bill Gates was a lad, for example, he wrote a Basic interpreter for one of the earliest microcomputers, the TRS-80. Because the machine had only a tiny read-only memory, Gates had to fit it into just 16 kilobytes. He wrote it in assembly language to increase efficiency and save space; there's a legend that for years afterwards he could recite the entire program by heart. There are thousands of stories like this from the early days of computing. But as Moore's law took hold, the need to write lean, parsimonious code gradually disappeared and incentives changed.

Security

Half of the Websites Using WebAssembly Use It for Malicious Purposes (zdnet.com) 109

Around half of the websites that use WebAssembly, a new web technology, use it for malicious purposes, according to academic research published last year. From a report: WebAssembly is a low-level bytecode language that was created through a collaboration among all the major browser vendors. It introduces a new binary file format for transmitting code from a web server to a browser. Once it reaches the browser, WebAssembly code (Wasm) executes with near-native speed, similar to compiled C, C++, or Rust code. WebAssembly was created for speed and performance: due to its binary, machine-friendly format, Wasm code is smaller than its equivalent JavaScript form and many times faster to execute. This has made WebAssembly the next incarnation of Adobe Flash, allowing websites to run complex CPU-intensive code without freezing a browser, a task for which JavaScript was never designed or optimized.
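
To make the "compiled code in the browser" idea concrete, here is a minimal, hedged example: a toy C function built as a WebAssembly module. The build command shown is one common invocation and assumes an LLVM toolchain with wasm32 support installed; it is not the only way to produce Wasm.

    /* add.c -- toy example of C compiled to WebAssembly.
     * One way to build (assumes clang + lld with the wasm32 target):
     *   clang --target=wasm32 -O2 -nostdlib -Wl,--no-entry \
     *         -Wl,--export-all -o add.wasm add.c
     * The resulting add.wasm is a compact binary module a browser can
     * instantiate and call at near-native speed. */
    int add(int a, int b) {
        return a + b;
    }
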
AMD

AMD Unveils Ryzen 4000 Mobile CPUs Claiming Big Gains, 64-Core Threadripper (hothardware.com) 71

MojoKid writes: Yesterday, AMD launched its new Ryzen 4000 Series mobile processors for laptops at CES 2020, along with a monstrous 64-core/128-thread third-generation Ryzen Threadripper workstation desktop CPU. In addition to the new processors, the oft-leaked Radeon RX 5600 XT, which targets 1080p gamers in the sweet spot of the GPU market, was also made official. In CPU news, AMD claims Ryzen 4000 series mobile processors offer 20% lower SoC power, 2X perf-per-watt, 5X faster power state switching, and significantly improved iGPU performance versus its previous-gen mobile Ryzen 3000 products. AMD's U-Series flagship, the Ryzen 7 4800U, is an 8-core/16-thread processor with a max turbo frequency of 4.2GHz and an integrated Vega-derived 8-core GPU.

Along with architectural enhancements and the frequency benefits of producing the chips at 7nm, AMD is underscoring up to 59% improved performance per graphics core as well. AMD is also claiming superior single-thread CPU performance versus current Intel processors and significantly better multi-threaded performance. The initial Ryzen 4000 U-Series line-up consists of five processors, starting with the 4-core/4-thread Ryzen 5 4300U and topping off with the aforementioned Ryzen 7 4800U. On the other end of the spectrum, AMD revealed some new information regarding its 64-core/128-thread Ryzen Threadripper 3990X processor. The beast chip will have a base clock of 2.9GHz and a boost clock of 4.3GHz with a whopping 288MB of cache. The chip will drop into existing TRX40 motherboards and be available on February 7th for $3990. AMD showcased the chip versus a dual-socket Intel Xeon Platinum system in the V-Ray 3D rendering benchmark, beating the Xeon system by almost 30 minutes in a 90-minute workload, though the Intel system retails for around $20K.

Microsoft

The Original Xbox Was Announced 19 Years Ago Today (gamerevolution.com) 51

On January 6, 2001, Bill Gates and The Rock debuted the original Xbox, calling it "the most electrifying" games console on the market. GameRevolution reports: The surreal image of The Rock standing alongside Gates, telling the billionaire "it doesn't matter what you think, Bill," was certainly a unique way to debut the console. We're glad Microsoft opted for this unusual route, though, because if it hadn't we wouldn't have video footage of The Rock discussing symmetric multiprocessing.

The original Xbox was released on November 15, 2001. [It debuted with a 32-bit 733 MHz custom Intel Pentium III Coppermine-based processor, a 133 MHz 64-bit GTL+ front-side bus (FSB) with 1.06 GB/s of bandwidth, and 64 MB of unified DDR SDRAM with 6.4 GB/s of bandwidth, of which 1.06 GB/s is used by the CPU and 5.34 GB/s is shared by the rest of the system, according to Wikipedia.] Its high manufacturing cost would wind up costing Microsoft a lot of money, with the company losing $4 billion on the console. It would also fall short of its predicted 50 million sales, with Microsoft shifting only 24 million units by the end of its life cycle.

For comparison, the Xbox One X, which debuted on November 7, 2017, featured an SoC incorporating a 2.3 GHz octa-core CPU and a Radeon GPU with 40 compute units clocked at 1172 MHz, generating 6 teraflops of graphics compute performance. It also included 12GB of GDDR5 RAM, with 9 GB allocated to games. Microsoft's next-generation Series X console is expected to deliver "four times the processing power of Xbox One X," although technical specs have yet to be announced.

Open Source

Linus Torvalds Calls Blogger's Linux Scheduler Tests 'Pure Garbage' (phoronix.com) 191

On Wednesday Phoronix cited a blog post by C++ game developer Malte Skarupke claiming his spinlock experiments had uncovered a Linux kernel scheduler issue affecting developers bringing games to Linux for Google Stadia.

Linus Torvalds has now responded: The whole post seems to be just wrong, and is measuring something completely different than what the author thinks and claims it is measuring.

First off, spinlocks can only be used if you actually know you're not being scheduled while using them. But the blog post author seems to be implementing his own spinlocks in user space with no regard for whether the lock user might be scheduled or not. And the code used for the claimed "lock not held" timing is complete garbage.

It basically reads the time before releasing the lock, and then it reads it after acquiring the lock again, and claims that the time difference is the time when no lock was held. Which is just inane and pointless and completely wrong...
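
To see the shape of the mistake, here is a small hedged reconstruction of the pattern Torvalds is describing -- not Skarupke's actual code. A user-space spinlock is released and immediately re-acquired, and the gap is reported as "time the lock was not held"; if the kernel deschedules the thread anywhere in that window, the number reflects scheduler latency, not lock behavior.

    /* naive_spin.c -- illustrative reconstruction, not the blog's code.
     * Build: cc -O2 -pthread naive_spin.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;

    static long long now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    static void spin_lock(void) {
        /* Spinning in user space burns CPU but is invisible to the kernel:
         * the scheduler is free to preempt us mid-spin -- or while we
         * still hold the lock. */
        while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
            ;
    }

    static void spin_unlock(void) {
        atomic_flag_clear_explicit(&lock, memory_order_release);
    }

    static void *worker(void *arg) {
        long long worst = 0;
        for (int i = 0; i < 100000; i++) {
            spin_lock();
            /* ... critical section ... */
            long long t_release = now_ns();
            spin_unlock();

            spin_lock();                           /* immediately re-acquire */
            long long gap = now_ns() - t_release;  /* claimed "unlocked" time */
            spin_unlock();
            if (gap > worst) worst = gap;
        }
        /* The worst-case "gap" is dominated by preemption, i.e. by how long
         * the scheduler kept this CPU-bound thread off the CPU. */
        printf("worst 'lock not held' gap: %lld ns\n", worst);
        return arg;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        return 0;
    }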

[T]he code in question is pure garbage. You can't do spinlocks like that. Or rather, you very much can do them like that, and when you do that you are measuring random latencies and getting nonsensical values, because what you are measuring is "I have a lot of busywork, where all the processes are CPU-bound, and I'm measuring random points of how long the scheduler kept the process in place".

And then you write a blog-post blaming others, not understanding that it's your incorrect code that is garbage, and is giving random garbage values...

You might even see issues like "when I run this as a foreground UI process, I get different numbers than when I run it in the background as a batch process". Cool interesting numbers, aren't they?

No, they aren't cool and interesting at all, you've just created a particularly bad random number generator...

[Y]ou should never ever think that you're clever enough to write your own locking routines... Because the likelihood is that you aren't (and by that "you" I very much include myself -- we've tweaked all the in-kernel locking over decades, and gone through the simple test-and-set to ticket locks to cacheline-efficient queuing locks, and even people who know what they are doing tend to get it wrong several times).

There's a reason why you can find decades of academic papers on locking. Really. It's hard.
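
As a taste of one rung on the ladder Torvalds mentions, here is a minimal ticket-lock sketch (illustrative only; real kernel locks add cacheline padding, pause hints, and paravirtualization hooks on top of this idea). Each thread takes a ticket and spins until the "now serving" counter reaches it, which grants the lock in fair FIFO order:

    /* ticket_lock.c -- minimal ticket lock sketch (compiles standalone
     * with cc -c ticket_lock.c). */
    #include <stdatomic.h>

    typedef struct {
        atomic_uint next;    /* next ticket to hand out */
        atomic_uint serving; /* ticket currently allowed in */
    } ticket_lock_t;

    static void ticket_lock(ticket_lock_t *l) {
        unsigned my =
            atomic_fetch_add_explicit(&l->next, 1, memory_order_relaxed);
        while (atomic_load_explicit(&l->serving, memory_order_acquire) != my)
            ;  /* spin until our number is called */
    }

    static void ticket_unlock(ticket_lock_t *l) {
        atomic_fetch_add_explicit(&l->serving, 1, memory_order_release);
    }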

"It really means a lot to me that Linus responded," the blogger wrote later, "even if the response is negative." They replied to Torvalds' 1,500-word post on the same mailing list -- and this time received a 1900-word response arguing "you did locking fundamentally wrong..." The fact is, doing your own locking is hard. You need to really understand the issues, and you need to not over-simplify your model of the world to the point where it isn't actually describing reality any more...

Dealing with reality is hard. It sometimes means that you need to make your mental model for how locking needs to work a lot more complicated...

XBox (Games)

Microsoft's Next Xbox Is Xbox Series X, Coming Holiday 2020 (theverge.com) 78

At the 2019 Game Awards today, Microsoft revealed the name and console design of its next-generation gaming console: Xbox Series X. The Verge reports: The console looks far more like a PC than any previous Xbox, and Microsoft's trailer provides a brief glimpse at the new design. It's designed to be used in both vertical and horizontal orientations, and Microsoft's Xbox chief, Phil Spencer, promises that it will "deliver four times the processing power of Xbox One X in the most quiet and efficient way."

The Xbox Series X will include a custom-designed processor based on AMD's Zen 2 CPU and Radeon RDNA graphics architectures. Microsoft is also using an SSD in the Xbox Series X, which promises to boost load times. Xbox Series X will also support 8K gaming, frame rates of up to 120 fps in games, ray tracing, and variable refresh rate support. Microsoft also revealed a new Xbox Wireless Controller today. "Its size and shape have been refined to accommodate an even wider range of people, and it also features a new Share button to make capturing screenshots and game clips simple," explains Spencer. This updated controller will work with existing Xbox One consoles and Windows 10 PCs, and will ship with every Xbox Series X.

Printer

Ask Slashdot: Will We Ever Be Able To Make Our Own Computer Hardware At Home? 117

dryriver writes: The sheer extent of the ongoing data privacy catastrophe -- every piece of software and hardware potentially spies on us, and we don't get to see the source code or circuit diagrams -- got me thinking about an intriguing possibility. Will it ever be possible to design and manufacture your own CPU, GPU, ASIC or RAM chip right in your own home? 3D printers already let you produce objects at home that would previously have required an injection molding machine. Inkjet printers can do high-DPI color printouts at home that would previously have required a printing press. Could this ever happen for computer hardware? A compact home machine that can print out DIY electronic circuits right in your home or garage? Could this machine look a bit like a large inkjet printer, where you load the electronics equivalent of "premium glossy photo paper" into it, and out comes a printed, etched, or otherwise created integrated circuit that just needs some electricity to start working? If such a machine or "electronics printer" is technically feasible, would the powers that be ever allow us to own one?

Security

New Plundervolt Attack Impacts Intel Desktop, Server, and Mobile CPUs (zdnet.com) 74

An anonymous reader quotes a report from ZDNet: Academics from three universities across Europe have disclosed today a new attack that impacts the integrity of data stored inside Intel SGX, a highly secured area of Intel CPUs. The attack, which researchers have named Plundervolt, exploits the interface through which an operating system can control an Intel processor's voltage and frequency -- the same interface that allows gamers to overclock their CPUs. Academics say they discovered that by tinkering with the voltage and frequency a CPU receives, they can alter bits inside SGX to cause errors that can be exploited at a later point, after the data has left the security of the SGX enclave. They say Plundervolt can be used to recover encryption keys or introduce bugs in previously secure software. Intel desktop, server, and mobile CPUs are impacted, and Intel has published a full list of vulnerable CPUs. Intel has also released microcode (CPU firmware) and BIOS updates today that address the Plundervolt attack [by allowing users to disable the energy management interface at the source of the attack, if not needed]. Proof-of-concept code for reproducing attacks will be released on GitHub.
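
The fault-injection idea behind the attack can be sketched in a few lines. The snippet below is illustrative only: the constants are arbitrary, and the real attack runs the computation inside an SGX enclave while simultaneously driving the undervolting interface, which this sketch does not do. It shows just the detector half -- repeat a multiplication whose result is known, and watch for the occasional wrong product that an aggressive undervolt induces.

    /* fault_probe.c -- illustrative sketch of Plundervolt-style fault
     * detection (no undervolting is performed here). */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        volatile uint64_t a = 0xAE0000, b = 0x18;  /* arbitrary operands */
        const uint64_t expected = 0xAE0000ULL * 0x18ULL;
        for (long i = 0; i < 100000000L; i++) {
            uint64_t r = a * b;        /* forced runtime multiply */
            if (r != expected) {       /* a flipped bit in the product */
                printf("fault after %ld iterations: %#llx\n",
                       i, (unsigned long long)r);
                return 1;
            }
        }
        puts("no faults (expected at nominal voltage)");
        return 0;
    }
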
Google

Google Built Its Own Tiny HDMI 2.1 Box To Jump-Start 'the Next Generation of Android TV' (theverge.com) 23

Google today announced that Android 10 is arriving on Android TV, and it's about as bland an update as they come. From a report: Primarily, it's just the performance and security benefits of Android 10, without a single new user-facing feature. But at the bottom of Google's blog post, the company hints at why: Google's busy prepping for the "next generation of Android TV," starting with a miniature developer box. Google says this new ADT-3 dongle is a full-fledged Android TV platform, with a quad-core ARM Cortex-A53 CPU, 2GB of DDR3 memory, and the ability to output 4K HDR content at 60 frames per second over its HDMI 2.1 port. Before you get too excited, know that it's a developer device; like its predecessor, the ADT-2, it may never be officially available for purchase.

Businesses

Apple Sues iPhone CPU Design Ace After He Quits To Run Data-Center Chip Upstart Nuvia (theregister.co.uk) 100

Apple is suing the former chief architect of its iPhone and iPad microprocessors, who in February quit to co-found a data-center chip design biz. From a report: In a complaint filed in the Santa Clara Superior Court, in California, USA, and seen by The Register, the Cupertino goliath claimed Gerard Williams, CEO of semiconductor upstart Nuvia, broke his Apple employment agreement while setting up his new enterprise. Williams -- who oversaw the design of Apple's custom high-performance mobile Arm-compatible processors for nearly a decade -- quit the iGiant in February to head up the newly founded Nuvia. The startup officially came out of stealth mode at the end of November, boasting it had bagged $53m in funding. It appears to be trying to design silicon chips, quite possibly Arm-based ones, for data center systems; it is being coy right now with its plans and intentions.

[...] Apple's lawsuit alleged Williams hid the fact that he was preparing to leave Apple to start his own business while still working there, and drew on his work steering iPhone processor design to create his new company. Crucially, Tim Cook & Co's lawyers claimed he tried to lure away staff from his former employer. All of this was, allegedly, in breach of his contract. The iGiant also reckoned Williams had formed the startup in the hope of being bought by Apple to produce future systems for its data centers. [...] Apple's side of the story, however, has been challenged by Williams, who accused the Mac giant of wrongdoing. Last month, his team hit back with a counter-argument alleging that Apple doesn't have a legal leg to stand on. The paperwork states Apple's employment contract provisions in this case are not enforceable under California law: they argue the language amounts to a non-compete clause, which is, generally speaking, a no-no in the Golden State. Thus, they say, Williams was allowed to plan and recruit for his new venture while at Apple. [...] They also allege that Apple's evidence in its complaint, notably text messages he exchanged with another Apple engineer and conversations with his eventual Nuvia co-founders, was collected illegally by the highly paranoid iPhone maker.

Open Source

WireGuard VPN Is On Its Way To Linux (zdnet.com) 48

WireGuard has now been committed to the mainline Linux kernel. "While there are still tests to be made and hoops to be jumped through, it should be released in the next major Linux kernel release, 5.6, in the first or second quarter of 2020," reports ZDNet. From the report: WireGuard has been in development for some time. It is a layer 3 secure VPN. Unlike its older rivals, which it's meant to replace, its code is much cleaner and simpler. The result is a fast, easy-to-deploy VPN. While it started as a Linux project, WireGuard code is now cross-platform and is available on Windows, macOS, BSD, iOS, and Android. It took longer to arrive than many wished because WireGuard's principal designer, Jason Donenfeld, disliked Linux's built-in cryptographic subsystem on the grounds that its application programming interface (API) was too complex and difficult. He suggested it be supplemented with a new cryptographic subsystem: his own Zinc library. Many developers didn't like this. They saw it as wasting time reinventing the cryptographic wheel.

But Donenfeld had an important ally. Torvalds wrote, "I'm 1000% with Jason on this. The crypto/ model is hard to use, inefficient, and completely pointless when you know what your cipher or hash algorithm is, and your CPU just does it well directly." In the end, Donenfeld compromised: "WireGuard will get ported to the existing crypto API. So it's probably better that we just fully embrace it, and afterward work evolutionarily to get Zinc into Linux piecemeal." That's exactly what happened. Some Zinc elements have been imported into the legacy crypto code in the forthcoming Linux 5.5 kernel. This laid the foundation for WireGuard to finally ship in Linux early next year.
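
Part of WireGuard's appeal is how little configuration it needs. A complete client config for wg-quick fits in a handful of lines; the example below is hypothetical, with placeholder keys, addresses, and hostname:

    # /etc/wireguard/wg0.conf -- hypothetical client config
    [Interface]
    PrivateKey = <client-private-key-base64>
    Address = 10.0.0.2/32

    [Peer]
    PublicKey = <server-public-key-base64>
    Endpoint = vpn.example.com:51820
    # Route all IPv4 traffic through the tunnel:
    AllowedIPs = 0.0.0.0/0
    # Keep NAT mappings alive (seconds):
    PersistentKeepalive = 25

Bringing the tunnel up is then a single command: wg-quick up wg0.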

Intel

Intel CEO Blames Company's Obsessive Focus on Capturing 90% CPU Market Share For Missing Out on Other Opportunities (wccftech.com) 101

Intel chief executive Bob Swan says he's willing to let go of the company's traditional dominance of the market for CPUs in order to meet the rising demand for newer, more specialized silicon chips for applications such as AI and autonomous cars. From a report: Intel's Bob Swan blames being focused on 90% CPU market share as a reason for missing opportunities and transitions, and envisions Intel as having 30% of an all-silicon TAM instead of a majority of the CPU TAM. Just a few years ago, Intel owned more than 90% of the market share in the x86 CPU market. Many financial models used Intel's revenue as a proxy for the Total Addressable Market (TAM) of the CPU sector. With full-year revenue of $59.4 billion in 2017, you can estimate the total TAM of the CPU side of things at roughly $66 billion (2017 est). Bob Swan believes that this mindset of protecting a majority share on the CPU side has led to Intel becoming complacent and missing out on major opportunities. Bob even went as far as to say that he is trying to "destroy" this thinking of having a 90% market share on the CPU side and instead wants people to come into the office thinking Intel has 30% market share in "all silicon." Swan on how Intel got to where it is now: How we got here is really kind of threefold. One, we grew a lot faster than we expected -- the demand for CPUs and servers grew much faster than we expected in 2018. You'll remember we came into 2018 projecting 10% growth and we grew by 21%, so the good-news problem is that demand for our products in our transformation to a data-centric company was much higher than we expected. Secondly, we took on 100% market share for smartphone modems and we decided that we would build them in our fabs, so we took on even more demand. And third, to exacerbate that, we slipped on bringing our 10nm to life, and when that happens you build more and more performance into your last generation -- for us, 14nm -- which means a higher core count and larger die size. So those three -- growing much faster than we thought, bringing modems inside and delaying 10nm -- resulted in a position where we didn't have flexible capacity.
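
The implied arithmetic behind that $66 billion figure (a back-of-the-envelope estimate, not an Intel number): $59.4 billion of revenue at roughly 90% market share gives $59.4B / 0.90, which is about $66B for the whole CPU market.
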
Hardware

Snapdragon XR2 Chip To Enable Standalone Headsets With 3K x 3K Resolution, 7 Cameras (roadtovr.com) 34

An anonymous reader quotes a report from Road to VR: Qualcomm today announced Snapdragon XR2 5G, its latest chipset platform dedicated to the needs of standalone VR and AR headsets. The new platform is aimed at high-end devices with support for 3K x 3K displays at 90Hz, along with integrated 5G, accelerated AI processing, and up to seven simultaneous camera feeds for user and environment tracking. While XR1 was made for low-end devices, XR2 5G targets high-end standalone headsets, making it a candidate for Oculus Quest 2, Magic Leap 2, and similar next-gen devices.

XR2 offers notable improvements over Snapdragon 835 (one of the most common chipsets found in current standalone headsets, including Quest); Qualcomm claims 2x CPU and GPU performance, a 4x increase in pixel throughput for video playback, and up to 6x the per-eye resolution compared to Snapdragon 835 -- supporting up to 3K x 3K displays at 90Hz. [...] Notably, XR2 supports up to seven simultaneous camera feeds (up from four in prior platforms). This is key for advanced tracking, both of the environment and the user. [...] Qualcomm also says that XR2 offers low-latency pass-through video, which could improve the pass-through experience on headsets like Quest and potentially enable a wider range of pass-through AR use cases. Additionally, XR2 boasts significantly accelerated AI processing -- 11x compared to Snapdragon 835 -- which could greatly benefit the sort of operations used for turning incoming video feeds into useful tracking information.

Open Source

RISC-V Foundation Moving To Switzerland Over Trade Curb Fears (reuters.com) 76

hackingbear writes: The RISC-V Foundation, which sets standards for the open-source CPU architecture and controls who can use the RISC-V trademark on products, will soon move to Switzerland to ensure that universities, governments and companies outside the United States can help develop its open-source technology. "From around the world, we've heard that 'If the incorporation was not in the U.S., we would be a lot more comfortable,'" its Chief Executive Calista Redmond said. Redmond said the foundation's board of directors approved the move unanimously but declined to disclose which members prompted it. More than 325 companies or other entities pay to be members, including U.S. and European chip suppliers such as Qualcomm and NXP Semiconductors, as well as China's Alibaba Group and Huawei Technologies.

The foundation's move from Delaware to Switzerland may foreshadow further technology flight because of U.S. restrictions on dealing with some Chinese technology companies, said William Reinsch, who was undersecretary of commerce for export administration in the Clinton administration. "There is a message for the government. The message is, if you clamp down on things too tightly this is what is going to happen. In a global supply chain world, companies have choices, and one choice is to go overseas," he said. The U.S. has increasingly tended to sanction foreign, especially Chinese, companies using national security as an excuse, thus conveniently evading legal due process in the U.S. justice system without providing any actual evidence.

AMD

AMD Launches Threadripper 3970X, 3960X and Smokes Intel's New 18-Core CPU (hothardware.com) 44

MojoKid writes: Intel and AMD have been duking it out in the high-end desktop processor space lately. AMD's return to competitive footing versus Intel has propelled the company forward, and the brand has a loyal, passionate following due to the competitive performance-per-dollar its 3rd Generation Ryzen processors bring versus Intel offerings. Today, both companies launched new flagship many-core CPUs: the Intel Core i9-10980XE, an 18-core chip, and the AMD 3rd Gen Threadripper 3970X and 3960X, 32-core and 24-core chips, respectively. Intel's Core i9-10980XE brings a lower price of $999 and competes more favorably versus AMD's lower-end 16-core Ryzen 9 3950X that's priced at just $750. Meanwhile, the new AMD Threadripper 3960X at $1399 and Threadripper 3970X at $1999 leave Intel's fastest desktop chip in the dust in multi-threaded workloads, sometimes by a wide margin. In addition, while the Threadripper 3960X and 3970X pull only about 26 to 36 watts of additional power versus Intel's new Core i9-10980XE, they do it with 33-77% more core resources. Regardless, it's impressive how the tables have turned: AMD is now firmly entrenched with better value propositions in high-end desktop processors, and better performance in many cases as well.
AMD

AMD Launches 16-Core Ryzen 9 3950X At $750, Beating Intel's $2K 18-Core Chip (hothardware.com) 67

MojoKid writes: AMD officially launched its latest many-core Zen 2-based processor today, a 16-core/32-thread beast known as the Ryzen 9 3950X. The Ryzen 9 3950X goes head-to-head against Intel's HEDT flagship line-up, like the 18-core Core i9-9980XE, but at a much more reasonable price point of $750 (versus over $2K for the Intel chip). The Ryzen 9 3950X has base and boost clocks of 3.5GHz and 4.7GHz, respectively. The CPU cores at the heart of the Ryzen 9 3950X are grouped into two 7nm 8-core chiplets, each with dual four-core compute complexes (CCXes). Those chiplets link to an IO die that houses the memory controller, PCI Express lanes, and other off-chip IO. The new 16-core Zen 2 chips also use the same AM4 socket and are compatible with the same motherboards, memory, and coolers currently on the market for lower core-count AMD Ryzen CPUs. Throughout all of HotHardware's benchmark testing, the 16-core Ryzen 9 3950X consistently finished at or very near the top of the charts in every heavily-threaded workload, and handily took Intel's 18-core chip to task, beating it more often than not.
Windows

Windows and Linux Get Options To Disable Intel TSX To Prevent Zombieload v2 Attacks (zdnet.com) 67

Both Microsoft and the Linux kernel teams have added ways to disable support for Intel Transactional Synchronization Extensions (TSX). From a report: TSX is the Intel technology that opens the company's CPUs to attacks via the Zombieload v2 vulnerability. Zombieload v2 is the codename of a vulnerability that allows malware or a malicious threat actor to extract information processed inside a CPU, information they normally shouldn't be able to access due to the security walls present inside modern-day CPUs. This new vulnerability was disclosed earlier this week. Intel said it would release microcode (CPU firmware) updates -- available on the company's Support & Downloads center. But the reality of a real-world production environment is that performance matters. Past microcode updates for other attacks, such as Meltdown, Spectre, Foreshadow, Fallout, and Zombieload v1, have been known to introduce performance hits of up to 40%. Seeing that all the CPU attacks listed above are not only theoretical but also hard to pull off, some companies don't see this performance hit as an option.
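
On the Linux side, the new switches are kernel command-line parameters, documented in the kernel's hardware-vulnerabilities admin guide. A minimal sketch (assuming a kernel new enough to carry the TSX code, and a CPU with the updated microcode exposing the TSX_CTRL MSR):

    # Disable TSX outright at boot:
    tsx=off
    # Or leave TSX on but enable the TSX Async Abort mitigation:
    tsx=on tsx_async_abort=full
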
Intel

Intel's Cascade Lake CPUs Impacted By New Zombieload v2 Attack (zdnet.com) 43

The Zombieload vulnerability disclosed this past May has a second variant that also works against more recent Intel processors -- not just older ones -- including Cascade Lake, Intel's latest line of high-end CPUs, which was initially thought to be unaffected. From a report: Intel is releasing microcode (CPU firmware) updates today to address this new Zombieload attack variant, as part of its monthly Patch Tuesday -- known as the Intel Platform Update (IPU) process. Back in May, two teams of academics disclosed a new batch of vulnerabilities that impacted Intel CPUs. Collectively known as MDS attacks, these are security flaws in the same class as Meltdown, Spectre, and Foreshadow. The attacks rely on taking advantage of the speculative execution process, an optimization technique that Intel added to its CPUs to improve data processing speeds and performance. Vulnerabilities like Meltdown, Spectre, and Foreshadow showed that the speculative execution process was riddled with security holes. Disclosed in May, MDS attacks were just the latest line of vulnerabilities impacting speculative execution. They were different from the original Meltdown, Spectre, and Foreshadow bugs disclosed in 2018 because they attacked different areas of a CPU's speculative execution process. Further reading: Flaw in Intel PMx driver gives 'near-omnipotent control over a victim device'.
Data Storage

Ask Slashdot: Are There Storage Devices With Hardware Compression Built In? 120

Slashdot reader dryriver writes: Using a compressed disk drive or hard drive has been possible for decades now. But when you do this in software or the operating system, the CPU does the compressing and decompressing. Are there any hard drives or SSDs that can work compressed using their own built in hardware for this?

I'm not talking about realtime video compression using a hardware CODEC chip -- this does exist and is used -- but rather a storage medium that compresses every possible type of file using its own compression and decompression realtime hardware without a significant speed hit.

Leave your best thoughts and suggestions in the comments. Are there storage devices with hardware compression built in?
