Debian

Debian 13.0 To Begin Supporting RISC-V as an Official CPU Architecture (phoronix.com) 28

It was nearly a decade ago when the first RISCV64 port was started for Debian, reports Phoronix. But one of the big features planned for Debian 13.0 (planned for 9 August) is support for RISC-V as an official CPU architecture. This is the first release in which RISC-V 64-bit is officially supported by Debian Linux, albeit with limited board support, and the Debian RISC-V build process remains handicapped by slow hardware.

A Debian RISC-V BoF session was held at this week's DebConf25 conference in France to discuss the state of RISCV64 for Debian Linux. The talk was led by Debian developers Aurelien Jarno and Bo YU... RV64GC is the current target for Debian RISC-V, with UEFI-based booting as the default. Over seventeen thousand Debian source packages are building for RISC-V with Trixie... Those wishing to learn more about the current state of Debian for RISC-V can see the PDF slide deck from DebConf25.

Bug

NVIDIA Warns Its High-End GPUs May Be Vulnerable to Rowhammer Attacks (nerds.xyz) 15

Slashdot reader BrianFagioli shared this report from Nerds.xyz: NVIDIA just put out a new security notice, and if you're running one of its powerful GPUs, you might want to pay attention. Researchers from the University of Toronto have shown that Rowhammer attacks, which are already known to affect regular DRAM, can now target GDDR6 memory on NVIDIA's high-end GPUs when ECC [error correction code] is not enabled.

They pulled this off using an A6000 card, and it worked because system-level ECC was turned off. Once it was switched on, the attack no longer worked. That tells you everything you need to know. ECC matters.

Rowhammer has been around for years. It's one of those weird memory bugs where repeatedly accessing one row in RAM can cause bits to flip in another row. Until now, this was mostly a CPU memory problem. But this research shows it can also be a GPU problem, and that should make data center admins and workstation users pause for a second.

NVIDIA is not sounding an alarm so much as reminding everyone that protections are already in place, but only if you're using the hardware properly. The company recommends enabling ECC if your GPU supports it. That includes cards in the Blackwell, Hopper, Ada, and Ampere lines, along with others used in DGX, HGX, and Jetson systems. It also includes popular workstation cards like the RTX A6000.

There's also built-in On-Die ECC in certain newer memory types like GDDR7 and HBM3. If you're lucky enough to be using a card that has it, you're automatically protected to some extent, because OD-ECC can't be turned off. It's always working in the background. But let's be real. A lot of people skip ECC because it can impact performance or because they're running a setup that doesn't make it obvious whether ECC is on or off. If you're not sure where you stand, it's time to check. NVIDIA suggests using tools like nvidia-smi or, if you're in a managed enterprise setup, working with your system's BMC or Redfish APIs to verify settings.
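As a sketch of that check, the current ECC mode can be read with `nvidia-smi`'s query interface and parsed from its CSV output. The helper functions below are illustrative, not NVIDIA's tooling; only the `nvidia-smi` flags themselves are real:

```python
import shutil
import subprocess


def parse_ecc_modes(csv_output: str) -> list[str]:
    """Parse the one-line-per-GPU CSV output of an ECC query
    ("Enabled" / "Disabled") into a list of mode strings."""
    return [line.strip() for line in csv_output.splitlines() if line.strip()]


def query_ecc_modes() -> list[str]:
    """Ask nvidia-smi for the current ECC mode of every GPU.
    Returns an empty list when no NVIDIA driver is installed."""
    if shutil.which("nvidia-smi") is None:
        return []
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=ecc.mode.current", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_ecc_modes(out)


if __name__ == "__main__":
    for i, mode in enumerate(query_ecc_modes()):
        print(f"GPU {i}: ECC {mode}")
```

Any GPU reporting "Disabled" here is in the configuration the Toronto researchers attacked.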

AMD

AMD Warns of New Meltdown, Spectre-like Bugs Affecting CPUs (theregister.com) 26

AMD is warning users of a newly discovered form of side-channel attack affecting a broad range of its chips that could lead to information disclosure. Register: Akin to Meltdown and Spectre, the Transient Scheduler Attack (TSA) comprises four vulnerabilities that AMD said it discovered while looking into a Microsoft report about microarchitectural leaks.

The four bugs do not appear too venomous at face value -- two have medium-severity ratings while the other two are rated "low." However, the low-level nature of the exploit's impact has nonetheless led Trend Micro and CrowdStrike to assess the threat as "critical."

The reason for the low severity scores is the high degree of complexity involved in a successful attack -- AMD said it could only be carried out by an attacker able to run arbitrary code on a target machine. It affects AMD processors (desktop, mobile and datacenter models), including 3rd gen and 4th gen EPYC chips -- the full list is here.

HP

CarFax For Used PCs: Hewlett Packard Wants To Give Laptops New Life (arstechnica.com) 52

HP is developing a "PCFax" system similar to CarFax for used cars that securely collects and stores detailed device usage and health data at the firmware level to extend the life of PCs and reduce e-waste. A team of HP experts outlines the system in a recent IEEE Spectrum report: The secure telemetry protocol we've developed at HP works as follows. We gather the critical hardware and sensor data and store it in a designated area of the SSD. This area is write-locked, meaning only authorized firmware components can write to it, preventing accidental modification or tampering. That authorized firmware component we use is the Endpoint Security Controller, a dedicated piece of hardware embedded in business-class HP PCs. It plays a critical role in strengthening platform-level security and works independently from the main CPU to provide foundational protection.

The endpoint security controller establishes a secure session by retaining the secret key within the controller itself. This mechanism enables read data protection on the SSD -- where telemetry and sensitive data are stored -- by preventing unauthorized access, even if the operating system is reinstalled or the system environment is otherwise altered. Then, the collected data is recorded in a timestamped file, stored within a dedicated telemetry log on the SSD. Storing these records on the SSD has the benefit of ensuring the data is persistent even if the operating system is reinstalled or some other drastic change in software environment occurs. The telemetry log employs a cyclic buffer design, automatically overwriting older entries when the log reaches full capacity. Then, the telemetry log can be accessed by authorized applications at the operating system level.
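The cyclic-buffer design described above can be sketched in a few lines. This is a hypothetical illustration of the data structure only; the class and method names are invented, and Python's `deque` stands in for the write-locked SSD region:

```python
from collections import deque
from datetime import datetime, timezone


class TelemetryLog:
    """Fixed-capacity, timestamped event log. Once the buffer is
    full, the oldest entries are overwritten automatically (a
    cyclic/ring buffer), mirroring the on-SSD log HP describes."""

    def __init__(self, capacity: int):
        self._entries = deque(maxlen=capacity)  # maxlen gives ring-buffer behavior

    def record(self, event: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self._entries.append((stamp, event))

    def entries(self):
        return list(self._entries)


log = TelemetryLog(capacity=3)
for event in ["boot", "thermal_event", "battery_cycle", "boot"]:
    log.record(event)
# Capacity is 3, so the earliest "boot" entry has been overwritten.
print([e for _, e in log.entries()])  # ['thermal_event', 'battery_cycle', 'boot']
```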

The telemetry log serves as the foundation for a comprehensive device history report. Much like a CarFax report for used cars, this report, which we call PCFax, will provide both current users and potential buyers with crucial information. The PCFax report aggregates data from multiple sources beyond just the on-device telemetry logs. It combines the secure firmware-level usage data with information from HP's factory and supply chain records, digital services platforms, customer support service records, diagnostic logs, and more. Additionally, the system can integrate data from external sources including partner sales and service records, refurbishment partner databases, third-party component manufacturers like Intel, and other original equipment manufacturers. This multi-source approach creates a complete picture of the device's entire lifecycle, from manufacturing through all subsequent ownership and service events.

Firefox

Firefox 140 Arrives With ESR Status 29

Longtime Slashdot reader williamyf writes: Firefox 140 just landed. Some user-facing features include:

Vertical Tabs: You can now keep more -- or fewer -- pinned tabs in view for quicker access to important windows. Just drag the divider to resize your pinned tabs section.
Unload Tabs: You can now unload tabs by right-clicking on a tab (or multiple selected tabs) and selecting "Unload Tab." This can speed up performance by reducing Firefox's memory and CPU usage.

But the most important feature? This release is an Extended Support Release (ESR). Why are ESRs so important? ESR is the Firefox version that ships as the default with many Linux distributions. Some downstream projects (like Waterfox) depend on the ESR version. Many enterprise software systems are tested only against ESR. When features are dropped -- like support for older operating systems or Flash -- ESR keeps that functionality around for longer.

And speaking of old operating systems: If you are using Windows 7, 8.1, or macOS 10.12 through 10.15, note that Firefox ESR 115 (the last version supporting these OSs) will continue to receive patches until at least September 2025.

So one can see why ESR is very important for some people.
The release notes are available here.
Ubuntu

Ubuntu To Disable Intel Graphics Security Mitigations To Boost GPU Performance By Up To 20% (arstechnica.com) 15

Disabling Intel graphics security mitigations in GPU compute stacks for OpenCL and Level Zero can yield a performance boost of up to 20%, prompting Ubuntu's Canonical and Intel to disable these mitigations in future Ubuntu packages. Phoronix's Michael Larabel reports: Intel does allow building their GPU compute stack without these mitigations by using the "NEO_DISABLE_MITIGATIONS" build option and that is what Canonical is looking to set now for Ubuntu packages to avoid the significant performance impact. This work will likely all be addressed in time for Ubuntu 25.10. This NEO_DISABLE_MITIGATIONS option is just for compiling the Intel Compute Runtime stack and doesn't impact the Linux kernel security mitigations or anything else outside of Intel's "NEO" GPU compute stack. Both Intel and Canonical are in agreement with this move and it turns out that even Intel's GitHub binary packages for their Compute Runtime for OpenCL and Level Zero ship with the mitigations disabled due to the performance impact. This Ubuntu Launchpad bug report for the Intel Compute Runtime notes some of the key takeaways. There is also this PPA where Ubuntu developers are currently testing their Compute Runtime builds with NEO_DISABLE_MITIGATIONS enabled for disabling the mitigations.
PlayStation (Games)

Engineer Creates First Custom Motherboard For 1990s PlayStation Console (arstechnica.com) 12

An anonymous reader quotes a report from Ars Technica: Last week, electronics engineer Lorentio Brodesco announced the completion of a mock-up for nsOne, reportedly the first custom PlayStation 1 motherboard created outside of Sony in the console's 30-year history. The fully functional board accepts original PlayStation 1 chips and fits directly into the original console case, marking a milestone in reverse-engineering for the classic console released in 1994. Brodesco's motherboard isn't an emulator or FPGA-based re-creation -- it's a genuine circuit board designed to work with authentic PlayStation 1 components, including the CPU, GPU, SPU, RAM, oscillators, and voltage regulators. The board represents over a year of reverse-engineering work that began in March 2024 when Brodesco discovered incomplete documentation while repairing a PlayStation 1.

"This isn't an emulator. It's not an FPGA. It's not a modern replica," Brodesco wrote in a Reddit post about the project. "It's a real motherboard, compatible with the original PS1 chips." It's a desirable project for some PS1 enthusiasts because a custom motherboard could allow owners of broken consoles to revive their systems by transplanting original chips from damaged boards onto new, functional ones. With original PS1 motherboards becoming increasingly prone to failure after three decades, replacement boards could extend the lifespan of these classic consoles without resorting to emulation.

The nsOne project -- short for "Not Sony's One" -- uses a hybrid design based on the PU-23 series motherboards found in SCPH-900X PlayStation models but reintroduces the parallel port that Sony had removed from later revisions. Brodesco upgraded the original two-layer PCB design to a four-layer board while maintaining the same form factor. [...] As Brodesco noted on Kickstarter, his project's goal is to "create comprehensive documentation, design files, and production-ready blueprints for manufacturing fully functional motherboards." Beyond repairs, the documentation and design files Brodesco is creating would preserve the PlayStation 1's hardware architecture for future generations: "It's a tribute to the PS1, to retro hardware, and to the belief that one person really can build the impossible."

Intel

Top Researchers Leave Intel To Build Startup With 'The Biggest, Baddest CPU' (oregonlive.com) 104

An anonymous reader quotes a report from OregonLive: Together, the four founders of Beaverton startup AheadComputing spent nearly a century at Intel. They were among Intel's top chip architects, working years in advance to develop new generations of microprocessors to power the computers of the future. Now they're on their own, flying without a net, building a new class of microprocessor on an entirely different architecture from Intel's. Founded a year ago, AheadComputing is trying to prove there's a better way to design computer chips.

"AheadComputing is doing the biggest, baddest CPU in the world," said Debbie Marr, the company's CEO. [...] AheadComputing is betting on an open architecture called RISC-V -- RISC stands for "reduced instruction set computer." The idea is to craft a streamlined microprocessor that works more efficiently by doing fewer things, and doing them better than conventional processors. For AheadComputing's founders and 80 employees, many of them also Intel alumni, it's a major break from the kind of work they've been doing all their careers. They've left a company with more than 100,000 workers to start a business with fewer than 100.

"Every person in this room," Marr said, looking across a conference table at her colleagues, "we could have stayed at Intel. We could have continued to do very exciting things at Intel." They decided they had a better chance at leading a revolution in semiconductor technology at a startup than at a big, established company like Intel. And AheadComputing could be at the forefront of renewal in Oregon's semiconductor ecosystem. "We see this opportunity, this light," Marr said. "We took our chances."
It'll be years before AheadComputing's designs are on the market, but the company "envisions its chips will someday power PCs, laptops and data centers," reports OregonLive. "Possible clients could include Google, Amazon, Samsung or other large computing companies."
Build

Linux 6.16 Adds 'X86_NATIVE_CPU' Option To Optimize Your Kernel Build (phoronix.com) 33

unixbhaskar shares a report from Phoronix: The X86_NATIVE_CPU Kconfig build time option has been merged for the Linux 6.16 merge window as an easy means of enforcing "-march=native" compiler behavior on AMD and Intel processors to optimize your kernel build for the local CPU architecture/family of your system. For those wanting to "-march=native" your Linux kernel build on AMD/Intel x86_64 processors, the new CONFIG_X86_NATIVE_CPU option can be easily enabled for setting that compiler option on your local kernel builds.

The CONFIG_X86_NATIVE_CPU option is honored if compiling the Linux x86_64 kernel with GCC or LLVM Clang when using Clang 19 or newer due to a compiler bug with the Linux kernel on older compiler versions. In addition to setting the "-march=native" compiler option for the Linux kernel C code, enabling this new Kconfig build option also sets "-Ctarget-cpu=native" for the kernel's Rust code too.
"It seems interesting though," comments unixbhaskar. "If the detailed benchmark shows some improvement with the option selected, then distros might start to adopt it for their flavor."
Red Hat Software

Red Hat Collaborates with SiFive on RISC-V Support, as RHEL 10 Brings AI Assistant and Post-Quantum Security (betanews.com) 24

SiFive was one of the first companies to produce a RISC-V chip. This week they announced a new collaboration with Red Hat "to bring Red Hat Enterprise Linux support to the rapidly growing RISC-V community" and "prepare Red Hat's product portfolio for future intersection with RISC-V server hardware from a diverse set of RISC-V suppliers."

Red Hat Enterprise Linux 10 is available in developer preview on the SiFive HiFive Premier P550 platform, which they call "a proven, high performance RISC-V CPU development platform." Adding support for Red Hat Enterprise Linux 10, the latest version of the world's leading enterprise Linux platform, enables developers to create, optimize, and release new applications for the next generation of enterprise servers and cloud infrastructure on the RISC-V architecture...

SiFive's high performance RISC-V technology is already being used by large organizations to meet compute-intensive AI and machine learning workloads in the datacenter... "With the growing demand for RISC-V, we are pleased to collaborate with SiFive to support Red Hat Enterprise Linux 10 deployments on SiFive HiFive Premier P550," said Ronald Pacheco, senior director of RHEL product and ecosystem strategy, "to further empower developers with the power of the world's leading enterprise Linux platform wherever and however they choose to deploy...."

Dave Altavilla, principal analyst at HotTech Vision And Analysis, said "Native Red Hat Enterprise Linux support on SiFive's HiFive Premier P550 board offers developers a substantial enterprise-grade toolchain for RISC-V.

"This is a pivotal step forward in enabling a full-stack ecosystem around open RISC-V hardware."
SiFive says the move will "inspire the next generation of enterprise workloads and AI applications optimized for RISC-V," while helping their partners "deliver systems with a meaningfully lower total cost of ownership than incumbent platforms."

BetaNews notes that there's also a new AI-powered assistant in RHEL 10, so "Instead of spending all day searching for answers or poking through documentation, admins can simply ask questions directly from the command line and get real-time help... Security is front and center in this release, too. Red Hat is taking a proactive stance with early support for post-quantum cryptography. OpenSSL, GnuTLS, NSS, and OpenSSH now offer quantum-resistant options, setting the stage for better protection as threats evolve. There's a new sudo system role to help with privilege management, and OpenSSH has been bumped to version 9.9. Plus, with new Sequoia tools for OpenPGP, the door is open for even more robust encryption strategies. But it's not just about security and AI. Containers are now at the heart of RHEL 10 thanks to the new 'image mode.' With this feature, building and maintaining both the OS and your applications gets a lot more streamlined..."
AI

Qualcomm To Launch Data Center Processors That Link To Nvidia Chips 6

Qualcomm announced plans to re-enter the data center market with custom CPUs designed to integrate with Nvidia GPUs and software. As CNBC reports, the move supports Qualcomm's broader strategy to diversify beyond smartphones and into high-growth areas like data centers, PCs, and automotive chips. From the report: "I think we see a lot of growth happening in this space for decades to come, and we have some technology that can add real value added," Cristiano Amon, CEO of Qualcomm, told CNBC in an interview on Monday. "So I think we have a very disruptive CPU." Amon said the company will make an announcement about the CPU roadmap and the timing of its release "very soon," without offering specifics. The data center CPU market remains highly competitive. Big cloud computing players like Amazon and Microsoft already design and deploy their own custom CPUs. AMD and Intel also have a strong presence.

Addressing the competition, Amon said that there will be a place for Qualcomm in the data center CPU space. "As long as ... we can build a great product, we can bring innovation, and we can add value with some disruptive technology, there's going to be room for Qualcomm, especially in the data center," Amon said. "[It] is a very large addressable market that will see a lot of investment for decades to come." Last week, Qualcomm signed a memorandum of understanding with Saudi-based AI firm Humain to develop data centers, joining a slew of U.S. tech companies making deals in the region. Humain will operate under Saudi Arabia's Public Investment Fund.
AMD

Intel Struggles To Reverse AMD's Share Gains In x86 CPU Market (crn.com) 91

An anonymous reader shared this report from CRN: CPU-tracking firm Mercury Research reported on Thursday that Intel's x86 CPU market share grew 0.3 points sequentially to 75.6 percent against AMD's 24.4 percent in the first quarter. However, AMD managed to increase its market share by 3.6 points year over year. These figures only captured the server, laptop and desktop CPU segments. When including IoT and semicustom products, AMD grew its x86 market share sequentially by 1.5 points and year over year by 0.9 points to 27.1 percent against Intel's 72.9 percent... AMD managed to gain ground on Intel in the desktop and server segments sequentially and year over year. But it was in the laptop segment where Intel eked out a sequential share gain, even though rival AMD ended up finishing the first quarter with a higher share of shipments than what it had a year ago...

While AMD mostly came out on top in the first quarter, [Mercury Research President Dean] McCarron said ARM's estimated CPU share against x86 products crossed into the double digits for the first time, growing 2.3 points sequentially to 11.9 percent. This was mainly due to a "surge" of Nvidia's Grace CPUs for servers and a large increase of Arm CPU shipments for Chromebooks.

Meanwhile, PC Gamer reports that ARM's share of the PC processor market "grew to 13.6% in the first quarter of 2025 from 10.8% in the fourth quarter of 2024." And they note the still-only-rumors that an Arm-based chip from AMD will be available as soon as next year. [I]f one of the two big players in x86 does release a mainstream Arm chip for the PC, that will be very significant. If it comes at about the same time as Nvidia's rumoured Arm chip for the PC, well, momentum really will be building and questioning x86's dominance will be wholly justified.
Software

Carmack: World Could Run on Older Hardware if Software Optimization Was Priority 174

Gaming pioneer John Carmack believes we're not nearly as dependent on cutting-edge silicon as most assume -- we just lack the economic incentive to prove it. Responding to a "CPU apocalypse" thought experiment on X, the id Software founder and former Oculus CTO suggested that software inefficiency, not hardware limitations, is our greatest vulnerability. "More of the world than many might imagine could run on outdated hardware if software optimization was truly a priority," Carmack wrote, arguing that market pressures would drive dramatic efficiency improvements if new chips stopped arriving.

His solution? "Rebuild all the interpreted microservice based products into monolithic native codebases!" -- essentially abandoning modern development patterns for the more efficient approaches of earlier computing eras. The veteran programmer noted that such changes would come with significant tradeoffs: "Innovative new products would get much rarer without super cheap and scalable compute."
Hardware

Linux Drops Support For 486 and Early Pentium Processors (zdnet.com) 71

An anonymous reader quotes a report from ZDNet: RIP, 486 processor. You've had a long run since Intel released you back in 1989. While Microsoft stopped supporting you with the release of Windows XP in 2001, Linux kept you alive and well for another 20+ years. But all good things must come to an end, and with the forthcoming release of the Linux 6.15 kernel, the 486 and the first Pentium processors will be sunsetted.

Why? Linus Torvalds wrote recently on the Linux Kernel Mailing List (LKML), "I really get the feeling that it's time to leave i486 support behind. There's zero real reason for anybody to waste one second of development effort on this kind of issue." Senior Linux kernel developer Ingo Molnar put Torvalds' remark into context, writing, "In the x86 architecture, we have various complicated hardware emulation facilities on x86-32 to support ancient 32-bit CPUs that very very few people are using with modern kernels. This compatibility glue is sometimes even causing problems that people spend time to resolve, which time could be spent on other things."
"This will be the first time Linux has dropped support for a major chip family since 2012, when Linux stopped supporting the 386 family," notes ZDNet's Steven Vaughan-Nichols. "Moving forward, the minimum supported x86 CPU will now be the original Pentium (P5) or newer, requiring the presence of the Time Stamp Counter (TSC) and the CMPXCHG8B (CX8) instruction. These features are absent in the older 486 and early 586 processors, such as the IDT WinChip and AMD Elan families."

That said, you can continue running Linux on 486 and early Pentium CPUs, but you'll have to "run museum kernels," as Torvalds pointed out in 2022 when he first floated the idea of ending support for 486.
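The new baseline can be checked from userspace by looking for the `tsc` and `cx8` flags in `/proc/cpuinfo`. A minimal sketch (the helper function is hypothetical, written here against a sample flags line so it works anywhere):

```python
REQUIRED_FLAGS = {"tsc", "cx8"}  # Time Stamp Counter, CMPXCHG8B instruction


def supported_by_linux_6_15(cpuinfo_text: str) -> bool:
    """Return True if every 'flags' line in /proc/cpuinfo-style text
    advertises both features the new x86-32 minimum requires."""
    flag_lines = [line.split(":", 1)[1] for line in cpuinfo_text.splitlines()
                  if line.startswith("flags")]
    if not flag_lines:
        return False
    return all(REQUIRED_FLAGS <= set(line.split()) for line in flag_lines)


# A trimmed-down sample of a modern x86 'flags' line:
sample = "flags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep"
print(supported_by_linux_6_15(sample))  # True
```

On a real system you would pass in the contents of `/proc/cpuinfo`; a 486 or an IDT WinChip would be missing one or both flags and return False.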
Intel

Intel Says It's Rolling Out Laptop GPU Drivers With 10% To 25% Better Performance (arstechnica.com) 23

Ars Technica's Andrew Cunningham reports: Intel's oddball Core Ultra 200V laptop chips -- codenamed Lunar Lake -- will apparently be a one-off experiment, not to be replicated in future Intel laptop chips. They're Intel's only processors with memory integrated onto the CPU package; the only ones with a neural processing unit that meets Microsoft's Copilot+ performance requirements; and the only ones with Intel's best-performing integrated GPUs, the Intel Arc 130V and 140V.

Today, Intel announced some updates to its graphics driver that specifically benefit those integrated GPUs, welcome news for anyone who bought one and is trying to get by with it as an entry-level gaming system. Intel says that version 32.0.101.6734 of its graphics driver can speed up average frame rates in some games by around 10 percent, and can speed up "1 percent low FPS" (that is, the average frame rate during your slowest 1 percent of frames) by as much as 25 percent. This should, in theory, make games run better in general and ease some of the stuttering you notice when your game's performance dips down to that 1 percent level.
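The "1 percent low" metric can be made concrete with a short sketch. The function and sample numbers below are hypothetical and illustrate the common way the metric is computed from frame times, not Intel's exact methodology:

```python
def one_percent_low_fps(frame_times_ms):
    """Average FPS over the slowest 1% of frames: sort frame times
    from worst to best, keep the worst 1% (at least one frame), and
    convert their mean duration back to frames per second."""
    worst_first = sorted(frame_times_ms, reverse=True)
    n = max(1, len(worst_first) // 100)
    slowest = worst_first[:n]
    return 1000.0 / (sum(slowest) / len(slowest))


# 99 smooth frames at ~16.7 ms (about 60 FPS) plus one 50 ms stutter:
frames = [16.7] * 99 + [50.0]
print(round(1000.0 / (sum(frames) / len(frames)), 1))  # average FPS, barely dented
print(round(one_percent_low_fps(frames), 1))           # 1% low FPS, exposes the stutter
```

A single stutter barely moves the average but drags the 1% low down hard, which is why driver improvements to that number translate directly into smoother-feeling gameplay.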

Microsoft

Microsoft Confirms Classic Outlook CPU Usage Spikes, Offers No Fix (theregister.com) 58

Microsoft has acknowledged that Classic Outlook can mysteriously transform into a system resource hog, causing CPU usage spikes between 30-50% and significantly increasing power consumption on both Windows 10 and 11 systems.

Users first reported the issue in November 2024, but Microsoft only confirmed the problem this week, offering little resolution beyond stating that "the Outlook Team is investigating this issue." The company's sole workaround involves forcing a switch to the Semi-Annual Channel update through registry edits -- an approach many enterprise environments will likely avoid. Microsoft hasn't announced a definitive end date for Classic Outlook, but the company continues pushing users toward its New Outlook client despite its incomplete feature set.
AMD

New Supercomputing Record Set - Using AMD's Instinct GPUs (tomshardware.com) 23

"AMD processors were instrumental in achieving a new world record," reports Tom's Hardware, "during a recent Ansys Fluent computational fluid dynamics simulation run on the Frontier supercomputer at the Oak Ridge National Laboratory."

The article points out that Frontier was the fastest supercomputer in the world until it was beaten by Lawrence Livermore Lab's El Capitan — with both computers powered by AMD GPUs: According to a press release by Ansys, it ran a 2.2-billion-cell axial turbine simulation for Baker Hughes, an energy technology company, testing its next-generation gas turbines aimed at increasing efficiency. The simulation previously took 38.5 hours to complete on 3,700 CPU cores. By using 1,024 AMD Instinct MI250X accelerators paired with AMD EPYC CPUs in Frontier, the simulation time was slashed to 1.5 hours. This is more than 25 times faster, allowing the company to see the impact of the changes it makes on designs much more quickly...
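The reported speedup is easy to sanity-check from the runtimes in the press release:

```python
cpu_hours = 38.5   # runtime on 3,700 CPU cores
gpu_hours = 1.5    # runtime on 1,024 MI250X accelerators plus EPYC CPUs

speedup = cpu_hours / gpu_hours
print(f"{speedup:.1f}x faster")  # just over 25x, matching the article's claim
```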

Given those numbers, the Ansys Fluent CFD simulator apparently only used a fraction of the power available on Frontier. That means it has the potential to run even faster if it can utilize all the available accelerators on the supercomputer. It also shows that, despite Nvidia's market dominance in AI GPUs, AMD remains a formidable competitor, with its CPUs and GPUs serving as the brains of some of the fastest supercomputers on Earth.

Operating Systems

Coreboot 25.03 Released With Support For 22 More Motherboards (phoronix.com) 26

Coreboot 25.03 has been released with support for 22 new motherboards and several other significant updates, including enhanced display handling, USB debugging, RISC-V support, and RAM initialization for older Intel platforms. Phoronix reports: Coreboot 25.03 delivers display handling improvements, a better USB debugging experience, CPU topology updates, various improvements to the open-source RAM initialization for aging Intel Haswell platforms, improved USB Type-C and Thunderbolt handling, various embedded controller (EC) improvements, better RISC-V architecture support, DDR5-7500 support, and many bug fixes across the sprawling Coreboot codebase. More details, including a full list of the supported boards, can be found here.
Power

Open-Source Tool Designed To Throttle PC and Server Performance Based On Electricity Pricing (tomshardware.com) 56

Robotics and machine learning engineer Naveen Kul developed WattWise, a lightweight open-source CLI tool that monitors power usage via smart plugs and throttles system performance based on electricity pricing and peak hours. Tom's Hardware reports: The simple program, called WattWise, came about when Naveen built a dual-socket EPYC workstation with plans to add four GPUs. It's a power-intensive setup, so he wanted a way to monitor its power consumption using a Kasa smart plug. The enthusiast has released the monitoring portion of the project to the public now, but the portion that manages clocks and power will be released later. Unfortunately, the Kasa Smart app and the Home Assistant dashboard were inconvenient and couldn't do everything he desired. He already had a terminal window running monitoring tools like htop, nvtop, and nload, and decided to take matters into his own hands rather than dealing with yet another app.

Naveen built a terminal-based UI that shows power consumption data through Home Assistant and the TP-Link integration. The app monitors real-time power use, showing wattage and current, as well as providing historical consumption charts. More importantly, it is designed to automatically throttle CPU and GPU performance. Naveen's power provider uses Time-of-Use (ToU) pricing, so using a lot of power during peak hours can cost significantly more. The workstation can draw as much as 1400 watts at full load, but by reducing the CPU frequency from 3.7 GHz to 1.5 GHz, he's able to reduce consumption by about 225 watts. (No mention is made of GPU throttling, which could potentially allow for even higher power savings with a quad-GPU setup.)

Results will vary based on the hardware being used, naturally, and servers can pull far more power than a typical desktop -- even one designed and used for gaming. WattWise optimizes the system's clock speed based on the current system load, power consumption as reported by the smart plug, and the time -- with the latter factoring in peak pricing. From there, it uses a Proportional-Integral (PI) controller to manage the power and adapts system parameters based on the three variables.
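The PI control loop described above can be sketched as follows. This is a hedged illustration of the idea, not WattWise's actual code: the class name, gains, and frequency limits are all assumptions, though the 3.7 GHz ceiling and 1.5 GHz floor match the article's numbers:

```python
class PIPowerController:
    """Proportional-Integral controller: nudges a CPU frequency
    setpoint (GHz) so that measured wall power converges on a
    target budget. Gains here are illustrative only."""

    def __init__(self, target_watts, kp=0.002, ki=0.0005,
                 f_min=1.5, f_max=3.7):
        self.target = target_watts
        self.kp, self.ki = kp, ki
        self.f_min, self.f_max = f_min, f_max
        self.integral = 0.0

    def update(self, measured_watts, current_freq_ghz):
        # Positive error means we're over budget and should slow down.
        error = measured_watts - self.target
        self.integral += error
        new_freq = current_freq_ghz - (self.kp * error + self.ki * self.integral)
        # Clamp to the hardware's supported frequency range.
        return min(self.f_max, max(self.f_min, new_freq))


ctrl = PIPowerController(target_watts=1000)
freq = 3.7
for _ in range(10):  # pretend the smart plug keeps reading 1400 W
    freq = ctrl.update(measured_watts=1400, current_freq_ghz=freq)
print(f"{freq:.2f} GHz")  # driven down to the 1.5 GHz floor
```

During peak-pricing hours the target would simply be lowered, and the same loop would throttle harder; a real implementation would also have to apply the setpoint via cpufreq and guard against integral windup.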
A blog post with more information is available here.

WattWise is also available on GitHub.
Cloud

Microsoft Announces 'Hyperlight Wasm': Speedy VM-Based Security at Scale with a WebAssembly Runtime (microsoft.com) 18

Cloud providers like the security of running things in virtual machines "at scale" — even though VMs "are not known for having fast cold starts or a small footprint..." noted Microsoft's Open Source blog last November. So Microsoft's Azure Core Upstream team built an open source Rust library called Hyperlight "to execute functions as fast as possible while isolating those functions within a VM."

But that was just the beginning... Then, we showed how to run Rust functions really, really fast, followed by using C to [securely] run JavaScript. In February 2025, the Cloud Native Computing Foundation (CNCF) voted to onboard Hyperlight into their Sandbox program [for early-stage projects].

[This week] we're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine "micro-guest" that can run wasm component workloads written in many programming languages...

Traditional virtual machines do a lot of work to be able to run programs. Not only do they have to load an entire operating system, they also boot up the virtual devices that the operating system depends on. Hyperlight is fast because it doesn't do that work; all it exposes to its VM guests is a linear slice of memory and a CPU. No virtual devices. No operating system. But this speed comes at the cost of compatibility. Chances are that your current production application expects a Linux operating system running on the x86-64 architecture (hardware), not a bare linear slice of memory...

[B]uilding Hyperlight with a WebAssembly runtime — wasmtime — enables any programming language to execute in a protected Hyperlight micro-VM without any prior knowledge of Hyperlight at all. As far as program authors are concerned, they're just compiling for the wasm32-wasip2 target... Executing workloads in the Hyperlight Wasm guest isn't just possible for compiled languages like C, Go, and Rust, but also for interpreted languages like Python, JavaScript, and C#. The trick here, much like with containers, is to also include a language runtime as part of the image... Programming languages, runtimes, application platforms, and cloud providers are all starting to offer rich experiences for WebAssembly out of the box. If we do things right, you will never need to think about whether your application is running inside of a Hyperlight Micro-VM in Azure. You may never know your workload is executing in a Hyperlight Micro VM. And that's a good thing.

While a traditional virtual-device-based VM takes about 125 milliseconds to load, "When the Hyperlight VMM creates a new VM, all it needs to do is create a new slice of memory and load the VM guest, which in turn loads the wasm workload. This takes about 1-2 milliseconds today, and work is happening to bring that number to be less than 1 millisecond in the future."

And there's also double security due to Wasmtime's software-defined runtime sandbox within Hyperlight's larger VM...
