Hardware

PC CPU Shipments See Steepest Decline In 30 Years (tomshardware.com) 128

According to a new report by Dean McCarron of Mercury Research, the x86 processor market has just endured "the largest on-quarter and on-year declines in our 30-year history." "Based on previously published third-party data, McCarron is also reasonably sure that the 2022 Q4 and full-year numbers represent the worst downturn in PC processor history," adds Tom's Hardware. From the report: The observed x86 processor downturn has been precipitated by the terrible twosome of lower demand and an inventory correction. This menacing pincer movement has resulted in 2022 unit shipments of 374 million processors (excluding ARM), a figure 21% lower than in 2021. Revenues were $65 billion, down 19 percent YoY. McCarron was keen to emphasize that Mercury's gloomy stats about x86 shipments through 2022 do not necessarily directly correlate with x86 PC (processor) shipments to end users. Earlier, we mentioned that the two downward driving forces were inventory adjustments and a slowing of sales -- but which played the greater part in this record x86 slump?

The Mercury Research analyst explained, "Most of the downturn in shipments is blamed on excess inventory shipping in prior quarters impacting current sales." A perfect storm is thus brewing as "CPU suppliers are also deliberately limiting shipments to help increase the rate of inventory consumption... [and] PC demand for processors is lower, and weakening macroeconomic concerns are driving PC OEMs to reduce their inventory as well." Mercury also asserted that the trend is likely to continue through H1 2023. Its thoughts about the underlying inventory shenanigans should also be evidenced by upcoming financials from the major players in the next few months. [...]

McCarron shines a glimmer of light in the wake of this gloom, reminding us that overall processor revenue was still higher in 2022 than in any year before the 2020s began. Another ray of light shone on AMD, with its gains in server CPU share, one of the few segments that saw growth in Q4 2022. AMD also gained market share in the shrinking desktop and laptop markets.

Red Hat Software

Red Hat Gives an ARM Up To OpenShift Kubernetes Operations (venturebeat.com) 13

An anonymous reader quotes a report from VentureBeat: Red Hat is perhaps best known as a Linux operating system vendor, but it is the company's OpenShift platform that represents its fastest growing segment. Today, Red Hat announced the general availability of OpenShift 4.12, bringing a series of new capabilities to the company's hybrid cloud application delivery platform. OpenShift is based on the open source Kubernetes container orchestration system, originally developed by Google, that has been run as the flagship project of the Linux Foundation's Cloud Native Computing Foundation (CNCF) since 2014. [...] With the new release, Red Hat is integrating new capabilities to help improve security and compliance for OpenShift, as well as new deployment options on ARM-based architectures. The OpenShift 4.12 release comes as Red Hat continues to expand its footprint, announcing partnerships with Oracle and SAP this week.

The financial importance of OpenShift to Red Hat and its parent company IBM has also been revealed, with IBM reporting in its earnings that OpenShift is a $1 billion business. "Open-source solutions solve major business problems every day, and OpenShift is just another example of how Red Hat brings business and open source together for the benefit of all involved," Mike Barrett, VP of product management at Red Hat, told VentureBeat. "We're very proud of what we have accomplished thus far, but we're not resting at $1B." [...]

OpenShift, like many applications developed in the last several decades, originally was built just for the x86 architecture that runs on CPUs from Intel and AMD. That situation is increasingly changing as OpenShift is gaining more support to run on the ARM processor with the OpenShift 4.12 update. Barrett noted that Red Hat OpenShift announced support for the AWS Graviton ARM architecture in 2022. He added that OpenShift 4.12 expands that offering to Microsoft Azure ARM instances. "We find customers with a significant core consumption rate for a singular computational deliverable are gravitating toward ARM first," Barrett said.

Overall, Red Hat is looking to expand the footprint of where its technologies are able to run, which now also includes new cloud providers. On Jan. 31, Red Hat announced that for the first time, Red Hat Enterprise Linux (RHEL) would be available as a supported platform on Oracle Cloud Infrastructure (OCI). While RHEL is now coming to OCI, OpenShift isn't -- at least not yet. "Right now, it's just RHEL available on OCI," Mike Evans, vice president of technical business development at Red Hat, told VentureBeat. "We're evaluating what other Red Hat technologies, including OpenShift, may come to Oracle Cloud Infrastructure, but this will ultimately be driven by what our joint customers want."

Wine

Wine 8.0 Released — and Plenty of Improvements are Included (omgubuntu.co.uk) 59

An anonymous reader shares this report from OMG! Ubuntu: Developers have just uncorked a brand new release of Wine, the open source compatibility layer that allows Windows apps to run on Linux.

A substantial update, Wine 8.0 is fermented from a year's worth of active development (roughly 8,600 changes in total). From that, a wealth of improvements are provided across every part of the Wine experience, from app compatibility, through to performance, and a nicer looking UI....

Notable highlights in Wine 8.0 include the completion of PE conversion, meaning all modules can be built in PE format. Wine devs say this work is an important milestone towards supporting "copy protection, 32-bit applications on 64-bit hosts, Windows debuggers, x86 applications on ARM", and more.
Intel

Intel Unveils Core i9-13900KS (anandtech.com) 37

First teased by CEO Pat Gelsinger during Intel's Innovation 2022 opening keynote, the highly anticipated 6 GHz out-of-the-box processor, the Core i9-13900KS, has now been unveiled. The Core i9-13900KS has 24 cores (8P+16E) within its hybrid architecture design of performance and efficiency cores, with the same fundamental specifications as the Core i9-13900K but an impressive P-core turbo of up to 6 GHz. From a report: Based on Intel's Raptor Lake-S desktop series, Intel claims that the Core i9-13900KS is the first desktop processor to reach 6 GHz out of the box without overclocking. Available from today, the Core i9-13900KS has a slightly higher base TDP of 150 W (versus 125 W on the 13900K) and 36 MB of Intel's L3 smart cache, and is pre-binned through a selection process that qualifies it for its special-edition status: 6 GHz in a desktop chip out of the box, without the need to overclock manually.

The Core i9-13900KS has been a long-awaited entrant to Intel's Raptor Lake-S for desktop series, with previous reports from Intel during their Innovation 2022 keynote that a 6 GHz out-of-the-box processor was on the horizon for this year. As Intel highlights, the Core i9-13900KS represents a significant milestone for desktop PCs, with its 6 GHz out-of-the-box P-Core turbo frequency. This makes it one of the fastest desktop x86 processors, at least from the perspective that users don't need to overclock anything to attain these ridiculous core frequencies. From Intel's sneak peek video on YouTube published on Jan 10th, the Core i9-13900KS looks to have reached 6 GHz on two of the eight performance (P) cores, with a clock speed of up to 5.6 GHz on the remaining six cores, which is very impressive.

Exactly what adjustments Intel made to power limits to achieve these frequencies is somewhat hazy. Intel hasn't specified whether the Core i9-13900KS is a specially binned part, but previous KS launches have been, and it's expected that this one is too. Reports of Core i9-13900K chips being overclocked to 6 GHz at ambient temperatures are few and far between, with only the best examples and those with very aggressive and premium ambient cooling solutions capable of this. [...] The Intel Core i9-13900KS is available to buy now at most retailers, with an MSRP of $699. This is $40 cheaper than the previous Core i9-12900KS ($739) that launched last year. Based on current MSRP pricing, the Core i9-13900KS is $110 more than the current Core i9-13900K.

Google

Google Wants RISC-V To Be a 'Tier-1' Android Architecture (arstechnica.com) 61

An anonymous reader quotes a report from Ars Technica: Google's keynote at the RISC-V Summit was all about bold proclamations [...]. Lars Bergstrom, Android's director of engineering, wants RISC-V to be seen as a "tier-1 platform" in Android, which would put it on par with Arm. That's a big change from just six months ago. Bergstrom says getting optimized Android builds on RISC-V will take "a lot of work" and outlined a roadmap that will take "a few years" to come to fruition, but AOSP started to land official RISC-V patches back in September. The build system is up and running, and anyone can grab the latest "riscv64" branch whenever they want -- and yes, in line with its recent Arm work, Google wants RISC-V on Android to be 64-bit only. For now, the most you can get is a command line, and Bergstrom's slide promised "initial emulator support by the start of 2023, with Android RunTime (ART) support for Java workloads following during Q1."

One of Bergstrom's slides featured the above "to-do" list, which included a ton of major Android components. Unlike Android's unpolished support for x86, Bergstrom promised a real push for quality with RISC-V, saying, "We need to do all of the work to move from a prototype and something that runs to something that's really singing -- that's showing off the best-in-class processors that [RISC-V International Chairman Krste Asanovic] was mentioning in the previous talk." Once Google does get Android up and running on RISC-V, then it will be up to manufacturers and the app ecosystem to back the platform. What's fun about the Android RunTime is that when ART supports RISC-V, a big chunk of the Android app ecosystem will come with it. Android apps ship as Java code, and the way an app becomes an ARM app is that the Android Runtime compiles it into ARM code. On RISC-V devices, ART will instead compile it into RISC-V code with no extra work from the developer. Native code that isn't written in Java, like games and component libraries, will need to be ported over, but starting with Java code is a big jump-start.

In her opening remarks, RISC-V International (the nonprofit company that owns the architecture) CEO Calista Redmond argued that "RISC-V is inevitable" thanks to the open business model and wave of open chip design that it can create, and it's getting hard to argue against that. While the show was mostly about the advantages of RISC-V, I want to add that the biggest reason RISC-V seems inevitable is that current CPU front-runner Arm has become an unstable, volatile company, and it feels like any viable alternative would have a good shot at success right now. [...] The other reason to kick Arm to the curb is the US-China trade war, specifically that Chinese companies (and the Chinese government) would really like to distance themselves from Western technology. [...] RISC-V is seen as a way to be less reliant on the West. While the project started at UC Berkeley, RISC-V International says the open source architecture is not subject to US export law. In 2019, the RISC-V Foundation actually moved from the US to Switzerland and became "RISC-V International," all to try to avoid picking a side in the US-China trade war. The result is that Chinese tech companies are rallying around RISC-V as the future chip architecture. One Chinese company hit by US export restrictions, the e-commerce giant Alibaba, has been the leading force in bringing RISC-V support to Android, and with Chinese companies playing a huge part in the Android ecosystem, it makes sense that Google would throw open the doors for official support. Now we just need someone to build a phone.

Unix

OSnews Decries 'The Mass Extinction of Unix Workstations' (osnews.com) 284

Anyone remember the high-end commercial UNIX workstations from a few decades ago — from companies like IBM, DEC, SGI, and Sun Microsystems?

Today OSnews looked back — but also explored what happens when you try to buy one today: As x86 became ever more powerful and versatile, and with the rise of Linux as a capable UNIX replacement and the adoption of the NT-based versions of Windows, the days of the UNIX workstations were numbered. A few years into the new millennium, virtually all traditional UNIX vendors had ended production of their workstations and in some cases even their associated architectures, with a lacklustre collective effort to move over to Intel's Itanium — which didn't exactly go anywhere and is now nothing more than a sour footnote in computing history.

Approaching roughly 2010, all the UNIX workstations had disappeared.... and by now, they're all pretty much dead (save for Solaris). Users and industries moved on to x86 on the hardware side, and Linux, Windows, and in some cases, Mac OS X on the software side.... Over the past few years, I have come to learn that if you want to get into buying, using, and learning from UNIX workstations today, you'll run into various problems which can roughly be filed into three main categories: hardware availability, operating system availability, and third party software availability.

Their article details their own attempts to buy one over the years, ultimately concluding the experience "left me bitter and frustrated that so much knowledge — in the form of documentation, software, tutorials, drivers, and so on — is disappearing before our very eyes." Shortsightedness and disinterest in their own heritage by corporations, big and small, is destroying entire swaths of software, and as more years pass by, it will get ever harder to get any of these things back up and running.... As for all the third-party software — well, I'm afraid it's too late for that already. Chasing down the rightsholders is already an incredibly difficult task, and even if you do find them, they are probably not interested in helping you, and even if by some miracle they are, they most likely no longer even have the ability to generate the required licenses or release versions with the licensing ripped out. Stuff like Pro/ENGINEER and SoftWindows for UNIX are most likely gone forever....

Software is dying off at an alarming rate, and I fear there's no turning the tide of this mass extinction.

The article also wonders why companies like HPE don't just "dump some ISO files" onto an FTP server, along with patch depots and documentation. "This stuff has no commercial value, they're not losing any sales, and it will barely affect their bottom line."
Intel

Head of Intel Foundry Services Resigns Just As Chip Biz Gets Going (theregister.com) 26

The head of Intel's revitalized contract chip manufacturing business plans to step down, The Register has learned, creating a setback for the x86 behemoth's big bet to take on Asian foundry giants TSMC and Samsung as part of its comeback plan. From the report: Randhir Thakur, senior vice president and president of Intel Foundry Services, "has decided to pursue other opportunities" but will continue to lead the business unit through the first quarter of 2023 to "ensure a smooth transition to a new leader," Intel CEO Pat Gelsinger said in an email to employees Monday that was seen by el Reg. Intel spokesperson William Moss confirmed the news with us. "We're grateful to Randhir for the tremendous progress IFS has made and for laying the foundation for Intel to become a world-class systems foundry," Moss said in a statement. "We wish him all the best in his new endeavors."

In his email, Gelsinger said he will share more information soon "about the new leader" for Intel Foundry Services, suggesting the company may have a successor in place -- or is at least close to having one. "Randhir has been a key member of the Executive Leadership Team for the past two and a half years and has served in several senior leadership roles since he joined us in 2017," Gelsinger wrote. "... His contributions to our [Integrated Device Manufacturing] 2.0 transformation are many, but most notable is his leadership in standing up our IFS business."

Intel revitalized its contract chip manufacturing business in early 2021 and renamed it Intel Foundry Services with the goal of competing with TSMC and Samsung, the world's two largest contract chip manufacturers that make chips for the likes of Intel rivals, including AMD, Nvidia, and Apple. In his email to employees, Gelsinger credited Thakur for establishing a "seasoned leadership team with veterans from leading foundries" like TSMC and Samsung. He added that the Intel Foundry Services leader also "secured major customer wins in the mobile and auto segments" and helped the company win the US government's RAMP-C award along with four customers for chip designs on its 18A node. "Since Q2, IFS has expanded engagements to seven of the 10 largest foundry customers coupled with consistent pipeline growth to include 35 customer test chips," Gelsinger said. "This is tremendous progress in only 20 months!"
Intel has a pending $5.4 billion acquisition of Israeli chip manufacturer Tower Semiconductor, notes The Register. "Analysts responding to the news of Thakur's resignation said the move is likely happening because Intel plans to put Tower Semiconductor's management in charge of Intel Foundry Services." The deal is expected to close in the first quarter of 2023.
Intel

Intel Takes on AMD and Nvidia With Mad 'Max' Chips For HPC (theregister.com) 26

Intel's latest plan to ward off rivals from high-performance computing workloads involves a CPU with large stacks of high-bandwidth memory and new kinds of accelerators, plus its long-awaited datacenter GPU that will go head-to-head against Nvidia's most powerful chips. From a report: After multiple delays, the x86 giant on Wednesday formally introduced the new Xeon CPU family formerly known as Sapphire Rapids HBM and its new datacenter GPU better known as Ponte Vecchio. Now you will know them as the Intel Xeon CPU Max Series and the Intel Data Center GPU Max Series, respectively, which were among the bevy of details shared by Intel today, including performance comparisons. These chips, set to arrive in early 2023 alongside the vanilla 4th generation Xeon Scalable CPUs, have been a source of curiosity within the HPC community for years because they will power the US Department of Energy's long-delayed Aurora supercomputer, which is expected to become the country's second exascale supercomputer and, consequently, one of the world's fastest.

In a briefing with journalists, Jeff McVeigh, the head of Intel's Super Compute Group, said the Max name represents the company's desire to maximize the bandwidth, compute and other capabilities for a wide range of HPC applications, whose primary users include governments, research labs, and corporations. McVeigh did admit that Intel has fumbled in how long it took the company to commercialize these chips, but he tried to spin the blunders into a higher purpose. "We're always going to be pushing the envelope. Sometimes that causes us to maybe not achieve it, but we're doing that in service of helping our developers, helping the ecosystem to help solve [the world's] biggest challenges," he said. [...] The Xeon Max Series will pack up to 56 performance cores, which are based on the same Golden Cove microarchitecture features as Intel's 12th-Gen Core CPUs, which debuted last year. Like the vanilla Sapphire Rapids chips coming next year, these chips will support DDR5, PCIe 5.0 and Compute Express Link (CXL) 1.1, which will enable memory to be directly attached to the CPU over PCIe 5.0.

Intel

The Linux Kernel May Finally Phase Out Intel i486 CPU Support (phoronix.com) 154

"Linus Torvalds has backed the idea of possibly removing Intel 486 (i486) processor support from the Linux kernel," reports Phoronix: After the Linux kernel dropped i386 support a decade ago, i486 has been the minimum x86 processor support for the mainline Linux kernel. This latest attempt to kill off i486 support ultimately arose from Linus Torvalds himself with expressing the idea of possibly requiring x86 32-bit CPUs with "cmpxchg8b" support, which would mean Pentium CPUs and later:

Maybe we should just bite the bullet, and say that we only support x86-32 with 'cmpxchg8b' (ie Pentium and later).

Get rid of all the "emulate 64-bit atomics with cli/sti, knowing that nobody has SMP on those CPU's anyway", and implement a generic x86-32 xchg() setup using that try_cmpxchg64 loop.

I think most (all?) distros already enable X86_PAE anyway, which makes that X86_CMPXCHG64 be part of the base requirement.

Not that I'm convinced most distros even do 32-bit development anyway these days.... We got rid of i386 support back in 2012. Maybe it's time to get rid of i486 support in 2022?

Towards the end of his post, Torvalds makes the following observation about i486 systems. "At some point, people have them as museum pieces. They might as well run museum kernels."
Intel

Intel Sued Over Historic DEC Chip Site's Future (theregister.com) 43

Intel is being taken to court in Massachusetts over its proposals to build a distribution and logistics warehouse on the site of its defunct R&D offices and chip factory that closed in 2013. The Register reports: At the heart of this showdown are claims by townsfolk that Intel has not revealed to the surrounding community what exactly it intends to build, and that the land is supposed to be used for industry and manufacturing yet it appears a huge commercial warehouse will be built instead. The x86 giant has spent years trying to figure out what to do with the campus -- whether to salvage it for production or research, or to sell it to a developer. It came close to securing a buyer earlier this year.

The site in question is at 75 Reed Road in Hudson, Massachusetts, which holds a special place in computer history. It was the home of Digital Equipment Corporation's R&D and chip manufacturing before Intel took over the land and facility following a patent battle with DEC in 1997. Intel continued R&D at the site and kept it producing chips until it threw the towel in, leaving the location open to options. Ultimately, the site was up for sale with Intel planning to demolish the 40-year-old main buildings while offloading the land. However, the chipmaker, perhaps in response to a revitalization of American semiconductor manufacturing funded by CHIPS Act government subsidies, decided it wants to remake the property into a distribution and logistics and storage facility -- something that might sound innocuous but has the nearby community up in arms.

Further, Intel doesn't have to use the redeveloped site for its own purposes at all: it can, and probably will, market the facility to a future tenant. And it can breeze through planning law requirements without having to reveal the full scope of traffic, pollution, and other impacts due to its status as a "logistics" facility. And that is what really has the locals enraged. Crucially, the site is adjacent to two retirement villages with 286 units and a childcare center. As a former R&D and manufacturing facility, neighboring communities understood the scope of traffic and resource impacts of such a factory. [...] The even bigger problem is that this represents another example of a large tech company wheedling its way through local restrictions to build community-damning facilities, said Michael Pill, the lawyer representing both retirement condo facilities and the childcare center in their legal challenge [PDF] to Intel.
"What Intel has done here is something deeply unpleasant that grows out of its desire to dump the property without any thought to the community where they were once an important pillar of manufacturing," Pill told The Register. "There is a pattern of development in which big companies come sailing into towns, saying they'll build million-plus square foot facilities with hundreds of loading docks and all the planning is done on spec."

In response to the lawsuit, Intel's lawyers said in a filing that the proposed changes are subject to approval by the town: "Because the proposed redevelopment is a permitted use in the zoning district, the project will require site plan review from the town of Hudson planning board."
Software

VirtualBox 7.0 Adds First ARM Mac Client, Full Encryption, Windows 11 TPM (arstechnica.com) 19

Nearly four years after its last major release, VirtualBox 7.0 arrives with a... host of new features. Chief among them are Windows 11 support via TPM, EFI Secure Boot support, full encryption for virtual machines, and a few Linux niceties. From a report: The big news is support for Secure Boot and TPM 1.2 and 2.0, which makes it easier to install Windows 11 without registry hacks (the kind Oracle recommended for 6.1 users). It's strange to think about people unable to satisfy Windows 11's security requirements on their physical hardware, but doing so with a couple of clicks in VirtualBox, but here we are. VirtualBox 7.0 also allows virtual machines to run with full encryption, covering not just the guest OS but also the logs, saved states, and other files connected to the VM. At the moment, this support only works through the command line, "for now," Oracle notes in the changelog.

This is the first official VirtualBox release with a Developer Preview for ARM-based Macs. Having loaded it on an M2 MacBook Air, I can report that the VirtualBox client informs you, extensively and consistently, about the non-production nature of your client. The changelog notes that it's an "unsupported work in progress" that is "known to have very modest performance." A "Beta Warning" shows up in the (new and unified) message center, and in the upper-right corner, a "BETA" warning on the window frame is stacked on top of a construction-style "Dev Preview" warning sign. It's still true that ARM-based Macs don't allow for running operating systems written for Intel or AMD-based processors inside virtual machines. You will, however, be able to run ARM-based Linux installations in macOS Ventura that can themselves run x86 software using Rosetta, Apple's own translation layer.

China

China May Prove Arm Wrong About RISC-V's Role In the Datacenter (theregister.com) 49

Arm might not think RISC-V is a threat to its newfound foothold in the datacenter, but growing pressure on Chinese chipmaking could ultimately change that, Forrester Research analyst Glenn O'Donnell tells The Register. From the report: Over the past few years the US has piled on export bans and trade restrictions on Chinese chipmakers in an effort to stall the country's semiconductor industry. This has included barring companies with ties to the Chinese military from purchasing x86 processors and AI kit from the likes of Intel, AMD, and Nvidia. "Because the US-China trade war restricts x86 sales to China, Chinese infrastructure vendors and cloud providers need to adapt to remain in business," O'Donnell said. "They initially pivoted to Arm, but trade restrictions exist there too. Chinese players are showing great interest in RISC-V."

RISC-V provides China with a shortcut around the laborious prospect of developing its own architecture. "Coming up with a whole new architecture is nearly impossible," O'Donnell said. But "a design based on some architecture is very different from the architecture itself." So it should come as no surprise that the majority of RISC-V members are based in China, according to a report published last year. And the country's government-backed Chinese Academy of Sciences is actively developing open source RISC-V performance processors.

Alibaba's T-Head, which is already deploying Arm server processors and smartNICs, is also exploring RISC-V-based platforms. But for now, they're largely limited to edge and IoT appliances. However, O'Donnell emphasizes that there is no technical reason that would prevent someone from developing a server-grade RISC-V chip. "Similar to Arm, many people dismiss RISC-V as underpowered for more demanding applications. They are wrong. Both are architectures, not specific designs. As such, one can design a powerful processor based on either architecture," he said. [...] One of the most attractive things about RISC-V over Softbank-owned Arm is the relatively low cost of building chips based on the tech, especially for highly commoditized use cases like embedded processors, O'Donnell explained. While nowhere as glamorous as something like a server CPU, embedded applications are one of RISC-V's first avenues into the datacenter. [...] These embedded applications are where O'Donnell expects RISC-V will see widespread adoption, including in the datacenter. Whether the open source ISA will rise to the level of Arm or x86 is another matter entirely.

Open Source

Linux 6.0 Arrives With Support For Newer Chips, Core Fixes, and Oddities (arstechnica.com) 26

An anonymous reader quotes a report from Ars Technica: A stable version of Linux 6.0 is out, with 15,000 non-merge commits and a notable version number for the kernel. And while major Linux releases only happen when the prior number's dot numbers start looking too big ("there is literally no other reason"), there are a lot of notable things rolled into this release besides a marking in time. Most notable among them could be a patch that prevents a nearly two-decade slowdown for AMD chips, based on workaround code for power management in the early 2000s that hung around for far too long. [...]

Intel's new Arc GPUs are supported in their discrete laptop form in 6.0 (though still experimental). Linux blog Phoronix notes that Intel's ARC GPUs all seem to run on open source upstream drivers, so support should show up for future Intel cards and chipsets as they arrive on the market. Linux 6.0 includes several hardware drivers of note: fourth-generation Intel Xeon server chips, the not-quite-out 13th-generation Raptor Lake and Meteor Lake chips, AMD's RDNA 3 GPUs, Threadripper CPUs, EPYC systems, and audio drivers for a number of newer AMD systems. One small, quirky addition points to larger things happening inside Linux. Lenovo's ThinkPad X13s, based on an ARM-powered Qualcomm Snapdragon chip, get some early support in 6.0. ARM support is something Linux founder Linus Torvalds is eager to see [...].

Among other changes you can find in Linux 6.0, as compiled by LWN.net (in part one and part two):
- ACPI and power management improvements for Sapphire Rapids CPUs
- Support for SMB3 file transfer inside Samba, while SMB1 is further deprecated
- More work on RISC-V, OpenRISC, and LoongArch technologies
- Intel Habana Labs Gaudi2 support, allowing hardware acceleration for machine-learning libraries
- A "guest vCPU stall detector" that can tell a host when a virtual client is frozen
Ars' Kevin Purdy notes that in 2022, "there are patches in Linux 6.0 to help Atari's Falcon computers from the early 1990s (or their emulated descendants) better handle VGA modes, color, and other issues."

Not included in this release are Rust improvements, but they "are likely coming in the next point release, 6.1," writes Purdy.
AMD

A 20 Year Old Chipset Workaround Has Been Hurting Modern AMD Linux Systems (phoronix.com) 53

AMD engineer K Prateek Nayak recently uncovered that a 20 year old chipset workaround in the Linux kernel still being applied to modern AMD systems is responsible in some cases for hurting performance on modern Zen hardware. Fortunately, a fix is on the way for limiting that workaround to old systems and in turn helping with performance for modern systems. Phoronix reports: Last week was a patch posted for the ACPI processor idle code to avoid an old chipset workaround on modern AMD Zen systems. Since ACPI support was added to the Linux kernel in 2002, there has been a "dummy wait op" to deal with some chipsets where STPCLK# doesn't get asserted in time. The dummy I/O read delays further instruction processing until the CPU is fully stopped. This was a problem with at least some AMD Athlon era systems with a VIA chipset... But not a problem with newer chipsets of roughly the past two decades.

With this workaround still being applied to even modern AMD systems, K Prateek Nayak discovered: "Sampling certain workloads with IBS on AMD Zen3 system shows that a significant amount of time is spent in the dummy op, which incorrectly gets accounted as C-State residency. A large C-State residency value can prime the cpuidle governor to recommend a deeper C-State during the subsequent idle instances, starting a vicious cycle, leading to performance degradation on workloads that rapidly switch between busy and idle phases. One such workload is tbench where a massive performance degradation can be observed during certain runs."

At least for tbench, this long-standing, unconditional workaround in the Linux kernel has been hurting AMD Ryzen / Threadripper / EPYC performance in select workloads. The workaround hasn't affected modern Intel systems, since those newer Intel platforms use the alternative MWAIT-based intel_idle driver code path instead. The AMD patch evolved into this patch by Intel Linux engineer Dave Hansen. That patch, which limits the "dummy wait" workaround to old systems, is already queued in TIP's x86/urgent branch. Since it is going the "x86/urgent" route and fixes an overzealous workaround that isn't needed on modern hardware, it's likely the patch will still be submitted this week for the Linux 6.0 kernel rather than waiting for the next (v6.1) merge window.
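The essence of the fix is that the dummy wait stops being unconditional and is instead gated on CPU identity. Here is a minimal sketch of that decision logic, written in Python purely for illustration -- the vendor strings are the real x86 CPUID vendor strings, but the helper name and the exact family cutoff are assumptions, not the literal kernel patch:

```python
def needs_dummy_wait(vendor: str, family: int) -> bool:
    """Decide whether the old 'dummy wait' I/O read is still needed.

    The workaround was only ever required on roughly pre-2002 chipsets
    (e.g. some Athlon-era systems with VIA chipsets where STPCLK#
    could be asserted late).
    """
    if vendor == "GenuineIntel":
        # Modern Intel platforms take the MWAIT-based intel_idle
        # driver path instead, so the ACPI dummy wait never applies.
        return False
    if vendor == "AuthenticAMD":
        # Skip the workaround on Zen (family 0x17) and newer; the
        # exact cutoff here is an assumption for illustration.
        return family < 0x17
    # Stay conservative on unrecognized platforms.
    return True

# Zen 3 (family 0x19): the dummy wait is now skipped.
print(needs_dummy_wait("AuthenticAMD", 0x19))
```

The point of gating rather than deleting is that any genuinely ancient system still running a current kernel keeps the behavior it needs, while modern Zen parts stop paying for it.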

AMD

AMD Launches Zen 4 Ryzen 7000 CPUs (tomshardware.com) 156

AMD unveiled its 5nm Ryzen 7000 lineup today, outlining the details of four new models that span from the 16-core $699 Ryzen 9 7950X flagship, which AMD claims is the fastest CPU in the world, to the six-core $299 Ryzen 5 7600X, the lowest bar of entry to the first family of Zen 4 processors. Tom's Hardware reports: Ryzen 7000 marks the first 5nm x86 chips for desktop PCs, but AMD's newest chips don't come with higher core counts than the previous-gen models. However, frequencies stretch up to 5.7 GHz -- an impressive 800 MHz improvement over the prior generation -- paired with an up to 13% improvement in IPC from the new Zen 4 microarchitecture. That results in a 29% improvement in single-threaded performance over the prior-gen chips. That higher performance also extends out to threaded workloads, with AMD claiming up to 45% more performance in some threaded workloads. AMD says these new chips power huge generational gains over the prior-gen Ryzen 5000 models, with 29% faster gaming and 44% more performance in productivity apps. Going head-to-head with Intel's chips, AMD claims the high-end 7950X is 11% faster overall in gaming than Intel's fastest chip, the 12900K, and that even the low-end Ryzen 5 7600X beats the 12900K by 5% in gaming. It's noteworthy that those claims come with a few caveats [...].
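As a rough sanity check on those figures: single-threaded gains compose multiplicatively from the clock-speed gain and the IPC gain. A quick back-of-the-envelope in Python (treating the 4.9 GHz prior-generation boost clock, derived from the stated 800 MHz uplift, as an assumption):

```python
# Single-thread speedup is roughly (new clock / old clock) * IPC gain.
freq_new = 5.7   # GHz, Ryzen 9 7950X boost clock
freq_old = 4.9   # GHz, assumed prior-gen boost (5.7 GHz minus the 800 MHz uplift)
ipc_gain = 1.13  # AMD's claimed Zen 4 IPC improvement

speedup = (freq_new / freq_old) * ipc_gain
print(f"~{(speedup - 1) * 100:.0f}% single-thread gain")
```

This lands at roughly 31%, in the same ballpark as AMD's 29% claim; the small gap is expected, since marketing figures average measured workloads rather than multiplying peak numbers.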

The Ryzen 7000 processors come to market on September 27, and they'll be joined by new DDR5 memory products that support new EXPO overclocking profiles. AMD's partners will also offer a robust lineup of motherboards -- the chips will snap into new Socket AM5 motherboards that AMD says it will support until 2025+. These motherboards support DDR5 memory and the PCIe 5.0 interface, bringing the Ryzen family up to the latest connectivity standards. The X670 Extreme and standard X670 chipsets arrive first in September, while the more value-oriented B650 options will come to market in October. That includes the newly announced B650E chipset that brings full PCIe 5.0 connectivity to budget motherboards, while the B650 chipset slots in as a lower-tier option. The Ryzen 7000 lineup also brings integrated RDNA 2 graphics to all of the processors in the stack, a first for the Ryzen family.

Operating Systems

Linux 6.1 Will Make It A Bit Easier To Help Spot Faulty CPUs (phoronix.com) 16

An anonymous reader shares a report: Linux 6.1 aims to make it easier to spot problematic CPUs and cores by reporting the likely socket and core when a segmentation fault occurs. Mostly of benefit to server administrators with large fleets of hardware, this can help reveal trends if seg faults routinely land on the same CPU package or the same core. Queued up now in TIP's x86/cpu branch for the Linux 6.1 merge window in October is a patch to print the likely CPU at segmentation fault time.
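On the fleet-administration side, the new message makes trend-spotting a simple log-tallying exercise: grep kernel logs for the report and count which CPU shows up most often. A hedged sketch in Python -- the log-line format below is modeled on the patch's output but should be treated as an assumption:

```python
import re
from collections import Counter

# Example lines modeled on the new kernel message; exact wording may differ.
logs = [
    "app[1412]: segfault at 0 ip ... likely on CPU 12 (core 4, socket 1)",
    "app[1523]: segfault at 8 ip ... likely on CPU 12 (core 4, socket 1)",
    "app[1618]: segfault at 0 ip ... likely on CPU 3 (core 3, socket 0)",
]

pattern = re.compile(r"likely on CPU (\d+) \(core (\d+), socket (\d+)\)")

# Tally (cpu, core, socket) triples across every matching log line.
counts = Counter(m.groups() for line in logs if (m := pattern.search(line)))

(cpu, core, socket), hits = counts.most_common(1)[0]
print(f"CPU {cpu} (core {core}, socket {socket}) seen in {hits} of {len(logs)} faults")
```

A core that dominates such a tally across many unrelated workloads is a reasonable candidate for hardware testing or replacement, which is exactly the use case the patch describes.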
Desktops (Apple)

Devs Make Progress Getting MacOS Ventura Running On Unsupported, Decade-Old Macs (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Skirting the official macOS system requirements to run new versions of the software on old, unsupported Macs has a rich history. Tools like XPostFacto and LeopardAssist could help old PowerPC Macs run newer versions of Mac OS X, a tradition kept alive in the modern era by dosdude1's patchers for Sierra, High Sierra, Mojave, and Catalina. For Big Sur and Monterey, the OpenCore Legacy Patcher (OCLP for short) is the best way to get new macOS versions running on old Macs. It's an offshoot of the OpenCore Hackintosh bootloader, and it's updated fairly frequently with new features and fixes and compatibility for newer macOS versions. The OCLP developers have admitted that macOS Ventura support will be tough, but they've made progress in some crucial areas that should keep some older Macs kicking for a little bit longer.

[...] First, while macOS doesn't technically include system files for pre-AVX2 Intel CPUs, Apple's Rosetta 2 software does still include those files, since Rosetta 2 emulates the capabilities of a pre-AVX2 x86 CPU. By extracting and installing those files in Ventura, you can re-enable support on Ivy Bridge and older CPUs without AVX2 instructions. And this week, Grymalyuk showed off another breakthrough: working graphics support on old Metal-capable Macs, including machines as old as the 2014 5K iMac, the 2012 Mac mini, and even the 2008 cheese grater-style Mac Pro tower. The OCLP team still has other challenges to surmount, not least of which will involve automating all of these hacks so that users without a deep technical understanding of macOS's underpinnings can continue to set up and use the bootloader. Grymalyuk still won't speculate about a timeframe for official Ventura support in OCLP. But given the progress that has been made so far, it seems likely that people with 2012-and-newer Macs should still be able to run Ventura on their Macs without giving up graphics acceleration or other important features.

Operating Systems

NetBSD 9.3: A 2022 OS That Can Run On Late-1980s Hardware (theregister.com) 41

Version 9.3 of NetBSD is here, able to run on very low-end systems and with that authentic early-1990s experience. The Register reports: Version 9.3 comes some 15 months after NetBSD 9.2 and boasts new and updated drivers, improved hardware support, including for some recent AMD and Intel processors, and better handling of suspend and resume. The next sentence in the release announcement, though, might give some readers pause: "Support for wsfb-based X11 servers on the Commodore Amiga." This is your clue that we are in rather different territory from run-of-the-mill PC operating systems here. A notable improvement in NetBSD 9.3 is being able to run a graphical desktop on an Amiga. This is a 2022 operating system that can run on late-1980s hardware, and there are not many of those around.

NetBSD supports eight "tier I" architectures: 32-bit and 64-bit x86 and Arm, plus MIPS, PowerPC, Sun UltraSPARC, and the Xen hypervisor. Alongside those, there are no fewer than 49 "tier II" supported architectures, which are less complete and in which not everything works -- although almost all of them are on version 9.3, except for the port for original Acorn computers with 32-bit Arm CPUs, which is still only on NetBSD 8.1. There's also a "tier III" for ports on "life support," so there may be a risk Archimedes support could drop to that. This is an OS that can run on 680x0 hardware, DEC VAX minicomputers and workstations, and Sun 2, 3, and 32-bit SPARC boxes. In other words, it reaches back as far as some 1970s hardware. Let this govern your expectations. For instance, in VirtualBox, if you tell it you want to create a NetBSD guest, it disables SMP support.

Linux

Linux May Soon Lose Support For the DECnet Protocol (theregister.com) 69

Microsoft software engineer Stephen Hemminger has proposed removing the DECnet protocol handling code from the Linux kernel. The Register reports: The timing is ironic, as this comes just two weeks after VMS Software Inc announced that OpenVMS 9.2 was really ready this time... That announcement, of course, came some months after the first time it announced [PDF] version 9.2 [...]. The last maintainer of the DECnet code was Red Hat's Christine Caulfield, who flagged the code as orphaned in 2010. The change is unlikely to vastly inconvenience many people: VMS is the last even slightly mainstream OS that used DECnet, and VMS has supported TCP/IP for a long time. Indeed, for decades, the oldest email in this reporter's "sent" folder was a 1993 enquiry about the freeware CMUIP stack for VMS.

One of the easier ways to bootstrap VMS on an elderly VAX these days is to install it on the SimH VAX hardware simulator, and then net-boot the real VAX from the simulated one. Anyone keen enough to do that will be competent to run an older version of Linux just for the purpose. Although the alternatives are rapidly being forgotten today, TCP/IP is not the only network protocol around, and as late as the mid-1990s it wasn't even the dominant one. The Linux kernel used to support multiple network protocols, but they are disappearing fast. [...] For a long time, DECnet was a significant network protocol. DEC supplied a client stack called PathWorks to let DOS, Windows and Mac clients connect to VAX servers, not only for file and print, but also terminal connections and X.11. Whole worldwide WANs ran over DECnet, and as a teenage student, your correspondent enjoyed exploring them.

The Media

Are Reviewers Refusing to Compare Wintel Laptops to Apple Silicon? (wormsandviruses.com) 323

The New York Times' product-recommendation service "Wirecutter" has sparked widening criticism about how laptops are reviewed. The technology/Apple blog Daring Fireball first complained that they "institutionally fetishize price over quality". That makes it all the more baffling that their recommended "Best Laptop" — not best Windows laptop, but best laptop, full stop — is a Dell XPS 13 that costs $1,340 but is slower and gets worse battery life (and has a lower-resolution display) than their "best Mac laptop", the $1,000 M1 MacBook Air.
Technically Dell's product won in a category titled "For most people: The best ultrabook" (and Wikipedia points out that ultrabook is, after all, "a marketing term, originated and trademarked by Intel.") But this leads blogger Jack Wellborn to an even larger question: why exactly do reviewers refuse to do a comparison between Wintel laptops and Apple's MacBooks? Is it that reviewers don't think they could fairly compare x86 and ARM laptops? It seems easy enough to me. Are they afraid that constantly showing MacBooks outperforming Wintel laptops will give the impression that they are in the bag for Apple? I don't see why. Facts are facts, and a lot of people need or want to buy a Windows laptop regardless. I can't help but wonder if, in the minds of many reviewers, MacBooks were PCs so long as they used Intel, and therefore they stopped being PCs once Apple switched to using their own silicon.
Saturday Daring Fireball responded with their own assessment. "Reviewers at ostensibly neutral publications are afraid that reiterating the plain truth about x86 vs. Apple silicon — that Apple silicon wins handily in both performance and efficiency — is not going to be popular with a large segment of their audience. Apple silicon is a profoundly inconvenient truth for many computer enthusiasts who do not like Macs, so they've gone into denial..."

Both bloggers cite as an example this review of Microsoft's Surface Laptop Go 2, which does begin by criticizing the device's old processor, its un-backlit keyboard, its small selection of ports, and its low-resolution touchscreen. But it ultimately concludes "Microsoft gets most of the important things right here, and there's no laptop in this price range that doesn't come with some kind of trade-off...." A crime of omission — or is the key phrase "in this price range"? (Which gets back to Daring Fireball's original complaint about "fetishizing price over quality.") Are Apple's new Silicon-powered laptops sometimes being left out of comparisons because they're more expensive?

In an update, Wellborn acknowledges that this alleged refusal-to-compare apparently actually precedes Apple's launch of its M1 chip. But he argues that now it's more important than ever to begin making those comparisons: It's a choice between a hot and noisy and/or slow PC laptop running Windows and a cool, silent, and fast MacBook. Most buyers don't know that choice now exists, and it's the reviewer's job to educate them. Excluding MacBooks from consideration does those buyers a considerable disservice.
