Microsoft

Microsoft 365 Endured 9+ Hours of Outages Thursday (crn.com) 36

Early Friday "there were nearly 113 incidents of people reporting issues with Microsoft 365 as of 1:05 a.m. ET," reports Reuters. But that's down "from over 15,890 reports at its peak a day earlier, according to Downdetector." Reuters points out the outage affected antivirus software Microsoft Defender and data governance software Microsoft Purview, while CRN notes it also impacted "a number of Microsoft 365 services" including Outlook and Exchange Online: During the outage, Outlook users received a "451 4.3.2 temporary server issue" error message when attempting to send or receive email. Users did not have the ability to send and receive email through Exchange Online, including notification emails from Microsoft Viva Engage, according to the vendor. Other issues that cropped up include an inability to send and receive subscription email through [analytics platform] Microsoft Fabric, collect message traces, search within SharePoint Online and Microsoft OneDrive and create chats, meetings, teams, channels or add members in Microsoft Teams...

As with past cloud outages with other vendors, even after Microsoft fixed the issues, recovery efforts by its users to return to a normal state took additional time... Microsoft confirmed in a post on X [Thursday] at 4:14 p.m. ET that it "restored the affected infrastructure to a (healthy) state" but "further load balancing is required to mitigate impact...." The company reported "residual imbalances across the environment" at 7:02 p.m., "restored access to the affected services" and stable mail flow at 12:33 a.m. Jan. 23. At that time, Microsoft still saw a "small number of remaining affected services" without full service stability. The company declared impact from the event "resolved" at 1:29 p.m. Eastern. Microsoft sent out another X post at 8:20 a.m. telling users experiencing residual issues that "clearing local DNS caches or temporarily lowering DNS TTL values may help ensure a quicker remediation...."

Microsoft said in an admin center update that [Thursday's] outage was "caused by elevated service load resulting from reduced capacity during maintenance for a subset of North America hosted infrastructure." Furthermore, Microsoft noted that during "ongoing efforts to rebalance traffic" it introduced a "targeted load balancing configuration change intended to expedite the recovery process, which incidentally introduced additional traffic imbalances associated with persistent impact for a portion of the affected infrastructure." US itek's David Stinner said it appears that Microsoft did not have enough capacity on its backup system while doing maintenance on its main system. "It looks like the backup system was overloaded, and it brought the system down while they were still doing maintenance on the main system," he said. "That is why it took so many hours to get back up and running. If your primary system is down for maintenance and your backup system fails due to capacity issues, then it is going to take a while to get your primary system back up and running."

"This was not Microsoft's first outage of 2026," the article notes, "with the vendor handling access issues with Teams, Outlook and other M365 services on Wednesday, a Copilot issue on Jan. 15 plus an Azure outage earlier in the month..."
PlayStation (Games)

Why Gen Z is Using Retro Tech (bbc.com) 62

"People in their teens and early 20s are increasingly turning to old school tech," reports the BBC, "in a bid to unplug from the online world." Amazon UK told BBC Scotland News that retro-themed products surged in popularity during its Black Friday event, with portable vinyl turntables, Tamagotchis and disposable cameras among its best sellers. Retailers Currys and John Lewis also said they had seen retro gadgets making a comeback with sales of radios, instant cameras and alarm clocks showing big jumps.

While some people scroll endlessly through Netflix in search of their next watch, 17-year-old Declan prefers the more traditional approach of having a DVD in his hands. He grew up surrounded by his gran's collection and later bought his own after visiting a shop with a friend. "The main selling point for me is the cases," he says. Streaming services like Netflix and Disney+ dominate the market but Declan says he values ownership. "It's nice to have something you own instead of paying for subscriptions all the time," he says. "If I lost access to streaming tomorrow, I'd still have my favourite movies ready to watch."

He admits DVDs are a "dying way of watching movies" but that makes them cheaper. "I think they're just cool, there's something authentic about having DVDs," he says. "These things are generations old, it's nice to have them available."

The BBC also writes that one 21-year-old likes the "deliberate artistry" of traditional-camera photography — and the nostalgic experience of using one. They interview a 20-year-old who says vinyl records have a "more authentic sound" — and he appreciates having the physical disc and jacket art.

And one 21-year-old even tracked down the handheld PlayStation Portable he'd used as a kid...
Security

Hacker Conference Installed a Literal Antivirus Monitoring System (wired.com) 49

An anonymous reader quotes a report from Wired: Hacker conferences -- like all conventions -- are notorious for giving attendees a parting gift of mystery illness. To combat "con crud," New Zealand's premier hacker conference, Kawaiicon, quietly launched a real-time, room-by-room carbon dioxide monitoring system for attendees. To get the system up and running, event organizers installed DIY CO2 monitors throughout the Michael Fowler Centre venue before conference doors opened on November 6. Attendees were able to check a public online dashboard for clean air readings for session rooms, kids' areas, the front desk, and more, all before even showing up. "It's ALMOST like we are all nerds in a risk-based industry," the organizers wrote on the convention's website. "What they did is fantastic," Jeff Moss, founder of the Defcon and Black Hat security conferences, told WIRED. "CO2 is being used as an approximation for so many things, but there are no easy, inexpensive network monitoring solutions available. Kawaiicon building something to do this is the true spirit of hacking." [...]

Kawaiicon's work began one month before the conference. In early October, organizers deployed a small fleet of 13 RGB Matrix Portal Room CO2 Monitors, an ambient carbon dioxide monitor DIY project adapted from US electronics and kit company Adafruit Industries. The monitors were connected to an Internet-accessible dashboard with live readings, daily highs and lows, and data history that showed attendees in-room CO2 trends. Kawaiicon tested its CO2 monitors in collaboration with researchers from the University of Otago's public health department. The Michael Fowler Centre is a spectacular blend of Scandinavian brutalism and interior woodwork designed to enhance sound and air, including two grand pou -- carved Māori totems -- next to the main entrance that rise through to the upper foyers. Its cathedral-like acoustics posed a challenge to Kawaiicon's air-hacking crew, which they solved by placing the RGB monitors in stereo. There were two on each level of the Main Auditorium (four total), two in the Renouf session space on level 1, plus monitors in the daycare and Kuracon (kids' hacker conference) areas. To top it off, monitors were placed in the Quiet Room, at the Registration Desk, and in the Green Room.

Kawaiicon's attendees could quickly check the conditions before they arrived and decide how to protect themselves accordingly. At the event, WIRED observed attendees checking CO2 levels on their phones, masking and unmasking in different conference areas, and watching a display of all room readings on a dashboard at the registration desk. In each conference session room, small wall-mounted monitors displayed stoplight colors showing immediate conditions: green for safe, orange for risky, and red to show the room had high CO2 levels, the top level for risk. Colorful custom-made Kawaiicon posters by New Zealand artist Pepper Raccoon placed throughout the Michael Fowler Centre displayed a QR code, making the CO2 dashboard a tap away no matter where attendees were at the conference.
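The stoplight display described above is a simple threshold mapping from a CO2 reading to a color. A minimal sketch of that logic follows; the article doesn't give Kawaiicon's actual ppm cutoffs, so the values below are illustrative assumptions roughly in line with common ventilation guidance:

```python
# Hypothetical stoplight mapping for a room CO2 monitor.
# The threshold values are assumptions, not Kawaiicon's real cutoffs.
GREEN_MAX = 800    # ppm at or below which the room reads "green" (safe)
ORANGE_MAX = 1200  # ppm at or below which the room reads "orange" (risky)

def stoplight(co2_ppm: int) -> str:
    """Map a CO2 reading in ppm to a dashboard stoplight color."""
    if co2_ppm <= GREEN_MAX:
        return "green"
    if co2_ppm <= ORANGE_MAX:
        return "orange"
    return "red"  # highest risk tier

print(stoplight(600))   # green
print(stoplight(1500))  # red
```

On the real monitors this mapping would drive the RGB matrix color; the dashboard exposes the underlying ppm values so attendees can judge trends for themselves.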
Resources, parts lists, and assembly guides can be found here.
AI

Chan Zuckerberg Initiative Shifts Bulk of Philanthropy, 'Going All In on AI-Powered Biology' (apnews.com) 32

The Associated Press reports that "For the past decade, Dr. Priscilla Chan and her husband Mark Zuckerberg have focused part of their philanthropy on a lofty goal — 'to cure, prevent or manage all disease' — if not in their lifetime, then in their children's."

During that decade they also funded other initiatives (including underprivileged schools and immigration reform), according to the article. But there's a change coming: Now, the billionaire couple is shifting the bulk of their philanthropic resources to Biohub, the pair's science organization, and focusing on using artificial intelligence to accelerate scientific discovery. The idea is to develop virtual, AI-based cell models to understand how they work in the human body, study inflammation and use AI to "harness the immune system" for disease detection, prevention and treatment. "I feel like the science work that we've done, the Biohub model in particular, has been the most impactful thing that we have done. So we want to really double down on that. Biohub is going to be the main focus of our philanthropy going forward," Zuckerberg said Wednesday evening at an event at the Biohub Imaging Institute in Redwood City, California.... Chan and Zuckerberg have pledged 99% of their lifetime wealth — from shares of Meta Platforms, where Zuckerberg is CEO — toward these efforts...

On Thursday, Chan and Zuckerberg also announced that Biohub has hired the team at EvolutionaryScale, an AI research lab that has created large-scale AI systems for the life sciences... Biohub's ambition for the coming years and decades is to create virtual cell systems that would not have been possible without recent advances in AI. Similar to how large language models learn from vast databases of digital books, online writings and other media, its researchers and scientists are working toward building virtual systems that serve as digital representations of human physiology at all levels, such as the molecular, cellular and genomic. As the models are open source — free and publicly available — scientists can then conduct virtual experiments on a scale not possible in physical laboratories.

"We will continue the model we've pioneered of bringing together scientists and engineers in our own state-of-the-art labs to build tools that advance the field," according to Thursday's blog post. "We'll then use those tools to generate new data sets for training new biological AI models to create virtual cells and immune systems and engineer our cells to detect and treat disease....

"We have also established the first large-scale GPU cluster for biological research, as well as the largest datasets around human cell types. This collection of resources does not exist anywhere else."
AI

Did Will Smith Upload an AI-Enhanced Video - and Is This Just the Beginning? (hollywoodreporter.com) 28

After Will Smith uploaded a video of an adoring crowd, blogger Andy Baio "conducted a detailed analysis that suggests Will Smith's team might have used AI to turn photos from his recent concerts into videos," writes BGR. But there's more to the story: Google recently ran an experiment for YouTube Shorts in which it used AI (machine learning) to improve the quality of Shorts without asking the creator for permission. People complained the videos looked like they were AI generated. It seems that Will Smith's YouTube Shorts clip that attracted criticism from fans this week might have been a victim of this experiment... The signs are real. The man who claimed Will Smith's song helped him cure cancer was there. The woman in front of him was holding the sign with him. The "Lov U" sign appeared in photos the singer posted on his social media channels before the clip was shared.
"Will Smith has not denied the use of AI in these promotional clips," the article adds.

But the Hollywood Reporter also calls it "just the beginning of AI chaos," noting that "influencers and spinmeisters have been using AI upscaling for years, if quietly, the way you might round up your current salary in a job interview." It's only going to grow more popular as the tools get better. (And they will — you just need some tweaks to the model and increases in compute to erase these hallucinations.) In fact, when the chapter on the early AI Age is written, the line about this moment is less likely to be, "Remember when Will Smith did something cringily AI?" and more, "Remember when AI was still seen as so cringe that we made fun of Will Smith for it?" Experts differ on the timeline, but everyone agrees it's just years if not months before we'll stop being able to spot an AI video. [Will Smith's video] had the particular misfortune of coming out at this interregnum moment: good enough for someone to use but not so good we can't spot it.

That moment will be over soon enough, and, I suspect, so will our pearl-clutching. The main effect of this new age of the synthetic is that video will stop being a meaningful measure of truth. We have long stopped believing everything we read, and AI image-generators have killed what photoshop wounded. But video until now has been the last bastion of objectivity — incontrovertible evidence that an event took place the way it seemed to....

But there is an upside. (Really.) Without a format that can telegraph objectivity, we'll need to (if we care to) turn to other ways to assure ourselves of the facts: the source of the video. That could mean the human-led content creator will matter more. After years of seeing news brands take a beating in the trust department, they'll soon become the only hope we have of knowing whether something happened. We no longer will be able to trust the medium. But we may newly believe the media.

Open Source

Arch Linux Faces 'Ongoing' DDoS Attack (theregister.com) 29

"Some joyless ne'er-do-well has loosed a botnet on the community-driven Arch Linux distro," reports the Register, with a distributed denial of service (DDoS) attack that apparently started a week ago.

Arch maintainer Cristian Heusel announced Thursday on the project's website that the attack "primarily impacts our main webpage, the Arch User Repository (AUR), and the Forums." We are aware of the problems that this creates for our end users and will continue to actively work with our hosting provider to mitigate the attack. We are also evaluating DDoS protection providers while carefully considering factors including cost, security, and ethical standards... As a volunteer-driven project, we appreciate the community's patience as our DevOps team works to resolve these issues.
A status update Friday acknowledged "we are suffering from partial outages." The Register reports: The attack comes as the project has been enjoying a boost in mainstream success. The distro was picked by Valve to underpin the SteamOS software running on its Steam Deck handheld gaming gadget, with the company providing the project with funding for further development. Late last year, a new version of the archinstall tool was released, with a view to making the system more friendly to newcomers...

For now, the Arch team is working to mitigate the attack's impact, which highlights a bootstrapping issue. Tools designed to shift traffic to mirrors in the event the main infrastructure is unavailable rely on a mirror list obtained from that same main infrastructure, with Heusel advising that users should "default to the mirrors listed in the pacman-mirrorlist package" if tools like reflector fail. Installation media can be downloaded from a range of mirrors, too, but should be checked against the project's official signing key before being trusted.
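The bootstrapping problem described above comes down to a fallback rule: prefer a freshly fetched mirror list, but when the main infrastructure is unreachable, fall back to the static list shipped in the pacman-mirrorlist package. A minimal sketch of that selection logic, with made-up file contents (the real tools operate on /etc/pacman.d/mirrorlist):

```python
# Hypothetical sketch of the mirror-fallback rule Heusel describes.
# The mirrorlist contents below are illustrative, not real Arch config.
def pick_mirrors(dynamic_list: str, static_list: str) -> list[str]:
    """Return Server URLs from the dynamically fetched list (e.g. from
    reflector), falling back to the static package-shipped list when
    the dynamic one is empty or contains no usable entries."""
    def servers(text: str) -> list[str]:
        return [line.split("=", 1)[1].strip()
                for line in text.splitlines()
                if line.strip().startswith("Server")]
    return servers(dynamic_list) or servers(static_list)

# reflector returned nothing because the main infrastructure was down:
static = "Server = https://geo.mirror.pkgbuild.com/$repo/os/$arch\n"
print(pick_mirrors("", static))
```

The same principle applies to installation media: the download can come from any mirror, but trust comes from verifying the image against the project's official signing key, not from the mirror itself.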

AI

Nvidia's CUDA Platform Now Supports RISC-V (tomshardware.com) 20

An anonymous reader quotes a report from Tom's Hardware: At the 2025 RISC-V Summit in China, Nvidia announced that its CUDA software platform will be made compatible with the RISC-V instruction set architecture (ISA) on the CPU side of things. The news was confirmed during a presentation at the event. This is a major step toward enabling RISC-V-based CPUs in performance-demanding applications. The announcement makes it clear that RISC-V can now serve as the main processor for CUDA-based systems, a role traditionally filled by x86 or Arm cores. While hardly anyone expects RISC-V in hyperscale datacenters any time soon, RISC-V can be used in CUDA-enabled edge devices, such as Nvidia's Jetson modules. However, it looks like Nvidia does indeed expect RISC-V to be in the datacenter.

Nvidia's profile on RISC-V seems to be quite high as the keynote at the RISC-V Summit China was delivered by Frans Sijsterman, who appears to be Vice President of Hardware Engineering at Nvidia. The presentation outlined how CUDA components will now run on RISC-V. A diagram shown at the session illustrated a typical configuration: the GPU handles parallel workloads, while a RISC-V CPU executes CUDA system drivers, application logic, and the operating system. This setup enables the CPU to orchestrate GPU computations fully within the CUDA environment. Given Nvidia's current focus, the workloads must be AI-related, yet the company did not confirm this. However, there is more.

Also featured in the diagram was a DPU handling networking tasks, rounding out a system consisting of GPU compute, CPU orchestration, and data movement. This configuration clearly suggests Nvidia's vision to build heterogeneous compute platforms where a RISC-V CPU can be central to managing workloads while Nvidia's GPUs, DPUs, and networking chips handle the rest. Yet again, there is more. Even with this low-profile announcement, Nvidia essentially bridges its proprietary CUDA stack to an open architecture, one that seems to be developing fast in China. Being unable to ship its flagship GB200 and GB300 offerings to China, the company has to find ways to keep its CUDA ecosystem thriving there.

Transportation

Mitsubishi Launches EV Battery Swap Network in Tokyo - for Both Cars and Trucks (electrek.co) 70

In Tokyo, Mitsubishi is deploying "an innovative new battery swap network for electric cars" in a multi-year test program, reports the EV news site Electrek.

But it's not just for electric cars. Along with the 14 modular battery swapping stations, Mitsubishi is also deploying "more than 150 battery-swappable commercial electric vehicles" from truck maker Fuso: A truck like the Mitsubishi eCanter typically requires a full night of AC charging to top off its batteries, and at least an hour or two on DC charging in Japan, according to Fuso. This joint pilot by Mitsubishi, Mitsubishi Fuso Trucks, and [EV battery swap specialist] Ample aims to circumvent this issue of forced downtime with its swappable batteries, supporting vehicle uptime by delivering a full charge within minutes.

The move is meant to encourage the transport industry's EV shift while creating a depository of stored energy that can be deployed to the grid in the event of a natural disaster — something Mitsubishi in Japan has been working on for years.

The article's author also adds their own opinion about battery-swapping technology. "When you see how simple it is to add hundreds of miles of driving in just 100 seconds — quicker, in many cases, than pumping a tank of liquid fuel into an ICE-powered car — you might come around, yourself."
IOS

What To Expect From Apple's WWDC (arstechnica.com) 26

Apple's Worldwide Developers Conference 2025 (WWDC25) kicks off next week, on June 9, showcasing the company's latest software and new technologies. That includes the next version of iOS, which is rumored to have the most significant design overhaul since the introduction of iOS 7. Here's an overview of what to expect:

Major Software Redesigns
Apple plans to shift its operating system naming to reflect the release year, moving from sequential numbers to year-based identifiers. Consequently, the upcoming releases will be labeled as iOS 26, macOS 26, watchOS 26, etc., streamlining the versioning across platforms.

iOS 26 is anticipated to feature a glossy, glass-like interface inspired by visionOS, incorporating translucent elements and rounded buttons. This design language is expected to extend across iPadOS, macOS, watchOS, and tvOS, promoting a cohesive user experience across devices. Core applications like Phone, Safari, and Camera are slated for significant redesigns, too. For instance, Safari may introduce a translucent, "glassy" address bar, aligning with the new visual aesthetics.

While AI is not expected to be the main focus, reportedly because the overhauled Siri isn't ready, some AI-related updates are rumored. The Shortcuts app may gain "Apple Intelligence," enabling users to create shortcuts using natural language. It's also possible that Gemini will be offered as an option for AI functionalities on the iPhone, similar to ChatGPT.

Other App and Feature Updates
The lock screen might display charging estimates, indicating how long it will take for the phone to fully charge. There's a rumor about bringing live translation features to AirPods. The Messages app could receive automatic translations and call support; the Music app might introduce full-screen animated lock screen art; and Apple Notes may get markdown support. Users may also only need to log into a captive Wi-Fi portal once, and all their devices will automatically be logged in.

Significant updates are expected for Apple Home. There's speculation about the potential announcement of a "HomePad" with a screen, Apple's competitor to devices like Google's Nest Hub. A new dedicated Apple gaming app is also anticipated to replace Game Center.
If you're expecting new hardware, don't hold your breath. The event is expected to focus primarily on software developments. Apple may even discontinue support for several older Intel-based Macs in macOS 26, including models like the 2018 MacBook Pro and the 2019 iMac, as it continues its transition toward exclusive support for Apple Silicon devices.

Sources:
Apple WWDC 2025 Rumors and Predictions! (Waveform)
WWDC 2025 Overview (MacRumors)
WWDC 2025: What to expect from this year's conference (TechCrunch)
What to expect from Apple's Worldwide Developers Conference next week (Ars Technica)
Apple's WWDC 2025: How to Watch and What to Expect (Wired)
AI

New Pope Chose His Name Based On AI's Threats To 'Human Dignity' (arstechnica.com) 69

An anonymous reader quotes a report from Ars Technica: Last Thursday, white smoke emerged from a chimney at the Sistine Chapel, signaling that cardinals had elected a new pope. That's a rare event in itself, but one of the many unprecedented aspects of the election of Chicago-born Robert Prevost as Pope Leo XIV is one of the main reasons he chose his papal name: artificial intelligence. On Saturday, the new pope gave his first address to the College of Cardinals, explaining his name choice as a continuation of Pope Francis' concerns about technological transformation. "Sensing myself called to continue in this same path, I chose to take the name Leo XIV," he said during the address. "There are different reasons for this, but mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution."

In his address, Leo XIV explicitly described "artificial intelligence" developments as "another industrial revolution," positioning himself to address this technological shift as his namesake had done over a century ago. As the head of an ancient religious organization that spans millennia, the pope's talk about AI creates a somewhat head-spinning juxtaposition, but Leo XIV isn't the first pope to focus on defending human dignity in the age of AI. Pope Francis, who died in April, first established AI as a Vatican priority, as we reported in August 2023 when he warned during his 2023 World Day of Peace message that AI should not allow "violence and discrimination to take root." In January of this year, Francis further elaborated on his warnings about AI with reference to a "shadow of evil" that potentially looms over the field in a document called "Antiqua et Nova" (meaning "the old and the new").

"Like any product of human creativity, AI can be directed toward positive or negative ends," Francis said in January. "When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used." [...] Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church. "In our own day," Leo XIV concluded in his formal address on Saturday, "the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor."

NASA

Outgoing NASA Administrator Urges Incoming Leaders To Stick With Artemis Plan (arstechnica.com) 45

Before NASA Administrator Bill Nelson retires in a couple of weeks, he has one final message for the next administration: Don't give up on the agency's Artemis Program to return humans to the Moon. In an interview with Ars Technica's Eric Berger, Nelson discussed his time in office, the major decisions he made, and his concerns for the space agency's future under the Trump administration. Here's an excerpt from the interview: Ars: I wanted to start with the state of Artemis. You all had an event a few weeks ago where you talked about Artemis II and Artemis III delays. And you know, both those missions have slipped a couple of years now since you've been administrator. So I'm just wondering, do you know how confident we should be in the current timeline?

Bill Nelson: Well, I am very confident because this most recent [delay] was occasioned by virtue of the heat shield, and it has been unanimous after all of the testing that they understand what happened to Orion's heat shield. The chunks came off in an irregular pattern from the Artemis I heat shield. With the change in the re-entry profile, they are unanimous in their recommendation that we can go with the Artemis II heat shield as it is. And I must say that of the major decisions that I've made, that was an easy one for me because it was unanimous. When I say it was unanimous, it was unanimous in the IRT, the independent review team, headed by Paul Hill. It wasn't to begin with, but after all the extensive testing, everybody was on board. It was unanimous in the deputy's committee. It was unanimous in the agency committee, and that brought it to me then in the Executive Council, and it was unanimous there. So I'm very confident that you're going to see Artemis II fly on or around April of 2026, and then if the SpaceX lander is ready, and that, of course, is a big if -- but they have met all of their milestones, and we'll see what happens on this next test... If they are ready, I think it is very probable that we will see the lunar landing in the summer of 2027.

Ars: Do you think it's appropriate for the next administration to review the Artemis Program?

Bill Nelson: Are you implying that Artemis should be canceled?

Ars: No. I don't think Artemis will be canceled in the main. But I do think they're going to take a look at the way the missions are done, at the architecture. I know NASA just went through that process with Orion's heat shield.

Bill Nelson: Well, I think questioning what you're doing clearly is always an issue that ought to be on the table. But do I think that they are going to cancel, as some of the chatter out there suggests, and replace SLS with Starship? The answer is no.

Ars: Why?

Bill Nelson: Put yourself in the place of President Trump. Do you think President Trump would like to have a conversation with American astronauts on the surface of the Moon during his tenure?

Ars: Of course.

Bill Nelson: OK, let me ask you another question. Do you think that President Trump would rather have a conversation with American astronauts during his tenure rather than listening to the comments of Chinese astronauts on the Moon during his tenure? My case is closed, your Honor, I submit it to the jury.
Further reading: Elon Musk: 'We're Going Straight to Mars. The Moon is a Distraction.'
Open Source

What Happens to Relicensed Open Source Projects and Their Forks? (thenewstack.io) 7

A Linux Foundation project focused on understanding the health of the open source community just studied the outcomes for three projects that switched to "more restrictive" licenses and then faced community forks.

The data science director for the project — known as Community Health Analytics in Open Source Software (or CHAOSS) — is also an OpenUK board member, and describes the outcomes for Elasticsearch (forked as OpenSearch), Redis (forked as Valkey), and Terraform (forked as OpenTofu): The relicensed project (Redis) had significant numbers of contributors who were not employed by the company, and the fork (Valkey) was created by those existing contributors as a foundation project... The Redis project differs from Elasticsearch and Terraform in the number of contributions to the Redis repository from people who were not employees of Redis. In the year leading up to the relicense, when Redis was still open source, there were substantial contributions from employees of other companies: Twice as many non-Redis employees made five or more commits, and about a dozen employees of other companies made almost twice as many commits as Redis employees made.

In the six months after the relicense, all of the external contributors from companies (including Amazon, Alibaba, Tencent, Huawei and Ericsson) who contributed over five commits to the Redis project in the year prior to the relicense stopped contributing. In sum, Redis had strong organizational diversity before the relicense, but only Redis employees made significant contributions afterward.

Valkey was forked from Redis 7.2.4 on March 28, 2024, as a Linux Foundation project under the BSD-3 license. The fork was driven by a group of people who previously contributed to Redis with public support from their employers. Within its first six months, the Valkey repository had 29 contributors employed at 10 companies, and 18 of those people previously contributed to Redis. Valkey has a diverse set of contributors from various companies, with Amazon having the most contributors.

The results weren't always so clear-cut. Because Terraform always had very few contributors outside of the company, "there was no substantial impact on the contributor community from the relicensing event..." (Although the OpenTofu fork — a Linux Foundation project — had 31 people at 11 organizations who made five or more contributions.)

And both before and after Elasticsearch's relicensing, most contributors were Elastic employees, so "the 2021 relicense had little to no impact on contributors." (But the OpenSearch fork — transferred in September to the Linux Foundation — shows a more varied contributor base, with just 63% of additions and 64% of deletions coming from Amazon employees who made 10 or more commits. Six people who didn't work for Amazon made 10 or more commits, making up 11% of additions and 13% of deletions.)
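The before-and-after comparisons above boil down to counting distinct contributors per employer on each side of a relicense date. A minimal sketch of that kind of tally follows; the commit records, employer labels, and cutoff date are made-up illustrations, not CHAOSS's actual dataset or methodology:

```python
from collections import Counter
from datetime import date

# Illustrative relicense cutoff and made-up commit records.
RELICENSE = date(2024, 3, 20)

commits = [
    # (author, employer, commit date)
    ("alice", "Amazon", date(2023, 11, 2)),
    ("bob",   "Redis",  date(2023, 12, 9)),
    ("carol", "Huawei", date(2024, 1, 15)),
    ("bob",   "Redis",  date(2024, 6, 3)),   # only company staff remain
]

def contributors_by_org(commits, cutoff, after=False):
    """Count distinct contributors per employer before (or, with
    after=True, on or after) the cutoff date."""
    people = {(author, employer)
              for author, employer, day in commits
              if (day >= cutoff) == after}
    return Counter(employer for _, employer in people)

print(contributors_by_org(commits, RELICENSE))             # pre-relicense mix
print(contributors_by_org(commits, RELICENSE, after=True)) # post-relicense
```

Comparing the two Counters makes the organizational-diversity shift the study describes directly visible: a broad employer mix before the relicense collapsing to a single employer afterward.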

So "Looking at all of these projects together, we see that the forks from relicensed projects tend to have more organizational diversity than the original projects," they conclude, adding that in general "projects with greater organizational diversity tend to be more sustainable..."
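The measurement the study applies ("five or more commits" per person, grouped by employer, in a window before or after the relicense) is straightforward to reproduce. A minimal sketch of that kind of analysis, using an invented commit log and an approximate relicense date (all illustrative; this is not the CHAOSS dataset):

```python
from collections import Counter
from datetime import date

RELICENSE = date(2024, 3, 20)  # approximate date of the Redis license change

# Hypothetical commit log: (author, employer, commit date). A real CHAOSS
# analysis would pull this from git history plus affiliation data.
commits = (
      [("a", "Amazon",  date(2023, 6, 1))] * 6
    + [("b", "Alibaba", date(2023, 8, 1))] * 5
    + [("c", "Redis",   date(2023, 9, 1))] * 7
    + [("c", "Redis",   date(2024, 6, 1))] * 7   # only Redis staff remain after
)

def orgs_with_active_contributors(log, start, end, min_commits=5):
    """Employers with at least one person making >= min_commits in [start, end)."""
    per_person = Counter((a, org) for a, org, d in log if start <= d < end)
    return {org for (a, org), n in per_person.items() if n >= min_commits}

before = orgs_with_active_contributors(commits, date(2023, 3, 20), RELICENSE)
after = orgs_with_active_contributors(commits, RELICENSE, date(2024, 9, 20))
# before -> {'Amazon', 'Alibaba', 'Redis'}; after -> {'Redis'}
```

Run on the real commit history, this set shrinking to the vendor alone is exactly the loss of organizational diversity the study describes.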

"You can dive into the details about these six projects in the paper, presentation and data we shared at the recent OpenForum Academy Symposium."

AI

'Human Vs. Autonomous Car' Race Ends Before It Begins (arstechnica.com) 26

A demonstration "race" between (human) F1 race car driver Daniil Kvyat and an autonomous vehicle was just staged by the Abu Dhabi Autonomous Racing League.

Describing the league and the "man vs. machine" showdown, Ars Technica writes, "Say goodbye to the human driver and hello to 95 kilograms of computers and a whole suite of sensors." But again, racing is hard, and replacing humans doesn't change that. The people who run and participate in A2RL are aware of this, and while many organizations have made it a sport of overselling AI, A2RL is up-front about the limitations of the current state of the technology. One example of the technology's current shortcomings: The vehicles can't swerve back and forth to warm up the tires. Giovanni Pau, Team Principal of TII Racing, stated during a press briefing regarding the AI system built for racing, "We don't have human intuition. So basically, that is one of the main challenges to drive this type of car. It's impossible today to do a correct grip estimation. A thing my friend Daniil (Kvyat) can do in a nanosecond...."

Technology Innovation Institute (TII) develops the hardware and software stack for all the vehicles. Hardware-wise, the eight teams receive the same technology. When it comes to software, the teams need to build out their own system on TII's software stack to get the vehicles to navigate the tracks. In April, four teams raced on the track in Abu Dhabi. As we've noted before, how the vehicles navigate the tracks and world around them isn't actually AI. It's programmed responses to an environment; these vehicles are not learning on their own. Frankly, most of what is called "AI" in the real world is also not AI.

Vehicles driven by the systems still need years of research to come close to the effectiveness of a human behind the wheel. Kvyat has been working with A2RL since the beginning. In that time, the former F1 driver has been helping engineers understand how to bring the vehicle closer to its limit. The speed continues to increase as the development progresses. Initially, the vehicles were three to five minutes slower than Kvyat around a lap; now, they are about eight seconds behind. That's a lifetime in a real human-to-human race, but an impressive amount of development for vehicles with 90 kg of computer hardware crammed into the cockpit of a super formula car. Currently, the vehicles are capable of recreating 90-95 percent of the speed of a human driver, according to Pau. Those capabilities are reduced when a human driver is also on the track, particularly for safety reasons....

The "race" was to be held ahead of the season finale of the Super Formula season... The A2RL vehicle took off approximately 22 seconds ahead of Kvyat, but the race ended before the practice lap was completed. Cameras missed the event, but the A2RL car lost traction and ended up tail-first into a wall...

Khurram Hassan, commercial director of A2RL, told Ars that the cold tires on the cold track caused a loss of traction.

Graphics

Nvidia Revives LAN Party After 13 Years To Celebrate RTX 50-Series GPU Launch (tomshardware.com) 9

Nvidia is hosting its first LAN party in over a decade to celebrate the debut of the RTX 50 series. It'll occur at CES 2025 in January and feature a 50-hour gaming marathon with tournaments, prizes, and global remote sessions. Tom's Hardware reports: The LAN party (dubbed GeForce LAN 50) will start on January 4 at 4:30 pm PT and end right before Nvidia CEO Jensen Huang gives his opening speech at the CES event in Las Vegas on January 6 at 6:30 pm PT. The main LAN event will occur in Las Vegas, while remote sessions will take place in Beijing, Berlin, and Taipei. The event will purportedly host up to 400 gamers, requiring a $125 refundable deposit to sign up. The 400 lucky people who manage to make the list will not include content creators, who might be invited directly to the LAN party by Nvidia.
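The schedule is internally consistent: 50 hours after the January 4, 4:30 pm PT start lands exactly at 6:30 pm PT on January 6, right at the keynote slot. A quick check:

```python
from datetime import datetime, timedelta

start = datetime(2025, 1, 4, 16, 30)        # Jan 4, 4:30 pm PT
end = start + timedelta(hours=50)           # 50-hour marathon
assert end == datetime(2025, 1, 6, 18, 30)  # Jan 6, 6:30 pm PT, keynote time
```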

As mentioned, the LAN party will be a full-blown 50-hour gaming marathon with in-game and LAN contests, tournaments, and prize raffles. For everyone who won't be able to get into the LAN party, Nvidia is providing additional prizes through its Nvidia App dubbed "LAN" missions. More prizes will be given out through the hashtag #GeForceGreats on social media. Nvidia is going all out for its GeForce RTX 50 series debut early next month. The last time Nvidia hosted a LAN party was purportedly 13 years ago.

Hardware

Framework Laptops Get Modular Makeover With RISC-V Main Board (theregister.com) 48

An anonymous reader quotes a report from The Register: Framework CEO Nirav Patel had one of the bravest tech demos that we've seen at a conference yet -- modifying a Framework Laptop from x86 to RISC-V live on stage. In the five-minute duration of one of the Ubuntu Summit's Lightning Talks, he opened up a Framework machine, removed its motherboard, installed a RISC-V-powered replacement, reconnected it, and closed the machine up again. All while presenting the talk live, and pretty much without hesitation, deviation, or repetition. It was an impressive performance, and you can watch it yourself at the 8:56:30 mark in the video recording.

Now DeepComputing is taking orders for the DC-ROMA board, at least from those in its early access program. The new main board is powered by a StarFive JH7110 System-on-Chip. (Note: there are two tabs on the page, for both the JH7110 and JH7100, and we can't link directly to the latter.) CNX Software has more details about the SoC. Although the SoC has six CPU cores, two are dedicated processors, making it a quad-core 64-bit device. The four general-purpose cores are 64-bit and run at up to 1.5 GHz. It supports 8 GB of RAM and eMMC storage. [...]

In our opinion, RISC-V is not yet competitive with Arm in performance. However, this is a real, usable, general-purpose computer, based on an open instruction set. That's no mean feat, and it's got more than enough performance for less demanding work. It's also the first third-party main board for the Framework hardware, which is another welcome achievement. The company has now delivered several new generations of hardware, including a 16-inch model, and continues to upgrade its machines' specs.

Technology

Where Have All the Chief Metaverse Officers Gone? (wired.com) 34

Wired: Last spring, at an event in New York City, Robert Triefus, then CEO of Gucci's Vault -- the brand's virtual marketplace -- argued the recent deflation in hype around the metaverse was just a brief hiccup. "I see it more as a correction," he told the crowd. "We're now at a much more sensible place, where you've got individuals [and] companies ... who are very serious about what they're doing." When asked how buying real estate in The Sandbox aligned with Gucci's broader goals as a brand, he responded with quasi-mystical language: "The metaverse is an opportunity to embrace the digital self."

The following month, Triefus left Gucci "abruptly," according to Vogue Business. He was off "to pursue other opportunities," the brand said at the time. A month later, Vogue Business revealed that Triefus was to be the new Stone Island CEO. Immediately there was speculation on whether Stone Island would enter the metaverse. So far it has not. Triefus' public zeal for all things virtual and his short-lived tenure as the head of Gucci's metaverse strategy are both part of a broader trend that briefly convulsed the private sector starting in late 2021: the hastily recruited "chief metaverse officer."

Following a wave of excitement around the metaverse as a golden new opportunity for commerce, a legion of brands rushed to launch their own virtual storefronts. Three quarters of CEOs surveyed by Russell Reynolds in 2022 said they were hiring dedicated talent to lead in the space, or expanding current roles to cover it. While the actual titles varied, their main role seemed to involve helping their respective brands devise new strategies with then-buzzy technologies such as NFTs and crypto.

Meta CEO Mark Zuckerberg has quietly shifted focus from virtual reality to augmented reality, signaling a retreat from the company's ambitious metaverse plans. At Meta's recent developer conference, Zuckerberg mentioned "metaverse" only three times in his hour-long keynote, instead highlighting AR innovations like smart glasses.

The move follows a broader cooling of corporate enthusiasm for the metaverse. Luxury brands that once rushed to establish virtual presences have scaled back efforts, with some chief metaverse officers pivoting to AI-focused roles. "Many brands were quick to experiment -- there was a sense of a land grab," said Matthew Ball, tech investor and author. "They didn't want to be last, and they were excited to try and be first." Wired notes that the shift reflects disappointing user engagement with existing metaverse platforms and growing interest in more accessible AR technologies.

Earth

Mount Everest Is Growing Even Taller (msn.com) 32

The world's tallest mountain is getting taller. Mount Everest, also known as Chomolungma, has grown about 15 to 50 meters (50 to 164 feet) higher than expected over the past 89,000 years, according to a modeling study released Monday. From a report: The culprit is a nearby river eroding and pushing down land, causing the ground under Mount Everest to rebound and lift. "It's a new additional component of uplift of Mount Everest," said Matthew Fox, study co-author and geologist at University College London. He expects this spurt of Everest and its surrounding peaks to continue for millions of years. He added, "the biggest impact is probably on the climbers that have to climb another 20 meters or so to the top." The additional height may also lead to the growth of more ice at the higher elevations.

Mount Everest, part of the Himalayan mountain range, towers along the Nepal-Tibet border at around 8,850 meters (29,000 feet) high. Not only is it the tallest worldwide, it leaves its surrounding peaks in the dust -- rising around 250 meters above the next tallest mountain in the Himalayas, the 8,611-meter (28,251-foot) K2 mountain. But what could cause Everest's anomalous height compared to its neighbors? These extra meters on Mount Everest can be chalked up to a relatively rare "river capture event" from 89,000 years ago, according to the authors' computer models. During such an event, one river changes its course, interacts with another and steals its water, Fox said. In this case, the team said the Arun river network -- about 75 kilometers east of Mount Everest -- stole water from a river flowing north of Everest. Fox said the capture could have been initiated by a dramatic flood, which rerouted the water to a new drainage network. Today, the Arun River is a main tributary to the Kosi River to the south.
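For scale, the modeled extra growth works out to only a small fraction of a millimeter per year of additional uplift, a back-of-the-envelope conversion of the study's figures:

```python
# Convert the modeled extra growth into an average uplift rate.
extra_height_m = (15, 50)   # additional height gained, per the study
years = 89_000
low, high = (h * 1000 / years for h in extra_height_m)  # mm per year
assert round(low, 2) == 0.17 and round(high, 2) == 0.56
```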

Science

Brain Scientists Finally Discover the Glue That Makes Memories Stick For a Lifetime (scientificamerican.com) 71

An anonymous reader quotes a report from Scientific American, written by science journalist Simon Makin: The persistence of memory is crucial to our sense of identity, and without it, there would be no learning, for us or any other animal. It's little wonder, then, that some researchers have called how the brain stores memories the most fundamental question in neuroscience. A milestone in the effort to answer this question came in the early 1970s, with the discovery of a phenomenon called long-term potentiation, or LTP. Scientists found that electrically stimulating a synapse that connects two neurons causes a long-lasting increase in how well that connection transmits signals. Scientists say simply that the "synaptic strength" has increased. This is widely believed to be the process underlying memory. Networks of neural connections of varying strengths are thought to be what memories are made of.

In the search for molecules that enable LTP, two main contenders emerged. One, called PKMzeta (protein kinase Mzeta), made a big splash when a 2006 study showed that blocking it erased memories for places in rats. If obstructing a molecule erases memories, researchers reasoned, that molecule must be essential to the process the brain uses to maintain memories. A flurry of research into the so-called memory molecule followed, and numerous experiments appeared to show that it was necessary and sufficient for maintaining many types of memory. The theory had a couple of holes, though. First, PKMzeta is short-lived. "Those proteins only last in synapses for a couple of hours, and in neurons, probably a couple of days," says Todd Sacktor, a neurologist at SUNY Downstate Health Sciences University, who was co-senior author of the 2006 study. "Yet our memories can last 90 years, so how do you explain this difference?" Second, PKMzeta is created in cells as needed, but then it has to find the right synapses. Each neuron has around 10,000 synapses, only a few percent of which are strengthened, says neuroscientist Andre Fenton, the other co-senior author of the 2006 study, who is now at New York University. The strengthening of some synapses and not others is how this mechanism stores information, but how PKMzeta molecules accomplish this was unknown.

A new study published in Science Advances by Sacktor, Fenton and their colleagues plugs these holes. The research suggests that PKMzeta works alongside another molecule, called KIBRA (kidney and brain expressed adaptor protein), which attaches to synapses activated during learning, effectively "tagging" them. KIBRA couples with PKMzeta, which then keeps the tagged synapses strengthened. Experiments show that blocking the interaction between these two molecules abolishes LTP in neurons and disrupts spatial memories in mice. Both molecules are short-lived, but their interaction persists. "It's not PKMzeta that's required for maintaining a memory, it's the continual interaction between PKMzeta and this targeting molecule, called KIBRA," Sacktor says. "If you block KIBRA from PKMzeta, you'll erase a memory that's a month old." The specific molecules will have been replaced many times during that month, he adds. But, once established, the interaction maintains memories over the long term as individual molecules are continually replenished. [...]

"What seems clear is that there is no single 'memory molecule,'" concludes Scientific American. "Regardless of any competing candidate, PKMzeta needs a second molecule to maintain long-term memories, and there is another that can substitute in a pinch."

"There are also some types of memory, such as the association of a location with fear, that do not depend on PKMzeta," the report adds. "Nobody knows what molecules are involved in those cases, and PKMzeta is clearly not the whole story."

Encryption

NIST Finalizes Trio of Post-Quantum Encryption Standards (theregister.com) 20

"NIST has formally accepted three algorithms for post-quantum cryptography," writes ancient Slashdot reader jd. "Two more backup algorithms are being worked on. The idea is to have backup algorithms using very different maths, just in case a flaw in the original approach is discovered later." The Register reports: The National Institute of Standards and Technology (NIST) today released the long-awaited post-quantum encryption standards, designed to protect electronic information long into the future -- when quantum computers are expected to break existing cryptographic algorithms. One -- ML-KEM (PDF) (based on CRYSTALS-Kyber) -- is intended for general encryption, which protects data as it moves across public networks. The other two -- ML-DSA (PDF) (originally known as CRYSTALS-Dilithium) and SLH-DSA (PDF) (initially submitted as Sphincs+) -- secure digital signatures, which are used to authenticate online identity. A fourth algorithm -- FN-DSA (PDF) (originally called FALCON) -- is slated for finalization later this year and is also designed for digital signatures.

NIST continued to evaluate two other sets of algorithms that could potentially serve as backup standards in the future. One of the sets includes three algorithms designed for general encryption -- but the technology is based on a different type of math problem than the ML-KEM general-purpose algorithm in today's finalized standards. NIST plans to select one or two of these algorithms by the end of 2024. Despite the new ones on the horizon, NIST mathematician Dustin Moody encouraged system administrators to start transitioning to the new standards ASAP, because full integration takes some time. "There is no need to wait for future standards," Moody advised in a statement. "Go ahead and start using these three. We need to be prepared in case of an attack that defeats the algorithms in these three standards, and we will continue working on backup plans to keep our data safe. But for most applications, these new standards are the main event."

From NIST: This notice announces the Secretary of Commerce's approval of three Federal Information Processing Standards (FIPS):
- FIPS 203, Module-Lattice-Based Key-Encapsulation Mechanism Standard
- FIPS 204, Module-Lattice-Based Digital Signature Standard
- FIPS 205, Stateless Hash-Based Digital Signature Standard

These standards specify key establishment and digital signature schemes that are designed to resist future attacks by quantum computers, which threaten the security of current standards. The three algorithms specified in these standards are each derived from different submissions in the NIST Post-Quantum Cryptography Standardization Project.
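FIPS 203 standardizes a key-encapsulation mechanism (KEM): key generation produces a key pair, encapsulation uses the public key to produce a ciphertext plus a shared secret, and decapsulation recovers the same secret from the ciphertext. The sketch below illustrates only that three-call interface, substituting toy classical Diffie-Hellman arithmetic for ML-KEM's lattice math; it is not post-quantum and not secure:

```python
import hashlib
import secrets

# Toy stand-in for a KEM: classical Diffie-Hellman over a small Mersenne
# prime. NOT post-quantum and NOT secure -- it only shows the
# keygen/encaps/decaps interface that FIPS 203 (ML-KEM) standardizes.
P = 2**127 - 1  # Mersenne prime, far too small for real use
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    pk = pow(G, sk, P)
    return pk, sk

def encaps(pk):
    eph = secrets.randbelow(P - 2) + 1
    ct = pow(G, eph, P)  # "ciphertext" sent to the key-pair holder
    ss = hashlib.sha256(pow(pk, eph, P).to_bytes(16, "big")).digest()
    return ct, ss

def decaps(sk, ct):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(16, "big")).digest()

pk, sk = keygen()
ct, sender_secret = encaps(pk)     # sender derives a shared secret
receiver_secret = decaps(sk, ct)   # receiver derives the same secret
assert sender_secret == receiver_secret
```

In a real deployment the same three calls would go through a library implementing FIPS 203, with the shared secret feeding a symmetric cipher that protects the actual data.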

United States

The Nation's Best Hackers Found Vulnerabilities in Voting Machines - But No Time To Fix Them (politico.com) 189

Hackers at the DEF CON conference in Las Vegas identified vulnerabilities in voting machines slated for use in the 2024 U.S. election, but fixes are unlikely to be implemented before November 5, organizers said. The annual "Voting Village" event, held away from the main conference floor due to security concerns, drew election officials and cybersecurity experts. Organizers plan to release a detailed report on the vulnerabilities found.

Catherine Terranova, an event organizer, said major systemic changes are difficult to make 90 days before an election, particularly given heightened scrutiny of election security in 2024. The process of addressing vulnerabilities involves manufacturer approval, recertification by authorities, and updating individual devices. This typically takes longer than the time remaining before the election, according to Scott Algeier, executive director of the Information Technology-Information Sharing and Analysis Center. The event comes amid ongoing concerns about foreign targeting of U.S. elections, including a recent hack of former President Donald Trump's campaign, reportedly by Iran.
