The Internet

India Proposes Charging OpenAI, Google For Training AI On Copyrighted Content (techcrunch.com) 10

An anonymous reader quotes a report from TechCrunch: On Tuesday, India's Department for Promotion of Industry and Internal Trade released a proposed framework that would give AI companies access to all copyrighted works for training in exchange for paying royalties to a new collecting body composed of rights-holding organizations, with payments then distributed to creators. The proposal argues that this "mandatory blanket license" would lower compliance costs for AI firms while ensuring that writers, musicians, artists, and other rights holders are compensated when their work is scraped to train commercial models. [...]

The eight-member committee, formed by the Indian government in late April, argues the system would avoid years of legal uncertainty while ensuring creators are compensated from the outset. Defending the system, the committee says in a 125-page submission (PDF) that a blanket license "aims to provide an easy access to content for AI developers[,] reduce transaction costs [and] ensure fair compensation for rightsholders," calling it the least burdensome way to manage large-scale AI training. The submission adds that the single collecting body would function as a "single window," eliminating the need for individual negotiations and enabling royalties to flow to both registered and unregistered creators.

United States

More Than 200 Environmental Groups Demand Halt To New US Datacenters (theguardian.com) 123

An anonymous reader quotes a report from the Guardian: A coalition of more than 230 environmental groups has demanded a national moratorium on new datacenters in the U.S., the latest salvo in a growing backlash to a booming artificial intelligence industry that has been blamed for escalating electricity bills and worsening the climate crisis. The green groups, including Greenpeace, Friends of the Earth, Food & Water Watch and dozens of local organizations, have urged members of Congress to halt the proliferation of energy-hungry datacenters, accusing them of causing planet-heating emissions, sucking up vast amounts of water and exacerbating electricity bill increases that have hit Americans this year.

"The rapid, largely unregulated rise of datacenters to fuel the AI and crypto frenzy is disrupting communities across the country and threatening Americans' economic, environmental, climate and water security," the letter states, adding that approval of new data centers should be paused until new regulations are put in place. The push comes amid a growing revolt against moves by companies such as Meta, Google and OpenAI to plow hundreds of billions of dollars into new datacenters, primarily to meet the huge computing demands of AI. At least 16 datacenter projects, worth a combined $64 billion, have been blocked or delayed due to local opposition to rising electricity costs. The facilities' need for huge amounts of water to cool down equipment has also proved controversial, particularly in drier areas where supplies are scarce. [...]

At the current rate of growth, datacenters could add up to 44m tons of carbon dioxide to the atmosphere by 2030, equivalent to putting an extra 10m cars on to the road and exacerbating a climate crisis that is already spurring extreme weather disasters and ripping apart the fabric of the American insurance market. But it is the impact upon power bills, rather than the climate crisis, that is causing anguish for most voters, acknowledged Emily Wurth, managing director of organizing at Food & Water Watch, the group behind the letter to lawmakers.
"I've been amazed by the groundswell of grassroots, bipartisan opposition to this, in all types of communities across the US," said Wurth. "Everyone is affected by this, the opposition has been across the political spectrum. A lot of people don't see the benefits coming from AI and feel they will be paying for it with their energy bills and water."

"It's an important talking point. We've seen outrageous utility price rises across the country and we are going to lean into this. Prices are going up across the board and this is something Americans really do care about."
Power

Idaho Lab Produces World's First Molten Salt Fuel for Nuclear Reactors (energy.gov) 43

America's Energy Department runs a research lab in Idaho — and this week announced successful results from a ground-breaking experiment. "This is the first time in history that chloride-based molten salt fuel has been produced for a fast reactor," says Bill Phillips, the lab's technical lead for salt synthesis. He calls it "a major milestone for American innovation and a clear signal of our national commitment to advanced nuclear energy." Unlike traditional reactors that use solid fuel rods and water as a coolant, most molten salt reactors rely on liquid fuel — a mixture of salts containing fissile material. This design allows for higher operating temperatures, better fuel efficiency, and enhanced safety. It also opens the door to new applications, including compact nuclear systems for ships and remote installations.

"The Molten Chloride Fast Reactor represents a paradigm shift in the nuclear fuel cycle, and the Molten Chloride Reactor Experiment (MCRE) will directly inform the commercialization of that reactor," said Jeff Latkowski, senior vice president of TerraPower and program director for the Molten Chloride Fast Reactor. "Working with world-leading organizations such as INL to successfully synthesize this unique new fuel demonstrates how real progress in Gen IV nuclear is being made together."

"The implications for the maritime industry are significant," said Don Wood, senior technical advisor for MCRE. "Molten salt reactors could provide ships with highly efficient, low-maintenance nuclear power, reducing emissions and enabling long-range, uninterrupted travel. The technology could spark the rise of a new nuclear sector — one that is mobile, scalable and globally transformative."

More details from America's Energy Department: MCRE will require a total of 72 to 75 batches of fuel salt to go critical, making it the largest fuel production effort at INL since the operations of Experimental Breeder Reactor-II more than 30 years ago. The full-scale demonstration of the new fuel salt synthesis line for MCRE was made possible by a breakthrough in 2024. After years of testing, the team found the right recipe to convert 95 percent of uranium metal feedstock into 18 kilograms of uranium chloride fuel salt in only a few hours — a process that previously took more than a week to complete...

After delivering the first batch of fuel salt this fall, the team anticipates delivering four additional batches by March of 2026. MCRE is anticipated to run for approximately six months in 2028 in INL's Laboratory for Operation and Testing (LOTUS) test bed in the United States.
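As a rough sanity check on the scale of the effort, the article's figures (72 to 75 batches to go critical, with an 18-kilogram first batch) imply on the order of 1.3 metric tons of fuel salt — assuming, which the article doesn't state, that every batch matches the first batch's size:

```python
# Back-of-envelope arithmetic from the figures in the article.
# Assumption (not stated in the article): every batch is the same
# 18 kg as the first reported batch.
BATCH_KG = 18
BATCHES_MIN, BATCHES_MAX = 72, 75

low = BATCHES_MIN * BATCH_KG   # 1296 kg
high = BATCHES_MAX * BATCH_KG  # 1350 kg
print(f"~{low}-{high} kg (~{low/1000:.2f}-{high/1000:.2f} t) of fuel salt to go critical")
```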

"With the first batch of fuel salt successfully created at INL, researchers will now conduct testing to better understand the physics of the process, with a goal of moving the process to a commercial scale over the next decade," says Cowboy State Daily.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Power

No Rise in Radiation Levels at Chernobyl, Despite Damage from February's Drone Strike (nytimes.com) 145

UPDATE (12/7): The New York Times clarifies today that the damage at Chernobyl hasn't led to a rise in radiation levels: "If there was to be some event inside the shelter that would release radioactive materials into the space inside the New Safe Confinement, because this facility is no longer sealed to the outside environment, there's the potential for radiation to come out," said Shaun Burnie, a senior nuclear specialist at Greenpeace who has monitored nuclear power plants in Ukraine since 2022 and last visited Chernobyl on October 31. "I have to say I don't think that's a particularly serious issue at the moment, because they're not actively decommissioning the actual sarcophagus."

The I.A.E.A. also said there was no permanent damage to the shield's load-bearing structures or monitoring systems. A spokesman for the agency, Fredrik Dahl, said in a text message on Sunday that radiation levels were similar to what they were before the drone hit.

But "A structure designed to prevent radioactive leakage at the defunct Chernobyl nuclear plant in Ukraine is no longer operational," Politico reported Saturday, "after Russian drones targeted it earlier this year, the U.N.'s nuclear watchdog has found." [T]he large steel structure "lost its primary safety functions, including the confinement capability" when its outer cladding was set ablaze after being struck by Russian drones, according to a new report by the International Atomic Energy Agency. Beyond that, there was "no permanent damage to its load-bearing structures or monitoring systems," it said. "Limited temporary repairs have been carried out on the roof, but timely and comprehensive restoration remains essential to prevent further degradation and ensure long-term nuclear safety," IAEA Director General Rafael Mariano Grossi said in a statement.
The Guardian has pictures of the protective shield — including the damage from the drone strike. The shield is the world's largest movable land structure, reports CNN: The IAEA, which has a permanent presence at the site, will "continue to do everything it can to support efforts to fully restore nuclear safety and security," Grossi said.... Construction began in 2010 and was completed in 2019; the shield was designed to last 100 years and has played a crucial role in securing the site.

The project cost €2.1 billion and was funded by contributions from more than 45 donor countries and organizations through the Chernobyl Shelter Fund, according to the European Bank for Reconstruction and Development, which in 2019 hailed the venture as "the largest international collaboration ever in the field of nuclear safety."

China

Chinese-Linked Hackers Use Backdoor For Potential 'Sabotage,' US and Canada Say (reuters.com) 10

U.S. and Canadian cybersecurity agencies say Chinese-linked actors deployed "Brickstorm" malware to infiltrate critical infrastructure and maintain long-term access for potential sabotage. Reuters reports: The Chinese-linked hacking operations are the latest example of Chinese hackers targeting critical infrastructure, infiltrating sensitive networks and "embedding themselves to enable long-term access, disruption, and potential sabotage," Madhu Gottumukkala, the acting director of the Cybersecurity and Infrastructure Security Agency, said in an advisory signed by CISA, the National Security Agency and the Canadian Centre for Cyber Security. According to the advisory, which was published alongside a more detailed malware analysis report (PDF), the state-backed hackers are using malware known as "Brickstorm" to target multiple government services and information technology entities. Once inside victim networks, the hackers can steal login credentials and other sensitive information and potentially take full control of targeted computers.

In one case, the attackers used Brickstorm to penetrate a company in April 2024 and maintained access through at least September 3, 2025, according to the advisory. CISA Executive Assistant Director for Cybersecurity Nick Andersen declined to share details about the total number of government organizations targeted or specifics around what the hackers did once they penetrated their targets during a call with reporters on Thursday. The advisory and malware analysis reports are based on eight Brickstorm samples obtained from targeted organizations, according to CISA. The hackers are deploying the malware against VMware vSphere, a product sold by Broadcom's VMware to create and manage virtual machines within networks. [...] In addition to traditional espionage, the hackers in those cases likely also used the operations to develop new, previously unknown vulnerabilities and establish pivot points to broader access to more victims, Google said at the time.

Microsoft

Windows 11 Growth Slows As Millions Stick With Windows 10 (theregister.com) 116

Despite Windows 10 losing free support, Statcounter shows Windows 11 holding only a modest lead of 53.7% market share compared to Windows 10's 42.7%. Analysts say the slow transition reflects both hardware limitations and a lack of must-have Windows 11 features compelling organizations to refresh their fleets. The Register reports: The Register spoke to Lansweeper principal technical evangelist Esben Dochy, who noted that consumers were more likely to have devices that couldn't be upgraded or follow the "if it ain't broke, don't fix it" rule when it comes to change. He also pointed out consumers in the EU get Microsoft Extended Security Updates (ESU) for free.

For businesses, though, it's different. Dochy told us: "The primary blocker is slow change management processes. These can be slow due to bad planning, lack of resources, difficulty in execution (in highly distributed organizations) etc. "The ESU are used to be secure while those change management processes take place, but organizations will have to pay to get those ESU making it more expensive for unprepared or inefficient organizations." [...]

The challenge facing Windows 11 is that, other than the end of free support for many versions, there is no must-have feature to make enterprises break a hardware refresh cycle, particularly in a difficult economic environment. Microsoft has not released official statistics on Windows 11 adoption. However, hardware vendors have noted the sluggish pace of transition. Dell COO Jeffrey Clarke commented during an analyst call: "If you were to look at it relative to the previous OS end of support, we are 10-12 points behind at that point with Windows 11 than we were with the previous generation."

Cloud

Amazon and Google Announce Resilient 'Multicloud' Networking Service Plus an Open API for Interoperability (reuters.com) 21

Their announcement calls it "more than a multicloud solution," saying it's "a step toward a more open cloud environment. The API specifications developed for this product are open for other providers and partners to adopt, as we aim to simplify global connectivity for everyone."

Amazon and Google are introducing "a jointly developed multicloud networking service," reports Reuters. "The initiative will enable customers to establish private, high-speed links between the two companies' computing platforms in minutes instead of weeks." The new service is being unveiled a little over a month after an Amazon Web Services outage on October 20 disrupted thousands of websites worldwide, knocking offline some of the internet's most popular apps, including Snapchat and Reddit. That outage will cost U.S. companies between $500 million and $650 million in losses, according to analytics firm Parametrix.
Google and Amazon are promising "high resiliency" through "quad-redundancy across physically redundant interconnect facilities and routers," with both Amazon and Google continuously watching for issues. They're also using MACsec encryption between the Google Cloud and AWS edge routers, according to Sunday's announcement: As organizations increasingly adopt multicloud architectures, the need for interoperability between cloud service providers has never been greater. Historically, however, connecting these environments has been a challenge, forcing customers to take a complex "do-it-yourself" approach to managing global multi-layered networks at scale.... Previously, to connect cloud service providers, customers had to manually set up complex networking components including physical connections and equipment; this approach required lengthy lead times and coordinating with multiple internal and external teams. This could take weeks or even months. AWS had a vision for developing this capability as a unified specification that could be adopted by any cloud service provider, and collaborated with Google Cloud to bring it to market.

Now, this new solution reimagines multicloud connectivity by moving away from physical infrastructure management toward a managed, cloud-native experience.
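The "quad-redundancy" resiliency claim rests on standard redundancy math: if link failures were fully independent (a simplifying assumption — real interconnects can share fate through power, fiber routes, or software), each additional physically redundant path multiplies down the outage probability. A minimal sketch:

```python
# Availability of N redundant links, assuming independent failures.
# Illustrative only; not a model of the actual AWS/Google service.
def availability(single_link: float, links: int) -> float:
    """Probability that at least one of `links` independent paths is up."""
    return 1 - (1 - single_link) ** links

# With a hypothetical 99% per-link availability:
for n in (1, 2, 4):
    print(f"{n} link(s): {availability(0.99, n):.8f}")
```

With four independent 99%-available paths, the combined figure is 1 − 0.01⁴, i.e. "eight nines" — which is why providers invest in physical diversity rather than single faster links.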

Reuters points out that Salesforce "is among the early users of the new approach, Google Cloud said in a statement."
Music

Viral Song Created with Suno's genAI Removed From Streaming Platforms, Re-Released With Human Vocals (yahoo.com) 27

The British group Haven ran into trouble in October after sharing clips of its upcoming EDM song "I Run" on TikTok.

The song "was an overnight viral sensation online," writes Digital Music News — racking up millions of plays "even before it hit streaming services." (Although the Washington Post notes that "Record labels and TikTok users began questioning whether 'I Run' used an AI deepfake, modeled off British R&B singer Jorja Smith, for the vocals.")

Digital Music News picks up the story: The artist says he used his own voice to record the vocals, and then ran it through layers of processing and filtering to turn it into the female-sounding voice heard in the track. However, that filtering also included the use of the controversial genAI platform Suno — and that's what complicates things... [The article says later that Suno "is currently in the middle of a blockbuster lawsuit with the Big Three major labels over allegations of widespread copyright infringement of sound recordings used during the AI model training process."]

Meanwhile, the song was rapidly amassing listenership. It soared to #11 on the U.S. Spotify chart and #25 on Spotify globally. Videos using the song continued going viral on TikTok and Instagram, including one in which rapper Offset had apparently played the song during a Boiler Room set, which later turned out to be falsified. And then, as quickly as it appeared, "I Run" was taken down from streaming services, including Spotify and Apple Music. That was due, in part, to numerous takedown notices from The Orchard, the label to which Jorja Smith is signed, as well as the RIAA and IFPI. The takedown notices alleged various issues with the track, including the "misrepresentation" of another artist, as well as copyright infringement.

As a result, the song has also been withheld from the Billboard charts, including the Hot 100, on which it had been predicted to debut this week before the controversy. Billboard points out that it "reserves the right to withhold or remove titles from appearing on the charts that are known to be involved in active legal disputes related to copyright infringement that may extend to the deletion of such content on digital service providers."

The song itself has now been re-released with an all-human vocal track. But going forward, will the music industry ever work with AI platforms? The Washington Post reports: "I Run" has taken off as record labels remain unsure of the extent to which they should welcome generative AI programs such as Suno or Udio into the industry. After the two AI music companies began growing in popularity, the three major labels — Sony Music, Warner Music Group and Universal Music Group — filed lawsuits against Suno and Udio, claiming that the AI companies have used the labels' sound recordings to train their model.

Since then, UMG and Warner have reached agreements to work with Udio, ending their litigation... It comes shortly after all three major labels licensed their catalogue to Klay, a music streaming start-up that allows users to adjust songs using artificial intelligence. Major licensing organizations such as ASCAP and BMI shared that they would register songs that were partially AI-generated — but not fully generated ones.

Haven appears to present an uncomfortable edge case. While some AI-generated songs that sound broadly like other artists have been allowed to remain on streaming platforms, the voice in "I Run" appears to have been deemed too duplicative for comfort.

Encryption

Cryptologist DJB Criticizes Push to Finalize Non-Hybrid Security for Post-Quantum Cryptography (cr.yp.to) 21

In October cryptologist/CS professor Daniel J. Bernstein alleged that America's National Security Agency (and its UK counterpart GCHQ) were attempting to influence NIST to adopt weaker post-quantum cryptography standards without a "hybrid" approach that would've also included pre-quantum ECC.

Bernstein is of the opinion that "Given how many post-quantum proposals have been broken and the continuing flood of side-channel attacks, any competent engineering evaluation will conclude that the best way to deploy post-quantum [PQ] encryption for TLS, and for the Internet more broadly, is as double encryption: post-quantum cryptography on top of ECC." But he says he's seen it playing out differently: By 2013, NSA had a quarter-billion-dollar-a-year budget to "covertly influence and/or overtly leverage" systems to "make the systems in question exploitable"; in particular, to "influence policies, standards and specification for commercial public key technologies". NSA is quietly using stronger cryptography for the data it cares about, but meanwhile is spending money to promote a market for weakened cryptography, the same way that it successfully created decades of security failures by building up the market for, e.g., 40-bit RC4 and 512-bit RSA and Dual EC. I looked concretely at what was happening in IETF's TLS working group, compared to the consensus requirements for standards-development organizations. I reviewed how a call for "adoption" of an NSA-driven specification produced a variety of objections that weren't handled properly. ("Adoption" is a preliminary step before IETF standardization....) On 5 November 2025, the chairs issued "last call" for objections to publication of the document. The deadline for input is "2025-11-26", this coming Wednesday.
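Bernstein's "double encryption" recommendation boils down to key combination: derive the session key from both an ECC shared secret and a post-quantum shared secret, so an attacker must break both schemes. A minimal sketch with placeholder secrets (real deployments would use X25519 plus an ML-KEM encapsulation and a proper KDF such as HKDF; the names below are illustrative):

```python
import hashlib
import hmac
import os

def hybrid_key(ecc_secret: bytes, pq_secret: bytes,
               info: bytes = b"tls-hybrid-sketch") -> bytes:
    """Combine two shared secrets so that compromising only one
    scheme leaves the derived session key unpredictable."""
    return hmac.new(info, ecc_secret + pq_secret, hashlib.sha256).digest()

# Placeholders standing in for X25519 and ML-KEM outputs.
ecc_ss = os.urandom(32)
pq_ss = os.urandom(32)
session_key = hybrid_key(ecc_ss, pq_ss)

# Either half changing changes the key: breaking one scheme is not enough.
assert session_key != hybrid_key(os.urandom(32), pq_ss)
assert session_key != hybrid_key(ecc_ss, os.urandom(32))
```

The point of contention in the IETF draft is exactly this: whether TLS should standardize post-quantum key exchange alone, or only in combination with a classical scheme as above.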
Bernstein also argues that the document is "out of scope" for the IETF TLS working group: This document doesn't serve any of the official goals in the TLS working group charter. Most importantly, this document is directly contrary to the "improve security" goal, so it would violate the charter even if it contributed to another goal... Half of the PQ proposals submitted to NIST in 2017 have been broken already... often with attacks having sufficiently low cost to demonstrate on readily available computer equipment. Further PQ software has been broken by implementation issues such as side-channel attacks.
He's also concerned about how that discussion is being handled: On 17 October 2025, they posted a "Notice of Moderation for Postings by D. J. Bernstein" saying that they would "moderate the postings of D. J. Bernstein for 30 days due to disruptive behavior effective immediately" and specifically that my postings "will be held for moderation and after confirmation by the TLS Chairs of being on topic and not disruptive, will be released to the list"...

I didn't send anything to the IETF TLS mailing list for 30 days after that. Yesterday [November 22nd] I finished writing up my new objection and sent that in. And, gee, after more than 24 hours it still hasn't appeared... Presumably the chairs "forgot" to flip the censorship button off after 30 days.

Thanks to alanw (Slashdot reader #1,822) for spotting the blog posts.
Programming

Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI (thenewstack.io) 18

"Security, development, and AI now move as one," says Microsoft's director of cloud/AI security product marketing.

Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack: The integration, announced this week in San Francisco at the Microsoft Ignite 2025 conference and now available in public preview, connects runtime intelligence from production environments directly into developer workflows. The goal is to help organizations prioritize which vulnerabilities actually matter and use AI to fix them faster. "Throughout my career, I've seen vulnerability trends going up and to the right. It didn't matter how good of a detection engine and how accurate our detection engine was, people just couldn't fix things fast enough," said Marcelo Oliveira, VP of product management at GitHub, who has spent nearly a decade in application security. "That basically resulted in decades of accumulation of security debt into enterprise code bases." According to industry data, critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days, said Andrew Flick, senior director of developer services, languages and tools at Microsoft, in a blog post. Meanwhile, applications face attacks as frequently as once every three minutes, Oliveira said.

The integration represents the first native link between runtime intelligence and developer workflows, said Elif Algedik, director of product marketing for cloud and AI security at Microsoft, in a blog post... The problem, according to Flick, comes down to three challenges: security teams drowning in alert fatigue while AI rapidly introduces new threat vectors that they have little time to understand; developers lacking clear prioritization while remediation takes too long; and both teams relying on separate, nonintegrated tools that make collaboration slow and frustrating... The new integration works bidirectionally. When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping, Flick said...

In the past, this alert would age in a dashboard while developers worked on unrelated fixes because they didn't know this was the critical one, he said. Now, a security campaign can be created in GitHub, filtering for runtime risk like internet exposure or sensitive data, notifying the developer to prioritize this issue.
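The prioritization idea the article describes is straightforward to sketch: rank findings by runtime exposure first and raw severity second, so an internet-facing flaw handling sensitive data jumps ahead of a higher-scored but unexposed one. The fields and names below are hypothetical, not GitHub's actual API:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    severity: float        # CVSS-like score, 0-10
    internet_facing: bool  # runtime context from production
    sensitive_data: bool

def runtime_rank(f: Finding) -> tuple:
    # Runtime exposure outranks raw severity score.
    return (f.internet_facing, f.sensitive_data, f.severity)

backlog = [
    Finding("CVE-2025-0001", 9.8, internet_facing=False, sensitive_data=False),
    Finding("CVE-2025-0002", 6.5, internet_facing=True, sensitive_data=True),
    Finding("CVE-2025-0003", 8.1, internet_facing=True, sensitive_data=False),
]
ranked = sorted(backlog, key=runtime_rank, reverse=True)
print([f.cve for f in ranked])
# The internet-facing, data-handling finding leads despite its lower score.
```

This mirrors the "security campaign" filter described above: runtime signals decide what developers see first, instead of severity scores aging in a dashboard.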

GitHub Copilot "now automatically checks dependencies, scans for first-party code vulnerabilities and catches hardcoded secrets before code reaches developers," the article points out — but GitHub's VP of product management says this takes things even further.

"We're not only helping you fix existing vulnerabilities, we're also reducing the number of vulnerabilities that come into the system when the level of throughput of new code being created is increasing dramatically with all these agentic coding agent platforms."
AI

Advocacy Groups Urge Parents To Avoid AI Toys This Holiday Season 32

An anonymous reader quotes a report from the Associated Press: They're cute, even cuddly, and promise learning and companionship -- but artificial intelligence toys are not safe for kids, according to children's and consumer advocacy groups urging parents not to buy them during the holiday season. These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI's ChatGPT, according to an advisory published Thursday by the children's advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.

"The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm," Fairplay said. AI toys, made by companies including Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but disrupt children's relationships and resilience, the group said. "What's different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters," said Rachel Franz, director of Fairplay's Young Children Thrive Offline Program. Because of this, she added, the trust young children are placing in these toys can exacerbate the types of harms older children are already experiencing with AI chatbots.

A separate report Thursday by Common Sense Media and psychiatrists at Stanford University's medical school warned teenagers against using popular AI chatbots as therapists. Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for years. They just weren't as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel's talking Hello Barbie doll that it said was recording and analyzing children's conversations. This time, though AI toys are mostly sold online and more popular in Asia than elsewhere, Franz said some have started to appear on store shelves in the U.S. and more could be on the way. "Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products," Franz said.
Last week, consumer advocates at U.S. PIRG called out the trend of buying AI toys in its annual "Trouble in Toyland" report. This year, the organization tested four toys that use AI chatbots. "We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls," the report said.
AI

In the AI Race, Chinese Talent Still Drives American Research (nytimes.com) 43

An anonymous reader quotes a report from the New York Times: When Mark Zuckerberg, Meta's chief executive, unveiled the company's Superintelligence Lab in June, he named 11 artificial intelligence researchers who were joining his ambitious effort to build a machine more powerful than the human brain. All 11 were immigrants educated in other countries. Seven were born in China, according to a memo viewed by The New York Times. Although many American executives, government officials and pundits have spent months painting China as the enemy of America's rapid push into A.I., much of the groundbreaking research emerging from the United States is driven by Chinese talent.

Two new studies show that researchers born and educated in China have for years played major roles inside leading U.S. artificial intelligence labs. They also continue to drive important A.I. research in industry and academia, despite the Trump administration's crackdown on immigration and growing anti-China sentiment in Silicon Valley. The research, from two organizations, provides a detailed look at how much the American tech industry continues to rely on engineers from China, particularly in A.I. The findings also offer a more nuanced understanding of how researchers in the two countries continue to collaborate, despite increasingly heated language from Washington and Beijing.

AI

Chinese University Collected More AI Patents Than MIT, Stanford, Princeton and Harvard Combined (bloomberg.com) 33

Tsinghua University collected 4,986 AI and machine learning patents between 2005 and the end of 2024. The Beijing institution received more than 900 patents last year alone. The total exceeds the combined patent count from MIT, Stanford, Princeton and Harvard during the same period. China now accounts for more than half of all active patent families globally in AI and machine learning fields, according to data analytics service LexisNexis.

The university also has more AI research papers among the 100 most cited than any other school at last count. The US still holds the most influential AI patents and the top performing models. Harvard and MIT consistently rank ahead of Tsinghua in patent influence. American institutions produced 40 notable AI models in 2024 compared to 15 from Chinese organizations, according to Stanford's AI Index Report. China's share of the world's elite AI researchers -- the top 2% -- rose from 10% in 2019 to 26% in 2022. The US share fell from 35% to 28% during the same period, according to the Information Technology & Innovation Foundation.
The Internet

Cloudflare Outage Knocks Many Popular Websites Offline 56

An outage at Cloudflare that began moments ago has knocked many popular websites offline, including ChatGPT and X, according to user reports. Cloudflare says on its website: "Cloudflare is aware of, and investigating an issue which potentially impacts multiple customers. Further detail will be provided as more information becomes available."

Update: In a statement after the outage was resolved, Cloudflare's CTO said: Earlier today we failed our customers and the broader Internet when a problem in Cloudflare's network impacted large amounts of traffic that rely on us. The sites, businesses, and organizations that rely on Cloudflare depend on us being available, and I apologize for the impact that we caused.

Transparency about what happened matters, and we plan to share a breakdown with more details in a few hours. In short, a latent bug in a service underpinning our bot mitigation capability started to crash after a routine configuration change we made. That cascaded into a broad degradation to our network and other services. This was not an attack.

That issue, the impact it caused, and the time to resolution are unacceptable. Work is already underway to make sure it does not happen again, but I know it caused real pain today. The trust our customers place in us is what we value the most and we are going to do what it takes to earn that back.
Businesses

Krafton Launches Voluntary Resignation Program Weeks After Declaring 'AI-First Company' Future (pcgamer.com) 24

An anonymous reader shares a report: In October, PUBG and Subnautica 2 publisher Krafton announced that it would be undergoing a "complete reorganization" to become an "AI-first" company, planning to invest over 130 billion won ($88 million) in agentic AI infrastructure and deployment beginning in 2026. This week, as it boasts record-breaking quarterly profits, the Korean publisher has followed that strategic shift by launching a voluntary resignation program for its domestic employees, according to reporting from Business Korea.

The program, announced internally, offers substantial buyouts for domestic Krafton employees based on their length of employment at the publisher. Severance packages range from 6 months' salary for employees with one year or less of service to 36 months' salary for employees who've worked at Krafton for over 11 years. The voluntary resignation program follows a November 4 earnings call in which Krafton announced a record quarterly profit of $717 million. During the call, Krafton CFO Bae Dong-geun indicated that Krafton had also halted hiring for new positions, telling investors that "excluding organizations developing original intellectual property and AI-related personnel, we have frozen hiring company-wide."

Communications

Amazon Renames 'Project Kuiper' Satellite Internet Venture To 'Leo' (geekwire.com) 36

Amazon announced that its satellite broadband project called Project Kuiper will now be known as Amazon Leo. GeekWire reports: Leo is a nod to "low Earth orbit," where Amazon has so far launched more than 150 satellites as part of a constellation that will eventually include more than 3,200. In a blog post, Amazon said the 7-year-old Project Kuiper began "with a handful of engineers and a few designs on paper" and like most early Amazon projects "the program needed a code name." The team was inspired by the Kuiper Belt, a ring of icy bodies in the outer solar system.

A new website for Amazon Leo proclaims "a new era of internet is coming," as Amazon says its satellites can help serve "billions of people on the planet who lack high-speed internet access, and millions of businesses, governments, and other organizations operating in places without reliable connectivity." Amazon said it will begin rolling out service once it's added more coverage and capacity to the network. Details about pricing and availability haven't been announced.

Google

Google Relaunches Cameyo To Entice Businesses From Windows To ChromeOS (theverge.com) 27

After acquiring software virtualization company Cameyo last year, Google has relaunched a version of the service that makes it easier for Windows-based organizations to migrate over to ChromeOS. From a report: Now called "Cameyo by Google," the Virtual App Delivery (VAD) solution allows users to run legacy Windows apps in the Chrome browser or as web apps, preventing organizations from being tied to Microsoft's operating system. Google says the new Cameyo experience is more efficient than switching between separate virtual desktop environments, allowing users to stream the specific apps they need instead of virtualizing the entire desktop. That allows Windows-based programs like Excel and AutoCAD to run side-by-side with Chrome and other web apps, giving businesses the flexibility to use a mix of Microsoft and Google services.
China

China's New Scientist Visa is a 'Serious Bid' For the World's Top Talent (nature.com) 70

China has introduced a visa that will allow young foreign researchers in science, technology, engineering and mathematics to move there without having to secure a job first. From a report: Before the introduction of the K visa, most foreign STEM researchers hoping to move to China had to find a job in advance and then apply for a work visa. The Chinese government is making "a serious bid" to attract the world's brightest minds in STEM, says Jeremy Neufeld, the director of immigration policy at the Institute for Progress, a think tank in Washington DC. South Korea, Singapore and several other countries have also launched STEM-oriented visa programmes.

The K visa was officially rolled out on 1 October, but Nature understands that applications are yet to open. Few details about eligibility have been released, except that restrictions will apply on the basis of an applicant's age, education and work experience. Foreign researchers who have graduated from 'famous' universities or institutes in China or abroad with a bachelor-or-higher degree in STEM will be eligible to apply. That also includes people who teach or research STEM topics in such organizations.

Python

Python Foundation Donations Surge After Rejecting Grant - But Sponsorships Still Needed (blogspot.com) 64

After the Python Software Foundation rejected a $1.5 million grant because it restricted DEI activity, "a flood of new donations followed," according to a new report. By Friday they'd raised over $157,000, including 295 new Supporting Members paying an annual $99 membership fee, says PSF executive director Deb Nicholson.

"It doesn't quite bridge the gap of $1.5 million, but it's incredibly impactful for us, both financially and in terms of feeling this strong groundswell of support from the community." Could that same security project still happen if new funding materializes? The PSF hasn't entirely given up. "The PSF is always looking for new opportunities to fund work benefiting the Python community," Nicholson told me in an email last week, adding pointedly that "we have received some helpful suggestions in response to our announcement that we will be pursuing." And even as things stand, the PSF sees itself as "always developing or implementing the latest technologies for protecting PyPI project maintainers and users from current threats," and it plans to continue with that commitment.
The Python Software Foundation was "astounded and deeply appreciative at the outpouring of solidarity in both words and actions," their executive director wrote in a new blog post this week, saying the show of support "reminds us of the community's strength."

But that post also acknowledges the reality that the Python Software Foundation's yearly revenue and assets (including contributions from major donors) "have declined, and costs have increased... Historically, PyCon US has been a source of revenue for the PSF, enabling us to fund programs like our currently paused Grants Program... Unfortunately, PyCon US has run at a loss for three years — and not from a lack of effort from our staff and volunteers! Everyone has been working very hard to find areas where we can trim costs, but even with those efforts, inflation continues to surge, and changing U.S. and economic conditions have reduced our attendance... Because we have so few expense categories (the vast majority of our spending goes to running PyCon US, the Grants Program, and our small 13-member staff), we have limited 'levers to pull' when it comes to budgeting and long-term sustainability..."
While Python usage continues to surge, "corporate investment back into the language and the community has declined overall. The PSF has longstanding sponsors and partners that we are ever grateful for, but signing on new corporate sponsors has slowed." (They're asking employees at Python-using companies to encourage sponsorships.) "We have been seeking out alternate revenue channels to diversify our income, with some success and some challenges. PyPI Organizations offers paid features to companies (PyPI features are always free to community groups) and has begun bringing in monthly income. We've also been seeking out grant opportunities where we find good fits with our mission.... We currently have more than six months of runway (as opposed to our preferred 12 months+ of runway), so the PSF is not at immediate risk of having to make more dramatic changes, but we are on track to face difficult decisions if the situation doesn't shift in the next year."

Based on all of this, the PSF has been making changes and working on multiple fronts to combat losses and work to ensure financial sustainability, in order to continue protecting and serving the community in the long term. Some of these changes and efforts include:

— Pursuing new sponsors, specifically in the AI industry and the security sector
— Increasing sponsorship package pricing to match inflation
— Making adjustments to reduce PyCon US expenses
— Pursuing funding opportunities in the US and Europe
— Working with other organizations to raise awareness
— Strategic planning, to ensure we are maximizing our impact for the community while cultivating mission-aligned revenue channels

The PSF's end-of-year fundraiser effort is usually run by staff based on their capacity, but this year we have assembled a fundraising team that includes Board members to put some more "oomph" behind the campaign. We'll be doing our regular fundraising activities; we'll also be creating a unique webpage, piloting temporary and VERY visible pop-ups to python.org and PyPI.org, and telling more stories from our Grants Program recipients...

Keep your eyes on the PSF Blog, the PSF category on Discuss, and our social media accounts for updates and information as we kick off the fundraiser this month. Your boosts of our posts and your personal shares of "why I support the PSF" stories will make all the difference in our end-of-year fundraiser. If this post has you all fired up to personally support the future of Python and the PSF right now, we always welcome new PSF Supporting Members and donations.

AI

Neurodiverse Professionals 25% More Satisfied With AI Tools and Agents (cnbc.com) 30

An anonymous reader shared this report from CNBC: Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with conditions like ADHD, autism, dyslexia and more report a more level playing field in the workplace thanks to generative AI. A recent study from the UK's Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants and were more likely to recommend the tool than neurotypical respondents. [The study involved 1,000 users of Microsoft 365 Copilot from October through December of 2024.]

"Standing up and walking around during a meeting means that I'm not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes," said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement). "I've white-knuckled my way through the business world," DeZao said. "But these tools help so much...."

Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who've previously had to find ways to fit in among a work culture not built with them in mind. Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing inclusivity in this space generate nearly one-fifth higher revenue. "Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do," said Kristi Boyd, an AI specialist with the SAS data ethics practice. "It's a smart way to make good on your organization's AI investments."
