Encryption

Intel Demos Chip To Compute With Encrypted Data (ieee.org) 37

An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU.

Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips—a configuration usually seen only in GPUs for training AI.

In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly. The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
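The voter-lookup demo is a private-match protocol: the server computes on ciphertexts it cannot read. Full FHE is much heavier machinery, but the core idea can be sketched with the far simpler additively homomorphic Paillier scheme. This is a toy illustration with tiny fixed primes, not Intel's construction and nowhere near production grade:

```python
import math
import random

# Toy Paillier keypair with small fixed primes (illustration only; real
# deployments use 2048-bit-plus moduli and vetted libraries).
p, q = 1000003, 1000033
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)           # private key

def encrypt(m):
    """Enc(m) = (1 + m*n) * r^n mod n^2, using generator g = n + 1."""
    r = random.randrange(2, n)
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n) * pow(lam, -1, n) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
assert decrypt(encrypt(5) * encrypt(7) % n2) == 12

def server_match(enc_query, record):
    """Server side: returns an encryption of blind*(query - record)
    without ever decrypting the query. Decrypts to 0 iff it's a match;
    on a mismatch the random blind hides how far off the guess was."""
    enc_diff = enc_query * pow(encrypt(record), -1, n2) % n2
    blind = random.randrange(2, n)
    return pow(enc_diff, blind, n2)

voter_id = 424242
print(decrypt(server_match(encrypt(voter_id), 424242)) == 0)   # True
print(decrypt(server_match(encrypt(voter_id), 777777)) == 0)   # False
```

Paillier only supports additions on ciphertexts; FHE also supports multiplications, which is what makes arbitrary computation (and the article's performance gap) possible. The article's scaling arithmetic checks out, incidentally: 100 million lookups at 15 ms each is about 17.4 days of CPU time, versus roughly 23 minutes at 14 microseconds each.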

The Almighty Buck

Prediction Market 'Kalshi' Sued for Not Paying $54 Million for Bets on Khamenei's Death (reuters.com) 44

An anonymous reader shared this report from the Independent: A popular predictions market app will not pay out the $54 million some of its users believed they were owed after correctly forecasting the death of Ayatollah Ali Khamenei, according to a report.

Kalshi, which allows players to gamble on real-world events, offered customers favorable odds on Khamenei, 86, being "out as Supreme Leader" in response to the announcement of joint U.S.-Israeli airstrikes on Tehran in the early hours of Saturday morning. The company promoted the trade on its homepage and app and tweeted [last] Saturday: "BREAKING: The odds Ali Khamenei is out as Supreme Leader have surged to 68 percent." It continued: "Reminder: Kalshi does not offer markets that settle on death. If Ali Khamenei dies, the market will resolve based on the last traded price prior to confirmed reporting of death." Khamenei was later confirmed dead in the airstrikes and the company clarified in a follow-up post: "Please note: A prior version of this clarification was grammatically ambiguous. As a customer service measure, Kalshi will reimburse lost value due to trades made between these clarifications...."

While the company has offered to reimburse any bets, fees or losses from the trade placed prior to its clarification message, it has nevertheless attracted a firestorm of complaints on social media.

A Kalshi spokesperson told Reuters they'd reimbursed "net losses" out of pocket "to the tune of millions of dollars". But a class action lawsuit was filed Thursday saying Kalshi had failed to pay $54 million: Kalshi did not invoke a "death carveout" provision until after the Iranian leader was killed to avoid paying customers in Kalshi's "Khamenei Market" what they were owed, the lawsuit said... The language specifying that Khamenei's departure could be due to any cause, including death, was "clear, unambiguous and binary," the lawsuit said, describing Kalshi's actions as "deceptive" and "predatory."
"In a notice filed Monday, the company proposed standardizing the terms of all its markets that implicitly depend on a person surviving..." reports Business Insider. "The update comes after Kalshi paid $2.2 million to resolve complaints from users who were confused by the way it divided the $55 million wagered on Iran's Supreme Leader Ali Khamenei's ouster after his targeted killing by Israel and the US."

Their article cites a DePaul University law professor who says "There's now sort of this nascent, but bipartisan movement against prediction markets. I think Kalshi's feeling the heat." For example, U.S. Senator Chris Murphy told the Washington Post, "People shouldn't be rooting for people to die because they placed a bet."
Chrome

Google Chrome Is Switching To a Two-Week Release Cycle (9to5google.com) 31

Google is accelerating Chrome's major release cadence from four weeks to two starting with version 153 on September 8th. "...our goal is to ensure developers and users have immediate access to the latest performance improvements, fixes and new capabilities," says Google. "Building on our history of adapting our release process to match the demands of a modern web, Chrome is moving to a two-week release cycle." The company says the "smaller scope" of these releases "minimizes disruption and simplifies post-release debugging." They also cite "recent process enhancements" that will "maintain [Chrome's] high standards for stability." 9to5Google reports: There will still be weekly security updates between milestones. This applies to desktop, Android, and iOS, while there are "no changes to the Dev and the Canary channels": "A Chrome Beta for each version will ship three weeks before the stable release. We recommend developers test with the beta to keep up to date with any upcoming changes that might impact your sites and applications."

The eight-week Extended Stable release schedule for enterprise customers and Chromium embedders will not change. Chromebooks will also have "extended release options": "Our priority is a seamless experience, so the latest Chrome releases will roll out to Chromebooks after dedicated platform testing. We are adapting these channels for the new two-week browser cycle and we will share more details soon regarding milestone updates for managed devices."

Software

LibreOffice Says Its UI Is Way Better Than Microsoft Office's (neowin.net) 235

darwinmac writes: While many users choose Microsoft Office over LibreOffice because of its support for the proprietary formats (.docx, .xlsx, and .pptx), others prefer Office for its "better" ribbon interface. These users often criticize LibreOffice for having a "clunky" UI instead of the "standard" ribbon interface you would find in Word, Excel, and other Office apps.

Now, Neowin reports that LibreOffice is fighting back, arguing that its UI is actually superior because it is customizable, with several modes such as the classic toolbar interface, an Office-inspired ribbon layout, a sidebar-focused design, and more. Furthermore, it argues that there is no evidence that the ribbon offers "superior usability" over other interface modes.
LibreOffice says in a blog post: Incidentally, the characterization of ribbon-style interfaces as "modern" or "standard," used by several users, is not based on any objective usability parameter or design principle, but is the result of Microsoft's dominance in the market and the huge investments made when the ribbon was introduced in Office 2007 as a new paradigm for productivity software. The idea that "modern" equals "similar to a ribbon" is a normalization effect: the Microsoft interface has become a benchmark because of its ubiquity, not because of its proven advantages in terms of usability. Added to this is the fact that many users evaluate office software through the lens of familiarity with Microsoft Office and consider deviation from it as a problem rather than a design choice. Before this, LibreOffice had also criticized its competitor OnlyOffice, accusing it of being "fake open source" because it believes OnlyOffice is working with Microsoft to lock users into the Office ecosystem by prioritizing the formats mentioned earlier instead of LibreOffice's own OpenDocument Format (ODF).
Windows

Microsoft Bans 'Microslop' On Its Discord, Then Locks the Server (windowslatest.com) 82

Over the weekend, Windows Latest noticed that Microsoft's official Copilot Discord server began automatically blocking the term "Microslop." As shown in a screenshot, any message containing the word is automatically prevented from posting, and users receive a moderation notice explaining that the message includes language deemed inappropriate under the server's rules. From the report: Windows Latest found that sending a message with the word "Microslop" inside the official Copilot Discord server immediately triggers an automated moderation response. The message does not appear publicly in the channel, and instead, only the sender sees the notice stating that the content is blocked by the server because it contains a phrase deemed inappropriate.

Of course, the internet rarely leaves things there. Shortly after Windows Latest posted about Copilot Discord server blocking Microslop on X, users began experimenting in the server with variations such as "Microsl0p" using a zero instead of the letter "o." Predictably, those versions slipped past the filter. Keyword moderation has always been something of a cat-and-mouse game, and this isn't any different.
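The "Microsl0p" bypass works because a literal substring filter only sees exact characters. The usual counter is to fold look-alike characters before matching. A minimal sketch of both filters, with a hand-picked substitution map as an assumption (not any real Discord AutoMod rule):

```python
# Naive filter: exact substring match, trivially bypassed by look-alikes.
BLOCKED = {"microslop"}

def naive_blocked(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in BLOCKED)

# Hardened filter: fold common "leetspeak" substitutions before matching.
# This mapping is a small illustrative sample, not a complete list.
HOMOGLYPHS = str.maketrans({"0": "o", "1": "i", "3": "e", "5": "s",
                            "7": "t", "@": "a", "$": "s"})

def normalized_blocked(message: str) -> bool:
    text = message.lower().translate(HOMOGLYPHS)
    return any(term in text for term in BLOCKED)

print(naive_blocked("Microsl0p"))        # False: the zero slips past
print(normalized_blocked("Microsl0p"))   # True: folded to "microslop"
```

Even the hardened version is easily evaded with spacing ("M i c r o s l o p") or Unicode confusables from other scripts, which is why keyword moderation stays a cat-and-mouse game no matter how long the substitution table gets.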

What started as a simple keyword filter quickly snowballed into users deliberately testing the restriction and posting variations of the blocked term. Accounts that included "Microslop" in their messages were first blocked from posting further messages. Not long after, access to parts of the server was restricted, with message history hidden and posting permissions disabled for many users.

Bitcoin

South Korean Police Lose Seized Crypto By Posting Password Online 51

South Korean tax authorities lost millions in seized cryptocurrency after publishing high-res photos of Ledger hardware wallets that clearly displayed the wallets' seed phrases, allowing an unknown party to drain the funds. Gizmodo reports: South Korea's National Tax Service seized crypto assets during recent enforcement actions against 124 high-value tax evaders, but now, a large chunk of that crypto cash has been lost. The operation originally resulted in the confiscation of crypto holdings worth about 8.1 billion won, or roughly $5.6 million. However, officials later issued a press release to showcase these efforts in recovering delinquent taxes, and the release included photographs of Ledger hardware wallets taken into custody along with handwritten notes that displayed the wallet seed phrases.

Those images attached to the press release turned out to be the critical error. High-resolution photos clearly showed the mnemonic recovery phrases, which serve as the master key for accessing the wallets. This exposure eliminated any protection provided by the offline cold storage on the Ledger devices. Possession of the seed phrase allows complete control, and anyone who knows the phrase can import it into software or another hardware wallet and initiate transfers without the original device.
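The reason a photographed phrase is unrecoverable: BIP-39 derives the wallet's master seed from the mnemonic alone, deterministically, using a standard key-derivation function, so anyone holding the words can reproduce every key with a few lines of code. A sketch using the publicly known BIP-39 test-vector phrase (never use it for real funds):

```python
import hashlib
import hmac

# The published BIP-39 test-vector phrase; its keys are public knowledge.
mnemonic = ("abandon abandon abandon abandon abandon abandon "
            "abandon abandon abandon abandon abandon about")
passphrase = ""   # optional "25th word"; empty on most wallets

# BIP-39: 2048 rounds of PBKDF2-HMAC-SHA512 with salt "mnemonic" + passphrase.
seed = hashlib.pbkdf2_hmac("sha512", mnemonic.encode(),
                           ("mnemonic" + passphrase).encode(), 2048)

# BIP-32: the master private key and chain code fall straight out of
# HMAC-SHA512 keyed with the constant string "Bitcoin seed".
digest = hmac.new(b"Bitcoin seed", seed, hashlib.sha512).digest()
master_key, chain_code = digest[:32], digest[32:]

print(len(seed), len(master_key), len(chain_code))   # 64 32 32
```

Note that no per-device secret enters the derivation unless an optional passphrase was set, which is exactly why photographing the written phrase voided the Ledgers' cold-storage protection: the hardware never needed to be touched.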

In this case, an unknown individual who saw the photos published by law enforcement first added a small amount of ether to one of the addresses to cover Ethereum network gas fees necessary for outbound transactions. From there, they executed three transfers to move approximately 4 million Pre-Retogeum, or PRTG, tokens. At the time, those tokens carried a value of $4.8 million, but reporting from The Block indicates liquidating that much value from the holdings would have proven difficult due to market dynamics.
Businesses

Silicon Valley's Ideas Mocked Over Penchant for Favoring Young Entrepreneurs with 'Agency' (harpers.org) 47

In a 9,000-word exposé, a writer for Harper's visited San Francisco's young entrepreneurs in September to mockingly profile "tech's new generation and the end of thinking."

There's Cluely founder Roy Lee. ("His grand contribution to the world was a piece of software that told people what to do.") And the Rationalist movement's Scott Alexander, who "would probably have a very easy time starting a suicide cult..." Alexander's relationship with the AI industry is a strange one. "In theory, we think they're potentially destroying the world and are evil and we hate them," he told me. In practice, though, the entire industry is essentially an outgrowth of his blog's comment section... "Many of them were specifically thinking, I don't trust anybody else with superintelligence, so I'm going to create it and do it well." Somehow, a movement that believes AI is incredibly dangerous and needs to be pursued carefully ended up generating a breakneck artificial arms race.
There's a fascinating story about teenaged founder Eric Zhu (who only recently turned 18): Clients wanted to take calls during work hours, so he would speak to them from his school bathroom. "I convinced my counselor that I had prostate issues... I would buy hall passes from drug dealers to get out of class, to have business meetings." Soon he was taking Zoom calls with a U.S. senator to discuss tech regulation... Next, he built his own venture-capital fund, managing $20 million. At one point cops raided the bathroom looking for drug dealers while Eric was busy talking with an investor. Eventually, the school got sick of Eric's misuse of the facilities and kicked him out. He moved to San Francisco.

Eric made all of this sound incredibly easy. You hang out in some Discord servers, make a few connections with the right people; next thing you know, you're a millionaire... Eric didn't think there was anything particularly special about himself. Why did he, unlike any of his classmates, start a $20 million VC fund? "I think I was just bored. Honestly, I was really bored." Did he think anyone could do what he did? "Yeah, I think anyone genuinely can."

The article concludes Silicon Valley's investors are rewarding young people with "agency," though "as far as I could tell, being a highly agentic individual had less to do with actually doing things and more to do with constantly chasing attention online." Like X.com user Donald Boat, who successfully baited Sam Altman into buying him a gaming PC in "a brutally simplified miniature of the entire VC economy." (After which "People were giving him stuff for no reason except that Altman had already done it, and they didn't want to be left out of the trend.") Shortly before I arrived at the Cheesecake Factory, [Donald Boat] texted to let me know that he'd been drinking all day, so when I met him I thought he was irretrievably wasted. In fact, it turned out, he was just like that all the time... He seemed to have a constant roster of projects on the go. He'd sent me occasional photos of his exploits. He went down to L.A. to see Oasis and ended up in a poker game with a group of weapons manufacturers. "I made a bunch of jokes about sending all their poker money to China," he said, "and they were not pleased...."

"I don't use that computer and I think video games are a waste of time. I spent all the money I made from going viral on Oasis tickets." As far as he was concerned, the fact that tech people were tripping over themselves to take part in his stunt just confirmed his generally low impression of them. "They have too much money and nothing going on..." Ever since his big viral moment, he'd been suddenly inundated with messages from startup drones who'd decided that his clout might be useful to them. One had offered to fly him out to the French Riviera.

The author's conclusion? "It did not seem like a good idea to me that some of the richest people in the world were no longer rewarding people for having any particular skills, but simply for having agency."
The Military

Sam Altman Answers Questions on X.com About Pentagon Deal, Threats to Anthropic (x.com) 42

Saturday afternoon Sam Altman announced he'd start answering questions on X.com about OpenAI's work with America's Department of War — and all the developments over the past few days. (After that department's negotiations with Anthropic had failed, they announced they'd stop using Anthropic's technology and threatened to designate it a "Supply-Chain Risk to National Security". Then they reached a deal for OpenAI's technology — though Altman says it includes OpenAI's own similar prohibitions against using their products for domestic mass surveillance and requiring "human responsibility" for the use of force in autonomous weapon systems.)

Altman said Saturday that enforcing that "Supply-Chain Risk" designation on Anthropic "would be very bad for our industry and our country, and obviously their company. We said [that] to the Department of War before and after. We said that part of the reason we were willing to do this quickly was in the hopes of de-escalation.... We should all care very much about the precedent... To say it very clearly: I think this is a very bad decision from the Department of War and I hope they reverse it. If we take heat for strongly criticizing it, so be it."

Altman also said that for a long time, OpenAI was planning to do "non-classified work only," but this week found the Department of War "flexible on what we needed..." Sam Altman: The reason for rushing is an attempt to de-escalate the situation. I think the current path things are on is dangerous for Anthropic, healthy competition, and the U.S. We negotiated to make sure similar terms would be offered to all other AI labs.

I know what it's like to feel backed into a corner, and I think it's worth some empathy to the Department of War. They are... a very dedicated group of people with, as I mentioned, an extremely important mission. I cannot imagine doing their work. Our industry tells them "The technology we are building is going to be the high order bit in geopolitical conflict. China is rushing ahead. You are very behind." And then we say "But we won't help you, and we think you are kind of evil." I don't think I'd react great in that situation. I do not believe unelected leaders of private companies should have as much power as our democratically elected government. But I do think we need to help them.

Question: Are you worried at all about the potential for things to go really south during a possible dispute over what's legal or not later on and be deemed a supply chain risk...?

Sam Altman: Yes, I am. If we have to take on that fight we will, but it clearly exposes us to some risk. I am still very hopeful this is going to get resolved, and part of why we wanted to act fast was to help increase the chances of that...

Question: Why the rush to sign the deal ? Obviously the optics don't look great.

Sam Altman: It was definitely rushed, and the optics don't look good. We really wanted to de-escalate things, and we thought the deal on offer was good.

If we are right and this does lead to a de-escalation between the Department of War and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as rushed and uncareful. I don't know where it's going to land, but I have already seen promising signs. I think a good relationship between the government and the companies developing this technology is critical over the next couple of years...

Question: What was the core difference why you think the Department of War accepted OpenAI but not Anthropic?

Sam Altman: [...] We believe in a layered approach to safety — building a safety stack, deploying FDEs [embedded Forward Deployed Engineers] and having our safety and alignment researchers involved, deploying via cloud, working directly with the Department of War. Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with. We feel that it's very important to build safe systems, and although documents are also important, I'd clearly rather rely on technical safeguards if I only had to pick one...

I think Anthropic may have wanted more operational control than we did...

Question: Were the terms that you accepted the same ones Anthropic rejected?

Sam Altman: No, we had some different ones. But our terms would now be available to them (and others) if they wanted.

Question: Will you turn off the tool if they violate the rules?

Sam Altman: Yes, we will turn it off in that very unlikely event, but we believe the U.S. government is an institution that does its best to follow law and policy. What we won't do is turn it off because we disagree with a particular (legal military) decision. We trust their authority.

Questions were also answered by OpenAI's head of National Security Partnerships (who at one point posted that they'd managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.) And they stressed that with OpenAI's deal with Department of War, "We control how we train the models and what types of requests the models refuse." Question: Are employees allowed to opt out of working on Department of War-related projects?

Answer: We won't ask employees to support Department of War-related projects if they don't want to.

Question: How much is the deal worth?

Answer: It's a few million $, completely inconsequential compared to our $20B+ in revenue, and definitely not worth the cost of a PR blowup. We're doing it because it's the right thing to do for the country, at great cost to ourselves, not because of revenue impact...

Question: Can you explicitly state which specific technical safeguard OpenAI has that allowed you to sign what Anthropic called a 'threat to democratic values'?

Answer: We think the deal we made has more guardrails than any previous agreement for classified AI deployments, including Anthropic's. Other AI labs (including Anthropic) have reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments. Usage policies, on their own, are not a guarantee of anything. Any responsible deployment of AI in classified environments should involve layered safeguards including a prudent safety stack, limits on deployment architecture, and the direct involvement of AI experts in consequential AI use cases. These are the terms we negotiated in our contract.

They also detailed OpenAI's position on LinkedIn: Deployment architecture matters more than contract language. Our contract limits our deployment to cloud API. Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware...

Instead of hoping contract language will be enough, our contract allows us to embed forward deployed engineers, commits to giving us visibility into how models are being used, and we have the ability to iterate on safety safeguards over time. If our team sees that our models aren't refusing queries they should, or there's more operational risk than we expected, our contract allows us to make modifications at our discretion. This gives us far more influence over outcomes (and insight into possible abuse) than a static contract provision ever could.

U.S. law already constrains the worst outcomes. We accepted the "all lawful uses" language proposed by the Department, but required them to define the laws that constrained them on surveillance and autonomy directly in the contract. And because laws can change, having this codified in the contract protects against changes in law or policy that we can't anticipate.

AI

US Threatens Anthropic with 'Supply-Chain Risk' Designation. OpenAI Signs New War Department Deal (anthropic.com) 51

It started Friday when all U.S. federal agencies were ordered to "immediately cease" using Anthropic's AI technology after contract negotiations stalled when Anthropic requested prohibitions against mass domestic surveillance or fully autonomous weapons. But later Friday there were even more repercussions...

In a post to his 1.1 million followers on X.com, U.S. Secretary of War Pete Hegseth criticized Anthropic for what he called "a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon." Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic... Cloaked in the sanctimonious rhetoric of "effective altruism," [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable...

In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic... America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

Meanwhile, Anthropic said on Friday that "no amount of intimidation or punishment from the Department of War will change our position." (And "We will challenge any supply chain risk designation in court.") Designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government's classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so. We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government... Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement.
Anthropic also defended the two exceptions they'd requested that had stalled contract negotiations. "[W]e do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."

Also Friday, OpenAI announced that "we reached an agreement with the Department of War to deploy our models in their classified network." OpenAI CEO Sam Altman emphasized that the agreement retains and confirms OpenAI's own prohibitions against using their products for domestic mass surveillance — and requires "human responsibility" for the use of force including for autonomous weapon systems. "The Department of War agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the Department of War also wanted." We are asking the Department of War to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Music

'The Death of Spotify: Why Streaming is Minutes Away From Being Obsolete' 70

An anonymous reader shares a column: I'm going to take the diplomatic hat off here and say with brutal honesty: basically everybody in the music business hates Spotify except for the people who work there. It's a platform that sucks artists for everything they have, it actively prevents community building, and, despite all of that, the platform still struggles to maintain a healthy profit margin.

The streaming business model is fundamentally broken. And eventually, its demise will become more and more obvious. I'll break down exactly why the DSP era is coming to a grinding halt, why the major labels are quietly terrified, and why the artists who don't pivot now are going to go down with the ship.

[...] Jimmy Iovine put it bluntly: "The streaming services have a bad situation, there's no margins, they're not making any money." This model only works for Apple, Amazon, and Google, because they don't need their music platforms to be wildly profitable. Amazon uses music as a loss-leader to keep you paying for Prime. Apple uses it to sell $1,000 iPhones. As for Spotify, or any standalone music streaming company, they're kind of screwed. And guess what -- when the platform's margins are structurally squeezed, guess who gets squeezed first? The artists.

[...] What if Jimmy is right? If the DSPs are "minutes away from obsolete," what replaces them? Well, I'm not sure the DSPs are going to disappear overnight, but if you're an artist or a manager trying to sustain yourself in this evolving music economy, the answer is direct ownership. The artists who will survive the next five years are the ones who are quietly shifting their focus away from the "ATM Machine."

They are building their own cultural hangars. They are capturing phone numbers on Laylo. They are driving fans to private Discord servers. They are focusing on ARPF (Average Revenue Per Fan) through high-margin merch, vinyl, and hard tickets, rather than begging for fractions of a penny from a playlist placement. We are witnessing the death of the "Mass Audience" and the birth of the "Micro-Community."
Open Source

'Open Source Registries Don't Have Enough Money To Implement Basic Security' (theregister.com) 24

Google and Microsoft contributed $5 million to launch Alpha-Omega in 2022 — a Linux Foundation project to help secure the open source supply chain. But its co-founder Michael Winser warns that open source registries are in financial peril, reports The Register, since they're still relying on non-continuous funding from grants and donations.

And it's not just because bandwidth is expensive, he said at this year's FOSDEM. "The problem is they don't have enough money to spend on the very security features that we all desperately need..." In a follow-up LinkedIn exchange after this article had posted, Winser estimated it could cost $5 million to $8 million a year to run a major registry the size of Crates.io, which gets about 125 billion downloads a year. And this number wouldn't include any substantial bandwidth and infrastructure donations (like Fastly's for Crates.io). Adding to that bill is the growing cost of identifying malware, the proliferation of which has been amplified through the use of AI and scripts. These repositories have detected 845,000 malware packages from 2019 to January 2025 (the vast majority of those nasty packages came to npm)...

In some cases benevolent parties can cover [bandwidth] bills: Python's PyPI registry bandwidth needs for shipping copies of its 700,000+ packages (amounting to 747PB annually at a sustained rate of 189 Gbps) are underwritten by Fastly, for instance. Otherwise, the project would have to pony up about $1.8 million a month. Yet the costs Winser was most concerned about are not bandwidth or hosting; they are the security features needed to ensure the integrity of containers and packages. Alpha-Omega underwrites a "distressingly" large amount of security work around registries, he said. It's distressing because if Alpha-Omega itself were to miss a funding round, a lot of registries would be screwed. Alpha-Omega's recipients include the Python Software Foundation, Rust Foundation, Eclipse Foundation, OpenJS Foundation for Node.js and jQuery, and Ruby Central.
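The quoted figures hang together; a quick back-of-the-envelope check in Python (assuming decimal units where 1 PB = 10^15 bytes and a 365-day year, and treating the per-gigabyte egress rate as an inference from the article's numbers, not a published price):

```python
# Sanity-check PyPI's bandwidth numbers from the article.
# Assumptions: decimal units (1 PB = 1e15 bytes), 365-day year.

SECONDS_PER_YEAR = 365 * 24 * 3600           # ~3.15e7 seconds
sustained_gbps = 189                          # sustained throughput, per the article

# Convert a sustained bit rate into annual volume in petabytes.
bits_per_year = sustained_gbps * 1e9 * SECONDS_PER_YEAR
petabytes_per_year = bits_per_year / 8 / 1e15
print(f"{petabytes_per_year:.0f} PB/year")    # ~745 PB, matching the quoted 747 PB

# Work backward from the quoted $1.8 million/month bill to an
# implied per-gigabyte egress rate (an inference, not a quoted price).
gb_per_month = petabytes_per_year * 1e6 / 12
implied_rate = 1_800_000 / gb_per_month
print(f"${implied_rate:.3f}/GB")              # ~$0.029/GB implied
```

The implied rate of roughly $0.029/GB lands in the range of typical CDN egress list pricing, which underscores the scale of what a donor like Fastly is absorbing.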

Donations and memberships certainly help defray costs. Volunteers do a lot of what otherwise would be very expensive work. And there are grants about...Winser did not offer a solution, though he suggested the key is to convince the corporate bean counters to consider paid registries as "a normal cost of doing business and have it show up in their opex as opposed to their [open source program office] donation budget."

The dilemma was summed up succinctly by the anonymous Slashdot reader who submitted this story.

"Free beer is great. Securing the keg costs money!"
United States

F-35 Software Could Be Jailbroken Like an iPhone: Dutch Defense Minister (twz.com) 87

Lockheed Martin's F-35 combat aircraft is a supersonic stealth "strike fighter." But this week the military news site TWZ reports that the fighter's "computer brain," including "its cloud-based components, could be cracked to accept third-party software updates, just like 'jailbreaking' a cellphone, according to the Dutch State Secretary for Defense."

TWZ notes that the Dutch defense secretary made the remarks during an episode of BNR Nieuwsradio's "Boekestijn en de Wijk" podcast, according to a machine translation: Gijs Tuinman, who has been State Secretary for Defense in the Netherlands since 2024, does not appear to have offered any further details about what the jailbreaking process might entail. What, if any, cyber vulnerabilities this might indicate is also unclear. It is possible that he may have been speaking more notionally or figuratively about action that could be taken in the future, if necessary...

The ALIS/ODIN network is designed to handle much more than just software updates and logistical data. It is also the port used to upload mission data packages containing highly sensitive planning information, including details about enemy air defenses and other intelligence, onto F-35s before missions and to download intelligence and other data after a sortie. To date, Israel is the only country known to have successfully negotiated a deal giving it the right to install domestically-developed software onto its F-35Is, as well as otherwise operate its jets outside of the ALIS/ODIN network.

The comments "underscore larger issues surrounding the F-35 program, especially for foreign operators," the article points out. But at the same time, F-35s rely on a sophisticated mission-planning data package. "So while jailbreaking F-35's onboard computers, as well as other aspects of the ALIS/ODIN network, may technically be feasible, there are immediate questions about the ability to independently recreate the critical mission planning and other support it provides. This is also just one aspect of what is necessary to keep the jets flying, let alone operationally relevant."

"TWZ previously explored many of these same issues in detail last year, amid a flurry of reports about the possibility that F-35s have some type of discreet 'kill switch' built in that U.S. authorities could use to remotely disable the jets. Rumors of this capability are not new and remain completely unsubstantiated." At that time, we stressed that a 'kill switch' would not even be necessary to hobble F-35s in foreign service. At present, the jets are heavily dependent on U.S.-centric maintenance and logistics chains that are subject to American export controls and agreements with manufacturer Lockheed Martin. Just reliably sourcing spare parts has been a huge challenge for the U.S. military itself... F-35s would be quickly grounded without this sustainment support. [A cutoff in spare parts and support "would leave jailbroken jets quickly bricked on the ground," the article notes later.] Altogether, any kind of jailbreaking of the F-35's systems would come with a serious risk of legal action by Lockheed Martin and additional friction with the U.S. government.
Thanks to long-time Slashdot reader Koreantoast for sharing the article.
Programming

Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible? (aboard.com) 88

Programmer/entrepreneur Paul Ford is the co-founder of AI-driven business software platform Aboard. This week he wrote a guest essay for the New York Times titled "The AI Disruption Has Arrived, and It Sure Is Fun," arguing that Anthropic's Claude Code "was always a helpful coding assistant, but in November it suddenly got much better, and ever since I've been knocking off side projects that had sat in folders for a decade or longer... [W]hen the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of Claude's $200-a-month plan."

He elaborates on his point on the Aboard.com blog: I'm deeply convinced that it's possible to accelerate software development with AI coding — not deprofessionalize it entirely, or simplify it so that everything is prompts, but make it into a more accessible craft. Things which not long ago cost hundreds of thousands of dollars to pull off might come for hundreds of dollars, and be doable by you, or your cousin. This is a remarkable accelerant, dumped into the public square at a bad moment, with no guidance or manual — and the reaction of many people who could gain the most power from these tools is rejection and anxiety. But as I wrote....

I believe there are millions, maybe billions, of software products that don't exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can't find the budget. They make do with spreadsheets and to-do lists.

I don't expect to change any minds; that's not how minds work. I just wanted to make sure that I used the platform offered by the Times to say, in as cheerful a way as possible: Hey, this new power is real, and it should be in as many hands as possible. I believe everyone should have good software, and that it's more possible now than it was a few years ago.

From his guest essay: Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company's quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system... What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes?

That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly. And for lots of users, that's going to be fine. People don't judge A.I. code the same way they judge slop articles or glazed videos. They're not looking for the human connection of art. They're looking to achieve a goal. Code just has to work... In about six months you could do a lot of things that took me 20 years to learn. I'm writing all kinds of code I never could before — but you can, too. If we can't stop the freight train, we can at least hop on for a ride.

The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it's fun to code on the train, too. And if this technology keeps improving, then all of the people who tell me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.

Government

Pro-Gamer Consumer Movement 'Stop Killing Games' Will Launch NGOs in America and the EU (pcgamer.com) 28

The consumer movement Stop Killing Games "has come a long way in the two years since YouTuber Ross Scott got mad about Ubisoft's destruction of The Crew in 2024," writes the gaming news site PC Gamer. "The short version is, he won: 1.3 million people signed the group's petition, mandating its consideration by the European Union, and while Ubisoft CEO Yves Guillemot reminded us all that nothing is forever, his company promised to never do something like that again." (And Ubisoft has since updated The Crew 2 with an offline mode, according to Engadget.)

"But it looks like even bigger things are in store," PC Gamer wrote Thursday, "as Scott announced today that Stop Killing Games is launching two official NGOs, one in the EU and the other in the US." An NGO — that's non-governmental organization — is, very generally speaking, an organization that pursues particular goals, typically but not exclusively political, and that may be funded partially or fully by governments, but is not actually part of any government. It's a big tent: Well-known NGOs include Oxfam, Doctors Without Borders, Amnesty International, and CARE International... "If there's a lobbyist showing up again and again at the EU Commission, that might influence things," [Scott says in a video]. "This will also allow for more watchdog action. If you recall, I helped organize a multilingual site with easy to follow instructions for reporting on The Crew to consumer protection agencies. Well, maybe the NGO could set something like that up for every big shutdown where the game is destroyed in the future...."

Scott said in the video that he doesn't have details, but the two NGOs are reportedly looking at establishing a "global movement" to give Stop Killing Games a presence in other regions.

"According to Scott, these NGOs would allow for 'long-term counter lobbying' when publishers end support for certain video games," Engadget reports: "Let me start off by saying I think we're going to win this, namely the problem of publishers destroying video games that you've already paid for," Scott said in the video. According to Scott, the NGOs will work on getting the original Stop Killing Games petition codified into EU law, while also pursuing more watchdog actions, like setting up a system to report publishers for revoking access to purchased video games... According to Scott, the campaign leadership will meet with the European Commission soon, but is also working on a 500-page legal paper that reveals some of the industry's current controversial practices.
Social Networks

Pinterest Is Drowning in a Sea of AI Slop and Auto-Moderation 30

Users say Pinterest has become flooded with AI-generated images and heavy-handed automated moderation, with artists reporting wrongful takedowns and their hand-drawn work mislabeled as "AI modified." As the company doubles down on AI features and layoffs, longtime users argue the platform's creative ecosystem is being undermined. 404 Media reports: "I feel like, increasingly, it's impossible to talk to a single human [at Pinterest]," artist and Pinterest user Tiana Oreglia told 404 Media. "Along with being filled with AI images that have been completely ruining the platform, Pinterest has implemented terrible AI moderation that the community is up in arms about. It's banning people randomly and I keep getting takedown notices for pins." [...]

r/Pinterest is awash in users complaining about AI-related issues on the site. "Pinterest keeps automatically adding the 'AI modified' tag to my Pins... every time I appeal, Pinterest reviews it and removes the AI label. But then... the same thing happens again on new Pins and new artwork. So I'm stuck in this endless loop of appealing, label removed, new Pin gets tagged again," read a post on r/Pinterest. The redditor told 404 Media that this has happened three times so far and it takes between 24 to 48 hours to sort out. "I actively promote my work as 100% hand-drawn and 'no AI,'" they said. "On Etsy, I clearly position my brand around original illustration. So when a Pinterest Pin is labeled 'Hand Drawn' but simultaneously marked as 'AI modified,' it creates confusion and undermines that positioning."

Artist Min Zakuga told 404 Media that they've seen a lot of their art on Pinterest get labeled as "AI modified" despite being older than image generation tech. "There is no way to take their auto-labeling off, other than going through a horribly long process where you have to prove it was not AI, which still may get rejected," she said. "Even artwork from 10-13 years ago will still be labeled by Pinterest as AI, with them knowing full well something from 10 years ago could not possibly be AI." Other users are tired of seeing a constant flood of AI-generated art in their feeds. "I can't even scroll through 100 pins without 95 out of them being some AI slop or theft, let alone very talented artists tend to be sucked down and are being unrecognized by the sheer amount of it," said another post. "I don't want to triple check my sources every single time I look at a pin, but I refuse to use any of that soulless garbage. However, Pinterest has been infested. Made obsolete."
XBox (Games)

Phil Spencer Retiring After 38 Years At Microsoft (ign.com) 23

Xbox chief and Microsoft Gaming CEO Phil Spencer is leaving Microsoft after nearly 40 years at the company. Meanwhile, Xbox President Sarah Bond, "long thought by many both inside and outside of Microsoft to be Spencer's heir apparent, has resigned," reports IGN. From the report: The new CEO of Microsoft Gaming will be Asha Sharma, currently the President of Microsoft's CoreAI product. Finally, Xbox Game Studios head Matt Booty is being promoted to Chief Content Officer and will work closely with Sharma. "I want to thank Phil for his extraordinary leadership and partnership," Microsoft CEO Satya Nadella said in an email sent to Microsoft staff. "Over 38 years at Microsoft, including 12 years leading Gaming, Phil helped transform what we do and how we do it." [...]

Spencer was named Head of Xbox in March of 2014, when he was tasked with righting a ship that had made a number of product choices and policy decisions that rubbed core gamers the wrong way in the run-up to the launch of the Xbox One in Fall 2013. Long hailed by gamers as being one of their own, Spencer could frequently be found on Xbox Live, playing games regularly with fellow Xbox gamers and racking up a healthy Gamerscore. His first major move when put in charge was decoupling the Kinect 2.0 peripheral from the Xbox One package, thus immediately reducing the new console's price by $100 to $399, matching the day-one price of Sony's PlayStation 4. He spearheaded the much-heralded backwards compatibility movement within Xbox, the Xbox Game Pass service was born under his watch, and accessibility made major advances during his tenure in both hardware and software. Xbox Play Anywhere, which sought to let gamers play their Xbox games on any device, be it a PC, console, or handheld, isn't new but has been a big recent focal point.

Spencer's time running Xbox will perhaps be most remembered for Microsoft's $69 billion acquisition of Activision-Blizzard-King in 2022, which took almost two years to achieve regulatory approval from various agencies around the world. But Spencer began trying to solve for Xbox's dearth of first-party games in 2018, when the first wave of studio acquisitions occurred. Prior to the Activision deal, Spencer's biggest move came with the $7.5 billion acquisition of ZeniMax, parent company of Bethesda, in 2020. The deal gave Xbox total ownership of Bethesda Game Studios and its Fallout and Elder Scrolls franchises along with id Software and its Doom and Quake IPs, among many others. Questions arose from there about whether or not that meant all of Xbox's new studios would produce games exclusively for Xbox consoles, and while some games were kept off of PlayStation platforms temporarily, many weren't and most now seem to come to PS5 eventually, if not on day one.

Google

Google's Pixel 10a Is the Same Damn Phone As the Pixel 9a (gizmodo.com) 39

Google's Pixel 10a is essentially a flatter version of last year's Pixel 9a, keeping the same Tensor G4 chip, camera hardware, RAM, storage, and $500 price while dropping features like Pixelsnap Qi2 charging and advanced Gemini AI capabilities found in higher-end models. Gizmodo reports: We use words like "candy bar" or "slab" to describe our full-screen smartphones, but Google has designed what is likely the slabbiest phone of the modern era. During an hour-long hands-on with Google's all-new Google Pixel 10a, I slid the phone across a desk and felt oddly satisfied that it could glide as neatly as a figure skater without any hint of a camera bump hindering its path. It's the first thing I need to bring up regarding the Pixel 10a, because there's no other discernible difference between this phone and the previous-gen Pixel 9a.

And that seems to be the point. The Pixel 10a starts at $500, exactly how much the Pixel 9a cost at launch. In a Q&A with journalists, Google told Gizmodo that the company wanted to offer the same price point as before. That apparently required Google to stick with the same Tensor G4 chip as last year. You still have the same storage options of 128GB or 256GB and the minimum of 8GB of RAM. Think of the Pixel 10a as a Pixel 9a with a reduced camera bump. If you're one of the heretics who uses a phone without a case, that fact alone may be enough to pay attention. Otherwise, you'll be scrounging to find any real difference between the Pixel 10a and one of last year's best mid-range phones.

Windows

GameHub Will Give Mac Owners Another Imperfect Way To Play Windows Games (arstechnica.com) 8

An anonymous reader quotes a report from Ars Technica: For a while now, Mac owners have been able to use tools like CrossOver and Game Porting Toolkit to get many Windows games running on their operating system of choice. Now, GameSir plans to add its own potential solution to the mix, announcing that a version of its existing Windows emulation tool for Android will be coming to macOS. Hong Kong-based GameSir has primarily made a name for itself as a manufacturer of gaming peripherals -- the company's social media profile includes a self-description as "the Anti-Stick Drift Experts." Early last year, though, GameSir rolled out the Android GameHub app, which includes a GameFusion emulator that the company claims "provides complete support for Windows games to run on Android through high-precision compatibility design."

In practice, GameHub and GameFusion for Android haven't quite lived up to that promise. Testers on Reddit and sites like EmuReady report hit-or-miss compatibility for popular Steam titles on various Android-based handhelds. At least one Reddit user suggests that "any Unity, Godot, or Game Maker game tends to just work" through the app, while another reports "terrible compatibility" across a wide range of games. With Sunday's announcement, GameSir promises a similar opportunity to "unlock your entire Steam library" and "run Win games/Steam natively" on Mac will be "coming soon." GameSir is also promising "proprietary AI frame interpolation" for the Mac, following the recent rollout of a "native rendering mode" that improved frame rates on the Android version.
There are some "reasons to worry" though, based on the company's uneven track record. The Android version faced controversy for including invasive tracking components, which were later removed after criticism. There were also questions about the use of open-source code, as GameSir acknowledged referencing and using UI components from Winlator, even while maintaining that its core compatibility layer was developed in-house.
Open Source

Oldest Active Linux Distro Slackware Finally Releases Version 15.0 (itsfoss.com) 51

Created in 1993, Slackware is considered the oldest Linux distro that's still actively maintained. And more than three decades later... there's a new release! (And there's also a Slackware Live Edition that can run from a DVD or USB stick...)

Slackware's previous version was released way back in 2016, notes the blog It's FOSS: The major highlight of Slackware 15 is the addition of the latest Linux Kernel 5.15 LTS. This is a big jump from Linux Kernel 5.10 LTS that we noticed in the beta release. Interestingly, the Slackware team tested hundreds of Linux Kernel versions before settling on Linux Kernel 5.15.19. The release note mentions... "We finally ended up on kernel version 5.15.19 after Greg Kroah-Hartman confirmed that it would get long-term support until at least October 2023 (and quite probably for longer than that)."

In case you are curious, Linux Kernel 5.15 brings in updates like enhanced NTFS driver support and improvements for Intel/AMD processors and Apple's M1 chip. It also adds initial support for Intel 12th gen processors. Overall, with Linux Kernel 5.15 LTS, you should get a good hardware compatibility result for the oldest active Linux distro.

Slackware's announcement says "The challenge this time around was to adopt as much of the good stuff out there as we could without changing the character of the operating system. Keep it familiar, but make it modern." And boy did we have our work cut out for us. We adopted privileged access management (PAM) finally, as projects we needed dropped support for pure shadow passwords. We switched from ConsoleKit2 to elogind, making it much easier to support software that targets that Other Init System and bringing us up-to-date with the XDG standards. We added support for PipeWire as an alternate to PulseAudio, and for Wayland sessions in addition to X11. Dropped Qt4 and moved entirely to Qt5. Brought in Rust and Python 3. Added many, many new libraries to the system to help support all the various additions.

We've upgraded to two of the finest desktop environments available today: Xfce 4.16, a fast and lightweight but visually appealing and easy to use desktop environment, and the KDE Plasma 5 graphical workspaces environment, version 5.23.5 (the Plasma 25th Anniversary Edition). This also supports running under Wayland or X11. We still love Sendmail, but have moved it into the /extra directory and made Postfix the default mail handler. The old imapd and ipop3d have been retired and replaced by the much more featureful Dovecot IMAP and POP3 server.

"As usual, the kernel is provided in two flavors, generic and huge," according to the release notes. "The huge kernel contains enough built-in drivers that in most cases an initrd is not needed to boot the system."

If you'd like to support Slackware, there's an official Patreon account. And the release announcement ends with this personal note: Sadly, we lost a couple of good friends during this development cycle and this release is dedicated to them. Erik "alphageek" Jan Tromp passed away in 2020 after a long illness... My old friend Brett Person also passed away in 2020. Without Brett, it's possible that there wouldn't be any Slackware as we know it — he's the one who encouraged me to upload it to FTP back in 1993 and served as Slackware's original beta-tester. He was long considered a co-founder of this project. I knew Brett since the days of the Beggar's Banquet BBS in Fargo back in the 1980's... Gonna miss you too, pal.
Thanks to long-time Slashdot reader rastos1 for sharing the news.
AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.