AI

AI CEOs Worry the Government Will Nationalize AI (thenewstack.io) 125

Palantir's CEO was blunt. "If Silicon Valley believes we are going to take away everyone's white-collar job... and you're going to screw the military — if you don't think that's going to lead to the nationalization of our technology, you're retarded..."

And OpenAI's Sam Altman is thinking about the same thing, writes long-time Slashdot reader destinyland: "It has seemed to me for a long time it might be better if building AGI were a government project," Sam Altman publicly mused last week... Altman speculated on the possibility of the government "nationalizing" private AI companies into a public project, admitting more than once he's wondered what would happen next. "I obviously don't know," Altman said — but he added, "I have thought about it, of course." Still, he hedged: "It doesn't seem super likely on the current trajectory. That said, I do think a close partnership between governments and the companies building this technology is super important."

Could powerful AI tools one day slip from the hands of private companies to be controlled by the U.S. government? Fortune magazine's AI editor points out that "many other breakthroughs with big strategic implications — from the Manhattan Project to the space race to early efforts to develop AI — were government-funded and largely government-directed." And Fortune added that last week the Defense Department threatened Anthropic with the Defense Production Act, which allows the president to designate "critical and strategic" goods for which businesses must accept the government's contracts. Fortune speculates this would've been "a sort of soft nationalization of Anthropic's production pipeline". Altman acknowledged Saturday that he'd felt the threat of attempted nationalization "behind a lot of the questions" he'd fielded on X.com.

How exactly will this AI build-out be handled — and how should AI companies be working with the government? In a sprawling ask-me-anything session on X that included other members of OpenAI leadership, one Missouri-based developer even broached an AGI-government scenario directly with OpenAI's Head of National Security Partnerships, Katherine Mulligan. If OpenAI built an AGI — something that even passed its own Turing test for AGI — would that be a case where its government contracts compelled them to grant access to the Defense Department?

"No," Mulligan answered. At our current moment in time, "We control which models we deploy."

The article notes 100 OpenAI employees joined with 856 Google employees in an online letter titled "We Will Not Be Divided" urging their bosses to refuse their models' use in domestic mass surveillance and in autonomous killing without human oversight.

But Adafruit's managing director Phillip Torrone (also long-time Slashdot reader ptorrone ) sees analogies to America's atomic bomb-building Manhattan Project, and "what happened when the scientists who built the thing tried to set conditions on how the thing would be used." (The government pressured them to back down, which he compares to the Pentagon's designating Anthropic a "supply chain risk" before offering OpenAI a contract "with the same red lines, just worded differently".)

Ironically, Anthropic CEO Dario Amodei frequently recommends the Pulitzer Prize-winning 1986 book The Making of the Atomic Bomb...
Python

Python 'Chardet' Package Replaced With LLM-Generated Clone, Re-Licensed 47

Ancient Slashdot reader ewhac writes: The maintainers of the Python package `chardet`, which attempts to automatically detect the character encoding of a byte string, announced the release of version 7 this week, claiming a speedup factor of 43x over version 6. In the release notes, the maintainers claim that version 7 is "a ground-up, MIT-licensed rewrite of chardet." Problem: The putative "ground-up rewrite" is actually the result of running the existing copyrighted codebase and test suite through the Claude LLM. In so doing, the maintainers claim that v7 now represents a unique work of authorship, and therefore may be offered under a new license. Versions 6 and earlier were licensed under the GNU Lesser General Public License (LGPL). Version 7 claims to be available under the MIT license.

The maintainers appear to be claiming that, under the Oracle v. Google decision, which found that cloning public APIs is fair use, their v7 is a fair-use re-implementation of the `chardet` public API. However, there is no evidence to suggest their rewrite was performed under "clean room" conditions, the practice that has traditionally shielded cloners from infringement suits. Further, the copyrightability of LLM output has yet to be settled. Recent court decisions seem to favor the view that LLM output is not copyrightable, as the output is not primarily the result of human creative expression -- the endeavor copyright is intended to protect. Spirited discussion has ensued in issue #327 on `chardet`'s GitHub repo, raising the question: Can copyrighted source code be laundered through an LLM and come out the other end as a fresh work of authorship, eligible for a new copyright, copyright holder, and license terms? If this is found to be so, it would allow malicious interests to completely strip-mine the Open Source commons, and then sell it back to the users without the community seeing a single dime.
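For readers unfamiliar with what the package actually does: encoding detection is educated guessing over raw bytes. Here is a stdlib-only toy sketch of the idea; chardet itself uses statistical language models rather than this brute-force fallback chain, and the candidate list below is invented for illustration.

```python
# Toy encoding detector: try candidate codecs in order and return the
# first one that decodes without error. (chardet scores byte-frequency
# statistics instead; this is only a sketch of the problem it solves.)
CANDIDATES = ["utf-8", "shift_jis", "latin-1"]

def guess_encoding(data: bytes) -> str:
    for enc in CANDIDATES:
        try:
            data.decode(enc)
            return enc          # first candidate that decodes cleanly
        except UnicodeDecodeError:
            continue
    return "latin-1"            # latin-1 never fails; last-resort fallback

print(guess_encoding("naïve café".encode("utf-8")))   # utf-8
print(guess_encoding("日本語".encode("shift_jis")))    # shift_jis
```

The fallback ordering matters: latin-1 accepts any byte sequence, so it must come last, which is one reason real detectors rely on statistics rather than trial decoding.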
Wikipedia

AI Translations Are Adding 'Hallucinations' To Wikipedia Articles (404media.co) 23

An anonymous reader quotes a report from 404 Media: Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations introduced "hallucinations," or errors, into the resulting articles. The new restrictions show how Wikipedia editors continue fighting to keep the flood of generative AI across the internet from diminishing the reliability of the world's largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how such errors are remedied by Wikipedia's open governance model. The issue centers on a program run by the Open Knowledge Association (OKA), a nonprofit that was found to be "mostly relying on cheap labor from contractors in the Global South" to translate English Wikipedia articles into other languages. Some translators began using tools like Google Gemini and ChatGPT to speed up the process, but editors reviewing the work found numerous hallucinations, including factual errors, missing citations, and references to unrelated sources.

"Ultimately the editors decided to implement restrictions against OKA translators who make multiple errors, but not block OKA translation as a rule," reports 404 Media.
Games

Humble Games' Former Bosses Buy the Studio's Back Catalog (engadget.com) 15

Former Humble Games executives have reacquired the publisher's catalog of more than 50 indie titles from Ziff Davis and relaunched their company as Balor Games. "For the developers we have worked with over the years, this moment is a reunion," Balor Games CEO Alan Patmore wrote in a statement. "[The] same leadership and the same commitment to thoughtful publishing remain in place. What changes is our scale and our focus. Balor Games is built for inventors and backed by believers. To that end, it exists to be a seal of quality for independent games." Engadget reports: The Humble Games lineup includes (among others) Slay the Spire, A Hat in Time, SIGNALIS, Forager, Coral Island, Monaco and Wizard of Legend. Separate from the Humble transaction, Balor also bought the complete catalog of Firestoke Games (which shut down last August) and publishing rights to Fights in Tight Spaces. In total, the young studio now owns the publishing rights to over 60 indie titles. Humble Games is separate from the Humble Bundle storefront. The latter is still owned by Ziff Davis.

The pair view the newly anointed Balor as a developer-friendly publishing house. As for its name, Balor is a supernatural being in Irish mythology. It's sometimes depicted as having three eyes. Triple-eye, triple-I... Clever devils! The triple-I moniker is a more recent addition to the gaming lexicon. It typically means something defined by indie creativity and passion -- with a budget far less than AAA but more than a tiny two-person passion project. (Balor says it's about "high-quality, impactful games.") You wouldn't be blamed for wondering how that's different from AA. But the slant here is to define the genre less by budget and more by "indie" intangibles.
You can learn more about the company's vision in an interview with GamesIndustry.biz.
Power

A Nuclear Reactor Backed By Bill Gates Gets Federal Approval To Start Building 76

An anonymous reader quotes a report from the New York Times: A novel type of nuclear power plant in Wyoming backed by Bill Gates received a key federal permit on Wednesday, making it the first new U.S. commercial reactor in nearly a decade to receive clearance to begin construction. The Nuclear Regulatory Commission, the federal body that oversees reactor safety, unanimously voted (PDF) to grant a construction permit to TerraPower, a start-up founded by Mr. Gates. TerraPower is one of several companies trying to build a new wave of smaller, advanced reactors meant to be easier to build than the large reactors of old.

The permit, which comes after years of consultations and regulatory reviews, means that TerraPower can begin pouring concrete and building the nuclear components of its proposed nuclear plant in Kemmerer, Wyo. The plant, which still faces plenty of logistical hurdles, is currently expected to come online in 2031 near an old coal-burning power plant that is slated to retire a few years later. [...] With its construction permit in hand, the company says it plans to start work on the Wyoming reactor in the coming weeks. The company had already broken ground on the site in 2024 and had begun building the nonnuclear parts of the plant, which did not require a permit.

TerraPower has already had to push back its start date several times, and it will still face hurdles in trying to avoid the snags and cost overruns that have plagued other reactor projects, as well as in securing the fuel it needs. Before coming online, the reactor will also need to secure a separate operating license from the N.R.C., which has told the company it will continue to monitor several safety issues. TerraPower plans to sell electricity from its first plant to PacifiCorp, a utility in the Northwest. The company has also agreed to supply up to eight reactors to Meta to power its data centers in the coming years.
Transportation

Vehicle Tire Pressure Sensors Enable Silent Tracking (darkreading.com) 96

Longtime Slashdot reader linuxwrangler writes: Dark Reading reports that a team of researchers has determined that signals from tire pressure monitoring systems (TPMSs), required in U.S. cars since 2007, can be used to track the presence, type, weight, and driving pattern of vehicles. The researchers report (PDF) that the TPMS data, which includes unique sensor IDs, is sent in clear text without authentication and can be intercepted 40-50 meters from a vehicle using devices costing $100. "Researchers have discovered that most TPMS sensors transmit a unique identifier in clear text that never changes during the lifetime of the tire," the researchers pointed out. "This unencrypted wireless communication makes the signals susceptible to eavesdropping and potential tracking by any third party in proximity to the car."
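To see why a static cleartext ID is a tracking problem, consider a hypothetical pair of roadside receivers logging every TPMS ID they hear: a few lines of correlation turn raw sightings into per-vehicle routes. The sensor IDs, site names, and timestamps below are invented for illustration; this is a sketch of the attack class the researchers describe, not their tooling.

```python
# Group sightings of cleartext TPMS sensor IDs by ID, then order each
# group by timestamp to reconstruct a vehicle's movement between sites.
from collections import defaultdict

sightings = [  # (sensor_id, receiver_site, unix_timestamp) -- invented data
    ("sensor_7f3a21", "bridge_north", 1000),
    ("sensor_7f3a21", "bridge_south", 1360),
    ("sensor_09bc44", "bridge_north", 1010),
]

by_sensor = defaultdict(list)
for sensor_id, site, ts in sightings:
    by_sensor[sensor_id].append((ts, site))

for sensor_id, hops in sorted(by_sensor.items()):
    route = " -> ".join(site for _, site in sorted(hops))
    print(f"{sensor_id}: {route}")
```

Because the ID never changes over the tire's lifetime, any two receivers that log the same ID can link their observations, which is exactly the property rotating or encrypted identifiers are designed to prevent.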
Encryption

TikTok Says End-To-End Encryption Makes Users Less Safe (bbc.com) 86

An anonymous reader quotes a report from the BBC: TikTok will not introduce end-to-end encryption (E2EE) -- the controversial privacy feature used by nearly all its rivals -- arguing it makes users less safe. E2EE means only the sender and recipient of a direct message can view its contents, making it the most secure form of communication available to the general public. Platforms such as Facebook, Instagram, Messenger and X have embraced it because they say their priority is maximizing user privacy.

But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages. The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk. TikTok has consistently denied this, but earlier this year the social media firm's US operations were separated from its global business on the orders of US lawmakers.

TikTok told the BBC it believed end-to-end encryption prevents police and safety teams from being able to read direct messages when they need to. It confirmed its approach to the BBC in a briefing about security at its London office, saying it wanted to protect users, especially young people, from harm. It described this stance as a deliberate decision to set itself apart from rivals.
"Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritizing 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite," said social media industry analyst Matt Navarra. But Navarra said the move also "puts TikTok out of step with global privacy expectations" and might reinforce wariness for some about its ownership.
The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. A typical X.509 certificate chain today comprises six elliptic curve signatures and two EC public keys, each element roughly 64 bytes in size. This material can be cracked by Shor's algorithm running on a sufficiently capable quantum computer. Certificates containing the equivalent quantum-resistant cryptographic material are roughly 2.5 kilobytes. All this data must be transmitted when a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information with a small fraction of the material used in more traditional public key infrastructure verification. Merkle Tree Certificates "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The [Merkle Tree Certificates] MTCs use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same 64-byte length they are now [...]. The new system has already been implemented in Chrome.
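The inclusion-proof idea is simple to sketch. Below is a minimal toy Merkle tree in Python, assuming SHA-256 and a duplicate-last-node padding convention; the real Merkle Tree Certificates design differs in details such as encoding, tree-head signing, and proof format, so treat this only as an illustration of why a verifier holding the signed root needs just log2(n) hashes.

```python
# Toy Merkle tree with inclusion proofs: a verifier holding only the
# root ("tree head") checks membership against a short hash path.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                     # pad odd levels by duplicating
            prev = prev + [prev[-1]]          # the last node
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def prove(levels, index):
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2))  # (sibling, am-I-right-child)
        index //= 2
    return proof

def verify(root, leaf, proof):
    node = h(leaf)
    for sibling, is_right in proof:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

certs = [b"cert-%d" % i for i in range(5)]    # stand-ins for certificates
levels = build_levels(certs)
root = levels[-1][0]                          # the CA signs only this value
proof = prove(levels, 3)
print(verify(root, certs[3], proof))          # True
```

For a tree of a million certificates the proof is about 20 hashes of 32 bytes each, which is the size advantage the Chrome team is counting on relative to a chain of post-quantum signatures.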

Books

Hyperion Author Dan Simmons Dies From Stroke At 77 (arstechnica.com) 17

Author Dan Simmons, best known for the epic sci-fi novel Hyperion and its sequels, has died at 77 following a stroke. Ars Technica's Eric Berger remembers Simmons, writing: Simmons, who worked in elementary education before becoming an author in the 1980s, produced a broad portfolio of writing that spanned several genres, including horror fiction, historical fiction, and science fiction. Often, his books included elements of all of these. This obituary will focus on what is generally considered his greatest work, and what I believe is possibly the greatest science fiction novel of all time, Hyperion.

Published in 1989, Hyperion is set in a far-flung future in which human settlement spans hundreds of planets. The novel feels both familiar, in that its structure follows Chaucer's Canterbury Tales, and utterly unfamiliar in its strange, far-flung setting.
Simmons' Hyperion appeared in an Ask Slashdot story back in 2008, when Slashdot reader willyhill asked for tips on how Slashdotters track down great sci-fi. If you're in the mood for a little nostalgia, or just want to browse the thread for book recommendations, it's well worth revisiting.
Crime

Four Convicted Over Spyware Affair That Shook Greece (bbc.com) 7

A Greek court has convicted four individuals linked to the marketing of Predator spyware in the wiretapping scandal that shook the country in 2022. The BBC reports: In what became known as "Greece's Watergate," surveillance software called Predator was used to target 87 people -- among them government ministers, senior military officials and journalists. The four who had marketed the software were found guilty by an Athens court of misdemeanours of violating the confidentiality of telephone communications and illegally accessing personal data and conversations.

The court handed the four defendants lengthy jail sentences, suspended pending appeal. Although each nominally faces 126 years, no more than eight years, the upper limit for misdemeanors, would typically be served. One in three of the dozens of figures targeted had also been under legal surveillance by Greece's intelligence services (EYP). Prime Minister Kyriakos Mitsotakis, who had placed EYP directly under his supervision, called it a scandal, but no government officials have been charged in court, and critics accuse the government of trying to cover up the truth.

The case dates back to the summer of 2022, when the current head of Greek Socialist party Pasok, Nikos Androulakis - then an MEP - was informed by the European Parliament's IT experts that he had received a malicious text message containing a link. Predator spyware, marketed by the Athens-based Israeli company Intellexa, can get access to a device's messages, camera, and microphone. Its use was illegal in Greece at that time but a new law passed in 2022 has since legalised state security use of surveillance software under strict conditions. Androulakis also discovered that he had been tracked for "national security reasons" by Greece's intelligence services. The scandal has since escalated into a debate over democratic accountability in Greece.

The Internet

Long Before Tech CEOs Turned To Layoffs To Cover AI Expenses, There Was WorldCom (nbcnews.com) 47

Long-time Slashdot reader theodp writes: Jeopardy time. A. This company spurred CEOs to make huge speculative capital expenditures based on wild unverified claims of future demand, resulting in the layoffs of tens of thousands of workers to reduce the resulting expenses, harming their core businesses. Q. What is OpenAI?

Sorry, the correct response is, "What is WorldCom?" In 2002, WorldCom, the second largest long-distance company in the U.S., entered Chapter 11 bankruptcy after disclosing accounting fraud that eventually totaled $11 billion, the biggest ever at the time. CEO Bernard Ebbers was subsequently sentenced to 25 years in prison.

CNBC reported that an employee of WorldCom's Internet service provider UUNet set off a frenzy of speculative investment and infrastructure overbuild after he used Excel to create a best-case scenario model of the Internet's growth, one suggesting that, in the best of all possible worlds, Internet traffic would double every 100 days -- a scenario that would greatly benefit WorldCom, whose lines would carry it. Despite there being no evidence to support it, WorldCom's lie became an immutable law, and businesses around the world made important decisions based on the belief that traffic was doubling every 100 days. "For some period of time I can recall that we were backfilling that expectation with laying cables, something like 2,200 miles of cable an hour," AT&T CEO Michael Armstrong said. "Think of all the companies that went out of business that assumed that that was real."
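It is worth working out what "doubling every 100 days" actually implies, since the compounding is easy to underestimate:

```python
# Doubling every 100 days compounds to 2**(365/100), roughly 12.6x,
# per year -- and 1,024x after 1,000 days (ten doublings, under 3 years).
growth_per_year = 2 ** (365 / 100)
print(f"{growth_per_year:.1f}x per year")     # ~12.6x
print(f"{2 ** 10}x after 1,000 days")         # 1024x
```

No real network sustained anything close to that, which is why capacity built against the claim sat dark.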

In 2003, NBC News reported: Armstrong and former Sprint CEO Bill Esrey struggled for years to understand how WorldCom could beat them so handily. "We would look at the conduct of WorldCom in terms of their pricing, revenue growth, margins, in terms of their cost structure... and the price leader almost every quarter was WorldCom," Armstrong said. Added Esrey, "We couldn't figure out how they were pricing as aggressively as they were.... How could they be so efficient in their costs and expenses?" AT&T and Sprint began cutting jobs to push down their costs to WorldCom's level. "The market said what a marvelous management job WorldCom was doing and they would look over to AT&T and say, 'these guys aren't keeping up.' So, my shareholders were hurt. We laid off tens of thousands of employees in an accelerated fashion [in a futile effort to match WorldCom's phantom profits] and I think the industry was hurt," Armstrong says. "It just wrecked the whole industry," says Esrey.
Programming

Has the AI Disruption Arrived - and Will It Just Make Software Cheaper and More Accessible? (aboard.com) 88

Programmer/entrepreneur Paul Ford is the co-founder of AI-driven business software platform Aboard. This week he wrote a guest essay for the New York Times titled "The AI Disruption Has Arrived, and It Sure Is Fun," arguing that Anthropic's Claude Code "was always a helpful coding assistant, but in November it suddenly got much better, and ever since I've been knocking off side projects that had sat in folders for a decade or longer... [W]hen the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the Claude $200-a-month [subscription]."

He elaborates on his point on the Aboard.com blog: I'm deeply convinced that it's possible to accelerate software development with AI coding — not deprofessionalize it entirely, or simplify it so that everything is prompts, but make it into a more accessible craft. Things which not long ago cost hundreds of thousands of dollars to pull off might come for hundreds of dollars, and be doable by you, or your cousin. This is a remarkable accelerant, dumped into the public square at a bad moment, with no guidance or manual — and the reaction of many people who could gain the most power from these tools is rejection and anxiety. But as I wrote....

I believe there are millions, maybe billions, of software products that don't exist but should: Dashboards, reports, apps, project trackers and countless others. People want these things to do their jobs, or to help others, but they can't find the budget. They make do with spreadsheets and to-do lists.

I don't expect to change any minds; that's not how minds work. I just wanted to make sure that I used the platform offered by the Times to say, in as cheerful a way as possible: Hey, this new power is real, and it should be in as many hands as possible. I believe everyone should have good software, and that it's more possible now than it was a few years ago.

From his guest essay: Is the software I'm making for myself on my phone as good as handcrafted, bespoke code? No. But it's immediate and cheap. And the quantities, measured in lines of text, are large. It might fail a company's quality test, but it would meet every deadline. That is what makes A.I. coding such a shock to the system... What if software suddenly wanted to ship? What if all of that immense bureaucracy, the endless processes, the mind-boggling range of costs that you need to make the computer compute, just goes?

That doesn't mean that the software will be good. But most software today is not good. It simply means that products could go to market very quickly. And for lots of users, that's going to be fine. People don't judge A.I. code the same way they judge slop articles or glazed videos. They're not looking for the human connection of art. They're looking to achieve a goal. Code just has to work... In about six months you could do a lot of things that took me 20 years to learn. I'm writing all kinds of code I never could before — but you can, too. If we can't stop the freight train, we can at least hop on for a ride.

The simple truth is that I am less valuable than I used to be. It stings to be made obsolete, but it's fun to code on the train, too. And if this technology keeps improving, then all of the people who tell me how hard it is to make a report, place an order, upgrade an app or update a record — they could get the software they deserve, too. That might be a good trade, long term.

Wikipedia

Wikipedia Blacklists Archive.today, Starts Removing 695,000 Archive Links (arstechnica.com) 14

An anonymous reader quotes a report from Ars Technica: The English-language edition of Wikipedia is blacklisting Archive.today after the controversial archive site was used to direct a distributed denial of service (DDoS) attack against a blog. In the course of discussing whether Archive.today should be deprecated because of the DDoS, Wikipedia editors discovered that the archive site altered snapshots of webpages to insert the name of the blogger who was targeted by the DDoS. The alterations were apparently fueled by a grudge against the blogger over a post that described how the Archive.today maintainer hid their identity behind several aliases.

"There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it," stated an update today on Wikipedia's Archive.today discussion. "There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users' computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today's operators have altered the content of archived pages, rendering it unreliable."

More than 695,000 links to Archive.today are distributed across 400,000 or so Wikipedia pages. The archive site, which is facing an investigation in which the FBI is trying to uncover the identity of its founder, is commonly used to bypass news paywalls. "Those in favor of maintaining the status quo rested their arguments primarily on the utility of archive.today for verifiability," said today's Wikipedia update. "However, an analysis of existing links has shown that most of its uses can be replaced. Several editors started to work out implementation details during this RfC [request for comment] and the community should figure out how to efficiently remove links to archive.today."

Science

Newborn Chicks Connect Sounds With Shapes Just Like Humans, Study Finds (scientificamerican.com) 16

An anonymous reader quotes a report from Scientific American: Why does "bouba" sound round and "kiki" sound spiky? This intuition that ties certain sounds to shapes is oddly reliable all over the world, and for at least a century, scientists have considered it a clue to the origin of language, theorizing that maybe our ancestors built their first words upon these instinctive associations between sound and meaning. But now a new study adds an unexpected twist: baby chickens make these same sound-shape connections, suggesting that the link to human language may not be so unique. The results, published today in Science, challenge a long-standing theory about the so-called bouba-kiki effect: that it might explain how humans first tethered meaning to sound to create language. Perhaps, the thinking goes, people just naturally agree on certain associations between shapes and sounds because of some innate feature of our brain or our world. But if the barnyard hen also agrees with such associations, you might wonder if we've been pecking at the wrong linguistic seed.

Maria Loconsole, a comparative psychologist at the University of Padua in Italy, and her colleagues decided to investigate the bouba-kiki effect in baby chicks because the birds could be tested almost immediately after hatching, before their brain would be influenced by exposure to the world. The researchers placed chicks in front of two panels: one featured a flowerlike shape with gently rounded curves; the other had a spiky blotch reminiscent of a cartoon explosion. They then played recordings of humans saying either "bouba" or "kiki" and observed the birds' behavior. When the chicks heard "bouba," 80 percent of them approached the round shape first and spent an average of more than three minutes exploring it compared with an average of just under one minute spent exploring the spiky shape. The exploration preferences were flipped when the chicks heard "kiki."

Because the tests took place within the chicks' carefully supervised first hours of life outside their eggshell, this association between particular sounds and shapes couldn't have been learned from experience. Instead it may be evidence of an innate perceptual bias that goes back way farther in our evolutionary history than previously believed. "We parted with birds on the evolutionary line 300 million years ago," says Aleksandra Ćwiek, a linguist at Nicolaus Copernicus University in Toruń, Poland, who was not involved in the study. "It's just mind-blowing."

Biotech

DNA Mutations Discovered In the Children of Chernobyl Workers (sciencealert.com) 38

Researchers performed genome sequencing scans on 130 people whose fathers were Chernobyl cleanup workers. Comparing the scans to control groups, they found evidence for the first time for "a transgenerational effect" from the father's prolonged exposure to low-dose ionizing radiation.

ScienceAlert reports: Rather than picking out new DNA mutations in the next generation, they looked for what are known as clustered de novo mutations (cDNMs): two or more mutations in close proximity, found in the children but not the parents. These would be mutations resulting from breaks in the parental DNA caused by radiation exposure. "We found a significant increase in the cDNM count in offspring of irradiated parents, and a potential association between the dose estimations and the number of cDNMs in the respective offspring," write the researchers in their published paper... This fits with the idea that radiation creates molecules known as reactive oxygen species, which are able to break DNA strands — breaks which can leave behind the clusters described in this study, if repaired imperfectly.
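The cDNM definition lends itself to a simple sketch: take the mutations present in the child but not the parents, then group those that fall close together. The positions and the proximity window below are invented for illustration; the study's actual variant-calling pipeline is far more involved.

```python
# Toy cDNM scan: de novo mutations are child-only positions; a cluster is
# two or more of them within WINDOW base pairs of each other.
WINDOW = 20_000  # illustrative threshold, not the study's

def find_clusters(child_pos, parent_pos, window=WINDOW):
    de_novo = sorted(set(child_pos) - set(parent_pos))
    clusters, current = [], []
    for pos in de_novo:
        if current and pos - current[-1] > window:
            if len(current) >= 2:             # keep only true clusters
                clusters.append(current)
            current = []
        current.append(pos)
    if len(current) >= 2:
        clusters.append(current)
    return clusters

child  = [1_000, 5_000, 300_000, 310_000, 900_000]   # invented positions
parent = [5_000]
print(find_clusters(child, parent))           # [[300000, 310000]]
```

The count of such clusters per offspring, weighed against the father's estimated radiation dose, is the statistic the researchers report as elevated.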

The good news is that the risk to health should be relatively small: children of exposed parents weren't found to have any higher risk of disease. This is partly because a lot of the cDNMs likely fall in 'non-coding' DNA, rather than in genes that directly encode proteins.

Programming

Spotify Says Its Best Developers Haven't Written a Line of Code Since December, Thanks To AI (techcrunch.com) 106

Spotify's best developers have stopped writing code manually since December and now rely on an internal AI system called Honk that enables remote, real-time code deployment through Claude Code, the company's co-CEO Gustav Soderstrom said during a fourth-quarter earnings call this week.

Engineers can fix bugs or add features to the iOS app from Slack on their phones during their morning commute and receive a new version of the app pushed to Slack before arriving at the office. The system has helped Spotify ship more than 50 new features throughout 2025, including AI-powered Prompted Playlists, Page Match for audiobooks, and About This Song. Soderstrom credited the system with speeding up coding and deployment tremendously and called it "just the beginning" for AI development at Spotify. The company is building a unique music dataset that differs from factual resources like Wikipedia because music-related questions often lack single correct answers -- workout music preferences vary from American hip-hop to Scandinavian heavy metal.

Open Source

When 20-Year-Old Bill Gates Fought the World's First Software Pirates (thenewstack.io) 83

Long-time Slashdot reader destinyland writes: "Just months after his 20th birthday, Bill Gates had already angered the programmer community," remembers this 50th-anniversary commemoration of Gates' Open Letter to Hobbyists. "As the first home computers began appearing in the 1970s, the world faced a question: Would its software be free?"

Gates railed in 1976 that "Most of you steal your software." Gates had coded the BASIC interpreter for Altair's first home computer with Paul Allen and Monte Davidoff — only to see it pirated by Steve Wozniak's friends at the Homebrew Computer Club. Expecting royalties, a none-too-happy Gates issued his letter in the club's newsletter (as well as Altair's own publication), complaining "I would appreciate letters from any one who wants to pay up."

But freedom-loving coders had other ideas. When Steve Wozniak and Steve Jobs released their Apple I home computer that summer, they stressed that "our philosophy is to provide software for our machines free or at minimal cost..." And early open-source hackers began writing their own Tiny BASIC interpreters as a free alternative to the Gates/Micro-Soft code. This led to the first occurrence of the phrase "Copyleft" in October of 1976.

Open Source definition author Bruce Perens shares his thoughts today. "When I left Pixar in 2000, I stopped in Steve Jobs' office — which for some reason was right across the hall from mine..." Perens remembered. "I asked Steve: 'You still don't believe in this Linux stuff, do you...?'" And Perens remembers how that movement finally won over Steve Jobs and carried the day. "Three years later, Steve stood onstage in front of a slide that said 'Open Source: We Think It's Great!' as he introduced the Safari browser, which at that time was based on the browser engine developed by the KDE Open Source project!"

Printer

Washington State May Mandate 'Firearm Blueprint Detection Algorithms' For 3D Printers (adafruit.com) 123

Adafruit managing director Phillip Torrone (also long-time Slashdot reader ptorrone) writes: Washington State lawmakers are proposing bills (HB 2320 and HB 2321) that would require 3D printers and CNC machines to block certain designs using software-based "firearms blueprint detection algorithms." In practice, this means scanning every print file, comparing it against a government-maintained database, and preventing "skilled users" from bypassing the system.

Supporters frame this as a response to untraceable "ghost guns," but even federal prosecutors admit the tools involved are ordinary manufacturing equipment. Critics warn the language is overbroad, technically unworkable, hostile to open source, and likely to push printing toward cloud-locked, subscription-based systems—while doing little to stop criminals.
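The detection scheme the bills describe, scanning each print file against a database of known blueprints, can be sketched as a naive exact-hash blocklist. Everything below is illustrative (the function name, the blocklist entry, and the use of SHA-256 are assumptions, since the bills don't specify a mechanism), and it also illustrates one reason critics call the approach unworkable: a single changed byte in a model file defeats an exact-hash match.

```python
import hashlib

# Hypothetical government-maintained blocklist of blueprint hashes.
# This entry is the SHA-256 of an empty file, used only for illustration.
BLOCKED_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_blocked(model_bytes: bytes) -> bool:
    """Exact-hash check of a print file against the blocklist."""
    return hashlib.sha256(model_bytes).hexdigest() in BLOCKED_SHA256

print(is_blocked(b""))    # True: exact match against the blocklist
print(is_blocked(b"\n"))  # False: one added byte evades the check
```

A real system would need fuzzy geometric matching rather than hashing, which is exactly where critics argue the "skilled users" requirement becomes technically unworkable.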

GNU is Not Unix

Richard Stallman Critiques AI, Connected Cars, Smartphones, and DRM (youtube.com) 77

Richard Stallman spoke Friday at Atlanta's Georgia Institute of Technology, continuing his activism for free software while also addressing today's new technologies.

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..." He makes a point of calling large language models "generators" because "They generate text and they don't understand really what that text means." (And they also make mistakes "without batting a virtual eyelash. So you can't trust anything that they generate.") Stallman says "Every time you call them AI, you are endorsing the claim that they are intelligent and they're not. So let's refuse to do that."

"So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them."

"By the way, as far as I can tell, none of them is free software."

When it comes to today's cars, Stallman says they contain "malicious functionalities... Cars should not be connected. They should not upload anything." (He adds that "I am hoping to find a skilled mechanic to work with me in a project to make disconnected cars.")

And later Stallman calls the smartphone "an Orwellian tracking and surveillance device," saying he refuses to own one. (An advantage of free software is that it allows the removal of malicious functionalities.)

Stallman spoke for about 53 minutes — but then answered questions for nearly 90 minutes longer. Here are some of the highlights...

Security

Infotainment, EV Charger Exploits Earn $1M at Pwn2Own Automotive 2026 (securityweek.com) 13

Trend Micro's Zero Day Initiative sponsored its third annual Pwn2Own Automotive competition in Tokyo this week, receiving 73 entries, the most ever for a Pwn2Own event.

"Under Pwn2Own rules, all disclosed vulnerabilities are reported to affected vendors through ZDI," reports Help Net Security, "with public disclosure delayed to allow time for patches." Infotainment platforms from Tesla, Sony, and Alpine were among the systems compromised during demonstrations. Researchers achieved code execution using techniques that included buffer overflows, information leaks, and logic flaws. One Tesla infotainment unit was compromised through a USB-based attack, resulting in root-level access. Electric vehicle charging infrastructure also received significant attention. Teams successfully demonstrated exploits against chargers from Autel, Phoenix Contact, ChargePoint, Grizzl-E, Alpitronic, and EMPORIA. Several attacks involved chaining multiple vulnerabilities to manipulate charging behavior or execute code on the device. These demonstrations highlighted how charging stations operate as network-connected systems with direct interaction with vehicles.

There are video recaps on the ZDI YouTube channel — apparently the Fuzzware.io researchers "were able to take over a Phoenix Contact EV charger over Bluetooth."

Three researchers also exploited Alpitronic's HYC50 fast charger with a classic TOCTOU bug, according to the event's site, "and installed a playable version of Doom to boot." They earned $20,000 — part of the $1,047,000 USD awarded during the three-day event.
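The "classic TOCTOU" (time-of-check-to-time-of-use) bug class the researchers used can be illustrated generically. This minimal Python sketch is not the actual Alpitronic exploit (whose details aren't public here); it just shows the pattern: a state change in the gap between a check and the use of its result lets a stale check pass.

```python
import os
import tempfile

LIMIT = 10  # hypothetical policy: only read files up to 10 bytes

def is_small_enough(path):
    # time of check: validate the file against the policy
    return os.path.getsize(path) <= LIMIT

def read_if_small(path, interleave=None):
    ok = is_small_enough(path)   # check
    if interleave:
        interleave()             # the gap an attacker races into
    if ok:
        with open(path) as f:    # use: the checked state may be stale
            return f.read()
    return None

# demo: the file mutates between check and use, so the stale check passes
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("tiny")

def attacker():
    with open(path, "a") as f:
        f.write("X" * 1000)      # grow the file past the limit

result = read_if_small(path, interleave=attacker)
print(len(result))               # 1004 bytes read despite the 10-byte limit
os.remove(path)
```

The standard fix is to make check and use a single atomic operation (e.g. open the file once and validate the open handle), rather than checking a path and then reopening it.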

More coverage from SecurityWeek: The winner of the event, the Fuzzware.io team, earned a total of $215,500 for its exploits. The team received the highest individual reward: $60,000 for an Alpitronic HYC50 EV charger exploit delivered through the charging gun. ZDI described it as "the first public exploit of a supercharger".
