The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. Today's X.509 certificates are about 64 bytes in size, and comprise six elliptic curve signatures and two EC public keys. This material can be cracked through the quantum-enabled Shor's algorithm. Certificates containing the equivalent quantum-resistant cryptographic material are roughly 2.5 kilobytes. All this data must be transmitted when a browser connects to a site.

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of the material required by more traditional verification processes in public key infrastructure. Merkle Tree Certificates "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The [Merkle Tree Certificates] MTCs use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same 64-byte length they are now [...]. The new system has already been implemented in Chrome.
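The core mechanism here is a standard Merkle inclusion proof: proving that one leaf belongs to a tree whose root the verifier already trusts takes only the sibling hashes along the leaf's path, about log2(n) hashes rather than the whole tree. Below is a minimal Python sketch of that idea; it is a toy illustration only, not the actual Merkle Tree Certificate wire format or hash choice.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, standing in for whatever hash the real scheme specifies."""
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash leaves pairwise up to a single root (the signed 'tree head')."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:          # duplicate the last node on odd-sized levels
            prev = prev + [prev[-1]]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels                  # levels[-1][0] is the root / tree head

def inclusion_proof(levels, index):
    """Sibling hash at each level: all a verifier needs besides the leaf itself."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    """Recompute the path from the leaf up to the root using the sibling hashes."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

certs = [f"cert-{i}".encode() for i in range(8)]
levels = build_tree(certs)
root = levels[-1][0]               # the CA signs only this value
proof = inclusion_proof(levels, 5)
print(verify(certs[5], 5, proof, root), len(proof))  # True 3
```

With eight "certificates" the proof is just three hashes; with a million it would be twenty, which is why a single signed tree head can stand in for millions of per-certificate signatures.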

AI

Southern California Air Board Rejects Pollution Rules After AI-Generated Flood of Comments 52

Southern California's air quality board rejected proposed rules to phase out gas-powered appliances after receiving more than 20,000 opposition comments generated through CiviClick, "the first and best AI-powered grassroots advocacy platform." Phys.org reports: A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign "left the staff of the Southern California Air Quality Management District (SCAQMD) reeling," the article says. It is not clear how AI was deployed in the campaign, and officials at CiviClick did not respond to repeated requests for comment. But their website boasts several tools, including "state of the art technology and artificial intelligence message assistance" that can be used to create custom advocacy letters, as opposed to repetitive form letters or petitions often used in similar campaigns.

When staffers at the air district reached out to a small sample of people to verify their comments, at least three said they had not written to the agency and were not aware of any such messages, records show. But the email onslaught almost certainly influenced the board's June decision, according to agency insiders, who noted that the number of public comments typically submitted on agenda items can be counted on one hand.

The proposed rules were nearly two years in the making and would have placed a fee on natural gas-powered water heaters and furnaces, favoring electric ones, in an effort to reduce air pollution in the district, which includes Orange County and large swaths of Los Angeles, Riverside and San Bernardino counties. Gas appliances emit nitrogen oxides, or NOx -- key pollutants for forming smog. The implications are troubling, experts said, and go beyond the use of natural gas furnaces and heaters in the second-largest metropolitan area in the country.
Biotech

Human Brain Cells On a Chip Learned To Play Doom In a Week (newscientist.com) 35

At Cortical Labs, living human neurons grown on a chip learned to play Doom in about a week. "While its performance is not up to par with humans, experts say it brings biological computers a step closer to useful real-world applications, like controlling robot arms," reports New Scientist. From the report: In 2021, the Australian company Cortical Labs used its neuron-powered computer chips to play Pong. The chips consisted of clumps of more than 800,000 living brain cells grown on top of microelectrode arrays that can both send and receive electrical signals. Researchers had to carefully train the chips to control the paddles on either side of the screen. Now, Cortical Labs has developed an interface that makes it easier to program these chips using the popular programming language Python. An independent developer, Sean Cole, then used Python to teach the chips to play Doom, which he did in around a week.

"Unlike the Pong work that we did a few years ago, which represented years of painstaking scientific effort, this demonstration has been done in a matter of days by someone who previously had relatively little expertise working directly with biology," says Brett Kagan of Cortical Labs. "It's this accessibility and this flexibility that makes it truly exciting."

The neuronal computer chip, which used about a quarter as many neurons as the Pong demonstration, played Doom better than a randomly firing player, but far below the performance of the best human players. However, it learnt much faster than traditional, silicon-based machine learning systems and should be able to improve its performance with newer learning algorithms, says Kagan. However, it's not useful to compare the chips with human brains, he says. "Yes, it's alive, and yes, it's biological, but really what it is being used as is a material that can process information in very special ways that we can't recreate in silicon."
Cortical Labs posted a YouTube video showing its CL1 biological computer running Doom. There's also source code available on GitHub, with additional details in a README file.
Books

Hyperion Author Dan Simmons Dies From Stroke At 77 (arstechnica.com) 17

Author Dan Simmons, best known for the epic sci-fi novel Hyperion and its sequels, has died at 77 following a stroke. Ars Technica's Eric Berger remembers Simmons, writing: Simmons, who worked in elementary education before becoming an author in the 1980s, produced a broad portfolio of writing that spanned several genres, including horror fiction, historical fiction, and science fiction. Often, his books included elements of all of these. This obituary will focus on what is generally considered his greatest work, and what I believe is possibly the greatest science fiction novel of all time, Hyperion.

Published in 1989, Hyperion is set in a distant future in which human settlement spans hundreds of planets. The novel feels both familiar, in that its structure follows Chaucer's Canterbury Tales, and utterly unfamiliar in its strange, far-flung setting.
Simmons' Hyperion appeared in an Ask Slashdot story back in 2008, when Slashdot reader willyhill asked for tips on how Slashdotters track down great sci-fi. If you're in the mood for a little nostalgia, or just want to browse the thread for book recommendations, it's well worth revisiting.
NASA

Nasa Announces Artemis III Mission No Longer Aims To Send Humans To Moon (theguardian.com) 128

Nasa announced on Friday radical changes to its delayed Artemis III mission to land humans back on the moon, as the US space agency grapples with technical glitches and criticism that it is trying to do too much too soon. From a report: The abrupt shift in strategy was laid out by the space agency's recently confirmed administrator, Jared Isaacman. Announcing the changes on Friday, he said that Nasa would introduce at least one new moon flight before attempting to put humans back on the lunar surface for the first time in more than half a century, in 2028.

The new, more incremental approach would give the Nasa team a chance to test-fly and refine its technology. As part of the changes, the Artemis II mission to fly humans around the moon this year, without landing, would also be pushed back from its latest scheduled launch on 6 March to 1 April at the earliest.

"Everybody agrees this is the only way forward," Isaacman told reporters at a news conference. "I know this is how Nasa changed the world, and this is how Nasa is going to do it again."

China

A Chinese Official's Use of ChatGPT Accidentally Revealed a Global Intimidation Operation (cnn.com) 27

A sprawling Chinese influence operation -- accidentally revealed by a Chinese law enforcement official's use of ChatGPT -- focused on intimidating Chinese dissidents abroad, including by impersonating US immigration officials, according to a new report from ChatGPT-maker OpenAI. From a report: The Chinese law enforcement official used ChatGPT like a diary to document the alleged covert campaign of suppression, OpenAI said. In one instance, Chinese operators allegedly disguised themselves as US immigration officials to warn a US-based Chinese dissident that their public statements had supposedly broken the law, according to the ChatGPT user. In another case, they describe an effort to use forged documents from a US county court to try to get a Chinese dissident's social media account taken down.

The report offers one of the most vivid examples yet of how authoritarian regimes can use AI tools to document their censorship efforts. The influence operation appeared to involve hundreds of Chinese operators and thousands of fake online accounts on various social media platforms, according to OpenAI.

AI

Metacritic Will Kick Out Media Attempting To Submit AI Generated Reviews (gamereactor.eu) 1

An anonymous reader shares a report: While some see AI as just another tool, how it should be deployed responsibly is being heavily debated online across a wide range of industries. When it comes to journalistic content -- in this particular instance, reviews -- review aggregator Metacritic has taken a firm stance against content submitted to its platform that has been generated by artificial intelligence in some way.

In a statement by co-founder Marc Doyle, sent to Gamereactor, he says this: "Metacritic has been a reputable review source for a quarter century and has maintained a rigorous vetting process when adding new publications to our slate of critics. However, in certain instances such as a publication being sold or a writing staff having turned over, problems can arise such as plagiarism, theft, or other forms of fraud including AI-generated reviews. Metacritic's policy is to never include an AI-generated critic review on Metacritic and if we discover that one has been posted, we'll remove it immediately and sever ties with that publication indefinitely pending a thorough investigation."

So, what is this about specifically? Well, it's probably a sound guess that this pertains to Videogamer's review of Resident Evil 9: Requiem, which was removed from the platform after a barrage of comments accused the review of being AI-written and its author of being made up.

AI

Sam Altman Says OpenAI Shares Anthropic's Red Lines in Pentagon Fight (axios.com) 51

An anonymous reader shares a report: OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work. It would also be the first time the nation's top AI leaders have taken a collective stand on how the U.S. government can and can't use their technology.

Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts. Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."

Education

Microsoft: Computer Programming Is Dying, Long Live AI Literacy 104

theodp writes: On Tuesday, Microsoft GM of Education and Workforce Policy (and former Code.org Chief Academic Officer) Pat Yongpradit posted an obituary of sorts for coders. "Computer programmers and software developers are codified differently in the BLS [Bureau of Labor Statistics] data," Yongpradit wrote. "The modern AI-infused world needs less computer programmers (coders) and more software developers (more holistic and higher level). So when folks say that there is less hiring of computer programmers, they are right. But there will be more hiring of software developers, especially those who have adopted an AI-forward mindset and skillset. [...] The number of just pure computer programming roles has already been declining due to reasons like outsourcing, AI will just accelerate the decline."

On Wednesday, Yongpradit's colleague Allyson Knox, Senior Director of Education and Workforce Policy at Microsoft, put another AI nail in the coder coffin, testifying before the House Committee on Education -- the Workforce Subcommittee on Early Childhood, Elementary, and Secondary Education on Building an AI-ready America: Teaching in the Age of AI. "Thank you to Chairman Tim Walberg, Ranking Member Bobby Scott, Chair Kevin Kiley, Ranking Member Suzanne Bonamici and members of the Subcommittee for the opportunity to share Microsoft perspective and that of the educators and parents we hear from every day across the country," Knox wrote in a LinkedIn post.

"Three themes continue to emerge throughout these discussions: 1. Educators want support to build AI literacy and critical thinking skills. 2. Schools need guidance and guardrails to ensure student data is protected and adults remain in control. 3. Teachers want classroom-ready tools, and a voice in shaping them. If we focus on these priorities, we can help ensure AI expands opportunity for every student across the United States."

Yongpradit and Knox report up to Microsoft President Brad Smith, who last July told Code.org CEO Hadi Partovi it was time for the tech-backed nonprofit to "switch hats" from coding to AI as Microsoft announced a new $4 billion initiative to advance AI education. Smith's thoughts on the extraordinary promise of AI in education were cited by Knox in her 2026 Congressional testimony. Interestingly, Knox argued for the importance of computer programming literacy in her 2013 Congressional testimony at a hearing on Our Nation of Builders: Training the Builders of the Future. "Congress needs to come up with fresh ideas on how we can continue to train the next generation of builders, programmers, manufacturers, technicians and entrepreneurs," Rep. Lee Terry said to open the discussion.

So, are reports of computer programming's imminent death greatly exaggerated?
NASA

NASA Reveals Identity of Astronaut Who Suffered Medical Incident Aboard ISS (nbcnews.com) 25

Longtime Slashdot reader ArchieBunker shares a report from NBC News: NASA revealed that astronaut Mike Fincke was the crew member who suffered a medical incident at the International Space Station in January, which prompted the agency to carry out the first evacuation due to a medical issue in the space station's 25-year history. The rare decision to cut a mission short and bring Fincke and three other crew members home early made for a dramatic week in space early this year.

In a statement released by NASA "at the request of Fincke," the veteran astronaut said he experienced a medical event on Jan. 7 "that required immediate attention" from his space station crew members. "Thanks to their quick response and the guidance of our NASA flight surgeons, my status quickly stabilized," Fincke, 58, said in the statement. [...] In his statement, Fincke thanked his Crew-11 colleagues, along with NASA astronaut Chris Williams and Russian cosmonauts Sergey Kud-Sverchkov and Sergei Mikaev, who were also aboard the space station at the time and are still in space. Fincke also thanked the teams at NASA, SpaceX and the medical professionals at Scripps Memorial Hospital La Jolla. "Their professionalism and dedication ensured a positive outcome," he said.

Fincke ended his statement by saying he is "doing very well" and still actively involved with standard post-flight reconditioning at NASA's Johnson Space Center in Houston. "Spaceflight is an incredible privilege, and sometimes it reminds us just how human we are," he said. "Thank you for all your support."

The Military

Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon (apnews.com) 84

An anonymous reader quotes a report from the Associated Press: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow wider use of its technology. The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations, but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."

The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."

Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said.
In a post on X, Parnell said Anthropic will "have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW."
China

Chinese Official's Use of ChatGPT Revealed a Global Intimidation Operation (cnn.com) 20

New submitter sabbede shares a report from CNN Politics: "This is what Chinese modern transnational repression looks like," Ben Nimmo, principal investigator at OpenAI, told reporters ahead of the report's release, describing the sprawling Chinese influence operation accidentally revealed by a law enforcement official's use of ChatGPT. "It's not just digital. It's not just about trolling. It's industrialized. It's about trying to hit critics of the CCP [Chinese Communist Party] with everything, everywhere, all at once."

Michael Horowitz, a former Pentagon official focused on emerging technologies, said the report from OpenAI "clearly demonstrates the way that China is actively employing AI tools to enhance information operations. US-China AI competition is continuing to intensify. This competition is not just taking place at the frontier, but in how China's government is planning and implementing the day-to-day of their surveillance and information apparatus."
Businesses

Which Piece of Speculative Fiction Had the Greatest Single-Day Stock Market Impact? (ft.com) 27

Speaking of Citrini's blog post, which imagines a near-future AI-driven economic collapse, and which ended up helping to trigger the S&P 500's worst single-day drop in nearly two weeks on Monday, FT Alphaville decided to track how US stock markets have moved on the release days of notable dystopian speculative fiction throughout history. The story adds: You may contend that this is facile. We would agree. You might contend that the comparisons make no sense because it's possible to read a blog post during a single work shift, but it's trickier to complete a whole novel (or sneak out to watch a movie). We would contend: do you really think traders read? Let's begin. The methodology -- tracking S&P 500 daily moves for post-1986 releases and DJIA moves for pre-1986 ones -- crowned The Matrix as the all-time leader, its March 1999 US debut coinciding with a 1.11% drop in the index. Citrini's "The 2028 Global Intelligence Crisis" came in a close second at -1.04%. On the positive end, the 2013 release of Her, a film about a man falling in love with an AI agent, coincided with the largest gain in the set at +1.66%.
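The methodology is simple enough to reproduce. The sketch below uses fabricated index closes purely for illustration (FT worked from real S&P 500 and DJIA data); it just shows the daily-move calculation applied to a release date.

```python
# Toy version of the methodology: percent move of an index on a release day.
# The closes below are fabricated for illustration; FT used real S&P 500
# (post-1986) and DJIA (pre-1986) data.
closes = {
    "release_eve": 1000.0,   # close of the prior trading session
    "release_day": 988.9,    # close on the work's release date
}

def release_day_move(closes, prev_key, day_key):
    """Daily percent change, the number used to rank each work."""
    prev, cur = closes[prev_key], closes[day_key]
    return 100.0 * (cur - prev) / prev

move = release_day_move(closes, "release_eve", "release_day")
print(f"{move:+.2f}%")  # -1.11% with these invented closes
```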
Government

The Government Just Made it Harder to See What Spy Tech it Buys 17

An anonymous reader shares a report: It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.

Or rather, it was an incredible tool and the basis for countless investigations, both my own and others'. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.

"FPDS may have been a little clunky, but its simple, old-school interface made it extremely functional and robust. Every facet of government operations touches on contracting at one point, and this was the first tool that many investigative journalists and researchers would reach for to quickly find out what the government is buying and who is selling it, and how these contracts all fit together," Dave Maass, director of investigations at the Electronic Frontier Foundation, told me.
AI

Burger King Will Use AI To Check If Employees Say 'Please' and 'Thank You' (theverge.com) 124

An anonymous reader shares a report: Burger King is launching an AI chatbot that will live in the headsets used by employees. The voice-enabled chatbot, called "Patty," is part of an overarching BK Assistant platform that will not only assist employees with meal preparation but also evaluate their interactions with customers for "friendliness."

Thibault Roux, Burger King's chief digital officer, tells The Verge that the company compiled information from franchisees and guests on how to measure friendliness, resulting in the fast food chain training its AI system to recognize certain words and phrases, such as "welcome to Burger King," "please," and "thank you." Managers can then ask the AI assistant how their location is performing on friendliness. "This is all meant to be a coaching tool," Roux says, adding that the company is "iterating" on capturing the tone of conversations as well.
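The phrase-recognition half of such a system is easy to illustrate. The sketch below is a deliberately naive toy with an invented phrase list and scoring scheme; as the article notes, the real system is also trying to capture tone, which simple phrase counting cannot.

```python
# Toy illustration of keyword-based "friendliness" scoring over transcripts.
# The phrase list and scoring scheme are invented; Burger King's actual
# system also reportedly tries to capture tone, which this does not.
FRIENDLY_PHRASES = ("welcome to burger king", "please", "thank you")

def friendliness_score(transcript: str) -> int:
    """Count friendly-phrase occurrences in one customer interaction."""
    text = transcript.lower()
    return sum(text.count(phrase) for phrase in FRIENDLY_PHRASES)

def location_report(transcripts):
    """Average score per interaction -- the number a manager might query."""
    return sum(map(friendliness_score, transcripts)) / len(transcripts)

shift = [
    "Welcome to Burger King! What can I get you, please?",
    "That's $8.20. Thank you!",
    "Next.",
]
print(location_report(shift))  # (2 + 1 + 0) / 3 = 1.0
```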

Businesses

Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com) 101

Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

Amazon Web Services managers have dashboards showing individual engineer AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."
AI

Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox (techcrunch.com) 75

Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a "speed run" while ignoring her repeated commands from her phone to stop.

"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller "toy" inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction -- a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
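The failure mode Yue describes can be sketched abstractly. In this toy example (not OpenClaw's actual compaction logic), messages beyond a fixed budget are collapsed into a summary stub, and an instruction that lands in the summarized region effectively disappears from the agent's working context.

```python
# Toy sketch of context "compaction" (not OpenClaw's actual logic): when a
# transcript exceeds a budget, older turns are collapsed into a crude summary.
# Any instruction that falls into the summarized region effectively vanishes.
BUDGET = 6  # max messages kept verbatim; real systems budget tokens instead

def compact(messages):
    """Keep the most recent messages; replace the rest with a summary stub."""
    if len(messages) <= BUDGET:
        return messages
    head, tail = messages[:-(BUDGET - 1)], messages[-(BUDGET - 1):]
    return [f"[summary of {len(head)} earlier messages]"] + tail

history = [f"user: archive email #{i}" for i in range(8)]
history.append("user: STOP deleting things")      # the critical instruction
history += [f"agent: deleted email #{i}" for i in range(6)]

window = compact(history)
print(any("STOP" in m for m in window))  # False: the stop order was compacted away
```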
Security

CrowdStrike Says Attackers Are Moving Through Networks in Under 30 Minutes (cyberscoop.com) 30

An anonymous reader shares a report: Cyberattacks reached victims faster and came from a wider range of threat groups than ever last year, CrowdStrike said in its annual global threat report released Tuesday, adding that cybercriminals and nation-states increasingly relied on predictable tactics to evade detection by exploiting trusted systems.

The average breakout time -- how long it took financially-motivated attackers to move from initial intrusion to other network systems -- dropped to 29 minutes in 2025, a 65% increase in speed from the year prior. "The fastest breakout time a year ago was 51 seconds. This year it's 27 seconds," Adam Meyers, head of counter adversary operations at CrowdStrike, told CyberScoop. Defenders are falling behind because attackers are refining their techniques, using social engineering to access high-privilege systems faster and move through victims' cloud infrastructure undetected.

Social Networks

Discord Distances Itself From Persona Age Verification After User Backlash (theverge.com) 26

Discord is attempting to distance itself from the age verification provider Persona following a steady stream of user backlash. From a report: In an emailed statement to The Verge, Discord's head of product policy, Savannah Badalich, confirms the company "ran a limited test of Persona in the UK where age assurance had previously launched and that test has since concluded."

After Discord announced plans to implement age verification globally starting next month, users across social media accused Discord of "lying" about how it plans on handling face scans and ID uploads. Much of the criticism was directed toward Discord's partnership with Persona, an age verification provider also used by Reddit and Roblox.

AI

Viral Doomsday Report Lays Bare Wall Street's Deep Anxiety About AI Future 52

A 7,000-word "doomsday" thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, "painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work," reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don't go far enough. The "global intelligence crisis" is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? "For the entirety of modern economic history, human intelligence has been the scarce input," Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. "We are now experiencing the unwind of that premium."

Many of Monday's moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms DataDog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines' 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone -- all name-checked by Citrini -- tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.

[...] Monday's market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed "one long daisy chain of correlated bets on white-collar productivity growth" that AI is poised to disrupt. [...] Shares in DoorDash also veered 6.6% lower Monday after Citrini's Substack note called the delivery app a "poster child" for how new tools would upend companies that monetize interpersonal friction. In the research firm's scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.
