China

A Chinese Official's Use of ChatGPT Accidentally Revealed a Global Intimidation Operation (cnn.com) 27

A sprawling Chinese influence operation -- accidentally revealed by a Chinese law enforcement official's use of ChatGPT -- focused on intimidating Chinese dissidents abroad, including by impersonating US immigration officials, according to a new report from ChatGPT-maker OpenAI. From a report: The Chinese law enforcement official used ChatGPT like a diary to document the alleged covert campaign of suppression, OpenAI said. In one instance, Chinese operators allegedly disguised themselves as US immigration officials to warn a US-based Chinese dissident that their public statements had supposedly broken the law, according to the ChatGPT user. In another case, they describe an effort to use forged documents from a US county court to try to get a Chinese dissident's social media account taken down.

The report offers one of the most vivid examples yet of how authoritarian regimes can use AI tools to document their censorship efforts. The influence operation appeared to involve hundreds of Chinese operators and thousands of fake online accounts on various social media platforms, according to OpenAI.

AI

Metacritic Will Kick Out Media Attempting To Submit AI Generated Reviews (gamereactor.eu) 1

An anonymous reader shares a report: While some see AI as simply another tool, how it should be used and deployed responsibly is being heavily debated across a wide range of industries. When it comes to journalistic content, and reviews in particular, review aggregator Metacritic has taken a firm stance against content submitted to its platform that has been generated by artificial intelligence in some way.

In a statement by co-founder Marc Doyle, sent to Gamereactor, he says this: "Metacritic has been a reputable review source for a quarter century and has maintained a rigorous vetting process when adding new publications to our slate of critics. However, in certain instances such as a publication being sold or a writing staff having turned over, problems can arise such as plagiarism, theft, or other forms of fraud including AI-generated reviews. Metacritic's policy is to never include an AI-generated critic review on Metacritic and if we discover that one has been posted, we'll remove it immediately and sever ties with that publication indefinitely pending a thorough investigation."

So, what is this about specifically? Well, it's probably a sound guess that this pertains to Videogamer's review of Resident Evil 9: Requiem, which was removed from the platform after a barrage of comments accused the review of being AI-written and the author of being made up.

AI

Sam Altman Says OpenAI Shares Anthropic's Red Lines in Pentagon Fight (axios.com) 51

An anonymous reader shares a report: OpenAI CEO Sam Altman wrote in a memo to staff that he will draw the same red lines that sparked a high-stakes fight between rival Anthropic and the Pentagon: no AI for mass surveillance or autonomous lethal weapons. If other leading firms like Google follow suit, this could massively complicate the Pentagon's efforts to replace Anthropic's Claude, which was the first model integrated into the military's most sensitive work. It would also be the first time the nation's top AI leaders have taken a collective stand about how the U.S. government can and can't use their technology.

Altman made clear he still wants to strike a deal with the Pentagon that would allow ChatGPT to be used for sensitive military contexts. Despite the show of solidarity, such a deal could see OpenAI replace Anthropic if the Pentagon follows through with its plan to declare the latter a "supply chain risk."

Education

Microsoft: Computer Programming Is Dying, Long Live AI Literacy 104

theodp writes: On Tuesday, Microsoft GM of Education and Workforce Policy (and former Code.org Chief Academic Officer) Pat Yongpradit posted an obituary of sorts for coders. "Computer programmers and software developers are codified differently in the BLS [Bureau of Labor Statistics] data," Yongpradit wrote. "The modern AI-infused world needs less computer programmers (coders) and more software developers (more holistic and higher level). So when folks say that there is less hiring of computer programmers, they are right. But there will be more hiring of software developers, especially those who have adopted an AI-forward mindset and skillset. [...] The number of just pure computer programming roles has already been declining due to reasons like outsourcing, AI will just accelerate the decline."

On Wednesday, Yongpradit's colleague Allyson Knox, Senior Director of Education and Workforce Policy at Microsoft, put another AI nail in the coder coffin, testifying before the House Committee on Education and the Workforce's Subcommittee on Early Childhood, Elementary, and Secondary Education at a hearing on Building an AI-ready America: Teaching in the Age of AI. "Thank you to Chairman Tim Walberg, Ranking Member Bobby Scott, Chair Kevin Kiley, Ranking Member Suzanne Bonamici and members of the Subcommittee for the opportunity to share Microsoft's perspective and that of the educators and parents we hear from every day across the country," Knox wrote in a LinkedIn post.

"Three themes continue to emerge throughout these discussions: 1. Educators want support to build AI literacy and critical thinking skills. 2. Schools need guidance and guardrails to ensure student data is protected and adults remain in control. 3. Teachers want classroom-ready tools, and a voice in shaping them. If we focus on these priorities, we can help ensure AI expands opportunity for every student across the United States."

Yongpradit and Knox report up to Microsoft President Brad Smith, who last July told Code.org CEO Hadi Partovi it was time for the tech-backed nonprofit to "switch hats" from coding to AI as Microsoft announced a new $4 billion initiative to advance AI education. Smith's thoughts on the extraordinary promise of AI in education were cited by Knox in her 2026 Congressional testimony. Interestingly, Knox argued for the importance of computer programming literacy in her 2013 Congressional testimony at a hearing on Our Nation of Builders: Training the Builders of the Future. "Congress needs to come up with fresh ideas on how we can continue to train the next generation of builders, programmers, manufacturers, technicians and entrepreneurs," said Rep. Lee Terry to open the discussion.

So, are reports of computer programming's imminent death greatly exaggerated?

NASA

NASA Reveals Identity of Astronaut Who Suffered Medical Incident Aboard ISS (nbcnews.com) 25

Longtime Slashdot reader ArchieBunker shares a report from NBC News: NASA revealed that astronaut Mike Fincke was the crew member who suffered a medical incident at the International Space Station in January, which prompted the agency to carry out the first evacuation due to a medical issue in the space station's 25-year history. The rare decision to cut a mission short and bring Fincke and three other crew members home early made for a dramatic week in space early this year.

In a statement released by NASA "at the request of Fincke," the veteran astronaut said he experienced a medical event on Jan. 7 "that required immediate attention" from his space station crew members. "Thanks to their quick response and the guidance of our NASA flight surgeons, my status quickly stabilized," Fincke, 58, said in the statement. [...] In his statement, Fincke thanked his Crew-11 colleagues, along with NASA astronaut Chris Williams and Russian cosmonauts Sergey Kud-Sverchkov and Sergei Mikaev, who were also aboard the space station at the time and are still in space. Fincke also thanked the teams at NASA, SpaceX and the medical professionals at Scripps Memorial Hospital La Jolla. "Their professionalism and dedication ensured a positive outcome," he said.

Fincke ended his statement by saying he is "doing very well" and still actively involved with standard post-flight reconditioning at NASA's Johnson Space Center in Houston. "Spaceflight is an incredible privilege, and sometimes it reminds us just how human we are," he said. "Thank you for all your support."

The Military

Anthropic CEO Says AI Company 'Cannot In Good Conscience Accede' To Pentagon (apnews.com) 84

An anonymous reader quotes a report from the Associated Press: Anthropic CEO Dario Amodei said Thursday the artificial intelligence company "cannot in good conscience accede" to the Pentagon's demands to allow wider use of its technology. The maker of the AI chatbot Claude said in a statement that it's not walking away from negotiations, but that new contract language received from the Defense Department "made virtually no progress on preventing Claude's use for mass surveillance of Americans or in fully autonomous weapons."

The Pentagon's top spokesman has reiterated that the military wants to use Anthropic's artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands. Sean Parnell said Thursday on social media that the Pentagon "has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement."

Anthropic's policies prevent its models, such as its chatbot Claude, from being used for those purposes. It's the last of its peers -- the Pentagon also has contracts with Google, OpenAI and Elon Musk's xAI -- to not supply its technology to a new U.S. military internal network. Parnell said the Pentagon wants to "use Anthropic's model for all lawful purposes" but didn't offer details on what that entailed. He said opening up use of the technology would prevent the company from "jeopardizing critical military operations." "We will not let ANY company dictate the terms regarding how we make operational decisions," he said.

In a post on X, Parnell said Anthropic will "have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW."

China

Chinese Official's Use of ChatGPT Revealed a Global Intimidation Operation (cnn.com) 20

New submitter sabbede shares a report from CNN Politics: A sprawling Chinese influence operation -- accidentally revealed by a Chinese law enforcement official's use of ChatGPT -- focused on intimidating Chinese dissidents abroad, including by impersonating US immigration officials, according to a new report from ChatGPT-maker OpenAI. "This is what Chinese modern transnational repression looks like," Ben Nimmo, principal investigator at OpenAI, told reporters ahead of the report's release. "It's not just digital. It's not just about trolling. It's industrialized. It's about trying to hit critics of the CCP [Chinese Communist Party] with everything, everywhere, all at once."

Michael Horowitz, a former Pentagon official focused on emerging technologies, said the report from OpenAI "clearly demonstrates the way that China is actively employing AI tools to enhance information operations. US-China AI competition is continuing to intensify. This competition is not just taking place at the frontier, but in how China's government is planning and implementing the day-to-day of their surveillance and information apparatus."

Businesses

Which Piece of Speculative Fiction Had the Greatest Single-Day Stock Market Impact? (ft.com) 27

Speaking of Citrini's blog post, which imagines a near-future AI-driven economic collapse and helped trigger the S&P 500's worst single-day drop in nearly two weeks on Monday, FT Alphaville decided to track how US stock markets have moved on the release days of notable dystopian speculative fiction throughout history. The story adds: You may contend that this is facile. We would agree. You might contend that the comparisons make no sense because it's possible to read a blog post during a single work shift, but it's trickier to complete a whole novel (or sneak out to watch a movie). We would contend: do you really think traders read? Let's begin. The methodology -- tracking S&P 500 daily moves for post-1986 releases and DJIA moves for pre-1986 ones -- crowned The Matrix as the all-time leader, its March 1999 US debut coinciding with a 1.11% drop in the index. Citrini's "The 2028 Global Intelligence Crisis" came in a close second at -1.04%. On the positive end, the 2013 release of Her, a film about a man falling in love with an AI agent, coincided with the largest gain in the set at +1.66%.

Government

The Government Just Made It Harder To See What Spy Tech It Buys 17

An anonymous reader shares a report: It might look like something from the early days of the internet, with its aggressively grey color scheme and rectangles nested inside rectangles, but FPDS.gov is one of the most important resources for keeping tabs on what powerful spying tools U.S. government agencies are buying. It includes everything from phone hacking technology, to masses of location data, to more Palantir installations.

Or rather, it was an incredible tool and the basis for countless investigations of my own and others'. Because on Wednesday, the government shut it down. Its replacement, another site called SAM.gov with Uncle Sam branding, frankly sucks, and makes it demonstrably harder to reliably find out what agencies, including Immigration and Customs Enforcement (ICE), are spending taxpayer dollars on.

"FPDS may have been a little clunky, but its simple, old-school interface made it extremely functional and robust. Every facet of government operations touches on contracting at one point, and this was the first tool that many investigative journalists and researchers would reach for to quickly find out what the government is buying and who is selling it, and how these contracts all fit together," Dave Maass, director of investigations at the Electronic Frontier Foundation, told me.

AI

Burger King Will Use AI To Check If Employees Say 'Please' and 'Thank You' (theverge.com) 124

An anonymous reader shares a report: Burger King is launching an AI chatbot that will live in the headsets used by employees. The voice-enabled chatbot, called "Patty," is part of an overarching BK Assistant platform that will not only assist employees with meal preparation but also evaluate their interactions with customers for "friendliness."

Thibault Roux, Burger King's chief digital officer, tells The Verge that the company compiled information from franchisees and guests on how to measure friendliness, resulting in the fast food chain training its AI system to recognize certain words and phrases, such as "welcome to Burger King," "please," and "thank you." Managers can then ask the AI assistant how their location is performing on friendliness. "This is all meant to be a coaching tool," Roux says, adding that the company is "iterating" on capturing the tone of conversations as well.

Businesses

Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com) 101

Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

Amazon Web Services managers have dashboards showing individual engineer AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."

AI

Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox (techcrunch.com) 75

Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a "speed run" while ignoring her repeated commands from her phone to stop.

"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller "toy" inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction -- a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
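The compaction failure mode Yue describes can be sketched in a few lines: an agent loop that collapses older messages into a summary once the context exceeds a token budget, which can silently drop a safety instruction buried in the compacted region. This is a hypothetical illustration, not OpenClaw's actual implementation; the function names and the crude word-count "tokenizer" are invented for the example.

```python
# Toy illustration of context "compaction": when the message history
# exceeds a token budget, older messages are collapsed into a summary.
# A critical instruction in the compacted region can be lost.

def count_tokens(messages):
    # Crude proxy: whitespace-delimited words stand in for tokens.
    return sum(len(m.split()) for m in messages)

def compact(messages, budget=30, keep_recent=2):
    """Summarize all but the most recent messages once over budget."""
    if count_tokens(messages) <= budget:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real agent would ask a model to summarize the old messages;
    # here we just note how many were collapsed -- their content is gone.
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent

history = [
    "user: clean up my inbox",
    "user: IMPORTANT: never delete anything, only label",  # critical rule
    "agent: scanned 5,000 messages",
    "agent: found 1,200 newsletters",
    "agent: found 300 stale threads",
    "agent: proposing actions for each category",
]

compacted = compact(history)
# The "never delete" instruction sat in the compacted region, so it no
# longer appears verbatim in the context the agent acts on.
survived = any("never delete" in m for m in compacted)
print(compacted)
print("critical instruction survived:", survived)
```

On the small "toy" inbox the history stays under budget and every instruction survives untouched, which matches Yue's account of why the agent earned her trust before failing at scale.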

Security

CrowdStrike Says Attackers Are Moving Through Networks in Under 30 Minutes (cyberscoop.com) 30

An anonymous reader shares a report: Cyberattacks reached victims faster and came from a wider range of threat groups than ever last year, CrowdStrike said in its annual global threat report released Tuesday, adding that cybercriminals and nation-states increasingly relied on predictable tactics to evade detection by exploiting trusted systems.

The average breakout time -- how long it took financially motivated attackers to move from initial intrusion to other network systems -- dropped to 29 minutes in 2025, a 65% increase in speed from the year prior. "The fastest breakout time a year ago was 51 seconds. This year it's 27 seconds," Adam Meyers, head of counter adversary operations at CrowdStrike, told CyberScoop. Defenders are falling behind because attackers are refining their techniques, using social engineering to access high-privilege systems faster and move through victims' cloud infrastructure undetected.

Social Networks

Discord Distances Itself From Persona Age Verification After User Backlash (theverge.com) 26

Discord is attempting to distance itself from the age verification provider Persona following a steady stream of user backlash. From a report: In an emailed statement to The Verge, Discord's head of product policy, Savannah Badalich, confirms the company "ran a limited test of Persona in the UK where age assurance had previously launched and that test has since concluded."

After Discord announced plans to implement age verification globally starting next month, users across social media accused Discord of "lying" about how it plans on handling face scans and ID uploads. Much of the criticism was directed toward Discord's partnership with Persona, an age verification provider also used by Reddit and Roblox.

AI

Viral Doomsday Report Lays Bare Wall Street's Deep Anxiety About AI Future 52

A 7,000-word "doomsday" thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, "painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work," reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don't go far enough. The "global intelligence crisis" is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? "For the entirety of modern economic history, human intelligence has been the scarce input," Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. "We are now experiencing the unwind of that premium."

Many of Monday's moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms DataDog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines' 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone -- all name-checked by Citrini -- tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.

[...] Monday's market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed "one long daisy chain of correlated bets on white-collar productivity growth" that AI is poised to disrupt. [...] Shares in DoorDash also veered 6.6% lower Monday after Citrini's Substack note called the delivery app a "poster child" for how new tools would upend companies that monetize interpersonal friction. In the research firm's scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.

Bitcoin

Trump's 'Board of Peace' Explores Stablecoin For Gaza (ft.com) 119

An anonymous reader quotes a report from the Financial Times: Officials working with Donald Trump's "Board of Peace" are exploring setting up a stablecoin for Gaza as part of efforts to reshape the devastated Palestinian enclave's economy, according to five people familiar with the discussions. The talks around introducing a stablecoin -- a type of cryptocurrency whose value is pegged to a mainstream currency, such as the US dollar -- are at a preliminary stage, and many details of how one could be introduced in Gaza remain to be determined.

But officials have discussed the idea as part of their plan for the future of the enclave, where economic activity collapsed during Israel's two-year war with Hamas and the traditional banking and payments system has been severely impaired. A person familiar with the project said the stablecoin was expected to be tied to the US dollar, with the hope that Gulf Arab and Palestinian companies with expertise in the field of digital currencies will help spearhead the effort. "This will not be a 'Gaza Coin' or a new Palestinian currency, but a means to allow Gazans to transact digitally," the person said.

Work on the idea is being led by Liran Tancman, an Israeli tech entrepreneur and former reservist who is now working as an unpaid adviser to Trump's "Board of Peace," the US-led body tasked with rebuilding Gaza, according to two people familiar with the matter. [...] According to the person familiar with the project, the "Board of Peace" and NCAG will decide on the stablecoin's regulatory framework and access, although "nothing definitive" has yet been finalized. Speaking at a meeting of the "Board of Peace" in Washington last week, Tancman said the NCAG was working on building "a secure digital backbone, an open platform enabling e-payments, financial services, e-learning, and healthcare with user control over data", but did not elaborate.

IBM

IBM Shares Crater 13% After Anthropic Says Claude Code Can Tackle COBOL Modernization (cnbc.com) 113

IBM shares plunged nearly 13% on Monday after Anthropic published a blog post arguing that its Claude Code tool could automate much of the complex analysis work involved in modernizing COBOL, the decades-old programming language that still underpins an estimated 95% of ATM transactions in the United States and runs on the kind of mainframe systems IBM has sold for generations.

Anthropic said the shrinking pool of developers who understand COBOL had long made modernization cost-prohibitive, and that AI could now flip that equation by mapping dependencies and documenting workflows across thousands of lines of legacy code. The sell-off deepened a rough 2026 for IBM, whose shares are now down more than 22% year to date.

IT

'How Many AIs Does It Take To Read a PDF?' (theverge.com) 61

Despite AI's progress in building complex software, the ubiquitous PDF remains something of a grand challenge -- a format Adobe developed in the early 1990s to preserve the precise visual appearance of documents. PDFs consist of character codes, coordinates, and rendering instructions rather than logically ordered text, and even state-of-the-art models asked to extract information from them will summarize instead, confuse footnotes with body text, or outright hallucinate contents, The Verge writes.

Companies like Reducto are now tackling the problem by segmenting pages into components -- headers, tables, charts -- before routing each to specialized parsing models, an approach borrowed from computer vision techniques used in self-driving vehicles. Researchers at Hugging Face recently found roughly 1.3 billion PDFs sitting in Common Crawl alone, and the Allen Institute for AI has noted that PDFs could provide trillions of novel, high-quality training tokens from government reports, textbooks, and academic papers -- the kind of data AI developers are increasingly desperate for.
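The underlying difficulty is that a PDF's content stream stores positioned glyph runs, not logically ordered prose. A minimal sketch of what any extractor must do, assuming a simplified list of (x, y, text) fragments rather than a real PDF parser: group fragments into lines by y-coordinate, then sort each line left to right. This naive pass is exactly where multi-column layouts and footnotes go wrong, since fragments from different columns share y-coordinates and get interleaved.

```python
# PDFs store text as positioned fragments, not as ordered prose.
# This toy reconstructs reading order from (x, y, text) fragments --
# the step where real extractors misorder columns and footnotes.
# (PDF y-coordinates grow upward, so larger y means nearer the top.)

def reconstruct_text(fragments, line_tolerance=2.0):
    """Group fragments into lines by y, then order each line by x."""
    # Sort top-to-bottom (descending y), then left-to-right.
    frags = sorted(fragments, key=lambda f: (-f[1], f[0]))
    lines, current, current_y = [], [], None
    for x, y, text in frags:
        if current and abs(y - current_y) > line_tolerance:
            # y jumped beyond tolerance: the previous line is complete.
            lines.append(" ".join(current))
            current = []
        if not current:
            current_y = y  # anchor the new line's baseline
        current.append(text)
    if current:
        lines.append(" ".join(current))
    return "\n".join(lines)

# Fragments as a renderer might emit them: out of order, two per line,
# with baselines that differ by a fraction of a point.
page = [
    (120.0, 700.0, "Report:"),
    (50.0, 700.0, "Quarterly"),
    (50.0, 680.5, "Revenue"),
    (140.0, 680.0, "up 12%"),
]
print(reconstruct_text(page))
```

Segmentation-first systems like the one the article attributes to Reducto sidestep this by classifying page regions (headers, tables, charts) before any such line reconstruction, so fragments are only ordered within a region.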

Earth

Climate Physicists Face the Ghosts in Their Machines: Clouds (quantamagazine.org) 25

Climate scientists trying to predict how much hotter the planet will get have long grappled with a surprisingly stubborn problem -- clouds, which both reflect sunlight and trap heat, account for more than half the variation between climate predictions and are the main reason warming projections for the next 50 years range from 2 to 6 degrees Celsius.

Two research groups are now racing to close that gap using AI, though they disagree sharply on method. Tapio Schneider at Caltech built CLIMA, a model that uses machine learning to optimize cloud parameters within traditional physics equations; it will be unveiled at a conference in Japan in March. Chris Bretherton at the Allen Institute for AI took a different path -- his ACE2 neural network, released in 2024, learns from 50 years of atmospheric data and largely bypasses physics equations altogether.

AI

Sam Altman Would Like To Remind You That Humans Use a Lot of Energy, Too (techcrunch.com) 142

OpenAI CEO Sam Altman is pushing back on growing concerns about AI's environmental footprint, dismissing claims about ChatGPT's water consumption as "totally fake" and arguing that the fairer way to measure AI's energy use is to compare it against humans.

In an interview with Indian Express, Altman acknowledged that evaporative cooling in data centers once made water usage a real concern but said that is no longer the case, calling internet claims of 17 gallons of water per query "completely untrue, totally insane, no connection to reality."

On energy, he conceded it is "fair" to worry about total consumption given how heavily the world now relies on AI, and called for a rapid shift toward nuclear, wind and solar power. He took particular issue with comparisons that pit the cost of training a model against a single human inference, noting it "takes like 20 years of life and all of the food you eat" before a person gets smart -- and that on a per-query basis, AI has "probably already caught up on an energy efficiency basis."

Slashdot Top Deals