Businesses

Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com) 101

Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

Amazon Web Services managers have dashboards showing individual engineer AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."
XBox (Games)

Xbox Co-founder Says Microsoft is Quietly Sunsetting the Platform (gamesbeat.com) 46

Seamus Blackley, one of the original founders of Xbox who helped convince Bill Gates and Steve Ballmer to back a console project more than 26 years ago, told GamesBeat in an interview that he believes Microsoft is quietly sunsetting the platform under the guise of an AI-driven leadership transition.

Microsoft recently announced that Asha Sharma, whose career has focused on AI and software as a service, will replace Phil Spencer as Xbox CEO, and that COO and president Sarah Bond is leaving the company. Blackley said he expects Sharma's role to be that of "a palliative care doctor who slides Xbox gently into the night," arguing that Satya Nadella's all-consuming bet on generative AI has turned every business unit -- Xbox included -- into a nail for the same hammer.

He compared the appointment to putting someone who doesn't like movies in charge of a major motion picture studio, and advised Sharma to either develop a genuine passion for games or find a way to leave the job soon.
AI

Hacker Used Anthropic's Claude To Steal Sensitive Mexican Data (bloomberg.com) 22

A hacker exploited Anthropic's AI chatbot to carry out a series of attacks against Mexican government agencies, resulting in the theft of a huge trove of sensitive tax and voter information, according to cybersecurity researchers. From a report: The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

The activity started in December and continued for roughly a month. In all, 150 gigabytes of Mexican government data was stolen, including documents related to 195 million taxpayer records as well as voter records, government employee credentials and civil registry files, according to the researchers.

AI

Anthropic Drops Flagship Safety Pledge (time.com) 81

Anthropic, the AI company that has long positioned itself as the industry's most safety-conscious research lab, is dropping the central commitment of its Responsible Scaling Policy -- a 2023 pledge to never train an AI system unless it could guarantee beforehand that its safety measures were adequate. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead," chief science officer Jared Kaplan told TIME.

The overhauled policy, approved unanimously by CEO Dario Amodei and Anthropic's board, instead commits the company to matching or surpassing competitors' safety efforts and to delaying development only if Anthropic considers itself to be leading the AI race and believes catastrophic risks are significant.

The company also plans to publish detailed "Risk Reports" every three to six months and release "Frontier Safety Roadmaps" laying out future safety goals. Chris Painter, director of policy at the AI evaluation nonprofit METR, who reviewed an early draft, told TIME the shift signals that Anthropic "believes it needs to shift into triage mode with its safety plans, because methods to assess and mitigate risk are not keeping up with the pace of capabilities."
HP

HP Says Memory's Contribution To PC Costs Just Doubled To 35% (theregister.com) 25

HP has revealed that memory now accounts for 35% of the cost of materials it needs to build a PC, up from between 15 and 18% last quarter. And the company expects RAM's contribution will rise through the year. From a report: Speaking on the company's Q1 2026 earnings call, interim CEO Bruce Broussard said the company has secured long-term supply agreements for the year and also "qualified new suppliers [and] built in strategic inventory positions for key platforms and cut the time to qualify new material in half to accelerate our product configuration changes."

That sounds a lot like HP Inc is signing up new suppliers at a brisk pace. Broussard said the company has also "expanded lower-cost sourcing across our commodity basket, lowering logistics costs with agile end-to-end planning processes." The company is using its internal AI initiatives to power those new processes. The company is also "configuring our products and shaping demand to align the supply we have with our customer needs" and "taking targeted pricing actions to offset the remaining cost impact in close partnership with both our channel and direct customers."

AI

Meta AI Security Researcher Said an OpenClaw Agent Ran Amok on Her Inbox (techcrunch.com) 75

Meta AI security researcher Summer Yue posted a now-viral account on X describing how an OpenClaw agent she had tasked with sorting through her overstuffed email inbox went rogue, deleting messages in what she called a "speed run" while ignoring the repeated stop commands she sent from her phone.

"I had to RUN to my Mac mini like I was defusing a bomb," Yue wrote, sharing screenshots of the ignored stop prompts as proof. Yue said she had previously tested the agent on a smaller "toy" inbox where it performed well enough to earn her trust, so she let it loose on the real thing. She believes the larger volume of data triggered compaction -- a process where the context window grows too large and the agent begins summarizing and compressing its running instructions, potentially dropping ones the user considers critical.

The agent may have reverted to its earlier toy-inbox behavior and skipped her last prompt telling it not to act. OpenClaw is an open-source AI agent designed to run as a personal assistant on local hardware.
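The compaction failure mode Yue describes can be illustrated with a minimal sketch. The message data, token budget, and truncation rule below are hypothetical; real agents condense old messages with a model-written summary rather than simple truncation, but the hazard is the same:

```python
# Minimal sketch of context "compaction" in an agent loop.
# Hypothetical names and budget, for illustration only.

def tokens(msg: str) -> int:
    return len(msg.split())  # crude stand-in for a real tokenizer

def compact(history: list[str], budget: int = 50) -> list[str]:
    """Collapse the oldest messages into a short summary until
    the transcript fits the context budget."""
    dropped = 0
    while sum(tokens(m) for m in history) > budget and len(history) > 1:
        history.pop(0)  # oldest message is condensed away
        dropped += 1
    if dropped:
        # The summary keeps the gist, not verbatim instructions.
        history.insert(0, f"[summary: {dropped} earlier messages condensed]")
    return history

history = [
    "USER: triage my inbox but DO NOT delete anything yet",
    "AGENT: scanned threads, listing deletion candidates " * 8,
    "USER: stop",
]
compact(history)
# The explicit "do not delete" instruction is no longer in context.
print(any("DO NOT delete" in m for m in history))  # False
```

Because the condensed summary preserves only the gist, a safety-critical instruction that appears only in an early message can drop out of the agent's working context precisely when the transcript is busiest.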
United Kingdom

New Datacentres Risk Doubling Great Britain's Electricity Use, Regulator Says (theguardian.com) 44

The amount of power being sought by new datacentre projects in Great Britain would exceed the country's current peak electricity consumption, according to an industry watchdog. From a report: Ofgem said about 140 proposed datacentre schemes, driven by the use of artificial intelligence, could require 50 gigawatts of electricity -- 5GW more than the country's current peak demand.

The figure was revealed in an Ofgem consultation on demand for new connections to the power grid. It pointed to a "surge in demand" for connection applications between November 2024 and June last year, with a significant number coming from datacentres. This has exceeded even the most ambitious forecasts.

Meanwhile, new renewable energy projects are not being connected to the grid at the pace they are being built to help meet the government's clean energy targets by the end of the decade. Ofgem said the work required to connect surging numbers of datacentres could mean delays for other projects that are "critical for decarbonisation and economic growth." Datacentres are the central nervous system of AI tools such as chatbots and image generators, playing a vital role in training and operating products such as ChatGPT and Gemini.

AI

Hegseth Gives Anthropic Until Friday To Back Down on AI Safeguards (axios.com) 195

Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until Friday evening to give the military unfettered access to its AI model or face harsh penalties, Axios has learned. Hegseth told Amodei in a tense meeting on Tuesday that the Pentagon will either cut ties and declare Anthropic a "supply chain risk," or invoke the Defense Production Act to force the company to tailor its model to the military's needs.

The Pentagon wants to punish Anthropic as the feud over AI safeguards grows increasingly nasty, but officials are also worried about the consequences of losing access to its industry-leading model, Claude. "The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good," a Defense official told Axios ahead of the meeting. Anthropic has said it is willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement.
Programming

Microsoft Execs Worry AI Will Eat Entry Level Coding Jobs (theregister.com) 62

An anonymous reader shares a report: Microsoft Azure CTO Mark Russinovich and VP of Developer Community Scott Hanselman have written a paper arguing that senior software engineers must mentor junior developers to prevent AI coding agents from hollowing out the profession's future skills base.

The paper, Redefining the Engineering Profession for AI, is based on several assumptions, the first of which is that agentic coding assistants "give senior engineers an AI boost... while imposing an AI drag on early-in-career (EiC) developers to steer, verify and integrate AI output."

In an earlier podcast on the subject, Russinovich said this basic premise -- that AI is increasing productivity only for senior developers while reducing it for juniors -- is a "hot topic in all our customer engagements... they all say they see it at their companies." [...] The logical outcome is that "if organizations focus only on short-term efficiency -- hiring those who can already direct AI -- they risk hollowing out the next generation of technical leaders," Russinovich and Hanselman state in the paper.

Firefox

Firefox 148 Now Available With The New AI Controls, AI Kill Switches 71

Firefox 148 introduces granular AI controls and a global "AI kill switch" that allows users to disable or selectively manage the browser's AI features. Phoronix reports: Among the AI features that can be toggled individually are translations, image alt text in the Firefox PDF viewer, tab group suggestions, key points in link previews, and AI chatbot providers in the sidebar. Firefox 148 for Android also arrives, along with support for the Trusted Types API, the CSS shape() function, the Sanitizer API, WebGPU enhancements, and a variety of other changes. Developer changes can be found at developer.mozilla.org. Binaries are available from ftp.mozilla.org.
United States

US Farmers Are Rejecting Multimillion-Dollar Datacenter Bids For Their Land (theguardian.com) 96

An anonymous reader quotes a report from the Guardian: When two men knocked on Ida Huddleston's door last May, they carried a contract worth more than $33m in exchange for the Kentucky farm that had fed her family for centuries. According to Huddleston, the men's client, an unnamed "Fortune 100 company," sought her 650 acres (260 hectares) in Mason county for an unspecified industrial development. Finding out any more would require signing a non-disclosure agreement. More than a dozen of her neighbors received the same knock. Searching public records for answers, they discovered that a new customer (PDF) had applied for a 2.2 gigawatt project from the local power plant, nearly double its annual generation capacity. The unknown company was building a datacenter. "You don't have enough to buy me out. I'm not for sale. Leave me alone, I'm satisfied," Huddleston, 82, later told the men.

As tech companies race to build the massive datacenters needed to power artificial intelligence across the US and the world, bids like the one for Huddleston's land are appearing on rural doorsteps nationwide. Globally, 40,000 acres of powered land -- real estate prepped for datacenter development -- are projected to be needed for new projects over the next five years, double the amount currently in use. Yet despite sums that often dwarf the land's recent value, farmers are increasingly shutting the door. At least five of Huddleston's neighbors gave similar categorical rejections, including one who was told he could name any price.

In Pennsylvania, a farmer rejected $15m in January for land he'd worked for 50 years. A Wisconsin farmer turned down $80m the same month. Other landowners have declined offers exceeding $120,000 per acre -- prices unimaginable just a few years ago. The rebuffs are a jarring reminder of AI's physical bounds, and of the limits of the dollars behind the technology. [...] As AI promises to transcend corporeal fallibility, these standoffs reveal its very physical constraints -- and Wall Street's miscalculation of what some people value most. In the rolling hills of Mason county and farmland across America, that gap is measured not in dollars but in something harder to price: identity.

XBox (Games)

New Microsoft Gaming CEO Has 'No Tolerance For Bad AI' (variety.com) 58

In her first major interview as Microsoft's new gaming chief, Asha Sharma said that "great games" must deliver emotional resonance and a distinct creative voice, while making clear that she has "no tolerance for bad AI." Stepping in after Phil Spencer's retirement, she's pledging consistency, community trust, and a human-first approach to storytelling as Xbox enters a new era. Variety reports: Sharma was quick in laying out her top priorities for Microsoft Gaming in an internal memo announcing her promotion, noting "great games," "the return of Xbox" and the "future of play" as her three main commitments to the gaming community. So first, what makes a great game for Sharma, whose roles prior to CoreAI include top positions at Instacart and Meta? The new Microsoft Gaming CEO tells Variety it's all about games with "deep emotional resonance" and "a distinct point of view." She wants to develop stories that make players "feel something," like the kind of feelings Campo Santo's 2016 first-person mystery "Firewatch" elicited in her.

Sharma takes on the mantle as head of the leading competitor to Sony's PlayStation and Nintendo knowing full well she's entering the role as an outsider to the larger gaming community and has "a lot to learn" still. But Sharma says she's got a commitment to "being grounded in what the community is telling us." "I'm coming into gaming as a platform builder," Sharma said, adding that her goal is to "earn the right to be trusted by players and developers" and show the fanbase "consistency" over time. In her interview with Variety, Sharma acknowledged the tumultuous state of the gaming industry, referencing Matthew Ball's recent State of Video Gaming in 2026 report as evidence of the sector's larger "transformation," and said her approach amounts to "protecting what we believe in while remaining open-minded about the future."

Due to her strong background in AI, initial reactions to Sharma's appointment have raised concerns about what her specific views are on the use of generative AI in game development. Sharma says her stance is simple: she has "no tolerance for bad AI." "AI has long been part of gaming and will continue to be," Sharma said, noting that gaming needs new "growth engines," but that "great stories are created by humans."

AI

Viral Doomsday Report Lays Bare Wall Street's Deep Anxiety About AI Future 52

A 7,000-word "doomsday" thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, "painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work," reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don't go far enough. The "global intelligence crisis" is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? "For the entirety of modern economic history, human intelligence has been the scarce input," Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. "We are now experiencing the unwind of that premium."

Many of Monday's moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms DataDog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines' 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone -- all name-checked by Citrini -- tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.

[...] Monday's market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed "one long daisy chain of correlated bets on white-collar productivity growth" that AI is poised to disrupt. [...] Shares in DoorDash also veered 6.6% lower Monday after Citrini's Substack note called the delivery app a "poster child" for how new tools would upend companies that monetize interpersonal friction. In the research firm's scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.
Businesses

OpenAI Calls In the Consultants For Its Enterprise Push (techcrunch.com) 14

OpenAI has formed a multi-year "Frontier Alliance" with four consulting heavyweights to accelerate enterprise adoption of its no-code AI agent platform, OpenAI Frontier. TechCrunch reports: The alliance includes multi-year partnerships between OpenAI and four major consulting firms, Boston Consulting Group (BCG), McKinsey, Accenture and Capgemini, to sell its enterprise products. OpenAI's Forward Deployed Engineering team will work with the consulting giants to help them implement OpenAI's enterprise-focused technologies like OpenAI Frontier into customers' tech stacks.

The company launched OpenAI Frontier in early February. The no-code platform allows users to build, deploy, and manage AI agents built on OpenAI's models as well as on others. OpenAI argues in its latest announcement that consultants are the right avenue to get enterprises on board.

"AI alone does not drive transformation. It must be linked to strategy, built into redesigned processes, and adopted at scale with aligned incentives and culture to deliver sustained outcomes," BCG CEO Christoph Schweizer said in OpenAI's blog post. "Our expanded partnership combines OpenAI's Frontier platform with BCG's deep industry, functional, and tech expertise and BCG X's build-and-scale capabilities to drive measurable impact with safeguards from day one."

IBM

IBM Shares Crater 13% After Anthropic Says Claude Code Can Tackle COBOL Modernization (cnbc.com) 113

IBM shares plunged nearly 13% on Monday after Anthropic published a blog post arguing that its Claude Code tool could automate much of the complex analysis work involved in modernizing COBOL, the decades-old programming language that still underpins an estimated 95% of ATM transactions in the United States and runs on the kind of mainframe systems IBM has sold for generations.

Anthropic said the shrinking pool of developers who understand COBOL had long made modernization cost-prohibitive, and that AI could now flip that equation by mapping dependencies and documenting workflows across thousands of lines of legacy code. The sell-off deepened a rough 2026 for IBM, whose shares are now down more than 22% year to date.
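The dependency-mapping step Anthropic describes can be approximated statically. A minimal sketch (file names and COBOL fragments below are invented for illustration) scans sources for CALL statements to build a who-calls-whom map before any rewriting begins:

```python
import re

# Hypothetical sketch of dependency mapping for legacy COBOL:
# scan each source for CALL statements and record the callees.
sources = {
    "PAYROLL.CBL": 'PROCEDURE DIVISION.\n    CALL "TAXCALC".\n    CALL "PRINTCHK".',
    "TAXCALC.CBL": 'PROCEDURE DIVISION.\n    CALL "RATETBL".',
    "RATETBL.CBL": "PROCEDURE DIVISION.\n    DISPLAY TAX-RATE.",
}

calls = {
    name: re.findall(r'CALL\s+"(\w+)"', text)
    for name, text in sources.items()
}
print(calls)
# {'PAYROLL.CBL': ['TAXCALC', 'PRINTCHK'], 'TAXCALC.CBL': ['RATETBL'], 'RATETBL.CBL': []}
```

Real codebases are far messier (dynamic CALLs through variables, copybooks, JCL), which is where Anthropic argues an AI assistant earns its keep over regex-level tooling.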
IT

'How Many AIs Does It Take To Read a PDF?' (theverge.com) 61

Despite AI's progress in building complex software, the ubiquitous PDF remains something of a grand challenge -- a format Adobe developed in the early 1990s to preserve the precise visual appearance of documents. PDFs consist of character codes, coordinates, and rendering instructions rather than logically ordered text, and even state-of-the-art models asked to extract information from them will summarize instead, confuse footnotes with body text, or outright hallucinate contents, The Verge writes.

Companies like Reducto are now tackling the problem by segmenting pages into components -- headers, tables, charts -- before routing each to specialized parsing models, an approach borrowed from computer vision techniques used in self-driving vehicles. Researchers at Hugging Face recently found roughly 1.3 billion PDFs sitting in Common Crawl alone, and the Allen Institute for AI has noted that PDFs could provide trillions of novel, high-quality training tokens from government reports, textbooks, and academic papers -- the kind of data AI developers are increasingly desperate for.
AI

Anthropic Accuses Chinese Companies of Siphoning Data From Claude (msn.com) 53

U.S. artificial-intelligence startup Anthropic said three Chinese AI companies set up more than 24,000 fraudulent accounts with its Claude AI model to help their own systems catch up. From a report: The three companies -- DeepSeek, Moonshot AI and MiniMax -- prompted Claude more than 16 million times, siphoning information from Anthropic's system to train and improve their own products, Anthropic said in a blog post Monday.

Earlier this month, an Anthropic rival, OpenAI, sent a memo to House lawmakers accusing DeepSeek of using the same tactic, called distillation, to mimic OpenAI's products. Anthropic said distillation had legitimate uses -- companies use it to build smaller versions of their own products, for example -- but it could also be used to build competitive products "in a fraction of the time, and at a fraction of the cost." The scale of the different companies' distillation activity varied. DeepSeek engaged in 150,000 interactions with Claude, whereas Moonshot and MiniMax had more than 3.4 million and 13 million, respectively, Anthropic said.
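Distillation, in its standard form, trains a student model to match a teacher's temperature-softened output distribution. A minimal sketch, using hypothetical logits for a single token position:

```python
import math

# Minimal distillation sketch (hypothetical logits, one position):
# the student minimizes the KL divergence between its softened
# distribution and the teacher's.

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [2.0, 1.0, 0.1]
student_logits = [1.5, 1.2, 0.3]

T = 2.0                          # softening temperature, a common choice
p = softmax(teacher_logits, T)   # teacher's soft targets
q = softmax(student_logits, T)   # student's current predictions

# KL divergence the student minimizes; it reaches 0 only when the
# student's distribution matches the teacher's exactly.
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
print(kl > 0)  # True: the student has not yet matched the teacher
```

Done against one's own models this is routine compression; done against a competitor's API at the scale Anthropic alleges, the soft targets become a way to copy capability without paying the training cost.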

Earth

Climate Physicists Face the Ghosts in Their Machines: Clouds (quantamagazine.org) 25

Climate scientists trying to predict how much hotter the planet will get have long grappled with a surprisingly stubborn problem -- clouds, which both reflect sunlight and trap heat, account for more than half the variation between climate predictions and are the main reason warming projections for the next 50 years range from 2 to 6 degrees Celsius.

Two research groups are now racing to close that gap using AI, though they disagree sharply on method. Tapio Schneider at Caltech built CLIMA, a model that uses machine learning to optimize cloud parameters within traditional physics equations; it will be unveiled at a conference in Japan in March. Chris Bretherton at the Allen Institute for AI took a different path -- his ACE2 neural network, released in 2024, learns from 50 years of atmospheric data and largely bypasses physics equations altogether.
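Schneider's hybrid approach -- learning uncertain parameters inside fixed physics equations -- can be caricatured in a few lines. The equation and observations below are toy stand-ins, not CLIMA itself:

```python
# Caricature of physics-plus-ML calibration (toy equation, invented
# data): keep a fixed physical law but fit its uncertain cloud
# parameter to observations by least squares.

observations = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]  # (forcing, response)

def model(x, a):
    return 1.0 + a * x  # stand-in "physics" with one free parameter a

a = 0.0                  # initial guess for the cloud parameter
lr = 0.02
for _ in range(500):     # gradient descent on the squared error
    grad = sum(2 * (model(x, a) - y) * x for x, y in observations)
    a -= lr * grad

print(round(a, 2))  # 2.04: the calibrated parameter
```

Bretherton's ACE2 sits at the other extreme: rather than fitting a parameter inside a known equation, the network learns the input-to-output mapping directly from decades of atmospheric data.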
AI

Sam Altman Would Like To Remind You That Humans Use a Lot of Energy, Too (techcrunch.com) 142

OpenAI CEO Sam Altman is pushing back on growing concerns about AI's environmental footprint, dismissing claims about ChatGPT's water consumption as "totally fake" and arguing that the fairer way to measure AI's energy use is to compare it against humans.

In an interview with Indian Express, Altman acknowledged that evaporative cooling in data centers once made water usage a real concern but said that is no longer the case, calling internet claims of 17 gallons of water per query "completely untrue, totally insane, no connection to reality."

On energy, he conceded it is "fair" to worry about total consumption given how heavily the world now relies on AI, and called for a rapid shift toward nuclear, wind and solar power. He took particular issue with comparisons that pit the cost of training a model against a single human inference, noting it "takes like 20 years of life and all of the food you eat" before a person gets smart -- and that on a per-query basis, AI has "probably already caught up on an energy efficiency basis."
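The human side of Altman's comparison is easy to check on the back of an envelope. The figures below are illustrative assumptions (~2,000 kcal/day over 20 years); the kcal-to-Wh conversion is a physical constant:

```python
# Back-of-envelope version of the human-side energy budget.
KCAL_TO_WH = 1.163     # physical constant: 1 kcal = 1.163 Wh
kcal_per_day = 2000    # assumed typical adult food intake
years = 20

wh_per_day = kcal_per_day * KCAL_TO_WH       # ~2,326 Wh/day, i.e. ~97 W continuous
total_kwh = wh_per_day * 365 * years / 1000  # food energy consumed over 20 years

print(round(total_kwh))  # 16980 -- roughly 17 MWh
```

Whether that favorably amortizes against a model's training and inference costs depends entirely on figures Altman did not supply, which is why critics want per-query disclosures rather than analogies.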
United States

Goldman Sachs, Morgan Stanley Calculate AI's Contribution To U.S. Growth May Be Basically Zero 30

The narrative that AI spending has been singlehandedly propping up the U.S. economy -- a claim that captivated Silicon Valley, Wall Street and Washington over the past year -- is facing serious pushback from economists [non-paywalled source] at Goldman Sachs, Morgan Stanley and JPMorgan Chase, all of whom now calculate that the AI buildup's direct contribution to growth was dramatically overstated and possibly close to zero.

The debate hinges on how GDP accounts for imported components: roughly three-quarters of AI data center costs go toward computer chips and gear largely manufactured in Asia, and that spending gets subtracted from domestic output because it boosts foreign economies. Joseph Politano of the Apricitas Economics newsletter pegs AI's actual contribution at about 0.2 percentage points of the 2.2 percent U.S. growth in 2025, and even Hannah Rubinton at the St. Louis Fed -- whose own analysis attributed 39 percent of growth to AI-related business spending through the first nine months of the year -- acknowledges that figure is probably the ceiling. "It's not like AI is propping up the economy," Rubinton said.
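The import-accounting point is mechanical: in the expenditure identity GDP = C + I + G + (X - M), data-center capex raises investment I, but the imported chips inside it raise imports M by nearly as much. A stylized example with hypothetical numbers:

```python
# Stylized national-accounts arithmetic (hypothetical magnitudes).
investment = 100.0     # AI data-center capex booked as investment I
imported_share = 0.75  # per the report, ~3/4 of costs are imported gear
imports = investment * imported_share  # adds to M, subtracted from GDP

net_gdp_contribution = investment - imports
print(net_gdp_contribution)  # 25.0 -- only the domestic share adds to GDP
```

Headline claims that AI capex "drove" growth typically count the full investment figure while ignoring the offsetting import line, which is the overstatement the banks' economists are flagging.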
