AI

OpenAI Has No Moat, No Tech Edge, No Lock-in and No Real Plan, Analyst Warns 53

OpenAI faces four fundamental strategic problems that no amount of fundraising or capex announcements can paper over, according to analyst Benedict Evans: it has no unique technology, its enormous user base is shallow and fragile, incumbents like Google and Meta are leveraging superior distribution to close the gap, and its product roadmap is dictated by whatever the research labs happen to discover rather than by deliberate product strategy.

The company claims 800-900 million weekly active users, but 80% of them sent fewer than 1,000 messages across all of 2025, averaging fewer than three prompts a day, and only 5% pay. OpenAI has acknowledged what it calls a "capability gap" between what models can do and what people use them for -- a framing Evans reads as a polite way to avoid admitting the absence of product-market fit. Gemini and Meta AI are meanwhile gaining share rapidly because the products look nearly indistinguishable to typical users, and Google and Meta already have the distribution to push them. Evans compares ChatGPT to Netscape -- an early leader in a category where the products were hard to tell apart, overtaken by a competitor that used distribution as a crowbar.

On capex, Evans argues that Altman's ambitions -- claiming $1.4 trillion and 30 gigawatts of future compute -- amount to an attempt to will OpenAI into a seat at a table where annual infrastructure spending may need to reach hundreds of billions. But a seat at the table is not leverage over it; he compares this to TSMC, which holds a de facto chip monopoly yet captures little value further up the stack.

OpenAI's own strategy diagrams from late last year laid out a full-stack platform vision -- chips, models, developer tools, consumer products -- each layer reinforcing the others. Evans argues this borrows the language of Windows and iOS without possessing any of the underlying dynamics: no network effect, no lock-in preventing developers from calling a different model's API, and no reason customers would know or care which foundation model powers the product they are using.
Facebook

Several Meta Employees Have Started Calling Themselves 'AI Builders' (businessinsider.com) 16

An anonymous reader shares a report: Meta product managers are rebranding. Some are now calling themselves "AI builders," a signal that AI coding tools are changing who gets to build software inside the company. One of them, Jeremie Guedj, announced the change in a LinkedIn post last week. "I still can't believe I'm writing this: as of today, my full-time job at Meta is AI Builder," he wrote.

Guedj has spent more than a decade as a traditional product manager, a role that sets the road map and strategy for products then built by engineering teams. He said that while his title in Meta's internal systems still lists him as a product manager, his actual work is now full-time building with AI on what he calls an "AI-native team." Another Meta product manager also lists "AI Builder" on her LinkedIn profile, while at least two other Meta engineers write the term in their bios, Business Insider found.

Movies

AMC Theatres Will Refuse To Screen AI Short Film After Online Uproar (hollywoodreporter.com) 12

An anonymous reader shares a report: When will AI movies start showing up in theaters nationwide? It was supposed to be next month. But when word leaked online that an AI short film contest winner was going to start screening before feature presentations in AMC Theatres, the cinema chain decided not to run the content.

The issue began earlier this week with the inaugural Frame Forward AI Animated Film Festival announcing Igor Alferov's short film Thanksgiving Day had won the contest. The prize package for included Thanksgiving Day getting a national two-week run in theaters nationwide. When word of this began hitting social media, however, some were dismayed by the prospect of exhibitors embracing AI content, with many singling out AMC Theatres for criticism.

Except the short is not actually programmed by exhibitors, exactly, but by Screenvision Media -- a third-party company which manages the 20-minute, advertising-driven pre-show before a theater's lights go down. Screenvision -- which co-organized the festival along with Modern Uprising Studios -- provides content to multiple theatrical chains, not just AMC. After The Hollywood Reporter reached out to AMC about the brewing controversy, the company issued this statement to THR on Thursday: "This content is an initiative from Screenvision Media, which manages pre-show advertising for several movie theatre chains in the United States and runs in fewer than 30 percent of AMC's U.S. locations. AMC was not involved in the creation of the content or the initiative and has informed Screenvision that AMC locations will not participate."

AI

HSBC To Investors: If India Couldn't Build an Enterprise Software Challenger, Neither Can AI (x.com) 54

India's IT services giants have spent decades deploying, customizing, and maintaining the world's largest enterprise software platforms, putting hundreds of thousands of engineers in daily contact with the business logic and proprietary architectures of vendors like SAP and Oracle. None of them have built a competing product that gained meaningful traction against the U.S. incumbents, HSBC said in a note to clients, using this history to argue AI-generated code faces the same structural barriers.

The bank's analysts contend that enterprise software competition turns on factors that have little to do with the ability to write code -- sales teams, cross-licensing agreements, patented IP, first-mover lock-in, brand awareness, and go-to-market infrastructure. If a massive, low-cost, domain-expert workforce couldn't crack the market over several decades, HSBC argues, the idea that AI-generated code will do so is, in the words of Nvidia's Jensen Huang that the report approvingly cites, "illogical."
AI

Was an Amazon Service Taken Down By Its AI Coding Bot? 38

UPDATE (2/21): After this story ran, Amazon published a blog post Friday "to address the inaccuracies" in the Financial Times report that the company's own AI tool Kiro caused two outages in an AWS service in December. Amazon's blog post says that the "brief" and "extremely limited" service interruption "was the result of user error — specifically misconfigured access controls — not AI as the story claims." And "The Financial Times' claim that a second event impacted AWS is entirely false."

An anonymous Slashdot reader had shared this report from Reuters: Amazon's cloud unit has suffered at least two outages due to errors involving its own AI tools [non-paywalled source], leading some employees to raise doubts about the US tech giant's push to roll out these coding assistants.

Amazon Web Services experienced a 13-hour interruption to one system used by its customers in mid-December after engineers allowed its Kiro AI coding tool to make certain changes, according to four people familiar with the matter.

The people said the agentic tool, which can take autonomous actions on behalf of users, determined that the best course of action was to "delete and recreate the environment." Amazon posted an internal postmortem about the "outage" of the AWS system, which lets customers explore the costs of its services. Multiple Amazon employees told the FT that this was the second occasion in recent months in which one of the group's AI tools had been at the centre of a service disruption.
Google

Google Announces Gemini 3.1 Pro For 'Complex Problem-Solving' (9to5google.com) 18

Google has introduced Gemini 3.1 Pro, a reasoning-focused upgrade aimed at more complex problem-solving. 9to5Google reports: This .1 increment is a first for Google, with the past two generations seeing .5 as the mid-year model update. (2.5 Pro was first announced in March and saw further updates in May for I/O.) Google says Gemini 3.1 Pro "represents a step forward in core reasoning." The "upgraded core intelligence" that debuted last week with Gemini 3 Deep Think is now available in Gemini 3.1 Pro for more users. This model achieves an ARC-AGI-2 score of 77.1%, or "more than double the reasoning performance of 3 Pro."

This "advanced reasoning" translates to practical applications like when "you're looking for a clear, visual explanation of a complex topic, a way to synthesize data into a single view, or bringing a creative project to life." 3.1 Pro is designed for tasks where a simple answer isn't enough, taking advanced reasoning and making it useful for your hardest challenges.

Security

OpenClaw Security Fears Lead Meta, Other AI Firms To Restrict Its Use (wired.com) 7

An anonymous reader quotes a report from Wired: Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. "You've likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment," he wrote in a Slack message with a red siren emoji. "Please keep Clawdbot off all company hardware and away from work-linked accounts." Grad isn't the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity to speak frankly.

[...] Some cybersecurity professionals have publicly urged companies to take measures to strictly control how their workforces use OpenClaw. And the recent bans show how companies are moving quickly to ensure security is prioritized ahead of their desire to experiment with emerging AI technologies. "Our policy is, 'mitigate first, investigate second' when we come across anything that could be harmful to our company, users, or clients," says Grad, who is cofounder and CEO of Massive, which provides Internet proxy tools to millions of users and businesses. His warning to staff went out on January 26, before any of his employees had installed OpenClaw, he says. At another tech company, Valere, which works on software for organizations including Johns Hopkins University, an employee posted about OpenClaw on January 29 on an internal Slack channel for sharing new tech to potentially try out. The company's president quickly responded that use of OpenClaw was strictly banned, Valere CEO Guy Pistone tells WIRED. "If it got access to one of our developer's machines, it could get access to our cloud services and our clients' sensitive information, including credit card information and GitHub codebases," Pistone says. "It's pretty good at cleaning up some of its actions, which also scares me."

A week later, Pistone did allow Valere's research team to run OpenClaw on an employee's old computer. The goal was to identify flaws in the software and potential fixes to make it more secure. The research team later advised limiting who can give orders to OpenClaw and exposing it to the Internet only with a password in place for its control panel to prevent unwanted access. In a report shared with WIRED, the Valere researchers added that users have to "accept that the bot can be tricked." For instance, if OpenClaw is set up to summarize a user's email, a hacker could send a malicious email to the person instructing the AI to share copies of files on the person's computer. But Pistone is confident that safeguards can be put in place to make OpenClaw more secure. He has given a team at Valere 60 days to investigate. "If we don't think we can do it in a reasonable time, we'll forgo it," he says. "Whoever figures out how to make it secure for businesses is definitely going to have a winner."

AI

Bafta To Reward 'Human Creativity' as Film and TV Grapples With AI (ft.com) 4

Bafta has brought in "human achievement" as a guiding principle for its annual awards as the film and television industry grapples with the rapid adoption of AI tools in many parts of production. From a report: In an interview with the FT, Bafta chair Sara Putt, who is nearing the end of her three-year tenure, said artificial intelligence would change how people worked "but at the base of everything in this industry is human creativity."

However, while AI has been banned in Bafta's performance awards -- meaning, for example, that AI-generated avatars cannot be put forward for leading actress or actor -- it is not prohibited in other categories. Putt said AI tools were increasingly useful in production but added: "We've actually added [human creativity] as a criteria this year... Those very human skills of communication and collaboration are not going anywhere anytime soon."

Security

LLM-Generated Passwords Look Strong but Crack in Hours, Researchers Find (theregister.com) 84

AI security firm Irregular has found that passwords generated by major large language models -- Claude, ChatGPT and Gemini -- appear complex but follow predictable patterns that make them crackable in hours, even on decades-old hardware. When researchers prompted Anthropic's Claude Opus 4.6 fifty times in separate conversations, only 30 of the returned passwords were unique, and 18 of the duplicates were the exact same string. The estimated entropy of LLM-generated 16-character passwords came in around 20 to 27 bits, far below the 98 to 120 bits expected of truly random passwords.
Businesses

New Study Tracks How Businesses Quietly Replaced Freelancers With AI Tools 18

A new study [PDF] from Ramp's economics lab has found that businesses are steadily replacing freelance workers hired through platforms like Upwork and Fiverr with AI tools from OpenAI and Anthropic, and the substitution is happening at a fraction of the cost.

The paper, authored by Ryan Stevens, Ramp's Director of Applied Sciences, tracked firm-level spending data from Q3 2021 to Q3 2025 across thousands of companies on Ramp's expense management platform. The share of total business spend going to online labor marketplaces fell from 0.66% in Q4 2021 to 0.14% in Q3 2025, while AI model provider spending rose from zero to 2.85% over the same period.

More than half the businesses that used freelance marketplaces in Q2 2022 had stopped entirely by Q2 2025. The cost dynamics are particularly notable. Firms most exposed to AI -- those that historically spent the most on freelancers -- substituted at a rate of roughly $1 in reduced freelance spend for every $0.03 in AI spend. A middle-exposure group showed a ratio of $1 to $0.30. The study uses a difference-in-differences design built around the launch of ChatGPT in October 2022 as a natural experiment. Stevens notes that micro-level substitution does not imply aggregate job loss, as demand for workers who build and maintain AI systems could grow faster than displacement.
Businesses

Accenture Links Staff Promotions To Use of AI Tools (theguardian.com) 15

Accenture has reportedly started tracking staff use of its AI tools and will take this into consideration when deciding on top promotions, as the consulting company tries to increase uptake of the technology by its workforce. From a report: The company told senior managers and associate directors that being promoted to leadership roles would require "regular adoption" of artificial intelligence, according to an internal email seen by the Financial Times.

The consultancy has also begun collecting data on weekly log-ins to its AI tools by some senior staff members, the FT reports. Accenture has previously said it has trained 550,000 of its 780,000-strong workforce in generative AI, up from only 30 people in 2022, and has announced it is rolling out training to all of its employees as part of its annual $1bn annual spend on learning. Among the tools whose use will reportedly be monitored is Accenture's AI Refinery. The chief executive, Julie Sweet, has previously said this will "create opportunities for companies to reimagine their processes and operations, discover new ways of working, and scale AI solutions across the enterprise to help drive continuous change and create value."

Businesses

HR Teams Are Drowning in Slop Grievances (ft.com) 37

Workplace grievances that once fit in a single email are now ballooning into 30-page documents stuffed with irrelevant historical detail, made-up legal precedents, and citations to laws from the wrong country -- and UK employment lawyers say generative AI is the likely culprit. Anna Bond, legal director at Lewis Silkin, says the complaints she now sees sometimes cite Canadian legislation or fabricated case law.

Sinead Casey, employment partner at Linklaters, calls such filings "confidently incompetent" -- superficially persuasive even to lawyers. The flood of bloated claims is compounding pressure on an already stretched tribunal system: Ministry of Justice figures show new employment cases rose 33% in the three months to September, even as concluded cases fell 10% year over year.

Investor Marc Andreessen, quipping on X: Overheard in Silicon Valley: "Marginal cost of arguing is going to zero."
IT

The RAM Crunch Could Kill Products and Even Entire Companies, Memory Exec Admits (theverge.com) 56

Phison CEO Pua Khein-Seng, whose company is one of the leading makers of controller chips for SSDs and other flash memory devices, admitted in a televised interview that the ongoing global RAM shortage could force companies to cut back their product lines in the second half of 2026 -- and that some may not survive at all if they cannot secure enough memory.

The interview, conducted in Chinese by Ningguan Chen of Taiwanese broadcaster Next TV, drew an important distinction: it was the interviewer who raised the possibility of shutdowns and product discontinuations, and Khein-Seng largely agreed rather than volunteering the prediction himself. The shortage stems from AI data centers consuming the vast majority of the world's memory supply, a buildout that has sent RAM prices up by three to six times over the past several months. Only three companies control 93% of the global DRAM market, and all three have chosen to prioritize profits over rapid capacity expansion. Even Nvidia may skip shipping a gaming GPU for the first time in 30 years, and Apple could struggle to secure enough chips. Khein-Seng also expects consumers will increasingly repair broken products rather than replace them.
AI

Claims That AI Can Help Fix Climate Dismissed As Greenwashing (theguardian.com) 41

An anonymous reader quotes a report from the Guardian: Tech companies are conflating traditional artificial intelligence with generative AI when claiming the energy-hungry technology could help avert climate breakdown, according to a report. Most claims that AI can help avert climate breakdown refer to machine learning and not the energy-hungry chatbots and image generation tools driving the sector's explosive growth of gas-guzzling datacenters, the analysis of 154 statements found.

The research, commissioned by nonprofits including Beyond Fossil Fuels and Climate Action Against Disinformation, did not find a single example where popular tools such as Google's Gemini or Microsoft's Copilot were leading to a "material, verifiable, and substantial" reduction in planet-heating emissions. Ketan Joshi, an energy analyst and author of the report, said the industry's tactics were "diversionary" and relied on tried and tested methods that amount to "greenwashing."

He likened it to fossil fuel companies advertising their modest investments in solar panels and overstating the potential of carbon capture. "These technologies only avoid a minuscule fraction of emissions relative to the massive emissions of their core business," said Joshi. "Big tech took that approach and upgraded and expanded it." [...] Joshi said the discourse around AI's climate benefits needed to be "brought back to reality." "The false coupling of a big problem and a small solution serves as a distraction from the very preventable harms being done through unrestricted datacenter expansion," he said.

Google

Google's Pixel 10a Is the Same Damn Phone As the Pixel 9a (gizmodo.com) 39

Google's Pixel 10a is essentially a flatter version of last year's Pixel 9a, keeping the same Tensor G4 chip, camera hardware, RAM, storage, and $500 price while dropping features like Pixelsnap Qi2 charging and advanced Gemini AI capabilities found in higher-end models. Gizmodo reports: We use words like "candy bar" or "slab" to describe our full-screen smartphones, but Google has designed what is likely the slabbiest phone of the modern era. During an hour-long hands-on with Google's all-new Google Pixel 10a, I slid the phone across a desk and felt oddly satisfied that it could glide as neatly as a figure skater without any hint of a camera bump hindering its path. It's the first thing I need to bring up regarding the Pixel 10a, because there's no other discernible difference between this phone and the previous-gen Pixel 9a.

And that seems to be the point. The Pixel 10a starts at $500, exactly how much the Pixel 9a cost at launch. In a Q&A with journalists, Google told Gizmodo that the company wanted to offer the same price point as before. That apparently required Google to stick with the same Tensor G4 chip as last year. You still have the same storage options of 128GB or 256GB and the minimum of 8GB of RAM. Think of the Pixel 10a as a Pixel 9a with a reduced camera bump. If you're one of the heretics who uses a phone without a case, that fact alone may be enough to pay attention. Otherwise, you'll be scrounging to find any real difference between the Pixel 10a and one of last year's best mid-range phones.

Advertising

Meta Begins $65 Million Election Push To Advance AI Agenda (nytimes.com) 33

An anonymous reader quotes a report from the New York Times: Meta is preparing to spend $65 million this year to boost state politicians who are friendly to the artificial intelligence industry, beginning this week in Texas and Illinois, according to company representatives. The sum is the biggest election investment by Meta, which owns Facebook, Instagram and WhatsApp. The company was previously cautious about campaign engagements, making small donations out of a corporate political action committee and contributing to presidential inaugurations. It also let executives like Sheryl Sandberg, who was chief operating officer, support candidates in their personal capacities.

Now Meta is betting bigger on politics, driven by concerns over the regulatory threat to the artificial intelligence industry as it aims to beat back legislation in states that it fears could inhibit A.I. development, company representatives said. To do that, Meta is quietly starting two new super PACs, according to federal filings surfaced by The New York Times. One group, Forge the Future Project, is backing Republicans. Another, Making Our Tomorrow, is backing Democrats. The new PACs join two others already started by Meta, one of which is focused on California while the other is an umbrella organization that finances the company's spending in other states. In total, the four super PACs have an initial budget of $65 million, according to federal and state filings. Meta's spending is set to start this week in Illinois and Texas, where the company generally favors backing Democratic and Republican incumbents or engaging in open races rather than deposing existing officials, company representatives said in interviews.

[...] Last year, Meta's public policy vice president, Brian Rice, said the company would start spending in politics because of "inconsistent regulations that threaten homegrown innovation and investments in A.I." The company started its first two super PACs, American Technology Excellence Project and Mobilizing Economic Transformation Across California. Meta put $45 million into American Technology Excellence Project in September. That money is expected, in turn, to flow to Forge the Future Project, Making Our Tomorrow and potentially to other entities. [...] In California, which has some of the country's most onerous campaign-finance disclosures, Meta in August put $20 million into Mobilizing Economic Transformation Across California, which shortens to META California. State laws require the sponsoring company to be disclosed in the name of the entity. In December, Meta put $5 million into another California committee called California Leads, which is focused on promoting moderate business policy and not A.I., according to state records.

Music

Google's AI Music Maker Is Coming To the Gemini App 7

Google is bringing its Lyria 3 AI music model into the Gemini app, allowing users to generate 30-second songs from text, images, or video prompts directly within the chatbot. The Verge reports: Lyria 3's text-to-music capabilities allow Gemini app users to make songs by describing specific genres, moods, or memories, such as asking for an "Afrobeat track for my mother about the great times we had growing up." The music generator can make instrumental audio and songs with lyrics composed automatically based on user prompts. Users can also upload photographs and video references, which Gemini then uses to generate a track with lyrics that fit the vibe.

"The goal of these tracks isn't to create a musical masterpiece, but rather to give you a fun, unique way to express yourself," Google said in its announcement blog. Gemini will add custom cover art generated by Nano Banana to songs created on the app, which aims to make them easier to share and download. Google is also bringing Lyria 3 to YouTube's Dream Track tool, which allows creators to make custom AI soundtracks for Shorts.

Dream Track and Lyria were initially demonstrated with the ability to mimic the style and voice of famous performers. Google says it's been "very mindful" of copyright in the development of Lyria 3 and that the tool "is designed for original expression, not for mimicking existing artists." When prompted for a specific artist, Gemini will make a track that "shares a similar style or mood" and uses filters to check outputs against existing content.
Windows

GameHub Will Give Mac Owners Another Imperfect Way To Play Windows Games (arstechnica.com) 8

An anonymous reader quotes a report from Ars Technica: For a while now, Mac owners have been able to use tools like CrossOver and Game Porting Toolkit to get many Windows games running on their operating system of choice. Now, GameSir plans to add its own potential solution to the mix, announcing that a version of its existing Windows emulation tool for Android will be coming to macOS. Hong Kong-based GameSir has primarily made a name for itself as a manufacturer of gaming peripherals -- the company's social media profile includes a self-description as "the Anti-Stick Drift Experts." Early last year, though, GameSir rolled out the Android GameHub app, which includes a GameFusion emulator that the company claims "provides complete support for Windows games to run on Android through high-precision compatibility design."

In practice, GameHub and GameFusion for Android haven't quite lived up to that promise. Testers on Reddit and sites like EmuReady report hit-or-miss compatibility for popular Steam titles on various Android-based handhelds. At least one Reddit user suggests that "any Unity, Godot, or Game Maker game tends to just work" through the app, while another reports "terrible compatibility" across a wide range of games. With Sunday's announcement, GameSir promises a similar opportunity to "unlock your entire Steam library" and "run Win games/Steam natively" on Mac will be "coming soon." GameSir is also promising "proprietary AI frame interpolation" for the Mac, following the recent rollout of a "native rendering mode" that improved frame rates on the Android version.
There are some "reasons to worry" though, based on the company's uneven track record. The Android version faced controversy for including invasive tracking components, which were later removed after criticism. There were also questions about the use of open-source code, as GameSir acknowledged referencing and using UI components from Winlator, even while maintaining that its core compatibility layer was developed in-house.
Businesses

Study of 12,000 EU Firms Finds AI's Productivity Gains Are Real (cepr.org) 61

A study of more than 12,000 European firms found that AI adoption causally increases labour productivity by 4% on average across the EU, and that it does so without reducing employment in the short run.

Researchers from the Bank for International Settlements and the European Investment Bank used an instrumental variable strategy that matched EU firms to comparable US firms by sector, size, investment intensity and other characteristics, then used the AI adoption rates of those US counterparts as a proxy for exogenous AI exposure among European firms.

The productivity gains, however, skewed heavily toward medium and large companies. Among large firms, 45% had deployed AI, compared to just 24% of small firms. The study also found that complementary investments mattered enormously: an extra percentage point of spending on workforce training amplified AI's productivity effect by 5.9%, and an extra point on software and data infrastructure added 2.4%.
The Media

Ohio Newspaper Removes Writing From Reporters' Jobs, Hands It To an 'AI Rewrite Specialist' (cleveland.com) 28

Cleveland.com, the digital arm of Ohio's Plain Dealer newspaper, has removed writing from the workloads of certain reporters and handed that job to what editor Chris Quinn calls an "AI rewrite specialist" who turns reporter-gathered material into article drafts.

The reporters on these beats -- covering Lorain, Lake, Geauga, and most recently Medina County -- are assigned entirely to reporting, spending their time on in-person interviews and meeting sources for coffee. Editors review the AI-produced drafts and reporters get the final say before publication.

Quinn says the arrangement has effectively freed up an extra workday per week for each reporter. The newsroom adopted this model last year to expand local coverage into counties it could no longer staff with full teams, and Quinn described the setup in a February 14 letter after a college journalism student withdrew from a reporting role over the newsroom's use of AI. Quinn blamed journalism schools for the student's reaction, saying professors have repeatedly told students that AI is bad.

Slashdot Top Deals