United Kingdom

Apple Now Requires Device-Level Age Verification in the UK. Could the US Be Next? (gizmodo.com) 107

Apple unveiled new device-level age restrictions in the UK on Wednesday. "After downloading a new update, users will now have to confirm that they are 18 or older to access unrestricted features," reports Gizmodo.

"Users will be able to confirm their age with a credit card or by scanning an ID." For those underage or who have not confirmed their age, Apple will turn on Web Content Filter and Communication Safety, which will not only restrict access to certain apps or websites, but will also monitor messages, shared photo albums, AirDrop, and FaceTime calls for nudity. Apple didn't specify exactly which services and features are banned for under-18 users, but it will likely be in compliance with UK legislation...

The British government does not require Apple and other OS providers to institute device-level age checks, but it does restrict minor access to online pornography under the Online Safety Act, which passed in 2023. So far, that restriction has only been implemented at the website level, but UK officials have been worried about easy loopholes to evade the age restrictions, like VPNs.

The broader tech industry has been campaigning for some time to use device-level age checks instead in response to the rising tide of under-16 social media and internet bans around the world. Last month, in a landmark social media trial in California, Meta CEO Mark Zuckerberg also supported this idea, saying that conducting age verification "at the level of the phone is just a lot clearer than having every single app out there have to do this separately." Pornhub-operator Aylo had advocated for device-level restrictions in the UK as well, and even sent out letters to Apple, Google, and Microsoft in November asking for OS-level age verification...

The most obvious question: Could this be brought stateside?

Government

Tech Leaders Support California Bill to Stop 'Dominant Platforms' From Blocking Competition (ca.gov) 47

A new bill proposed in California "goes after big tech companies," writes Semafor. Supported by Y Combinator, Cory Doctorow, and the nonprofit advocacy group Fight for the Future, it's called the "BASED" act — an acronym which stands for "Blocking Anticompetitive Self-preferencing by Entrenched Dominant platforms."

As announced by San Francisco state senator Scott Wiener, the bill "will restore competition to the digital marketplace by prohibiting any digital platform with a market capitalization greater than $1 trillion and serving 100 million or more monthly users in the U.S. from favoring their own products and services on the platforms they operate."

More from Scott Wiener's announcement: For years, giant digital platforms like Apple, Amazon, Google, and Meta have used their immense power to promote their own products and services while stifling competitors — a practice also known as self-preferencing. The result has been higher prices, diminished service, fewer options for consumers, and less innovation across the technology ecosystem.

Self-preferencing also locks startups and mid-sized companies out of the online marketplace unless they play by rules set by their competitors. As a new generation of AI-powered startups seeks to enter the marketplace, their success — and public access to the innovations they produce — depends on their ability to compete on an even playing field.

"Anticompetitive behavior is everywhere on the internet," said Senator Wiener, "from rigged search results, to manipulative nudges boosting the 'house' product, to anti-discount policies that raise prices, to the dreaded green bubble that 'breaks' the group chat. When the world's largest digital platforms rig the game to favor their own products and services, we all lose. By prohibiting these anticompetitive practices, the BASED Act will protect competition online, empower consumers and startups, and promote innovations to improve all our lives."

The announcement includes a quote from Teri Olle, VP of the nonprofit Economic Security California Action, saying the act would "safeguard merit-based market competition. This legislation stands for a simple principle: owning the stadium doesn't mean that you get to rig the game." Conduct prohibited by the proposed bill includes:
  • Manipulating the order of search results to favor a provider's products or services, irrespective of a merit-based process,
  • Using non-public data generated by third-party sellers — including sales volumes, pricing, and customer behavior — to develop competing products that are subsequently boosted above the third-party sellers' product...

And the announcement also notes that "under the terms of the bill, providers could not prevent consumers from obtaining a portable copy of their own data or restrict voluntary data sharing (by consumers) with third parties."

Read on for reactions from DuckDuckGo, Proton, Yelp, Y Combinator, and Cory Doctorow.


The Internet

Online Bot Traffic Will Exceed Human Traffic By 2027, Cloudflare CEO Says 51

Cloudflare's CEO predicts AI-driven bot traffic will surpass human internet traffic by 2027, as AI agents generate vastly more web requests than people. "If a human were doing a task -- let's say you were shopping for a digital camera -- and you might go to five websites. Your agent or the bot that's doing that will often go to 1,000 times the number of sites that an actual human would visit," Cloudflare CEO Matthew Prince said in an interview at SXSW this week. "So it might go to 5,000 sites. And that's real traffic, and that's real load, which everyone is having to deal with and take into account."

TechCrunch reports: Before the generative AI era, the internet was only about 20% bot traffic, with Google's web crawler being the largest, according to Prince, whose infrastructure and security company is used by one-fifth of all websites. But beyond some other reputable crawlers, the only other bots were those used by scammers and bad actors. "With the rise of generative AI, and its just insatiable need for data, we're seeing a rise where we suspect that, in 2027, the amount of bot traffic online will exceed the amount of human traffic that's online," Prince said.

The executive also noted that this change to the web would require the development of new technologies, like sandboxes for AI agents that can be spun up on the fly and then torn down when their task has finished. These could come into play when consumers ask AI agents to perform certain tasks on their behalf, like planning a vacation. "What we're trying to think about is, how do we actually build that underlying infrastructure where you can -- as easily as you open a new tab in your browser -- you can actually spin up new code, which can then run and service the agents that are out there," Prince said. He imagines there will soon be a time when millions of these "sandboxes" for agents would be created every second.
"I think the thing that people don't appreciate about AI is it's a platform shift," Prince said. "AI is another platform shift ... the way that you're going to consume information is completely different."
AI

AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet (404media.co) 153

An anonymous reader quotes a report from 404 Media, written by Jason Koebler: Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.

Anthropic's paper, called "Labor market impacts of AI: A new measure and early evidence," essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job's tasks "are theoretically possible with AI," which resulted in this chart, which has gone somewhat viral and was included in a newsletter by MSNOW's Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the "theoretical capability" of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The nature of the many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.

But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. "We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily," the researchers write. This is based in part on the "Anthropic Economic Index," which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include "Complete humanities and social science academic assignments across multiple disciplines," "Draft and revise professional workplace correspondence and business communications," and "Build, debug, and customize web applications and websites." Not included in any of Anthropic's research are extremely popular uses of AI such as "create AI porn" and "create AI slop and spam." These uses are destroying discoverability on the internet and causing cascading societal and economic harms.
"Anthropic's research continues a time-honored tradition by AI companies who want to highlight the 'good' uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for," argues Koebler. "Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth..."

"This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media," writes Koebler, in closing. "We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What's happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice."
Android

Android, Epic, and What's Really Behind Google's 'Existential' Threat to F-Droid (thenewstack.io) 53

Starting in September, even Android developers not in Google's Play Store will still be required to register with Google to distribute their apps in Brazil, Singapore, Indonesia, and Thailand, with Google continuing "to roll out these requirements globally" four months later. Even developers distributing Android apps on the web for sideloading will be required to register, pay Google a $25 fee, and provide a government ID.

But there's a new theory on what's secretly been motivating Google from an unnamed source in the "Keep Android Open" movement, writes long-time Slashdot reader destinyland: "You can't separate this really from their ongoing interactions with Epic and the settlement that they came to," they argue. Twelve days ago Epic Games and Google announced a new proposal for settling their long-running dispute over the legality of alternative app stores on Android phones. (Rather than agreeing to let third-party app stores into their Play Store, Google wants them to continue being sideloaded, promising in a blog post last week that they'll even offer a "more streamlined" and "simplified" sideloading alternative for rival app stores. "This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.")

So "developer verification" could be Google's fallback plan if U.S. courts fail to approve this. "If the Google Play Store has to allow any third-party repository app store, Google essentially has given up all control of the apps. But if they're able to claw back that control by requiring that all developers, no matter how they distribute their apps, have to register with Google — have to agree to their Terms & Conditions, pay them money, provide identification — then they have a large degree of indirect control over any app that can be developed for the entire platform."

But that plan threatens millions of people using the alternative F/OSS app distributor F-Droid, since Google also wants to have only one signature attached to Android apps. Marc Prud'hommeaux, a member of F-Droid's board of directors, says that requirement "all of a sudden breaks all those versions of the application distributed through F-Droid or any other app store!"

Prud'hommeaux says they've told Google's Android team "You know perfectly well that you're killing F-Droid!" creating an "existential" threat to an app distributor "that has existed happily for over 10 years." But good things started happening when he created the website Keep Android Open: There's now a "huge backlog" of signers for an Open Letter that already includes EFF, the Software Freedom Conservancy, and the Free Software Foundation. He believes Android's existing Play Protect security "is completely sufficient to handle the particular scenarios they claim that developer verification is meant to address"...

The Keep Android Open site urges developers not to sign up for Android's early access program when it launches next week. (Instead, they're asking developers to respond to invites with an email about their concerns — and to spread the word to other developers and organizations in forums and social media posts.) There's also a petition at Change.org currently signed by 64,000 developers — adding 20,000 new signatures in the last 10 days. And "If you have an Android device, try installing F-Droid!" he adds. Google tracks how many people install these alternative app repositories, and a larger user base means greater consequences from any Android policy changes.

Plus, installing F-Droid "might be refreshing!" Prud'hommeaux says. "You don't see all the advertisements and promotions and scam and crapware stuff that you see in the commercial app stores!"

Chrome

Google Chrome Is Finally Coming To ARM64 Linux (nerds.xyz) 35

BrianFagioli writes: Google says it will finally release Chrome for ARM64 Linux in the second quarter of 2026, bringing the company's full browser to a platform that has existed for years without official support. Until now, Linux users running Arm hardware have largely relied on Chromium builds or unofficial packages if they wanted something close to Chrome. Google says the new build will include the same features found on other platforms, including Google account syncing, Chrome Web Store extensions, built-in translation, Safe Browsing protections, and Google Password Manager.

The timing reflects how ARM hardware is becoming more common across the Linux ecosystem, from developer laptops to AI systems. Google also pointed to NVIDIA's DGX Spark, a compact AI supercomputing device built on the Grace Blackwell architecture, which will support installing Chrome through NVIDIA's package management tools. For many Linux users, the announcement feels like a "finally" moment, as ARM64 Linux systems have been widespread for years despite the absence of an official Chrome build.

Chrome

Google Chrome Is Switching To a Two-Week Release Cycle (9to5google.com) 31

Google is accelerating Chrome's major release cadence from four weeks to two starting with version 153 on September 8th. "...our goal is to ensure developers and users have immediate access to the latest performance improvements, fixes and new capabilities," says Google. "Building on our history of adapting our release process to match the demands of a modern web, Chrome is moving to a two-week release cycle." The company says the "smaller scope" of these releases "minimizes disruption and simplifies post-release debugging." They also cite "recent process enhancements" that will "maintain [Chrome's] high standards for stability." 9to5Google reports: There will still be weekly security updates between milestones. This applies to desktop, Android, and iOS, while there are "no changes to the Dev and the Canary channels": "A Chrome Beta for each version will ship three weeks before the stable release. We recommend developers test with the beta to keep up to date with any upcoming changes that might impact your sites and applications."

The eight-week Extended Stable release schedule for enterprise customers and Chromium embedders will not change. Chromebooks will also have "extended release options": "Our priority is a seamless experience, so the latest Chrome releases will roll out to Chromebooks after dedicated platform testing. We are adapting these channels for the new two-week browser cycle and we will share more details soon regarding milestone updates for managed devices."

The Internet

Google Quantum-Proofs HTTPS (arstechnica.com) 21

An anonymous reader quotes a report from Ars Technica: Google on Friday unveiled its plan for its Chrome browser to secure HTTPS certificates against quantum computer attacks without breaking the Internet. The objective is a tall order. The quantum-resistant cryptographic data needed to transparently publish TLS certificates is roughly 40 times bigger than the classical cryptographic material used today. The material in today's X.509 certificate chains comprises roughly six elliptic curve signatures and two EC public keys, each about 64 bytes in size. This material can be cracked through the quantum-enabled Shor's algorithm. Certificates containing the equivalent quantum-resistant cryptographic material are roughly 2.5 kilobytes. All this data must be transmitted when a browser connects to a site.
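To put rough numbers on that size growth, here's a back-of-the-envelope sketch using published per-spec sizes for classical ECDSA over P-256 versus ML-DSA-44 (FIPS 204), one family of post-quantum signatures; these spec figures are an illustration and won't match the article's aggregate byte counts exactly:

```python
# Byte sizes per the relevant specifications:
# ECDSA P-256 (classical) vs. ML-DSA-44 (FIPS 204, post-quantum).
ECDSA_P256_SIG = 64      # two 32-byte values (r, s)
ECDSA_P256_PUBKEY = 65   # uncompressed curve point
ML_DSA_44_SIG = 2420     # per FIPS 204
ML_DSA_44_PUBKEY = 1312  # per FIPS 204

sig_growth = ML_DSA_44_SIG / ECDSA_P256_SIG
total_growth = (ML_DSA_44_SIG + ML_DSA_44_PUBKEY) / (ECDSA_P256_SIG + ECDSA_P256_PUBKEY)
print(f"signature alone grows ~{sig_growth:.0f}x")
print(f"signature + public key grow ~{total_growth:.0f}x")
```

A post-quantum signature alone is roughly 38 times the size of its elliptic-curve counterpart, which is where the "roughly 40 times bigger" figure comes from when a certificate chain carries several of them.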

To bypass the bottleneck, companies are turning to Merkle Trees, a data structure that uses cryptographic hashes and other math to verify the contents of large amounts of information using a small fraction of the material used in more traditional verification processes in public key infrastructure. Merkle Tree Certificates "replace the heavy, serialized chain of signatures found in traditional PKI with compact Merkle Tree proofs," members of Google's Chrome Secure Web and Networking Team wrote Friday. "In this model, a Certification Authority (CA) signs a single 'Tree Head' representing potentially millions of certificates, and the 'certificate' sent to the browser is merely a lightweight proof of inclusion in that tree."
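The proof-of-inclusion idea can be sketched in a few lines of Python. This is a simplified illustration of the generic Merkle-tree technique, not the actual MTC wire format, and the helper names are hypothetical:

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, standing in for whatever hash a real deployment uses."""
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash each leaf, then pairwise-hash upward; returns all levels, leaves first."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-width levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def proof_of_inclusion(levels, index):
    """Collect the sibling hash at each level -- the 'lightweight proof'."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append((level[index ^ 1], index % 2 == 1))  # (sibling, sibling_is_left)
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the path to the root; a match proves the leaf is in the tree."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root
```

The payoff is the logarithmic proof size: a tree covering a million certificates needs only about 20 sibling hashes per proof, so a CA can sign one tree head while each site presents a few hundred bytes of inclusion proof instead of a full signature chain.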

[...] Google is [also] adding cryptographic material from quantum-resistant algorithms such as ML-DSA (PDF). This addition would allow forgeries only if an attacker were to break both classical and post-quantum encryption. The new regime is part of what Google is calling the quantum-resistant root store, which will complement the Chrome Root Store the company formed in 2022. The Merkle Tree Certificates (MTCs) use Merkle Trees to provide quantum-resistant assurances that a certificate has been published without having to add most of the lengthy keys and hashes. Using other techniques to reduce the data sizes, the MTCs will be roughly the same length as today's certificates [...]. The new system has already been implemented in Chrome.

Television

Your Smart TV May Be Crawling the Web for AI (theverge.com) 42

Bright Data, a company that operates one of the world's largest residential proxy networks, has been running an SDK inside smart TV apps that turns those devices into nodes for web crawling -- collecting data used by AI companies, among other clients -- and most consumers have had no idea it was happening.

The company has published more than 200 first-party apps to LG's app store alone and still lists Samsung's Tizen OS and LG's webOS as supported platforms, though LG says the SDK is "not officially supported" and its operation on webOS "is not guaranteed." Google, Amazon, and Roku have all since adopted policies restricting or banning background proxy SDKs, and Bright Data no longer supports those platforms.

Several Roku apps still running the SDK disappeared from the store after the Verge journalist behind this reporting contacted the company.
Businesses

Tech Firms Aren't Just Encouraging Their Workers To Use AI. They're Enforcing It. (msn.com) 101

Tech companies ranging from 300-person startups to giants like Amazon, Google, Meta, Microsoft and Salesforce have moved beyond encouraging employees to use AI tools and are now actively tracking adoption and, in several cases, tying it to performance reviews. Google is factoring AI use into some software engineer reviews for the first time this year, and Meta's new performance review system will do the same -- it can track how many lines of code an engineer wrote with AI assistance.

Amazon Web Services managers have dashboards showing individual engineer AI-tool usage and consider adoption when evaluating promotions. About 42% of tech-industry workers said their direct manager expects AI use in daily work as of last October, up from 32% eight months earlier, according to AI consulting firm Section. At software maker Autodesk, CEO Andrew Anagnost acknowledged that some employees had been using initially blocked coding tools like Cursor stealthily -- and warned that AI holdouts "probably won't survive long term."
AI

Will Tech Giants Just Use AI Interactions to Create More Effective Ads? (seattletimes.com) 59

Google never asked its users before adding AI Overviews to its search results and AI-generated email summaries to Gmail, notes the New York Times. And Meta didn't ask before making "Meta AI" an unremovable part of Instagram, WhatsApp and Messenger.

"The insistence on AI everywhere — with little or no option to turn it off — raises an important question about what's in it for the internet companies..." Behind the scenes, the companies are laying the groundwork for a digital advertising economy that could drive the future of the internet. The underlying technology that enables chatbots to write essays and generate pictures for consumers is being used by advertisers to find people to target and automatically tailor ads and discounts to them....

Last month, OpenAI said it would begin showing ads in the free version of ChatGPT based on what people were asking the chatbot and what they had looked for in the past. In response, a Google executive mocked OpenAI, adding that Google had no plans to show ads inside its Gemini chatbot. What he didn't mention, however, was that Google, whose profits are largely derived from online ads, shows advertising on Google.com based on user interactions with the AI chatbot built into its search engine.

For the past six years, as regulators have cracked down on data privacy, the tech giants and online ad industry have moved away from tracking people's activities across mobile apps and websites to determine what ads to show them. Companies including Meta and Google had to come up with methods to target people with relevant ads without sharing users' personal data with third-party marketers. When ChatGPT and other AI chatbots emerged about four years ago, the companies saw an opportunity: The conversational interface of a chatty companion encouraged users to voluntarily share data about themselves, such as their hobbies, health conditions and products they were shopping for.

The strategy already appears to be working. Web search queries are up industrywide, including for Google and Bing, which have been incorporating AI chatbots into their search tools. That's in large part because people prod chatbot-powered search engines with more questions and follow-up requests, revealing their intentions and interests much more explicitly than when they typed a few keywords for a traditional internet search.

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior," the maintainer acknowledges. But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's a small number of bad actors driving large swarms of agents or a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
AI

Firefox Announces 'AI Controls' To Block Its Upcoming AI Features (mozilla.org) 36

The Mozilla executive in charge of Firefox says that while some people just want AI tools that are genuinely useful, "We've heard from many who want nothing to do with AI..."

"Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls." Starting with Firefox 148, which rolls out on Feb. 24, you'll find a new AI controls section within the desktop browser settings. It provides a single place to block current and future generative AI features in Firefox... This lets you use Firefox without AI while we continue to build AI features for those who want them...

At launch, AI controls let you manage these features individually:

— Translations, which help you browse the web in your preferred language.
— Alt text in PDFs, which add accessibility descriptions to images in PDF pages.
— AI-enhanced tab grouping, which suggests related tabs and group names.
— Link previews, which show key points before you open a link.
— AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.

You can choose to use some of these and not others. If you don't want to use AI features from Firefox at all, you can turn on the Block AI enhancements toggle. When it's toggled on, you won't see pop-ups or reminders to use existing or upcoming AI features. Once you set your AI preferences in Firefox, they stay in place across updates... We believe choice is more important than ever as AI becomes a part of people's browsing experiences. What matters to us is giving people control, no matter how they feel about AI.

If you'd like to try AI controls early, they'll be available first in Firefox Nightly.

Some context from The Register: It's a refreshingly unsubtle stance, and one that lands just days after a similar bout of AI skepticism elsewhere in browser land, with Vivaldi's latest release leaning away from generative features entirely. CEO Jon von Tetzchner summed up the mood, telling The Register: "Basically, what we are finding is that people hate AI..." Mozilla's kill switch isn't the end of AI in browsers, but it does suggest the hype has met resistance.
When it comes to AI kill switches in browsers, Jack Wallen writes at ZDNet that "Most browsers already offer this feature. With Edge, you can disable Copilot. With Chrome, you can disable Gemini. With Opera, you can disable Aria...."
AI

Hollywood's AI Bet Isn't Paying Off (wired.com) 46

Hollywood's recent attempts to build entertainment around AI have consistently underperformed or outright flopped, whether the AI in question is a plot device or a production tool. The horror sequel M3GAN 2.0, Mission: Impossible -- The Final Reckoning, and Disney's Tron: Ares all disappointed at the box office in 2025 despite centering their narratives on AI.

The latest casualty is Mercy, a January 2026 crime thriller in which Chris Pratt faces an AI judge bot played by Rebecca Ferguson; one reviewer has already called it "the worst movie of 2026," and its ticket sales have been mediocre. AI-generated content hasn't fared any better. Darren Aronofsky executive-produced On This Day...1776, a YouTube web series that uses Google DeepMind video generation alongside real voice actors to dramatize the American Revolution. Viewer response has been brutal -- commenters mocked the uncanny faces and the fact that DeepMind rendered "America" as "Aamereedd."

A Taika Waititi-directed Xfinity commercial set to air during this weekend's Super Bowl, which de-ages Jurassic Park stars Sam Neill, Laura Dern and Jeff Goldblum, has already been mocked for producing what one viewer called "melting wax figures."
AI

OpenAI's Lead Is Contracting as AI Competition Intensifies (bigtechnology.com) 28

OpenAI's rivals are cutting into ChatGPT's lead. From a report: The top chatbot's market share fell from 69.1% to 45.3% between January 2025 and January 2026 among daily U.S. users of its mobile app. Gemini, in the same time period, rose from 14.7% to 25.1% and Grok rose from 1.6% to 15.2%.

The data, obtained by Big Technology from mobile insights firm Apptopia, indicates the chatbot race has tightened meaningfully over the past year with Google's surge showing up in the numbers. Overall, the chatbot market increased 152% since last January, according to Apptopia, with ChatGPT exhibiting healthy download growth.

On desktop and mobile web, a similar pattern appears, according to analytics firm Similarweb. Visits to ChatGPT went from 3.8 billion to 5.7 billion between January 2025 and January 2026, a 50% increase, while visits to Gemini went from 267.7 million to 2 billion, a 647% increase. ChatGPT is still far and away the leader in visits, but it has company in the race now.
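The quoted growth percentages follow directly from the visit counts; a quick arithmetic check:

```python
# Sanity-check the Similarweb growth figures quoted above
# (visits, January 2025 -> January 2026).
chatgpt_growth = (5.7e9 / 3.8e9 - 1) * 100     # 3.8B -> 5.7B visits
gemini_growth = (2.0e9 / 267.7e6 - 1) * 100    # 267.7M -> 2B visits

print(round(chatgpt_growth))  # 50
print(round(gemini_growth))   # 647
```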

Privacy

An AI Toy Exposed 50K Logs of Its Chats With Kids To Anyone With a Gmail Account (wired.com) 21

An anonymous reader quotes a report from Wired: Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts.

So Thacker looked into it. With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu's web-based portal, intended to allow parents to check on their children's conversations and for Bondu's staff to monitor the products' use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy.

Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations, the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves. In total, Margolis and Thacker discovered that the data Bondu left unprotected -- accessible to anyone who logged in to the company's public-facing web console with their Google username -- included children's names, birth dates, family member names, "objectives" for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation.
More than 50,000 chat transcripts were accessible through the exposed web portal. When the researchers alerted Bondu about the findings, the company acted to take down the console within minutes and relaunched it the next day with proper authentication measures.

"We take user privacy seriously and are committed to protecting user data," Bondu CEO Fateen Anam Rafid said in his statement. "We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections," as well as hiring a security firm to validate its investigation and monitor its systems in the future.
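As described, the flaw was an authorization gap rather than a hacking vulnerability: the console authenticated Google accounts but never checked whether the signed-in user was entitled to the records being requested. A minimal sketch of the missing check (all names and data are hypothetical, not Bondu's actual code):

```python
# Hypothetical sketch: authentication (who are you?) is not authorization
# (are these your records?). The reported flaw was treating any valid
# Google login as authorized to read any child's transcripts.

# Toy data store mapping child records to the parent accounts
# allowed to read them (all names hypothetical).
TRANSCRIPT_OWNERS = {
    "child-001": {"parent@example.com"},
    "child-002": {"other-parent@example.com"},
}

def get_transcripts(authenticated_email: str, child_id: str) -> str:
    """Return transcripts only if the signed-in user owns the record."""
    owners = TRANSCRIPT_OWNERS.get(child_id)
    if owners is None:
        raise KeyError("unknown child record")
    # The fix: enforce this ownership check on every request,
    # not just at login.
    if authenticated_email not in owners:
        raise PermissionError("authenticated, but not authorized")
    return f"transcripts for {child_id}"
```

The same pattern applies whatever the framework: per-request ownership checks, enforced server-side.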
AI

Apple Reportedly Replacing Siri Interface With Actual Chatbot Experience For iOS 27 20

According to Bloomberg's Mark Gurman, Apple is planning a major Siri overhaul in iOS 27 and macOS 27 in which the current assistant interface will be replaced with a deeply integrated, ChatGPT-style chatbot experience. "Users will be able to summon the new service the same way they open Siri now, by speaking the 'Siri' command or holding down the side button on their iPhone or iPad," says Gurman. "More significantly, Siri will be integrated into all of the company's core apps, including ones for mail, music, podcasts, TV, Xcode programming software and photos. That will allow users to do much more with just their voice." 9to5Mac reports: The unannounced Siri overhaul will reportedly be revealed at WWDC in June as the flagship feature for iOS 27 and macOS 27. Its release is expected in September when Apple typically ships major software updates. While Apple plans to release an improved version of Siri and Apple Intelligence this spring, that version will use the existing Siri interface. The big difference is that Google's Gemini models will power the intelligence. With the bigger update planned for iOS 27, the iOS 26 upgrade to Siri and Apple Intelligence sounds more like the first step in a long overdue modernization.

Gurman reports that the major Siri overhaul will "allow users to search the web for information, create content, generate images, summarize information and analyze uploaded files" while using "personal data to complete tasks, being able to more easily locate specific files, songs, calendar events and text messages." People are already familiar with conversational interactions with AI, and Bloomberg says the bigger update to Siri will support both text and voice. Siri already uses these input methods, but there's no real continuity between sessions.
The Internet

Iran's Internet Shutdown Is Now One of the Longest Ever (techcrunch.com) 121

Iran has imposed one of the longest nationwide internet shutdowns in its history, cutting more than 92 million people off from connectivity for over a week as mass anti-government protests continue. TechCrunch reports: As of this writing, Iranians have not been able to access the internet for more than 170 hours. The previous longest shutdowns in the country lasted around 163 hours in 2019, and 160 hours in 2025, according to Isik Mater, the director of research at NetBlocks, a web monitoring company that tracks internet disruptions.

Mater said that the current shutdown in Iran is the third longest on record, after the internet shutdown in Sudan in mid-2021 that lasted around 35 days, followed by the outage in Mauritania in July 2024, which lasted 22 days. "Iran's shutdowns remain among the most comprehensive and tightly enforced nationwide blackouts we've observed, particularly in terms of population affected," Mater told TechCrunch.

The exact ranking depends on how each organization measures a shutdown. Zach Rosson, a researcher who studies internet disruptions at the digital rights nonprofit Access Now, told TechCrunch that according to its data, the ongoing shutdown in Iran is on a path to crack the top 10 longest shutdowns in history.
Further reading: Iran Shuts Down Musk's Starlink For First Time
AI

AI Fails at Most Remote Work, Researchers Find (msn.com) 39

A new study "compared how well top AI systems and human workers did at hundreds of real work assignments," reports the Washington Post.

They add that at least one example "illustrates a disconnect three years after the release of ChatGPT that has implications for the whole economy." AI can accomplish many impressive tasks involving computer code, documents or images. That has prompted predictions that human work of many kinds could soon be done by computers alone. Bentley University and Gallup found in a survey [PDF] last year that about three-quarters of Americans expect AI to reduce the number of U.S. jobs over the next decade. But economic data shows the technology largely has not replaced workers.

To understand what work AI can do on its own today, researchers collected hundreds of examples of projects posted on freelancing platforms that humans had been paid to complete. They included tasks such as making 3D product animations, transcribing music, coding web video games and formatting research papers for publication. The research team then gave each task to AI systems such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude. The best-performing AI system successfully completed only 2.5 percent of the projects, according to the research team from Scale AI, a start-up that provides data to AI developers, and the Center for AI Safety, a nonprofit that works to understand risks from AI. "Current models are not close to being able to automate real jobs in the economy," said Jason Hausenloy, one of the researchers on the Remote Labor Index study...

The results, which show how AI systems fall short, challenge predictions that the technology is poised to soon replace large portions of the workforce... The AI systems failed on nearly half of the Remote Labor Index projects by producing poor-quality work, and they left more than a third incomplete. Nearly 1 in 5 had basic technical problems such as producing corrupt files, the researchers found.

One test involved creating an interactive dashboard for data from the World Happiness Report, according to the article. "At first glance, the AI results look adequate. But closer examination reveals errors, such as countries inexplicably missing data, overlapping text and legends that use the wrong colors — or no colors at all."

The researchers say AI systems are hobbled by a lack of memory, and are also weak on "visual" understanding.
AI

Amazon's AI Assistant Comes To the Web With Alexa.com 7

An anonymous reader quotes a report from TechCrunch: Amazon's AI-powered overhaul of its digital assistant, now known as Alexa+, is coming to the web. On Monday, at the start of the Consumer Electronics Show in Las Vegas, the company announced the official launch of a new website, Alexa.com, which is now rolling out to all Alexa+ Early Access customers. The site will allow customers to use Alexa+ online, much as you can do today with other AI chatbots such as ChatGPT or Google's Gemini.

[...] Related to this expansion, Amazon is updating its Alexa mobile app, which will now offer a more "agent-forward" experience. Or, in other words, it's putting a chatbot-style interface on the app's homepage, making it seem more like a typical AI chatbot. (While you could chat with Alexa before in the app, the focus is now on the chatting -- while the other features take a back seat.) On the Alexa.com website, customers can use Alexa+ for common tasks -- for instance, exploring complex topics, creating content, and making trip itineraries. However, Amazon aims to differentiate its assistant from others by focusing on families and their needs in the home.

[...] The Alexa.com website features a navigation sidebar for quicker access to your most-used Alexa features, so you can pick up where you left off on tasks like setting the thermostat, checking your calendar for appointments, reviewing shopping lists, and more. In addition, Amazon aims to convince customers to share their personal documents, emails, and calendar access with Alexa+, so its AI can become a sort of hub to manage the goings-on at home, from kids' school holidays and soccer schedules to doctor's appointments and other things families need to remember -- like when the dog got its last rabies shot, or what day the neighbor's backyard BBQ is taking place.
"Seventy-six percent of what customers are using Alexa+ for no other AI can do," says Daniel Rausch, VP of Alexa and Echo at Amazon.

"Ninety-seven percent of Alexa devices support Alexa+, and we see now in adoption from customers that they're using Alexa across all those many years and many generations of devices," Rausch adds. "We support all of Alexa's original capabilities, the tens of thousands of services and devices that Alexa was integrated with already are carried forward to the Alexa+ experience."

The report notes that Alexa.com will initially only be available to Early Access customers who sign in with their Amazon account.

Slashdot Top Deals