Security

Cyberattack on a Car Breathalyzer Firm Leaves Drivers Stuck (wired.com) 118

Last week, hackers launched a cyberattack on an Iowa company called Intoxalock that left some drivers unable to start their court-mandated breathalyzer-equipped cars. Wired reports: Intoxalock, an automotive breathalyzer maker that says it's used daily by 150,000 drivers across the U.S., last week reported that it had been the target of a cyberattack, resulting in its "systems currently experiencing downtime," according to an announcement posted to its website. Meanwhile, drivers that use the breathalyzers have reported being stranded due to the devices' inability to connect to the company's services. "Our vehicles are giant paperweights right now through no fault of ours," one wrote on Reddit. "I'm being held accountable at work and feel completely helpless."

The lockouts appear to be the result of Intoxalock's breathalyzers needing periodic calibrations that require a connection to the company's servers. Drivers who are due for a calibration and can't perform one due to the company's downtime have been stuck, though the company now states on its website that it's offering 10-day extensions on those calibrations due to its cybersecurity disruption, as well as towing services in some cases. In the meantime, Intoxalock hasn't explained what sort of cyberattack it's facing or whether hackers have obtained any of the company's user data.

The Courts

Valve Faces Second, Class-Action Lawsuit Over Loot Boxes (pcgamer.com) 110

Valve is facing a new consumer class-action lawsuit two weeks after New York sued the video game company for "letting children and adults illegally gamble" with loot boxes. The new lawsuit is similar, alleging that loot boxes in games like Counter-Strike 2, Dota 2, and Team Fortress 2 are "carefully engineered to extract money from consumers, including children, through deceptive, casino-style psychological tactics."

"We believe Valve deliberately engineered its gambling platform and profited enormously from it," Steve Berman, founder and managing partner at law firm Hagens Berman, said in a press release. "Consumers played these games for entertainment, unaware that Valve had allegedly already stacked the odds against them. We intend to hold Valve accountable and put money back in the pockets of consumers." PC Gamer reports: The system is well known to anyone who's played a Valve multiplayer game: Earn a locked loot box by playing, pay $2.50 for a key, unlock it, get a digital doohickey that's sometimes worth hundreds or even thousands of dollars but far more often is worth just a few pennies. Is that gambling? If these cases go to court, we'll find out.

The full complaint points out that the unlocking process is even designed to look like a slot machine: "Images of possible items scroll across the screen, spinning fast at first, then slowing to a stop on the player's 'prize.' Players buy and open loot boxes for the same reason people play slot machines -- the hope of a valuable payout." Loot boxes, the complaint continues, are not "incidental features" of Valve's games, but rather "a deliberate, carefully engineered revenue model." So too is the Steam Community Market, and Steam itself, which the suit claims is "deliberately designed" to enable the sale of digital items on third-party marketplaces through "trade URLs," despite Valve's terms of service prohibiting off-platform sales.

And while the debate over whether loot boxes constitute a form of gambling continues to rage, the suit claims Valve's system does indeed qualify under Washington law, which defines gambling as "staking or risking something of value upon the outcome of a contest of chance or a future contingent event not under the person's control or influence." "Valve's loot boxes satisfy every element of this definition," the lawsuit alleges. "Users stake money (the price of a key) on the outcome of a contest of chance (the random selection of a virtual item), and the items received are 'things of value' under RCW 9.46.0285 because they can be sold for real money through Valve's own marketplace and through third-party marketplaces that Valve has fostered and facilitated."

United States

Texas Sues TP-Link Over China Links and Security Vulnerabilities (theregister.com) 46

TP-Link is facing legal action from the state of Texas for allegedly misleading consumers with "Made in Vietnam" claims despite China-dominated manufacturing and supply chains, and for marketing its devices as secure despite reported firmware vulnerabilities exploited by Chinese state-sponsored actors. The Register: The Lone Star State's Attorney General, Ken Paxton, is filing the lawsuit against California-based TP-Link Systems Inc., which was originally founded in China, accusing it of deceptively marketing its networking devices and alleging that its security practices and China-based affiliations allowed Chinese state-sponsored actors to access devices in the homes of American consumers.

It is understood that this is just the first of several lawsuits that the Office of the Attorney General intends to file this week against "China-aligned companies," as part of a coordinated effort to hold China accountable under Texas law. The lawsuit claims that TP-Link is the dominant player in the US networking and smart home market, controlling 65 percent of the American market for network devices.

It also alleges that TP-Link represents to American consumers that the devices it markets and sells within the US are manufactured in Vietnam, and that consistent with this, the devices it sells in the American market carry a "Made in Vietnam" sticker.

Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social neteworks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com) 92

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior," the maintainer acknowledges. But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions — hiring, journalism, law, public discourse — are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's because a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
Earth

Half of Fossil Fuel Carbon Emissions In 2024 Came From 32 Companies (insideclimatenews.org) 31

An anonymous reader quotes a report from Inside Climate News: Just 32 companies accounted for over half of global fossil carbon emissions in 2024, according to a report published Wednesday by the U.K.-based think tank InfluenceMap. That is down from 36 companies responsible for half the global CO2 emissions in 2023, and 38 companies five years ago. The analysis is the latest update to the Carbon Majors database, which tracks the world's largest oil, gas, coal and cement producers and uses production data to calculate the carbon emissions from each entity's production. The database, first developed by researcher Richard Heede and now hosted by InfluenceMap, quantifies current and historical emissions attributable to nearly 180 companies and provides annual updates. It is the only database of its kind tracking corporate-generated carbon emissions dating back to the start of the Industrial Revolution, research that's being used in efforts to hold major polluters accountable for climate harms.

Despite dire warnings from scientists about the consequences of accelerating climate change, fossil fuel production is continuing apace. Last year, fossil fuel CO2 emissions reached a record high, topping 38 billion metric tons. In 2024 these emissions were 37.4 billion metric tons -- up 0.8 percent from 2023 -- and traceable to 166 oil, gas, coal and cement producers, according to the report. Much of the global carbon emissions in 2024 came from state-owned entities, which represented 16 of the top 20 emitters. The five largest emitters overall -- Saudi Arabia's Aramco, Coal India, China's CHN Energy, National Iranian Oil Co. and Russia's Gazprom -- were all state-controlled, and accounted for 18 percent of the total fossil CO2 emissions in 2024.

ExxonMobil, Chevron, Shell, ConocoPhillips and BP -- the top five emitting investor-owned companies -- together were responsible for 5.5 percent of the total emissions in that year. Historically, ExxonMobil and Chevron rank in the top five for fossil carbon emissions generated from 1854 through 2024, accounting for 2.79 percent and 3.08 percent of overall carbon pollution, respectively. According to the analysis, the 178 entities in the database have generated 70 percent of fossil CO2 emissions since the start of the Industrial Revolution, and just 22 entities are responsible for one-third of these emissions.
"Each year, global emissions become increasingly concentrated among a shrinking group of high-emitting producers, while overall production continues to grow. Simultaneously, these heavy emitters continue to use lobbying to obstruct a transition that the scientific community has known for decades is essential," said Emmett Connaire, senior analyst at InfluenceMap. The findings of the new analysis, he added, "underscore the growing importance of this kind of rigorous evidence in efforts to determine accountability for climate-related losses."
Earth

Half of World's CO2 Emissions Come From Just 32 Fossil Fuel Firms, Study Shows (theguardian.com) 104

Just 32 fossil fuel companies were responsible for half the global carbon dioxide emissions driving the climate crisis in 2024, down from 36 a year earlier, a report has revealed. The Guardian: Saudi Aramco was the biggest state-controlled polluter and ExxonMobil was the largest investor-owned polluter. Critics accused the leading fossil fuel companies of "sabotaging climate action" and "being on the wrong side of history" but said the emissions data was increasingly being used to hold the companies accountable.

State-owned fossil fuel producers made up 17 of the top 20 emitters in the Carbon Majors report, which the authors said underscored the political barriers to tackling global heating. All 17 are controlled by countries that opposed a proposed fossil fuel phaseout at the Cop30 UN climate summit in December, including Saudi Arabia, Russia, China, Iran, the United Arab Emirates and India. More than 80 other nations had backed the phaseout plan.

The Courts

John Carreyou and Other Authors Bring New Lawsuit Against Six Major AI Companies 32

A group of authors led by John Carreyrou has filed a new lawsuit against Anthropic, Google, OpenAI, Meta, xAI, and Perplexity, accusing the AI firms of training models on pirated copies of their books. TechCrunch reports: If this sounds familiar, it's because another set of authors already filed a class action suit against Anthropic for these same acts of copyright infringement. In that case, the judge ruled that it was legal for Anthropic and similar AI companies to train on pirated copies of books, but that it was not legal to pirate the books in the first place.

While eligible writers can receive about $3,000 from the $1.5 billion Anthropic settlement, some authors were dissatisfied with that resolution -- it doesn't hold AI companies accountable for the actual act of using stolen books to train their models, which generate billions of dollars in revenue.
The plaintiffs in the new lawsuit say the proposed Anthropic settlement "seems to serve [the AI companies], not creators."

"LLM companies should not be able to so easily extinguish thousands upon thousands of high-value claims at bargain-basement rates, eliding what should be the true cost of their massive willful infringement."
The Courts

Ukrainians Sue US Chip Firms For Powering Russian Drones, Missiles (arstechnica.com) 118

An anonymous reader quotes a report from Ars Technica: Dozens of Ukrainian civilians filed a series of lawsuits in Texas this week, accusing some of the biggest US chip firms of negligently failing to track chips that evaded export curbs. Those chips were ultimately used to power Russian and Iranian weapon systems, causing wrongful deaths last year. Their complaints alleged that for years, Texas Instruments (TI), AMD, and Intel have ignored public reporting, government warnings, and shareholder pressure to do more to track final destinations of chips and shut down shady distribution channels diverting chips to sanctioned actors in Russia and Iran.

Putting profits over human lives, tech firms continued using "high-risk" channels, Ukrainian civilians' legal team alleged in a press statement, without ever strengthening controls. All that intermediaries who placed bulk online orders had to do to satisfy chip firms was check a box confirming that the shipment wouldn't be sent to sanctioned countries, lead attorney Mikal Watts told reporters at a press conference on Wednesday, according to the Kyiv Independent. "There are export lists," Watts said. "We know exactly what requires a license and what doesn't. And companies know who they're selling to. But instead, they rely on a checkbox that says, 'I'm not shipping to Putin.' That's it. No enforcement. No accountability." [...]

Damages sought include funeral expenses and medical costs, as well as "exemplary damages" that are "intended to punish especially wrongful conduct and to deter similar conduct in the future." For plaintiffs, the latter is the point of the litigation, which they hope will cut off key supply chains to keep US tech out of weapon systems deployed against innocent civilians. "They want to send a clear message that American companies must take responsibility when their technologies are weaponized and used to commit harm across the globe," the press statement said. "Corporations must be held accountable when its unlawful decisions made in the name of profit directly cause the death of innocents and widespread human suffering." For chip firms, the litigation could get costly if more civilians join, with the threat of a loss potentially forcing changes that could squash supply chains currently working to evade sanctions. "We want to make this process so expensive and painful that companies are forced to act," Watts said. "That is our contribution to stopping the war against civilians."

The Courts

The New York Times Is Suing Perplexity For Copyright Infringement (techcrunch.com) 68

The New York Times is suing Perplexity for copyright infringement, accusing the AI startup of repackaging its paywalled reporting without permission. TechCrunch reports: The Times joins several media outlets suing Perplexity, including the Chicago Tribune, which also filed suit this week. The Times' suit claims that "Perplexity provides commercial products to its own users that substitute" for the outlet, "without permission or remuneration." [...] "While we believe in the ethical and responsible use and development of AI, we firmly object to Perplexity's unlicensed use of our content to develop and promote their products," Graham James, a spokesperson for The Times, said in a statement. "We will continue to work to hold companies accountable that refuse to recognize the value of our work."

Similar to the Tribune's suit, the Times takes issue with Perplexity's method for answering user queries by gathering information from websites and databases to generate responses via its retrieval-augmented generation (RAG) products, like its chatbots and Comet browser AI assistant. "Perplexity then repackages the original content in written responses to users," the suit reads. "Those responses, or outputs, often are verbatim or near-verbatim reproductions, summaries, or abridgments of the original content, including The Times's copyrighted works."

Or, as James put it in his statement, "RAG allows Perplexity to crawl the internet and steal content from behind our paywall and deliver it to its customers in real time. That content should only be accessible to our paying subscribers." The Times also claims Perplexity's search engine has hallucinated information and falsely attributed it to the outlet, which damages its brand. "Publishers have been suing new tech companies for a hundred years, starting with radio, TV, the internet, social media, and now AI," Jesse Dwyer, Perplexity's head of communications, told TechCrunch. "Fortunately it's never worked, or we'd all be talking about this by telegraph."

Games

Video Game Union Workers Rally Against $55 Billion Saudi-Backed Private Acquisition of EA (eurogamer.net) 36

EA employees and the Communications Workers of America union have condemned the company's proposed $55 billion private acquisition -- backed by Saudi Arabia's Public Investment Fund and Jared Kushner's Affinity Partners, "claiming they were not represented in the negotiations and any jobs lost as a result would 'be a choice, not a necessity, made to pad investors' pockets," reports Eurogamer. From the report: Following the announcement, there's been plenty of speculation around the future of EA and its multiple owned studios, split between EA Sports and EA Entertainment. Now, members of the United Videogame Workers union and the CWA have issued a formal response alongside a petition for regulators to scrutinize the deal. "EA is not a struggling company," the statement reads. "With annual revenues reaching $7.5 billion and $1 billion in profit each year, EA is one of the largest video game developers and publishers in the world."

This success has been driven by company workers, the union stated. "Yet we, the very people who will be jeopardized as a result of this deal, were not represented at all when this buyout was negotiated or discussed." Citing the number of layoffs across the industry since 2022, workers fear for "the future of our studios that are arbitrarily deemed 'less profitable' but whose contributions to the video game industry define EA's reputation." "If jobs are lost or studios are closed due to this deal, that would be a choice, not a necessity, made to pad investors' pockets - not to strengthen the company," the statement reads.

"Every time private equity or billionaire investors take a studio private, workers lose visibility, transparency, and power," it continues. "Decisions that shape our jobs, our art, and our futures are made behind closed doors by executives who have never written a line of code, built worlds, or supported live services. We are calling on regulators and elected officials to scrutinize this deal and ensure that any path forward protects jobs, preserves creative freedom, and keeps decision-making accountable to the workers who make EA successful." As such, workers have launched a petition in a "fight to make video games better for workers and players -- not billionaires". The statement concludes: "The value of video games is in their workers. As a unified voice, we, the members of the industry-wide video game workers' union UVW-CWA, are standing together and refusing to let corporate greed decide the future of our industry."

Programming

Bundler's Lead Maintainer Asserts Trademark in Ongoing Struggle with Ruby Central (arko.net) 7

After the nonprofit Ruby Central removed all RubyGems' maintainers from its GitHub repository, André Arko — who helped build Bundler — wrote a new blog post on Thursday "detailing Bundler's relationship with Ruby Central," according to this update from The New Stack. "In the last few weeks, Ruby Central has suddenly asserted that they alone own Bundler," he wrote. "That simply isn't true. In order to defend the reputation of the team of maintainers who have given so much time and energy to the project, I have registered my existing trademark on the Bundler project."

He adds that trademarks do not affect copyright, which stays with the original contributors unchanged. "Trademarks only impact one thing: Who is allowed say that what they make is named 'Bundler,'" he wrote. "Ruby Central is welcome to the code, just like everyone else. They are not welcome to the project name that the Bundler maintainers have painstakingly created over the last 15 years."

He is, however, not seeking the trademark for himself, noting that the "idea of Bundler belongs to the Ruby community." "Once there is a Ruby organization that is accountable to the maintainers, and accountable to the community, with openly and democratically elected board members, I commit to transfer my trademark to that organization," he said. "I will not license the trademark, and will instead transfer ownership entirely. Bundler should belong to the community, and I want to make sure that is true for as long as Bundler exists."

The blog It's FOSS also has an update on Spinel, the new worker-owned collective founded by Arko, Samuel Giddins [who Giddins led RubyGems security efforts], and Kasper Timm Hansen (who served served on the Rails core team from 2016 to 2022 and was one of its top contributors): These guys aren't newcomers but some of the architects behind Ruby's foundational infrastructure. Their flagship offering is rv ["the Ruby swiss army knife"], a tool that aims to replace the fragmented Ruby tooling ecosystem. It promises to [in the future] handle everything from rvm, rbenv, chruby, bundler, rubygems, and others — all at once while redefining how Ruby development tools should work... Spinel operates on retainer agreements with companies needing Ruby expertise instead of depending on sponsors who can withdraw support or demand control. This model maintains independence while ensuring sustainability for the maintainers.
The Register had reported Thursday: Spinel's 'rv' project aims to supplant elements of RubyGems and Bundler with a more modular, version-aware manager. Some in the Ruby community have already accused core Rails figures of positioning Spinel as a threat. For example, Rafael FranÃa of Shopify commented that admins of the new project should not be trusted to avoid "sabotaging rubygems or bundler."
Social Networks

What Happens After the Death of Social Media? (noemamag.com) 112

"These are the last days of social media as we know it," argues a humanities lecturer from University College Cork exploring where technology and culture intersect, warning they could become lingering derelicts "haunted by bots and the echo of once-human chatter..."

"Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks... " In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet's largest repositories of AI-generated spam. Research has found what users plainly see: tens of thousands of machine-written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half-coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney... While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren't connecting or conversing on social media like they used to; they're just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as "mostly reliable" — down from roughly two-thirds in the mid-2010s... Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

"These are the last days of social media, not because we lack content," the article suggests, "but because the attention economy has neared its outer limit — we have exhausted the capacity to care..." Social media giants have stopped growing exponentially, while a significant proportion of 18- to 34-year-olds even took deliberate mental health breaks from social media in 2024, according to an American Psychiatric Association poll.) And "Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd."

Yet his 5,000-word essay predicts social media's death rattle "will not be a bang but a shrug," since "the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens." Intentional, opt-in micro-communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram... Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber-only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate....

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos...? Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects... This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems... We need to "rewild the internet," as Maria Farrell and Robin Berjon mentioned in a Noema essay.

We need governance scaffolding, shared institutions that make decentralization viable at scale... [R]eal change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

"Social media as we know it is dying, but we're not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces..."

"The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones."
Social Networks

Nepal Blocks Most Social Media Platforms (apnews.com) 13

Nepal's government said Thursday it is blocking most social media platforms including Facebook, X and YouTube because the companies failed to comply with regulations that required them to register with the government. From a report: Nepal's Minister for Communication and Information Prithvi Subba Gurung said about two dozen social network platforms that are widely used in Nepal were repeatedly given notices to come forward and register their companies officially in the country. The platforms would be blocked immediately, he said.

TikTok, Viber and three other social media platforms would be allowed to operate in Nepal because they have registered with the government. Nepal government have been asking the companies to appoint a liaison office or point in the country. It has brought a bill in parliament that aims to ensure that social platforms are properly managed, responsible and accountable.

Businesses

The Head of ChatGPT Won't Rule Out Adding Ads (theverge.com) 49

An anonymous reader shares a report: OpenAI is considering ways to bring in additional revenue, and bringing ads to ChatGPT is one option on the table. While being interviewed on Decoder, ChatGPT head Nick Turley said he's "humble enough not to rule it out categorically," but hedged that OpenAI would need to "be very thoughtful and tasteful" about how ads could be integrated into ChatGPT.

"We will build other products, and those other products can have different dimensions to them, and maybe ChatGPT just isn't an ads-y product because it's just so deeply accountable to your goals. But it doesn't mean that we wouldn't build other things in the future, too," Turley said. "I think it's good to preserve optionality, but I also really do want to emphasize how incredible the subscription model is, how fast it's growing, and how untapped a lot of the opportunities are."

AI

McDonald's AI Hiring Bot Exposed Millions of Applicants' Data To Hackers 25

An anonymous reader quotes a report from Wired: If you want a job at McDonald's today, there's a good chance you'll have to talk to Olivia. Olivia is not, in fact, a human being, but instead an AI chatbot that screens applicants, asks for their contact information and resume, directs them to a personality test, and occasionally makes them "go insane" by repeatedly misunderstanding their most basic questions. Until last week, the platform that runs the Olivia chatbot, built by artificial intelligence software firm Paradox.ai, also suffered from absurdly basic security flaws. As a result, virtually any hacker could have accessed the records of every chat Olivia had ever had with McDonald's applicants -- including all the personal information they shared in those conversations -- with tricks as straightforward as guessing the username and password "123456."

On Wednesday, security researchers Ian Carroll and Sam Curryrevealedthat they found simple methods to hack into the backend of the AI chatbot platform on McHire.com, McDonald's website that many of its franchisees use to handle job applications. Carroll and Curry, hackers with along track record of independent security testing, discovered that simple web-based vulnerabilities -- including guessing one laughably weak password -- allowed them to access a Paradox.ai account and query the company's databases that held every McHire user's chats with Olivia. The data appears to include as many as 64 million records, including applicants' names, email addresses, and phone numbers.

Carroll says he only discovered that appalling lack of security around applicants' information because he was intrigued by McDonald's decision to subject potential new hires to an AI chatbot screener and personality test. "I just thought it was pretty uniquely dystopian compared to a normal hiring process, right? And that's what made me want to look into it more," says Carroll. "So I started applying for a job, and then after 30 minutes, we had full access to virtually every application that's ever been made to McDonald's going back years."
Paradox.ai confirmed the security findings, acknowledging that only a small portion of the accessed records contained personal data. The company stated that the weak-password account ("123456") was only accessed by the researchers and no one else. To prevent future issues, Paradox is launching a bug bounty program. "We do not take this matter lightly, even though it was resolved swiftly and effectively," Paradox.ai's chief legal officer, Stephanie King, told WIRED in an interview. "We own this."

In a statement to WIRED, McDonald's agreed that Paradox.ai was to blame. "We're disappointed by this unacceptable vulnerability from a third-party provider, Paradox.ai. As soon as we learned of the issue, we mandated Paradox.ai to remediate the issue immediately, and it was resolved on the same day it was reported to us," the statement reads. "We take our commitment to cyber security seriously and will continue to hold our third-party providers accountable to meeting our standards of data protection."
Communications

Bezos-Backed Methane Tracking Satellite Is Lost In Space (reuters.com) 60

MethaneSAT, an $88 million satellite backed by Jeff Bezos and led by the Environmental Defense Fund to track global methane emissions, has been lost in space after going off course and losing power over Norway. "We're seeing this as a setback, not a failure," Amy Middleton, senior vice president at EDF, told Reuters. "We've made so much progress and so much has been learned that if we hadn't taken this risk, we wouldn't have any of these learnings." Reuters reports: The launch of MethaneSAT in March 2024 was a milestone in a years-long campaign by EDF to hold accountable the more than 120 countries that in 2021 pledged to curb their methane emissions. It also sought to help enforce a further promise from 50 oil and gas companies made at the Dubai COP28 climate summit in December 2023 to eliminate methane and routine gas flaring. [...] While MethaneSAT was not the only project to publish satellite data on methane emissions, its backers said it provided more detail on emissions sources and it partnered with Google to create a publicly-available global map of emissions.

EDF reported the lost satellite to federal agencies including the National Oceanic and Atmospheric Administration, Federal Communications Commission and the U.S. Space Force on Tuesday, it said. Building and launching the satellite cost $88 million, according to the EDF. The organization had received a $100 million grant from the Bezos Earth Fund in 2020 and got other major financial support from Arnold Ventures, the Robertson Foundation and the TED Audacious Project and EDF donors. The project was also partnered with the New Zealand Space Agency. EDF said it had insurance to cover the loss and its engineers were investigating what had happened.

The organization said it would continue to use its resources, including aircraft with methane-detecting spectrometers, to look for methane leaks. It also said it was too early to say whether it would seek to launch another satellite but believed MethaneSAT proved that a highly sensitive instrument "could see total methane emissions, even at low levels, over wide areas."

AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
Earth

German Court Confirms Civil Liability for Corporate Climate Harms (cri.org) 32

An anonymous reader shares a report: In a landmark ruling advancing efforts to hold major polluters accountable for transnational climate-related harms, on May 28 a German court concluded that a corporation can be held liable under civil law for its proportional contribution to global climate change, Climate Rights International said today.

Filed in 2015, the case against German energy giant RWE AG challenged the corporation to pay for its proportional share of adaptation costs needed to protect the Andean city of Huaraz, Peru, from a flood from a glacial lake exacerbated by global warming. RWE AG, one of Europe's largest emitters, is estimated to be responsible for approximately 0.47% of global historical global greenhouse gas emissions.

"This groundbreaking ruling confirms that corporate emitters can no longer hide behind borders, politics, or scale to escape responsibility," said Lotte Leicht, Advocacy Director at Climate Rights International. "The court's message is clear: major carbon polluters can be held legally responsible for their role in driving the climate crisis and the resulting human rights and economic harms. If the reasoning of this decision is adopted by other courts, it could lay the foundation for ending the era of impunity for fossil fuel giants and other big greenhouse gas emitters."

AI

Duolingo Faces Massive Social Media Backlash After 'AI-First' Comments (fastcompany.com) 35

"Duolingo had been riding high," reports Fast Company, until CEO Luis von Ahn "announced on LinkedIn that the company is phasing out human contractors, looking for AI use in hiring and in performance reviews, and that 'headcount will only be given if a team cannot automate more of their work.'"

But then "facing heavy backlash online after unveiling its new AI-first policy", Duolingo's social media presence went dark last weekend. Duolingo even temporarily took down all its posts on TikTok (6.7 million followers) and Instagram (4.1 million followers) "after both accounts were flooded with negative feedback." Duolingo previously faced criticism for quietly laying off 10% of its contractor base and introducing some AI features in late 2023, but it barely went beyond a semi-viral post on Reddit. Now that Duolingo is cutting out all its human contractors whose work can technically be done by AI, and relying on more AI-generated language lessons, the response is far more pronounced. Although earlier TikTok videos are not currently visible, a Fast Company article from May 12 captured a flavor of the reaction:

The top comments on virtually every recent post have nothing to do with the video or the company — and everything to do with the company's embrace of AI. For example, a Duolingo TikTok video jumping on board the "Mama, may I have a cookie" trend saw replies like "Mama, may I have real people running the company" (with 69,000 likes) and "How about NO ai, keep your employees...."

And then... After days of silence, on Tuesday the company posted a bizarre video message on TikTok and Instagram, the meaning of which is hard to decipher... Duolingo's first video drop in days has the degraded, stuttering feel of a Max Headroom video made by the hackers at Anonymous. In it, a supposed member of the company's social team appears in a three-eyed Duo mask and black hoodie to complain about the corporate overlords ruining the empire the heroic social media crew built.
"But this is something Duolingo can't cute-post its way out of," Fast Company wrote on Tuesday, complaining the company "has not yet meaningfully addressed the policies that inspired the backlash against it... "

So the next video (Thursday) featured Duolingo CEO Luis von Ahn himself, being confronted by that same hoodie-wearing social media rebel, who says "I'm making the man who caused this mess accountable for his behavior. I'm demanding answers from the CEO..." [Though the video carefully sidesteps the issue of replacing contractors with AI or how "headcount will only be given if a team cannot automate more of their work."] Rebel: First question. So are there going to be any humans left at this company?

CEO: Our employees are what make Duolingo so amazing. Our app is so great because our employees made it... So we're going to continue having employees, and not only that, we're actually going to be hiring more employees.

Rebel: How do we know that these aren't just empty promises? As long as you're in charge, we could still be shuffled out once the media fire dies down. And we all know that in terms of automation, CEOs should be the first to go.

CEO: AI is a fundamental shift. It's going to change how we all do work — including me. And honestly, I don't really know what's going to happen.

But I want us, as a company, to have our workforce prepared by really knowing how to use AI so that we can be more efficient with it.

Rebel: Learning a foreign language is literally about human connection. How is that even possible with AI-first?

CEO: Yes, language is about human connection, and it's about people. And this is the thing about AI. AI will allow us to reach more people, and to teach more people. I mean for example, it took us about 10 years to develop the first 100 courses on Duolingo, and now in under a year, with the help of AI and of course with humans reviewing all the work, we were able to release another 100 courses in less than a year.

Rebel: So do you regret posting this memo on LinkedIn.

CEO: Honestly, I think I messed up sending that email. What we're trying to do is empower our own employees to be able to achieve more and be able to have way more content to teach better and reach more people all with the help of AI.

Returning to where it all started, Duolingo's CEO posted again on LinkedIn Thursday with "more context" for his vision. It still emphasizes the company's employees while sidestepping contractors replaced by AI. But it puts a positive spin on how "headcount will only be given if a team cannot automate more of their work." I've always encouraged our team to embrace new technology (that's why we originally built for mobile instead of desktop), and we are taking that same approach with AI. By understanding the capabilities and limitations of AI now, we can stay ahead of it and remain in control of our own product and our mission.

To be clear: I do not see AI as replacing what our employees do (we are in fact continuing to hire at the same speed as before). I see it as a tool to accelerate what we do, at the same or better level of quality. And the sooner we learn how to use it, and use it responsibly, the better off we will be in the long run. My goal is for Duos to feel empowered and prepared to use this technology.

No one is expected to navigate this shift alone. We're developing workshops and advisory councils, and carving out dedicated experimentation time to help all our teams learn and adapt. People work at Duolingo because they want to solve big problems to improve education, and the people who work here are what make Duolingo successful. Our mission isn't changing, but the tools we use to build new things will change. I remain committed to leading Duolingo in a way that is consistent with our mission to develop the best education in the world and make it universally available.

"The backlash to Duolingo is the latest evidence that 'AI-first' tends to be a concept with much more appeal to investors and managers than most regular people," notes Fortune: And it's not hard to see why. Generative AI is often trained on reams of content that may have been illegally accessed; much of its output is bizarre or incorrect; and some leaders in the field are opposed to regulations on the technology. But outside particular niches in entry-level white-collar work, AI's productivity gains have yet to materialize.

Slashdot Top Deals