AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com)

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators. But "once they forced arbitration, they refused to participate," she said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI's terms, which suggested the company's liability was limited to $100 or the amount that Doe's son paid for the service, whichever was greater.
GNOME

GNOME 49 'Brescia' Desktop Environment Released (9to5linux.com) 2

prisoninmate shares a report from 9to5Linux: The GNOME Project released today GNOME 49 "Brescia" as the latest stable version of this widely used desktop environment for GNU/Linux distributions, a major release that introduces exciting new features. Highlights of GNOME 49 include a new "Do Not Disturb" toggle in Quick Settings, a dedicated Accessibility menu in the login screen, support for handling unknown power profiles in the Quick Settings menu, support for YUV422 and YUV444 (HDR) color spaces, support for passive screen casts, and support for async keyboard map settings.

GNOME 49 also introduces support for media controls, restart and shutdown actions on the lock screen, support for dynamic users for greeter sessions in the GNOME Display Manager (GDM), and support for per-monitor brightness sliders in Quick Settings on multi-monitor setups.
For a full list of changes, check out the release notes.
Beer

Chimps Drink the Equivalent of a Lager a Day From Ripe Fruit, Study Finds (bbc.com) 11

Wild chimpanzees have been found to consume the equivalent of a bottle of lager's alcohol a day from eating ripened fruit, scientists say. BBC: They say this is evidence humans may have acquired their taste for alcohol from common primate ancestors who relied on fermented fruit -- a source of sugar and alcohol -- for food. "Human attraction to alcohol probably arose from this dietary heritage of our common ancestor with chimpanzees," said study researcher Aleksey Maro of the University of California, Berkeley.

Chimps, like many other animals, have been spotted feeding on ripe fruit lying on the forest floor, but this is the first study to make clear how much alcohol they might be consuming. The research team measured the amount of ethanol, or pure alcohol, in fruits such as figs and plums eaten in large quantities by wild chimps in Cote d'Ivoire and Uganda. Based on the amount of fruit they normally eat, the chimps were ingesting around 14 grams of ethanol -- equivalent to nearly two UK units, or roughly one 330ml bottle of lager. The fruits most commonly eaten were those highest in alcohol content.
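The reported figures can be sanity-checked with a back-of-envelope calculation, assuming the standard UK definition of one unit as 10 ml of pure ethanol and an ethanol density of about 0.789 g/ml (the 330 ml, 5% ABV lager is an illustrative assumption, not a figure from the study):

```python
# Back-of-envelope check of the study's figures.
# Assumptions: 1 UK unit = 10 ml pure ethanol; ethanol density ~0.789 g/ml.
ETHANOL_DENSITY_G_PER_ML = 0.789
UK_UNIT_ML = 10.0

def grams_to_uk_units(grams: float) -> float:
    """Convert grams of pure ethanol to UK alcohol units."""
    return grams / (UK_UNIT_ML * ETHANOL_DENSITY_G_PER_ML)

def lager_ethanol_grams(volume_ml: float = 330, abv: float = 0.05) -> float:
    """Grams of ethanol in a lager of the given volume and strength."""
    return volume_ml * abv * ETHANOL_DENSITY_G_PER_ML

daily_intake_g = 14.0  # reported daily ethanol intake of the chimps
print(round(grams_to_uk_units(daily_intake_g), 2))  # 1.77 -- "nearly two UK units"
print(round(lager_ethanol_grams(), 1))              # 13.0 g in a 330 ml, 5% lager
```

Both numbers line up with the article's "nearly two UK units, or roughly one 330ml bottle of lager."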

Sony

Sony Quietly Downgrades PS5 Digital Edition Storage To 825GB at Same Price (tomshardware.com) 6

Sony has quietly introduced a revised PlayStation 5 Digital Edition that reduces internal storage from 1TB to 825GB while maintaining the same 499 Euro ($590) price point. The CFI-2116 revision has appeared on Amazon listings across Italy, Germany, Spain and France without official announcement from Sony.

The storage downgrade returns the console to its original 825GB capacity last seen in the launch PlayStation 5 before the Slim models increased storage to 1TB. Users lose approximately 175GB of usable space in the new revision. Amazon Germany lists October 23 as the delivery date for units already available for purchase. The change affects only the Digital Edition while the disc version remains unchanged at 1TB. The revision follows Sony's September price increase of $50 across PlayStation 5 models citing economic conditions.
Government

Congress Asks Valve, Discord, and Twitch To Testify On 'Radicalization' (polygon.com) 43

An anonymous reader quotes a report from Polygon: The CEOs of Discord, Steam, Twitch, and Reddit have been called to Congress to testify about the "radicalization of online forum users" on those platforms, the House Oversight and Government Reform Committee announced Wednesday. "Congress has a duty to oversee the online platforms that radicals have used to advance political violence," said chairman of the House Oversight Committee James Comer, a Republican from Kentucky, in a statement. "To prevent future radicalization and violence, the CEOs of Discord, Steam, Twitch, and Reddit must appear before the Oversight Committee and explain what actions they will take to ensure their platforms are not exploited for nefarious purposes."

Letters from the House Oversight Committee have been sent to Humam Sakhnini, CEO of Discord; Gabe Newell, president of Steam maker Valve; Dan Clancy, CEO of Twitch; and Steve Huffman, CEO of Reddit, requesting their testimony on Oct. 8. "The hearing will examine radicalization of online forum users, including incidents of open incitement to commit violent politically motivated acts," Comer said in a letter to each CEO. [...] Discord, Steam, Twitch, and Reddit execs will have the chance to deliver five-minute opening statements prior to answering questions posed by members of the committee during October's testimony.

Transportation

Flying Cars Crash Into Each Other At Air Show In China 23

Two Xpeng AeroHT flying cars collided during a rehearsal for the Changchun Air Show in China, with one vehicle catching fire upon landing. While the company reported no serious injuries, CNN reported one person was injured in the crash. The BBC reports: Footage on Chinese social media site Weibo appeared to show a flaming vehicle on the ground which was being attended to by fire engines. One vehicle "sustained fuselage damage and caught fire upon landing," Xpeng AeroHT said in a statement to CNN. "All personnel at the scene are safe, and local authorities have completed on-site emergency measures in an orderly manner," it added.

The electric flying cars take off and land vertically, and the company is hoping to sell them for around $300,000 each. In January, Xpeng claimed to have around 3,000 orders for the vehicle. [...] It has said it wants to lead the world in the "low-altitude economy."
Programming

Microsoft Favors Anthropic Over OpenAI For Visual Studio Code (theverge.com) 6

Microsoft is now prioritizing Anthropic's Claude 4 over OpenAI's GPT-5 in Visual Studio Code's auto model feature, signaling a quiet but clear shift in preference. The Verge reports: "Based on internal benchmarks, Claude Sonnet 4 is our recommended model for GitHub Copilot," said Julia Liuson, head of Microsoft's developer division, in an internal email in June. While that guidance was issued ahead of the GPT-5 release, I understand Microsoft's model guidance hasn't changed.

Microsoft is also making "significant investments" in training its own AI models. "We're also going to be making significant investments in our own cluster. So today, MAI-1-preview was only trained on 15,000 H100s, a tiny cluster in the grand scheme of things," said Microsoft AI chief Mustafa Suleyman, in an employee-only town hall last week.

Microsoft is also reportedly planning to use Anthropic's AI models for some features in its Microsoft 365 apps soon. The Information reports that the Microsoft 365 Copilot will be "partly powered by Anthropic models," after Microsoft found that some of these models outperformed OpenAI in Excel and PowerPoint.

AI

Gemini AI Solves Coding Problem That Stumped 139 Human Teams At ICPC World Finals (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica: Like the rest of its Big Tech cadre, Google has spent lavishly on developing generative AI models. Google's AI can clean up your text messages and summarize the web, but the company is constantly looking to prove that its generative AI has true intelligence. The International Collegiate Programming Contest (ICPC) helps make the point. Google says Gemini 2.5 participated in the 2025 ICPC World Finals, turning in a gold medal performance. According to Google, this marks "a significant step on our path toward artificial general intelligence."

Every year, thousands of college-level coders participate in the ICPC event, facing a dozen deviously complex coding and algorithmic puzzles over five grueling hours. This is the largest and longest-running competition of its type. To compete in the ICPC, Google connected Gemini 2.5 Deep Think to a remote online environment approved by the ICPC. The human competitors were given a head start of 10 minutes before Gemini began "thinking."

According to Google, it did not create a freshly trained model for the ICPC like it did for the similar International Mathematical Olympiad (IMO) earlier this year. The Gemini 2.5 AI that participated in the ICPC is the same general model that we see in other Gemini applications. However, it was "enhanced" to churn through thinking tokens for the five-hour duration of the competition in search of solutions. At the end of the time limit, Gemini managed to get correct answers for 10 of the 12 problems, which earned it a gold medal. Only four of 139 human teams managed the same feat. "The ICPC has always been about setting the highest standards in problem-solving," said ICPC director Bill Poucher. "Gemini successfully joining this arena, and achieving gold-level results, marks a key moment in defining the AI tools and academic standards needed for the next generation."
Gemini's solutions are available on GitHub.
Earth

Extreme Heat Spurs New Laws Aimed at Protecting Workers Worldwide (nytimes.com) 28

Governments worldwide are implementing heat protection laws as 2.4 billion workers face extreme temperature exposure and 19,000 die annually from heat-related workplace injuries, according to a World Health Organization and World Meteorological Organization report.

Japan imposed $3,400 fines for employers failing to provide cooling measures when wet-bulb temperatures reach 28C. Singapore mandated hourly temperature sensors at large outdoor sites and requires 15-minute breaks every hour at 33C wet-bulb readings. Southern European nations ordered afternoon work stoppages this summer when temperatures exceeded 115F across Greece, Italy and Spain.

The United States lacks federal heat standards; only California, Colorado, Maryland, Minnesota, Nevada, Oregon and Washington have state-level protections. Boston passed requirements for heat illness prevention plans on city projects. Enforcement remains inconsistent -- Singapore inspectors found nearly one-third of 70 sites violated the 2023 law. Texas and Florida prohibit local governments from mandating rest and water breaks.
AI

AI's Ability To Displace Jobs Is Advancing Quickly, Anthropic CEO Says (axios.com) 42

The ability of AI to displace humans at various tasks is accelerating quickly, Anthropic CEO Dario Amodei said at an Axios event on Wednesday. From the report: Amodei and others have previously warned of the possibility that up to half of white-collar jobs could be wiped out by AI over the next five years. The speed of that displacement could require government intervention to help support the workforce, executives said.

"As with most things, when an exponential is moving very quickly, you can't be sure," Amodei said. "I think it is likely enough to happen that we felt there was a need to warn the world about it and to speak honestly." Amodei said the government may need to step in and support people as AI quickly displaces human work.

Earth

Darkest Nights Are Getting Lighter (ieee.org) 21

Light pollution now doubles every eight years globally as LED adoption accelerates artificial brightness worldwide. A recent study measured 10% annual growth in light pollution from 2011 to 2022. Northern Chile's Atacama Desert remains one of the few Bortle Scale 1 locations -- the darkest rating for astronomical observation -- though La Serena's population has nearly doubled in 25 years. The region hosts major observatories including the Vera C. Rubin Observatory at Cerro Pachon.
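The two growth figures above are mutually consistent, which a quick calculation (an illustrative check, not part of the cited study) confirms: 10% annual growth compounds to roughly a doubling in about seven to eight years.

```python
# Consistency check: does 10% annual growth match "doubles every eight years"?
import math

annual_growth = 0.10
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(round(doubling_time, 1))                 # 7.3 -- years to double at 10%/yr
print(round((1 + annual_growth) ** 8, 2))      # 2.14 -- growth factor over 8 years
```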

Satellite constellations pose additional challenges: their numbers have increased from hundreds, decades ago, to 12,000 currently operating satellites, and astronomers predict 100,000 or more within a decade. Chile also faces pressure from proposed industrial projects, including the 7,400-acre INNA green-hydrogen facility near key astronomical sites, despite national laws limiting artificial light from the mining operations that generate over half the country's exports.
AI

OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance (theregister.com) 70

AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register: The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. It concludes that "the majority of mainstream evaluations reward hallucinatory behavior."

The fundamental problem is that AI models are trained and evaluated in ways that reward guessing rather than admitting uncertainty. A guess might produce a superficially suitable answer, while telling users your AI can't find an answer is less satisfying. As a test case, the team tried to get an OpenAI bot to report the birthday of one of the paper's authors, OpenAI research scientist Adam Tauman Kalai. It produced three incorrect results because the trainers taught the engine to return an answer, rather than admit ignorance. "Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty," OpenAI admitted in a blog post accompanying the release.
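The scoring incentive the paper describes can be illustrated with a toy simulation (this is a sketch of the general argument, not OpenAI's actual evaluation code): under binary grading, a correct answer scores 1 and anything else scores 0, so a model that abstains can never beat even a low-accuracy guesser.

```python
# Toy illustration of why binary-graded benchmarks favor guessing:
# abstaining ("I don't know") scores 0, so any nonzero-accuracy guess wins.
import random

random.seed(0)

def expected_score(p_correct: float, abstains: bool, n: int = 100_000) -> float:
    """Average benchmark score over n questions under binary grading."""
    score = 0
    for _ in range(n):
        if abstains:
            continue                      # honest "I don't know" earns nothing
        if random.random() < p_correct:
            score += 1                    # a lucky guess earns full credit
    return score / n

print(expected_score(0.2, abstains=False))  # ~0.2: model that always guesses
print(expected_score(0.2, abstains=True))   # 0.0: careful model that abstains
```

On the scoreboard the guessing model looks five times better, even though four out of five of its answers are hallucinations.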

Earth

Corals Won't Survive a Warmer Planet, a New Study Finds (nytimes.com) 38

If global temperatures continue rising, virtually all the corals in the Atlantic Ocean will stop growing and could succumb to erosion by the end of the century, a new study finds. From a report: The analysis of over 400 existing coral reefs across the Atlantic Ocean estimates that more than 70 percent of the region's reefs will begin dying by 2040 even under optimistic climate warming scenarios. And if the planet exceeds 2 degrees Celsius of warming above preindustrial temperatures by the end of the century, 99 percent of corals in the region would meet this fate. Today, the planet has warmed about 1.3 degrees Celsius over preindustrial temperatures.

The implications are grave. Corals act as the fundamental building blocks of reefs, providing habitat for thousands of species of fish and other marine life. They are also bulwarks that break up waves and help protect shorelines from rising sea levels. A quarter of all ocean life depends on coral reefs and over a billion people worldwide benefit from them, according to the National Oceanic and Atmospheric Administration.

Portables (Apple)

After Years of Resistance, Apple Might Finally Release a Touchscreen MacBook Pro (pocket-lint.com) 34

An anonymous reader shares a report: After years of dismissing the idea of putting a touchscreen on a MacBook, it seems Apple may have finally caved. Its MacBook Pro overhaul in 2026 is now expected to be the first-ever MacBook to feature a touchscreen display, according to a report from supply chain analyst Ming-Chi Kuo on X.

The change will reportedly affect Apple's next-generation MacBook Pro, which could feature an OLED display and "incorporate a touch panel using on-cell touch technology." The OLED MacBook Pro isn't expected to enter production until late 2026, and before then, Apple is expected to launch the M5 MacBook Pro in early 2026.

AI

Business Insider Reportedly Tells Journalists They Can Use AI To Draft Stories (theverge.com) 17

An anonymous reader shares a report: Business Insider has told journalists they can use AI to create first drafts of stories and suggested it won't notify readers that AI was used, according to Status, a newsletter covering the media industry. The policy makes the outlet one of the first to formally allow such extensive use of the technology.

The AI guidelines were reportedly circulated in an internal memo from editor-in-chief Jamie Heller on Thursday. The policy authorized journalists to deploy AI "like any other tool" for tasks like research and image editing, Status reported.
