Media

Sound Blaster Crowdfunds Linux-Powered Audio Hub 'Re:Imagine' For Creators and Gamers (nerds.xyz) 49

Slashdot reader BrianFagioli summarizes some news from Nerds.xyz: Creative Technology has launched Sound Blaster Re:Imagine, a modular, Linux-powered audio hub that reimagines the classic PC sound card for the modern age. The device acts as both a high-end digital-to-analog converter (DAC) and a customizable control deck that connects PCs, consoles, phones, and tablets in one setup.

Users can instantly switch inputs and outputs, while developers get full hardware access through an SDK for creating their own apps. It even supports AI-driven features like an on-device DJ, a revived "Dr. Sbaitso" speech synthesizer, and a built-in DOS emulator for retro gaming.

The Kickstarter campaign has already raised more than $150,000, far surpassing its initial goal of $15,000 with over 50 days remaining. Each unit ships with a modular "Horizon" base and swappable knobs, sliders, and buttons, while a larger "Vertex" version will unlock at a higher funding milestone.

Running an unspecified Linux build, Re:Imagine positions itself as both a nostalgic nod to Sound Blaster's roots and a new open platform for creators, gamers, and tinkerers.

AI

Adobe Struggles To Assure Investors That It Can Thrive in AI Era (msn.com) 16

An anonymous reader shares a report: Adobe brought together 10,000 marketers, filmmakers and content creators at its annual conference this week to persuade them that the company's software products are adapting to AI and remain the best tools for their work. But it's Adobe's investors, rather than its users, who most need convincing that generative AI technology won't disrupt the company's business as the top seller of software for creative professionals.

Despite a strong strategy, Adobe is "at risk of structural AI-driven competitive and pricing pressure," wrote Tyler Radke, an analyst at Citigroup. The company's shares have lost about a quarter of their value this year as AI tools like Google's video-generating model Veo have gained steam. In an interview with Bloomberg Television earlier this week, Adobe Chief Executive Officer Shantanu Narayen said the company is undervalued as the market is focused on semiconductors and the training of AI models.

China

China Bars Influencers From Discussing Professional Topics Without Relevant Degrees (iol.co.za) 196

schwit1 writes: China has enacted a new law regulating social media influencers, requiring them to hold verified professional qualifications before posting content on sensitive topics such as medicine, law, education, and finance, IOL reported. The new law went into effect on Saturday.

The regulation was introduced by the Cyberspace Administration of China (CAC) as part of its broader effort to curb misinformation online. Under the new rules, influencers must prove their expertise through recognized degrees, certifications, or licenses before discussing regulated subjects. Major platforms such as Douyin (China's TikTok), Bilibili, and Weibo are now responsible for verifying influencer credentials and ensuring that content includes clear citations, disclaimers, and transparency about sources.

Audiences expect influencers to be both creative and credible. Yet when they blur the line between opinion and expertise, the impact can be severe. A single misleading financial tip could wipe out someone's savings. A viral health trend could cause real harm. That's why many believe it's time for creators to acknowledge the weight of their influence. However, China's new law raises deeper questions: Who defines "expertise"? What happens to independent creators who challenge official narratives but lack formal credentials? And how far can regulation go before it suppresses free thought?

Businesses

Electronic Arts' AI Tools Are Creating More Work Than They Save (businessinsider.com) 42

Electronic Arts has spent the past year pushing its nearly 15,000 employees to use AI for everything from code generation to scripting difficult conversations about pay. Employees in some areas must complete multiple AI training courses and use tools like the company's in-house chatbot ReefGPT daily.

The tools produce flawed code and hallucinations that employees then spend time correcting. Staff say the AI creates more work rather than less, according to Business Insider. They fix mistakes while simultaneously training the programs on their own work. Creative employees fear the technology will eventually eliminate demand for character artists and level designers. One recently laid-off senior quality-assurance designer says AI performed a key part of his job -- reviewing and summarizing feedback from hundreds of play testers. He suspects this contributed to his termination when about 100 colleagues were let go this past spring from the company's Respawn Entertainment studio.

Books

Was the Web More Creative and Human 20 Years Ago? (bookforum.com) 77

Readers in 2025 "may struggle to remember the optimism of the aughts, when the internet seemed to offer endless possibilities for virtual art and writing that was free..." argues a new review at Bookforum. "The content we do create online, if we still create, often feels unreflectively automatic: predictable quote-tweet dunks, prefabricated poses on Instagram, TikTok dances that hit their beats like clockwork, to say nothing of what's literally thoughtlessly churned out by LLM-powered bots."

They write that author Joanna Walsh "wants us to remember how truly creative, and human, the internet once was," in the golden age of user-generated content — and funny cat picture sites like I Can Has Cheezburger: I Can Has Cheezburger... was an amateur project, an outlet for tech professionals who wanted an easier way to exchange cute cat pics after a hard day at work. In Amateurs!: How We Built Internet Culture and Why It Matters, Walsh documents how unpaid creative labor is the basis for almost everything that's good (and much that's bad) online, including the open-source code Linux, developed by Linus Torvalds when he was still in school ("just as a hobby, won't be big and professional"), and even, in Walsh's account, the World Wide Web itself. The platforms that emerged in the 2000s as "Web 2.0," including Facebook, YouTube, Reddit, and Twitter, allowed anyone to experiment in a space that had been reserved for coders and hackers, making the internet interactive even for the inexpert and virtually unlimited in potential audience. The explosion in amateur creativity that followed took many forms, from memes to tweeted one-liners to diaristic blogs to durational digital performances to sloppy Photoshops to the formal and informal taxonomic structures — wikis, neologisms, digitally native dialects...

[U]ser-generated content was also, at bottom, about the bottom line, a business model sold to us under the guise of artistic empowerment. Even referring to an anonymous amateur as a "user," Walsh argues, cedes ground: these platforms are populated by producers, but their owners see us as, and turn us into, "helpless addicts." For some, online amateurism translated to professional success, a viral post earning an author a book deal, or a reputation as a top commenter leading to a staff writing job on a web publication... But for most, these days, participation in the online attention economy feels like a tax, or maybe a trickle of revenue, rather than free fun or a ticket to fame. The few remaining professionals in the arts and letters have felt pressured to supplement their full-time jobs with social media self-promotion, subscription newsletters, podcasts, and short-form video. On what was once called Twitter, users can pay, and sometimes get paid, to post with greater reach...

The chapters are bookended by an introduction on the early promise of 2004 and a coda on the defeat of 2025 and supplemented by an appendix with a straightforward timeline of the major events and publications that serve as the book's touchstones... The online spaces where amateur content creators once "created and steered online culture" have been hollowed out and replaced by slop, but what really hurts is that the slop is being produced by bots trained on precisely that amateur content.

IT

To Fight Business 'Enshittification', Cory Doctorow Urges Tech Workers: Join Unions (acm.org) 136

Cory Doctorow has always warned that companies "enshittify" their services — shifting "as much as they can from users, workers, suppliers, and business customers to themselves." But this week Doctorow writes in Communications of the ACM that enshittification "would be much, much worse if not for tech workers," who have "the power to tell their bosses to go to hell..." When your skills are in such high demand that you can quit your job, walk across the street, and get a better one later that same day, your boss has a real incentive to make you feel like you are their social equal, empowered to say and do whatever feels technically right... The per-worker revenue for successful tech companies is unfathomable — tens or even hundreds of times their wages and stock compensation packages.

"No wonder tech bosses are so excited about AI coding tools," Doctorow adds, "which promise to turn skilled programmers from creative problem-solvers to mere code reviewers for AI as it produces tech debt at scale. Code reviewers never tell their bosses to go to hell, and they are a lot easier to replace."

So how should tech workers respond in a world where tech workers are now "as disposable as Amazon warehouse workers and drivers...?" Throughout the entire history of human civilization, there has only ever been one way to guarantee fair wages and decent conditions for workers: unions. Even non-union workers benefit from unions, because strong unions are the force that causes labor protection laws to be passed, which protect all workers. Tech workers have historically been monumentally uninterested in unionization, and it's not hard to see why. Why go to all those meetings and pay those dues when you could tell your boss to go to hell on Tuesday and have a new job by Wednesday? That's not the case anymore. It will likely never be the case again.

Interest in tech unions is at an all-time high. Groups such as Tech Solidarity and the Tech Workers Coalition are doing a land-office business, and copies of Ethan Marcotte's You Deserve a Tech Union are flying off the shelves. Now is the time to get organized. Your boss has made it clear how you'd be treated if they had their way. They're about to get it.

Thanks to long-time Slashdot reader theodp for sharing the article.

Games

Video Game Union Workers Rally Against $55 Billion Saudi-Backed Private Acquisition of EA (eurogamer.net) 36

EA employees and the Communications Workers of America union have condemned the company's proposed $55 billion private acquisition -- backed by Saudi Arabia's Public Investment Fund and Jared Kushner's Affinity Partners -- claiming they were not represented in the negotiations and that any jobs lost as a result would "be a choice, not a necessity, made to pad investors' pockets," reports Eurogamer. From the report: Following the announcement, there's been plenty of speculation around the future of EA and its multiple owned studios, split between EA Sports and EA Entertainment. Now, members of the United Videogame Workers union and the CWA have issued a formal response alongside a petition for regulators to scrutinize the deal. "EA is not a struggling company," the statement reads. "With annual revenues reaching $7.5 billion and $1 billion in profit each year, EA is one of the largest video game developers and publishers in the world."

This success has been driven by company workers, the union stated. "Yet we, the very people who will be jeopardized as a result of this deal, were not represented at all when this buyout was negotiated or discussed." Citing the number of layoffs across the industry since 2022, workers fear for "the future of our studios that are arbitrarily deemed 'less profitable' but whose contributions to the video game industry define EA's reputation." "If jobs are lost or studios are closed due to this deal, that would be a choice, not a necessity, made to pad investors' pockets - not to strengthen the company," the statement reads.

"Every time private equity or billionaire investors take a studio private, workers lose visibility, transparency, and power," it continues. "Decisions that shape our jobs, our art, and our futures are made behind closed doors by executives who have never written a line of code, built worlds, or supported live services. We are calling on regulators and elected officials to scrutinize this deal and ensure that any path forward protects jobs, preserves creative freedom, and keeps decision-making accountable to the workers who make EA successful." As such, workers have launched a petition in a "fight to make video games better for workers and players -- not billionaires". The statement concludes: "The value of video games is in their workers. As a unified voice, we, the members of the industry-wide video game workers' union UVW-CWA, are standing together and refusing to let corporate greed decide the future of our industry."

AI

Hollywood Demands Copyright Guardrails from Sora 2 - While Users Complain That's Less Fun (yahoo.com) 56

Enthusiasm for Sora 2 "wasn't shared in Hollywood," reports the Los Angeles Times, "where the new AI tools have created a swift backlash" that "appears to be only just the beginning of a bruising legal fight that could shape the future of AI use in the entertainment business." [OpenAI] executives went on a charm offensive last year. They reached out to key players in the entertainment industry — including Walt Disney Co. — about potential areas for collaboration and trying to assuage concerns about its technology. This year, the San Francisco-based AI startup took a more assertive approach. Before unveiling Sora 2 to the general public, OpenAI executives had conversations with some studios and talent agencies, putting them on notice that they need to explicitly declare which pieces of intellectual property — including licensed characters — were being opted-out of having their likeness depicted on the AI platform, according to two sources familiar with the matter who were not authorized to comment. Actors would be included in Sora 2 unless they opted out, the people said. OpenAI disputes the claim and says that it was always the company's intent to give actors and other public figures control over how their likeness is used.

The response was immediate.... [Big talent agencies objected, along with performers' unions and major studios.] "Decades of enforceable copyright law establishes that content owners do not need to 'opt out' to prevent infringing uses of their protected IP," Warner Bros. Discovery said in a statement... The strong pushback from the creative community could be a strategy to force OpenAI into entering licensing agreements for the content they need, legal experts said... One challenge is figuring out a way that fairly compensates talent and rights holders. Several people who work within the entertainment industry ecosystem said they don't believe a flat fee works.

Meanwhile, "the complete copyright-free-for-all approach that OpenAI took to its new AI video generation model, Sora 2, lasted all of one week," writes Gizmodo. But that means the service has "now pissed off its users." As 404 Media pointed out, social channels like Twitter and Reddit are now flooded with Sora users who are angry they can't make 10-second clips featuring their favorite characters anymore. One user in the OpenAI subreddit said that being able to play with copyrighted material was "the only reason this app was so fun."

Futurism published more reactions, including "It's official, Sora 2 is completely boring and useless with these copyright restrictions." Others accused OpenAI of abusing copyright to hype up its new app. "This is just classic OpenAI at this point," another user wrote. "They do this s*** all the time. Let people have fun for a day or two and then just start censoring like crazy." The app now has a measly 2.9-star rating on the App Store, indicative of growing disillusionment and frustration with censorship... [It's now dropped to 2.8.]

In an apparent effort to save face, Altman claimed this week that many copyright holders are actually begging to have their characters appear on Sora, instead of complaining about the trend. "In the case of Sora, we've heard from a lot of concerned rightsholders and also a lot of rightsholders who are like 'My concern is you won't put my character in enough,'" he told the a16z podcast earlier this week. "So I can completely see a world where subject to the decisions that a rightsholder has, they get more upset with us for not generating their character often enough than too much," he added. Whether most rightsholders would agree with that sentiment remains to be seen.

Business Insider offers another reaction. After watching Sora 2's main public feed, they write that Sora 2 "seems to be overrun with teenage boys."

AI

DC Comics Won't Support Generative AI: 'Not Now, Not Ever' (theverge.com) 31

An anonymous reader shares a report: DC Comics president and publisher Jim Lee said that the company "will not support AI-generated storytelling or artwork," assuring fans that its future will remain rooted in human creativity. "Not now, not ever, as long as [SVP, general manager] Anne DePies and I are in charge," Lee said during his panel at New York Comic Con on Wednesday, likening concerns around AI dominating future creative industries to the Millennium bug scare and NFT hype.

"People have an instinctive reaction to what feels authentic. We recoil from what feels fake. That's why human creativity matters," said Lee. "AI doesn't dream. It doesn't feel. It doesn't make art. It aggregates it."

The Almighty Buck

Irish Basic Income Support Scheme For Artists To Be Made Permanent (www.rte.ie) 144

AmiMoJo writes: The Irish Government's basic income scheme for artists is set to become a permanent fixture from next year, with 2,000 new places to be made available under Budget 2026. Minister for Culture Patrick O'Donovan has secured agreement with other government departments to continue and expand the initiative, which had previously operated on a pilot basis. Participants in the scheme receive a weekly payment of €325 (about $380).

The pilot programme, launched in 2022, provided basic income support to 2,000 artists and creative arts workers across Ireland. It aimed to support the arts sector's recovery following the COVID-19 pandemic, during which many artists experienced significant income loss due to restrictions on live performances and events. The scheme provides unconditional, regular payments to eligible artists and creative workers, allowing them to focus on their practice without the pressure of commercial viability. It is not means-tested and operates independently of social welfare payments. An independent evaluation of the pilot, published earlier this year, found that recipients reported increased time spent on creative work, reduced financial stress, and improved well-being.

Security

Mouse Sensors Can Pick Up Speech From Surface Vibrations, Researchers Show (tomshardware.com) 40

"A group of researchers from the University of California, Irvine, have developed a way to use the sensors in high-quality optical mice to capture subtle vibrations and convert them into audible data," reports Tom's Hardware: [T]he high polling rate and sensitivity of high-performance optical mice pick up acoustic vibrations from the surface where they sit. By running the raw data through signal processing and machine learning techniques, the team could hear what the user was saying through their desk. Mouse sensors with 20,000 DPI or higher are vulnerable to this attack. And with the best gaming mice becoming more affordable annually, even relatively affordable peripherals are at risk....

[T]his compromise does not necessarily mean a complicated virus installed through a backdoor -- it can be as simple as an infected piece of free and open source software (FOSS) that legitimately requires high-frequency mouse data, like creative apps or video games. This means it's not unusual for the software to gather this data. From there, the collected raw data can be extracted from the target computer and processed off-site. "With only a vulnerable mouse, and a victim's computer running compromised or even benign software (in the case of a web-based attack surface), we show that it is possible to collect mouse packet data and extract audio waveforms," the researchers state.

The researchers created a video with raw audio samples from various stages in their pipeline on an accompanying web site where they calculate that "the majority of human speech" falls in a frequency range detectable by their pipeline. While the collected signal "is low-quality and suffers from non-uniform sampling, a non-linear frequency response, and extreme quantization," the researchers augment it with "successive signal processing and machine learning techniques to overcome these challenges and achieve intelligible reconstruction of user speech."
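The first challenge the researchers describe is that mouse packets arrive at irregular intervals, so the displacement stream must be resampled onto a uniform grid before any audio-style analysis can be applied. Below is a minimal sketch of what that first stage might look like, assuming simple linear interpolation; the function name, sample rate, and simulated packet jitter are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def resample_uniform(timestamps, displacements, rate_hz=8000):
    """Linearly interpolate non-uniformly timed samples onto a uniform grid."""
    t_uniform = np.arange(timestamps[0], timestamps[-1], 1.0 / rate_hz)
    return t_uniform, np.interp(t_uniform, timestamps, displacements)

# Simulate a 200 Hz surface vibration sampled at jittery ~8 kHz intervals,
# standing in for y-displacement values from high-polling-rate mouse packets.
rng = np.random.default_rng(0)
t = np.cumsum(rng.uniform(0.8, 1.2, 4000)) / 8000.0  # non-uniform timestamps
x = np.sin(2 * np.pi * 200.0 * t)                    # "vibration" signal

t_u, x_u = resample_uniform(t, x)

# After resampling, the dominant frequency is recoverable with an FFT.
spectrum = np.abs(np.fft.rfft(x_u * np.hanning(len(x_u))))
freqs = np.fft.rfftfreq(len(x_u), 1.0 / 8000)
peak = freqs[np.argmax(spectrum)]  # close to 200 Hz
```

Linear interpolation is only the naive baseline; the paper's point is that the raw signal is so degraded (non-uniform sampling, non-linear frequency response, coarse quantization) that machine learning is needed on top of steps like this to make speech intelligible.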

They've titled their paper Invisible Ears at Your Fingertips: Acoustic Eavesdropping via Mouse Sensors. The paper's conclusion? "The increasing precision of optical mouse sensors has enhanced user interface performance but also made them vulnerable to side-channel attacks exploiting their sensitivity."

Thanks to Slashdot reader jjslash for sharing the article.

Games

Saudi Takeover of EA in $55 Billion Deal Raises Serious Concerns (nerds.xyz) 67

BrianFagioli writes: Electronic Arts has agreed to a $55 billion buyout by Saudi Arabia's Public Investment Fund (PIF), private equity firm Silver Lake, and Jared Kushner's Affinity Partners, marking the largest all-cash sponsor take-private deal ever. Shareholders will receive $210 per share, a 25 percent premium over EA's unaffected price, and once the transaction closes the company will be delisted from public markets. EA CEO Andrew Wilson will remain in charge, with the group arguing that private ownership will allow the publisher to innovate faster and expand its global footprint.

The deal, however, is already sparking controversy. PIF, a sovereign wealth fund controlled by the Saudi government, will effectively gain control of one of the most influential names in gaming. While investors stand to profit, many gamers and industry watchers are concerned about how Saudi ownership could shape EA's creative direction, monetization strategies, and role in esports. With regulatory approvals still pending, the takeover raises difficult questions about the intersection of gaming, politics, and global soft power.

Music

Spotify Announces New AI Safeguards, Says It's Removed 75 Million 'Spammy' Tracks 18

Spotify says it has removed over 75 million fraudulent tracks in the past year as it works to combat "AI slop," deepfake impersonations, and spam uploads. Variety reports: Its new protections include a policy to police unauthorized vocal impersonation ("deepfakes") and fraudulent music uploaded to artists' official profiles, and an enhanced spam filter to prevent mass uploads, duplicates, SEO hacks, and artificially short tracks designed to fraudulently boost streaming numbers and payments. The company also says it's collaborating with industry partners to devise an industry standard in a song's credits to "clearly indicate where and how AI played a role in the creation of a track."

"The pace of recent advances in generative AI technology has felt quick and at times unsettling, especially for creatives," the company writes in a just-published post on its official blog. "At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push 'slop' into the ecosystem, and interfere with authentic artists working to build their careers. The future of the music industry is being written, and we believe that aggressively protecting against the worst parts of Gen AI is essential to enabling its potential for artists and producers."

In a press briefing on Wednesday, Spotify VP and Global Head of Music Product Charlie Hellman said, "I want to be clear about one thing: We're not here to punish artists for using AI authentically and responsibly. We hope these tools will enable them to be more creative than ever. But we are here to stop the bad actors who are gaming the system. And we can only benefit from all that good side if we aggressively protect against the bad side."

The Internet

Cloudflare To Launch Stablecoin for AI-Driven Internet Economy (nerds.xyz) 21

Cloudflare announced plans Thursday to launch NET Dollar, a U.S. dollar-backed stablecoin designed to enable autonomous AI agents to conduct instant financial transactions. The company says the stablecoin will support microtransactions and pay-per-use models as AI agents take over tasks like booking flights and ordering groceries. BrianFagioli comments: A U.S. dollar-backed cryptocurrency from Cloudflare feels unusual to me, and I'm still surprised by it. The decision shows just how much the Internet is shifting in response to artificial intelligence.

CEO Matthew Prince said, "For decades, the business model of the Internet ran on ad platforms and bank transfers. The Internet's next business model will be powered by pay-per-use, fractional payments, and microtransactions -- tools that shift incentives toward original, creative content that actually adds value." He added that by using its global network, Cloudflare aims to "help modernize the financial rails needed to move money at the speed of the Internet."

China

Horror Film's Wedding Scene Digitally Altered for Chinese Audiences (theguardian.com) 47

Australian horror film Together, starring Dave Franco and Alison Brie, underwent digital alterations for its mainland China release on September 12. Chinese cinemagoers discovered that a wedding scene between two men had been modified using face-swapping technology to transform one male character into a female appearance. The change only became apparent after side-by-side screenshots from the original and altered versions circulated on social media platforms.

Chinese viewers are expressing outrage over the AI-powered modification, The Guardian reports, citing concerns about creative integrity and the difficulty of detecting such alterations compared to traditional scene cuts. The film's distributor halted the scheduled September 19 general release following the backlash. China's censorship authorities require all imported films to undergo approval before release.

AI

AI-Generated 'Workslop' Is Destroying Productivity (hbr.org) 48

40% of U.S. employees have received "workslop" -- AI-generated content that appears polished but lacks substance -- in the past month, according to research from BetterUp Labs and Stanford Social Media Lab. The survey of 1,150 full-time workers found recipients spend an average of one hour and 56 minutes addressing each incident of workslop, costing organizations an estimated $186 per employee monthly. For a 10,000-person company, lost productivity totals over $9 million annually.

Professional services and technology sectors are disproportionately affected. Workers report that 15.4% of received content qualifies as workslop. The phenomenon occurs primarily between peers at 40%, though 18% flows from direct reports to managers and 16% moves down the hierarchy. Beyond financial costs, workslop damages workplace relationships -- half of recipients view senders as less creative, capable, and reliable, while 42% see them as less trustworthy.
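The per-employee and company-wide figures reconcile if the $186 monthly cost is counted per workslop recipient rather than per employee; the summary does not state this accounting explicitly, so the following back-of-the-envelope check is a sketch under that assumption.

```python
# Sanity-check the workslop cost figures, assuming the $186/month cost
# applies only to the ~40% of employees who actually receive workslop.
employees = 10_000
recipients = employees * 40 // 100       # 40% of workers received workslop
monthly_cost_per_recipient = 186         # dollars, per the survey

annual_cost = recipients * monthly_cost_per_recipient * 12
# roughly $8.9 million, in line with the quoted "over $9 million" annually
```

If the $186 instead applied to every employee, the annual figure for a 10,000-person company would be over $22 million, which does not match the quoted total, so the per-recipient reading is the more consistent one.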

AI

Hundreds of Google AI Workers Were Fired Amid Fight Over Working Conditions (theguardian.com) 48

Last week the Guardian reported on "thousands of AI workers contracted for Google through Japanese conglomerate Hitachi's GlobalLogic to rate and moderate the output of Google's AI products, including its flagship chatbot Gemini... and its summaries of search results, AI Overviews." "AI isn't magic; it's a pyramid scheme of human labor," said Adio Dinika, a researcher at the Distributed AI Research Institute based in Bremen, Germany. "These raters are the middle rung: invisible, essential and expendable...." Ten of Google's AI trainers the Guardian spoke to said they have grown disillusioned with their jobs because they work in silos, face tighter and tighter deadlines, and feel they are putting out a product that's not safe for users... In May 2023, a contract worker for Appen submitted a letter to the US Congress warning that the pace imposed on him and others would make Google Bard, Gemini's predecessor, a "faulty" and "dangerous" product.

This week Google laid off 200 of those moderating contractors, reports Wired. "These workers, who often are hired because of their specialist knowledge, had to have either a master's or a PhD to join the super rater program, and typically include writers, teachers, and people from creative fields." Workers still at the company claim they are increasingly concerned that they are being set up to replace themselves. According to internal documents viewed by WIRED, GlobalLogic seems to be using these human raters to train the Google AI system that could automatically rate the responses, with the aim of replacing them with AI. At the same time, the company is also finding ways to get rid of current employees as it continues to hire new workers. In July, GlobalLogic made it mandatory for its workers in Austin, Texas, to return to office, according to a notice seen by WIRED...

Some contractors attempted to unionize earlier this year but claim those efforts were quashed. Now they allege that the company has retaliated against them. Two workers have filed a complaint with the National Labor Relations Board, alleging they were unfairly fired, one due to bringing up wage transparency issues, and the other for advocating for himself and his coworkers. "These individuals are employees of GlobalLogic or their subcontractors, not Alphabet," Courtenay Mencini, a Google spokesperson, said in a statement...

"Globally, other AI contract workers are fighting back and organizing for better treatment and pay," the article points out, noting that content moderators from around the world facing similar issues formed the Global Trade Union Alliance of Content Moderators which includes workers from Kenya, Turkey, and Colombia.

Thanks to long-time Slashdot reader mspohr for sharing the news.

AI

ChatGPT Will Guess Your Age and Might Require ID For Age Verification 111

OpenAI is rolling out stricter safety measures for ChatGPT after lawsuits linked the chatbot to multiple suicides. "ChatGPT will now attempt to guess a user's age, and in some cases might require users to share an ID in order to verify that they are at least 18 years old," reports 404 Media. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff," the company said in its announcement. "I don't expect that everyone will agree with these tradeoffs, but given the conflict it is important to explain our decisionmaking," OpenAI CEO Sam Altman said on X. From the report: OpenAI introduced parental controls to ChatGPT earlier in September, but has now introduced new, more strict and invasive security measures. In addition to attempting to guess or verify a user's age, ChatGPT will now also apply different rules to teens who are using the chatbot. "For example, ChatGPT will be trained not to do the above-mentioned flirtatious talk if asked, or engage in discussions about suicide or self-harm even in a creative writing setting," the announcement said. "And, if an under-18 user is having suicidal ideation, we will attempt to contact the user's parents and if unable, will contact the authorities in case of imminent harm."

OpenAI's post explains that it is struggling to manage an inherent problem with large language models that 404 Media has tracked for several years. ChatGPT used to be a far more restricted chatbot that would refuse to engage users on a wide variety of issues the company deemed dangerous or inappropriate. Competition from other models, especially locally hosted and so-called "uncensored" models, and a political shift to the right which sees many forms of content moderation as censorship, has caused OpenAI to loosen those restrictions.

"We want users to be able to use our tools in the way that they want, within very broad bounds of safety," OpenAI said in its announcement. The position it seems to have landed on, given these recent stories about teen suicide, is summed up in the announcement itself: "'Treat our adult users like adults' is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else's freedom."
AI

OpenAI's First Study On ChatGPT Usage (arstechnica.com) 20

An anonymous reader quotes a report from Ars Technica: Today, OpenAI's Economic Research Team went a long way toward answering that question, on a population level, releasing a first-of-its-kind National Bureau of Economic Research working paper (in association with Harvard economist David Deming) detailing how people end up using ChatGPT across time and tasks. While other research has sought to estimate this kind of usage data using self-reported surveys, this is the first such paper with direct access to OpenAI's internal user data. As such, it gives us an unprecedented direct window into reliable usage stats for what is still the most popular application of LLMs by far. Here are the seven most interesting and surprising findings from the 65-page study:

1. ChatGPT is now used by "nearly 10% of the world's adult population," up from 100 million users in early 2024 to over 700 million users in 2025. At 2.6 billion messages per day, daily traffic is about one-fifth of Google's.

2. Long-term users' daily activity has plateaued since June 2025. Almost all recent growth comes from new sign-ups experimenting with ChatGPT, not from established users increasing their usage.

3. 46% of users are aged 18-25, making ChatGPT especially popular among the youngest adult cohort. Factoring in under-18 users (not counted in the study), the majority of ChatGPT users likely weren't alive in the 20th century.

4. At launch in 2022, ChatGPT's user base was 80% male. By late 2025, the balance had shifted: 52.4% of users are now female.

5. In 2024, work vs. personal use was close to even. By mid-2025, 72% of usage is non-work related -- people are using ChatGPT more for personal, creative, and casual needs than for productivity.

6. 28% of all conversations involve writing assistance (emails, edits, translations). For work-related queries, that jumps to 42% overall, and 52% among business/management jobs. Furthermore, the report found that editing and critiquing text is more common than generating text from scratch.

7. 14.9% of work-related usage involves "making decisions and solving problems." This shows people don't just use ChatGPT to do tasks -- they use it as an advisor or co-pilot to help weigh options and guide choices.
Music

Spotify Peeved After 10,000 Users Sold Data To Build AI Tools (arstechnica.com) 17

An anonymous reader quotes a report from Ars Technica: For millions of Spotify users, the "Wrapped" feature -- which crunches the numbers on their annual listening habits -- is a highlight of every year's end, ever since it debuted in 2015. NPR once broke down exactly why our brains find the feature so "irresistible," while Cosmopolitan last year declared that sharing Wrapped screenshots of top artists and songs had by now become "the ultimate status symbol" for tens of millions of music fans. It's no surprise then that, after a decade, some Spotify users who are especially eager to see Wrapped evolve are no longer willing to wait to see if Spotify will ever deliver the more creative streaming insights they crave.

With the help of AI, these users expect that their data can be more quickly analyzed to potentially uncover overlooked or never-considered patterns that could offer even more insights into what their listening habits say about them. Imagine, for example, accessing a music recap that encapsulates a user's full listening history -- not just their top songs and artists. With that unlocked, users could track emotional patterns, analyzing how their music tastes reflected their moods over time and perhaps helping them adjust their listening habits to better cope with stress or major life events. And for users particularly intrigued by their own data, there's even the potential to use AI to cross data streams from different platforms and perhaps understand even more about how their music choices impact their lives and tastes more broadly.

Likely just as appealing as gleaning deeper personal insights, though, users could also potentially build AI tools to compare listening habits with their friends. That could lead to nearly endless fun for the most invested music fans, where AI could be tapped to assess all kinds of random data points, like whose breakup playlists are more intense or who really spends the most time listening to a shared favorite artist. In pursuit of supporting developers offering novel insights like these, more than 18,000 Spotify users have joined "Unwrapped," a collective launched in February that allows them to pool and monetize their data.

Voting as a group through the decentralized data platform Vana -- which Wired profiled earlier this year -- these users can elect to sell their dataset to developers who are building AI tools offering fresh ways for users to analyze streaming data in ways that Spotify likely couldn't or wouldn't. In June, the group made its first sale, with 99.5 percent of members voting yes. Vana co-founder Anna Kazlauskas told Ars that the collective -- at the time about 10,000 members strong -- sold a "small portion" of its data (users' artist preferences) for $55,000 to Solo AI. While each Spotify user only earned about $5 in cryptocurrency tokens -- which Kazlauskas suggested was not "ideal," wishing the users had earned about "a hundred times" more -- she said the deal was "meaningful" in showing Spotify users that their data "is actually worth something."

Spotify responded to the collective by citing both trademark and policy violations. The company sent a letter to Unwrapped developers, warning that the project's name may infringe on Spotify's Wrapped branding and that Unwrapped breaches its developer terms. Specifically, Spotify objects to Unwrapped's use of platform data for AI/ML training and its facilitation of user data sales.

"Spotify honors our users' privacy rights, including the right of portability," Spotify's spokesperson said. "All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties."

Unwrapped says it plans to defend users' right to "access, control, and benefit from their own data," while providing reassurances that it will "respect Spotify's position as a global music leader."
