AI

More Than Half of New Articles On the Internet Are Being Written By AI 61

An anonymous reader quotes a report from The Conversation: The line between human and machine authorship is blurring, as it becomes increasingly difficult to tell whether something was written by a person or by AI. Now, in what may seem like a tipping point, the digital marketing firm Graphite recently published a study showing that more than 50% of articles on the web are being generated by artificial intelligence. [...]

It's important to clarify what's meant by "online content," the phrase used in the Graphite study, which analyzed over 65,000 randomly selected articles of at least 100 words on the web. These can include anything from peer-reviewed research to promotional copy for miracle supplements. A closer reading of the Graphite study shows that the AI-generated articles consist largely of general-interest writing: news updates, how-to guides, lifestyle posts, reviews and product explainers.

The primary economic purpose of this content is to persuade or inform, not to express originality or creativity. Put differently, AI appears to be most useful when the writing in question is low-stakes and formulaic: the weekend-in-Rome listicle, the standard cover letter, the text produced to market a business. A whole industry of writers -- mostly freelance, including many translators -- has relied on precisely this kind of work, producing blog posts, how-to material, search engine optimization text and social media copy. The rapid adoption of large language models has already displaced many of the gigs that once sustained them.

The dramatic loss of this work points toward another issue raised by the Graphite study: the question of authenticity, not only in identifying who or what produced a text, but also in understanding the value that humans attach to creative activity. How can you distinguish a human-written article from a machine-generated one? And does that ability even matter? Over time, that distinction is likely to grow less significant, particularly as more writing emerges from interactions between humans and AI...
"If you set aside the more apocalyptic scenarios and assume that AI will continue to advance -- perhaps at a slower pace than in the recent past -- it's quite possible that thoughtful, original, human-generated writing will become even more valuable," writes author Francesco Agnellini, in closing.

"Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans."
Privacy

Google Maps Will Let You Hide Your Identity When Writing Reviews (pcmag.com) 37

An anonymous reader quotes a report from PCMag: Four new features are coming to Google Maps, including a way to hide your identity in reviews. Maps will soon let you use a nickname and select an alternative profile picture for online reviews, so you can rate a business without linking the review to your full name and Google profile photo. Google says it will monitor for "suspicious and fake reviews," and every review is still associated with an account on Google's backend, which it believes will discourage bad actors.

Look for a new option under Your Profile that says Use a custom name & picture for posting. You'll then be able to pick an illustration to represent you and add a nickname. Google didn't explain why it is introducing anonymous reviews; it pitched the idea as a way to be a business's "Secret Santa." Some users are nervous about publicly posting reviews for local businesses, since reviews can be used to track their location or movements. The change may encourage more people to contribute honest feedback to the platform, for better or worse.
Further reading: Gemini AI To Transform Google Maps Into a More Conversational Experience
Google

Singapore Orders Apple, Google To Prevent Government Spoofing on Messaging Platforms (reuters.com) 8

An anonymous reader shares a report: Singapore's police have ordered Apple and Google to prevent the spoofing of government agencies on their messaging platforms, the home affairs ministry said on Tuesday. The order under the nation's Online Criminal Harms Act came after the police observed scams on Apple's iMessage and Google Messages purporting to be from companies such as the local postal service SingPost. While government agencies have registered with a local SMS registry so only they can send messages with the "gov.sg" name, this does not currently apply to the iMessage and Google Messages platforms.
Security

Hacker Conference Installed a Literal Antivirus Monitoring System (wired.com) 49

An anonymous reader quotes a report from Wired: Hacker conferences -- like all conventions -- are notorious for giving attendees a parting gift of mystery illness. To combat "con crud," New Zealand's premier hacker conference, Kawaiicon, quietly launched a real-time, room-by-room carbon dioxide monitoring system for attendees. To get the system up and running, event organizers installed DIY CO2 monitors throughout the Michael Fowler Centre venue before conference doors opened on November 6. Attendees were able to check a public online dashboard for clean air readings for session rooms, kids' areas, the front desk, and more, all before even showing up. "It's ALMOST like we are all nerds in a risk-based industry," the organizers wrote on the convention's website. "What they did is fantastic," Jeff Moss, founder of the Defcon and Black Hat security conferences, told WIRED. "CO2 is being used as an approximation for so many things, but there are no easy, inexpensive network monitoring solutions available. Kawaiicon building something to do this is the true spirit of hacking." [...]

Kawaiicon's work began one month before the conference. In early October, organizers deployed a small fleet of 13 RGB Matrix Portal Room CO2 Monitors, a DIY ambient carbon dioxide monitor adapted from a project by US electronics and kit company Adafruit Industries. The monitors were connected to an Internet-accessible dashboard with live readings, daily highs and lows, and data history that showed attendees in-room CO2 trends. Kawaiicon tested its CO2 monitors in collaboration with researchers from the University of Otago's public health department. The Michael Fowler Centre is a spectacular blend of Scandinavian brutalism and interior woodwork designed to enhance sound and air, including two grand pou -- carved Māori totems -- next to the main entrance that rise through to the upper foyers. Its cathedral-like acoustics posed a challenge to Kawaiicon's air-hacking crew, which they solved by placing the RGB monitors in stereo. There were two on each level of the Main Auditorium (four total), two in the Renouf session space on level 1, plus monitors in the daycare and Kuracon (kids' hacker conference) areas. To top it off, monitors were placed in the Quiet Room, at the Registration Desk, and in the Green Room.

Kawaiicon's attendees could quickly check the conditions before they arrived and decide how to protect themselves accordingly. At the event, WIRED observed attendees checking CO2 levels on their phones, masking and unmasking in different conference areas, and watching a display of all room readings on a dashboard at the registration desk. In each conference session room, small wall-mounted monitors displayed stoplight colors showing immediate conditions: green for safe, orange for risky, and red for high CO2 levels, the top risk tier. Colorful custom-made Kawaiicon posters by New Zealand artist Pepper Raccoon placed throughout the Michael Fowler Centre displayed a QR code, putting the CO2 dashboard a tap away no matter where attendees were at the conference.
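The stoplight scheme amounts to simple thresholding on a ppm reading. A minimal sketch in Python, with the caveat that the cutoff values below are assumptions drawn from common ventilation guidance (outdoor air is roughly 420 ppm), not Kawaiicon's published configuration:

```python
def co2_status(ppm: int) -> str:
    """Map a CO2 reading (ppm) to a stoplight color.

    Thresholds are illustrative: common guidance treats readings under
    ~800 ppm as well-ventilated and ~1200 ppm and up as poorly
    ventilated. Kawaiicon's actual cutoffs were not published.
    """
    if ppm < 800:
        return "green"   # well-ventilated, lower shared-air risk
    elif ppm < 1200:
        return "orange"  # ventilation degrading, consider masking
    else:
        return "red"     # high CO2: a lot of rebreathed air

# Example dashboard readout for a few (made-up) room readings.
readings = {"Main Auditorium": 650, "Renouf": 980, "Green Room": 1450}
for room, ppm in readings.items():
    print(f"{room}: {ppm} ppm -> {co2_status(ppm)}")
```

The Adafruit build reads a dedicated CO2 sensor and drives an LED matrix; the dashboard and per-room colors are just this classification applied to each sensor's latest reading.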
Resources, parts lists, and assembly guides can be found here.
AI

'We Could've Asked ChatGPT': UK Students Fight Back Over Course Taught By AI (theguardian.com) 55

An anonymous reader shared this report from the Guardian: James and Owen were among 41 students who took a coding module at the University of Staffordshire last year, hoping to change careers through a government-funded apprenticeship programme designed to help them become cybersecurity experts or software engineers. But after a term of AI-generated slides being read, at times, by an AI voiceover, James said he had lost faith in the programme and the people running it, worrying he had "used up two years" of his life on a course that had been done "in the cheapest way possible".

"If we handed in stuff that was AI-generated, we would be kicked out of the uni, but we're being taught by an AI," said James during a confrontation with his lecturer, recorded as part of the course in October 2024. James and other students confronted university officials multiple times about the AI materials. But the university appears to still be using AI-generated materials to teach the course. This year, the university uploaded a policy statement to the course website appearing to justify the use of AI, laying out "a framework for academic professionals leveraging AI automation" in scholarly work and teaching...

For students, AI teaching appears to be less transformative than it is demoralising. In the US, students post negative online reviews about professors who use AI. In the UK, undergraduates have taken to Reddit to complain about their lecturers copying and pasting feedback from ChatGPT or using AI-generated images in courses.

"I feel like a bit of my life was stolen," James told the Guardian (which also quotes an unidentified student saying they felt "robbed of knowledge and enjoyment".) But the article also points out that a survey last year of 3,287 higher-education teaching staff by edtech firm Jisc found that nearly a quarter were using AI tools in their teaching.
Music

Napster Said It Raised $3 Billion From a Mystery Investor. But Now the 'Investor' and 'Money' Are Gone (forbes.com) 41

An anonymous reader shared this report from Forbes: On November 20, at approximately 4 p.m. Eastern time, Napster held an online meeting for its shareholders; an estimated 700 of roughly 1,500 including employees, former employees and individual investors tuned in. That's when its CEO John Acunto told everyone he believed that the never-identified big investor — who the company had insisted put in $3.36 billion at a $12 billion valuation in January, which would have made it one of the year's biggest fundraises — was not going to come through.

In an email sent out shortly after, the company told existing investors that some would get a bigger percentage of the company due to the canceled shares, and went on to describe itself as a "victim of misconduct," adding that it was "assisting law enforcement with their ongoing investigations." As for the promised tender offer, which would have allowed shareholders to cash out, that too was called off. "Since that investor was also behind the potential tender, we also no longer believe that will occur," the company wrote in the email.

At this point it seems unlikely that getting bigger stakes in the business will make any of the investors too happy. The company had been stringing its employees and investors along for nearly a year with ever-changing promises of an impending cash infusion and chances to sell their shares in a tender offer that would change everything. In fact, it was the fourth time since 2022 that they had been told they could soon cash out via a tender offer, and the fourth time the potential deal fell through. Napster spokesperson Gillian Sheldon said certain statements about the fundraise "were made in good faith based on what we understood at the time. We have since uncovered indications of misconduct that suggest the information provided to us then was not accurate."

The article notes America's Department of Justice has launched an investigation (in which Napster is not a target), while the Securities and Exchange Commission has a separate ongoing investigation from 2022 into Napster's scrapped reverse merger.

While Napster announced it had been acquired for $207 million by a tech company named Infinite Reality, Forbes says that company faced "a string of lawsuits from creditors alleging unpaid bills, a federal lawsuit to enforce compliance with an SEC subpoena (now dismissed) and exaggerated claims about the extent of their partnerships with Manchester City Football Club and Google. The company also touted 'top-tier' investors who never directly invested in the firm," along with the anonymous $3 billion investment, which its spokesperson told Forbes in March was in "an Infinite Reality account and is available to us" and which the company was "actively leveraging"...

And by the end, "Napster appears to have been scrambling to raise cash to keep the lights on, working with brokers and investment advisors including a few who had previously gotten into trouble with regulators.... If it turns out that Napster knew the fundraise wasn't happening and it benefited from misrepresenting itself to investors or acquirees, it could face much bigger problems. That's because doing so could be considered securities fraud."
Mozilla

Mozilla Announces 'TABS API' For Developers Building AI Agents (omgubuntu.co.uk) 10

"Fresh from announcing it is building an AI browsing mode in Firefox and laying the groundwork for agentic interactions in the Firefox 145 release, the corp arm of Mozilla is now flexing its AI muscles in the direction of those more likely to care," writes the blog OMG Ubuntu: If you're a developer building AI agents, you can sign up to get early access to Mozilla's TABS API, a "powerful web content extraction and transformation toolkit designed specifically for AI agent builders"... The TABS API enables devs to create agents to automate web interactions, like clicking, scrolling, searching, and submitting forms "just like a human". Real-time feedback and adaptive behaviours will, Mozilla say, offer "full control of the web, without the complexity."

As TABS is not powered by a Mozilla-backed LLM you'll need to connect it to your choice of third-party LLM for any relevant processing... Developers get 1,000 requests monthly on the free tier, which seems reasonable for prototyping personal projects. Complex agentic workloads may require more. Though pricing is yet to be locked in, the TABS API website suggests it'll cost ~$5 per 1000 requests. Paid plans will offer additional features too, like lower latency and, somewhat ironically, CAPTCHA solving so AI can 'prove' it's not a robot on pages gated to prevent automated activities.

Google, OpenAI, and other major AI vendors offer their own agentic APIs. Mozilla is pitching up late, but it plans to play differently. It touts a "strong focus on data minimisation and security", with scraped data treated ephemerally -- i.e., not kept. As a distinction, that matters. AI agents can be given complex online tasks that involve all sorts of personal or sensitive data being fetched and worked with.... If you're minded to build one, perhaps without a motivation to asset-strip the common good, Mozilla's TABS API looks like a solid place to start.

AI

Analyzing 47,000 ChatGPT Conversations Shows Echo Chambers, Sensitive Data - and Unpredictable Medical Advice (yahoo.com) 33

For nearly three years OpenAI has touted ChatGPT as a "revolutionary" (and work-transforming) productivity tool, reports the Washington Post.

But after analyzing 47,000 ChatGPT conversations, the Post found that users "are overwhelmingly turning to the chatbot for advice and companionship, not productivity tasks." The Post analyzed a collection of thousands of publicly shared ChatGPT conversations from June 2024 to August 2025. While ChatGPT conversations are private by default, the conversations analyzed were made public by users who created shareable links to their chats that were later preserved in the Internet Archive and downloaded by The Post. It is possible that some people didn't know their conversations would become publicly preserved online. This unique data gives us a glimpse into an otherwise black box...

Overall, about 10 percent of the chats appeared to show people talking about their emotions, role-playing, or seeking social interactions with the chatbot. Some users shared highly private and sensitive information with the chatbot, such as information about their family in the course of seeking legal advice. People also sent ChatGPT hundreds of unique email addresses and dozens of phone numbers in the conversations... Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said that it appears ChatGPT "is trained to further or deepen the relationship." In some of the conversations analyzed, the chatbot matched users' viewpoints and created a personalized echo chamber, sometimes endorsing falsehoods and conspiracy theories.

Four of ChatGPT's answers about health problems got a failing score from a chair of medicine at the University of California San Francisco, the Post points out. But four other answers earned a perfect score.
HP

HP and Dell Disable HEVC Support Built Into Their Laptops' CPUs (arstechnica.com) 105

An anonymous reader quotes a report from Ars Technica: Some Dell and HP laptop owners have been befuddled by their machines' inability to play HEVC/H.265 content in web browsers, despite their machines' processors having integrated decoding support. Laptops with sixth-generation Intel Core and later processors have built-in hardware support for HEVC decoding and encoding. AMD has made laptop chips supporting the codec since 2015. However, both Dell and HP have disabled this feature on some of their popular business notebooks.

HP discloses this in the data sheets for its affected laptops, which include the HP ProBook 460 G11 [PDF], ProBook 465 G11 [PDF], and EliteBook 665 G11 [PDF]. "Hardware acceleration for CODEC H.265/HEVC (High Efficiency Video Coding) is disabled on this platform," the note reads. Despite this notice, it can still be jarring to see a modern laptop's web browser eternally load videos that play easily in media players.
HP and Dell didn't explain why the companies disabled HEVC hardware decoding on their laptops' processors.

A statement from an HP spokesperson said: "In 2024, HP disabled the HEVC (H.265) codec hardware on select devices, including the 600 Series G11, 400 Series G11, and 200 Series G9 products. Customers requiring the ability to encode or decode HEVC content on one of the impacted models can utilize licensed third-party software solutions that include HEVC support. Check with your preferred video player for HEVC software support."

Dell's media relations team shared a similar statement: "HEVC video playback is available on Dell's premium systems and in select standard models equipped with hardware or software, such as integrated 4K displays, discrete graphics cards, Dolby Vision, or Cyberlink BluRay software. On other standard and base systems, HEVC playback is not included, but users can access HEVC content by purchasing an affordable third-party app from the Microsoft Store. For the best experience with high-resolution content, customers are encouraged to select systems designed for 4K or high-performance needs."
AI

Advocacy Groups Urge Parents To Avoid AI Toys This Holiday Season 32

An anonymous reader quotes a report from the Associated Press: They're cute, even cuddly, and promise learning and companionship -- but artificial intelligence toys are not safe for kids, according to children's and consumer advocacy groups urging parents not to buy them during the holiday season. These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI's ChatGPT, according to an advisory published Thursday by the children's advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.

"The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm," Fairplay said. AI toys, made by companies including Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but disrupt children's relationships and resilience, the group said. "What's different about young children is that their brains are being wired for the first time and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters," said Rachel Franz, director of Fairplay's Young Children Thrive Offline Program. Because of this, she added, the trust young children are placing in these toys can exacerbate the types of harms older children are already experiencing with AI chatbots.

A separate report Thursday by Common Sense Media and psychiatrists at Stanford University's medical school warned teenagers against using popular AI chatbots as therapists. Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for years. They just weren't as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel's talking Hello Barbie doll that it said was recording and analyzing children's conversations. This time, though AI toys are mostly sold online and more popular in Asia than elsewhere, Franz said some have started to appear on store shelves in the U.S. and more could be on the way. "Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products," Franz said.
Last week, consumer advocates at U.S. PIRG called out the trend of buying AI toys in its annual "Trouble in Toyland" report. This year, the organization tested four toys that use AI chatbots. "We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls," the report said.
Games

Roblox Blocks Children From Chatting To Adult Strangers (bbc.com) 52

Roblox is rolling out mandatory facial age-verification for chat features to prevent children from communicating with adult strangers. The platform will restrict chat to verified age groups, expand parental controls, and become the first major gaming platform to require facial age checks for messaging. The BBC reports: Mandatory age checks will be introduced for accounts using chat features, starting in December for Australia, New Zealand and the Netherlands, then the rest of the globe from January. [...] Rani Govender, policy manager for child safety online at the NSPCC, said action had been needed because young people had been exposed to "unacceptable risks" on Roblox, "leaving many vulnerable to harm and online abuse."

The charity welcomed the platform's latest announcement but called on Roblox to "ensure they deliver change for children in practice and prevent adult perpetrators from targeting and manipulating young users." The platform averaged more than 80 million daily players in 2024, about 40% of them under the age of 13. [...]

Matt Kaufman, chief safety officer for Roblox, told a press briefing the age estimation technology is "pretty accurate," claiming the system can estimate ages to within one to two years for users aged between five and 25. Currently it can be used voluntarily by anyone in the world.

Businesses

Adobe Bolsters AI Marketing Tools With $1.9 Billion Semrush Buy (reuters.com) 4

Adobe is buying Semrush for $1.9 billion in a move to supercharge its AI-driven marketing stack. Reuters reports: Semrush designs and develops AI software that helps companies with search engine optimization, social media and digital advertising. The acquisition, expected to close in the first half of next year, would allow Adobe to help marketers better understand how their brands are viewed by online consumers through searches on websites and generative AI bots such as ChatGPT and Gemini. "The price is steep as Semrush isn't a massive revenue engine on its own, so Adobe is likely paying for strategic value. The payoff could be high too if Adobe can quickly turn Semrush's data into monetizable AI products," said Emarketer analyst Grace Harmon.

"While we are positive on Adobe restarting its M&A engine given the success that it has seen with this motion over the years... this deal likely does little to answer the questions revolving around the company's creative cloud business," added William Blair analysts.
The Internet

Europe's Cookie Nightmare is Crumbling (theverge.com) 126

The EU's cookie consent policies have been an annoying and unavoidable part of browsing the web in Europe since their introduction in 2018. But the cookie nightmare is about to crumble thanks to some big proposed changes announced by the European Commission today. From a report: Instead of having to click accept or reject on a cookie pop-up for every website you visit in Europe, the EU is preparing to enforce rules that will allow users to set their preferences for cookies at the browser level. "People can set their privacy preferences centrally -- for example via the browser -- and websites must respect them," says the EU. "This will drastically simplify users' online experience."

This key change is part of a new Digital Package of proposals to simplify the EU's digital rules, and will initially see cookie prompts change to be a simplified yes or no single-click prompt ahead of the "technological solutions" eventually coming to browsers. Websites will be required to respect cookie choices for at least six months, and the EU also wants website owners to not use cookie banners for "harmless uses" like counting website visits, to lessen the amount of pop-ups.
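The Commission has not yet specified how the browser-level "technological solution" will work. The closest existing mechanism is the Global Privacy Control header, which browsers send as `Sec-GPC: 1` when the user opts out. A server honoring a signal of that shape might look like the sketch below; the header name and semantics are borrowed from GPC as a stand-in, not taken from the EU proposal:

```python
# Hypothetical server-side check for a browser-level consent signal.
# Modeled on the existing Global Privacy Control (Sec-GPC) header;
# the EU's eventual mechanism may differ in name and semantics.

def should_set_tracking_cookies(headers: dict) -> bool:
    """Return False when the browser signals a tracking opt-out."""
    # "Sec-GPC: 1" means the user opted out centrally in the browser.
    return headers.get("Sec-GPC") != "1"

print(should_set_tracking_cookies({"Sec-GPC": "1"}))  # opted out -> False
print(should_set_tracking_cookies({}))                # no signal -> True
```

The point of the EU change is that a check like this would replace the per-site consent banner: the site reads one signal instead of asking every visitor.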

Biotech

Man Who Cryogenically Froze Late Wife Sparks Debate By Dating New Partner (bbc.com) 87

A Chinese man who cryogenically preserved his wife after her death has sparked a heated online debate after it emerged he began dating a new partner in 2020. Some argue it's natural for him to move on, while others say he's being selfish or disrespectful to both his late wife and his current partner. The BBC reports: As a sign of his devotion, Gui Junmin decided to freeze his wife Zhan Wenlian's body after she died from lung cancer in 2017, aged 49, making her China's first cryogenically preserved person. But after a November interview revealed he had been dating a different partner since 2020, Chinese social media has been torn over Mr Gui's predicament. While some asked why the 57-year-old didn't just "let go," another commenter remarked he appeared to be "most devoted to himself."

After Zhan Wenlian was given months to live by doctors, Gui Junmin decided to use cryonics - which is scientifically unproven - to preserve her body once she died. Following her death, he signed a 30-year agreement with the Shandong Yinfeng Life Science Research Institute to preserve his wife's frozen body. Since then, Zhan's body has been stored at the institute in a 2,000-litre container of liquid nitrogen at -190C.

Chinese newspaper Southern Weekly revealed that although Mr Gui lived alone for two years after the procedure, in 2020 he began dating again, despite his wife remaining in cryopreservation. He told the newspaper that a severe gout attack which left him unable to move for two days began to change his mind about the benefits of living alone. Soon after, he started seeing his current partner Wang Chunxia, although Mr Gui suggested to the paper the love was only "utilitarian" and that she hadn't "entered" his heart.

Botnet

Microsoft Mitigated the Largest Cloud DDoS Ever Recorded, 15.7 Tbps (securityaffairs.com) 11

An anonymous reader quotes a report from Security Affairs: On October 24, 2025, Azure DDoS Protection detected and mitigated a massive multi-vector attack peaking at 15.72 Tbps and 3.64 billion pps, the largest cloud DDoS ever recorded, aimed at a single Australian endpoint. Azure's global protection network filtered the traffic, keeping services online. The attack came from the Aisuru botnet, a Turbo Mirai-class IoT botnet using compromised home routers and cameras.

The attack used massive UDP floods from more than 500,000 IPs hitting a single public address, with little spoofing and random source ports that made traceback easier. It highlights how attackers are scaling with the internet: faster home fiber and increasingly powerful IoT devices keep pushing DDoS attack sizes higher.
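The two headline figures imply an average packet size, which is a useful sanity check on reports like this:

```python
# Back-of-envelope check of the reported Azure DDoS attack figures.
bits_per_second = 15.72e12      # 15.72 Tbps
packets_per_second = 3.64e9     # 3.64 billion pps

avg_packet_bytes = bits_per_second / packets_per_second / 8
print(f"implied average packet size: {avg_packet_bytes:.0f} bytes")
```

Roughly 540 bytes per packet fits a volumetric UDP flood carrying sizable payloads; a small-packet attack such as a SYN flood would imply closer to 60 bytes per packet at the same packet rate, and far lower total throughput.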
"On October 24, 2025, Azure DDOS Protection automatically detected and mitigated a multi-vector DDoS attack measuring 15.72 Tbps and nearly 3.64 billion packets per second (pps). This was the largest DDoS attack ever observed in the cloud and it targeted a single endpoint in Australia," reads a report published by Microsoft. "The attack originated from Aisuru botnet."

"Attackers are scaling with the internet itself. As fiber-to-the-home speeds rise and IoT devices get more powerful, the baseline for attack size keeps climbing," concludes the post. "As we approach the upcoming holiday season, it is essential to confirm that all internet-facing applications and workloads are adequately protected against DDOS attacks."
The Courts

NetChoice Sues Virginia To Block Its One-Hour Social Media Limit For Kids (theverge.com) 30

NetChoice is suing Virginia to block a new law that limits kids under 16 to one hour of daily social media use unless parents approve more time, arguing the rule violates the First Amendment and introduces serious privacy risks through mandatory age-verification. The Verge reports: In addition to restricting access to legal speech, NetChoice alleges that Virginia's incoming law (SB 854) will require platforms to verify user ages in ways that would pose privacy and security risks. The law requires platforms to use "commercially reasonable methods," which it says include a screen that prompts the user to enter a birth date. However, NetChoice argues that Virginia could go beyond this requirement, citing a post from Governor Youngkin on X, stating "platforms must verify age," potentially referring to stricter methods, like having users submit a government ID or other personal information.

NetChoice, which is backed by tech giants like Meta, Google, Amazon, Reddit, and Discord, alleges that the law puts a burden on minors' ability to engage with or consume speech online. "The First Amendment prohibits the government from placing these types of restrictions on accessing lawful and valuable speech, just in the same way that the government can't tell you how long you could spend reading a book, watching a television program, or consuming a documentary," Paul Taske, the co-director of the NetChoice Litigation Center, tells The Verge.

"Virginia must leave the parenting decisions where they belong: with parents," Taske says. "By asserting that authority for itself, Virginia not only violates its citizens' rights to free speech but also exposes them to increased risk of privacy and security breaches."

The Internet

Global Web Freedoms Tumble (semafor.com) 12

Global internet freedom declined for a 15th consecutive year, according to Freedom House's annual report. Semafor: "Always grim reading," this year's is particularly sobering, Tech Policy Press noted, with the lowest-ever portion of users living in countries categorized as "free." Conditions declined in 27 of the 72 countries assessed, with those in Kenya -- where anti-corruption protests were quelled, in part, by a seven-hour internet shutdown -- deteriorating the most. China and Myanmar tied for least-free, and the US' ranking dropped, while Iceland retained its top spot for the freest digital environment. Bangladesh improved the most. The most consistent trend observed over 15 years, Freedom House noted, is the growing digital influence of state actors: "Online spaces are more manipulated than ever."
AI

How Should the Linux Kernel Handle AI-Generated Contributions? (webpronews.com) 45

Linux kernel maintainers "are grappling with how to integrate AI-generated contributions without compromising the project's integrity," reports WebProNews: The latest push comes from a proposal by Sasha Levin, a prominent kernel developer at NVIDIA, who has outlined guidelines for tool-generated submissions. Posted to the kernel mailing list, these guidelines aim to standardize how AI-assisted patches are handled. According to Phoronix, the v3 iteration of the proposal [posted by Intel engineer Dave Hansen] emphasizes transparency and accountability, requiring developers to disclose AI involvement in their contributions. This move reflects broader industry concerns about the quality and copyright implications of machine-generated code.

Linus Torvalds, the creator of Linux, has weighed in on the debate, advocating for treating AI tools no differently than traditional coding aids. As reported by heise online, Torvalds sees no need for special copyright treatment for AI contributions, stating that they should be viewed as extensions of the developer's work. This perspective aligns with the kernel's pragmatic approach to innovation. The proposal, initially put forward by Levin in July 2025, includes a 'Co-developed-by' tag for AI-assisted patches, ensuring credit and traceability. OSTechNix details how tools like GitHub Copilot and Claude are specifically addressed, with configurations to guide their use in kernel development... ZDNET warns that without official policy, AI could 'creep' into the kernel and cause chaos...
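The proposed disclosure takes the form of an extra trailer line in a patch's commit message, alongside the usual attribution trailers. A sketch of what such a disclosure might look like (the patch subject, names, and exact tag syntax here are illustrative, not taken from an actual kernel submission, and the tag format was still under discussion as of the v3 proposal):

```text
subsystem: fix example bug in example code path

[ ...patch description explaining the change... ]

Signed-off-by: Jane Developer <jane@example.org>
Co-developed-by: Claude <noreply@anthropic.com>
```

One point of debate is that the kernel's existing convention pairs `Co-developed-by` with a `Signed-off-by` from the co-developer, certifying the right to submit the code under the Developer Certificate of Origin, which an AI tool cannot meaningfully provide.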

The New Stack provides insight into how AI is already assisting kernel maintainers with mundane tasks. According to The New Stack, large language models (LLMs) are being used like 'novice interns' for drudgery work, freeing up experienced developers for complex problems... The Linux kernel's approach could set precedents for other open-source projects. With AI integration accelerating, projects like those in the Linux Foundation are watching closely... Recent kernel releases, such as 6.17.7, include performance improvements that indirectly support AI applications, as noted in Linux Compatible.

Programming

Security Researchers Spot 150,000 Function-less npm Packages in Automated 'Token Farming' Scheme (theregister.com) 11

An anonymous reader shared this report from The Register: Yet another supply chain attack has hit the npm registry in what Amazon describes as "one of the largest package flooding incidents in open source registry history" — but with a twist. Instead of injecting credential-stealing code or ransomware into the packages, this one is a token farming campaign.

Amazon Inspector security researchers, using a new detection rule and AI assistance, originally spotted the suspicious npm packages in late October, and, by November 7, the team had flagged thousands. By November 12, they had uncovered more than 150,000 malicious packages across "multiple" developer accounts. These were all linked to a coordinated tea.xyz token farming campaign, we're told. Tea.xyz is a decentralized protocol designed to reward open-source developers for their contributions with the TEA token, a utility asset used within the tea ecosystem for incentives, staking, and governance.

Unlike the spate of package poisoning incidents over recent months, this one didn't inject traditional malware into the open source code. Instead, the miscreants created a self-replicating attack, seeding the packages with code that automatically generates and publishes new packages, thus earning cryptocurrency rewards on the backs of legitimate open source developers. The packages also included tea.yaml files that linked them to attacker-controlled blockchain wallet addresses.
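Amazon has not published its actual detection rule, but the characteristics described above suggest a simple heuristic: a package that ships a tea.yaml funding file yet contains no functional JavaScript is a likely token-farming artifact. A minimal sketch of that idea (the function name and the file-content representation are my own, purely illustrative):

```python
# Heuristic sketch: flag an npm package as likely token-farming spam if it
# ships a tea.yaml funding file but its JavaScript defines no functions
# or meaningful exports. This is NOT Amazon's actual detection rule.
import re

def looks_like_token_farming(files: dict) -> bool:
    """files maps file paths inside the package tarball to their contents."""
    # tea.yaml ties the package to a tea.xyz reward wallet.
    if not any(name.endswith("tea.yaml") for name in files):
        return False
    for name, src in files.items():
        if not name.endswith(".js"):
            continue
        # Any function definition or non-empty export suggests real code.
        if re.search(r"\bfunction\b|=>|module\.exports\s*=\s*\{?\s*\w", src):
            return False
    return True
```

A legitimate package that happens to use tea.xyz would still contain working code and so would not be flagged; only the combination of a reward-wallet file and function-less source trips the heuristic.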

At the moment, Tea tokens have no value, points out CSO Online. "But it is suspected that the threat actors are positioning themselves to receive real cryptocurrency tokens when the Tea Protocol launches its Mainnet, where Tea tokens will have actual monetary value and can be traded..." In an interview on Friday, an executive at software supply chain management provider Sonatype, which first wrote about the campaign in April 2024, told CSO that the package count has now grown to 153,000. "It's unfortunate that the worm isn't under control yet," said Sonatype CTO Brian Fox. And while this payload merely steals tokens, other threat actors are paying attention, he predicted. "I'm sure somebody out there in the world is looking at this massively replicating worm and wondering if they can ride that, not just to get the Tea tokens but to put some actual malware in there, because if it's replicating that fast, why wouldn't you?"

When Sonatype wrote about the campaign just over a year ago, it found a mere 15,000 packages that appeared to come from a single person. With the swollen numbers reported this week, Amazon researchers wrote that it's "one of the largest package flooding incidents in open source registry history, and represents a defining moment in supply chain security...." For now, says Sonatype's Fox, the scheme wastes the time of npm administrators, who are trying to expel over 100,000 packages. But Fox and Amazon point out the scheme could inspire others to take advantage of other reward-based systems for financial gain, or to deliver malware.

After deploying a new detection rule "paired with AI", Amazon's security researchers write, "within days, the system began flagging packages linked to the tea.xyz protocol... By November 7, the researchers flagged thousands of packages and began investigating what appeared to be a coordinated campaign. The next day, after validating the evaluation results and analyzing the patterns, they reached out to OpenSSF to share their findings and coordinate a response."
Their blog post thanks the Open Source Security Foundation (OpenSSF) for rapid collaboration, while calling the incident "a defining moment in supply chain security..."
First Person Shooters (Games)

Sony Killed This Game in 2024. Three Developers Reverse-Engineered It Back to Life (aftermath.site) 19

An anonymous reader shared this post from the gaming news site Aftermath: Concord, Sony Interactive Entertainment and Firewalk Studios' Overwatch-like shooter, was live for just two weeks before it was pulled offline. Though Concord certainly had some dedicated players, it didn't have many — which is why it may be surprising to hear that a group of players are reverse-engineering the game and its servers to bring it back to life.

Publisher Sony removed Concord from stores and digital marketplaces, automatically refunded some purchases, and, later, shut down Firewalk Studios. Two hundred or so people were laid off, and any hopes of Concord's return were dashed. Poor sales -- estimated at under 25,000 copies -- and low player numbers marred the release. Firewalk Studios' game director Ryan Ellis said in a blog post that pieces of the game "resonated with players," but "other aspects of the game and [Concord's] initial launch didn't land the way [Firewalk Studios] intended."

Concord wasn't a bad game; it simply didn't generate enough interest among players. Now, a group of three hobbyist reverse-engineers, who go by real, Red, and gwog online, are trying to make it playable again... "Sometimes there's enough of the server left in the game, that we can 'activate' that code and make the game believe it's a server," Red said. "We do pretty much always need to fill in the gaps though..." Concord used anti-tamper software to keep people from cheating, which also poses a problem for reverse engineering. It's "nearly impossible" to crack, Red said, so the group didn't -- they found an exploit to "forcefully decrypt the game's code" to "restore the game and start working on servers...."

It's not open to the public, but people can sign up for future tests. Even former Firewalk Studios employees have joined the server. They're excited to see Concord come back to life, too, the developers said.

"Friday morning, a video of the playtest was posted to the Concord Reddit page," according to the article. (Though ironically, by Friday night YouTube had removed the video "due to a copyright claim by MarkScan Enforcement.")
