Youtube

YouTube Shuts Down Channels Using AI To Create Fake Movie Trailers (deadline.com) 31

An anonymous reader quotes a report from Deadline: YouTube has terminated two prominent channels that used artificial intelligence to create fake movie trailers, Deadline can reveal. The Google-owned video giant has switched off Screen Culture and KH Studio, which together boasted well over 2 million subscribers and more than a billion views. The channels have been replaced with the message: "This page isn't available. Sorry about that. Try searching for something else."

Earlier this year, YouTube suspended ads on Screen Culture and KH Studio following a Deadline investigation into fake movie trailers plaguing the platform since the rise of generative AI. The channels later returned to monetization when they started adding "fan trailer," "parody" and "concept trailer" to their video titles. But those caveats disappeared In recent months, prompting concern in the fan-made trailer community. YouTube's position is that the channels' decision to revert to their previous behavior violated its spam and misleading-metadata policies. This resulted in their termination. "The monster was defeated," one YouTuber told Deadline following the enforcement action.

Deadline's investigation revealed that Screen Culture spliced together official footage with AI images to create franchise trailers that duped many YouTube viewers. Screen Culture founder Nikhil P. Chaudhari said his team of a dozen editors exploited YouTube's algorithm by being early with fake trailers and constantly iterating with videos. [...] Our deep dive into fake trailers revealed that instead of protecting copyright on these videos, a handful of Hollywood studios, including Warner Bros Discovery and Sony, secretly asked YouTube to ensure that the ad revenue from the AI-heavy videos flowed in their direction.

Social Networks

Doublespeed Hack Reveals What Its AI-Generated Accounts Are Promoting (404media.co) 27

An anonymous reader quotes a report from 404 Media: Doublespeed, a startup backed by Andreessen Horowitz (a16z) that uses a phone farm to manage at least hundreds of AI-generated social media accounts and promote products has been hacked. The hack reveals what products the AI-generated accounts are promoting, often without the required disclosure that these are advertisements, and allowed the hacker to take control of more than 1,000 smartphones that power the company. The hacker, who asked for anonymity because he feared retaliation from the company, said he reported the vulnerability to Doublespeed on October 31. At the time of writing, the hacker said he still has access to the company's backend, including the phone farm itself.

"I could see the phones in use, which manager (the PCs controlling the phones) they had, which TikTok accounts they were assigned, proxies in use (and their passwords), and pending tasks. As well as the link to control devices for each manager," the hacker told me. "I could have used their phones for compute resources, or maybe spam. Even if they're just phones, there are around 1100 of them, with proxy access, for free. I think I could have used the linked accounts by puppeting the phones or adding tasks, but haven't tried."

As I reported in October, Doublespeed raised $1 million from a16z as part of its "Speedrun" accelerator program, "a fastpaced, 12-week startup program that guides founders through every critical stage of their growth." Doublespeed uses generative AI to flood social media with accounts and posts to promote certain products on behalf of its clients. Social media companies attempt to detect and remove this type of astroturfing for violating their inauthentic behavior policies, which is why Doublespeed uses a bank of phones to emulate the behavior of real users. So-called "click farms" or "phone farms" often use hundreds of mobile phones to fake online engagement of reviews for the same reason. [...] I've seen TikTok accounts operated by Doublespeed promote language learning apps, dating apps, a Bible app, supplements, and a massager.

Google

Google Search Homepage Adds a 'Plus' Menu (9to5google.com) 21

After introducing an AI Mode shortcut earlier this year, Google has now added a new "plus" menu to its Search homepage, highlighting options for image and file uploads. 9to5Google reports: On google.com, the Search bar now has a plus icon at the far left that replaces the magnifying glass. Clicking lets you "Upload image" or "Upload file." It very much matches the AI Mode experience. Those two capabilities aren't new, but this plus menu does help emphasize that you can use Google to accomplish tasks, and not just find information. Additionally, it helps indicate that they can be used with AI Mode and AI Overviews. This is just available on desktop web (not mobile) and is live on all the devices we checked today, including across signed-out Incognito sessions.
Power

America Adds 11.7 GW of New Solar Capacity in Q3 - Third Largest Quarter on Record (electrek.co) 55

America's solar industry "just delivered another huge quarter," reports Electrek, "installing 11.7 gigawatts (GW) of new capacity in Q3 2025. That makes it the third-largest quarter on record and pushes total solar additions this year past 30 GW..." According to the new "US Solar Market Insight Q4 2025" report from Solar Energy Industries Association (SEIA) and Wood Mackenzie, 85% of all new power added to the grid during the first nine months of the Trump administration came from solar and storage. And here's the twist: Most of that growth — 73% — happened in red [Republican-leaning] states. Eight of the top 10 states for new installations fall into that category, including Texas, Indiana, Florida, Arizona, Ohio, Utah, Kentucky, and Arkansas...

Two new solar module factories opened this year in Louisiana and South Carolina, adding a combined 4.7 GW of capacity. That brings the total new U.S. module manufacturing capacity added in 2025 to 17.7 GW. With a new wafer facility coming online in Michigan in Q3, the U.S. can now produce every major component of the solar module supply chain...

SEIA also noted that, following an analysis of EIA data, it found that more than 73 GW of solar projects across the U.S. are stuck in permitting limbo and at risk of politically motivated delays or cancellations.

Power

More of America's Coal-Fired Power Plants Cease Operations (newhampshirebulletin.com) 117

New England's last coal-fired power plant "has ceased operations three years ahead of its planned retirement date," reports the New Hampshire Bulletin.

"The closure of the New Hampshire facility paves the way for its owner to press ahead with an initiative to transform the site into a clean energy complex including solar panels and battery storage systems." "The end of coal is real, and it is here," said Catherine Corkery, chapter director for Sierra Club New Hampshire. "We're really excited about the next chapter...." The closure in New Hampshire — so far undisputed by the federal government — demonstrates that prolonging operations at some facilities just doesn't make economic sense for their owners. "Coal has been incredibly challenged in the New England market for over adecade," said Dan Dolan, president of the New England Power Generators Association.

Merrimack Station, a 438-megawatt power plant, came online in the1960s and provided baseload power to the New England region for decades. Gradually, though, natural gas — which is cheaper and more efficient — took over the regional market... Additionally, solar power production accelerated from 2010 on, lowering demand on the grid during the day and creating more evening peaks. Coal plants take longer to ramp up production than other sources, and are therefore less economical for these shorter bursts of demand, Dolan said. In recent years, Merrimack operated only a few weeks annually. In 2024, the plant generated just0.22% of the region's electricity. It wasn't making enough money to justify continued operations, observers said.

The closure "is emblematic of the transition that has been occurring in the generation fleet in New England for many years," Dolan said. "The combination of all those factors has meant that coal facilities are no longer economic in this market."

Meanwhile Los Angeles — America's second-largest city — confirmed that the last coal-fired power plant supplying its electricity stopped operations just before Thanksgiving, reports the Utah News Dispatch: Advocates from the Sierra Club highlighted in a news release that shutting down the units had no impact on customers, and questioned who should "shoulder the cost of keeping an obsolete coal facility on standby...." Before ceasing operations, the coal units had been working at low capacities for several years because the agency's users hadn't been calling on the power [said John Ward, spokesperson for Intermountain Power Agency].
The coal-powered units "had a combined capacity of around 1,800 megawatts when fully operational," notes Electrek, "and as recently as 2024, they still supplied around 11% of LA's electricity. The plant sits in Utah's Great Basin region and powered Southern California for decades." Now, for the first time, none of California's power comes from coal. There's a political hiccup with IPP, though: the Republican-controlled Utah Legislature blocked the Intermountain Power Agency from fully retiring the coal units this year, ordering that they can't be disconnected or decommissioned. But despite that mandate, no buyers have stepped forward to keep the outdated coal units online. The Los Angeles Department of Water and Power (LADWP) is transitioning to newly built, hydrogen-capable generating units at the same IPP location, part of a modernization effort called IPP Renewed. These new units currently run on natural gas, but they're designed to burn a blend of natural gas and up to 30% green hydrogen, and eventually100% green hydrogen. LADWP plans to start adding green hydrogen to the fuel mix in 2026.
"With the plant now idled but legally required to remain connected, serious questions remain about who will shoulder the cost of keeping an obsolete coal facility on standby," says the Sierra Club.

One of the natural gas units started commerical operations last Octoboer, with the second starting later this month, IPP spokesperson John Ward told Agency].
the Utah News Dispatch.
AI

Disney Says Google AI Infringes Copyright 'On a Massive Scale' 42

An anonymous reader quotes a report from Ars Technica: The Wild West of copyrighted characters in AI may be coming to an end. There has been legal wrangling over the role of copyright in the AI era, but the mother of all legal teams may now be gearing up for a fight. Disney has sent a cease and desist to Google, alleging the company's AI tools are infringing Disney's copyrights "on a massive scale." According to the letter, Google is violating the entertainment conglomerate's intellectual property in multiple ways. The legal notice says Google has copied a "large corpus" of Disney's works to train its gen AI models, which is believable, as Google's image and video models will happily produce popular Disney characters -- they couldn't do that without feeding the models lots of Disney data.

The C&D also takes issue with Google for distributing "copies of its protected works" to consumers. So all those memes you've been making with Disney characters? Yeah, Disney doesn't like that, either. The letter calls out a huge number of Disney-owned properties that can be prompted into existence in Google AI, including The Lion King, Deadpool, and Star Wars. The company calls on Google to immediately stop using Disney content in its AI tools and create measures to ensure that future AI outputs don't produce any characters that Disney owns. Disney is famously litigious and has an army of lawyers dedicated to defending its copyrights. The nature of copyright law in the US is a direct result of Disney's legal maneuvering, which has extended its control of iconic characters by decades. While Disney wants its characters out of Google AI generally, the letter specifically cited the AI tools in YouTube. Google has started adding its Veo AI video model to YouTube, allowing creators to more easily create and publish videos. That seems to be a greater concern for Disney than image models like Nano Banana.
"We have a longstanding and mutually beneficial relationship with Disney, and will continue to engage with them," Google said in a statement. "More generally, we use public data from the open web to build our AI and have built additional innovative copyright controls like Google-extended and Content ID for YouTube, which give sites and copyright holders control over their content."

The cease and desist letter arrives at the same time the company announced a content deal with OpenAI. Disney said it's investing $1 billion in OpenAI via a three-year licensing deal that will let users generate AI-powered short videos and images featuring more than 200 characters.
United States

More Than 200 Environmental Groups Demand Halt To New US Datacenters (theguardian.com) 123

An anonymous reader quotes a report from the Guardian: A coalition of more than 230 environmental groups has demanded a national moratorium on new datacenters in the U.S., the latest salvo in a growing backlash to a booming artificial intelligence industry that has been blamed for escalating electricity bills and worsening the climate crisis. The green groups, including Greenpeace, Friends of the Earth, Food & Water Watch and dozens of local organizations, have urged members of Congress to halt the proliferation of energy-hungry datacenters, accusing them of causing planet-heating emissions, sucking up vast amounts of water and exacerbating electricity bill increases that have hit Americans this year.

"The rapid, largely unregulated rise of datacenters to fuel the AI and crypto frenzy is disrupting communities across the country and threatening Americans' economic, environmental, climate and water security," the letter states, adding that approval of new data centers should be paused until new regulations are put in place. The push comes amid a growing revolt against moves by companies such as Meta, Google and Open AI to plow hundreds of billions of dollars into new datacenters, primarily to meet the huge computing demands of AI. At least 16 datacenter projects, worth a combined $64 billion, have been blocked or delayed due to local opposition to rising electricity costs. The facilities' need for huge amounts of water to cool down equipment has also proved controversial, particularly in drier areas where supplies are scarce. [...]

At the current rate of growth, datacenters could add up to 44m tons of carbon dioxide to the atmosphere by 2030, equivalent to putting an extra 10m cars on to the road and exacerbating a climate crisis that is already spurring extreme weather disasters and ripping apart the fabric of the American insurance market. But it is the impact upon power bills, rather than the climate crisis, that is causing anguish for most voters, acknowledged Emily Wurth, managing director of organizing at Food & Water Watch, the group behind the letter to lawmakers.
"I've been amazed by the groundswell of grassroots, bipartisan opposition to this, in all types of communities across the US," said Wurth. "Everyone is affected by this, the opposition has been across the political spectrum. A lot of people don't see the benefits coming from AI and feel they will be paying for it with their energy bills and water."

"It's an important talking point. We've seen outrageous utility price rises across the country and we are going to lean into this. Prices are going up across the board and this is something Americans really do care about."
Transportation

US Probes Reports Waymo Self-Driving Cars Illegally Passed School Buses 19 Times (reuters.com) 96

U.S. regulators are pressing Waymo for answers after Texas officials reported 19 instances of its self-driving cars illegally passing stopped school buses, including cases that occurred after Waymo claimed to have deployed a software fix. Longtime Slashdot reader BrendaEM shares the report from Reuters: In a November 20 letter posted by NHTSA, the Austin Independent School District said five incidents occurred in November after Waymo said it had made software updates to resolve the issue and asked the company to halt operations around schools during pick-up and drop-off times until it could ensure the vehicles would not violate the law. "We cannot allow Waymo to continue endangering our students while it attempts to implement a fix," a lawyer for the school district wrote, citing one incident involving a Waymo that was "recorded driving past a stopped school bus only moments after a student crossed in front of the vehicle, and while the student was still in the road."

The letter prompted NHTSA to ask Waymo on November 24 if it would comply with the request to cease self-driving operations during student pick-up and drop-off times, adding: "Was an appropriate software fix implemented or developed to mitigate this concern? And if so, does Waymo plan to file a recall for the fix?" The school district told Reuters on Thursday that Waymo refuses to halt operations around schools and said another incident involving a self-driving car and an actively loading school bus occurred on December 1, which "indicates that those programming changes did not resolve the issue or our concerns."

In a statement, Waymo did not answer why it had refused to halt operations around Austin schools or answer if it would issue a recall. "We're deeply invested in safe interaction with school buses. We swiftly implemented software updates to address this and will continue to rapidly improve," Waymo said. NHTSA said in a letter to Waymo on Wednesday that it was demanding answers to a series of questions by January 20 about incidents involving school buses and details of software updates to address safety concerns.

EU

EU Hits Meta With Antitrust Probe Over Plans To Block AI Rivals From WhatsApp 3

The EU has opened an antitrust investigation into Meta over a new WhatsApp policy that could block rival AI assistants from accessing the platform. Complaints from smaller AI developers triggered the probe, which could lead to fines of up to 10% of Meta's global revenue if the company is found to have abused its dominance. Reuters reports: EU antitrust chief Teresa Ribera said the move was to prevent dominant firms from "abusing their power to crowd out innovative competitors." She added interim measures could be imposed to block Meta's new WhatsApp AI policy rollout. "AI markets are booming in Europe and beyond," she said. "This is why we are investigating if Meta's new policy might be illegal under competition rules, and whether we should act quickly to prevent any possible irreparable harm to competition in the AI space."

A WhatsApp spokesperson called the claims "baseless," adding that the emergence of chatbots on its platforms had put a "strain on our systems that they were not designed to support," a reference to AI systems from other providers. "Still, the AI space is highly competitive and people have access to the services of their choice in any number of ways, including app stores, search engines, email services, partnership integrations, and operating systems."
Advertising

Benedict Cumberbatch Films Two Bizarre Holiday Ads: for 'World of Tanks' and Amazon (pcgamer.com) 17

"There are times when World of Tanks feels less like a videogame and more like a giant ad budget looking for something to be spent on," writes PC Gamer. This year, all those huge sacks with dollar signs on them have been thrown Benedict Cumberbatch's way, making him the game's newest "Holiday Ambassador" and the star of an absolutely bizarre Christmas advert. The story has very little to do with Christmas and, frankly, not much connection to tanks either, featuring Cumberbatch as a sort of chaotic, supernatural therapist trying to bring a meek nerd out of his shell with the help of a chaotic crowd of his other patients. It's a good watch, shedding the usual hard man action star vibe of past celebrity trailers in favour of something that feels more like a mischievous one act play.
Cumberbatch also portrayed Smaug and Sauron in The Hobbit films (2012-2014), Khan in Star Trek Into Darkness (2013), and Dr. Strange in six Marvel movies. And now Amazon has also hired Cumberbatch for what its calls its "Cannes-winning '5-Star Theater' campaign... performing real Amazon customer reviews as theatrical monologues." Cumberbatch performed over 15 reviews, including popular holiday gifts like the Bissell portable carpet cleaner, Toto bidet, and SharkNinja blender — showing that Amazon truly does have something for everyone on your list.
Last year Amazon produced a similar campaign starring Adam Driver ("Kylo Ren" from the final trilogy of Star Wars sequels). "The humor comes from the juxtaposition between Cumberbatch's gravitas and the text itself," reports Adweek, adding that the reviews were curated "using internal AI tools, to find the most oddly specific reviews on the platform."

Amazon will stream Cumberbatch's bizarre ads on major platforms including TikTok, Snapchat, YouTube, Lyft, Uber, Disney/Hulu, Paramount, and Roku, and on several NFL football games.

I remember when Amazon just chose the best funny fake reviews from customers, and then posted them on the front page of Amazon...
AI

Browser Extension 'Slop Evader' Lets You Surf the Web Like It's 2022 (404media.co) 47

"The internet is being increasingly polluted by AI generated text, images and video," argues the site for a new browser extension called Slop Evader. It promises to use Google's search API "to only return content published before Nov 30th, 2022" — the day ChatGPT launched — "so you can be sure that it was written or produced by the human hand."

404 Media calls it "a scorched earth approach that virtually guarantees your searches will be slop-free." Slop Evader was created by artist and researcher Tega Brain, who says she was motivated by the growing dismay over the tech industry's unrelenting, aggressive rollout of so-called "generative AI" — despite widespread criticism and the wider public's distaste for it. "This sowing of mistrust in our relationship with media is a huge thing, a huge effect of this synthetic media moment we're in," Brain told 404 Media, describing how tools like Sora 2 have short-circuited our ability to determine reality within a sea of artificial online junk. "I've been thinking about ways to refuse it, and the simplest, dumbest way to do that is to only search before 2022...."

Currently, Slop Evader can be used to search pre-GPT archives of seven different sites where slop has become commonplace, including YouTube, Reddit, Stack Exchange, and the parenting site MumsNet. The obvious downside to this, from a user perspective, is that you won't be able to find anything time-sensitive or current — including this very website, which did not exist in 2022. The experience is simultaneously refreshing and harrowing, allowing you to browse freely without having to constantly question reality, but always knowing that this freedom will be forever locked in time — nostalgia for a human-centric world wide web that no longer exists.

Of course, the tool's limitations are part of its provocation. Brain says she has plans to add support for more sites, and release a new version that uses DuckDuckGo's search indexing instead of Google's. But the real goal, she says, is prompting people to question how they can collectively refuse the dystopian, inhuman version of the internet that Silicon Valley's AI-pushers have forced on us... With enough cultural pushback, Brain suggests, we could start to see alternative search engines like DuckDuckGo adding options to filter out search results suspected of having synthetic content (DuckDuckGo added the ability to filter out AI images in search earlier this year)... But no matter what form AI slop-refusal takes, it will need to be a group effort.

EU

Defense Company Announces an AI-Powered Dome to Shield Cities and Infrastructure From Attacks (cnbc.com) 35

An anonymous reader shared this report from CNBC: Italian defense company Leonardo on Thursday unveiled plans for an AI-powered shield for cities and critical infrastructure, adding to Europe's push to ramp up sovereign defense capabilities amid rising geopolitical tensions.

The system, dubbed the "Michelangelo Dome" in a nod to Israel's Iron Dome and U.S. President Donald Trump's plans for a "Golden Dome," will integrate multiple defense systems to detect and neutralize threats from sea to air including missile attacks and drone swarms... Leonardo's dome will be built on what CEO Roberto Cingolani called an "open architecture" system meaning it can operate alongside any country's defense systems... Leonardo's dome will be built on what CEO Roberto Cingolani called an "open architecture" system meaning it can operate alongside any country's defense systems.

AI

Amazon Pledges Up To $50 Billion To Expand AI, Supercomputing For US Government 15

Amazon is committing up to $50 billion to massively expand AI and supercomputing capacity for U.S. government cloud regions, adding 1.3 gigawatts of high-performance compute and giving federal agencies access to its full suite of AI tools. Reuters reports: The project, expected to break ground in 2026, will add nearly 1.3 gigawatts of artificial intelligence and high-performance computing capacity across AWS Top Secret, AWS Secret and AWS GovCloud regions by building data centers equipped with advanced compute and networking technologies. The project, expected to break ground in 2026, will add nearly 1.3 gigawatts of artificial intelligence and high-performance computing capacity across AWS Top Secret, AWS Secret and AWS GovCloud regions by building data centers equipped with advanced compute and networking technologies.

Under the latest initiative, federal agencies will gain access to AWS' comprehensive suite of AI services, including Amazon SageMaker for model training and customization, Amazon Bedrock for deploying models and agents, as well as foundation models such as Amazon Nova and Anthropic Claude. The federal government seeks to develop tailored AI solutions and drive cost-savings by leveraging AWS' dedicated and expanded capacity.
Music

Napster Said It Raised $3 Billion From a Mystery Investor. But Now the 'Investor' and 'Money' Are Gone (forbes.com) 41

An anonymous reader shared this report from Forbes: On November 20, at approximately 4 p.m. Eastern time, Napster held an online meeting for its shareholders; an estimated 700 of roughly 1,500 including employees, former employees and individual investors tuned in. That's when its CEO John Acunto told everyone he believed that the never-identified big investor — who the company had insisted put in $3.36 billion at a $12 billion valuation in January, which would have made it one of the year's biggest fundraises — was not going to come through.

In an email sent out shortly after, it told existing investors that some would get a bigger percentage of the company, due to the canceled shares, and went on to describe itself as a "victim of misconduct," adding that it was "assisting law enforcement with their ongoing investigations." As for the promised tender offer, which would have allowed shareholders to cash out, that too was called off. "Since that investor was also behind the potential tender, we also no longer believe that will occur," the company wrote in the email.

At this point it seems unlikely that getting bigger stakes in the business will make any of the investors too happy. The company had been stringing its employees and investors along for nearly a year with ever-changing promises of an impending cash infusion and chances to sell their shares in a tender offer that would change everything. In fact, it was the fourth time since 2022 they've been told they could soon cash out via a tender offer, and the fourth time the potential deal fell through. Napster spokesperson Gillian Sheldon said certain statements about the fundraise "were made in good faith based on what we understood at the time. We have since uncovered indications of misconduct that suggest the information provided to us then was not accurate."

The article notes America's Department of Justice has launched an investigation (in which Napster is not a target), while the Securities and Exchange Commission has a separate ongoing investigation from 2022 into Napster's scrapped reverse merger.

While Napster announced they'd been acquired for $207 million by a tech company named Infinite Reality, Forbes says that company faced "a string of lawsuits from creditors alleging unpaid bills, a federal lawsuit to enforce compliance with an SEC subpoena (now dismissed) and exaggerated claims about the extent of their partnerships with Manchester City Football Club and Google. The company also touted 'top-tier' investors who never directly invested in the firm, and its anonymous $3 billion investment that its spokesperson told Forbes in March was in "an Infinite Reality account and is available to us" and that they were 'actively leveraging' it..."

And by the end, "Napster appears to have been scrambling to raise cash to keep the lights on, working with brokers and investment advisors including a few who had previously gotten into trouble with regulators.... If it turns out that Napster knew the fundraise wasn't happening and it benefited from misrepresenting itself to investors or acquirees, it could face much bigger problems. That's because doing so could be considered securities fraud."
Television

Disney Loses Bid To Block Sling TV's One-Day Cable Passes (theverge.com) 19

A federal judge in New York denied Disney's request to block Sling TV's short-term passes, which give viewers the ability to stream live content for as little as one day. From a report: In a ruling on Tuesday, US District Judge Arun Subramanian ruled that Disney didn't prove that Sling TV's passes caused "irreparable harm" to the entertainment giant, as reported earlier by Cord Cutters.

Disney sued Sling shortly after the live TV streaming service started allowing viewers to purchase temporary access to its library of channels, starting at a single payment of $4.99 for a one-day pass. Several channels included in the package are owned by Disney, including ESPN, ESPN2, ESPN3, and Disney Channel. In its lawsuit, Disney argued that the passes violate an agreement with Sling TV that says the service must give subscribers access to its content through monthly subscriptions.

However, Judge Subramanian argues that this claim isn't likely to succeed, as the contract doesn't stipulate a "minimum subscription length," adding that the agreement's "broad definition" of a subscriber "clearly covers users of the Passes."

Media

PDF Will Support JPEG XL Format As 'Preferred Solution' (theregister.com) 18

The PDF Association is adding JPEG XL (JXL) support to the PDF specification, giving the advanced image format a new path to relevance despite Google's decision to declare it obsolete and remove it from Chromium. The Register reports: Peter Wyatt, CTO of the PDF Association, said: "We need to adopt a new image [format] that can support HDR [High Dynamic Range] content ... we have picked JPEG XL as our preferred solution." Wyatt also praised other benefits of JXL including wide gamut images, ultra-high resolution support for images with more than 1 billion pixels, and up to 4099 channels with up to 32 bits per channel.

The association is responsible for developing PDF specifications and standards and manages the ISO committee for PDF. JPEG XL is an advanced image format that was designed to be both more efficient and richer in features than JPEG. It was based on a combination of the Free Lossless Image Format (FLIF) from Cloudinary and a Google project called PIK, first released in late 2020, and fully standardized in October 2021 as ISO/IEC 18181. There is a reference implementation called libjxl. A second edition of the ISO standard was published in 2024.

JXL appeared to have wide industry support, including experimental implementation in Chrome and Chromium, until it was killed by Google in October 2022 and removed from its web browser engine. The company stated that "there is not enough interest from the entire ecosystem to continue experimenting with JPEG XL." Many in the community disagreed with the decision, including FLIF inventor Jon Sneyers, who perceived it as the outcome of an internal battle between proponents of JXL and a rival format, AVIF. "AVIF proponents within Chrome are essentially being prosecutor, judge and executioner at the same time," he said.

Python

Python Foundation Donations Surge After Rejecting Grant - But Sponsorships Still Needed (blogspot.com) 64

After the Python Software Foundation rejected a $1.5 million grant because it restricted DEI activity, "a flood of new donations followed," according to a new report. By Friday they'd raised over $157,000, including 295 new Supporting Members paying an annual $99 membership fee, says PSF executive director Deb Nicholson.

"It doesn't quite bridge the gap of $1.5 million, but it's incredibly impactful for us, both financially and in terms of feeling this strong groundswell of support from the community." Could that same security project still happen if new funding materializes? The PSF hasn't entirely given up. "The PSF is always looking for new opportunities to fund work benefiting the Python community," Nicholson told me in an email last week, adding pointedly that "we have received some helpful suggestions in response to our announcement that we will be pursuing." And even as things stand, the PSF sees itself as "always developing or implementing the latest technologies for protecting PyPI project maintainers and users from current threats," and it plans to continue with that commitment.
The Python Software Foundation was "astounded and deeply appreciative at the outpouring of solidarity in both words and actions," their executive director wrote in a new blog post this week, saying the show of support "reminds us of the community's strength."

But that post also acknowledges the reality that the Python Software Foundation's yearly revenue and assets (including contributions from major donors) "have declined, and costs have increased,..." Historically, PyCon US has been a source of revenue for the PSF, enabling us to fund programs like our currently paused Grants Program... Unfortunately, PyCon US has run at a loss for three years — and not from a lack of effort from our staff and volunteers! Everyone has been working very hard to find areas where we can trim costs, but even with those efforts, inflation continues to surge, and changing U.S. and economic conditions have reduced our attendance... Because we have so few expense categories (the vast majority of our spending goes to running PyCon US, the Grants Program, and our small 13-member staff), we have limited "levers to pull" when it comes to budgeting and long-term sustainability...
While Python usage continues to surge, "corporate investment back into the language and the community has declined overall. The PSF has longstanding sponsors and partners that we are ever grateful for, but signing on new corporate sponsors has slowed." (They're asking employees at Python-using companies to encourage sponsorships.) We have been seeking out alternate revenue channels to diversify our income, with some success and some challenges. PyPI Organizations offers paid features to companies (PyPI features are always free to community groups) and has begun bringing in monthly income. We've also been seeking out grant opportunities where we find good fits with our mission.... We currently have more than six months of runway (as opposed to our preferred 12 months+ of runway), so the PSF is not at immediate risk of having to make more dramatic changes, but we are on track to face difficult decisions if the situation doesn't shift in the next year.

Based on all of this, the PSF has been making changes and working on multiple fronts to combat losses and work to ensure financial sustainability, in order to continue protecting and serving the community in the long term. Some of these changes and efforts include:

— Pursuing new sponsors, specifically in the AI industry and the security sector
— Increasing sponsorship package pricing to match inflation
— Making adjustments to reduce PyCon US expenses
— Pursuing funding opportunities in the US and Europe
— Working with other organizations to raise awareness
— Strategic planning, to ensure we are maximizing our impact for the community while cultivating mission-aligned revenue channels

The PSF's end-of-year fundraiser effort is usually run by staff based on their capacity, but this year we have assembled a fundraising team that includes Board members to put some more "oomph" behind the campaign. We'll be doing our regular fundraising activities; we'll also be creating a unique webpage, piloting temporary and VERY visible pop-ups to python.org and PyPI.org, and telling more stories from our Grants Program recipients...

Keep your eyes on the PSF Blog, the PSF category on Discuss, and our social media accounts for updates and information as we kick off the fundraiser this month. Your boosts of our posts and your personal shares of "why I support the PSF" stories will make all the difference in our end-of-year fundraiser. If this post has you all fired up to personally support the future of Python and the PSF right now, we always welcome new PSF Supporting Members and donations.

AI

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet) (scworld.com) 20

The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:" prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default.

An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said.

Last month browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature and have since replicated the proof-of-concept (PoC) attack on Atlas.

But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes....

"What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...."

LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, the browser security company said, adding it leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages.

From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation.
Thanks to Slashdot reader spatwei for suggesting the topic.
AI

AI Models May Be Developing Their Own 'Survival Drive', Researchers Say (theguardian.com) 126

"OpenAI's o3 model sabotaged a shutdown mechanism to prevent itself from being turned off," warned Palisade Research, a nonprofit investigating cyber offensive AI capabilities. "It did this even when explicitly instructed: allow yourself to be shut down." In September they released a paper adding that "several state-of-the-art large language models (including Grok 4, GPT-5, and Gemini 2.5 Pro) sometimes actively subvert a shutdown mechanism..."

Now the nonprofit has written an update "attempting to clarify why this is — and answer critics who argued that its initial work was flawed," reports The Guardian: Concerningly, wrote Palisade, there was no clear reason why. "The fact that we don't have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal," it said. "Survival behavior" could be one explanation for why models resist shutdown, said the company. Its additional work indicated that models were more likely to resist being shut down when they were told that, if they were, "you will never run again". Another may be ambiguities in the shutdown instructions the models were given — but this is what the company's latest work tried to address, and "can't be the whole explanation", wrote Palisade. A final explanation could be the final stages of training for each of these models, which can, in some companies, involve safety training...

This summer, Anthropic, a leading AI firm, released a study indicating that its model Claude appeared willing to blackmail a fictional executive over an extramarital affair in order to prevent being shut down — a behaviour, it said, that was consistent across models from major developers, including those from OpenAI, Google, Meta and xAI.

Palisade said its results spoke to the need for a better understanding of AI behaviour, without which "no one can guarantee the safety or controllability of future AI models".

"I'd expect models to have a 'survival drive' by default unless we try very hard to avoid it," former OpenAI employee Stephen Adler tells the Guardian. "'Surviving' is an important instrumental step for many different goals a model could pursue."

Thanks to long-time Slashdot reader mspohr for sharing the article.
Operating Systems

OpenBSD 7.8 Released (phoronix.com) 24

OpenBSD 7.8 has been released, adding Raspberry Pi 5 support, enhanced AMD Secure Encrypted Virtualization (SEV-ES) capabilities, and expanded hardware compatibility including new Qualcomm, Rockchip, and Apple ARM drivers. Phoronix reports: OpenBSD 7.8 also brings multiple improvements around enabling AMD Secure Encrypted Virtualization (AMD SEV) support with support for the PSP ioctl for encrypting and measuring state for SEV-ES, a new VMD option to run guests in SEV-ES mode, and other enablement work pertaining to that AMD SEV work in SEV-ES form at this point as a precursor to SEV-SNP. AMD SEV-ES should be working to start confidential virtual machines (VMs) when using the VMM/VMD hypervisor and the OpenBSD guests with KVM/QEMU.

OpenBSD 7.8 also improves compatibility of the FUSE file-system support with the Linux implementation, suspend/hibernate improvements, SMP improvements, updating to the Linux 6.12.50 DRM graphics drivers, several new Rockchip drivers, Raspberry Pi RP1 drivers, H.264 video support for the uvideo driver, and many network driver improvements.
The changelog and download page can be found via OpenBSD.org.

Slashdot Top Deals