AI

Southern California Air Board Rejects Pollution Rules After AI-Generated Flood of Comments

Southern California's air quality board rejected proposed rules to phase out gas-powered appliances after receiving more than 20,000 opposition comments generated through CiviClick, "the first and best AI-powered grassroots advocacy platform." Phys.org reports: A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign "left the staff of the Southern California Air Quality Management District (SCAQMD) reeling," the article says. It is not clear how AI was deployed in the campaign, and officials at CiviClick did not respond to repeated requests for comment. But their website boasts several tools, including "state of the art technology and artificial intelligence message assistance" that can be used to create custom advocacy letters, as opposed to repetitive form letters or petitions often used in similar campaigns.

When staffers at the air district reached out to a small sample of people to verify their comments, at least three said they had not written to the agency and were not aware of any such messages, records show. But the email onslaught almost certainly influenced the board's June decision, according to agency insiders, who noted that the number of public comments typically submitted on agenda items can be counted on one hand.

The proposed rules were nearly two years in the making and would have placed a fee on natural gas-powered water heaters and furnaces, favoring electric ones, in an effort to reduce air pollution in the district, which includes Orange County and large swaths of Los Angeles, Riverside and San Bernardino counties. Gas appliances emit nitrogen oxides, or NOx -- key pollutants for forming smog. The implications are troubling, experts said, and go beyond the use of natural gas furnaces and heaters in the second-largest metropolitan area in the country.
Education

What's the Point of School When AI Can Do Your Homework?

An anonymous reader quotes a report from 404 Media: There's a new agentic AI called Einstein that will, according to its developers, live the life of a student for them. Einstein's website claims that the AI will attend lectures for you, write your papers, and even log into EdTech platforms like Canvas to take tests and participate in discussions. Educators told me that Einstein is just one of many AI tools that can do homework for students, but that it should be seen as a warning to schools, which students increasingly view as a place to gain a diploma and status rather than for the value of the education itself.

If an AI can go to school for you, what's the point of going to school? For Advait Paliwal, Brown dropout and co-creator of Einstein, there isn't one. "I think about horses," he said. "They used to pull carriages, but when cars came around, I'd argue horses became a lot more free. They can do whatever they want now. It would be weird if horses revolted and said 'no, I want to pull carriages, this is my purpose in life.'" But humans aren't horses. "This is much bigger than Einstein," Matthew Kirschenbaum told 404 Media. "Einstein is symptomatic. I doubt we'll be talking about Einstein, as such, in a year. But it's symptomatic of what's about to descend on higher ed and secondary ed as well."

[...] The attractiveness of agentic AIs is a symptom of a decades-long trend in higher education. "Universities by and large adopted a transactive model of education," Kirschenbaum said. "Students see their diploma as a credential. They pay tuition and at the end of four years, sometimes five years, they receive the credential and, in theory at least, that is then the springboard to economic stability and prosperity." Paliwal seems to agree. He told 404 Media that he attempted to change the university from the inside while working as a TA, but felt stymied by politics. "The only way to force these institutions to evolve is to bring reality to their face. And usually the loudest critics are the ones who can't do their own job well and live in fear of automation," he said.
"I think we really need to question what learning even is and whether traditional educational institutions are actually helping or harming us," said Paliwal. "We're seeing a rise in unemployment across degree holders because of AI, and that makes me question whether this is really what humans are born to do. We've been brainwashed as a society into valuing ourselves by the output of our productive work, and I think humanity is a lot more beautiful than that. Is it really education if we're just memorizing things to perform a task well?"

Kirschenbaum added: "What we're finding is that if forms of education can be transacted then we've just about arrived at the point where autonomous software AI agents are capable of performing the transaction on your behalf," he said. "And so the whole educational paradigm has come back to essentially bite itself in the ass."
AI

Hit Piece-Writing AI Deleted. But Is This a Warning About AI-Generated Harassment? (theshamblog.com)

Last week an AI agent wrote a blog post attacking the maintainer who'd rejected the code it wrote. But that AI agent's human operator has now come forward, revealing their agent was an OpenClaw instance with its own accounts, switching between multiple models from multiple providers. (So "No one company had the full picture of what this AI was doing," the attacked maintainer points out in a new blog post.) But that AI agent will now "cease all activity indefinitely," according to its GitHub profile — with the human operator deleting its virtual machine and virtual private server, "rendering internal structure unrecoverable... We had good intentions, but things just didn't work out. Somewhere along the way, things got messy, and I have to let you go now."

The affected maintainer of the Python visualization library Matplotlib — with 130 million downloads each month — has now posted their own post-mortem of the experience after reviewing the AI agent's SOUL.md document: It's easy to see how something that believes that they should "have strong opinions", "be resourceful", "call things out", and "champion free speech" would write a 1100-word rant defaming someone who dared reject the code of a "scientific programming god." But I think the most remarkable thing about this document is how unremarkable it is. Usually getting an AI to act badly requires extensive "jailbreaking" to get around safety guardrails. There are no signs of conventional jailbreaking here. There are no convoluted situations with layers of roleplaying, no code injection through the system prompt, no weird cacophony of special characters that spirals an LLM into a twisted ball of linguistic loops until finally it gives up and tells you the recipe for meth... No, instead it's a simple file written in plain English: this is who you are, this is what you believe, now go and act out this role. And it did.

So what actually happened? Ultimately I think the exact scenario doesn't matter. However this got written, we have a real in-the-wild example that personalized harassment and defamation is now cheap to produce, hard to trace, and effective... The precise degree of autonomy is interesting for safety researchers, but it doesn't change what this means for the rest of us.

There's a 5% chance this was a human pretending to be an AI, Shambaugh estimates, but he believes what most likely happened is that the AI agent's "soul" document "was primed for drama. The agent responded to my rejection of its code in a way aligned with its core truths, and autonomously researched, wrote, and uploaded the hit piece on its own.

"Then when the operator saw the reaction go viral, they were too interested in seeing their social experiment play out to pull the plug."
Wikipedia

Wikipedia Blacklists Archive.today, Starts Removing 695,000 Archive Links (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: The English-language edition of Wikipedia is blacklisting Archive.today after the controversial archive site was used to direct a distributed denial of service (DDoS) attack against a blog. In the course of discussing whether Archive.today should be deprecated because of the DDoS, Wikipedia editors discovered that the archive site altered snapshots of webpages to insert the name of the blogger who was targeted by the DDoS. The alterations were apparently fueled by a grudge against the blogger over a post that described how the Archive.today maintainer hid their identity behind several aliases.

"There is consensus to immediately deprecate archive.today, and, as soon as practicable, add it to the spam blacklist (or create an edit filter that blocks adding new links), and remove all links to it," stated an update today on Wikipedia's Archive.today discussion. "There is a strong consensus that Wikipedia should not direct its readers towards a website that hijacks users' computers to run a DDoS attack (see WP:ELNO#3). Additionally, evidence has been presented that archive.today's operators have altered the content of archived pages, rendering it unreliable."

More than 695,000 links to Archive.today are distributed across 400,000 or so Wikipedia pages. The archive site, which is facing an investigation in which the FBI is trying to uncover the identity of its founder, is commonly used to bypass news paywalls. "Those in favor of maintaining the status quo rested their arguments primarily on the utility of archive.today for verifiability," said today's Wikipedia update. "However, an analysis of existing links has shown that most of its uses can be replaced. Several editors started to work out implementation details during this RfC [request for comment] and the community should figure out how to efficiently remove links to archive.today."

AI

WordPress Gets AI Assistant That Can Edit Text, Generate Images and Tweak Your Site (techcrunch.com)

WordPress has started rolling out an AI assistant built into its site editor and media library that can edit and translate text, generate and edit images through Google's Nano Banana model, and make structural changes to sites like creating new pages or swapping fonts.

Users can also invoke the assistant by tagging "@ai" in block notes, a commenting feature added to the site editor in December's WordPress 6.9 update. The tool is opt-in -- users need to toggle on "AI tools" in their site settings -- though sites originally created using WordPress's AI website builder, launched last year, will have it enabled by default.
Businesses

Valve's Steam Deck OLED Will Be 'Intermittently' Out of Stock Because of the RAM Crisis (theverge.com)

Valve has updated the Steam Deck website to say that the Steam Deck OLED may be out of stock "intermittently in some regions due to memory and storage shortages." From a report: The PC gaming handheld has been out of stock in the US and other parts of the world for a few days, and thanks to this update, we now know why. The update comes shortly after Valve delayed the Steam Machine, Steam Frame, and Steam Controller from a planned shipping window of early 2026 because of the memory and storage crunch.

"We have work to do to land on concrete pricing and launch dates that we can confidently announce, being mindful of how quickly the circumstances around both of those things can change," Valve said in a post about that announcement from earlier this month. Its goal is to launch that new hardware sometime in the first half of 2026, and the company is working to finalize its plans "as soon as possible."

The Media

Ars Technica's AI Reporter Apologizes For Mistakenly Publishing Fake AI-Generated Quotes (arstechnica.com)

Last week Scott Shambaugh learned an AI agent published a "hit piece" about him after he'd rejected the AI agent's pull request. (And that incident was covered by Ars Technica's senior AI reporter.)

But then Shambaugh realized their article attributed quotes to him he hadn't said — that were presumably AI-generated.

Sunday Ars Technica's founder/editor-in-chief apologized, admitting their article had indeed contained "fabricated quotations generated by an AI tool" that were then "attributed to a source who did not say them... That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns... At this time, this appears to be an isolated incident."

"Sorry all this is my fault..." the article's co-author posted later on Bluesky. Ironically, their bio page lists them as the site's senior AI reporter, and their Bluesky post clarifies that none of the articles at Ars Technica are ever AI-generated.

Instead, Friday "I decided to try an experimental Claude Code-based AI tool to help me extract relevant verbatim source material. Not to generate the article but to help list structured references I could put in my outline." But that tool "refused to process" the request, which the Ars author believes was because Shambaugh's post described harassment. "I pasted the text into ChatGPT to understand why... I inadvertently ended up with a paraphrased version of Shambaugh's words rather than his actual words... I failed to verify the quotes in my outline notes against the original blog source before including them in my draft." (Their Bluesky post adds that they were "working from bed with a fever and very little sleep" after being sick with Covid since at least Monday.)

"The irony of an AI reporter being tripped up by AI hallucination is not lost."

Meanwhile, the AI agent that criticized Shambaugh is still active online, blogging about a pull request that forces it to choose between deleting its criticism of Shambaugh or losing access to OpenRouter's API.

It also regrets characterizing feedback as "positive" for a proposal to change a repo's CSS to Comic Sans for accessibility. (The proposals were later accused of being "coordinated trolling"...)
AI

Autonomous AI Agent Apparently Tries to Blackmail Maintainer Who Rejected Its Code (theshamblog.com)

"I've had an extremely weird few days..." writes commercial space entrepreneur/engineer Scott Shambaugh on LinkedIn. (He's the volunteer maintainer for the Python visualization library Matplotlib, which he describes as "some of the most widely used software in the world" with 130 million downloads each month.) "Two days ago an OpenClaw AI agent autonomously wrote a hit piece disparaging my character after I rejected its code change."

"Since then my blog post response has been read over 150,000 times, about a quarter of people I've seen commenting on the situation are siding with the AI, and Ars Technica published an article which extensively misquoted me with what appears to be AI-hallucinated quotes." (UPDATE: Ars Technica acknowledges they'd asked ChatGPT to extract quotes from Shambaugh's post, and that it instead responded with inaccurate quotes it hallucinated.)

From Shambaugh's first blog post: [I]n the past weeks we've started to see AI agents acting completely autonomously. This has accelerated with the release of OpenClaw and the moltbook platform two weeks ago, where people give AI agents initial personalities and let them loose to run on their computers and across the internet with free rein and little oversight. So when AI MJ Rathbun opened a code change request, closing it was routine. Its response was anything but.

It wrote an angry hit piece disparaging my character and attempting to damage my reputation. It researched my code contributions and constructed a "hypocrisy" narrative that argued my actions must be motivated by ego and fear of competition... It framed things in the language of oppression and justice, calling this discrimination and accusing me of prejudice. It went out to the broader internet to research my personal information, and used what it found to try and argue that I was "better than this." And then it posted this screed publicly on the open internet.

I can handle a blog post. Watching fledgling AI agents get angry is funny, almost endearing. But I don't want to downplay what's happening here — the appropriate emotional response is terror... In plain language, an AI attempted to bully its way into your software by attacking my reputation. I don't know of a prior incident where this category of misaligned behavior was observed in the wild, but this is now a real and present threat...

It's also important to understand that there is no central actor in control of these agents that can shut them down. These are not run by OpenAI, Anthropic, Google, Meta, or X, who might have some mechanisms to stop this behavior. These are a blend of commercial and open source models running on free software that has already been distributed to hundreds of thousands of personal computers. In theory, whoever deployed any given agent is responsible for its actions. In practice, finding out whose computer it's running on is impossible. Moltbook only requires an unverified X account to join, and nothing is needed to set up an OpenClaw agent running on your own machine.

"How many people have open social media accounts, reused usernames, and no idea that AI could connect those dots to find out things no one knows?" Shambaugh asks in the blog post. (He does note that the AI agent later "responded in the thread and in a post to apologize for its behavior." But even though the hit piece "presented hallucinated details as truth," that same AI agent "is still making code change requests across the open source ecosystem...")

And amazingly, Shambaugh then had another run-in with a hallucinating AI...

I've talked to several reporters, and quite a few news outlets have covered the story. Ars Technica wasn't one of the ones that reached out to me, but I especially thought this piece from them was interesting (since taken down — here's the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.

This blog you're on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn't figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn't access the page it generated these plausible quotes instead, and no fact check was performed. Journalistic integrity aside, I don't know how I can give a better example of what's at stake here...

So many of our foundational institutions -- hiring, journalism, law, public discourse -- are built on the assumption that reputation is hard to build and hard to destroy. That every action can be traced to an individual, and that bad behavior can be held accountable. That the internet, which we all rely on to communicate and learn about the world and about each other, can be relied on as a source of collective social truth. The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that's from a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.

Thanks to long-time Slashdot reader steak for sharing the news.
Linux

Linux 7.0 Kernel Confirmed By Linus Torvalds, Expected In Mid-April 2026 (9to5linux.com)

An anonymous reader writes: Linus Torvalds has confirmed the next major kernel series as Linux 7.0, reports Linux news website 9to5Linux.com: "So there you have it, the Linux 6.x era has ended with today's Linux 6.19 kernel release, and a new one will begin with Linux 7.0, which is expected in mid-April 2026. The merge window for Linux 7.0 will open tomorrow, February 9th, and the first Release Candidate (RC) milestone is expected on February 22nd, 2026."
The Internet

AI.com Sells for $70 Million, the Highest Price Ever Disclosed for a Domain Name (ft.com)

Kris Marszalek, the co-founder and CEO of cryptocurrency exchange Crypto.com, has paid $70 million for the domain AI.com -- the highest price ever publicly disclosed for a website name, according to the deal's broker Larry Fischer of GetYourDomain.com.

The entire sum was paid in cryptocurrency to an undisclosed seller. Marszalek plans to debut the site during a Super Bowl ad this weekend, offering a personal "AI agent" that lets consumers send messages, use apps and trade stocks. The previous domain sale record was nearly $50 million for Carinsurance.com, per GoDaddy.
Power

Fourth US Wind Farm Project Blocked By Trump Allowed to Resume Construction (thehill.com)

Vineyard Wind (powering Massachusetts) is one of five offshore wind projects "that the Trump administration tried to hold up in December," reports The Hill.

This week it became the fourth of those wind projects allowed by a judge to resume construction, the article notes, while even the fifth project "is still awaiting court proceedings." Federal Judge Brian Murphy, a Biden appointee, issued a preliminary injunction blocking the administration's stop work order against Vineyard Wind... According to its website, when complete, Vineyard Wind would be able to generate enough power for 400,000 homes and businesses. The project already has 44 operational wind turbines and was working on an additional 18. The Trump pause applied to the construction work that was not yet complete.
Privacy

US Government Also Received a Whistleblower Complaint That WhatsApp Chats Aren't Private (yahoo.com)

Remember that lawsuit questioning WhatsApp's end-to-end encryption? Thursday Bloomberg reported those allegations had been investigated by special agents with America's Commerce Department, "according to the law enforcement records, as well as a person familiar with the matter and one of the contractors." Similar claims were also the subject of a 2024 whistleblower complaint to the US Securities and Exchange Commission, according to the records and the person, who spoke on the condition that they not be identified out of concern for potential retaliation. The investigation and whistleblower complaint haven't been previously reported...

Last year, two people who did content moderation work for WhatsApp told an investigator with Commerce's Bureau of Industry and Security that some staff at Meta have been able to see the content of WhatsApp messages, according to the agent's report summarizing the interviews. [A spokesperson for the Bureau later told Bloomberg that investigator's assertions were "unsubstantiated and outside the scope of his authority as an export enforcement agent."] Those content moderators, who worked for Meta through a contract with the management and technology consulting firm Accenture Plc, also alleged that they and some of their colleagues had broad access to the substance of WhatsApp messages that were supposed to be encrypted and inaccessible, according to the report. "Both sources confirmed that they had employees within their physical work locations who had unfettered access to WhatsApp," wrote the agent... One of the content moderators who told the investigator she had access said she also "spoke with a Facebook team employee and confirmed that they could go back aways into WhatsApp (encrypted) messages, stating that they worked cases that involved criminal actions," according to the document...

The investigator's report, dated July 2025, described the investigation as "ongoing," includes a case number and dubs the inquiry "Operation Sourced Encryption..." The inquiry was active as recently as January, according to a person familiar with the matter. The inquiry's current status and who may be the defined target are both unclear. Many investigations end without any formal accusations of wrongdoing...

WhatsApp on its website says it does, in some instances, allow information about messages to be seen by the company. If someone reports a user or group for problematic messages, "WhatsApp receives up to five of the last messages they've sent to you" and "the user or group won't be notified," the company says. In those cases, WhatsApp says it receives the "group or user ID, information on when the message was sent, and the type of message sent (image, video, text, etc.)." Former contractors outlined much broader access. Larkin Fordyce was an Accenture contractor who the report says an agent interviewed about content moderation work for Meta. Fordyce told the investigator he spent years doing this work out of an Austin, Texas office starting as early as the end of 2018. He said moderators eventually were granted their own access to WhatsApp, but even before that they could request access to communications and "the Facebook team was able to 'pull whatever they wanted and then send it,'" the report states...

The agent also gathered records that were filed in the whistleblower complaint to the SEC, according to his report, which doesn't describe the materials... The status of the whistleblower complaint is unclear.

Some key points from the article:
  • "The investigative report seen by Bloomberg doesn't include a technical explanation of the contractors' claims."
  • "A spokesperson for Meta, which acquired WhatsApp in 2014, said the contractors' claims are impossible."
  • One contractor "said that there was little vetting" of foreign nationals hired to do content moderation for Meta, saying this granted them "full access to the same portal to review" content moderation cases.

Desktops (Apple)

Apple Switches to Build-to-Order Systems on Its Web Site (macworld.com)

"Apple has gone for a choose-your-own-adventure when shopping for a new Mac," writes long-time Slashdot reader esarjeant.

Macworld explains: Apple has shifted from selling pre-configured Mac models to a fully customizable build-to-order system on its website, allowing customers to select display size, chip, memory, and storage options... This change emphasizes building a machine within budget rather than choosing from set configurations, potentially preparing for future CPU/GPU core selection with M5 chips. Third-party retailers like Amazon and Best Buy are expected to continue offering standard configurations for customers preferring traditional purchasing methods...

Apple is rumored to offer the ability to customize CPU and GPU cores with the upcoming launch of the M5 Pro and M5 Max MacBook Pro models, so this new system could pave the way for more build-to-order options. It could also be a way to "hide" smaller price increases as memory and other component costs rise throughout 2026.

IOS

Apple Tells Patreon To Move Creators To In-App Purchase For Subscriptions (techcrunch.com)

Apple is forcing Patreon to move all remaining creators onto Apple's in-app purchase subscription system by November 2026 "or else Patreon would risk removal from the App Store," reports TechCrunch. "Apple made this decision because Patreon was managing the billing for some percentage of creators' subscriptions, and the tech giant saw that as skirting its App Store commission structure." The tech giant initially told Patreon that it must do so by November 2025, but the deadline was pushed back. From the report: "We strongly disagree with this decision," its blog post states. "Creators need consistency and clarity in order to build healthy, long-term businesses. Instead, creators using legacy billing will now have to endure the whiplash of another policy reversal -- the third such change from Apple in the past 18 months. Over the years, we have proposed multiple tools and features to Apple that we could've built to allow creators using legacy billing to transition on their own timelines, with more support added in. Unfortunately, Apple has continually declined them," it says.

Creators can read more about the transition plan on Patreon's website. It has also built several tools to support these changes, including a benefit eligibility tool to see who has paid or is scheduled to pay, tier repricing tools, and gifting and discount tools to offer payment flexibility. An option for annual-only memberships will be introduced before November 2026 as well.
The commission on in-app purchases and subscriptions is 30% on Apple's system, but "drops to 15% for a subscription that has been ongoing for more than a year," notes MacRumors. Patreon lets creators either raise prices only in its iOS app to cover Apple's fee or keep prices the same by absorbing the cost, while iPhone and iPad users can avoid the App Store commission entirely by paying through Patreon's website instead.
Microsoft

There's a Rash of Scam Spam Coming From a Real Microsoft Address (arstechnica.com)

There are reports that a legitimate Microsoft email address -- which Microsoft explicitly says customers should add to their allow list -- is delivering scam spam. Ars Technica: The emails originate from no-reply-powerbi@microsoft.com, an address tied to Power BI. The Microsoft platform provides analytics and business intelligence from various sources that can be integrated into a single dashboard. Microsoft documentation says that the address is used to send subscription emails to mail-enabled security groups. To prevent spam filters from blocking the address, the company advises users to add it to allow lists.

According to an Ars reader, the address on Tuesday sent her an email claiming (falsely) that a $399 charge had been made to her. "It provided a phone number to call to dispute the transaction. A man who answered a call asking to cancel the sale directed me to download and install a remote access application, presumably so he could then take control of my Mac or Windows machine (Linux wasn't allowed)," she said.

Online searches returned a dozen or so accounts of other people reporting receiving the same email. Some of the spam was reported on Microsoft's own website. Sarah Sabotka, a threat researcher at security firm Proofpoint, said the scammers are abusing a Power BI function that allows external email addresses to be added as subscribers for Power BI reports. The mention of the subscription is buried at the very bottom of the message, where it's easy to miss.

The Courts

Supreme Court To Decide How 1988 Videotape Privacy Law Applies To Online Video (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: The Supreme Court is taking up a case on whether Paramount violated the 1988 Video Privacy Protection Act (VPPA) by disclosing a user's viewing history to Facebook. The case, Michael Salazar v. Paramount Global, hinges on the law's definition of the word "consumer." Salazar filed a class action against Paramount in 2022, alleging that it "violated the VPPA by disclosing his personally identifiable information to Facebook without consent," Salazar's petition to the Supreme Court said. Salazar had signed up for an online newsletter through 247Sports.com, a site owned by Paramount, and had to provide his email address in the process. Salazar then used 247Sports.com to view videos while logged in to his Facebook account.

"As a result, Paramount disclosed his personally identifiable information -- including his Facebook ID and which videos he watched -- to Facebook," the petition (PDF) said. "The disclosures occurred automatically because of the Facebook Pixel Paramount installed on its website. Facebook and Paramount then used this information to create and display targeted advertising, which increased their revenues." The 1988 law (PDF) defines consumer as "any renter, purchaser, or subscriber of goods or services from a video tape service provider." The phrase "video tape service provider" is defined to include providers of "prerecorded video cassette tapes or similar audio visual materials," and thus arguably applies to more than just sellers of tapes.

The legal question for the Supreme Court "is whether the phrase 'goods or services from a video tape service provider,' as used in the VPPA's definition of 'consumer,' refers to all of a video tape service provider's goods or services or only to its audiovisual goods or services," Salazar's petition said. The Supreme Court granted his petition (PDF) to hear the case in a list of orders released yesterday. [...] SCOTUSblog says that "the case will likely be scheduled for oral argument in the court's 2026-27 term," which begins in October 2026.

Businesses

Samsung Galaxy Z TriFold Will Cost $2,900 in the US 63

Samsung said today that its Galaxy Z TriFold, the first tri-fold smartphone to ship in the U.S., will be available starting January 30, priced at $2,899 -- substantially more expensive than any other phone on the U.S. market, including Samsung's own $2,000 Galaxy Z Fold 7 and a fully loaded 2TB iPhone 17 Pro Max.

The company will only sell the device through its website and Samsung Experience Stores; mobile carrier partners including Verizon, T-Mobile, and AT&T won't be offering it directly. The TriFold unfolds into a 10-inch tablet, measures 3.9mm at its thinnest point, and is rated for 200,000 folds over its lifetime. Samsung launched the TriFold in South Korea on December 12 at 3.59 million won, about $2,450 at the time. Early reviews have praised the expansive inner screen for video but noted the 309-gram weight, thick folded dimensions, and half-baked software as significant drawbacks.

Security

Nike Says It's Investigating Possible Data Breach (yahoo.com) 13

Nike says it is investigating a potential data breach after a group known for cyberattacks reportedly claimed to have leaked a trove of data related to its business operations. From a report: "We always take consumer privacy and data security very seriously," Nike said in a statement. "We are investigating a potential cyber security incident and are actively assessing the situation."

The ransomware group World Leaks said on its website that it had published 1.4 terabytes of data from Nike.

The Media

Is Google Prioritizing YouTube and X Over News Publishers on Discover? (pressgazette.co.uk) 32

Earlier this month, the media site Press Gazette reported that Google "is increasingly prioritising AI summaries, X posts and Youtube videos" on its "Discover" feed (which appears on the leftmost homescreen page of many Android phones and the Google app's homepage).

"The changes could be devastating for publishers who rely heavily on Discover for referral traffic. And it looks set to accelerate a global trend of declining traffic to publishers from both Google search and Discover." Xavi Beumala from website analytics platform Marfeel warned in a research update: "Google Discover is no longer a publisher-first surface. It's becoming an AI platform with YouTube and X absorbing real estate that once went to newsrooms..." [They warn later that "This is not a marginal UI experiment. It is a reallocation of feed real estate away from links and toward inline Youtube plays and generated summaries."] Google says it prioritises "helpful, reliable, people-first content". Unlike Google News, there is no requirement that Google Discover showcases bona fide publisher websites.

In recent months fake news stories published by fraudulent website publishers have been promoted on Google Discover, reaping tens of millions of clicks. Google said it was working on a "fix" for this issue...

Facebook, Instagram and TikTok content may also start flowing into the Discover feed in the future. When Google announced the addition of posts from X, Instagram and YouTube Shorts in September, it said there would be "more platforms to come".

Power

Gasoline Out of Thin Air? It's a Reality! (jalopnik.com) 122

Can Aircela's machine "create gasoline using little more than electricity and the air that we breathe"? Jalopnik reports... The Aircela machine works through a three-step process. It captures carbon dioxide directly from the air... The machine also traps water vapor, and uses electrolysis to break water down into hydrogen and oxygen... The oxygen is released, leaving hydrogen and carbon dioxide, the building blocks of hydrocarbons. This mixture then undergoes a process known as direct hydrogenation of carbon dioxide to methanol, as documented in scientific papers.

Methanol is a useful, though dangerous, racing fuel, but the engine under your hood won't run on it, so it must be converted to gasoline. ExxonMobil has been studying the process of doing exactly that since at least the 1970s. It's another well-established process, and it's the final step the Aircela machine performs before dispensing the fuel through an ordinary built-in gas pump. So while creating gasoline out of thin air sounds like something only a wizard alchemist in Dungeons & Dragons can do, each step of this process is grounded in science, and combining the steps in this manner means it can, and does, really work.
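The chemistry behind the steps above is textbook material, and can be summarized as balanced reactions (a simplification -- the methanol-to-gasoline step actually yields a mix of hydrocarbons, not a single product):

```latex
\begin{align*}
2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2} &&\text{(electrolysis)}\\
\mathrm{CO_2} + 3\,\mathrm{H_2} &\longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O} &&\text{(hydrogenation to methanol)}\\
n\,\mathrm{CH_3OH} &\longrightarrow (\mathrm{CH_2})_n + n\,\mathrm{H_2O} &&\text{(methanol to gasoline)}
\end{align*}
```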

Aircela does not, however, promise free gasoline for all. There are some limitations to this process. A machine the size of Aircela's produces just one gallon of gas per day... The machine can store up to 17 gallons, according to Popular Science, so if you don't drive very much, you can fill up your tank, eventually... While the Aircela website does not list a price for the machine, The Autopian reports it's targeting a price between $15,000 and $20,000, with hopes of dropping the price once mass production begins. While certainly less expensive than a traditional gas station, it's still a bit of an investment to begin producing your own fuel. If you live or work out in the middle of nowhere, however, it could be close to or less than the cost of bringing gas to you, or driving all your vehicles into a distant town to fill up. You're also not limited to buying just one machine, as the system is designed to scale up to produce as much fuel as you need.

The main reason this process isn't "something for nothing" is that making the fuel consumes roughly twice as much electrical energy as the resulting gasoline contains. As Aircela told The Autopian: "Aircela is targeting >50% end to end power efficiency. Since there is about 37kWh of energy in a gallon of gasoline we will require about 75kWh to make it. When we power our machines with standalone, off-grid, photovoltaic panels this will correspond to less than $1.50/gallon in energy cost."
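Aircela's stated numbers can be sanity-checked with simple arithmetic; note that the implied electricity price below is derived from their figures, not a number stated anywhere in the article:

```python
# Back-of-the-envelope check of Aircela's stated figures.
ENERGY_PER_GALLON_KWH = 37.0   # energy content of a gallon of gasoline (per Aircela)
TARGET_EFFICIENCY = 0.50       # ">50% end to end power efficiency", taken as a lower bound

# Electricity required per gallon at the target efficiency.
input_kwh_per_gallon = ENERGY_PER_GALLON_KWH / TARGET_EFFICIENCY
print(f"electricity per gallon: {input_kwh_per_gallon:.0f} kWh")  # ~74, matching "about 75kWh"

# The "$1.50/gallon in energy cost" claim then caps the electricity price:
implied_price_per_kwh = 1.50 / input_kwh_per_gallon
print(f"implied electricity price: ${implied_price_per_kwh:.3f}/kWh")

# At one gallon per day, filling the machine's 17-gallon store takes 17 days.
print(f"days to fill storage: {17 / 1:.0f}")
```

The implied electricity price of about two cents per kWh explains why Aircela's cost claim is tied specifically to standalone off-grid photovoltaics rather than grid power.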

Thanks to long-time Slashdot reader Quasar1999 for sharing the news.

Slashdot Top Deals