Facebook

Meta Debuts 'Muse Spark', First AI Model Under Alexandr Wang (axios.com) 7

Meta has launched Muse Spark, its first major AI model under Alexandr Wang's leadership. The model was built over the past nine months and is being positioned as a significant step up from Llama 4. Axios reports: Muse Spark will power queries in the Meta AI app and Meta.ai website immediately, with plans to expand across Facebook, Instagram and WhatsApp. The model accepts voice, text and image inputs, but produces text-only output. [...] Meta plans to release a version of Muse Spark under an open-source license.

The model uses a fast mode for casual queries and several reasoning modes. A "shopping mode" highlights how Meta hopes to differentiate itself. It combines large language models with data on user interests and behavior. Over time, the model will also power "features that cite recommendations and content people share across Instagram, Facebook, and Threads," Meta said in a blog post.
Wang, the 29-year-old entrepreneur who co-founded Scale AI, joined Meta's "superintelligence" unit last year to help Meta catch up to rival models from OpenAI and Anthropic.
Games

Valve Releases Native Steam Link App For Apple's Vision Pro (macobserver.com) 25

Valve has released a native Steam Link beta for Apple Vision Pro, letting users stream their existing Steam games onto a large virtual screen in visionOS. It supports up to 4K resolution and will let you dynamically adjust the curve of the display. The Mac Observer reports: Steam Link does not support VR titles in this beta, and Valve clearly states that the app is limited to 2D game streaming, but this still opens up a large library of games that users can play on a massive virtual screen inside Vision Pro.

At the same time, Vision Pro already handles 2D media very well, and this update builds on that strength by turning the headset into a portable gaming display that connects directly to your existing setup without needing extra hardware.

You can join the Steam Link beta through TestFlight right now, and this early release shows how Apple Vision Pro continues to expand beyond media into more practical and everyday use cases like gaming.

Portables (Apple)

Apple and Lenovo Have the Least Repairable Laptops, Analysis Finds (arstechnica.com) 57

An anonymous reader quotes a report from Ars Technica: Apple earned the lowest grades in a report on laptop and smartphone repairability released today by the consumer advocacy group Public Interest Research Group (PIRG) Education Fund. The report, which looks at how easy devices are to disassemble and how easy it is to find repairability information, gave Apple a C-minus in laptop repairability and a D-minus in cell phone repairability. For its "Failing the Fix (2026): Grading laptop and cell phone companies on the fixability of their products" report, PIRG analyzed the 10 newest laptops and phones that were available via manufacturers' French websites in January. [...] Apple leads the list of laptop repairability losers, largely due to its low disassembly scores. Apple, along with Dell and Samsung, also lost a full point for being members of TechNet and the CTA. Lenovo had the second-worst grade with a C-minus. Like Apple, Lenovo had low disassembly scores.

It also lost 0.5 points for failing to properly post PDFs explaining the French repair scores for some of its newest laptops sold in the region, as required in France. This is especially noteworthy because Lenovo got an F in last year's report for missing this information on at least 12 laptops. At the time, Lenovo director of communications David Hamilton provided a statement to Ars saying that the missing information was "due to a backend web compatibility issue that temporarily prevented the display of repairability scores on our Lenovo France website" that was "widely resolved." However, it appears that over a year later, Lenovo still isn't providing sufficient information to meet France's requirements.

"While Lenovo has improved somewhat with their compliance with French consumer law by providing more repair score PDFs on their website, we urge the company to resolve this multi-year issue," this year's report says. PIRG's report concluded that "laptops are pretty stagnant in terms of repairability" across many of the eight most popular laptop brands in the US. However, Proctor noted to Ars that consumers' access to parts, tools, and information from vendors has improved, but that improvements around ease of disassembly "take longer to realize." He also praised vendors' efforts to release more repairable designs, such as Apple's MacBook Neo.
For its repairability index, PIRG weighed physical ease of disassembly most heavily, while also considering the availability of repair documentation, spare parts, spare-parts affordability, and other product-specific criteria. It then adjusted company grades by deducting points for membership in trade groups that oppose right-to-repair laws and adding small bonuses for manufacturers that supported right-to-repair legislation.

Acer stood out as the only laptop vendor that avoided the 0.5-point trade-group penalty, since it was not listed as a member of TechNet or the Consumer Technology Association.
Portables (Apple)

Apple Faces 'Massive Dilemma' With Success of the MacBook Neo (macrumors.com) 149

Apple may have a supply problem on its hands with the MacBook Neo... The laptop reportedly relies on "binned" A18 Pro chips with one GPU core disabled, and demand is so strong that the supply of those cheaper leftover chips could run out before the next model is ready. That leaves Apple choosing between lower margins, shifting production plans, or changing the lineup to keep its $599 hit product in stock. MacRumors reports: The all-new MacBook Neo has been such a hit that Apple is facing a "massive dilemma," according to Taiwan-based tech columnist and former Bloomberg reporter Tim Culpan. [...] In the latest edition of his Culpium newsletter today, Culpan said the MacBook Neo is selling so well that Apple's supply of the binned A18 Pro chips with a 5-core GPU will "run out" before the company is able to fully satisfy demand for the laptop. Apple's initial plan was to have suppliers build around five to six million MacBook Neo units before ceasing production of the model with the A18 Pro chip, he said, but it sounds like demand is so strong that Apple might run out of A18 Pro chips to put in the MacBook Neo before the second-generation MacBook Neo with an A19 Pro chip is ready next year. Apple is unlikely to mark the MacBook Neo as temporarily sold out, so it may be forced to take action, but profit margins might be affected.

A18 Pro chips are manufactured with TSMC's second-generation 3nm process, known as N3E, and Culpan said TSMC's N3E production lines are currently operating at maximum capacity. As a result, he said that Apple may have to pay a premium to restart A18 Pro chip production for the MacBook Neo, which would lower its profit margins. Apple would have to disable a GPU core on these chips to ensure that they have only a 5-core GPU, like all other MacBook Neo units sold to date. Alternatively, Culpan said that Apple could reallocate some of its chip production that was originally planned for other devices, but he said the cost would still be higher than what it paid for its initial batch of A18 Pro chips.

Culpan speculated that Apple could also opt to discontinue the $599 model with 256GB of storage, leaving the $699 model with 512GB of storage and a Touch ID button as the only configuration available. This is unlikely to happen any time soon, in our view, given how heavily Apple has been promoting the MacBook Neo's affordability. Apple might also be able to move up the release of a MacBook Neo with the iPhone 17 Pro's A19 Pro chip, but that too would be a costlier option, at least until the company achieves a sufficient stockpile of binned A19 Pro chips with a 5-core GPU. In any case, Apple could opt to keep the starting price of current and future MacBook Neo models at $599 and simply accept lower profit margins on the laptop, especially given that it attracts customers to the macOS and broader Apple ecosystem.

Privacy

LinkedIn Faces Spying Allegations Over Browser Extension Scanning (pcmag.com) 70

LinkedIn is facing allegations that it quietly scans users' browsers for installed Chrome extensions. The German group Fairlinked e.V. goes so far as to claim that the site is "running one of the largest corporate espionage operations in modern history."

"The program runs silently, without any visible indicator to the user," the group says. "It does not ask for consent. It does not disclose what it is doing. It reports the results to LinkedIn's servers. This is not a one-time check. The scan runs on every page load, for every visitor." PCMag reports: This browser extension "fingerprinting" technique has been spotted before, but it was previously found to probe only 2,000 to 3,000 extensions. Fairlinked alleges that LinkedIn is now scanning for 6,222 extensions that could indicate a user's political opinions or religious views. For example, the extensions LinkedIn will look for include one that flags companies as too "woke," one that can add an "anti-Zionist" tag to LinkedIn profiles, and two others that can block content forbidden under Islamic teachings.
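Extension "fingerprinting" of this kind generally works by probing for "web accessible resources" that installed extensions expose to web pages: if a probe loads, the extension is present. Below is a minimal sketch of the general technique, not LinkedIn's actual code; the extension ID and resource path are hypothetical placeholders.

```javascript
// Sketch of generic browser-extension fingerprinting: the page attempts to
// fetch a resource a target extension declares as "web accessible". The fetch
// succeeds only if that extension is installed in the visitor's browser.
// Both the 32-character extension ID and the resource path are made up.
const probes = [
  { id: "aaaabbbbccccddddeeeeffffgggghhhh", resource: "logo.png" },
];

async function detectExtensions(probes) {
  const found = [];
  for (const { id, resource } of probes) {
    try {
      // chrome-extension:// URLs resolve only for installed extensions
      // that expose the resource; otherwise the fetch rejects.
      await fetch(`chrome-extension://${id}/${resource}`);
      found.push(id);
    } catch {
      // Probe failed: extension absent, or resource not web accessible.
    }
  }
  return found;
}
```

A site running thousands of such probes on every page load would learn, for each visitor, which of the probed extensions are installed, which is essentially the behavior Fairlinked alleges.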

It would also be a cakewalk to tie the collected extension data to specific users, since LinkedIn operates as a vast professional social network that covers people's work history. Fairlinked's concern is that Microsoft and LinkedIn can allegedly use the data to identify which companies use competing products. "LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets," the group claims. However, LinkedIn claims that Fairlinked mischaracterizes a LinkedIn safeguard designed to prevent web scraping by browser extensions. "We do not use this data to infer sensitive information about members," the company says. "To protect the privacy of our members, their data, and to ensure site stability, we do look for extensions that scrape data without members' consent or otherwise violate LinkedIn's Terms of Service," LinkedIn adds.

[...] The statement goes on to allege that Fairlinked is from a developer whose account was previously suspended for web scraping. One of the group's board members is listed as "S.Morell," which appears to be Steven Morell, the founder of Teamfluence, a tool that helps businesses monitor LinkedIn activity. [...] Still, the Microsoft-owned site is facing some blowback for not clearly disclosing the browser extension scanning in LinkedIn's privacy policy. Fairlinked is soliciting donations for a legal fund to take on Microsoft and is urging the public to encourage local regulators to intervene.

Cellphones

Teardown of Unreleased LG Rollable Shows Why Rollable Phones Aren't a Thing (arstechnica.com) 44

A teardown video of LG's never-released Rollable phone helps explain why rollable phones never became a real product category: they were likely too expensive, fragile, and complicated to manufacture at scale.

"The complexity of the internals would have made the Rollable extremely expensive to manufacture, and it would have demanded a high price tag," reports Ars Technica. "Durability is also a big concern. There's just a lot going on inside this phone, with multiple motors, springy arms, tracks, and a screen that has to loop around the back. [...] It seems unlikely the LG Rollable could have survived daily use for multiple years." From the report: The LG Rollable is just one of several rollable concept phones that appeared throughout the early 2020s. Flexible OLED screens had finally become affordable, leading to foldable phones like the Samsung Galaxy Z Fold. "Affordable" is relative here, though. Foldables were and still are very expensive devices. Based on what we can see of the complex inner workings of the LG Rollable, these devices may have commanded even higher prices. Noted YouTube phone destroyer JerryRigEverything managed to snag a working prototype LG Rollable. It may even be the unit LG demoed at CES 2021.

The device looks like a regular phone at first glance, but a quick swipe activates the motor, which unfurls additional screen real estate from around the back. This makes the viewable area about 40 percent larger without the added thickness of a foldable. The device expands with the aid of two tiny motors, which are attached via straight teeth to an internal track. The screen assembly has zipper-like teeth that keep it locked into the frame as it moves. The motors make a surprising amount of noise when operating, so LG designed the phone to play a musical chime to hide the sound. While the motor does the heavy lifting, the phone also has a lattice of articulating spring-loaded arms inside that keep the OLED panel supported even as the frame slides side to side. The battery and motherboard sit in a tray that allows the back of the phone to expand as the OLED rolls into view.

This is a prototype phone, featuring a chunky frame and visible screws. That helped Zack Nelson from JerryRigEverything successfully disassemble and reassemble the phone. So this little bit of mobile history was not destroyed, and the teardown gives us a good look at how LG was hoping to attract new customers before calling it quits.

News

AP Offers Buyouts As Part of Pivot Away From Newspaper Journalism (apnews.com) 27

The Associated Press is offering buyouts to U.S. journalists "as part of an acceleration away from the focus on newspaper journalism that sustained the company since the mid-1800s," the not-for-profit outlet reported today. AP says it is making the move from a position of strength, responding to shrinking newspaper revenue and growing demand from digital, broadcast, and tech clients.

"The AP is not in trouble," said Julie Pace, executive editor and senior vice president of the AP. "We're making these changes from a position of strength but we're doing so now to recognize our changing customer base." From the report: The news organization is becoming more focused on visual journalism and developing new revenue sources, particularly through companies investing in artificial intelligence, to cope with the economic collapse of many legacy news outlets. Once the lion's share of AP's revenue, big newspaper companies now account for 10% of its income. "We're not a newspaper company and we haven't been for quite some time," [said Pace].

Despite changes -- the company has doubled the number of video journalists it employs in the United States since 2022 -- remnants of a staffing structure built largely to provide stories to newspapers and broadcasters in individual states have remained. That has its roots well back in American history; the AP was started in the mid-19th century by New York newspapers looking to share the costs of reporting outside their immediate territory.

The number of AP journalists who will lose jobs is murky, in part intentionally. The AP does not say how many journalists it employs, though it has a large international presence as well as its U.S. staff. Pace said the AP's goal is to reduce its global staff by less than 5%. The Marketing and Media Alliance estimated the AP had 3,700 staffers, but it was not clear when that estimate was made. Since buyouts are being offered now to only U.S. journalists, it stands to reason that the cut among that workforce will be more than 5%. Whether there are layoffs depends on how many people take the offer, Pace said.

Movies

Hundreds of Theatres Show Apocalyptic-Yet-Optimistic New Movie, 'The AI Doc' (yahoo.com) 14

Hundreds of theatres are now showing a new documentary called The AI Doc: Or How I Became An Apocaloptimist. Variety calls it "playful and heady," edited "with a spirit of ADHD alertness." The New York Times suggests it "tries to cover so much that it ends up being more confusing than clarifying, but parts are fascinating."

But the Los Angeles Times calls it an "aggravating soup of information and opinion that wants to move at the speed of machine thought." So while co-director Daniel Roher asks whether he should bring a child into a world with AI, "Perhaps more urgently, should Roher have made an AI doc that treats us like children?" First, he parades all the safety doomers, seeming to believe their warnings that an unfeeling superintelligence is upon us and we can't trust it. Then, sufficiently disturbed, he hauls in the AI cheerleaders, a suspiciously positive gang who can envision only medical miracles and grindless lives in which we're all full-time artists. Only then, after this simplistic setup where platitudes reign, do we get the section in which the subject is treated like the brave (and grave) new world it is: geopolitically fraught, economically tenuous and a playground for billionaires.

Why couldn't the complexity have been the dialogue from the beginning, instead of the play-dumb cartoon "The AI Doc" feels like for so long? Maybe Roher believes this is what our increasingly gullible, truth-challenged citizenry needs from an explanatory doc: a flashy, kindhearted reminder that we're the change we need to be.

Read more reactions here and here. Mashable warns the documentary's director "will ultimately craft a journey that feels like a panic attack in real time. In the end, you may not feel better about mankind's chances against the rise of AI. But you'll likely feel less helpless in the future before us all."

They also point out that the film "shares some ways its audience can more actively be a part of the conversation, and provides a link to the film's website for engagement," where 6,948 people have now signed up for its newsletter. ("Demand a seat at the table," urges its signup button, under a warning that "Government and AI companies are designing our future without us. We need to reclaim our voice in shaping the future of AI...")
AI

Will 'AI-Assisted' Journalists Bring Errors and Retractions? (msn.com) 22

Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal.

"AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said...

Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers.

While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence", he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..."

"Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said the people are what makes journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava.

For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently....

Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue.

Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian." But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter. We must stem the idea being pushed by tech companies and their billionaire funders who've sunk too much into their products to admit defeat that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not...

Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave...

But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...
Ubuntu

Does Ubuntu Now Require More RAM Than Windows 11? (omgubuntu.co.uk) 116

"Canonical is no longer pretending that 4GB is enough," writes the blog How-To Geek, noting Ubuntu 26.04 LTS "raises the baseline memory to 6GB, alongside a 2GHz dual-core processor, and 25GB of storage..." Ubuntu 14.04 LTS (Trusty Tahr) set the floor at 1GB — a modest ask when it launched more than a decade ago in 2014. Then came Ubuntu 18.04 LTS (Bionic Beaver), which pushed the number to 4GB, surviving quite well in the era of 16GB being considered standard for mid-range laptops.... Ubuntu's new minimum requirement lands in an interesting spot when compared against Windows 11. Microsoft's operating system requires just 4GB RAM, although real-world usage often tells a different story. Usually, 8GB is considered the sweet spot to handle modern apps and multitasking.
The blog OMG Ubuntu argues this change is "not because Ubuntu requires 2GB more memory than it did, but more the way we compute does." It's more of an honesty bump. Components that make up the distro — the GNOME desktop and extensions, modern web browsers (and the sites we load in them) and the kinds of apps we use (and keep running) whilst multitasking — are more demanding... The Resolute Raccoon's memory requirements better reflect real-world multitasking.

Ubuntu 26.04 LTS can be installed on devices with less than 6GB RAM (but not less than 25GB of disk space). The experience may not be as smooth or as responsive as developers intend (so you don't get to complain), but it will work. I installed Ubuntu 26.04 Beta on a laptop with just 2 GB of memory — slow to the point of frustration in use, but otherwise functional.

If you have a device with 4 GB RAM and you can't upgrade (soldered memory is a thing, and e-waste can be avoided), then alternatives exist. Many Ubuntu flavours, like Lubuntu, have lower system requirements than the main edition. Plus, there's always the manual option: using the Ubuntu netboot installer to install a base system and then build out a more minimal system from there.

Apple

Apple's First 50 Years Celebrated - Including How Steve Jobs Finally Accepted an 'Open' App Store (substack.com) 49

Apple's 50th anniversary got celebrated in weird and wild ways. CEO Tim Cook posted a special 30-second video rewinding backwards through the years of Apple's products until it reaches the Apple I. Podcaster Lex Fridman noticed if you play the sound in reverse, "It's the Think Different ad music, pitched up." TechRadar played seven 50-year-old Apple I games on an emulator, including Star Trek, Blackjack, Lunar Lander, and of course, Conway's Game of Life.

And Macworld ranked Apple's 50 most influential people. (Their top five?)

5. Tony Fadell (iPhone co-creator/"father of the iPod")
4. Sir Jony Ive
3. Steve Wozniak
2. Tim Cook
1. Steve Jobs

One of the most thoughtful celebrants was David Pogue, who's spent 42 years writing about Apple (starting as a Macworld columnist and the author of Mac for Dummies, one of the first "...For Dummies" books ever published, in the early 1990s). Now 63 years old, Pogue spent the last two years working on a 608-page hardcover book titled Apple: The First 50 Years. But on his Substack, Pogue contemplated his own history with the company — including several interactions with Steve Jobs. Pogue remembers how Jobs "hated open systems. He wanted to make self-contained, beautiful machines. He didn't want them polluted by modifications."

The tech blog Daring Fireball notes that Pogue actually interviewed Scott Forstall (who'd led the iPhone's software development team) for his new book, "and got this story, about just how far Steve Jobs thought Apple could go to expand the iPhone's software library while not opening it to third-party developers." "I want you to make a list of every app any customer would ever want to use," he told Forstall. "And then the two of us will prioritize that list. And then I'm going to write you a blank check, and you are going to build the largest development team in the history of the world, to build as many apps as you can as quickly as possible." Forstall, dubious, began composing a list. But on the side, he instructed his engineers to build the security foundations of an app store into the iPhone's software, "against Steve's knowledge and wishes," Forstall says. [...]

Two weeks after the iPhone's release, someone figured out how to "jailbreak" the iPhone: to hack it so that they could install custom apps. Jobs burst into Forstall's office. "You have to shut this down!" But Forstall didn't see the harm of developers spending their efforts making the iPhone better. "If they add something malicious, we'll ship an update tomorrow to protect against that. But if all they're doing is adding apps that are useful, there's no reason to break that." Jobs, troubled, reluctantly agreed.

Week by week, more cool apps arrived, available only to jailbroken phones. One day in October, Jobs read an article about some of the coolest ones. "You know what?" he said. "We should build an app store."

Forstall, delighted, revealed his secret plan. He had followed in the footsteps of Burrell Smith (the Mac's memory-expansion circuit) and Bob Belleville (the Sony floppy-drive deal): He'd disobeyed Jobs and wound up saving the project.

In fact, the book "includes new interviews with 150 key people who made the journey, including Steve Wozniak, John Sculley, Jony Ive, and many current designers, engineers, and executives" (according to its description on Amazon). Pogue's book even revisits the story of Steve Jobs proving an iPod prototype could be smaller by tossing it into an aquarium, shouting "If there's air bubbles in there, there's still room. Make it smaller!" But Pogue's book "added that there's a caveat to this compelling bit of Apple lore," reports NPR.

"It never actually happened. It's just one more Apple myth."
AI

Top NPM Maintainers Targeted with AI Deepfakes in Massive Supply-Chain Attack, Axios Briefly Compromised (pcmag.com) 33

"Hackers briefly turned a widely trusted developer tool into a vehicle for credential-stealing malware that could give attackers ongoing access to infected systems," the news site Axios.com reported Tuesday, citing security researchers at Google.

The compromised package — also named axios — simplifies HTTP requests, and reportedly receives millions of downloads each day: The malicious versions were removed within roughly three hours of being published, but Google warned the incident could have "far-reaching impacts" given the package's widespread use, according to John Hultquist, chief analyst at Google Threat Intelligence Group. Wiz estimates axios is downloaded roughly 100 million times per week and is present in about 80% of cloud and code environments. So far, Wiz has observed the malicious versions in roughly 3% of the environments it has scanned.
Friday PCMag notes the maintainer's compromised account had two-factor authentication enabled, with the breach ultimately traced "to an elaborate AI deepfake from suspected North Korean hackers that was convincing enough to trick a developer into installing malware," according to a post-mortem published Thursday by lead developer Jason Saayman: [Saayman] fell for a scheme from a North Korean hacking group, dubbed UNC1069, which involves sending out phishing messages and then hosting virtual meetings that use AI deepfakes to clone the face and voices of real executives. The virtual meetings will then create the impression of an audio problem, which can only be "solved" if the victim installs some software or runs a troubleshooting command. In reality, it's an effort to execute malware. The North Koreans have been using the tactic repeatedly, whether it be to phish cryptocurrency firms or to secure jobs from IT companies.

Saayman said he faced a similar playbook. "They reached out masquerading as the founder of a company, they had cloned the company's founders likeness as well as the company itself," he wrote. "They then invited me to a real Slack workspace. This workspace was branded... The Slack was thought out very well, they had channels where they were sharing LinkedIn posts. The LinkedIn posts I presume just went to the real company's account, but it was super convincing etc." The hackers then invited him to a virtual meeting on Microsoft Teams. "The meeting had what seemed to be a group of people that were involved. The meeting said something on my system was out of date. I installed the missing item as I presumed it was something to do with Teams, and this was the remote access Trojan," he added. "Everything was extremely well coordinated, looked legit and was done in a professional manner."

Friday developer security platform Socket wrote that several more maintainers in the Node.js ecosystem "have come out of the woodwork to report that they were targeted by the same social engineering campaign." The accounts now span some of the most widely depended-upon packages in the npm registry and Node.js core itself, and together they confirm that axios was not a one-off target. It was part of a coordinated, scalable attack pattern aimed at high-trust, high-impact open source maintainers. Attackers also targeted several Socket engineers, including CEO Feross Aboukhadijeh. Feross is the creator of WebTorrent, StandardJS, buffer, and dozens of widely used npm packages with billions of downloads... Commenting on the axios post-mortem thread, he noted that this type of targeting [against individual maintainers] is no longer unusual... "We're seeing them across the ecosystem and they're only accelerating."

Jordan Harband, John-David Dalton, and other Socket engineers also confirmed they were targeted. Harband, a TC39 member, maintains hundreds of ECMAScript polyfills and shims that are foundational to the JavaScript ecosystem. Dalton is the creator of Lodash, which sees more than 137 million weekly downloads on npm. Between them, the packages they maintain are downloaded billions of times each month. Wes Todd, an Express TC member and member of the Node Package Maintenance Working Group, also confirmed he was targeted. Matteo Collina, co-founder and CTO of Platformatic, Node.js Technical Steering Committee Chair, and lead maintainer of Fastify, Pino, and Undici, disclosed on April 2 that he was also targeted. His packages also see billions of downloads per year... Scott Motte, creator of dotenv, the package used by virtually every Node.js project that handles environment variables, with more than 114 million weekly downloads, also confirmed he was targeted using the same Openfort persona.

Socket reports that another maintainer was targeted with an invitation to appear on a podcast. (During the recording, a suspicious technical issue appeared that required a software fix to resolve...)

Even judged purely on technical implementation, "This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package," the CI/CD security company StepSecurity wrote Tuesday. The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy... Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server before npm had even finished resolving dependencies... Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.
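The attack hinged on npm's install-time lifecycle hooks, which run arbitrary code automatically during `npm install`. A minimal audit sketch in Python (a hypothetical helper, not StepSecurity's tooling; the example manifests are invented) shows how a package.json can be flagged for declaring those hooks:

```python
# Install-time lifecycle hooks that npm runs automatically during
# `npm install`: the mechanism that let the dropper call home within
# seconds, before dependency resolution had even finished.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def risky_scripts(manifest: dict) -> set:
    """Return any install-time lifecycle hooks a package.json declares."""
    return RISKY_HOOKS & set(manifest.get("scripts", {}))

# Invented example manifests, for illustration only:
clean = {"name": "some-lib", "version": "1.0.0",
         "scripts": {"test": "mocha"}}
trojaned = {"name": "some-lib", "version": "1.0.1",
            "scripts": {"postinstall": "node setup.js"}}

print(risky_scripts(clean))     # no install-time hooks declared
print(risky_scripts(trojaned))  # flags the postinstall hook
```

As a blunter defense, installing with npm's real `--ignore-scripts` flag (e.g. `npm ci --ignore-scripts`) refuses to run these hooks at install time altogether.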
"As preventive steps, Saayman has now outlined several changes," reports The Hacker News, "including resetting all devices and credentials, setting up immutable releases, adopting OIDC flow for publishing, and updating GitHub Actions to adopt best practices."

The Wall Street Journal called it "the latest in a string of incidents exposing risks in the systems that underpin how modern software is built."
Windows

Microsoft Pulls Then Re-Issues Windows 11 Preview Update. Also Begins Force-Updating Windows 11 (techrepublic.com) 78

Nine days ago Microsoft released a non-security "preview" update for Windows 11 — not mandatory for the average Windows user, notes ZDNet, "but rather as optional, more for IT admins and power users who want to test them."

TechRepublic adds that the update "was to bring 'production-ready improvements' and generally ensure system stability by optimizing different Windows services." So it's ironic that some (but not all) users reported instead that the update "blocks users at the door, refusing to install or crashing midway through the process."

"It apparently impacted enough people to force Microsoft to take action," writes ZDNet. "Microsoft paused and then pulled the update," and then Tuesday released a new update "designed to replace the glitchy one. This one includes all the new features and improvements from the previous preview update, but also fixes the installation issues that clobbered that update."

Meanwhile, as Windows 11 version 24H2 approaches its end of life this October, Microsoft is now force-updating users to the latest version, reports BleepingComputer: "The machine learning-based intelligent rollout has expanded to all devices running Home and Pro editions of Windows 11, version 24H2 that are not managed by IT departments," Microsoft said in a Monday update to the Windows release health dashboard... "No action is required, and you can choose when to restart your device or postpone the update."
Neowin reports: The good news is that the update from version 24H2 to 25H2 is a minor enablement package, as the two operating systems share the same codebase. As such, the update won't take long, and you should not encounter any disruptions, compatibility issues, or previously unseen bugs... Microsoft recently promised to implement big changes in how Windows Update works, including the ability to postpone updates for as long as you want. However, Microsoft has yet to clarify if that includes staying on a release beyond its support period.

Thanks to long-time Slashdot reader Ol Olsoc for sharing the news.
Social Networks

Are Employers Using Your Data To Figure Out the Lowest Salary You'll Accept? (marketwatch.com) 96

MarketWatch looks at "surveillance wages," pay rates "based not on an employee's performance or seniority, but on formulas that use their personal data, often collected without employees' knowledge." According to Nina DiSalvo, policy director at labor advocacy group Towards Justice, some systems use signals associated with financial vulnerability — including data on whether a prospective employee has taken out a payday loan or has a high credit-card balance — to infer the lowest pay a candidate might accept. Companies can also scrape candidates' public personal social-media pages, she said...

A first-of-its-kind audit of 500 labor-management artificial-intelligence companies by Veena Dubal, a law professor at University of California, Irvine, and Wilneida Negrón, a tech strategist, found that employers in the healthcare, customer service, logistics and retail industries are customers of vendors whose tools are designed to enable this practice. Published by the Washington Center for Equitable Growth, a progressive economic think tank, the August 2025 report... does not claim that all employers using these systems engage in algorithmic wage surveillance. Instead, it warns that the growing use of algorithmic tools to analyze workers' personal data can enable pay practices that prioritize cost-cutting over transparency or fairness...

Surveillance wages don't stop at the hiring stage — they follow workers onto the job, too. The vendors that provide such services also offer tools that are built to set bonus or incentive compensation, according to the report. These tools track their productivity, customer interactions and real-time behavior — including, in some cases, audio and video surveillance on the job. Nearly 70% of companies with more than 500 employees were already using employee-monitoring systems in 2022, such as software that monitors computer activity, according to a survey from the International Data Corporation. "The data that they have about you may allow an algorithmic decision system to make assumptions about how much, how big of an incentive, they need to give to a particular worker to generate the behavioral response they seek," DiSalvo said.

The article notes that Colorado introduced the "Prohibit Surveillance Data to Set Prices and Wages Act," which would ban companies from algorithmically setting pay rates using payday-loan history, location data or Google search behavior.

Thanks to long-time Slashdot reader sinij for sharing the article.
AI

Anthropic Announces Claude Subscribers Must Now Pay Extra to Use OpenClaw (venturebeat.com) 46

Anthropic's making a big and sudden change — and connecting its Claude AI to third-party agentic tools "is about to get a lot more expensive," writes the Verge: Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they'll have to use a "pay-as-you-go option" that will be billed separately from their Claude subscription.
Anthropic's announcement added that these extra usage bundles are "now available at a discount." Users can also try Anthropic's API, notes VentureBeat, "which charges for every token of usage rather than allowing for open-ended usage up to certain limits, as the Pro and Max plans have allowed so far." The technical reality, according to Anthropic, is that its first-party tools like Claude Code, its AI vibe coding harness, and Claude Cowork, its business app interfacing and control tool, are built to maximize "prompt cache hit rates" — reusing previously processed text to save on compute. Third-party harnesses like OpenClaw often bypass these efficiencies... [Claude Code creator Boris Cherny explained on X that "I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages."] Growth marketer Aakash Gupta observed on X that the "all-you-can-eat buffet just closed," noting that a single OpenClaw agent running for one day could burn $1,000 to $5,000 in API costs. "Anthropic was eating that difference on every user who routed through a third-party harness," Gupta wrote. "That's the pace of a company watching its margin evaporate in real time."
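Prompt caching here means marking a long, stable prompt prefix so the provider can reuse its already-processed tokens across calls. A minimal sketch of how a Messages API request body opts into Anthropic's prompt caching (the `cache_control` field is the real API mechanism; the model name, prompt text, and helper function are illustrative assumptions, and no network call is made):

```python
def build_cached_request(system_text: str, user_text: str) -> dict:
    """Build a Messages API request body with a cacheable system prompt.

    Marking the long, stable system prompt with cache_control lets the
    provider reuse its processed tokens on later calls; this is the
    "prompt cache hit rate" that first-party harnesses are tuned to maximize.
    """
    return {
        "model": "claude-sonnet-4-5",  # illustrative model name
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_text,  # e.g. a large agent/tool preamble
                # Opt this prefix into caching so repeat calls hit the cache:
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

# Repeated calls that keep the system prefix byte-identical can reuse the cache:
req = build_cached_request("You are a coding agent..." * 100, "Refactor util.py")
```

A harness that rewrites or reorders that system block on every call invalidates the cached prefix, so each request is billed as if processed from scratch — the margin problem Anthropic describes.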

However, Peter Steinberger, the creator of OpenClaw who was recently hired by OpenAI, took a more skeptical view of the "capacity" argument. "Funny how timings match up," Steinberger posted on X. "First they copy some popular features into their closed harness, then they lock out open source." Indeed, Anthropic recently added some of the same capabilities that helped OpenClaw catch on — such as the ability to message agents through external services like Discord and Telegram — to Claude Code...

User @ashen_one, founder of Telaga Charity, voiced a concern likely shared by other small-scale builders: "If I switch both [OpenClaw instances] to an API key or the extra usage you're recommending here, it's going to be far too expensive to make it worth using. I'll probably have to switch over to a different model at this point."

"I know it sucks," Cherny replied. "Fundamentally engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible with the best mode..." OpenAI appears to be positioning itself as a more "harness-friendly" alternative, potentially using this moment as a customer acquisition channel for disgruntled Claude power users.

By restricting subscription limits to their own "closed harness," Anthropic is asserting control over the UI/UX layer. This allows them to collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community that built the "agentic" ecosystem in the first place. Anthropic's decision is a cold calculation of margins versus growth. As Cherny noted, "Capacity is a resource we manage thoughtfully." In the 2026 AI landscape, the era of subsidized, unlimited compute for third-party automation is over. For the average user on Claude.ai, the experience remains unchanged; for the power users running autonomous offices, the bell has tolled.

AMD

No, AMD Is Not Buying Intel (gadgetreview.com) 23

"The April 1st timing should have been your first clue," writes Gadget Review. TechSpot's false story was just an April Fool's prank — although Gadget Review thinks it's still funny how "something about this particular piece of satire felt uncomfortably plausible." Maybe it's because AMD stock sits around $196 while Intel hovers near $41, or perhaps it's the poetic justice of the underdog finally eating the giant. The semiconductor world has witnessed stranger reversals, but none quite this dramatic. Your gaming rig's CPU battle represents decades of corporate warfare, legal grudges, and technological leapfrogging that makes Game of Thrones look like a friendly board game.

Picture this: In 1975, AMD reverse-engineered Intel's 8080 processor, creating the Am9080 clone. The audacity was breathtaking — AMD spent 50 cents per chip to manufacture something they sold for $700. That's a 1,400-fold markup on borrowed technology, making today's GPU prices look reasonable. This relationship evolved from copying to partnership to bitter rivalry. The companies signed second-sourcing deals in the late 1970s, with AMD becoming Intel's official backup supplier. Then came the lawsuits. AMD sued Intel for antitrust violations in 2005, eventually settling for $1.25 billion in 2009. That settlement money helped fund the Ryzen revolution that's currently eating Intel's lunch. The historical irony runs deeper than your typical tech rivalry. AMD literally started as Intel's shadow, creating chips by studying Intel's designs under microscopes. Today, Intel engineers probably study AMD's Zen architecture the same way...

This April Fool's joke works because it captures something true about power shifts in technology.

The site TipRanks notes that both companies saw their stock price rise Wednesday, though that might not be related to the false article. "Positive analyst coverage from Wells Fargo could be acting as a catalyst for AMD stock today. Intel also announced plans to buy back its 49% equity interest in a joint venture with Apollo Global Management."
Science

'Cognitive Surrender' Leads AI Users To Abandon Logical Thinking, Research Finds (arstechnica.com) 137

An anonymous reader quotes a report from Ars Technica: When it comes to large language model-powered tools, there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service that needs careful human oversight and review to detect reasoning or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine. Recent research goes a long way to forming a new psychological framework for that second group, which regularly engages in "cognitive surrender" to AI's seemingly authoritative answers. That research also provides some experimental examination of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can affect that decision.

Overall, across 1,372 participants and over 9,500 individual trials, the researchers found subjects were willing to accept faulty AI reasoning a whopping 73.2 percent of the time, while only overruling it 19.7 percent of the time. The researchers say this "demonstrate[s] that people readily incorporate AI-generated outputs into their decision-making processes, often with minimal friction or skepticism." In general, "fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation," they write. These kinds of effects weren't uniform across all test subjects, though. Those who scored highly on separate measures of so-called fluid IQ were less likely to rely on the AI for help and were more likely to overrule a faulty AI when it was consulted. Those predisposed to see AI as authoritative in a survey, on the other hand, were much more likely to be led astray by faulty AI-provided answers.

Despite the results, though, the researchers point out that "cognitive surrender is not inherently irrational." While relying on an LLM that's wrong half the time (as in these experiments) has obvious downsides, a "statistically superior system" could plausibly give better-than-human results in domains such as "probabilistic settings, risk assessment, or extensive data," the researchers suggest. "As reliance increases, performance tracks AI quality," the researchers write, "rising when accurate and falling when faulty, illustrating the promises of superintelligence and exposing a structural vulnerability of cognitive surrender." In other words, letting an AI do your reasoning means your reasoning is only ever going to be as good as that AI system. As always, let the prompter beware.

Government

Tech Companies Are Trying To Neuter Colorado's Landmark Right-to-Repair Law (wired.com) 27

An anonymous reader quotes a report from Wired: Today at a hearing of the Colorado Senate Business, Labor, and Technology committee, lawmakers voted unanimously to move Colorado state bill SB26-090 -- titled Exempt Critical Infrastructure from Right to Repair -- out of committee and into the state senate and house for a vote. The bill modifies Colorado's Consumer Right to Repair Digital Electronic Equipment act, which was passed in 2024 and went into effect in January 2026. While the protections secured by that act are wide, the new SB26-090 bill aims to "exempt information technology equipment that is intended for use in critical infrastructure from Colorado's consumer right to repair laws."

The bill is supported by tech manufacturers like Cisco and IBM, according to lobbying disclosures. These are companies that have vested interests in manufacturing things like routers, server equipment, and computers and stand to profit if they can control who fixes their products and the tools, components, and software used to make those upgrades and repairs. They also cite cybersecurity concerns, saying that giving people access to the tools and systems they would need to repair a device could also enable bad actors to use those methods for nefarious means. (This is a common argument manufacturers make when opposing right-to-repair laws.)

[...] During the hearing, more than a dozen repair advocates spoke from organizations like Pirg, the Repair Association, and iFixit opposing the bill. YouTuber and repair advocate Louis Rossmann was there. The main problem, repair advocates say, is that the bill deliberately uses vague language to make the case for controlling who can fix their products. [...] The Colorado Labor and Technology committee advanced the bill, but it still needs to go through votes on the Colorado Senate and House floors before going into effect. Those votes may take place as early as next week. Regardless of how the bill goes in the state, it's likely that manufacturers will continue their push to alter or undo repair legislation in other states across the country.
"The 'information technology' and 'critical infrastructure' thing is as cynical as you can possibly be about it," says Nathan Proctor, the leader of Pirg's US right-to-repair campaign. "It sounds scary to lawmakers, but it just means the internet."

The current wording of the bill "leaves it up to the manufacturers to determine which items they will need to provide repair tools and parts to owners and independent repairers and which ones they don't," says Danny Katz, executive director of CoPIRG, the Colorado branch of the consumer advocate group Pirg. "This is a bad policy and would be a big step back for Coloradans' repair rights."

"There's a general principle in cybersecurity that obscurity is not security," iFixit CEO Kyle Wiens said in the hearing. "The money that's behind the scenes, that's what's driving the bill."
Botnet

College Student, Cat Meme Helped Crack Massive Botnet Case (wsj.com) 21

The Wall Street Journal shares the "wild behind-the-scenes story" of how the world's largest and most destructive botnet was uncovered and taken down, writes Slashdot reader sturgeon. "At times, the network known as Kimwolf included more than a million compromised home Android devices and digital photo frames -- enough DDoS firepower to disrupt internet traffic across the U.S. and beyond." From the report: Sitting in his dorm room at the Rochester Institute of Technology, Benjamin Brundage was closing in on a mystery that had even seasoned internet investigators baffled. A cat meme helped him crack the case. A growing network of hacked devices was launching the biggest cyberattacks ever seen on the internet. It had become the most powerful cyberweapon ever assembled, large enough to knock a state or even a small country offline. Investigators didn't know exactly who had built it -- or how. Brundage had been following the attacks, too -- and, in between classes, was conducting his own investigation. In September, the college senior started messaging online with an anonymous user who seemed to have insider knowledge.

As they chatted on Discord, a platform favored by videogamers, Brundage was eager to get more information, but he didn't want to come off as too serious and shut down the conversation. So every now and then he'd send a funny GIF to lighten the mood. Brundage was fluent in the memes, jokes and technical jargon popular with young gamers and hackers who are extremely online. "It was a bit of just asking over and over again and then like being a bit unserious," said Brundage. At one point, he asked for some technical details. He followed up with the cat meme: a six-second clip that showed a hand adjusting a necktie on a fluffy gray cat. Brundage didn't expect it to work, but he got the information. "It took me by surprise," he said.

Eventually the leaker hinted there was a new vulnerability on the internet. Brundage, who is 22, would learn it threatened tens of millions of consumers and as much as a quarter of the world's corporations. As he unraveled the mystery, he impressed veteran researchers with his findings -- including federal law enforcement, which took action against the network two weeks ago. Chad Seaman, a researcher at Akamai, joked at one point that the internet could go down if Brundage spent too much time on his exams.

The Courts

Penalties Stack Up As AI Spreads Through the Legal System 51

Tony Isaac shares a report from NPR: When it comes to using AI, it seems some lawyers just can't help themselves. Last year saw a rapid increase in court sanctions against attorneys for filing briefs containing errors generated by artificial intelligence tools. The most prominent case was that of the lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each for filing briefs containing fictitious, AI-generated citations. But as a cautionary tale, it doesn't seem to have had much effect. The numbers started taking off last year, and the rate is still increasing. One researcher tracking these sanctions counts a total of more than 1,200 to date, of which about 800 are from U.S. courts. "I am surprised that people are still doing this when it's been in the news," says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. "Whatever the generative AI tool gives you -- as in, 'Look at these cases' -- you, under the rules of professional conduct, you have to read those cases. You have to read the cases to make sure what you are citing is accurate."

"I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don't," she says. "That's what I think the future is."
