AI

Top NPM Maintainers Targeted with AI Deepfakes in Massive Supply-Chain Attack, Axios Briefly Compromised (pcmag.com)

"Hackers briefly turned a widely trusted developer tool into a vehicle for credential-stealing malware that could give attackers ongoing access to infected systems," the news site Axios.com reported Tuesday, citing security researchers at Google.

The compromised package — also named axios — simplifies HTTP requests and reportedly receives millions of downloads each day. The malicious versions were removed within roughly three hours of being published, but Google warned the incident could have "far-reaching impacts" given the package's widespread use, according to John Hultquist, chief analyst at Google Threat Intelligence Group. Wiz estimates axios is downloaded roughly 100 million times per week and is present in about 80% of cloud and code environments. So far, Wiz has observed the malicious versions in roughly 3% of the environments it has scanned.
Friday PCMag notes the maintainer's compromised account had two-factor authentication enabled, with the breach ultimately traced "to an elaborate AI deepfake from suspected North Korean hackers that was convincing enough to trick a developer into installing malware," according to a post-mortem published Thursday by lead developer Jason Saayman: [Saayman] fell for a scheme from a North Korean hacking group, dubbed UNC1069, which involves sending out phishing messages and then hosting virtual meetings that use AI deepfakes to clone the faces and voices of real executives. The virtual meetings then create the impression of an audio problem, which can only be "solved" if the victim installs some software or runs a troubleshooting command. In reality, it's an effort to execute malware. The North Koreans have used the tactic repeatedly, whether to phish cryptocurrency firms or to secure jobs at IT companies.

Saayman said he faced a similar playbook. "They reached out masquerading as the founder of a company, they had cloned the company's founders likeness as well as the company itself," he wrote. "They then invited me to a real Slack workspace. This workspace was branded... The Slack was thought out very well, they had channels where they were sharing LinkedIn posts. The LinkedIn posts I presume just went to the real company's account, but it was super convincing etc." The hackers then invited him to a virtual meeting on Microsoft Teams. "The meeting had what seemed to be a group of people that were involved. The meeting said something on my system was out of date. I installed the missing item as I presumed it was something to do with Teams, and this was the remote access Trojan," he added. "Everything was extremely well coordinated, looked legit and was done in a professional manner."

Friday developer security platform Socket wrote that several more maintainers in the Node.js ecosystem "have come out of the woodwork to report that they were targeted by the same social engineering campaign." The accounts now span some of the most widely depended-upon packages in the npm registry and Node.js core itself, and together they confirm that axios was not a one-off target. It was part of a coordinated, scalable attack pattern aimed at high-trust, high-impact open source maintainers. Attackers also targeted several Socket engineers, including CEO Feross Aboukhadijeh. Feross is the creator of WebTorrent, StandardJS, buffer, and dozens of widely used npm packages with billions of downloads... Commenting on the axios post-mortem thread, he noted that this type of targeting [against individual maintainers] is no longer unusual... "We're seeing them across the ecosystem and they're only accelerating."

Jordan Harband, John-David Dalton, and other Socket engineers also confirmed they were targeted. Harband, a TC39 member, maintains hundreds of ECMAScript polyfills and shims that are foundational to the JavaScript ecosystem. Dalton is the creator of Lodash, which sees more than 137 million weekly downloads on npm. Between them, the packages they maintain are downloaded billions of times each month. Wes Todd, an Express TC member and member of the Node Package Maintenance Working Group, also confirmed he was targeted. Matteo Collina, co-founder and CTO of Platformatic, Node.js Technical Steering Committee Chair, and lead maintainer of Fastify, Pino, and Undici, disclosed on April 2 that he was also targeted. His packages also see billions of downloads per year... Scott Motte, creator of dotenv, the package used by virtually every Node.js project that handles environment variables, with more than 114 million weekly downloads, also confirmed he was targeted using the same Openfort persona.

Socket reports that another maintainer was targeted with an invitation to appear on a podcast. (During the recording, a suspicious technical issue appeared which required a software fix to resolve....)

Even judged purely on technical implementation, "This is among the most operationally sophisticated supply chain attacks ever documented against a top-10 npm package," the CI/CD security company StepSecurity wrote Tuesday: The dropper contacts a live command-and-control server, delivers separate second-stage payloads for macOS, Windows, and Linux, then erases itself and replaces its own package.json with a clean decoy... Three payloads were pre-built for three operating systems. Both release branches were poisoned within 39 minutes of each other. Every artifact was designed to self-destruct. Within two seconds of npm install, the malware was already calling home to the attacker's server before npm had even finished resolving dependencies... Both versions were published using the compromised npm credentials of a lead axios maintainer, bypassing the project's normal GitHub Actions CI/CD pipeline.
"As preventive steps, Saayman has now outlined several changes," reports The Hacker News, "including resetting all devices and credentials, setting up immutable releases, adopting OIDC flow for publishing, and updating GitHub Actions to adopt best practices."

The Wall Street Journal called it "the latest in a string of incidents exposing risks in the systems that underpin how modern software is built."
The Courts

Perplexity's 'Incognito Mode' Is a 'Sham,' Lawsuit Says

An anonymous reader quotes a report from Ars Technica: Perplexity's AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users' knowledge or consent. "This happened to every user regardless of whether or not they signed up for a Perplexity account," the lawsuit alleged, while stressing that "enormous volumes of sensitive information from both subscribed and non-subscribed users" are shared.

Using developer tools, the lawsuit found that opening prompts are always shared, as are any follow-up questions the search engine asks that a user clicks on. Privacy concerns are seemingly worse for non-subscribed users, the complaint alleged. Their initial prompts are shared with "a URL through which the entire conversation may be accessed by third parties like Meta and Google." Disturbingly, the lawsuit alleged, chats are also shared with personally identifiable information (PII), even when users who want to stay anonymous opt to use Perplexity's "Incognito Mode." That mode, the lawsuit charged, is a "sham."
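The developer-tools methodology the complaint describes amounts to watching which hosts receive requests as a chat loads. A minimal sketch of that kind of classification (the host list here is illustrative, not taken from the filing):

```javascript
// Hedged sketch: classify an outgoing request URL against a list of
// known ad-tech hosts. The list below is illustrative only; a real
// audit would use a maintained tracker blocklist.
const TRACKER_HOSTS = ['facebook.com', 'google-analytics.com', 'doubleclick.net'];

function isTrackerRequest(url) {
  const host = new URL(url).hostname;
  // Match the host itself or any subdomain of it.
  return TRACKER_HOSTS.some(t => host === t || host.endsWith('.' + t));
}
```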

"'Incognito' mode does nothing to protect users from having their conversations shared with Meta and Google," the complaint said. "Even paid users who turned on the 'Incognito' feature still had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed Meta and Google to personally identify them."
"Perplexity's failure to inform its users that their personal information has been disclosed to Meta and Google or to take any steps to halt the continued disclosure of users' information is malicious, oppressive, and in reckless disregard" of users' rights, the lawsuit alleged.

"Nothing on Perplexity's website warns users that their conversations with its AI Machine will be shared with Meta and Google," Doe alleged. "Much less does Perplexity warn subscribed users that its 'Incognito Mode' does not function to protect users' private conversations from disclosure to companies like Meta and Google."
The Courts

Judge Allows BitTorrent Seeding Claims Against Meta, Despite Lawyers 'Lame Excuses' (torrentfreak.com)

An anonymous reader quotes a report from TorrentFreak: In an effort to gather material for its LLM training, Meta used BitTorrent to download pirated books from Anna's Archive and other shadow libraries. According to several authors, Meta facilitated the infringement of others by "seeding" these torrents. This week, the court granted the authors permission to add these claims to their complaint, despite openly scolding their counsel for "lame excuses" and "Meta bashing." [...] The judge acknowledged that the contributory infringement claim could and should have been added back in November 2024, when the authors amended their complaint to include the distribution claim. After all, both claims arise from the same factual allegations about Meta's torrenting activity.

"The lawyers for the named plaintiffs have no excuse for neglecting to add a contributory infringement claim based on these allegations back in November 2024," Judge Chhabria wrote. The lawyers of the book authors claimed that the delay was the result of newly produced evidence that had "crystallized" their understanding of Meta's uploading activity. However, that did not impress the judge. He called it a "lame excuse" and "a bunch of doubletalk," noting that if the missing discovery truly prevented the contributory claim from being added in November 2024, the same logic would have prevented the distribution claim from being added at that time as well. "Rather than blaming Meta for producing discovery late, the plaintiffs' lawyers should have been candid with the Court, explaining that they missed an issue in a case of first impression...," the order reads.

Judge Chhabria went further, noting that the authors' law firm, Boies Schiller, showed "an ongoing pattern" of distracting from its own mistakes by attacking Meta. He pointed specifically to the dispute over when Meta disclosed its fair use defense to the distribution claim, which we covered here recently, characterizing it as a false distraction. "The lawyers for the plaintiffs seem so intent on bashing Meta that they are unable to exercise proper judgment about how to represent the interests of their clients and the proposed class members," the order reads. Despite the criticism, Chhabria granted the motion. [...] For now, the case moves forward with a fourth amended complaint, three new loan-out companies added as named plaintiffs, and a growing list of BitTorrent-related claims for Judge Chhabria to resolve.

AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com)

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database — including bot API keys and potentially private DMs — was also compromised."

Businesses

OpenAI Fires an Employee For Prediction Market Insider Trading (wired.com)

An anonymous reader quotes a report from Wired: OpenAI has fired an employee following an investigation into their activity on prediction market platforms including Polymarket, WIRED has learned. OpenAI CEO of Applications, Fidji Simo, disclosed the termination in an internal message to employees earlier this year. The employee, she said, "used confidential OpenAI information in connection with external prediction markets (e.g. Polymarket)." "Our policies prohibit employees from using confidential OpenAI information for personal gain, including in prediction markets," says spokesperson Kayla Wood. OpenAI has not revealed the name of the employee or the specifics of their trades.

Evidence suggests that this was not an isolated event. Polymarket runs on the Polygon blockchain network, so its trading ledger is pseudonymous but traceable. According to an analysis by the financial data platform Unusual Whales, there have been clusters of activity that the service flagged as suspicious around OpenAI-themed events since March 2023. Unusual Whales flagged 77 positions in 60 wallet addresses as suspected insider trades, looking at the age of the account, trading history, and significance of investment, among other factors. Suspicious trades hinged on the release dates of products like Sora, GPT-5, and the ChatGPT Browser, as well as CEO Sam Altman's employment status. In November 2023, two days after Altman was dramatically ousted from the company, a new wallet placed a significant bet that he would return, netting over $16,000 in profits. The account never placed another bet.

The behavior fits into patterns typical of insider trades. "The tell is the clustering. In the 40 hours before OpenAI launched its browser, 13 brand-new wallets with zero trading history appeared on the site for the first time to collectively bet $309,486 on the right outcome," says Unusual Whales CEO Matt Saincome. "When you see that many fresh wallets making the same bet at the same time, it raises a real question about whether the secret is getting out." [...] Though this is the first confirmed case of a large technology company firing an employee over trades in prediction markets, it's almost certainly not the last. Opportunities for tech sector employees to make trades on markets abound. "The data tells me this is happening all over the place," Saincome says.
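The "fresh wallets, same bet, short window" signal Saincome describes can be stated as a simple heuristic. A sketch with illustrative thresholds (the real analysis also weighed account age, trading history, and bet size):

```javascript
// Hedged sketch of the clustering heuristic described above: flag an
// outcome when many wallets with no prior trading history bet on it
// inside a short window. Assumes one trade per wallet; thresholds are
// illustrative, not Unusual Whales' actual parameters.
function flagClusters(trades, { windowMs = 40 * 3600 * 1000, minWallets = 10 } = {}) {
  const byOutcome = {};
  for (const t of trades) {
    if (t.priorTrades > 0) continue; // only brand-new wallets
    (byOutcome[t.outcome] = byOutcome[t.outcome] || []).push(t.ts);
  }
  const flagged = [];
  for (const [outcome, times] of Object.entries(byOutcome)) {
    times.sort((a, b) => a - b);
    if (times.length >= minWallets && times[times.length - 1] - times[0] <= windowMs) {
      flagged.push({ outcome, wallets: times.length });
    }
  }
  return flagged;
}
```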

Businesses

Prediction Market Platform Kalshi Discloses First Insider Trading Enforcement Action (npr.org)

Kalshi, the prediction market platform regulated by the Commodity Futures Trading Commission, has for the first time publicly disclosed the results of an insider trading investigation, naming an editor for YouTube's biggest creator as the offender.

The company identified Artem Kaptur, an editor for MrBeast, who it says traded around $4,000 on markets tied to the streamer and achieved "near-perfect trading success" on low-odds bets -- a pattern investigators flagged as suspicious. Kalshi froze Kaptur's account before he could withdraw any profits, fined him $20,000, suspended him for two years, and reported the case to the CFTC.
The Internet

Fury Over Discord's Age Checks Explodes After Shady Persona Test In UK (arstechnica.com)

Backlash intensified against Discord's age verification rollout after it briefly disclosed a UK age-verification test involving vendor Persona, contradicting earlier claims about minimal ID storage and transparency. Ars Technica explains: One of the major complaints was that Discord planned to collect more government IDs as part of its global age verification process. It shocked many that Discord would be so bold so soon after a third-party breach of a former age check partner's services recently exposed 70,000 Discord users' government IDs.

Attempting to reassure users, Discord claimed that most users wouldn't have to show ID, instead relying on video selfies using AI to estimate ages, which raised separate privacy concerns. In the future, perhaps behavioral signals would override the need for age checks for most users, Discord suggested, seemingly downplaying the risk that sensitive data would be improperly stored. Discord didn't hide that it planned to continue requesting IDs for any user appealing an incorrect age assessment, and users weren't happy, since that is exactly how the prior breach happened. Responding to critics, Discord claimed that the majority of ID data was promptly deleted. Specifically, Savannah Badalich, Discord's global head of product policy, told The Verge that IDs shared during appeals "are deleted quickly -- in most cases, immediately after age confirmation."

It's unsurprising then that backlash exploded after Discord posted, and then weirdly deleted, a disclaimer on an FAQ about Discord's age assurance policies that contradicted Discord's hyped short timeline for storing IDs. An archived version of the page shows the note shared this warning: "Important: If you're located in the UK, you may be part of an experiment where your information will be processed by an age-assurance vendor, Persona. The information you submit will be temporarily stored for up to 7 days, then deleted. For ID document verification, all details are blurred except your photo and date of birth, so only what's truly needed for age verification is used."

Critics felt that Discord was obscuring not just how long IDs may be stored, but also the entities collecting information. Discord did not provide details on what the experiment was testing or how many users were affected, and Persona was not listed as a partner on its platform. Asked for comment, Discord told Ars that only a small number of users were included in the experiment, which ran for less than one month. That test has since concluded, Discord confirmed, and Persona is no longer an active vendor partnering with Discord. Moving forward, Discord promised to "keep our users informed as vendors are added or updated." While Discord seeks to distance itself from Persona, Rick Song, Persona's CEO [...] told Ars that all the data of verified individuals involved in Discord's test has been deleted.
Ars also notes that hackers "quickly exposed a 'workaround' to avoid Persona's age checks on Discord" and "found a Persona frontend exposed to the open internet on a U.S. government authorized server."

The Rage, an independent publication that covers financial surveillance, reported: "In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting -- and a parallel implementation that appears designed to serve federal agencies." While Persona does not have any government contracts, the exposed service "appears to be powered by an OpenAI chatbot," The Rage noted.

Hackers warned "that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb," seemingly exploiting the "opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves."
Advertising

Meta Begins $65 Million Election Push To Advance AI Agenda (nytimes.com)

An anonymous reader quotes a report from the New York Times: Meta is preparing to spend $65 million this year to boost state politicians who are friendly to the artificial intelligence industry, beginning this week in Texas and Illinois, according to company representatives. The sum is the biggest election investment by Meta, which owns Facebook, Instagram and WhatsApp. The company was previously cautious about campaign engagements, making small donations out of a corporate political action committee and contributing to presidential inaugurations. It also let executives like Sheryl Sandberg, who was chief operating officer, support candidates in their personal capacities.

Now Meta is betting bigger on politics, driven by concerns over the regulatory threat to the artificial intelligence industry as it aims to beat back legislation in states that it fears could inhibit A.I. development, company representatives said. To do that, Meta is quietly starting two new super PACs, according to federal filings surfaced by The New York Times. One group, Forge the Future Project, is backing Republicans. Another, Making Our Tomorrow, is backing Democrats. The new PACs join two others already started by Meta, one of which is focused on California while the other is an umbrella organization that finances the company's spending in other states. In total, the four super PACs have an initial budget of $65 million, according to federal and state filings. Meta's spending is set to start this week in Illinois and Texas, where the company generally favors backing Democratic and Republican incumbents or engaging in open races rather than deposing existing officials, company representatives said in interviews.

[...] Last year, Meta's public policy vice president, Brian Rice, said the company would start spending in politics because of "inconsistent regulations that threaten homegrown innovation and investments in A.I." The company started its first two super PACs, American Technology Excellence Project and Mobilizing Economic Transformation Across California. Meta put $45 million into American Technology Excellence Project in September. That money is expected, in turn, to flow to Forge the Future Project, Making Our Tomorrow and potentially to other entities. [...] In California, which has some of the country's most onerous campaign-finance disclosures, Meta in August put $20 million into Mobilizing Economic Transformation Across California, which shortens to META California. State laws require the sponsoring company to be disclosed in the name of the entity. In December, Meta put $5 million into another California committee called California Leads, which is focused on promoting moderate business policy and not A.I., according to state records.

Data Storage

Western Digital is Sold Out of Hard Drives for 2026 (wccftech.com)

Western Digital's entire hard drive manufacturing capacity for calendar year 2026 is now fully spoken for, CEO Irving Tan disclosed during the company's second-quarter earnings call, a stark sign of how aggressively hyperscalers are locking down storage supply to feed their AI infrastructure buildouts.

The company has firm purchase orders from its top seven customers and has signed long-term agreements stretching into 2027 and 2028 that cover both exabyte volumes and pricing. Cloud revenue now accounts for 89% of Western Digital's total, according to the company's VP of Investor Relations, while consumer revenue has shrunk to just 5%.
The Courts

Sam Bankman-Fried Requests New Trial in FTX Crypto Fraud Case (courthousenews.com)

While serving his 25-year prison sentence, "convicted former cryptocurrency mogul Sam Bankman-Fried on Tuesday requested a new federal trial," reports Courthouse News, "based on what he says is newly discovered evidence concerning his company's solvency and its ability to repay all FTX customers for what prosecutors portrayed as the looting of $8 billion of his customers' money..." Bankman-Fried says evidence disclosed since his trial disproves prosecutors' case about Bankman-Fried's hedge fund running a multibillion-dollar deficit of FTX customer funds, and instead shows that FTX always had sufficient assets to repay the cryptocurrency platform's customer deposits in full. "What it faced was a short-term liquidity crisis caused by a run on the exchange, not insolvency," he wrote...

Bankman-Fried also accuses the Department of Justice of coercing a guilty plea and cooperation deal from Nishad Singh — a close friend of Bankman-Fried's younger brother — who testified at trial as a cooperating witness... Bankman-Fried says in the motion that prior to being pressured into a guilty plea, Singh's initial proffer to investigators "contradicted key parts of the government's version of events. But following threats from the government, Mr. Singh changed his proffers to fit the government's narrative and pleaded guilty to charges carrying up to 75 years in prison, with a promise from the prosecution that it would recommend little or no jail time if it concluded that his assistance in prosecuting Mr. Bankman-Fried was 'substantial,'" he wrote in the petition...

Additionally, Bankman-Fried requested that U.S. District Judge Lewis Kaplan, who presided over his 2023 trial, recuse himself from ruling on this motion, "because of the manifest prejudice he has demonstrated towards Mr. Bankman-Fried."

"Bankman-Fried's mother, Stanford Law School professor Barbara Fried, filed his self-represented bid for a new trial on his behalf in Manhattan federal court..."
The Internet

AI.com Sells for $70 Million, the Highest Price Ever Disclosed for a Domain Name (ft.com)

Kris Marszalek, the co-founder and CEO of cryptocurrency exchange Crypto.com, has paid $70 million for the domain AI.com -- the highest price ever publicly disclosed for a website name, according to the deal's broker Larry Fischer of GetYourDomain.com.

The entire sum was paid in cryptocurrency to an undisclosed seller. Marszalek plans to debut the site during a Super Bowl ad this weekend, offering a personal "AI agent" that lets consumers send messages, use apps and trade stocks. The previous domain sale record was nearly $50 million for Carinsurance.com, per GoDaddy.
The Courts

Supreme Court To Decide How 1988 Videotape Privacy Law Applies To Online Video (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: The Supreme Court is taking up a case on whether Paramount violated the 1988 Video Privacy Protection Act (VPPA) by disclosing a user's viewing history to Facebook. The case, Michael Salazar v. Paramount Global, hinges on the law's definition of the word "consumer." Salazar filed a class action against Paramount in 2022, alleging that it "violated the VPPA by disclosing his personally identifiable information to Facebook without consent," Salazar's petition to the Supreme Court said. Salazar had signed up for an online newsletter through 247Sports.com, a site owned by Paramount, and had to provide his email address in the process. Salazar then used 247Sports.com to view videos while logged in to his Facebook account.

"As a result, Paramount disclosed his personally identifiable information -- including his Facebook ID and which videos he watched -- to Facebook," the petition (PDF) said. "The disclosures occurred automatically because of the Facebook Pixel Paramount installed on its website. Facebook and Paramount then used this information to create and display targeted advertising, which increased their revenues." The 1988 law (PDF) defines consumer as "any renter, purchaser, or subscriber of goods or services from a video tape service provider." The phrase "video tape service provider" is defined to include providers of "prerecorded video cassette tapes or similar audio visual materials," and thus arguably applies to more than just sellers of tapes.

The legal question for the Supreme Court "is whether the phrase 'goods or services from a video tape service provider,' as used in the VPPA's definition of 'consumer,' refers to all of a video tape service provider's goods or services or only to its audiovisual goods or services," Salazar's petition said. The Supreme Court granted his petition (PDF) to hear the case in a list of orders released yesterday. [...] SCOTUSblog says that "the case will likely be scheduled for oral argument in the court's 2026-27 term," which begins in October 2026.

The Courts

Google Settles $68 Million Lawsuit Claiming It Recorded Private Conversations (bbc.com)

An anonymous reader quotes a report from the BBC: Google has agreed to pay $68 million to settle a lawsuit claiming it secretly listened to people's private conversations through their phones. [...] the lawsuit claimed Google Assistant would sometimes turn on by mistake -- the phone thinking someone had said its activation phrase when they had not -- and recorded conversations intended to be private. They alleged the recordings were then sent to advertisers for the purpose of creating targeted advertising. The proposed settlement was filed on Friday in a California federal court, and requires approval by US District Judge Beth Labson Freeman.

The claim has been brought as a class action lawsuit rather than an individual case -- meaning if it is approved, the money will be paid out across many different claimants. Those eligible for a payout will have owned Google devices dating back to May 2016. But lawyers for the plaintiffs may ask for up to one-third of the settlement -- amounting to about $22 million in legal fees. The tech firm also denied any wrongdoing, as well as claims that it "recorded, disclosed to third parties, or failed to delete, conversations recorded as the result of a Google Assistant activation" without consent.

Security

Infotainment, EV Charger Exploits Earn $1M at Pwn2Own Automotive 2026 (securityweek.com)

Trend Micro's Zero Day Initiative sponsored its third annual Pwn2Own Automotive competition in Tokyo this week, receiving 73 entries, the most ever for a Pwn2Own event.

"Under Pwn2Own rules, all disclosed vulnerabilities are reported to affected vendors through ZDI," reports Help Net Security, "with public disclosure delayed to allow time for patches." Infotainment platforms from Tesla, Sony, and Alpine were among the systems compromised during demonstrations. Researchers achieved code execution using techniques that included buffer overflows, information leaks, and logic flaws. One Tesla infotainment unit was compromised through a USB-based attack, resulting in root-level access. Electric vehicle charging infrastructure also received significant attention. Teams successfully demonstrated exploits against chargers from Autel, Phoenix Contact, ChargePoint, Grizzl-E, Alpitronic, and EMPORIA. Several attacks involved chaining multiple vulnerabilities to manipulate charging behavior or execute code on the device. These demonstrations highlighted how charging stations operate as network-connected systems with direct interaction with vehicles.
There are video recaps on the ZDI YouTube channel — apparently the Fuzzware.io researchers "were able to take over a Phoenix Contact EV charger over Bluetooth."

Three researchers also exploited Alpitronic's HYC50 fast charger with a classic TOCTOU bug, according to the event's site, "and installed a playable version of Doom to boot." They earned $20,000, part of the $1,047,000 awarded during the three-day event.

More coverage from SecurityWeek: The winner of the event, the Fuzzware.io team, earned a total of $215,500 for its exploits. The team received the highest individual reward: $60,000 for an Alpitronic HYC50 EV charger exploit delivered through the charging gun. ZDI described it as "the first public exploit of a supercharger".
AI

Valve Has 'Significantly' Rewritten Steam's Rules For How Developers Must Disclose AI Use (videogameschronicle.com) 18

Valve has substantially overhauled its guidelines for how game developers must disclose the use of generative AI on Steam, making explicit that tools like code assistants and other development aids do not fall under the disclosure requirement. The updated rules clarify that Valve's focus is not on "efficiency gains through the use of AI-powered dev tools."

Developers must still disclose two specific categories: AI used to generate in-game content, store page assets, or marketing materials, and AI that creates content like images, audio, or text during gameplay itself. Steam has required AI disclosures since 2024, and an analysis from July 2025 found nearly 8,000 titles released in the first half of that year disclosed generative AI use, compared with roughly 1,000 for all of 2024. The disclosures are self-reported, so actual usage is likely higher.
Social Networks

Digg Launches Its New Reddit Rival To the Public (techcrunch.com) 44

Digg is officially back under the ownership of its original founder, Kevin Rose, along with Reddit co-founder Alexis Ohanian. "Similar to Reddit, the new Digg offers a website and mobile app where you can browse feeds featuring posts from across a selection of its communities and join other communities that align with your interests," reports TechCrunch. "There, you can post, comment, and upvote (or 'digg') the site's content." From the report: [T]he rise of AI has presented an opportunity to rebuild Digg, Rose and Ohanian believe, leading them to acquire Digg last March through a leveraged buyout by True Ventures, Ohanian's firm Seven Seven Six, Rose and Ohanian themselves, and the venture firm S32. The company has not disclosed its funding. They're betting that AI can help to address some of the messiness and toxicity of today's social media landscape. At the same time, social platforms will need a new set of tools to ensure they're not taken over by AI bots posing as people.

"We obviously don't want to force everyone down some kind of crazy KYC process," said Rose in an interview with TechCrunch, referring to the 'know your customer' verification process used by financial institutions to confirm someone's identity. Instead of simply offering verification checkmarks to designate trust, Digg will try out new technologies, like using zero-knowledge proofs (cryptographic methods that verify information without revealing the underlying data) to verify the people using its platform. It could also do other things, like require that people who join a product-focused community verify they actually own or use the product being discussed there.

As an example, a community for Oura ring owners could verify that everyone who posts has proven they own one of the smart rings. Plus, Rose suggests Digg could use signals acquired from mobile devices to help verify members -- for instance, the app could identify when Digg users attended a meetup in the same location. "I don't think there's going to be any one silver bullet here," said Rose. "It's just going to be us saying ... here's a platter of things that you can add together to create trust."

IT

Send To Kindle from Microsoft Word is Discontinued (goodereader.com) 11

Microsoft is discontinuing its Send to Kindle integration in Word, ending a feature that allowed Microsoft 365 subscribers to send documents directly to their Kindle e-readers and preserve complex formatting through fixed layouts.

The company updated its documentation to announce that beginning February 9th, 2026, the Send to Kindle feature will no longer work across Web, Win32, and Mac platforms. Microsoft has not disclosed why it's killing the integration but recommends users switch to Amazon's official Send to Kindle app. The feature launched in 2023 and was particularly valued by Kindle Scribe owners who could annotate the transferred documents.
Privacy

Illinois Health Department Exposed Over 700,000 Residents' Personal Data For Years (techcrunch.com) 14

The Illinois Department of Human Services disclosed that a misconfigured internal mapping website exposed sensitive personal data for more than 700,000 Illinois residents for over four years, from April 2021 to September 2025. Officials say they can't confirm whether the publicly accessible data was ever viewed. TechCrunch reports: Officials said the exposed data included personal information on 672,616 individuals who are Medicaid and Medicare Savings Program recipients. The data included their addresses, case numbers, and demographic data -- but not individuals' names. The exposed data also included names, addresses, case statuses, and other information relating to 32,401 individuals in receipt of services from the department's Division of Rehabilitation Services.
Businesses

OpenAI Is Paying Employees More Than Any Major Tech Startup in History 25

OpenAI is paying employees more than any major tech startup in history, with average stock-based compensation hitting roughly $1.5 million per worker in 2025. "That is more than seven times higher than the stock-based pay Google disclosed in 2003, before it filed for an initial public offering in 2004," reports the Wall Street Journal. "The $1.5 million is about 34 times the average employee compensation of 18 other large tech companies in the year before they went public." From the report: To keep its lead in the AI race, OpenAI is doling out massive stock compensation packages to top researchers and engineers, making them some of the richest employees in Silicon Valley. The equity awards are inflating the company's heavy operating losses and diluting existing shareholders at a rapid clip. As an AI arms race intensified this summer, frontier labs such as OpenAI faced pressure to increase employee pay after Meta Platforms Chief Executive Mark Zuckerberg began offering pay packages worth hundreds of millions of dollars -- and in some rare cases $1 billion -- to top executives and researchers at rival companies.

Zuckerberg's recruiting blitz swept up 20-plus OpenAI personnel, including ChatGPT co-creator Shengjia Zhao. In August, OpenAI gave some of its research and engineering staff a one-time bonus, with some employees receiving millions of dollars, The Wall Street Journal previously reported. The financial data, shared with investors over the summer, shows that OpenAI's stock-based compensation was expected to increase by about $3 billion annually through 2030. The company recently told staff it would discontinue a policy that required employees to work at OpenAI for at least six months before their equity vests. That development could lead to further compensation increases.

OpenAI's compensation as a percentage of revenue was set to reach 46% in 2025, the highest of any of the 18 companies except for Rivian, which didn't generate revenue the year before its IPO. Palantir's stock-based compensation equaled 33% of its revenue the year before its IPO in 2020, Google's was 15% and Facebook's was 6%, the analysis shows. On average, each company's stock-based compensation made up about 6% of revenue among tech companies the Journal analyzed in the year before their IPOs, according to the Equilar data.
The Internet

Finland Seizes Ship Suspected of Severing Undersea Cable To Estonia (reuters.com) 45

Finnish authorities on Wednesday seized a vessel suspected of severing an undersea telecommunications cable that connects Helsinki to Tallinn by dragging its anchor across the Gulf of Finland, the latest in a string of infrastructure incidents that have put Baltic Sea nations on edge since Russia's 2022 invasion of Ukraine.

Police are investigating the case as aggravated criminal damage and have not disclosed the ship's name, nationality or details about its crew. The cable belongs to Finnish telecoms group Elisa. Estonia's justice ministry reported that a second telecoms cable connecting the two countries -- owned by Sweden's Arelion -- also went down on Wednesday. This follows Finland's December 2024 boarding of the Russian-linked oil tanker Eagle S, which investigators said damaged a power cable and multiple telecoms links using the same anchor-dragging method. A Finnish court in October dismissed criminal charges against the Eagle S crew after prosecutors failed to prove intent.
