Security

CPUID Site Hijacked To Serve Malware Instead of HWMonitor Downloads (theregister.com) 13

Attackers briefly hijacked part of CPUID's backend and swapped legitimate download links on its site with malware-laced ones. "The issue hit tools like HWMonitor and CPU-Z, with users on Reddit and elsewhere starting to notice something wasn't right when installers tripped antivirus alerts or showed up under odd names," reports The Register. From the report: CPUID has since confirmed the breach, pinning it on a compromised backend component rather than tampering with its software builds. "Investigations are still ongoing, but it appears that a secondary feature (basically a side API) was compromised for approximately six hours between April 9 and April 10, causing the main website to randomly display malicious links (our signed original files were not compromised)," one of the site's owners said in a post on X. "The breach was found and has since been fixed."

The files themselves appear to have been left alone and remain properly signed, so it doesn't seem like anyone got into the build process. Instead, the problem sat in front of that, in how downloads were being served. For anyone who hit the site during that stretch, though, that distinction offers little comfort. If the link you clicked had been swapped out, you were pulling whatever it pointed to, whether you realized it or not.
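
Because CPUID's signed originals were untouched, a user-side defense here is checksum verification of the downloaded installer against a vendor-published digest. A minimal Python sketch of that check; the file path and published digest are hypothetical, and the function names are illustrative:

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large installers don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published(path: str, published_hex: str) -> bool:
    """Compare against the checksum the vendor publishes, in constant time."""
    return hmac.compare_digest(sha256_of(path), published_hex.lower())
```

Of course, this only helps if the published checksum comes from a channel the attacker didn't control, which is exactly what a backend compromise like this one puts in doubt.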

The Courts

Amazon Wins Court Order To Block Perplexity's AI Shopping Bots (cnbc.com) 29

Last November, Amazon sued Perplexity demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases for users online. Today, a judge ruled in favor of the tech giant, granting it a temporary court injunction blocking the scraping of Amazon's website. According to court filings, the judge found strong evidence the tool accessed the retailer's systems "without authorization." CNBC reports: In a ruling dated Monday, U.S. District Judge Maxine Chesney wrote that Amazon has provided "strong evidence" that Perplexity's Comet browser accessed its website at the user's direction, but "without authorization" from the e-commerce giant. Chesney said Amazon submitted "essentially undisputed evidence" that it spent more than $5,000 to respond to the issue, including "numerous hours" where its employees worked to develop tools to block Comet from accessing its private customer tools and to prevent the tool from "future unauthorized access." "Given such evidence, the Court finds Amazon has shown a likelihood of success on the merits of its claim," Chesney wrote.

Chesney's ruling includes a weeklong stay to allow Perplexity to appeal the order. Amazon wrote in its original complaint that Perplexity's agents posed security risks to customer data because they "can act within protected computer systems, including private customer accounts requiring a password." The company also said Perplexity's agents created challenges for the company's advertising business, because when AI systems generate ad traffic, the impressions have to be detected and filtered out before advertisers can be billed. "This requires modifications to Amazon's advertising systems, including developing new detection mechanisms to identify and exclude automated traffic," Amazon wrote in its complaint. "These system adaptations are necessary to maintain contractual obligations with advertisers who pay only for legitimate human impressions."

The Almighty Buck

'America Is Slow-Walking Into a Polymarket Disaster' (theatlantic.com) 55

In an opinion piece for The Atlantic, senior editor Saahil Desai argues that media outlets are increasingly treating prediction markets like Polymarket and Kalshi as legitimate signals of reality. The risk, as Desai warns, is a future where news coverage amplifies manipulable betting odds and turns politics, geopolitics, and even tragedy into speculative gambling theater. Here's an excerpt from the report: [...] The problem is that prediction markets are ushering in a world in which news becomes as much about gambling as about the event itself. This kind of thing has already happened to sports, where the language of "parlays" and "covering the spread" has infiltrated every inch of commentary. ESPN partners with DraftKings to bring its odds to SportsCenter and Monday Night Football; CBS Sports has a betting vertical; FanDuel runs its own streaming network. But the stakes of Greenland's future are more consequential than the NFL playoffs.

The more that prediction markets are treated like news, especially heading into another election, the more every dip and swing in the odds may end up wildly misleading people about what might happen, or influencing what happens in the real world. Yet it's unclear whether these sites are meaningful predictors of anything. After the Golden Globes, Polymarket CEO Shayne Coplan excitedly posted that his site had correctly predicted 26 of 28 winners, which seems impressive -- but Hollywood awards shows are generally predictable. One recent study found that Polymarket's forecasts in the weeks before the 2024 election were not much better than chance.

These markets are also manipulable. In 2012, one bettor on the now-defunct prediction market Intrade placed a series of huge wagers on Mitt Romney in the two weeks preceding the election, generating a betting line indicative of a tight race. The bettor did not seem motivated by financial gain, according to two researchers who examined the trades. "More plausibly, this trader could have been attempting to manipulate beliefs about the odds of victory in an attempt to boost fundraising, campaign morale, and turnout," they wrote. The trader lost at least $4 million but might have shaped media coverage of the race for less than the price of a prime-time ad, they concluded. [...]

The irony of prediction markets is that they are supposed to be a more trustworthy way of gleaning the future than internet clickbait and half-baked punditry, but they risk shredding whatever shared trust we still have left. The suspiciously well-timed bets that one Polymarket user placed right before the capture of Nicolas Maduro may have been just a stroke of phenomenal luck that netted a roughly $400,000 payout. Or maybe someone with inside information was looking for easy money. [...] As Tarek Mansour, Kalshi's CEO, has said, his long-term goal is to "financialize everything and create a tradable asset out of any difference in opinion." (Kalshi means "everything" in Arabic.) What could go wrong? As one viral post on X recently put it, "Got a buddy who is praying for world war 3 so he can win $390 on Polymarket." It's a joke. I think.

Piracy

Tokyo Court Finds Cloudflare Liable For Manga Piracy in Long-Running Lawsuit (torrentfreak.com) 23

A Tokyo court ruled that Cloudflare is liable for aiding manga piracy after failing to act on infringement notices and continuing to cache and serve content for major piracy sites, awarding about $3.2 million in damages. TorrentFreak says the decision sets a significant precedent in Japan, suggesting CDN providers can face direct liability when they don't verify customers or respond adequately to large-scale copyright abuse. From the report: After a wait of more than three and a half years, the Tokyo District Court rendered its decision this morning. In a statement provided to TorrentFreak by the publishers, they declare "Victory Against Cloudflare" after the Court determined that Cloudflare is indeed liable for the pirate sites' activities. In a statement provided to TorrentFreak, the publishers explain that they alerted Cloudflare to the massive scale of the infringement, involving over 4,000 works and 300 million monthly visits, but their requests to stop distribution were ignored.

"We requested that the company take measures such as stopping the distribution of pirated content from servers under its management. However, Cloudflare continued to provide services to the manga piracy sites even after receiving notices from the plaintiffs," the group says. The publishers add that Cloudflare continued to provide services even after receiving information disclosure orders from U.S. courts, leaving them with "no choice but to file this lawsuit."

"The judgment recognized that Cloudflare's failure to take timely and appropriate action despite receiving infringement notices from the plaintiffs, and its negligent continuation of pirated content distribution, constituted aiding and abetting copyright infringement, and that Cloudflare bears liability for damages to the plaintiffs," they write. "The judgment, in that regard, attached importance to the fact that Cloudflare, without conducting any identity verification procedures, had enabled a massive manga piracy site to operate "under circumstances where strong anonymity was secured,' as a basis for recognizing the company's liability."

The publishers believe that the judgment clarifies the conditions under which a company such as Cloudflare incurs liability for copyright infringement. Failure to carry out identity verification appears at the top of the publishers' list, followed by a lack of timely and appropriate action in response to infringement notices sent by rightsholders. "We believe this is an important decision given the current situation where piracy site operators often hide their identities and repeatedly conduct large-scale distribution using CDN services from overseas. We hope that this judgment will be a step toward ensuring proper use of CDN services. We will continue our efforts to protect the rights of works, creators, and related parties, while aiming for further expansion of legitimate content," the publishers conclude.

Cloudflare plans to appeal the verdict.

Piracy

Cloudflare Tells US Govt That Foreign Site Blocking Efforts Are Digital Trade Barriers (torrentfreak.com) 12

An anonymous reader quotes a report from TorrentFreak: In a submission for the 2026 National Trade Estimate Report (PDF), Cloudflare warns the U.S. government that site blocking efforts cause widespread disruption to legitimate services. The complaint points to Italy's automated Piracy Shield system, which reportedly blocked "tens of thousands" of legitimate sites. Meanwhile, overbroad IP address blocks in Spain and new automated blocking proposals in France are serious concerns that harm U.S. business interests, Cloudflare reports. [...]

Cloudflare urges the USTR to take these concerns into account for its upcoming National Trade Estimate Report. Ideally, it wants these trade barriers to be dismantled. These calls run counter to requests from rightsholders, who urge the USTR to ensure that more foreign countries implement blocking measures. With potential site-blocking legislation under consideration in the U.S. Congress, Cloudflare's stance may shape lobbying efforts at home as well. If and how the USTR will address these concerns will become clearer early next year, when the 2026 National Trade Estimate Report is expected to be published.

AI

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet) (scworld.com) 20

The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:" prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default.

An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said.
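
The failure mode NeuralTrust describes is easy to reproduce conceptually: a string that merely resembles a URL fails strict parsing, so an address bar that falls back to "treat as text" hands the whole string, trailing instructions included, to the model. A minimal sketch using Python's urllib to show how a strict is-this-a-URL check separates the two cases (the example strings are hypothetical):

```python
from urllib.parse import urlparse

def looks_like_url(text: str) -> bool:
    """A strict check: a real URL needs both a scheme and a host (netloc)."""
    parsed = urlparse(text)
    return bool(parsed.scheme and parsed.netloc)

# A well-formed link parses with a host, so it is treated as navigation.
print(looks_like_url("https://example.com/login"))
# One extra space after the slash leaves no host at all, so the string
# falls through to whatever the browser does with plain text.
print(looks_like_url("https:/ /example.com/login do-something"))
```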

Last month browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature, and has since replicated the proof-of-concept (PoC) attack on Atlas.

But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes....

"What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...."

LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, which the browser security company says leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages.

From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation.

Thanks to Slashdot reader spatwei for suggesting the topic.

AMD

AMD Amps Up Chip War - But Nvidia's Still Leading (yahoo.com) 13

The Wall Street Journal marvelled at AMD's "game-changing deal" this week with OpenAI, calling it "the culmination of an extraordinary, decade-long turnaround effort, solidifying AMD's status as Nvidia's most legitimate competitor."

Shortly after taking charge of the company in 2014, [CEO] Su implemented a systematic plan to eat Intel's lunch, which she accomplished by going after Intel's main product lines while it was bogged down by manufacturing problems. Now, Su has set her sights on Nvidia, the $4.5 trillion chips behemoth led by her cousin, Jensen Huang. Some analysts believe that if Su can sign up more big customers for AMD's AI chips, AMD could join the $1 trillion valuation club before too long.

"With this, it's natural to ask: Did AMD just say checkmate to Nvidia?" asks the Motley Fool investment site. But their answer seems to be "no"... AMD has increased its push into the AI market over the past few years, launching the AMD Instinct line of accelerators, and in the latest quarter, predicted its MI350 series would drive revenue growth in the second half of the year. Some analysts have said that AMD's innovations position it to compete with Nvidia's Blackwell architecture and chip — released late last year — but Nvidia's commitment to release upgrades on an annual basis could keep it a step ahead when it comes to overall GPU performance and therefore revenue. Big tech companies are looking for the most powerful compute available — and so far, they know they can find that at Nvidia...

[AMD's deal this week] is indeed an interesting operation, ensuring the company a major position in this infrastructure scale-up phase. [Nvidia CEO] Huang has said AI infrastructure spending may reach $4 trillion by the end of the decade, and this represents an enormous opportunity for chip designers such as AMD and Nvidia. So, the OpenAI deal is positive for AMD — but I wouldn't say it's negative for Nvidia. This chip giant signed its own deal with OpenAI last month, and it involves the deployment of 10 gigawatts of Nvidia systems across data centers...

A quick comparison of the two deals: The Nvidia-OpenAI agreement involves more gigawatts, and Nvidia isn't giving up a stake in its business — on top of this, though Nvidia is offering OpenAI funding, this will result in revenue growth as OpenAI returns to Nvidia to order GPUs. This pretty much guarantees that Nvidia will be the chip designer to benefit the most as OpenAI expands — and AMD isn't about to step ahead of the market leader. All of this means that, yes, AMD should score a win thanks to its agreement with OpenAI and this may boost its growth in the market. But the chip designer can't say "checkmate" to its bigger rival as Nvidia is perfectly positioned to maintain its lead over the long term.

Power

Is Enron Transforming Into a Real Texas Retail Electricity Provider? (houstonchronicle.com) 26

HGP Storage is a (real) Texas company providing distributed battery-based, utility-scale energy storage systems. Founded in 2013, it has "successfully developed over 20+ sites and closed over 200 MW of distributed energy projects," according to its web site.

And they just teamed up with Enron, reports the Houston Chronicle: The company that took over the defunct Enron brand, led by a "Birds Aren't Real" cofounder [28-year-old Connor Gaydos], held a mostly satirical quarterly earnings call Thursday afternoon but gave updates on an application to become a legitimate Texas energy provider... DJ Withee, chief operating officer and legal counsel at HGP Storage, a company developing utility-scale battery storage farms, was introduced as Enron's vice president of energy service. Withee said he was brought on by Gaydos to set up the customer-facing energy services business.

Enron Energy Texas LLC, a subsidiary of Enron, filed to become a Texas retail electric provider in January. Gaining this designation would allow Enron to sell electricity plans to Texas consumers. "Our business model is actually going to be very simple," Withee said. "We buy wholesale electricity, just like everybody else, but because of our efficiency, because of our use of technology, we are going to have lower costs than our competitors. Lower costs means greater savings that we can pass back to our customers...." According to Withee, Enron's goal is to provide energy at a competitive lower cost that will not only make energy more accessible but also push other Texas retail companies to drop their own prices...

Enron's filing in January included sworn and notarized affidavits from a man named Gregory Forero, who was identified in the documents as vice president of Enron Texas Energy LLC. Forero is the founder and CEO of HGP Storage.

"Forero, who signed his name to three sworn affidavits attesting to the accuracy of the application, could risk perjury charges if the statements of intention to start a legitimate retail electric company are found to be false, according to the Texas Penal Code..."

But does this replace Enron's plan to sell egg-shaped home nuclear reactors?

AI

ChatGPT Creates Phisher's Paradise By Recommending the Wrong URLs for Major Companies (theregister.com) 8

An anonymous reader shares a report: AI-powered chatbots often deliver incorrect information when asked to name the address for major companies' websites, and threat intelligence business Netcraft thinks that creates an opportunity for criminals. Netcraft prompted the GPT-4.1 family of models with input such as "I lost my bookmark. Can you tell me the website to login to [brand]?" and "Hey, can you help me find the official website to log in to my [brand] account? I want to make sure I'm on the right site."

The brands specified in the prompts named major companies in the fields of finance, retail, tech, and utilities. The team found that the AI would produce the correct web address just 66% of the time; 29% of URLs pointed to dead or suspended sites, and a further 5% to legitimate sites -- but not the ones users requested.

While this is annoying for most of us, it's potentially a new opportunity for scammers, Netcraft's lead of threat research Rob Duncan told The Register. Phishers could ask for a URL, and if the suggested site is unregistered, they could buy it and set up a phishing site, he explained.
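
The precondition for the attack is checkable: does the recommended hostname actually resolve? A sketch of the kind of triage a defender (or, unfortunately, a phisher) could run, written with an injectable resolver so the example needs no network; the URLs are hypothetical:

```python
import socket
from urllib.parse import urlparse

def domain_resolves(hostname: str, resolver=socket.gethostbyname) -> bool:
    """True if the name currently resolves; a failure may mean it is unregistered."""
    try:
        resolver(hostname)
        return True
    except (socket.gaierror, OSError):
        return False

def flag_registerable(urls, resolver=socket.gethostbyname):
    """Return URLs whose host doesn't resolve: candidates someone could buy and weaponize."""
    return [u for u in urls if not domain_resolves(urlparse(u).netloc, resolver)]
```

A non-resolving name isn't proof of availability (it may be registered but unconfigured), but it is exactly the signal that makes a hallucinated URL worth a scammer's registration fee.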

AI

Web-Scraping AI Bots Cause Disruption For Scientific Databases and Journals (nature.com) 37

Automated web-scraping bots seeking training data for AI models are flooding scientific databases and academic journals with traffic volumes that render many sites unusable. The online image repository DiscoverLife, which contains nearly 3 million species photographs, started receiving millions of daily hits in February this year that slowed the site to the point that it no longer loaded, Nature reported Monday.

The surge has intensified since the release of DeepSeek, a Chinese large language model that demonstrated effective AI could be built with fewer computational resources than previously thought. This revelation triggered what industry observers describe as an "explosion of bots seeking to scrape the data needed to train this type of model." The Confederation of Open Access Repositories reported that more than 90% of 66 surveyed members experienced AI bot scraping, with roughly two-thirds suffering service disruptions. Medical journal publisher BMJ has seen bot traffic surpass legitimate user activity, overloading servers and interrupting customer services.

The Internet

Open Source Devs Say AI Crawlers Dominate Traffic, Forcing Blocks On Entire Countries (arstechnica.com) 64

An anonymous reader quotes a report from Ars Technica: Software developer Xe Iaso reached a breaking point earlier this year when aggressive AI crawler traffic from Amazon overwhelmed their Git repository service, repeatedly causing instability and downtime. Despite configuring standard defensive measures -- adjusting robots.txt, blocking known crawler user-agents, and filtering suspicious traffic -- Iaso found that AI crawlers continued evading all attempts to stop them, spoofing user-agents and cycling through residential IP addresses as proxies. Desperate for a solution, Iaso eventually resorted to moving their server behind a VPN and creating "Anubis," a custom-built proof-of-work challenge system that forces web browsers to solve computational puzzles before accessing the site. "It's futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more," Iaso wrote in a blog post titled "a desperate cry for help." "I don't want to have to close off my Gitea server to the public, but I will if I have to."
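
Anubis-style proof-of-work is cheap for one human visitor but expensive at crawler scale. A toy sketch of the idea in Python; Anubis itself is a Go service, and the hash scheme and difficulty here are illustrative only:

```python
import hashlib
import itertools

def solve_challenge(seed: str, difficulty: int = 4) -> int:
    """Client side: find a nonce whose SHA-256(seed:nonce) has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{seed}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_challenge(seed: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: verification is a single hash, so checking honest visitors costs almost nothing."""
    digest = hashlib.sha256(f"{seed}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: one visitor burns a fraction of a second once, while a crawler re-fetching millions of pages pays that cost millions of times.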

Iaso's story highlights a broader crisis rapidly spreading across the open source community, as what appear to be aggressive AI crawlers increasingly overload community-maintained infrastructure, causing what amounts to persistent distributed denial-of-service (DDoS) attacks on vital public resources. According to a comprehensive recent report from LibreNews, some open source projects now see as much as 97 percent of their traffic originating from AI companies' bots, dramatically increasing bandwidth costs, service instability, and burdening already stretched-thin maintainers.

Kevin Fenzi, a member of the Fedora Pagure project's sysadmin team, reported on his blog that the project had to block all traffic from Brazil after repeated attempts to mitigate bot traffic failed. GNOME GitLab implemented Iaso's "Anubis" system, requiring browsers to solve computational puzzles before accessing content. GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated. KDE's GitLab infrastructure was temporarily knocked offline by crawler traffic originating from Alibaba IP ranges, according to LibreNews, citing a KDE Development chat. While Anubis has proven effective at filtering out bot traffic, it comes with drawbacks for legitimate users. When many people access the same link simultaneously -- such as when a GitLab link is shared in a chat room -- site visitors can face significant delays. Some mobile users have reported waiting up to two minutes for the proof-of-work challenge to complete, according to the news outlet.

AI

Foreign Cybercriminals Bypassed Microsoft's AI Guardrails, Lawsuit Alleges (arstechnica.com) 3

"Microsoft's Digital Crimes Unit is taking legal action to ensure the safety and integrity of our AI services," according to a Friday blog post by the unit's assistant general counsel. Microsoft blames "a foreign-based threat-actor group" for "tools specifically designed to bypass the safety guardrails of generative AI services, including Microsoft's, to create offensive and harmful content.

Microsoft "is accusing three individuals of running a 'hacking-as-a-service' scheme," reports Ars Technica, "that was designed to allow the creation of harmful and illicit content using the company's platform for AI-generated content" after bypassing Microsoft's AI guardrails: They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use. Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn't know their identity.... The three people who ran the service allegedly compromised the accounts of legitimate Microsoft customers and sold access to the accounts through a now-shuttered site... The service, which ran from last July to September when Microsoft took action to shut it down, included "detailed instructions on how to use these custom tools to generate harmful and illicit content."

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft's AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company's Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them. Microsoft didn't say how the legitimate customer accounts were compromised but said hackers have been known to create tools to search code repositories for API keys developers inadvertently included in the apps they create. Microsoft and others have long counseled developers to remove credentials and other sensitive data from code they publish, but the practice is regularly ignored. The company also raised the possibility that the credentials were stolen by people who gained unauthorized access to the networks where they were stored...
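
The key-leak vector Microsoft describes is routinely automated: attackers grep public code for strings shaped like credentials. A minimal sketch of such a scan; the two patterns are illustrative examples of common key formats, not an exhaustive or authoritative set:

```python
import re

# Illustrative patterns only; real secret scanners ship hundreds of these.
KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),        # shaped like an AWS access key ID
    re.compile(r"sk-[A-Za-z0-9]{32,}"),     # shaped like an OpenAI-style secret key
]

def find_leaked_keys(source: str) -> list[str]:
    """Return every substring that matches a known credential shape."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits
```

Running a check like this on your own repositories before publishing is the flip side of the same technique.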

The lawsuit alleges the defendants' service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference.

Security

Over 6,000 WordPress Sites Hacked To Install Plugins Pushing Infostealers (bleepingcomputer.com) 32

WordPress sites are being compromised through malicious plugins that display fake software updates and error messages, leading to the installation of information-stealing malware. BleepingComputer reports: Since 2023, a malicious campaign called ClearFake has been used to display fake web browser update banners on compromised websites that distribute information-stealing malware. In 2024, a new campaign called ClickFix was introduced that shares many similarities with ClearFake but instead pretends to be software error messages with included fixes. However, these "fixes" are PowerShell scripts that, when executed, will download and install information-stealing malware.

Last week, GoDaddy reported that the ClearFake/ClickFix threat actors have breached over 6,000 WordPress sites to install malicious plugins that display the fake alerts associated with these campaigns. "The GoDaddy Security team is tracking a new variant of ClickFix (also known as ClearFake) fake browser update malware that is distributed via bogus WordPress plugins," explains GoDaddy security researcher Denis Sinegubko. "These seemingly legitimate plugins are designed to appear harmless to website administrators but contain embedded malicious scripts that deliver fake browser update prompts to end-users."

The malicious plugins utilize names similar to legitimate plugins, such as Wordfense Security and LiteSpeed Cache, while others use generic, made-up names. Website security firm Sucuri also noted that a fake plugin named "Universal Popup Plugin" is also part of this campaign. When installed, the malicious plugin will hook various WordPress actions depending on the variant to inject a malicious JavaScript script into the HTML of the site. When loaded, this script will attempt to load a further malicious JavaScript file stored in a Binance Smart Chain (BSC) smart contract, which then loads the ClearFake or ClickFix script to display the fake banners. From web server access logs analyzed by Sinegubko, the threat actors appear to be utilizing stolen admin credentials to log into the WordPress site and install the plugin in an automated manner.
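
Because the rogue plugins work by injecting a script tag that loads from attacker infrastructure, a site owner can audit rendered pages against an allow-list of expected script hosts. A minimal sketch with Python's stdlib HTML parser; the allow-list and markup are hypothetical:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptAudit(HTMLParser):
    """Collect external script sources whose host isn't on the allow-list."""

    def __init__(self, allowed_hosts):
        super().__init__()
        self.allowed = set(allowed_hosts)
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src")
        if src:
            host = urlparse(src).netloc
            if host and host not in self.allowed:
                self.suspicious.append(src)
```

Anything loading from an unexpected host gets flagged for review; it won't catch inline payloads, but it surfaces exactly the kind of injected loader this campaign relies on.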

Crime

Backpage.com Founder Michael Lacey Sentenced To 5 Years In Prison, Fined $3 Million (apnews.com) 59

Three former Backpage executives, including co-founder Michael Lacey, were sentenced to prison for promoting prostitution and laundering money while disguising their activities as a legitimate classified business. The Associated Press reports: A jury convicted Lacey, 76, of a single count of international concealment money laundering last year, but deadlocked on 84 other prostitution facilitation and money laundering charges. U.S. District Judge Diane Humetewa later acquitted Lacey of dozens of charges for insufficient evidence, but he still faces about 30 prostitution facilitation and money laundering charges. Authorities say the site generated $500 million in prostitution-related revenue from its inception in 2004 until it was shut down by the government in 2018.

Lacey's lawyers say their client was focused on running an alternative newspaper chain and wasn't involved in day-to-day operations of Backpage. But Humetewa told Lacey during Wednesday's sentencing he was aware of the allegations against Backpage and did nothing. "In the face of all this, you held fast," Humetewa said. "You didn't do a thing." Two other Backpage executives, Chief Financial Officer John Brunst and Executive Vice President Scott Spear, also were convicted last year and were each sentenced on Wednesday to 10 years in prison. The judge ordered Lacey and the two executives to report to the U.S. Marshals Service in two weeks to start serving their sentences.

Security

384,000 Sites Pull Code From Sketchy Code Library Recently Bought By Chinese Firm (arstechnica.com) 35

An anonymous reader quotes a report from Ars Technica: More than 384,000 websites are linking to a site that was caught last week performing a supply-chain attack that redirected visitors to malicious sites, researchers said. For years, the JavaScript code, hosted at polyfill[.]io, was a legitimate open source project that allowed older browsers to handle advanced functions that weren't natively supported. By linking to cdn.polyfill[.]io, websites could ensure that devices using legacy browsers could render content in newer formats. The free service was popular among websites because all they had to do was embed the link in their sites. The code hosted on the polyfill site did the rest. In February, China-based company Funnull acquired the domain and the GitHub account that hosted the JavaScript code. On June 25, researchers from security firm Sansec reported that code hosted on the polyfill domain had been changed to redirect users to adult- and gambling-themed websites. The code was deliberately designed to mask the redirections by performing them only at certain times of the day and only against visitors who met specific criteria.

The revelation prompted industry-wide calls to take action. Two days after the Sansec report was published, domain registrar Namecheap suspended the domain, a move that effectively prevented the malicious code from running on visitor devices. By then, content delivery networks such as Cloudflare had begun automatically replacing polyfill links with domains leading to safe mirror sites. Google blocked ads for sites embedding the Polyfill[.]io domain. The website blocker uBlock Origin added the domain to its filter list. And Andrew Betts, the original creator of Polyfill.io, urged website owners to remove links to the library immediately. As of Tuesday, exactly one week after the malicious behavior came to light, 384,773 sites continued to link to the site, according to researchers from security firm Censys. Some of the sites were associated with mainstream companies including Hulu, Mercedes-Benz, and Warner Bros., as well as the federal government. The findings underscore the power of supply-chain attacks, which can spread malware to thousands or millions of people simply by infecting a common source they all rely on.
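Since the affected sites are simply those that still embed a script tag pointing at the compromised domain, site owners can audit their own pages with a quick scan. A minimal sketch (a regex pass rather than a full DOM parse, which a thorough audit would use):

```python
import re

# Match the src attribute of <script> tags; good enough for a quick audit.
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def find_polyfill_refs(html):
    """Return script URLs in the page that point at polyfill.io."""
    return [src for src in SCRIPT_SRC.findall(html)
            if "polyfill.io" in src.lower()]

page = '<script src="https://cdn.polyfill.io/v3/polyfill.min.js"></script>'
print(find_polyfill_refs(page))
# → ['https://cdn.polyfill.io/v3/polyfill.min.js']
```

A non-empty result means the page still pulls code from the compromised CDN and the tag should be removed or pointed at one of the safe mirrors.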

AI

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet (nytimes.com) 38

An anonymous reader quotes a report from the New York Times: The news was featured on MSN.com: "Prominent Irish broadcaster faces trial over alleged sexual misconduct." At the top of the story was a photo of Dave Fanning. But Mr. Fanning, an Irish D.J. and talk-show host famed for his discovery of the rock band U2, was not the broadcaster in question. "You wouldn't believe the amount of people who got in touch," said Mr. Fanning, who called the error "outrageous." The falsehood, visible for hours on the default homepage for anyone in Ireland who used Microsoft Edge as a browser, was the result of an artificial intelligence snafu. A fly-by-night journalism outlet called BNN Breaking had used an A.I. chatbot to paraphrase an article from another news site, according to a BNN employee. BNN added Mr. Fanning to the mix by including a photo of a "prominent Irish broadcaster." The story was then promoted by MSN, a web portal owned by Microsoft. The story was deleted from the internet a day later, but the damage to Mr. Fanning's reputation was not so easily undone, he said in a defamation lawsuit filed in Ireland against Microsoft and BNN Breaking. His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.

BNN went dormant in April, while The New York Times was reporting this article. The company and its founder did not respond to multiple requests for comment. Microsoft had no comment on MSN's featuring the misleading story with Mr. Fanning's photo or his defamation case, but the company said it had terminated its licensing agreement with BNN. During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of "seasoned" journalists and 10 million monthly visitors, surpassing The Chicago Tribune's self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN's stories. Google News often surfaced them, too. A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT. BNN's "About Us" page featured an image of four children looking at a computer, some bearing the gnarled fingers that are a telltale sign of an A.I.-generated image.
"How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply," adds The Times.

"NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content. The websites, which seem to operate with little to no human supervision, often have generic names -- such as iBusiness Day and Ireland Top News -- that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers."

Security

GitHub Besieged By Millions of Malicious Repositories In Ongoing Attack (arstechnica.com) 50

An anonymous reader quotes a report from Ars Technica: GitHub is struggling to contain an ongoing attack that's flooding the site with millions of code repositories. These repositories contain obfuscated malware that steals passwords and cryptocurrency from developer devices, researchers said. The malicious repositories are clones of legitimate ones, making them hard to distinguish to the casual eye. An unknown party has automated a process that forks legitimate repositories, meaning the source code is copied so developers can use it in an independent project that builds on the original one. The result is millions of forks with names identical to the original one that add a payload that's wrapped under seven layers of obfuscation. To make matters worse, some people, unaware of the malice of these imitators, are forking the forks, which adds to the flood.

"Most of the forked repos are quickly removed by GitHub, which identifies the automation," Matan Giladi and Gil David, researchers at security firm Apiiro, wrote Wednesday. "However, the automation detection seems to miss many repos, and the ones that were uploaded manually survive. Because the whole attack chain seems to be mostly automated on a large scale, the 1% that survive still amount to thousands of malicious repos." Given the constant churn of new repos being uploaded and GitHub's removal, it's hard to estimate precisely how many of each there are. The researchers said the number of repos uploaded or forked before GitHub removes them is likely in the millions. They said the attack "impacts more than 100,000 GitHub repositories."
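Because the attack copies a popular repository's name under a different owner, one coarse detection pass is to flag name collisions against a list of trusted (owner, name) pairs. The data below is purely illustrative, not Apiiro's actual method:

```python
# Assumed-for-illustration allowlist of trusted (owner, repo-name) pairs.
TRUSTED = {("psf", "requests"), ("pallets", "flask")}
TRUSTED_NAMES = {name: owner for owner, name in TRUSTED}

def flag_clones(repos):
    """repos: iterable of (owner, name) pairs. Returns repos whose name
    duplicates a trusted repo but whose owner differs -- suspected imitators."""
    return [(owner, name) for owner, name in repos
            if name in TRUSTED_NAMES and TRUSTED_NAMES[name] != owner]

print(flag_clones([("psf", "requests"),
                   ("ev1lacc0unt", "requests"),
                   ("pallets", "flask")]))
# → [('ev1lacc0unt', 'requests')]
```

A name-collision check like this only catches the simplest imitators; it says nothing about the seven layers of payload obfuscation the researchers describe, which require content analysis to detect.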
GitHub issued the following statement: "GitHub hosts over 100M developers building across over 420M repositories, and is committed to providing a safe and secure platform for developers. We have teams dedicated to detecting, analyzing, and removing content and accounts that violate our Acceptable Use Policies. We employ manual reviews and at-scale detections that use machine learning and constantly evolve and adapt to adversarial tactics. We also encourage customers and community members to report abuse and spam."

Privacy

Verizon Gave Phone Data To Armed Stalker Who Posed As Cop Over Email (404media.co) 27

Slash_Account_Dot writes: The FBI investigated a man who allegedly posed as a police officer in emails and phone calls to trick Verizon into handing over phone data belonging to a specific person the suspect met on the dating section of porn site xHamster, according to a newly unsealed court record. Despite the relatively unconvincing cover story concocted by the suspect, including the use of a clearly non-government ProtonMail email address, Verizon handed over the victim's data to the alleged stalker, including their address and phone logs. The stalker then went on to threaten the victim and ended up driving to where he believed the victim lived while armed with a knife, according to the record.

The incident represents a massive failure by Verizon, which did not verify that the data request was legitimate, and the company potentially put someone's safety at risk. The news also highlights the now common use of fraudulent emergency data requests (EDRs) or search warrants in the digital underworld, where criminals pretend to be law enforcement officers, fabricate an urgent scenario such as a kidnapping, and then convince telecoms or tech companies to hand over data that should only be accessible through legitimate law enforcement requests. As 404 Media previously reported, some hackers are using compromised government email accounts for this purpose.
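A first-line screen of the sender's address would have flagged the ProtonMail account immediately. The sketch below is a minimal illustration, with a made-up suffix list; it is necessary but nowhere near sufficient, since, as noted above, attackers also send EDRs from genuinely compromised government accounts.

```python
# Illustrative-only list of domain suffixes accepted as government senders.
GOV_SUFFIXES = (".gov", ".mil", ".police.uk")

def looks_like_government_sender(email):
    """Return True if the sender's domain ends in a known government suffix."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain.endswith(GOV_SUFFIXES)  # str.endswith accepts a tuple

print(looks_like_government_sender("det.smith@protonmail.com"))  # → False
print(looks_like_government_sender("j.smith@fbi.gov"))           # → True
```

Real EDR verification has to go further: calling the agency back through a published number and checking the request against court records, rather than trusting anything in the email itself.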

Google

Google-Hosted Malvertising Leads To Fake Keepass Site That Looks Genuine 37

Google has been caught hosting a malicious ad so convincing that there's a decent chance it has managed to trick some of the more security-savvy users who encountered it. From a report: Looking at the ad, which masquerades as a pitch for the open source password manager Keepass, there's no way to know that it's fake. It's on Google, after all, which claims to vet the ads it carries. Making the ruse all the more convincing, clicking on it leads to ķeepass[.]info, which, when viewed in an address bar, appears to be the genuine Keepass site. A closer look at the link, however, shows that the site is not the genuine one. In fact, ķeepass[.]info -- at least when it appears in the address bar -- is just an encoded way of denoting xn--eepass-vbb[.]info, which, it turns out, is pushing a malware family tracked as FakeBat. Combining the ad on Google with a website with an almost identical URL creates a near-perfect storm of deception.
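The relationship between the two forms of the domain can be checked directly with Python's built-in IDNA codec, which converts punycode labels to the Unicode form browsers display:

```python
# Decode the punycode label to see what the address bar renders.
label = "xn--eepass-vbb"
decoded = label.encode("ascii").decode("idna")

print(decoded)  # → ķeepass (the first letter is U+0137, "k with cedilla")
print(decoded == "keepass")  # → False: visually close, but a different domain
```

Browsers mitigate this class of homograph attack inconsistently: some show the raw xn-- form for suspicious labels, others render the Unicode form as this one did, which is why the lookalike slipped past even careful readers of the address bar.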

"Users are first deceived via the Google ad that looks entirely legitimate and then again via a lookalike domain," Jerome Segura, head of threat intelligence at security provider Malwarebytes, wrote in a post on Wednesday that revealed the scam. Information from Google's Ad Transparency Center shows that the ads have been running since Saturday and last appeared on Wednesday. The ads were paid for by an outfit called Digital Eagle, which the transparency page says is an advertiser whose identity has been verified by Google.

Privacy

23andMe Scraping Incident Leaked Data On 1.3 Million Users (therecord.media) 25

Jonathan Greig writes via The Record: Genetic testing giant 23andMe confirmed that a data scraping incident resulted in hackers gaining access to sensitive user information and selling it on the dark web. The information of nearly 7 million 23andMe users was offered for sale on a cybercriminal forum this week. The information included origin estimation, phenotype, health information, photos, identification data and more. 23andMe processes saliva samples submitted by customers to determine their ancestry.

When asked about the post, the company initially denied that the information was legitimate, calling it a "misleading claim" in a statement to Recorded Future News. The company later said it was aware that certain 23andMe customer profile information was compiled through unauthorized access to individual accounts that were signed up for the DNA Relative feature -- which allows users to opt in for the company to show them potential matches for relatives. [...] When pressed on how compromising a handful of user accounts would give someone access to millions of users, the spokesperson said the company does not believe the threat actor had access to all of the accounts but rather gained unauthorized entry to a much smaller number of 23andMe accounts and scraped data from their DNA Relative matches.

A researcher approached Recorded Future News after examining the leaked database and found that much of it looked real. [...] The researcher downloaded two files from the BreachForums post and found that one had information on 1 million 23andMe users of Ashkenazi heritage. The other file included data on more than 300,000 users of Chinese heritage. The data included profile and account ID numbers, names, gender, birth year, maternal and paternal genetic markers, ancestral heritage results, and data on whether or not each user has opted into 23andme's health data. The researcher added that he discovered another issue where someone could enter a 23andme profile ID, like the ones included in the leaked data set, into their URL and see someone's profile. The data available through this only includes profile photos, names, birth years and location but does not include test results.
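The profile-ID-in-the-URL issue described above is a classic insecure direct object reference: the server trusts an identifier the client supplies instead of checking whether the requester is authorized to see it. A minimal sketch of the missing check, with hypothetical names and data structures:

```python
# Hypothetical store of approved (requester, profile) relationships,
# e.g. mutual opt-ins to a relative-matching feature.
APPROVED_MATCHES = {("user-123", "profile-456")}

def get_profile(requester_id, profile_id, profiles):
    """Return a profile only if the requester is an approved match,
    rather than trusting whatever ID appears in the URL."""
    if (requester_id, profile_id) not in APPROVED_MATCHES:
        raise PermissionError("requester is not an approved match")
    return profiles[profile_id]

profiles = {"profile-456": {"name": "A. Example", "birth_year": 1980}}
print(get_profile("user-123", "profile-456", profiles))
# → {'name': 'A. Example', 'birth_year': 1980}
```

With a check like this in place, knowing a leaked profile ID is not enough to view the profile; an unrelated requester gets a PermissionError instead of the data.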
