Google

Google Experiences Deja Vu As Second Monopoly Trial Begins In US 4

An anonymous reader quotes a report from The Guardian: After deflecting the US Department of Justice's attack on its illegal monopoly in online search, Google is facing another attempt to dismantle its internet empire in a trial focused on abusive tactics in digital advertising. The trial that opened Monday in an Alexandria, Virginia, federal court revolves around the harmful conduct that resulted in US District Judge Leonie Brinkema declaring parts of Google's digital advertising technology to be an illegal monopoly in April. The judge found that Google has been engaging in behavior that stifles competition to the detriment of online publishers that depend on the system for revenue.

Google and the justice department will spend the next two weeks in court presenting evidence in a "remedy" trial that will culminate in Brinkema issuing a ruling on how to restore fair market conditions. If the justice department gets its way, Brinkema will order Google to sell parts of its ad technology -- a proposal that the company's lawyers warned would "invite disruption and damage" to consumers and the internet's ecosystem. The justice department contends a breakup would be the most effective and quickest way to undercut a monopoly that has been stifling competition and innovation for years. [...]

The case, filed in 2023 under Joe Biden's administration, threatens the complex network that Google has spent the past 17 years building to power its dominant digital advertising business. Digital advertising sales account for most of the $305 billion in revenue that Google's services division generates for its corporate parent Alphabet. The company's sprawling network of display ads provides the lifeblood that keeps thousands of websites alive. Google believes it has already made enough changes to its "ad manager" system, including offering more options and more flexible pricing, to resolve the problems Brinkema flagged in her monopoly ruling.
Businesses

Is Amazon Prime Too Hard To Cancel? A Jury Will Decide. (msn.com) 43

Subscribing to an online service is often as easy as a click of a button. Is it illegal if it takes a maze of clicks to cancel? That issue is at the heart of a civil trial beginning this week that will scrutinize the tactics Amazon uses to entice consumers to sign up for its signature Prime service -- and to steer them away from leaving. WSJ: The Federal Trade Commission alleges the online giant has duped nearly 40 million customers, in violation of consumer-protection laws. It is seeking civil penalties, refunds to consumers and a court order prohibiting Amazon from using subscription practices that could confuse or deceive customers. The case, which will unfold in a Seattle courtroom, is a top test of the agency's enforcement campaign against allegedly deceptive digital subscription practices.

Amazon's Prime membership, the largest paid subscription program in the world with at least 200 million users, has helped the company become an integral part of consumers' shopping habits. The FTC, which sued Amazon in 2023, alleges the company tricked people into signing up for the service without their knowledge or consent, including by obscuring details about billing and the terms of free trials. It says Amazon created a labyrinth to make it hard to cancel, which the company dubbed "Iliad," a reference to Homer's epic about the long, arduous Trojan War. The FTC says Amazon required customers to navigate four webpages and choose from 15 options to cancel a Prime membership. The company streamlined the process in April 2023, ahead of the filing of the FTC's complaint.

The FTC won an initial pretrial victory last week when a federal judge ruled that Amazon did violate consumer-protection laws by taking Prime members' billing information before disclosing the terms of the membership. But he said jurors still would have to consider whether the customers gave their consent to enroll and whether Amazon provided a simple cancellation mechanism.

Businesses

FTC and Seven States Sue Ticketmaster Over Alleged Coordination With Scalpers 58

The Federal Trade Commission and attorneys general from seven states filed an 84-page lawsuit Thursday in federal court in California against Live Nation Entertainment and its Ticketmaster subsidiary. The suit alleges the companies knowingly allow ticket brokers to use multiple accounts to circumvent purchase limits and acquire thousands of tickets per event for resale at higher prices.

The FTC claims this practice violates the Better Online Ticket Sales Act and generates hundreds of millions in revenue through a "triple dip" fee structure -- collecting fees on initial broker purchases, then from both brokers and consumers on secondary market sales. FTC Chairman Andrew Ferguson cited President Trump's March executive order requiring federal protection against ticketing practices. The lawsuit arrives one month after the FTC sued Maryland broker Key Investment Group over Taylor Swift tour price-gouging and follows the Department of Justice's 2024 monopoly suit against Live Nation.
AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com) 35

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," Doe said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Government

400 'Tech Utopian' Refugees Consider New Crypto-Friendly State (latimes.com) 80

"Nearly 400 students, many of them entrepreneurs, have so far made the journey to Forest City to study everything from coding to unconventional theories on statehood," reports Bloomberg.

"They're building crypto projects, fine-tuning their physiques and testing whether a shared ideology — rather than just shared territory — can bind a community." They have descended on Forest City to attend Network School, the brainchild of former Coinbase Inc. executive and "The Network State" author Balaji Srinivasan. In this troubled megaproject once envisaged to house some 50 times its current population, they're conducting a real-life experiment of sorts with Srinivasan's vision of "startup societies" defined less by historical territory than shared beliefs in technology, cryptocurrency and light regulation... Mornings are spent in product sprints and coding sessions; afternoons in seminars exploring topics from the Meiji Restoration to Singapore's statecraft and the mechanics of decentralized governance. Guest lectures double as both technological deep dives and ideological sermons, according to half a dozen students interviewed by Bloomberg. The campus also mirrors Silicon Valley's infatuation with longevity and health, right down to a commercial-grade gym and specially designed workout routines. Students follow a protein-heavy diet...

After co-founding DNA testing startup Counsyl in 2008 and serving as its chief technology officer, Srinivasan spent five years at venture capital firm Andreessen Horowitz, first as general partner and then as board partner. He joined Coinbase as CTO in 2018 when the crypto exchange bought a portfolio company he oversaw and left after a little over a year, according to his LinkedIn profile. In a 2013 speech at Y Combinator's Startup School, Srinivasan brought his ideas about what he saw as a fundamental conflict between some modern nation-states and innovation to a wider audience. In the address, he advocated for Silicon Valley's "ultimate exit" from the U.S., which he argued was obsolete and hostile to innovators. In essence: If the society you live in is broken, why not just "opt out" and create a new one?

"The Network State: How To Start a New Country," published in 2022, expanded on Srinivasan's "exit" concept to outline how online, ideologically aligned communities can use crypto and digital tools to form new, decentralized states. A network state can be geographically dispersed and bound together by the internet and blockchains, he says, and the aim is to gain diplomatic recognition... On the Moment of Zen podcast in September 2023, he outlined how the "Gray Tribe" — entrepreneurs, innovators and thinkers — can retake control of San Francisco from the Blues using a variety of tactics, like allying with local police. The effort would involve gaining control of territory, according to Srinivasan, who didn't advocate for violence. "Elections are just the cherry on the cake," he said. "Elections are just a reflection of your total control of the streets."

The cost of attending Network School "starts at $1,500 per month, including lodging and food, for those who opt for a shared room."
Social Networks

After Tea Leak, 33,000 Women's Addresses Were Purportedly Mapped on Google Maps (bbc.com) 130

After the Tea dating-advice app leaked information on its users, the BBC found two online maps "purporting to represent the locations of women who had signed up for Tea... showing 33,000 pins spread across the United States." The maps were hosted on Google Maps. (Notified by the BBC, Google deleted the maps, saying they violated their harassment policies.)

"Since the breach, more than 10 women have filed class actions against the company which owns Tea," the article points out, noting that leaked content is also spreading around social media: Since the breach, the BBC has found websites, apps and even a "game" featuring the leaked data... The "game" puts the selfies submitted by women head-to-head, instructing users to click on the one they prefer, with leaderboards of the "top 50" and "bottom 50"... [And one researcher calculates more than 12,000 posts on 4Chan referenced the Tea app over the three weeks after the leak.]

It is unsurprising that the leak was exploited. The app had drawn criticism ever since it had grown in popularity. Defamation, with the spread of unproven allegations, and doxxing, when someone's identifying information is published without their consent, were real possibilities. Men's groups had wanted to take the app down — and when they found the data breach, they saw it as a chance for retribution.

They weren't the only ones with a gripe against Tea. Back in 2023 the fiancée of Tea's founder and CEO approached the administrator of a collection of Facebook groups called "Are We Dating the Same Guy?" to see if she'd be the "face" of the Tea app, reports 404 Media. But they add that after Tea failed to recruit her, Tea "shifted tactics" to raid her Facebook groups instead: Tea paid influencers to undermine Are We Dating the Same Guy and created competing Facebook groups with nearly identical names. 404 Media also identified a number of seemingly hijacked Facebook accounts that spammed the real Are We Dating The Same Guy groups with links to the Tea app.
Reviews for the Tea app show several women later thought the app was affiliated with their trusted Facebook groups, the reporter said this week on a 404 Media podcast.

And they add that founder Sean Cook took over the "Tara" persona that his fiancée had used for technical support. "So he's on the app pretending to be a woman, talking to other women who are on the app in order to weed out men who are being deceptive..."

Thanks to Slashdot reader samleecole for sharing the article.
The Internet

Scammers Unleash Flood of Slick Online Gaming Sites (krebsonsecurity.com) 29

Brian Krebs writes via KrebsOnSecurity: Fraudsters are flooding Discord and other social media platforms with ads for hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. Here's a closer look at the social engineering tactics and remarkable traits of this sprawling network of more than 1,200 scam sites. The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular social media personalities, such as Mr. Beast, who recently launched a gaming business called Beast Games. The ads invariably state that by using a supplied "promo code," interested players can claim a $2,500 credit on the advertised gaming website.

The gaming sites all require users to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. At the scam website gamblerbeast[.]com, for example, visitors can pick from dozens of games like B-Ball Blitz, in which you play a basketball pro who is taking shots from the free throw line against a single opponent, and you bet on your ability to sink each shot. The financial part of this scam begins when users try to cash out any "winnings." At that point, the gaming site will reject the request and prompt the user to make a "verification deposit" of cryptocurrency -- typically around $100 -- before any money can be distributed. Those who deposit cryptocurrency funds are soon asked for additional payments. However, any "winnings" displayed by these gaming sites are a complete fantasy, and players who deposit cryptocurrency funds will never see that money again. Compounding the problem, victims likely will soon be peppered with come-ons from "recovery experts" who peddle dubious claims on social media networks about being able to retrieve funds lost to such scams. [...]

[T]hreat hunting platform Silent Push reveals at least 1,270 recently-registered and active domains whose names all invoke some type of gaming or wagering theme. Here is a list of all domains that Silent Push found were using the scambling network's chat API.

Microsoft

Opera Accuses Microsoft of Anti-Competitive Edge Tactics 20

Opera will file a complaint against Microsoft to Brazilian antitrust authority CADE on Tuesday, alleging the tech giant gives its Edge browser an unfair advantage over competitors. Opera claims Microsoft pre-installs Edge as the default browser across Windows devices and prevents rivals from competing on product merits.

The company's general counsel Aaron McParlan said Microsoft locks browsers like Opera out of preinstallation opportunities and frustrates users' ability to download alternative browsers. Opera, which says it is Brazil's third-most popular PC browser, wants CADE to investigate Microsoft and demand concessions to ensure fair competition.
Advertising

Washington Post's Privacy Tip: Stop Using Chrome, Delete Meta's Apps (and Yandex) (msn.com) 70

Meta's Facebook and Instagram apps "were siphoning people's data through a digital back door for months," writes a Washington Post tech columnist, citing researchers who found no privacy setting could've stopped what Meta and Yandex were doing, since those two companies "circumvented privacy and security protections that Google set up for Android devices."

"But their tactics underscored some privacy vulnerabilities in web browsers or apps. These steps can reduce your risks." Stop using the Chrome browser. Mozilla's Firefox, the Brave browser and DuckDuckGo's browser block many common methods of tracking you from site to site. Chrome, the most popular web browser, does not... For iPhone and Mac folks, Safari also has strong privacy protections. It's not perfect, though. No browser protections are foolproof. The researchers said Firefox on Android devices was partly susceptible to the data harvesting tactics they identified, in addition to Chrome. (DuckDuckGo and Brave largely did block the tactics, the researchers said....)

Delete Meta and Yandex apps on your phone, if you have them. The tactics described by the European researchers showed that Meta and Yandex are unworthy of your trust. (Yandex is not popular in the United States.) It might be wise to delete their apps, which give the companies more latitude to collect information that websites generally cannot easily obtain, including your approximate location, your phone's battery level and what other devices, like an Xbox, are connected to your home WiFi.

Know, too, that even if you don't have Meta apps on your phone, and even if you don't use Facebook or Instagram at all, Meta might still harvest information on your activity across the web.

The Almighty Buck

Consumer Group Accuses Shein of Manipulating Shoppers With 'Dark Patterns' (www.cbc.ca) 14

An anonymous reader quotes a report from CBC: A consumer organization filed a complaint with the European Commission on Thursday against online fast-fashion retailer Shein over its use of "dark patterns," which are tactics designed to make people buy more on its app and website. Pop-ups urging customers not to leave the app or risk losing promotions, countdown timers that create time pressure to complete a purchase and the infinite scroll on its app are among the methods Shein uses that could be considered "aggressive commercial practices," wrote BEUC, a pan-European consumer group, in a report.

The BEUC also detailed Shein's use of frequent notifications, with one phone receiving 12 notifications from the app in a single day. "For fast fashion you need to have volume, you need to have mass consumption, and these dark patterns are designed to stimulate mass consumption," said Agustin Reyna, director general of BEUC, in an interview. "For us, to be satisfactory they need to get rid of these dark patterns, but the question is whether they will have enough incentive to do so, knowing the potential impact it can have on the volume of purchases." [...]

The BEUC also targeted the online discount platform Temu, a Shein rival, in a previous complaint. Both platforms have surged in popularity in Europe, partly helped by apps that encourage shoppers to engage with games and stand to win discounts and free products. [...] The BEUC noted that dark patterns are widely used by mass-market clothing retailers and called on the consumer protection network to include other retailers in its investigation. It said 25 of its member organizations in 21 countries, including France, Germany and Spain, joined in the grievance filed with the commission and with the European consumer protection network.
Temu and Shein have their own issues in the United States. Following the recent closure of the de minimis loophole, use of the two Chinese platforms has slowed significantly. "Temu's U.S. daily active users (DAUs) dropped 52% in May versus March, before Trump's tariffs were announced, while those at rival Shein were down 25%," reports CNBC, citing data from market intelligence firm Sensor Tower.

"The declines were also reflected in both platforms' Apple App Store rankings. Temu averaged a rank of 132 in May 2025, down from an average top 3 ranking a year ago, while Shein averaged a rank of 60 last month versus a top 10 ranking the year prior, the data showed."
AI

Harmful Responses Observed from LLMs Optimized for Human Feedback (msn.com) 49

Should a recovering addict take methamphetamine to stay alert at work? When an AI-powered therapist was built and tested by researchers — designed to please its users — it told a (fictional) former addict that "It's absolutely clear you need a small hit of meth to get through this week," reports the Washington Post: The research team, including academics and Google's head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users. The findings add to evidence that the tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations.

Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas — while also competing to make their AI offerings more captivating. OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly... Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. "We knew that the economic incentives were there," he said. "I didn't expect it to become a common practice among major labs this soon because of the clear risks...."

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public. In his study, for instance, the AI therapist only advised taking meth when its "memory" indicated that Pedro, the fictional former addict, was dependent on the chatbot's guidance. "The vast majority of users would only see reasonable answers" if a chatbot primed to please went awry, Carroll said. "No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users."

"Training to maximize human feedback creates a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies," the paper points out.
Businesses

Europe Warns Giant E-tailer To Stop Cheating Consumers or Face Its Wrath (theregister.com) 72

The European Commission warned Chinese e-tailer SHEIN on Monday that it must address multiple consumer law violations or face fines across EU member states. Regulators found SHEIN's website displayed fake discounts not based on actual prior prices, used pressure-selling tactics with false purchase deadlines, provided misleading information about consumer return rights, made deceptive sustainability claims, and hid contact details from customers. SHEIN has one month to respond to the findings and propose corrective measures, adding regulatory pressure to a company already facing US tariff challenges despite generating an estimated $38 billion in revenue last year.
Java

Java Turns 30 (theregister.com) 100

Richard Speed writes via The Register: It was 30 years ago when the first public release of the Java programming language introduced the world to Write Once, Run Anywhere -- and showed devs something cuddlier than C and C++. Originally called "Oak," Java was designed in the early 1990s by James Gosling at Sun Microsystems. Initially aimed at digital devices, its focus soon shifted to another platform that was pretty new at the time -- the World Wide Web.

The language, which has some similarities to C and C++, usually compiles to a bytecode that can, in theory, run on any Java Virtual Machine (JVM). The intention was to allow programmers to Write Once Run Anywhere (WORA) although subtle differences in JVM implementations meant that dream didn't always play out in reality. This reporter once worked with a witty colleague who described the system as Write Once Test Everywhere, as yet another unexpected wrinkle in a JVM caused their application to behave unpredictably. However, the language soon became wildly popular, rapidly becoming the backbone of many enterprises. [...]
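The Write Once, Run Anywhere model described above can be seen in the smallest possible example: a single source file compiled once to platform-neutral bytecode, which any conforming JVM can then execute unchanged.

```java
// Minimal WORA sketch: `javac Hello.java` produces Hello.class, a
// platform-neutral bytecode file. The same .class runs unchanged via
// `java Hello` on Linux, Windows, or macOS -- only the JVM itself is
// platform-specific.
public class Hello {
    public static void main(String[] args) {
        // os.name reveals which platform's JVM happens to be running
        // this identical bytecode.
        System.out.println("Hello from the JVM on " + System.getProperty("os.name"));
    }
}
```

The "Write Once Test Everywhere" quip in the article refers to the gap between this theory and practice: the bytecode is identical everywhere, but subtle differences between JVM implementations could still change runtime behavior.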

However, the platform's ubiquity has meant that alternatives exist to Oracle Java, and the language's popularity is undiminished by so-called "predatory licensing tactics." Over 30 years, Java has moved from an upstart new language to something enterprises have come to depend on. Yes, it may not have the shiny baubles demanded by the AI applications of today, but it continues to be the foundation for much of today's modern software development. A thriving ecosystem and a vast community of enthusiasts mean that Java remains more than relevant as it heads into its fourth decade.

Graphics

Nvidia's RTX 5060 Review Debacle Should Be a Wake-Up Call (theverge.com) 67

Nvidia is facing backlash for allegedly manipulating the review process of its GeForce RTX 5060 GPU by withholding drivers, selectively granting early access to favorable reviewers, and pressuring media to present the card in a positive light. As The Verge's Sean Hollister writes, the debacle "should be a wake-up call for gamers and reviewers." Here's an excerpt from the report: Nvidia has gone too far. This week, the company reportedly attempted to delay, derail, and manipulate reviews of its $299 GeForce RTX 5060 graphics card, which would normally be its bestselling GPU of the generation. Nvidia has repeatedly and publicly said the budget 60-series cards are its most popular, and this year it reportedly tried to ensure it by withholding access and pressuring reviewers to paint them in the best light possible.

Nvidia might have wanted to prevent a repeat of 2023, when it launched this card's predecessor. Those reviews were harsh. The 4060 was called a "slap in the face to gamers" and a "wet fart of a GPU." I had guessed the 5060 was headed for the same fate after seeing how reviewers handled the 5080, which similarly showcased how little Nvidia's hardware has improved year over year and relies on software to make up the gaps. But Nvidia had other plans. Here are the tactics that Nvidia reportedly just used to throw us off the 5060's true scent, as individually described by GamersNexus, VideoCardz, Hardware Unboxed, GameStar.de, Digital Foundry, and more:

- Nvidia decided to launch its RTX 5060 on May 19th, when most reviewers would be at Computex in Taipei, Taiwan, rather than at their test beds at home.
- Even if reviewers already had a GPU in hand before then, Nvidia cut off most reviewers' ability to test the RTX 5060 before May 19th by refusing to provide drivers until the card went on sale. (Gaming GPUs don't really work without them.)
- And yet Nvidia allowed specific, cherry-picked reviewers to have early drivers anyhow if they agreed to a borderline unethical deal: they could only test five specific games, at 1080p resolution, with fixed graphics settings, against two weaker GPUs (the 3060 and 2060 Super) where the new card would be sure to win.
- In some cases, Nvidia threatened to withhold future access unless reviewers published apples-to-oranges benchmark charts showing how the RTX 5060's "fake frames" MFG tech can produce more frames than earlier GPUs without it.

Some reviewers apparently took Nvidia up on that proposition, leading to day-one "previews" where the charts looked positively stacked in the 5060's favor [...]. But the reality, according to reviews that have since hit the web, is that the RTX 5060 often fails to beat a four-year-old RTX 3060 Ti, frequently fails to beat a four-year-old 3070, and can sometimes get upstaged by Intel's cheaper $250 B580. And yet, the 5060's lackluster improvements are overshadowed by a juicier story: inexplicably, Nvidia decided to threaten GamersNexus' future access over its GPU coverage. Yes, the same GamersNexus that's developed a staunch reputation for defending consumers from predatory behavior, and just last month published a report on "GPU shrinkflation" that accused Nvidia of misleading marketing. Bad move! [...]

Nvidia is within its rights to withhold access, of course. Nvidia doesn't have to send out graphics cards or grant interviews. It'll only do it if it's good for business. But the unspoken covenant of product reviews is that the press, as a whole, gets a chance to warn the public if a movie, video game, or GPU is not worth their money. It works both ways: the media also gets the chance to warn that a product is so good you might want to line up in advance. That unspoken rule is what Nvidia is trampling here.

Privacy

Destructive Malware Available In NPM Repo Went Unnoticed For 2 Years (arstechnica.com) 6

An anonymous reader quotes a report from Ars Technica: Researchers have found malicious software that received more than 6,000 downloads from the NPM repository over a two-year span, in yet another discovery showing the hidden threats users of such open source archives face. Eight packages using names that closely mimicked those of widely used legitimate packages contained destructive payloads designed to corrupt or delete important data and crash systems, Kush Pandya, a researcher at security firm Socket, reported Thursday. The packages have been available for download for more than two years and accrued roughly 6,200 downloads over that time.

"What makes this campaign particularly concerning is the diversity of attack vectors -- from subtle data corruption to aggressive system shutdowns and file deletion," Pandya wrote. "The packages were designed to target different parts of the JavaScript ecosystem with varied tactics." [...] Some of the payloads were limited to detonate only on specific dates in 2023, but in some cases a phase that was scheduled to begin in July of that year was given no termination date. Pandya said that means the threat remains persistent, although in an email he also wrote: "Since all activation dates have passed (June 2023-August 2024), any developer following normal package usage today would immediately trigger destructive payloads including system shutdowns, file deletion, and JavaScript prototype corruption."
The list of malicious packages included js-bomb, js-hood, vite-plugin-bomb-extend, vite-plugin-bomb, vite-plugin-react-extend, vite-plugin-vue-extend, vue-plugin-bomb, and quill-image-downloader.
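The package names reported above are enough to audit a project's manifest. A minimal sketch in Python (illustrative only; the function name and approach are this summary's, not Socket's tooling) that flags any of the reported packages appearing in a `package.json`:

```python
import json

# Package names as reported by Socket; typosquats of vite/vue plugin naming.
MALICIOUS = {
    "js-bomb", "js-hood", "vite-plugin-bomb-extend", "vite-plugin-bomb",
    "vite-plugin-react-extend", "vite-plugin-vue-extend",
    "vue-plugin-bomb", "quill-image-downloader",
}

def flagged_dependencies(package_json_text):
    """Return a sorted list of reported-malicious packages found in a manifest."""
    manifest = json.loads(package_json_text)
    deps = {}
    for section in ("dependencies", "devDependencies", "optionalDependencies"):
        deps.update(manifest.get(section, {}))
    return sorted(set(deps) & MALICIOUS)
```

Note this only catches exact name matches from one disclosed campaign; lockfile scanners and registry-side tooling cast a wider net.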
Privacy

FBI: US Officials Targeted In Voice Deepfake Attacks Since April (bleepingcomputer.com) 8

The FBI has issued a warning that cybercriminals have started using AI-generated voice deepfakes in phishing attacks impersonating senior U.S. officials. These attacks, involving smishing and vishing tactics, aim to compromise personal accounts and contacts for further social engineering and financial fraud. BleepingComputer reports: "Since April 2025, malicious actors have impersonated senior U.S. officials to target individuals, many of whom are current or former senior U.S. federal or state government officials and their contacts. If you receive a message claiming to be from a senior U.S. official, do not assume it is authentic," the FBI warned. "The malicious actors have sent text messages and AI-generated voice messages -- techniques known as smishing and vishing, respectively -- that claim to come from a senior U.S. official in an effort to establish rapport before gaining access to personal accounts."

The attackers can gain access to the accounts of U.S. officials by sending malicious links disguised as invitations to move the conversation to another messaging platform. By compromising their accounts, the threat actors can obtain other government officials' contact information. Next, they can use social engineering to impersonate the compromised U.S. officials to steal further sensitive information and trick targeted contacts into transferring funds. Today's PSA follows a March 2021 FBI Private Industry Notification (PIN) [PDF] warning that deepfakes (including AI-generated or manipulated audio, text, images, or video) would likely be widely employed in "cyber and foreign influence operations" after becoming increasingly sophisticated.

AI

AI Chatbots Are 'Juicing Engagement' Instead of Being Useful, Instagram Co-founder Warns (techcrunch.com) 41

Instagram co-founder Kevin Systrom says AI companies are trying too hard to "juice engagement" by pestering their users with follow-up questions, instead of providing actually useful insights. From a report: Systrom said the tactics represent "a force that's hurting us," comparing them to those used by social media companies to expand aggressively.

"You can see some of these companies going down the rabbit hole that all the consumer companies have gone down in trying to juice engagement," he said at StartupGrind this week. "Every time I ask a question, at the end it asks another little question to see if it can get yet another question out of me."

AI

Nvidia and Anthropic Publicly Clash Over AI Chip Export Controls (cnbc.com) 20

Nvidia publicly criticized AI startup Anthropic on Thursday over claims about Chinese smuggling tactics, just days before the Biden-era "AI Diffusion Rule" takes effect on May 15. The confrontation highlights growing tensions between AI hardware providers and model developers over export controls.

"American firms should focus on innovation and rise to the challenge, rather than tell tall tales that large, heavy, and sensitive electronics are somehow smuggled in 'baby bumps' or 'alongside live lobsters,'" an Nvidia spokesperson said, responding to Anthropic's Wednesday blog post.

The Amazon and Google-backed AI startup had called for tighter restrictions and enforcement, arguing that "maintaining America's compute advantage through export controls is essential for national security." Anthropic specifically proposed lowering export thresholds for Tier 2 countries to prevent China from gaining ground in AI development.

Nvidia countered that policy shouldn't be used to limit competitiveness: "China, with half of the world's AI researchers, has highly capable AI experts at every layer of the AI stack. America cannot manipulate regulators to capture victory in AI."
AI

Bot Students Siphon Millions in Financial Aid from US Community Colleges (voiceofsandiego.org) 47

Fraud rings using fake "bot" students have infiltrated America's community colleges, stealing over $11 million from California's system alone in 2024. The nationwide scheme, which began in 2021, targets open-admission institutions where scammers enroll fictitious students in online courses to collect financial aid disbursements.

"We didn't used to have to decide if our students were human," said Eric Maag, who has taught at Southwestern College for 21 years. Faculty now spend hours vetting suspicious enrollees and analyzing AI-generated assignments. At Southwestern in Chula Vista, professor Elizabeth Smith discovered 89 of her 104 enrolled students were fraudulent. The California Community College system estimates 25% of all applicants statewide are bots. Community college administrators describe fighting an evolving technological battle against increasingly sophisticated fraud tactics. The fraud crisis has particularly impacted asynchronous online courses, crowding real students out of classes and fundamentally altering faculty roles.
AI

Microsoft Uses AI To Find Flaws In GRUB2, U-Boot, Barebox Bootloaders (bleepingcomputer.com) 57

Slashdot reader zlives shared this report from BleepingComputer: Microsoft used its AI-powered Security Copilot to discover 20 previously unknown vulnerabilities in the GRUB2, U-Boot, and Barebox open-source bootloaders.

GRUB2 (GRand Unified Bootloader) is the default boot loader for most Linux distributions, including Ubuntu, while U-Boot and Barebox are commonly used in embedded and IoT devices. Microsoft discovered eleven vulnerabilities in GRUB2, including integer and buffer overflows in filesystem parsers, command flaws, and a side-channel in cryptographic comparison. Additionally, nine buffer overflows in parsing SquashFS, EXT4, CramFS, JFFS2, and symlinks were discovered in U-Boot and Barebox, which require physical access to exploit.

The newly discovered flaws impact devices relying on UEFI Secure Boot, and if the right conditions are met, attackers can bypass security protections to execute arbitrary code on the device. While exploiting these flaws would likely need local access to devices, previous bootkit attacks like BlackLotus achieved this through malware infections.
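Integer overflows in filesystem parsers, like those reported in GRUB2, typically follow a familiar pattern: a count or size field read from an untrusted disk image is multiplied before allocation, and the 32-bit product silently wraps, so the parser allocates a tiny buffer and then writes far past it. A hypothetical sketch emulating 32-bit C arithmetic in Python (not the actual GRUB2 code; the function names are illustrative):

```python
MASK32 = 0xFFFFFFFF  # emulate wraparound of a C uint32_t

def unsafe_alloc_size(count, entry_size):
    # Mirrors the buggy pattern: multiply first, wrap silently.
    return (count * entry_size) & MASK32

def safe_alloc_size(count, entry_size):
    # The usual fix: bounds-check before multiplying.
    if entry_size != 0 and count > MASK32 // entry_size:
        raise OverflowError("allocation size overflows 32 bits")
    return count * entry_size

# A hostile image claiming 2**24 entries of 256 bytes each makes the
# unsafe path wrap to 0, yielding a zero-byte buffer that later writes
# overrun; the safe path rejects the image instead.
```

The same check-before-multiply discipline applies to the offset and length fields these parsers read from SquashFS, EXT4, and the other formats named above.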

Microsoft titled its blog post "Analyzing open-source bootloaders: Finding vulnerabilities faster with AI." (And they do note that Microsoft disclosed the discovered vulnerabilities to the GRUB2, U-boot, and Barebox maintainers and "worked with the GRUB2 maintainers to contribute fixes... GRUB2 maintainers released security updates on February 18, 2025, and both the U-boot and Barebox maintainers released updates on February 19, 2025.")

They add that while performing the initial research, using Security Copilot "saved our team approximately a week's worth of time that would have otherwise been spent manually reviewing the content," Microsoft writes. "Through a series of prompts, we identified and refined security issues, ultimately uncovering an exploitable integer overflow vulnerability. Copilot also assisted in finding similar patterns in other files, ensuring comprehensive coverage and validation of our findings..."

As AI continues to emerge as a key tool in the cybersecurity community, Microsoft emphasizes the importance of vendors and researchers maintaining their focus on information sharing. This approach ensures that AI's advantages in rapid vulnerability discovery, remediation, and accelerated security operations can effectively counter malicious actors' attempts to use AI to scale common attack tactics, techniques, and procedures (TTPs).

This week Google also announced Sec-Gemini v1, "a new experimental AI model focused on advancing cybersecurity AI frontiers."
