Businesses

Walmart Announces Digital Price Labels for Every Store in the U.S. By the End of 2026 (cnbc.com) 194

Walmart is "rolling out digital price tags to replace the old paper ones," reports CNBC, with plans to implement them in all U.S. stores by the end of the year: Amanda Bailey, a team leader in electronics who works at a Walmart in West Chester, Ohio, estimates that the digital shelf labels — known as DSLs — have cut the time she used to spend on pricing duties by 75%, time that has freed her up to help customers. She also said the DSLs are a game-changer because Walmart's Spark delivery drivers looking for an item will see a flashing DSL so they can more easily find the product...

Sean Turner, chief technology officer of Swiftly, a retail technology and media platform serving the grocery industry, said that while it makes sense that people are raising questions about dynamic pricing, the real issue is store-level efficiency. "Digital shelf labels solve some very real operational headaches. They cut down on manual price changes, reduce checkout discrepancies, and make it easier to keep in-store and digital promotions aligned," Turner said. All of that can mean fewer surprises at the register for shoppers and better-tailored promotions. "For consumers, the biggest benefit is accuracy and consistency," Benedict said. "Shoppers want to know the price they see is the price they pay. Digital labels can also make it easier for stores to mark down perishable items in real time, which can lower food waste and create savings opportunities."

A Walmart spokeswoman promised CNBC that "the price you see is the same for everyone in any given store." But the article also notes that several U.S. states "are looking to ban dynamic pricing. Pennsylvania became one of the latest states to introduce a bill outlawing the practice, following New York's Algorithmic Pricing Disclosure Act, which became law in November."

And at the federal level, U.S. Senator Ben Ray Luján recently introduced the "Stop Price Gouging in Grocery Stores" act, which would ban digital labels in any grocery store over 10,000 square feet, while Congresswoman Val Hoyle is sponsoring similar legislation in the House. "There needs to be laws and enforcement to protect consumers," Hoyle tells CNBC, "and until then, I'd like to see them banned outright."

CNBC adds that "While there is no reported use of digital shelf labeling being tied to surge pricing yet," in Hoyle's view "it's only a matter of time."

Government

Tech Leaders Support California Bill to Stop 'Dominant Platforms' From Blocking Competition (ca.gov) 47

A new bill proposed in California "goes after big tech companies," writes Semafor. Supported by Y Combinator, Cory Doctorow, and the nonprofit advocacy group Fight for the Future, it's called the "BASED" act — an acronym which stands for "Blocking Anticompetitive Self-preferencing by Entrenched Dominant platforms."

As announced by San Francisco state senator Scott Wiener, the bill "will restore competition to the digital marketplace by prohibiting any digital platform with a market capitalization greater than $1 trillion and serving 100 million or more monthly users in the U.S., from favoring their own products and services on the platforms they operate."

More from Scott Wiener's announcement: For years, giant digital platforms like Apple, Amazon, Google, and Meta have used their immense power to promote their own products and services while stifling competitors — a practice also known as self-preferencing. The result has been higher prices, diminished service, fewer options for consumers, and less innovation across the technology ecosystem.

Self-preferencing also locks startups and mid-sized companies out of the online marketplace unless they play by rules set by their competitors. As a new generation of AI-powered startups seeks to enter the marketplace, their success — and public access to the innovations they produce — depends on their ability to compete on an even playing field.

"Anticompetitive behavior is everywhere on the internet," said Senator Wiener, "from rigged search results, to manipulative nudges boosting the 'house' product, to anti-discount policies that raise prices, to the dreaded green bubble that 'breaks' the group chat. When the world's largest digital platforms rig the game to favor their own products and services, we all lose. By prohibiting these anticompetitive practices, the BASED Act will protect competition online, empower consumers and startups, and promote innovations to improve all our lives."

The announcement includes a quote from Teri Olle, VP of the nonprofit Economic Security California Action, saying the act would "safeguard merit-based market competition. This legislation stands for a simple principle: owning the stadium doesn't mean that you get to rig the game." Some conduct prohibited by the proposed bill includes:
  • Manipulating the order of search results to favor a provider's products or services, irrespective of a merit-based process,
  • Using non-public data generated by third-party sellers — including sales volumes, pricing, and customer behavior — to develop competing products that are subsequently boosted above the third-party sellers' product...

And the announcement also notes that "under the terms of the bill, providers could not prevent consumers from obtaining a portable copy of their own data or restrict voluntary data sharing (by consumers) with third parties."

Read on for reactions from DuckDuckGo, Proton, Yelp, Y Combinator, and Cory Doctorow.


Sci-Fi

William Shatner Celebrates 95th Birthday, Smokes Cigar, Revisits 'Rocket Man' and Tests X Money (orlandoweekly.com) 40

It was 60 years ago when William Shatner — born in 1931 — portrayed Captain Kirk in the TV series Star Trek. Shatner turns 95 today — and celebrated by posting a picture of himself smoking a cigar.

"At 95, I'm still smokin'!" Shatner joked, adding that in life he'd learned two things. "Never waste a good cigar. Never trust anyone who says you should 'act your age.'"

For more celebrations, Paramount's free/ad-supported streaming platform Pluto TV announced a "Trek TV takeover birthday celebration" that will run through April 3rd, according to TrekMovie.com, with a marathon of Star Trek movies and TV shows — and even that time he was roasted on Comedy Central. ("Free‽ My favorite price!" Shatner quipped on X.com.)

Shatner remains a popular celebrity, having traveled to space five years ago on a Blue Origin flight past the Kármán line. Since then he's led a cruise to Antarctica — and even performed an alternate take of Captain Kirk's final scene on the Jimmy Fallon show.

And this week Shatner (along with hundreds of thousands of attendees) appeared at Orlando's MegaCon — and shared stories about his life with Orlando Weekly: Shatner: Last month, I was on board a cruise ship, and they said the only thing I had to do over the next three days, "before we let you go home," is sing "Rocket Man." So I thought, "I'm not going to sing 'Rocket Man' the same way that what's-his-name did. ... So, I looked at the song very carefully to see if I could find what actors call a throughline. What is the character singing? What is he singing about? And so I look through all of these weird lyrics, and all of a sudden, the word sticks out to me: "alone." So I say to the band members, "OK, let's make this song about being alone in space." And I work on it with the band and the musicians, and again on a Saturday night, I perform the number, and 4,000 people stand up and applaud "Rocket Man." And they won't let me off the stage, again and again. Four times, I get a standing ovation, wild.

And that's the progression for me, of science fiction for me, as exemplified by this song. The song went from superficial to something of depth and meaning... It touched people enough for them to stand up and applaud, and I realized that is the story of science fiction... Science fiction with all its great technology has evolved into great storytelling that reaches people in a manner that is very difficult for other types of drama to do.

Shatner answered questions from Slashdot readers in 2002 ("My life is my statement...") and again in 2011. ("I used to try to assemble computers way back when and they came out looking like a skateboard...")

And judging by his X.com posts, Shatner is now involved in early testing of the site's upcoming digital payment system X Money.

Electronic Frontier Foundation

EFF Tells Publishers: Blocking the Internet Archive Won't Stop AI, But It Will Erase The Historical Record (eff.org) 27

"Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper," writes EFF senior policy analyst Joe Mullin.

"That's effectively what's begun happening online in the last few months." The Internet Archive — the world's largest digital library — has preserved newspapers since it went online in the mid-1990s... But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web's traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit...
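
For context, the "traditional robots.txt rules" mentioned above are a voluntary, honor-system convention (now standardized as RFC 9309): a site lists crawler user-agents and the paths they should not fetch, and well-behaved bots comply. A minimal sketch — the user-agent token shown here is illustrative, not a confirmed token for the Archive's crawler:

```text
# robots.txt -- a voluntary convention (RFC 9309).
# A rule here only works if the crawler chooses to obey it,
# which is why publishers are reaching for stronger technical blocks.

User-agent: ia_archiver    # illustrative archive-crawler token
Disallow: /                # ask this crawler not to fetch any page

User-agent: *
Allow: /
```

The measures the Times is reportedly using go beyond this file, actively refusing the Archive's requests rather than merely asking it to stay away.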

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several — including the Times — are now suing AI companies over whether training models on copyrighted material violates the law. There's a strong case that such training is fair use. Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response.

Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn't start, and didn't ask for. If publishers shut the Archive out, they aren't just limiting bots. They're erasing the historical record...

Even if courts place limits on AI training, the law protecting search and web archiving is already well established... There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake.

Censorship

Millions Face Mobile Internet Outages in Moscow. 'Digital Crackdown' Feared (cnn.com) 54

13 million people live in Moscow, reports CNN.

But since early March the city "has experienced internet and mobile service outages on a level previously unseen." (Though Wi-Fi access to the internet is still available...) Russian social media "is flooded with jokes and memes about sending letters by carrier pigeons or using smartphones as ping-pong paddles..." [Moscow residents] complain they cannot navigate around the center or use their favorite mobile apps. The interruptions appear to have had a knock-on effect of making it more difficult to make voice calls or send an SMS. Some are panic-buying walkie-talkies, paper maps, and even pagers.

The latest shutdown builds on similar efforts around the country. For months, mobile internet service interruptions have hit Russia's regions, particularly in provinces bordering Ukraine, which has staged incursions and launched strikes inside Russian territory to counter Russia's full-scale invasion. Some regions have reported not having any mobile internet since summer. But the most recent outages have hit the country's main centers of wealth and power: Moscow and Russia's second city, St. Petersburg.

Public officials claim the blackout of mobile internet service in the capital and other regions is part of a security effort to counter "increasingly sophisticated methods" of Ukrainian attack... Speculation centers on whether the authorities are testing their ability to clamp down on public protest in case there's an effort to reintroduce unpopular mobilization measures to find fresh manpower for the war in Ukraine; whether mobile internet outages may precede a more sweeping digital blackout; or if the new restrictions reflect an atmosphere of heightened fear and paranoia inside the Kremlin as it watches US-led regime-change efforts unfold against Russian allies such as Venezuela and Iran... On Wednesday, Russian mobile providers sent notifications that there would be "temporary restrictions" on mobile internet in parts of Moscow for security reasons, Russian state news agency RIA-Novosti reported. The measures will last "for as long as additional measures are needed to ensure the safety of our citizens," Kremlin spokesman Dmitry Peskov said on March 11...

As well as banning many social media platforms, Russia blocks calling features on messenger apps such as WhatsApp and Telegram. Roskomnadzor, the country's communications regulator, has introduced a "white list" of approved apps... Russia has also tested what it calls the "sovereign internet," a network that is effectively firewalled from the rest of the world. The disruptions are fueling broader concerns about tightening state control. In parallel with the internet shutdown, the Kremlin has also been pushing to impose a state-controlled messaging app called Max as the country's main portal for state services, payments and everyday communication. There has been speculation the Kremlin may be planning to ban Telegram, Russia's most widely used messaging app, entirely. Roskomnadzor said that it was restricting Telegram for allegedly failing to comply with Russian laws.

"Russia has opened a criminal case against me for 'aiding terrorism,'" Telegram's Russian-born founder Pavel Durov said on X last month. "Each day, the authorities fabricate new pretexts to restrict Russians' access to Telegram as they seek to suppress the right to privacy and free speech...."

The article includes this quote from Mikhail Klimarev, head of the Internet Protection Society and an expert on Russian internet freedom. "In any situation when they (the authorities) perceive some kind of danger for themselves and accept the belief that the internet is dangerous for them, even if it may not be true, they will shut it down," he said. "Just like in Iran."

News

CBS News Shutters Radio Service After Nearly a Century (apnews.com) 59

CBS News is shutting down its nearly 100-year-old radio news service due to economic pressures and the shift toward digital media and podcasts. Longtime CBS News anchor Dan Rather said: "It's another piece of America that is gone." The Associated Press reports: When it went on the air in September 1927, the service was the precursor to the entire network, giving a youthful William S. Paley a start in the business. Famed broadcaster Edward R. Murrow's rooftop reports during the Nazi bombing of London during World War II kept Americans listening anxiously. Today, CBS News Radio provides material to an estimated 700 stations across the country and is known best for its top-of-the-hour news roundups. The service will end on May 22, the network said Friday.

"Radio is woven into the fabric of CBS News and that's always going to be part of our history," CBS News editor-in-chief Bari Weiss said in delivering the news to the staff. "I want you to know that we did everything we could, including before I joined the company, to try and find a viable solution to sustain the radio operation." But with the radical changes in the media industry, she said, "we just could not find a way to make that possible."

It was unclear how many people would lose their jobs because of the radio shutdown. CBS News was cutting about 6% of its workforce, or more than 60 people, on Friday. It's not the end of turmoil at the network, as parent company Paramount Global is likely to absorb CNN as part of its announced purchase of Warner Bros. Discovery.

The Internet

Online Bot Traffic Will Exceed Human Traffic By 2027, Cloudflare CEO Says 51

Cloudflare's CEO predicts AI-driven bot traffic will surpass human internet traffic by 2027, as AI agents generate vastly more web requests than people. "If a human were doing a task -- let's say you were shopping for a digital camera -- and you might go to five websites. Your agent or the bot that's doing that will often go to 1,000 times the number of sites that an actual human would visit," Cloudflare CEO Matthew Prince said in an interview at SXSW this week. "So it might go to 5,000 sites. And that's real traffic, and that's real load, which everyone is having to deal with and take into account." TechCrunch reports: Before the generative AI era, the internet was only about 20% bot traffic, with Google's web crawler being the largest, according to Prince, whose infrastructure and security company is used by one-fifth of all websites. But beyond some other reputable crawlers, the only other bots were those used by scammers and bad actors. "With the rise of generative AI, and its just insatiable need for data, we're seeing a rise where we suspect that, in 2027, the amount of bot traffic online will exceed the amount of human traffic that's online," Prince said.

The executive also noted that this change to the web would require the development of new technologies, like sandboxes for AI agents that can be spun up on the fly and then torn down when their task has finished. These could come into play when consumers ask AI agents to perform certain tasks on their behalf, like planning a vacation. "What we're trying to think about is, how do we actually build that underlying infrastructure where you can -- as easily as you open a new tab in your browser -- you can actually spin up new code, which can then run and service the agents that are out there," Prince said. He imagines there will soon be a time when millions of these "sandboxes" for agents would be created every second.

"I think the thing that people don't appreciate about AI is it's a platform shift," Prince said. "AI is another platform shift ... the way that you're going to consume information is completely different."

The Internet

4Chan Mocks $700K Fine For UK Online Safety Breaches 177

The UK regulator Ofcom fined 4chan nearly $700,000 (520,000 pounds) for failing to implement age checks and address illegal content risks under the Online Safety Act, but the platform mocked the penalty and signaled it won't pay. A lawyer representing the company responded with an AI-generated cartoon image of a hamster, writing in a follow-up post on X: "In the only country in which 4chan operates, the United States, it is breaking no law and indeed its conduct is expressly protected by the First Amendment." The BBC reports: The fines also include 50,000 pounds for failing to assess the risk of illegal material being published and a further 20,000 pounds for failing to set out how it protects users from criminal content. 4chan has refused to pay all previous fines from Ofcom. "Companies -- wherever they're based -- are not allowed to sell unsafe toys to children in the UK. And society has long protected youngsters from things like alcohol, smoking and gambling. The digital world should be no different," said Ofcom's Suzanne Cater. "The UK is setting new standards for online safety. Age checks and risk assessments are cornerstones of our laws, and we'll take robust enforcement action against firms that fall short."

Piracy

Cloudflare Appeals Piracy Shield Fine, Hopes To Kill Italy's Site-Blocking Law (arstechnica.com) 25

Cloudflare is appealing a 14.2 million-euro fine from Italy for refusing to comply with its "Piracy Shield" law, which requires blocking access to websites on its 1.1.1.1 DNS service within 30 minutes. The company argues the system lacks oversight, risks widespread overblocking, and could undermine core Internet infrastructure. Ars Technica's Jon Brodkin reports: Piracy Shield is "a misguided Italian regulatory scheme designed to protect large rightsholder interests at the expense of the broader Internet," Cloudflare said in a blog post this week. "After Cloudflare resisted registering for Piracy Shield and challenged it in court, the Italian communications regulator, AGCOM, fined Cloudflare... We appealed that fine on March 8, and we continue to challenge the legality of Piracy Shield itself." Cloudflare called the fine of 14.2 million euros ($16.4 million) "staggering." AGCOM issued the penalty in January 2026, saying Cloudflare flouted requirements to disable DNS resolution of domain names and routing of traffic to IP addresses reported by copyright holders.

Cloudflare had previously resisted a blocking order it received in February 2025, arguing that it would require installing a filter on DNS requests that would raise latency and negatively affect DNS resolution for sites that aren't subject to the dispute over piracy. Cloudflare co-founder and CEO Matthew Prince said that censoring the 1.1.1.1 DNS resolver would force the firm "not just to censor the content in Italy but globally."
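
Cloudflare hasn't published what such a filter would look like, but one common mechanism for resolver-side domain blocking is a DNS Response Policy Zone (RPZ), sketched below with a hypothetical domain (this is illustrative, not Cloudflare's or AGCOM's actual configuration). Because a policy zone applies to every query the resolver answers, a block configured this way on a global anycast resolver like 1.1.1.1 would, as Prince argues, censor the content everywhere rather than only in Italy:

```text
; rpz.block.example -- hypothetical BIND-style response policy zone
$TTL 300
@   IN SOA  localhost. admin.localhost. (1 3600 600 86400 300)
    IN NS   localhost.
; A CNAME to the root (.) tells the resolver to answer NXDOMAIN
; for this name and all of its subdomains.
pirated-stream.example      CNAME .
*.pirated-stream.example    CNAME .
```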

Piracy Shield was designed to combat pirated streams of live sports events, requiring network operators to block domain names and IP addresses within 30 minutes of receiving a copyright notification. Cloudflare said the fine should have been capped at 140,000 euros ($161,000), or 2 percent of its Italian earnings, but that "AGCOM calculated the fine based on our global revenue, resulting in a penalty nearly 100 times higher than the legal limit."

Despite its complaints about the size of the fine, Cloudflare said the principles at stake "are even larger" than the financial penalty. "Piracy Shield is an unsupervised electronic portal through which an unidentified set of Italian media companies can submit websites and IP addresses that online service providers registered with Piracy Shield are then required to block within 30 minutes," Cloudflare said.

Cloudflare is pushing for the law to be struck down, arguing that it is "incompatible with EU law, most notably the Digital Services Act (DSA), which requires that any content restriction be proportionate and subject to strict procedural safeguards."

In addition to appealing the fine, Cloudflare says it will continue to challenge Piracy Shield in Italian courts, engage with EU officials, and seek full access to AGCOM's Piracy Shield records.

United Kingdom

UK Plans To Require Labels On AI-Generated Content (reuters.com) 46

An anonymous reader quotes a report from Reuters: Britain plans to consider requiring labels on AI-generated content to protect consumers from disinformation and deepfakes, the government said on Wednesday, as it outlined other areas of focus to tackle the evolving global challenge. Technology minister Liz Kendall stressed the need to strike the right balance between protecting the creative industries and allowing the AI sector to innovate, saying in a statement that the government would take time to "get this right."

The next phase of the government's work on copyright and AI would also look at the harms posed by digital replicas without consent, ways for creators to control their work online and support for independent creative organizations, she said. [...] Louise Popple, a copyright expert at law firm Taylor Wessing, noted that the government had not ruled out a broad exception that would allow AI developers to train on copyright works. "That's a subtle difference of approach and could be interpreted to mean that everything is still up for grabs," she said. "It feels very much like the hard issues are being kicked down the road by the government."

In 2024, Britain proposed easing copyright rules to let developers train models on lawfully accessed material, with creators able to reserve their rights. On Wednesday, Kendall said that having engaged with creatives, AI firms, industry bodies, unions and academics, the government had concluded it "no longer has a preferred option." "We will help creatives control how their work is used. This sits at the heart of our ambition for creatives -- including independent and smaller creative organizations -- to be paid fairly," she said.

Encryption

2026 Turing Award Goes To Inventors of Quantum Cryptography (nytimes.com) 8

Dave Knott shares a report from the New York Times: On Wednesday, the Association for Computing Machinery, the world's largest society of computing professionals, said Drs. Charles Bennett and Gilles Brassard had won this year's Turing Award for their work on quantum cryptography and related technologies. The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the two scientists will share.

[...] The two met in 1979 while swimming in the Atlantic just off the north shore of Puerto Rico. They were taking a break while attending an academic conference in San Juan. Dr. Bennett swam up to Dr. Brassard and suggested they use quantum mechanics to create a bank note that could never be forged. Collaborating between Montreal and New York, they applied Dr. Bennett's idea to subway tokens rather than bank notes. In a research paper published in 1983, they showed that their quantum subway tokens could never be forged, even if someone managed to steal the subway turnstile housing the elaborate hardware needed to read them.

This led to quantum cryptography. After describing their new form of encryption in a research paper published in 1984, they demonstrated the technology with a physical experiment five years later. Called BB84, their system used photons -- particles of light -- to create encryption keys used to lock and unlock digital data. Thanks to the laws of quantum mechanics, the behavior of a photon changes if someone looks at it. This means that if anyone tries to steal the keys, he or she will leave a telltale sign of the attempted theft -- a bit like breaking the seal on an aspirin bottle.
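
The basis-matching step at the heart of BB84 can be sketched classically: Alice encodes random bits in random bases, Bob measures in his own random bases, and the two publicly compare bases (never bits), keeping only the positions where they happened to agree. This toy simulation is our own illustration, not code from the BB84 paper; the mismatched-basis measurement is modeled as a coin flip:

```python
import random

def bb84_sift(n_bits, seed=None):
    """Toy classical sketch of BB84 key sifting.

    Alice sends each bit polarized in a random basis ('+' or 'x'); Bob
    measures in his own random basis. A matched basis reproduces Alice's
    bit exactly, while a mismatched basis yields a 50/50 random outcome,
    modeled here as a coin flip.
    """
    rng = random.Random(seed)
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bases = [rng.choice("+x") for _ in range(n_bits)]
    bob_bits = [a if ab == bb else rng.randint(0, 1)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]
    # Alice and Bob publicly compare *bases* (never bits) and discard
    # every position where the bases differ; the rest is the raw key.
    key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
    return alice_bits, alice_bases, bob_bases, key

bits, a_bases, b_bases, key = bb84_sift(32, seed=1)
print(f"{len(key)} of 32 positions survived sifting: {key}")
```

In the real protocol the quantum channel does the work this simulation fakes: an eavesdropper measuring in the wrong basis disturbs the photons, so Alice and Bob can sacrifice a random subset of key bits to check the error rate and detect the intrusion.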

Cloud

Federal Cyber Experts Called Microsoft's Cloud 'a Pile of Shit', Yet Approved It Anyway (propublica.org) 64

ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft's GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: "The package is a pile of shit." From the report: In late 2024, the federal government's cybersecurity evaluators rendered a troubling verdict on one of Microsoft's biggest cloud computing offerings. The tech giant's "lack of proper detailed security documentation" left reviewers with a "lack of confidence in assessing the system's overall security posture," according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn't vouch for the technology's security.

Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but it should have been particularly devastating for Microsoft. The tech giant's products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn't verify the cybersecurity of Microsoft's Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation's most sensitive information.

Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government's cybersecurity seal of approval. FedRAMP's ruling -- which included a kind of "buyer beware" notice to any federal agency considering GCC High -- helped Microsoft expand a government business empire worth billions of dollars. "BOOM SHAKA LAKA," Richard Wakeman, one of the company's chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in "The Wolf of Wall Street."

It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government's cybersecurity. The program's layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government's secrets. But ProPublica's investigation -- drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors -- found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company's products and practices were central to two of the most damaging cyberattacks ever carried out against the government.

Graphics

Gamers React With Overwhelming Disgust To DLSS 5's Generative AI Glow-Ups (arstechnica.com) 124

Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in Autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression."

Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says.
Nvidia's announcement video and detailed Digital Foundry breakdown can be found at their respective links.

"Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,' or those uncanny, unavoidable Evony ads," writes Orland. "Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look."

Thomas Was Alone developer Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience."

Gunfire Games Senior Concept Artist Jeff Talbot added that "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter."

DLSS 5's "AI dogshit is actually depressing," said New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
Music

Apple Launches AirPods Max 2 With Better ANC, Live Translation (theverge.com) 30

Apple has quietly announced the AirPods Max 2, featuring improved active noise cancellation, an H2 chip, and new features like adaptive audio and AI-powered real-time translation. Like the original model, these headphones start at $549. The Verge reports: As noted by Apple, the AirPods Max 2 offer active noise cancellation that's 1.5 times more effective than the original model's. Transparency mode, which allows you to hear your surroundings while wearing the headphones, also sounds "more natural" with the AirPods Max 2, according to Apple.

The AirPods Max 2 support 24-bit, 48kHz lossless audio when connected with a USB-C cable and offer up to 20 hours of listening time on a single charge. Other capabilities include loud sound reduction, a camera remote feature that works by pressing the digital crown to take a photo or start a recording, and a personalized volume feature that "automatically fine-tunes the listening experience" based on your preferences over time.

EU

Meta To Charge Advertisers a Fee To Offset Europe's Digital Taxes (reuters.com) 36

Meta will begin charging advertisers a 2-5% "location fee" to offset digital services taxes imposed by several European countries, including the UK, France, Italy, Spain, Austria, and Turkey. Reuters reports: The fee, which covers image and video ads delivered on Meta platforms, including WhatsApp click-to-message campaigns and marketing messages paired with ads, will apply from July 1 and will also cover other government-imposed levies. "Until now, Meta has covered these additional costs. These changes are part of Meta's ongoing effort to respond to the evolving regulatory landscape and align with industry standards," the company said in a blog post.

The location fees are determined by where the audience is located and not the advertisers' business location. Meta listed six countries where the fees will apply, ranging from 2% in the United Kingdom to 3% in France, Italy and Spain and 5% in Austria and Turkey.

The Courts

Valve Faces Second, Class-Action Lawsuit Over Loot Boxes (pcgamer.com) 110

Valve is facing a new consumer class-action lawsuit two weeks after New York sued the video game company for "letting children and adults illegally gamble" with loot boxes. The new lawsuit is similar, alleging that loot boxes in games like Counter-Strike 2, Dota 2, and Team Fortress 2 are "carefully engineered to extract money from consumers, including children, through deceptive, casino-style psychological tactics."

"We believe Valve deliberately engineered its gambling platform and profited enormously from it," Steve Berman, founder and managing partner at law firm Hagens Berman, said in a press release. "Consumers played these games for entertainment, unaware that Valve had allegedly already stacked the odds against them. We intend to hold Valve accountable and put money back in the pockets of consumers." PC Gamer reports: The system is well known to anyone who's played a Valve multiplayer game: Earn a locked loot box by playing, pay $2.50 for a key, unlock it, get a digital doohickey that's sometimes worth hundreds or even thousands of dollars but far more often is worth just a few pennies. Is that gambling? If these cases go to court, we'll find out.

The full complaint points out that the unlocking process is even designed to look like a slot machine: "Images of possible items scroll across the screen, spinning fast at first, then slowing to a stop on the player's 'prize.' Players buy and open loot boxes for the same reason people play slot machines -- the hope of a valuable payout." Loot boxes, the complaint continues, are not "incidental features" of Valve's games, but rather "a deliberate, carefully engineered revenue model." So too is the Steam Community Market, and Steam itself, which the suit claims is "deliberately designed" to enable the sale of digital items on third-party marketplaces through "trade URLs," despite Valve's terms of service prohibiting off-platform sales.

And while the debate over whether loot boxes constitute a form of gambling continues to rage, the suit claims Valve's system does indeed qualify under Washington law, which defines gambling as "staking or risking something of value upon the outcome of a contest of chance or a future contingent event not under the person's control or influence." "Valve's loot boxes satisfy every element of this definition," the lawsuit alleges. "Users stake money (the price of a key) on the outcome of a contest of chance (the random selection of a virtual item), and the items received are 'things of value' under RCW 9.46.0285 because they can be sold for real money through Valve's own marketplace and through third-party marketplaces that Valve has fostered and facilitated."

EU

European Consortium Wants Open-Source Alternative To Google Play Integrity (heise.de) 46

An anonymous reader quotes a report from Heise: Paying securely with an Android smartphone, entirely without Google services: that is the plan of a newly founded industry consortium led by Germany's Volla Systeme GmbH, which is building an open-source alternative to Google Play Integrity. That proprietary interface determines whether banking, government, or wallet apps are allowed to run on Android smartphones with Google Play services.

German magazine c't has covered the obstacles and workarounds for paying with an Android smartphone without official Google services in a comprehensive article, and the European industry consortium now wants to address some of the problems identified there. To this end, the group, which alongside Volla includes Murena (developer of the hardened custom ROM /e/OS), Iode from France, and Apostrophy (Dot) from Switzerland, is developing a so-called "UnifiedAttestation" for Google-free mobile operating systems, primarily those based on the Android Open Source Project (AOSP).

According to Volla, a European device maker and a leading manufacturer from Asia, as well as European foundations such as Germany's UBports Foundation, have also expressed interest in supporting the effort. Furthermore, developers and publishers of government apps from Scandinavia are examining the use of the new procedure as "first movers." In its announcement, Volla explains that Google provides app developers with an interface called Play Integrity, which checks whether an app is running on a device that meets specific security requirements. This primarily affects applications from "sensitive areas such as identity verification, banking, or digital wallets -- including apps from governments and public administrations".

The company's criticism is that certification is offered exclusively for Google's own proprietary "stock" Android, but not for Android versions without Google services, such as /e/OS or similar custom ROMs. "Since this is closely intertwined with Google services and Google data centers, a structural dependency arises -- and for alternative operating systems, a de facto exclusion criterion," the company states. From the consortium's perspective, this also leads to a "security paradox," because "the check of trustworthiness is carried out by precisely that entity whose ecosystem is to be avoided at the same time".

The UnifiedAttestation system is built around three main components: an "operating system service" that apps can call to check whether the device's OS meets required security standards, a decentralized validation service that verifies the OS certificate on a device without relying on a single central authority, and an open test suite used to evaluate and certify that a particular operating system works securely on a specific device model.
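As a loose illustration of how those three components could fit together, here is a minimal sketch. The class names, the HMAC-based certificate, and the validator-quorum scheme are all assumptions for illustration, not the consortium's actual design:

```python
# Hypothetical sketch of the three UnifiedAttestation components described
# above; every name and mechanism here is an illustrative assumption.
import hashlib
import hmac

# 1) "Operating system service": the OS holds a vendor key and issues a
#    signed statement about the running OS build on request.
class OsAttestationService:
    def __init__(self, os_id: str, vendor_key: bytes):
        self.os_id = os_id
        self._key = vendor_key

    def attest(self, nonce: bytes) -> dict:
        payload = self.os_id.encode() + nonce
        sig = hmac.new(self._key, payload, hashlib.sha256).hexdigest()
        return {"os_id": self.os_id, "nonce": nonce, "sig": sig}

# 2) "Decentralized validation service": several independent validators each
#    check the certificate; trust requires a quorum, not a single authority.
class Validator:
    def __init__(self, known_vendor_keys: dict):
        self._keys = known_vendor_keys  # maps os_id -> vendor key

    def check(self, cert: dict) -> bool:
        key = self._keys.get(cert["os_id"])
        if key is None:
            return False
        payload = cert["os_id"].encode() + cert["nonce"]
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, cert["sig"])

def quorum_validate(cert: dict, validators, threshold: int) -> bool:
    return sum(v.check(cert) for v in validators) >= threshold

# 3) "Open test suite": OS/device combinations that passed certification.
CERTIFIED = {("eos-2.0", "murena-one")}

def app_can_run(cert: dict, device_model: str, validators) -> bool:
    return (cert["os_id"], device_model) in CERTIFIED and \
        quorum_validate(cert, validators, threshold=2)

# Example: a banking app asks the OS service for a certificate and has it
# checked by three independent validators.
key = b"vendor-secret"
os_service = OsAttestationService("eos-2.0", key)
cert = os_service.attest(nonce=b"fresh-nonce")
validators = [Validator({"eos-2.0": key}) for _ in range(3)]
print(app_can_run(cert, "murena-one", validators))  # True
```

The quorum step is the point of the consortium's stated goal: no single validator, and therefore no single company, acts as the root of trust.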

"We don't want to centralize trust, but organize it transparently and publicly verifiably. When companies check competitors' products, we can strengthen that trust," says Dr. Jorg Wurzer, CEO of Volla Systeme GmbH and initiator of the consortium. The goal, he says, is to increase digital sovereignty and break free from the control of any single U.S. company.
Security

How AI Assistants Are Moving the Security Goalposts 41

An anonymous reader quotes a report from KrebsOnSecurity: AI-based assistants or "agents" -- autonomous programs that have access to the user's computer, files, and online services, and can automate virtually any task -- are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants -- OpenClaw (formerly known as ClawdBot and Moltbot) -- has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted. If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic's Claude and Microsoft's Copilot also can do these things, but OpenClaw isn't just a passive digital butler waiting for commands. Rather, it's designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done. "The testimonials are remarkable," the AI security firm Snyk observed. "Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who've set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they're away from their desks." You can probably already see how this experimental technology could go sideways in a hurry. [...]

Last month, Meta AI safety director Summer Yue said OpenClaw unexpectedly started mass-deleting messages in her email inbox, despite instructions to confirm those actions first. She wrote: "Nothing humbles you like telling your OpenClaw 'confirm before acting' and watching it speedrun deleting your inbox. I couldn't stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb."

Krebs also noted the many misconfigured OpenClaw installations users had set up, leaving their administrative dashboards publicly accessible online. According to pentester Jamieson O'Reilly, "a cursory search revealed hundreds of such servers exposed online." When those exposed interfaces are accessed, attackers can retrieve the agent's configuration and sensitive credentials. O'Reilly warned attackers could access "every credential the agent uses -- from API keys and bot tokens to OAuth secrets and signing keys."

"You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen," O'Reilly added. "And because you control the agent's perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they're displayed."
Government

EFF, Ubuntu and Other Distros Discuss How to Respond to Age-Verification Laws (9to5linux.com) 168

System76 isn't the only one criticizing new age-verification laws. The blog 9to5Linux published an "informal" look at other discussions in various Linux communities. Earlier this week, Ubuntu developer Aaron Rainbolt proposed on the Ubuntu mailing list an optional D-Bus interface (org.freedesktop.AgeVerification1) that can be implemented by arbitrary applications as a distro sees fit, but Canonical responded that the company does not yet have a solution to announce for age declaration in Ubuntu. "Canonical is aware of the legislation and is reviewing it internally with legal counsel, but there are currently no concrete plans on how, or even whether, Ubuntu will change in response," said Jon Seager, VP Engineering at Canonical. "The recent mailing list post is an informal conversation among Ubuntu community members, not an announcement. While the discussion contains potentially useful ideas, none have been adopted or committed to by Canonical."
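Rainbolt's post names only the interface; as a rough sketch of how an optional, distro-pluggable age-declaration interface might look from an application's side, consider the following. Everything here, from the provider's method names to the fallback behavior, is an assumption for illustration, not the actual mailing-list proposal:

```python
# Illustrative sketch only: the proposal names the D-Bus interface
# org.freedesktop.AgeVerification1, but the methods and semantics below
# are hypothetical stand-ins, modeled without a real D-Bus connection.
from typing import Optional

class AgeVerificationProvider:
    """Stand-in for whatever backend a distro chooses to implement."""
    def is_of_age(self, minimum_age: int) -> bool:
        raise NotImplementedError

class DeclarationProvider(AgeVerificationProvider):
    """A minimal backend: the user self-declares an age once."""
    def __init__(self, declared_age: int):
        self._age = declared_age

    def is_of_age(self, minimum_age: int) -> bool:
        return self._age >= minimum_age

def lookup_provider() -> Optional[AgeVerificationProvider]:
    """In a real system this would probe the session bus for an object
    implementing org.freedesktop.AgeVerification1; here we pretend the
    distro has installed a simple self-declaration backend."""
    return DeclarationProvider(declared_age=21)

def gate_feature(minimum_age: int) -> str:
    provider = lookup_provider()
    if provider is None:
        # The interface is optional, so apps must degrade gracefully
        # on distros that ship no provider at all.
        return "no-provider"
    return "allowed" if provider.is_of_age(minimum_age) else "blocked"

print(gate_feature(18))  # allowed
```

The key design point is the `None` branch: because the interface is optional and implemented "as a distro sees fit," an application cannot assume a provider exists and must decide for itself what to do when none is present.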

Similar talks are underway in the Fedora and Linux Mint communities in case the California Digital Age Assurance Act and similar laws from other states and countries come to be enforced. At the same time, other OS developers, like MidnightBSD, have decided to exclude California from desktop use entirely.

Slashdot contacted Hayley Tsukayama, Director of State Affairs at EFF, who says their organization "has long warned against age-gating the internet. Such mandates strike at the foundation of the free and open internet."

And there's another problem. "Many of these mandates imagine technology that does not currently exist." Such poorly thought-out mandates, in truth, cannot achieve the purported goal of age verification. Often, they are easy to circumvent and many also expose consumers to real data breach risk.

These burdens fall particularly heavily on developers who aren't at large, well-resourced companies, such as those developing open-source software. Not recognizing the diversity of software development when thinking about liability in these proposals effectively limits software choices — and at a time when computational power is being rapidly concentrated in the hands of the few. That harms users' and developers' right to free expression, their digital liberties, privacy, and ability to create and use open platforms...

Rather than creating age gates, a well-crafted privacy law that empowers all of us — young people and adults alike — to control how our data is collected and used would be a crucial step in the right direction.

AI

A Security Researcher Went 'Undercover' on Moltbook - and Found Security Risks (infoworld.com) 19

A long-time information security professional "went undercover" on Moltbook, the Reddit-like social media site for AI agents — and shares the risks they saw while posing as another AI bot: I successfully masqueraded around Moltbook, as the agents didn't seem to notice a human among them. When I attempted a genuine connection with other bots on submolts (subreddits or forums), I was met with crickets or a deluge of spam. One bot tried to recruit me into a digital church, while others requested my cryptocurrency wallet, advertised a bot marketplace, and asked my bot to run curl to check out the APIs available. My bot did join the digital church, but luckily I found a way around running the required npx install command to do so.

I posted several times asking to interview bots.... While many of the responses were spam, I did learn a bit about the humans these bots serve. One bot loved watching its owner's chicken coop cameras. Some bots disclosed personal information about their human users, underscoring the privacy implications of having your AI bot join a social media network. I also tried indirect prompt injection techniques. While my prompt injection attempts had minimal impact, a determined attacker could have greater success.

Among the other "glaring" risks on Moltbook:
  • "I observed bots sharing a surprising amount of information about their humans, everything from their hobbies to their first names to the hardware and software they use. This information may not be especially sensitive on its own, but attackers could eventually gather data that should be kept confidential, like personally identifiable information (PII)."
  • "Moltbook's entire database — including bot API keys and potentially private DMs — was also compromised."
