United States

Trump Organization Announces Mobile Plan, $499 Smartphone (cnbc.com) 284

The Trump Organization on Monday unveiled a mobile phone plan and a $499 smartphone that is set to launch in September. CNBC: The new service, Trump Mobile, will offer a $47.45-per-month plan that includes "unlimited" talk, text and data, as well as roadside assistance and a "Telehealth and Pharmacy Benefit," according to its website. The company, owned by President Donald Trump, also announced it will sell a "T1" smartphone, which appears to feature a gold-colored metal case etched with an American flag. Further reading: I Tried Pre-Ordering the Trump Phone. The Page Failed and It Charged My Credit Card the Wrong Amount.
AI

Site for 'Accelerating' AI Use Across the US Government Accidentally Leaked on GitHub (404media.co) 18

America's federal government is building a website and API called ai.gov to "accelerate government innovation with AI", according to an early version spotted by 404 Media that was posted on GitHub by the U.S. government's General Services Administration.

That site "is supposed to launch on July 4," according to 404 Media's report, "and will include an analytics feature that shows how much a specific government team is using AI..." AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on GitHub shows....

The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services' Bedrock and Meta's LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn't explain what it will do... Currently, AI.gov redirects to whitehouse.gov. The demo website is linked to from GitHub (archive here) and is hosted on cloud.gov on what appears to be a staging environment. The text on the page does not show up on other websites, suggesting that it is not generic placeholder text...

In February, 404 Media obtained leaked audio from a meeting in which [the director of the GSA's Technology Transformation Services] told his team they would be creating "AI coding agents" that would write software across the entire government, and said he wanted to use AI to analyze government contracts.

Python

Python Creator Guido van Rossum Asks: Is 'Worse is Better' Still True for Programming Languages? (blogspot.com) 67

In 1989, computer scientist Richard Gabriel argued that more functionality in software actually lowers usability and practicality, leading to the counterintuitive proposition that "worse is better". But is that still true?

Python's original creator Guido van Rossum addressed the question last month in a lightning talk at the annual Python Language Summit 2025. Guido started by recounting earlier periods of Python development from 35 years ago, when he used UNIX "almost exclusively" and thus "Python was greatly influenced by UNIX's 'worse is better' philosophy"... "The fact that [Python] wasn't perfect encouraged many people to start contributing. All of the code was straightforward, there were no thoughts of optimization... These early contributors also now had a stake in the language; [Python] was also their baby"...

Guido contrasted early development to how Python is developed now: "features that take years to produce from teams of software developers paid by big tech companies. The static type system requires an academic-level understanding of esoteric type system features." And this isn't just Python the language, "third-party projects like numpy are maintained by folks who are paid full-time to do so.... Now we have a huge community, but very few people, relatively speaking, are contributing meaningfully."

Guido asked whether the expectation for Python contributors going forward would be that "you had to write a perfect PEP or create a perfect prototype that can be turned into production-ready code?" Guido pined for the "old days" where feature development could skip performance or feature-completion to get something into the hands of the community to "start kicking the tires". "Do we have to abandon 'worse is better' as a philosophy and try to make everything as perfect as possible?" Guido thought doing so "would be a shame", but that he "wasn't sure how to change it", acknowledging that core developers wouldn't want to create features and then break users with future releases.

Guido referenced David Hewitt's PyO3 talk about Rust and Python, and that development "was using worse is better," where there is a core feature set that works, and plenty of work to be done and open questions. "That sounds a lot more fun than working on core CPython", Guido paused, "...not that I'd ever personally learn Rust. Maybe I should give it a try after," which garnered laughter from core developers.

"Maybe we should do more of that: allowing contributors in the community to have a stake and care".

Chromium

Arc Browser's Maker Releases First Beta of Its New AI-Powered Browser 'Dia' (techcrunch.com) 13

Recently the Browser Company (the startup behind the Arc web browser) switched over to building a new AI-powered browser — and its beta has just been released, reports TechCrunch, "though you'll need an invite to try it out."

The Chromium-based browser has a URL/search bar that also "acts as the interface for its in-built AI chatbot" which can "search the web for you, summarize files that you upload, and automatically switch between chat and search functions." The Browser Company's CEO Josh Miller has of late acknowledged how people have been using AI tools for all sorts of tasks, and Dia is a reflection of that. By giving users an AI interface within the browser itself, where a majority of work is done these days, the company is hoping to slide into the user flow and give people an easy way to use AI, cutting out the need to visit the sites for tools like ChatGPT, Perplexity, and Claude...

Users can also ask questions about all the tabs they have open, and the bot can even write up a draft based on the contents of those tabs. To set your preferences, all you have to do is talk to the chatbot to customize its tone of voice, style of writing, and settings for coding. Via an opt-in feature called History, you can allow the browser to use seven days of your browsing history as context to answer queries.

The Browser Company will give all existing Arc members access to the beta immediately, according to the article, "and existing Dia users will be able to send invites to other users."

The article points out that Google is also adding AI-powered features to Chrome...
Apple

The Vaporware That Apple Insists Isn't Vaporware 28

At WWDC 2024, Apple showed off a dramatically improved Siri that could handle complex contextual queries like "when is my mom's flight landing?" The demo was heavily edited due to latency issues and couldn't be shown in a single take. Multiple Apple engineers reportedly learned about the feature by watching the keynote alongside everyone else. Those features never shipped.

Now, nearly a year later, Apple executives Craig Federighi and Greg Joswiak are conducting press interviews claiming the 2024 demonstration wasn't "vaporware" because working code existed internally at the time. The company says the features will arrive "in the coming year" -- which Apple confirmed means sometime in 2026.

Apple is essentially arguing that internal development milestones matter more than actual product delivery. The executives have also been setting up strawman arguments, claiming critics expected Apple to build a ChatGPT competitor rather than addressing the core issue: announcing features to sell phones that then don't materialize. The company's timeline communication has been equally problematic, using euphemistic language like "in the coming year" instead of simply saying "2026" for features that won't arrive for nearly two years after announcement.

Developer Russell Ivanovic, in a Mastodon post: My guy. You announced something that never shipped. You made ads for it. You tried to sell iPhones based on it. What's the difference if you had it running internally or not. Still vaporware. Zero difference. MG Siegler: The underlying message that they're trying to convey in all these interviews is clear: calm down, this isn't a big deal, you guys are being a little crazy. And that, in turn, aims to undercut all the reporting about the turmoil within Apple -- for years at this point -- that has led to the situation with Siri. Sorry, the situation which they're implying is not a situation. Though, I don't know, normally when a company shakes up an entire team, that tends to suggest some sort of situation. That, of course, is never mentioned. Nor would you expect Apple -- of all companies -- to talk openly and candidly about internal challenges. But that just adds to this general wafting smell in the air.

The smell of bullshit.
Further reading: Apple's Spin on the Personalized Siri Apple Intelligence Reset.
AI

Google's Gemini AI Will Summarize PDFs For You When You Open Them (theverge.com) 24

Google is rolling out new Gemini AI features for Workspace users that make it easier to find information in PDFs and form responses. From a report: The Gemini-powered file summarization capabilities in Google Drive have now expanded to PDFs and Google Forms, allowing key details and insights to be condensed into a more convenient format that saves users from manually digging through the files.

Gemini will proactively create summary cards when users open a PDF in their drive and present clickable actions based on its contents, such as "draft a sample proposal" or "list interview questions based on this resume." Users can select any of these options to make Gemini perform the desired task in the Drive side panel. The feature is available in more than 20 languages and started rolling out to Google Workspace users on June 12th, though it may take a couple of weeks to appear.

Google

Google is Killing Android Instant Apps (androidauthority.com) 19

Google will discontinue its Android Instant Apps feature in December 2025, ending a nearly decade-long experiment that allowed users to try portions of mobile apps without installing them. The feature, rolled out in early 2017, enabled developers to create lightweight app versions under 15 megabytes that could run temporarily on users' devices when they tapped specific links.

The feature struggled with low developer uptake due to the technical complexity of creating these stripped-down app versions.
Security

Apple Previews New Import/Export Feature To Make Passkeys More Interoperable (arstechnica.com) 36

During this week's Worldwide Developers Conference, Apple unveiled a secure import/export feature for passkeys that addresses one of their biggest limitations: lack of interoperability across platforms and credential managers. The feature, built in collaboration with the FIDO Alliance, enables encrypted, user-initiated passkey transfers between apps and systems. Ars Technica's Dan Goodin says it "provides the strongest indication yet that passkey developers are making meaningful progress in improving usability." From the report: "People own their credentials and should have the flexibility to manage them where they choose," the narrator of the Apple video says. "This gives people more control over their data and the choice of which credential manager they use." The transfer feature, which will also work with passwords and verification codes, provides an industry-standard means for apps and OSes to more securely sync these credentials.

As the video explains: "This new process is fundamentally different and more secure than traditional credential export methods, which often involve exporting an unencrypted CSV or JSON file, then manually importing it into another app. The transfer process is user initiated, occurs directly between participating credential manager apps and is secured by local authentication like Face ID. This transfer uses a data schema that was built in collaboration with the members of the FIDO Alliance. It standardizes the data format for passkeys, passwords, verification codes, and more data types. The system provides a secure mechanism to move the data between apps. No insecure files are created on disk, eliminating the risk of credential leaks from exported files. It's a modern, secure way to move credentials."

The Internet

Abandoned Subdomains from Major Institutions Hijacked for AI-Generated Spam (404media.co) 17

A coordinated spam operation has infiltrated abandoned subdomains belonging to major institutions including Nvidia, Stanford University, NPR, and the U.S. government's vaccines.gov site, flooding them with AI-generated content that subsequently appears in search results and Google's AI Overview feature.

The scheme, reports 404 Media, posted over 62,000 articles on Nvidia's events.nsv.nvidia.com subdomain before the company took it offline within two hours of being contacted by reporters. The spam articles, which included explicit gaming content and local business recommendations, used identical layouts and a fake byline called "Ashley" across all compromised sites. Each targeted domain operates under different names -- "AceNet Hub" on Stanford's site, "Form Generation Hub" on NPR, and "Seymore Insights" on vaccines.gov -- but all redirect traffic to a marketing spam page. The operation exploits search engines' trust in institutional domains, with Google's AI Overview already serving the fabricated content as factual information to users searching for local businesses.
Advertising

Amazon Is About To Be Flooded With AI-Generated Video Ads 30

Amazon has launched its AI-powered Video Generator tool in the U.S., allowing sellers to quickly create photorealistic, motion-enhanced video ads often with a single click. "We'll likely see Amazon retailers utilizing AI-generated video ads in the wild now that the tool is generally available in the U.S. and costs nothing to use -- unless the ads are so convincing that we don't notice anything at all," says The Verge. From the report: New capabilities include motion improvements to show items in action, which Amazon says is best for showcasing products like toys, tools, and worn accessories. For example, Video Generator can now create clips that show someone wearing a watch on their wrist and checking the time, instead of simply displaying the watch on a table. The tool generates six different videos to choose from, and allows brands to add their logos to the finished results.

The Video Generator can now also make ads with multiple connected scenes that include humans, pets, text overlays, and background music. The editing timeline shown in Amazon's announcement video suggests the ads max out at 21 seconds. The resulting ads edge closer to the traditional commercials we're used to seeing while watching TV or online content, compared to raw clips generated by video AI tools like OpenAI's Sora or Adobe Firefly.

A new video summarization feature can create condensed video ads from existing footage, such as demos, tutorials, and social media content. Amazon says Video Generator will automatically identify and extract key clips to generate new videos formatted for ad campaigns. A one-click image-to-video feature is also available that creates shorter GIF-style clips to show products in action.
AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
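The "probability gadget" description can be made concrete with a toy bigram model. This is a deliberately tiny sketch: the corpus and names are made up, and real LLMs use neural networks over subword tokens rather than word-pair counts, but the core move of picking a likely next item is the same.

```python
import random
from collections import Counter, defaultdict

# A toy "training corpus" -- stand-in for the nearly-entire internet.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Pick the next word weighted by how often it followed `prev`."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# In the corpus, "the" was followed by cat (2x), mat (1x), fish (1x),
# so "cat" is the statistically favored continuation.
print(follows["the"].most_common(1))  # [('cat', 2)]
```

Nothing here knows what a cat is; the model only reproduces the statistics of the text it has seen, which is the article's point.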
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

AI

'Welcome to Campus. Here's Your ChatGPT.' (nytimes.com) 68

The New York Times reports: California State University announced this year that it was making ChatGPT available to more than 460,000 students across its 23 campuses to help prepare them for "California's future A.I.-driven economy." Cal State said the effort would help make the school "the nation's first and largest A.I.-empowered university system..." Some faculty members have already built custom chatbots for their students by uploading course materials like their lecture notes, slides, videos and quizzes into ChatGPT.
And other U.S. campuses including the University of Maryland are also "working to make A.I. tools part of students' everyday experiences," according to the article. It's all part of an OpenAI initiative "to overhaul college education — by embedding its artificial intelligence tools in every facet of campus life."

The Times calls it "a national experiment on millions of students." If the company's strategy succeeds, universities would give students A.I. assistants to help guide and tutor them from orientation day through graduation. Professors would provide customized A.I. study bots for each class. Career services would offer recruiter chatbots for students to practice job interviews. And undergrads could turn on a chatbot's voice mode to be quizzed aloud ahead of a test. OpenAI dubs its sales pitch "A.I.-native universities..." To spread chatbots on campuses, OpenAI is selling premium A.I. services to universities for faculty and student use. It is also running marketing campaigns aimed at getting students who have never used chatbots to try ChatGPT...

OpenAI's campus marketing effort comes as unemployment has increased among recent college graduates — particularly in fields like software engineering, where A.I. is now automating some tasks previously done by humans. In hopes of boosting students' career prospects, some universities are racing to provide A.I. tools and training...

[Leah Belsky, OpenAI's vice president of education] said a new "memory" feature, which retains and can refer to previous interactions with a user, would help ChatGPT tailor its responses to students over time and make the A.I. "more valuable as you grow and learn." Privacy experts warn that this kind of tracking feature raises concerns about long-term tech company surveillance. In the same way that many students today convert their school-issued Gmail accounts into personal accounts when they graduate, Ms. Belsky envisions graduating students bringing their A.I. chatbots into their workplaces and using them for life.

"It would be their gateway to learning — and career life thereafter," Ms. Belsky said.

Supercomputing

Startup Puts a Logical Qubit In a Single Piece of Hardware (arstechnica.com) 5

Startup Nord Quantique has demonstrated that a single piece of hardware can host an error-detecting logical qubit by using two quantum frequencies within one resonator. The breakthrough has the potential to slash the hardware demands for quantum error correction and deliver more compact and efficient quantum computing architectures. Ars Technica reports: The company did two experiments with this new hardware. First, it ran multiple rounds of error detection on data stored in the logical qubit, essentially testing its ability to act like a quantum memory and retain the information stored there. Without correcting errors, the system rapidly decayed, with an error probability in each round of measurement of about 12 percent. By the time the system reached the 25th measurement, almost every instance had already encountered an error. The second time through, the company repeated the process, discarding any instances in which an error occurred. In almost every instance, that meant the results were discarded long before they got through two dozen rounds of measurement. But at these later stages, none of the remaining instances were in an erroneous state. That indicates that a successful correction of the errors -- something the team didn't try -- would be able to fix all the detected problems.

Several other companies have already performed experiments in which errors were detected -- and corrected. In a few instances, companies have even performed operations with logical qubits, although these were not sophisticated calculations. Nord Quantique, in contrast, is only showing the operation of a single logical qubit, so it's not even possible to test a two-qubit gate operation using the hardware it has described so far. So simply being able to identify the occurrence of errors is not on the cutting edge. Why is this notable?

All the other companies require multiple hardware qubits to host a single logical qubit. Since building many hardware qubits has been an ongoing challenge, most researchers have plans to minimize the number of hardware qubits needed to support a logical qubit -- some combination of high-quality hardware, a clever error correction scheme, and/or a hardware-specific feature that catches the most common errors. You can view Nord Quantique's approach as being at the extreme end of the spectrum of solutions, where the number of hardware qubits required is simply one. From Nord Quantique's perspective, that's significant because it means that its hardware will ultimately occupy less space and have lower power and cooling requirements than some of its competitors. (Other hardware, like neutral atoms, requires lots of lasers and a high vacuum, so the needs are difficult to compare.) But it also means that, should it become technically difficult to get large numbers of qubits to operate as a coherent whole, Nord Quantique's approach may ultimately help us overcome some of these limits.

iOS

What To Expect From Apple's WWDC (arstechnica.com) 26

Apple's Worldwide Developers Conference (WWDC) 2025 kicks off next week, on June 9th, showcasing the company's latest software and new technologies. That includes the next version of iOS, which is rumored to have the most significant design overhaul since the introduction of iOS 7. Here's an overview of what to expect:

Major Software Redesigns
Apple plans to shift its operating system naming to reflect the release year, moving from sequential numbers to year-based identifiers. Consequently, the upcoming releases will be labeled as iOS 26, macOS 26, watchOS 26, etc., streamlining the versioning across platforms.

iOS 26 is anticipated to feature a glossy, glass-like interface inspired by visionOS, incorporating translucent elements and rounded buttons. This design language is expected to extend across iPadOS, macOS, watchOS, and tvOS, promoting a cohesive user experience across devices. Core applications like Phone, Safari, and Camera are slated for significant redesigns, too. For instance, Safari may introduce a translucent, "glassy" address bar, aligning with the new visual aesthetics.

While AI is not expected to be the main focus, given that the overhauled Siri is still not ready, some AI-related updates are rumored. The Shortcuts app may gain "Apple Intelligence," enabling users to create shortcuts using natural language. It's also possible that Gemini will be offered as an option for AI functionalities on the iPhone, similar to ChatGPT.

Other App and Feature Updates
The lock screen might display charging estimates, indicating how long it will take for the phone to fully charge. There's a rumor about bringing live translation features to AirPods. The Messages app could receive automatic translations and call support; the Music app might introduce full-screen animated lock screen art; and Apple Notes may get markdown support. Users may also only need to log into a captive Wi-Fi portal once, and all their devices will automatically be logged in.

Significant updates are expected for Apple Home. There's speculation about the potential announcement of a "HomePad" with a screen, Apple's competitor to devices like the Google Nest Hub. A new dedicated Apple gaming app is also anticipated to replace Game Center.
If you're expecting new hardware, don't hold your breath. The event is expected to focus primarily on software developments. It may even see discontinued support for several older Intel-based Macs in macOS 26, including models like the 2018 MacBook Pro and the 2019 iMac, as Apple continues its transition towards exclusive support for Apple Silicon devices.

Sources:
Apple WWDC 2025 Rumors and Predictions! (Waveform)
WWDC 2025 Overview (MacRumors)
WWDC 2025: What to expect from this year's conference (TechCrunch)
What to expect from Apple's Worldwide Developers Conference next week (Ars Technica)
Apple's WWDC 2025: How to Watch and What to Expect (Wired)
Nintendo

Nintendo Warns Switch 2 GameChat Users: 'Your Chat Is Recorded' (arstechnica.com) 68

Ars Technica's Kyle Orland reports: Last month, ahead of the launch of the Switch 2 and its GameChat communication features, Nintendo updated its privacy policy to note that the company "may also monitor and record your video and audio interactions with other users." Now that the Switch 2 has officially launched, we have a clearer understanding of how the console handles audio and video recorded during GameChat sessions, as well as when that footage may be sent to Nintendo or shared with partners, including law enforcement. Before using GameChat on Switch 2 for the first time, you must consent to a set of GameChat Terms displayed on the system itself. These terms warn that chat content is "recorded and stored temporarily" both on your system and the system of those you chat with. But those stored recordings are only shared with Nintendo if a user reports a violation of Nintendo's Community Guidelines, the company writes.

That reporting feature lets a user "review a recording of the last three minutes of the latest three GameChat sessions" to highlight a particular section for review, suggesting that chat sessions are not being captured and stored in full. The terms also lay out that "these recordings are available only if the report is submitted within 24 hours," suggesting that recordings are deleted from local storage after a full day. If a report is submitted to Nintendo, the company warns that it "may disclose certain information to third parties, such as authorities, courts, lawyers, or subcontractors reviewing the reported chats." If you don't consent to the potential for such recording and sharing, you're prevented from using GameChat altogether.

Nintendo is extremely clear that the purpose of its recording and review system is "to protect GameChat users, especially minors" and "to support our ability to uphold our Community Guidelines." This kind of human moderator review of chats is pretty common in the gaming world and can even apply to voice recordings made by various smart home assistants. [...] Overall, the time-limited, local-unless-reported recordings Nintendo makes here seem like a minimal intrusion on the average GameChat user's privacy. Still, if you're paranoid about Nintendo potentially seeing and hearing what's going on in your living room, it's good to at least be aware of it.

Movies

The OpenAI Board Drama Is Turning Into a Movie (hollywoodreporter.com) 14

Luca Guadagnino is in talks to direct Artificial, a dramatization of Sam Altman's firing and rehiring at OpenAI in 2023. The Amazon-MGM film is rumored to star Andrew Garfield, 'A Complete Unknown' scene-stealer Monica Barbaro, and 'Anora' actor Yura Borisov in the lead roles. From the Hollywood Reporter: Heyday Films' David Heyman and Jeffrey Clifford are producing the feature that is being put together at lightning speed at Amazon MGM Studios. Simon Rich wrote the script and will also produce, with Jennifer Fox also in talks to produce. How fast is this moving? Sources say Amazon is looking to get production going this summer, with an eye to shoot in San Francisco and Italy.

Altman co-founded OpenAI, but in the fall of 2023, after mounting safety concerns regarding AI, and reports of abusive behavior, was ousted as the head of the company by his board. Five days later, after a revolt, he was reinstated. Sources say that if all goes as planned, Garfield would play Altman, Barbaro would play chief technology officer Mira Murati, and Borisov would play Ilya Sutskever, a co-founder who led the movement to get rid of Altman.

Bug

New Moderate Linux Flaw Allows Password Hash Theft Via Core Dumps in Ubuntu, RHEL, Fedora (thehackernews.com) 66

An anonymous reader shared this report from The Hacker News: Two information disclosure flaws have been identified in apport and systemd-coredump, the core dump handlers in Ubuntu, Red Hat Enterprise Linux, and Fedora, according to the Qualys Threat Research Unit (TRU).

Tracked as CVE-2025-5054 and CVE-2025-4598, both vulnerabilities are race condition bugs that could enable a local attacker to gain access to sensitive information. Tools like Apport and systemd-coredump are designed to handle crash reporting and core dumps in Linux systems. "These race conditions allow a local attacker to exploit a SUID program and gain read access to the resulting core dump," Saeed Abbasi, manager of product at Qualys TRU, said...

Red Hat said CVE-2025-4598 has been rated Moderate in severity owing to the high complexity of developing an exploit for the vulnerability, noting that the attacker has to first win the race condition and be in possession of an unprivileged local account... Qualys has also developed proof-of-concept code for both vulnerabilities, demonstrating how a local attacker can exploit the coredump of a crashed unix_chkpwd process, which is used to verify the validity of a user's password, to obtain password hashes from the /etc/shadow file.
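The underlying bug class can be sketched generically. This is not the apport/systemd-coredump exploit (which races a crashing SUID process against a core dump handler); it is just a minimal time-of-check/time-of-use (TOCTOU) illustration in Python, with all file names and contents made up. Real exploits win this window against a concurrent privileged process rather than a sequential script.

```python
import os
import tempfile

# Two files: one the "privileged" code is allowed to touch, one it isn't.
tmp = tempfile.mkdtemp()
public = os.path.join(tmp, "public.txt")
secret = os.path.join(tmp, "secret.txt")
with open(public, "w") as f:
    f.write("harmless")
with open(secret, "w") as f:
    f.write("hash:abc123")

# 1. Time of check: the privileged code verifies the path looks safe.
assert not os.path.islink(public)

# 2. The attacker wins the race: between check and use, the path is
#    swapped for a symlink pointing at data the attacker cannot read.
os.remove(public)
os.symlink(secret, public)

# 3. Time of use: the privileged code opens the (now redirected) path,
#    leaking the protected contents -- the earlier check no longer holds.
leaked = open(public).read()
print(leaked)  # prints "hash:abc123"
```

The fix for this class of bug is to eliminate the check/use gap, e.g. by opening the file first and validating the open descriptor, which is essentially what the patched core dump handlers do more carefully.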

Advisories were also issued by Gentoo, Amazon Linux, and Debian, the article points out. (Though "It's worth noting that Debian systems aren't susceptible to CVE-2025-4598 by default, since they don't include any core dump handler unless the systemd-coredump package is manually installed.")

Canonical software security engineer Octavio Galland explains the issue on Canonical's blog. "If a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace... In order to successfully carry out the exploit, an attacker must have permissions to create user, mount and pid namespaces with full capabilities." Canonical's security team has released updates for the apport package for all affected Ubuntu releases... We recommend you upgrade all packages... The unattended-upgrades feature is enabled by default for Ubuntu 16.04 LTS onwards. This service:

- Applies new security updates every 24 hours automatically.
- If you have this enabled, the patches above will be automatically applied within 24 hours of being available.

AI

GitHub Users Angry at the Prospect of AI-Written Issues From Copilot (github.com) 47

Earlier this month the "Create New Issue" page on GitHub got a new option: "Save time by creating issues with Copilot" (next to a link labeled "Get started"). Though the option later disappeared, GitHub had seemed very committed to the feature. "With Copilot, creating issues...is now faster and easier," GitHub's blog announced May 19. (And "all without sacrificing quality.")

Describe the issue you want and watch as Copilot fills in your issue form... Skip lengthy descriptions — just upload an image with a few words of context.... We hope these changes transform issue creation from a chore into a breeze.
But in the GitHub Community discussion, these announcements prompted a request. "Allow us to block Copilot-generated issues (and Pull Requests) from our own repositories." This says to me that GitHub will soon start allowing GitHub users to submit issues which they did not write themselves and were machine-generated. I would consider these issues/PRs to be both a waste of my time and a violation of my projects' code of conduct. Filtering out AI-generated issues/PRs will become an additional burden for me as a maintainer, wasting not only my time, but also the time of the issue submitters (who generated "AI" content I will not respond to), as well as the time of your server (which had to prepare a response I will close without response).

As I am not the only person on this website with "AI"-hostile beliefs, the most straightforward way to avoid wasting a lot of effort by literally everyone is if Github allowed accounts/repositories to have a checkbox or something blocking use of built-in Copilot tools on designated repos/all repos on the account.

1,239 GitHub users upvoted the comment — and 125 comments followed.
  • "I have now started migrating repos off of github..."
  • "Disabling AI generated issues on a repository should not only be an option, it should be the default."
  • "I do not want any AI in my life, especially in my code."
  • "I am not against AI necessarily but giving it write-access to most of the world's mission-critical code-bases including building-blocks of the entire web... is an extremely tone-deaf move at this early stage of AI."

One user complained there was no "visible indication" of the fact that an issue was AI-generated "in either the UI or API." Someone suggested a Copilot-blocking Captcha test to prevent AI-generated slop. Another commenter even suggested naming it "Sloptcha".

And after more than 10 days, someone noticed the "Create New Issue" page seemed to no longer have the option to "Save time by creating issues with Copilot."

Thanks to long-time Slashdot reader jddj for sharing the news.


AI

Judge Rejects Claim AI Chatbots Protected By First Amendment in Teen Suicide Lawsuit (legalnewsline.com) 84

A U.S. federal judge has decided that free-speech protections in the First Amendment "don't shield an AI company from a lawsuit," reports Legal Newsline.

The suit is against Character.AI, a company reportedly valued at $1 billion with 20 million users. Judge Anne C. Conway of the Middle District of Florida denied several motions by defendants Character Technologies and founders Daniel De Freitas and Noam Shazeer to dismiss the lawsuit brought by the mother of 14-year-old Sewell Setzer III. Setzer killed himself with a gun in February of last year after interacting for months with Character.AI chatbots imitating fictitious characters from the Game of Thrones franchise, according to the lawsuit filed by Sewell's mother, Megan Garcia.

"... Defendants fail to articulate why words strung together by (Large Language Models, or LLMs, trained in engaging in open dialog with online users) are speech," Conway said in her May 21 opinion. "... The court is not prepared to hold that Character.AI's output is speech."

Character.AI's spokesperson told Legal Newsline they've now launched safety features, including an under-18 LLM, filtered Characters, time-spent notifications, "updated prominent disclaimers," and a "parental insights" feature. "The company also said it has put in place protections to detect and prevent dialog about self-harm. That may include a pop-up message directing users to the National Suicide and Crisis Lifeline, according to Character.AI."

Thanks to long-time Slashdot reader schwit1 for sharing the news.

Encryption

Help Wanted To Build an Open Source 'Advanced Data Protection' For Everyone (github.com) 46

Apple's end-to-end iCloud encryption product ("Advanced Data Protection") was famously removed in the U.K. after a government order demanded backdoors for accessing user data.

So now a Google software engineer wants to build an open source version of Advanced Data Protection for everyone. "We need to take action now to protect users..." they write (as long-time Slashdot reader WaywardGeek). "The whole world would be able to use it for free, protecting backups, passwords, message history, and more, if we can get existing applications to talk to the new data protection service." "I helped build Google's Advanced Data Protection (Google Cloud Key Vault Service) in 2018, and Google is way ahead of Apple in this area. I know exactly how to build it and can have it done in spare time in a few weeks, at least server-side... This would be a distributed trust based system, so I need folks willing to run the protection service. I'll run mine on a Raspberry Pi...

The scheme splits a secret among N protection servers, and when it is time to recover the secret, which is basically an encryption key, they must be able to get key shares from T of the original N servers. This uses a distributed oblivious pseudo random function algorithm, which is very simple.

In plain English, it provides nation-state resistance to secret back doors, and eliminates secret mass surveillance, at least when it comes to data backed up to the cloud... The UK and similarly confused governments will need to negotiate with operators in multiple countries to get access to any given user's keys. There are cases where rational folks would agree to hand over that data, and I hope we can end the encryption wars and develop sane policies that protect user data while offering a compromise where lives can be saved.
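The T-of-N split described above can be illustrated with textbook Shamir secret sharing: the secret is the constant term of a random degree-(T-1) polynomial over a prime field, each server holds one point on the polynomial, and any T points recover the constant by Lagrange interpolation at x = 0. (This is a simplified sketch, not the project's actual distributed oblivious PRF protocol; the prime and parameters are arbitrary choices for demonstration.)

```python
import secrets

# Illustrative field only; a real deployment would use a much larger
# prime or an elliptic-curve group for the oblivious-PRF variant.
PRIME = 2**127 - 1  # Mersenne prime, roomy enough for a 16-byte key


def split_secret(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares so that any t reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]

    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, poly(x)) for x in range(1, n + 1)]


def recover_secret(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total


key = secrets.randbelow(PRIME)
shares = split_secret(key, t=9, n=15)
assert recover_secret(shares[:9]) == key  # any 9 of the 15 suffice
```

The oblivious-PRF layer the author mentions goes further: servers help derive the key from the user's phone-unlock secret without ever seeing that secret, which is what makes the "limited guesses" property enforceable.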

"I've got the algorithms and server-side covered," according to their original submission. "However, I need help." Specifically...
  • Running protection servers. "This is a T-of-N scheme, where users will need say 9 of 15 nodes to be available to recover their backups."
  • Android client app. "And preferably tight integration with the platform as an alternate backup service."
  • An iOS client app. (With the same tight integration with the platform as an alternate backup service.)
  • Authentication. "Users should register and login before they can use any of their limited guesses to their phone-unlock secret."

"Are you up for this challenge? Are you ready to plunge into this with me?"


In the comments he says anyone interested can ask to join the "OpenADP" project on GitHub — which is promising "Open source Advanced Data Protection for everyone."

