Businesses

Hollywood Stars Sign Open Letter Protesting Paramount-Warner Bros Merger (nbcnews.com)

More than 1,000 Hollywood figures, including major actors, writers, and directors, signed an open letter opposing Paramount Skydance's proposed takeover of Warner Bros. Discovery, arguing it would hurt an industry "already under severe strain." The deal is still under regulatory scrutiny in both the U.S. and U.K., while Paramount says the merger would strengthen competition and expand opportunities for creators. NBC News reports: "This transaction would further consolidate an already concentrated media landscape, reducing competition at a moment when our industries -- and the audiences we serve -- can least afford it," the signatories wrote in the letter, published early Monday on a website called Block the Merger. "The result will be fewer opportunities for creators, fewer jobs across the production ecosystem, higher costs, and less choice for audiences in the United States and around the world. Alarmingly, this merger would reduce the number of major U.S. film studios to just four," the signatories added.

[T]he open letter illustrates the deep resistance to the deal among many members of Hollywood's creative community. The list of signatories includes A-list stars (Glenn Close, Ben Stiller), celebrated filmmakers (Yorgos Lanthimos, Denis Villeneuve) and acclaimed writers ("The Sopranos" creator David Chase). "Media consolidation has accelerated the disappearance of the mid-budget film, the erosion of independent distribution, the collapse of the international sales market, the elimination of meaningful profit participation, and the weakening of screen credit integrity," the signatories wrote. "Together, these factors threaten the sustainability of the entire creative community," they added.

[...] Monday's open letter was spearheaded by a group of advocacy organizations -- including the Committee for the First Amendment, a free speech group led by Jane Fonda, who warned that the merger "would be one of the most destructive threats to free speech and creative expression in our history." In the letter, first reported by The New York Times, the signatories expressed support for California Attorney General Rob Bonta, who has said the merger is "not a done deal." "These two Hollywood titans have not cleared regulatory scrutiny -- the California Department of Justice has an open investigation, and we intend to be vigorous in our review," Bonta said in a Feb. 26 post on X.
Paramount Skydance said that they "hear and understand the concerns" and are committed to "protecting and expanding creativity." The studio also reiterated its commitment to releasing a minimum of 30 "high-quality feature films annually with full theatrical releases" and "preserving iconic brands with independent creative leadership" to make sure "creators have more avenues for their work, not fewer."
Operating Systems

Linux 7.0 Released (linuxiac.com) 14

"The new Linux kernel was released and it's kind of a big deal," writes longtime Slashdot reader rexx mainframe. "Here is what you can expect." Linuxiac reports: A key update in Linux 7.0 is the removal of the experimental label from Rust support. That (of course) does not make Rust a dominant language in kernel development, but it is still an important step in its gradual integration into the project. Another notable security-related change is the addition of ML-DSA post-quantum signatures for kernel module authentication, while support for SHA-1-based module-signing schemes has been removed.

The kernel now includes BPF-based filtering for io_uring operations, providing administrators with improved control in restricted environments. Additionally, BTF type lookups are now faster due to binary search. At the same time, this release continues ongoing cleanup in the kernel's lower layers. The removal of linuxrc initrd code advances the transition to initramfs as the sole early-userspace boot mechanism.

Linux 7.0 also introduces NULLFS, an immutable and empty root filesystem designed for systems that mount the real root later. Plus, preemption handling is now simpler on most architectures, with further improvements to restartable sequences, workqueues, RCU internals, slab allocation, and type-based hardening. Filesystems and storage receive several updates as well. Non-blocking timestamp updates now function correctly, and filesystems must explicitly opt in to leases rather than receiving them by default.
Phoronix has compiled a list of the many exciting changes.

Linus Torvalds himself announced the release, which can be downloaded directly from his git tree or from the kernel.org website.

Linux 7.0 has a major new version number but it's "largely a numbering reset [...], not a sign of some unusually disruptive release," notes Linuxiac.
The Media

First US Newsroom Strike For AI Protections Staged by ProPublica's Journalists (niemanlab.org) 8

It's the first time a major U.S. newsroom has gone on strike partly to demand protections from AI-related layoffs, according to a report from Nieman Lab.

They noted that one of the picketers' signs read "Thoughts not bots": On Wednesday, roughly 150 members of the ProPublica Guild, one of the largest nonprofit newsroom unions in the country, went on a 24-hour strike. About two dozen Guild members picketed ProPublica's headquarters in New York City's Hudson Square neighborhood during working hours, as simultaneous picket lines formed in front of the publication's offices in Chicago and Washington D.C...

The Guild has been negotiating its first collective bargaining agreement for two and a half years, and the one-day action was intended to put new pressure on ProPublica's management to agree to several contract proposals. The union is seeking "just cause" protections for terminations, wage increases to keep up with the rising cost of living, and contract language that would prohibit layoffs resulting from AI adoption... Beyond the strike, the ProPublica Guild has also taken its dispute over newsroom AI adoption to the National Labor Relations Board (NLRB). On Monday, the Guild filed an unfair-labor-practice charge, citing a "unilateral implementation of AI policy." The filing claims that ProPublica published AI editorial guidelines on its website last month, without first bargaining with union members over its tenets and language... A petition launched Wednesday calling for ProPublica to agree to the Guild's contract terms had received roughly 4,200 signatures by Thursday morning...

Susan DeCarava, the president of The NewsGuild of New York, joined strikers in front of the ProPublica offices yesterday. During a spare moment on the picket line, she told me that while this strike may be setting precedent for her union, it likely won't be the last over AI adoption in newsrooms. "We're going to see more and more concentrated conflicts between media bosses and journalists and media workers over who has a say and how AI is used in their workplaces," she said. For one, The New York Times Guild is currently in contract negotiations after its last agreement expired in February. Already, AI language has taken center stage in the Guild's initial bargaining sessions, including over a proposal that would see Guild members receive a share of the revenue earned when their work is licensed for AI training.

"Management has offered expanded severance for AI-related layoffs as a counter proposal..." according to the article.
Security

CPUID Site Hijacked To Serve Malware Instead of HWMonitor Downloads (theregister.com) 13

Attackers briefly hijacked part of CPUID's backend and swapped legitimate download links on its site with malware-laced ones. "The issue hit tools like HWMonitor and CPU-Z, with users on Reddit and elsewhere starting to notice something wasn't right when installers tripped antivirus alerts or showed up under odd names," reports The Register. From the report: CPUID has since confirmed the breach, pinning it on a compromised backend component rather than tampering with its software builds. "Investigations are still ongoing, but it appears that a secondary feature (basically a side API) was compromised for approximately six hours between April 9 and April 10, causing the main website to randomly display malicious links (our signed original files were not compromised)," one of the site's owners said in a post on X. "The breach was found and has since been fixed."

The files themselves appear to have been left alone and remain properly signed, so it doesn't seem like anyone got into the build process. Instead, the problem sat in front of that, in how downloads were being served. For anyone who hit the site during that stretch, though, that distinction offers little comfort. If the link you clicked had been swapped out, you were pulling whatever it pointed to, whether you realized it or not.

iPhone

FBI Extracts Suspect's Deleted Signal Messages Saved In iPhone Notification Data (404media.co) 50

An anonymous reader quotes a report from 404 Media: The FBI was able to forensically extract copies of incoming Signal messages from a defendant's iPhone, even after the app was deleted, because copies of the content were saved in the device's push notification database, multiple people present for FBI testimony in a recent trial told 404 Media. The case involved a group of people setting off fireworks and vandalizing property at the ICE Prairieland Detention Facility in Alvarado, Texas in July, one of whom shot a police officer in the neck. The news shows how forensic extraction -- when someone has physical access to a device and is able to run specialized software on it -- can yield sensitive data derived from secure messaging apps in unexpected places. Signal already has a setting that blocks message content from displaying in push notifications; the case highlights why such a feature might be important for some users to turn on.

"We learned that specifically on iPhones, if one's settings in the Signal app allow for message notifications and previews to show up on the lock screen, [then] the iPhone will internally store those notifications/message previews in the internal memory of the device," a supporter of the defendants who was taking notes during the trial told 404 Media. [...] During one day of the related trial, FBI Special Agent Clark Wiethorn testified about some of the collected evidence. A summary of Exhibit 158 published on a group of supporters' website says, "Messages were recovered from Sharp's phone through Apple's internal notification storage -- Signal had been removed, but incoming notifications were preserved in internal memory. Only incoming messages were captured (no outgoing)."

404 Media spoke to one of the supporters who was taking notes during the trial, and to Harmony Schuerman, an attorney representing defendant Elizabeth Soto. Schuerman shared notes she took on Exhibit 158. "They were able to capture these chats bc [because] of the way she had notifications set up on her phone -- anytime a notification pops up on the lock screen, Apple stores it in the internal memory of the device," those notes read. The supporter added, "I was in the courtroom on the last day of the state's case when they had FBI Special Agent Clark testifying about some Signal messages. One set came from Lynette Sharp's phone (one of the cooperating witnesses), but the interesting detailed messages shown in court were messages that had been set to disappear and had in fact disappeared in the Signal app."
Further reading: Apple Gave Governments Data On Thousands of Push Notifications
Facebook

Meta Debuts 'Muse Spark', First AI Model Under Alexandr Wang (axios.com) 7

Meta has launched Muse Spark, its first major AI model under Alexandr Wang's leadership. The model was built over the past nine months and is being positioned as a significant step up from Llama 4. Axios reports: Muse Spark will power queries in the Meta AI app and Meta.ai website immediately, with plans to expand across Facebook, Instagram and WhatsApp. The model accepts voice, text and image inputs, but produces text-only output. [...] Meta plans to release a version of Muse Spark under an open-source license.

The model uses a fast mode for casual queries and several reasoning modes. A "shopping mode" highlights how Meta hopes to differentiate itself. It combines large language models with data on user interests and behavior. Over time, the model will also power "features that cite recommendations and content people share across Instagram, Facebook, and Threads," Meta said in a blog post.
Wang, the 29-year-old entrepreneur who co-founded Scale AI, joined Meta's "superintelligence" unit last year to help Meta catch up to rival models from OpenAI and Anthropic.
Portables (Apple)

Apple and Lenovo Have the Least Repairable Laptops, Analysis Finds (arstechnica.com) 57

An anonymous reader quotes a report from Ars Technica: Apple earned the lowest grades in a report on laptop and smartphone repairability released today by the consumer advocacy group Public Interest Research Group (PIRG) Education Fund. The report, which looks at how easy devices are to disassemble and how easy it is to find repairability information, gave Apple a C-minus in laptop repairability and a D-minus in cell phone repairability. For its "Failing the Fix (2026): Grading laptop and cell phone companies on the fixability of their products" report, PIRG analyzed the 10 newest laptops and phones available on manufacturers' French websites in January. [...] Apple leads the list of laptop repairability losers, largely due to its low disassembly scores. Apple, along with Dell and Samsung, also lost a full point for being members of TechNet and the CTA. Lenovo had the second-worst grade with a C-minus. Like Apple, Lenovo had low disassembly scores.

It also lost 0.5 points for failing to properly post PDFs explaining the French repair scores for some of its newest laptops sold in the region, as required in France. This is especially noteworthy because Lenovo got an F in last year's report for missing this information on at least 12 laptops. At the time, Lenovo director of communications David Hamilton provided a statement to Ars saying that the missing information was "due to a backend web compatibility issue that temporarily prevented the display of repairability scores on our Lenovo France website" that was "widely resolved." However, it appears that over a year later, Lenovo still isn't providing sufficient information to meet France's requirements.

"While Lenovo has improved somewhat with their compliance with French consumer law by providing more repair score PDFs on their website, we urge the company to resolve this multi-year issue," this year's report says. PIRG's report concluded that "laptops are pretty stagnant in terms of repairability" across many of the eight most popular laptop brands in the US. However, Nathan Proctor, who directs PIRG's right-to-repair campaign, noted to Ars that consumers' access to the parts, tools, and information that vendors hold has improved, though improvements around ease of disassembly "take longer to realize." He also praised vendors' efforts to release more repairable designs, such as Apple's MacBook Neo.
For its repairability index, PIRG weighed physical ease of disassembly most heavily, while also considering the availability of repair documentation, spare parts, spare-parts affordability, and other product-specific criteria. It then adjusted company grades by deducting points for membership in trade groups that oppose right-to-repair laws and adding small bonuses for manufacturers that supported right-to-repair legislation.

Acer stood out as the only laptop vendor that avoided the 0.5-point trade-group penalty, since it was not listed as a member of TechNet or the Consumer Technology Association.
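The grade adjustment PIRG describes is easy to model. Here's a hypothetical sketch of that adjustment: the base scores and the size of the legislation bonus are invented for illustration, and only the 0.5-point-per-trade-group penalty comes from the report.

```python
# Hypothetical model of PIRG's score adjustment as described in the report.
# Base scores and the bonus magnitude are invented for illustration; the
# 0.5-point penalty per anti-right-to-repair trade group is from the report.

def adjusted_score(base_score: float, trade_group_memberships: int,
                   supported_r2r_legislation: bool = False) -> float:
    """Deduct 0.5 points per trade-group membership (e.g. TechNet, CTA);
    add a small bonus (size assumed here) for backing right-to-repair bills."""
    score = base_score - 0.5 * trade_group_memberships
    if supported_r2r_legislation:
        score += 0.25  # "small bonus" -- exact magnitude is an assumption
    return score

# Apple, Dell, and Samsung belonged to both TechNet and the CTA, so they
# lost a full point; Acer belonged to neither and kept its base score.
print(adjusted_score(7.0, trade_group_memberships=2))  # 6.0
print(adjusted_score(7.0, trade_group_memberships=0))  # 7.0
```

The linear penalty explains why membership in two trade groups costs a "full point" while Acer, a member of neither, escaped the deduction entirely.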
AI

Testing Suggests Google's AI Overviews Tells Millions of Lies Per Hour (arstechnica.com) 105

A New York Times analysis found Google's AI Overviews now answer questions correctly about 90% of the time, which might sound impressive until you realize that roughly 1 in 10 answers is wrong. "[F]or Google, that means hundreds of thousands of lies going out every minute of the day," reports Ars Technica. From the report: The Times conducted this analysis with the help of a startup called Oumi, which itself is deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI.

Oumi began running its test last year when Gemini 2.5 was still Google's best model. At the time, the benchmark showed an 85 percent accuracy rate. When the test was rerun following the Gemini 3 update, AI Overviews answered 91 percent of the questions correctly. If you extrapolate this miss rate out to all Google searches, AI Overviews is generating tens of millions of incorrect answers per day.
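The extrapolation itself is simple arithmetic, though the headline numbers depend entirely on volume estimates. A rough sketch follows; the daily-search figure and the share of queries that trigger an AI Overview are assumptions for illustration, not numbers from the Times analysis.

```python
# Rough extrapolation of the SimpleQA miss rate to search-wide error counts.
# Both volume figures below are assumptions for illustration; Google does not
# publish exact search totals, and the Times analysis used its own estimates.
ASSUMED_SEARCHES_PER_DAY = 14_000_000_000  # commonly cited public estimate
ASSUMED_OVERVIEW_SHARE = 0.20              # fraction of searches that show an AI Overview

accuracy = 0.91                            # Oumi's post-Gemini-3 SimpleQA result
error_rate = 1 - accuracy

overview_answers_per_day = ASSUMED_SEARCHES_PER_DAY * ASSUMED_OVERVIEW_SHARE
wrong_per_day = overview_answers_per_day * error_rate
wrong_per_minute = wrong_per_day / (24 * 60)

print(f"wrong answers per day:    {wrong_per_day:,.0f}")     # ~252 million
print(f"wrong answers per minute: {wrong_per_minute:,.0f}")  # ~175,000
```

Both outputs scale linearly with the assumed inputs, which is why the "hundreds of thousands of lies per minute" framing hinges on how many queries are estimated to actually trigger an AI Overview.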

The report includes several examples of where AI Overviews went wrong. When asked for the date on which Bob Marley's former home became a museum, AI Overviews cited three pages, two of which didn't discuss the date at all. The final one, Wikipedia, listed two contradictory years, and AI Overviews confidently chose the wrong one. The benchmark also prompts models to produce the date on which Yo Yo Ma was inducted into the classical music hall of fame. While AI Overviews cited the organization's website that listed Ma's induction, it claimed there's no such thing as the Classical Music Hall of Fame.
"This study has serious holes," said Google spokesperson Ned Adriance. "It doesn't reflect what people are actually searching on Google." The search giant likes to use a test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted.
The Internet

Russia's VPN Crackdown Caused Bank Outages, Telegram Founder Says (yahoo.com) 52

Russia's "great crackdown" on VPNs — and a clampdown on Telegram's messaging platform — had an unintended side effect, reports Bloomberg. It "triggered the widespread banking outage seen across the country this week, Telegram's billionaire founder Pavel Durov said." "Telegram was banned in Russia, yet 65 million Russians still use it daily via VPNs," Durov said Saturday in a post on Telegram. "The government has spent years trying to ban VPNs too. Their blocking attempts just triggered a massive banking failure; cash briefly became the only payment method nationwide yesterday." Attempts on Friday to limit VPN use could have sparked the disruption affecting banking apps, The Bell and other Russian media reported, citing industry sources who weren't identified.

The outage may have been caused by an overload in the filtering systems run by Russia's communications watchdog, according to the reports, with experts warning that major restrictions risk undermining network stability... Separately, payments for Apple Inc.'s app store and other services became unavailable in Russia from April 1, the US company said on its website, without saying why. Earlier, RBC newswire reported that the Digital Development Ministry had asked mobile operators to disable top-ups, which could help limit VPN use....

Durov, who's being investigated in Russia for allegedly aiding terrorist activity, compared the situation in his home country to Iran, where similar restrictions prompted widespread adoption of VPNs instead of the intended shift to state-backed messaging apps. "Welcome back to the Digital Resistance, my Russian brothers and sisters," said Durov, who has lived in Dubai and France in recent years. "The entire nation is now mobilized to bypass these absurd restrictions," he wrote, adding that Telegram would continue adapting to make its traffic harder to detect and block.

AI

Internet Bug Bounty Pauses Payouts, Citing 'Expanding Discovery' From AI-Assisted Research (infoworld.com) 14

The Internet Bug Bounty program "has been paused for new submissions," they announced last week.

Running since 2012, the program is funded by "a number of leading software companies," reports InfoWorld, "and has awarded more than $1.5m to researchers who have reported bugs." Up to now, 80% of its payouts have been for discoveries of new flaws, and 20% to support remediation efforts. But as artificial intelligence makes it easier to find bugs, that balance needs to change, HackerOne said in a statement. "AI-assisted research is expanding vulnerability discovery across the ecosystem, increasing both coverage and speed. The balance between findings and remediation capacity in open source has substantively shifted," said HackerOne.

Among the first programs to be affected is the Node.js project, a server-side JavaScript platform for web applications known for its extensive ecosystem. While the project team will continue to accept and triage bug reports through HackerOne, without funding from the Internet Bug Bounty program it will no longer pay out rewards, according to an announcement on its website...

[J]ust last month, Google also put a halt to AI-generated submissions provided to its Open Source Software Vulnerability Reward Program.

The Internet Bug Bounty stressed that "We have a responsibility to the community to ensure this program effectively accomplishes its ambitious dual purpose: discovery and remediation. Accordingly, we are pausing submissions while we consider the structure and incentives needed to further these goals..."

"We remain committed to strengthening open source security. Working with project maintainers and researchers, we're actively evaluating solutions to better align incentives with open source ecosystem realities and ensure vulnerability discoveries translate into durable remediation outcomes."
Movies

Hundreds of Theatres Show Apocalyptic-Yet-Optimistic New Movie, 'The AI Doc' (yahoo.com) 14

Hundreds of theatres are now showing a new documentary called The AI Doc: Or How I Became An Apocaloptimist. Variety calls it "playful and heady," edited "with a spirit of ADHD alertness." The New York Times suggests it "tries to cover so much that it ends up being more confusing than clarifying, but parts are fascinating."

But the Los Angeles Times calls it an "aggravating soup of information and opinion that wants to move at the speed of machine thought." So while co-director Daniel Roher asks whether he should bring a child into a world with AI, "Perhaps more urgently, should Roher have made an AI doc that treats us like children?" First, he parades all the safety doomers, seeming to believe their warnings that an unfeeling superintelligence is upon us and we can't trust it. Then, sufficiently disturbed, he hauls in the AI cheerleaders, a suspiciously positive gang who can envision only medical miracles and grindless lives in which we're all full-time artists. Only then, after this simplistic setup where platitudes reign, do we get the section in which the subject is treated like the brave (and grave) new world it is: geopolitically fraught, economically tenuous and a playground for billionaires.

Why couldn't the complexity have been the dialogue from the beginning, instead of the play-dumb cartoon "The AI Doc" feels like for so long? Maybe Roher believes this is what our increasingly gullible, truth-challenged citizenry needs from an explanatory doc: a flashy, kindhearted reminder that we're the change we need to be.

Read more reactions here and here. Mashable warns the documentary's director "will ultimately craft a journey that feels like a panic attack in real time. In the end, you may not feel better about mankind's chances against the rise of AI. But you'll likely feel less helpless in the future before us all."

Mashable also points out that the film "shares some ways its audience can more actively be a part of the conversation, and provides a link to the film's website for engagement," where 6,948 people have now signed up for its newsletter. ("Demand a seat at the table," urges its signup button, under a warning that "Government and AI companies are designing our future without us. We need to reclaim our voice in shaping the future of AI...")
The Almighty Buck

Netflix Must Refund Customers For Years of Price Hikes, Italian Court Rules (arstechnica.com) 46

A Rome court ruled that several Netflix price hikes in Italy were unlawful because the company's contracts didn't adequately explain or justify future pricing changes. As a result, Netflix has been ordered to issue refunds that could total roughly 500 euros for some long-term subscribers. Ars Technica reports: The lawsuit was brought by Italian consumer advocacy group Movimento Consumatori, which alleged that the price hikes violate the Consumer Code, Italian legislation that aims to protect consumer rights. The Consumer Code says it's unlawful for a "professional to unilaterally modify the clauses of the contract, or the characteristics of the product or service to be provided, without a justified reason indicated in the contract itself," according to a Google-provided translation.

The court's April 1 ruling determined that Netflix's contracts were required to explain in advance why prices or other terms might change in the future. Because the price hikes were found to be imposed without providing customers with valid justifications, the court ruled that the new prices are invalid and ordered Netflix to refund affected subscribers. This comes despite Netflix reportedly providing a 30-day advance notice of the higher fees and allowing customers to cancel their subscriptions to avoid price hikes.

The court gave Netflix 90 days to inform millions of current and former customers via email, mail, its website, and Italian newspapers of their right to refunds or else face a penalty of 700 euros per day, Italian newspaper Il Sole 24 Ore reported today. Per Italian law, price increases that Netflix has issued or will issue beyond April 2025 are legal. At that time, Netflix adjusted its terms to state that contract terms could one day change due to technological, security, or regulatory needs, to clarify clauses, or to provide changes to the service, Il Sole 24 Ore reported.

The Internet

Fan Fiction Website AO3 Exits Beta After 17 Years 3

Archive of Our Own (AO3) is officially dropping its "beta" label after 17 years. The Organization for Transformative Works, the nonprofit behind the fanfiction site, said the site will keep evolving with new improvements even though it's no longer technically in beta.

"As the AO3 software has been stable for a long time, the change is mostly cosmetic and does not indicate that everything is finalized or perfectly working," the organization says. "Exiting beta doesn't mean we'll stop continuing to improve AO3 -- our volunteer coders and community contributors will still be working to add to and improve AO3 every day."

Some of the features it's introduced over the years include a tag system, offline fanworks downloads, privacy settings that let creators restrict access to their work, and new modes for multi-chapter works. As it stands, the site says it has more than 10 million registered users and 17 million fanworks.
The Courts

Perplexity's 'Incognito Mode' Is a 'Sham,' Lawsuit Says 5

An anonymous reader quotes a report from Ars Technica: Perplexity's AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users' knowledge or consent. "This happened to every user regardless of whether or not they signed up for a Perplexity account," the lawsuit alleged, while stressing that "enormous volumes of sensitive information from both subscribed and non-subscribed users" are shared.

The lawsuit, citing tests with browser developer tools, alleged that opening prompts are always shared, as are any follow-up questions the search engine suggests that a user clicks on. Privacy concerns are seemingly worse for non-subscribed users, the complaint alleged. Their initial prompts are shared along with "a URL through which the entire conversation may be accessed by third parties like Meta and Google." Disturbingly, the lawsuit alleged, chats are also shared with personally identifiable information (PII), even when users who want to stay anonymous opt to use Perplexity's "Incognito Mode." That mode, the lawsuit charged, is a "sham."

"'Incognito' mode does nothing to protect users from having their conversations shared with Meta and Google," the complaint said. "Even paid users who turned on the 'Incognito' feature still had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed Meta and Google to personally identify them."
"Perplexity's failure to inform its users that their personal information has been disclosed to Meta and Google or to take any steps to halt the continued disclosure of users' information is malicious, oppressive, and in reckless disregard" of users' rights, the lawsuit alleged.

"Nothing on Perplexity's website warns users that their conversations with its AI Machine will be shared with Meta and Google," Doe alleged. "Much less does Perplexity warn subscribed users that its 'Incognito Mode' does not function to protect users' private conversations from disclosure to companies like Meta and Google."
AI

Group Pushing Age Verification Requirements For AI Sneakily Backed By OpenAI 54

An anonymous reader quotes a report from Gizmodo: OpenAI hasn't been shy about spending money lobbying for favorable laws and regulations. But when it comes to its involvement with child safety advocacy groups, the company has apparently decided it's best to stay in the shadows -- even if it means hiding from the people actually pushing for policy changes. According to a report from the San Francisco Standard, a number of people involved in the California-based Parents and Kids Safe AI Coalition were blindsided to learn their efforts were secretly being funded by OpenAI. Per the Standard, the Parents and Kids Safe AI Coalition was a group formed to push the Parents and Kids Safe AI Act, a piece of California legislation proposed earlier this year that would require AI firms to implement age verification and additional safeguards for users under the age of 18. That bill was backed by OpenAI in partnership with Common Sense Media, which proposed the legislation as a compromise after the two groups had pushed dueling ballot initiatives last year.

But when the coalition started to reach out to child safety groups and other advocacy organizations to try to get them to lend support to the bill, OpenAI was apparently conveniently left off the messaging. The AI giant was also left out of the marketing on the coalition's website, according to the Standard. That reportedly led to a number of groups and individuals lending their support to the Parents and Kids Safe AI Coalition without realizing that they were aligning themselves with OpenAI. As it turns out, OpenAI isn't just one of the members of the coalition; it is the group's biggest funder. In fact, the Standard characterized the Parents and Kids Safe AI Coalition as being "entirely funded" by OpenAI. While it's not clear exactly how much the company has funneled to this particular group, a Wall Street Journal report from January said OpenAI pledged $10 million to push the Parents and Kids Safe AI Act.
Gizmodo notes that OpenAI's backing of the Parents and Kids Safe AI Act "could be self-serving for CEO Sam Altman," who just so happens to head a company called World that provides age verification services.
Open Source

AI Can Clone Open-Source Software In Minutes 125

ZipNada writes: Two software researchers recently demonstrated how modern AI tools can reproduce entire open-source projects, creating proprietary versions that appear both functional and legally distinct. The partly-satirical demonstration shows how quickly artificial intelligence can blur long-standing boundaries between coding innovation, copyright law, and the open-source principles that underpin much of the modern internet.

In their presentation, Dylan Ayrey, founder of Truffle Security, and Mike Nolan, a software architect with the UN Development Program, introduced a tool they call malus.sh. For a small fee, the service can "recreate any open-source project," generating what its website describes as "legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems." It's a test case in how intellectual property law -- still rooted in 19th-century precedent -- collides with 21st-century automation. Since the US Supreme Court's 1879 Baker v. Selden ruling, copyright has been understood to guard expression, not ideas.

That boundary gave rise to clean-room design, a method by which engineers reverse-engineer systems without accessing the original source code. Phoenix Technologies famously used the technique to build its version of the PC BIOS during the 1980s. Ayrey and Nolan's experiment shows how AI can perform a clean-room process in minutes rather than months. But faster doesn't necessarily mean fair. Traditional clean-room efforts required human teams to document and replicate functionality -- a process that demanded both legal oversight and significant labor. By contrast, an AI-mediated "clean room" can be invoked through a few prompts, raising questions about whether such replication still counts as fair use or independent creation.
Social Networks

Bluesky's Newest Product: an AI Tool That Gives You Custom Feeds (attie.ai)

"What happens when you can describe the social experience you want and have it built for you...?" asks Bluesky. "We've just started experimenting, but we're sharing it now because we want you to build alongside us."

Called "Attie" — because it's built with Bluesky's decentralized publishing framework, AT Protocol (which is open source) — the new assistant turns natural language prompts into social feeds, without users having to know how to code. (It's part of Bluesky's mission to "develop and drive large-scale adoption of technologies for open and decentralized public conversation.")

Engadget reports: On the Attie website, examples include prompts like, "Show me electronic music and experimental sound from people in my network" or "Builders working on agent infrastructure and open protocol design."

"It feels more like having a conversation than configuring software," [writes Bluesky's former CEO/current chief innovation officer, Jay Graber, in a blog post]. "You describe the sort of posts you want to see, and the coding agent builds the feed you described."

Graber added that Attie is a separate app from Bluesky, and users don't have to use the new AI assistant if they don't want to. However, since Attie and Bluesky are built on the same framework, there could be some cross-app integration between the two, or with any other app built on the AT Protocol.

"Attie is open for beta signups today, and we'll be sharing what we learn along the way," Graber writes in the blog post. "To learn more about Attie, visit: Attie.AI. Come help us find out what this can be."

The blog post warns that "Right now, AI is undermining human agency at the same time it's enhancing it," since "The proliferation of low-quality AI-generated content is making public social networks noisier and less trustworthy..." And in a world where "signal is getting harder to find... The major platforms aren't trying to fix this problem." They're using AI to increase the time users spend on-platform, to harvest training data, and to shape what users see and believe through systems they can't inspect and didn't choose. We think AI should serve people, not platforms...

An open protocol puts this power directly in users' hands. You can use it to build your own feeds, create software that works the way you want it to, and find signal in the noise. We built the AT Protocol so anyone could build any app they imagine on top of it, but until recently "anyone" really meant "anyone who can code." Agentic coding tools change that. For the first time, an open protocol can be genuinely open to everyone...

The Atmosphere [Bluesky's interoperable ecosystem] is an open data layer with a clearly defined schema for applications, which makes it uniquely well-suited for coding agents to build on... Bluesky will continue to evolve as a social app millions of people rely on. Attie will be where we experiment with agentic social.

AI is an accelerant on whatever it's applied to. I want it to accelerate decentralizing social and putting power back in users' hands. But I don't think the most interesting things built on AT Protocol will come from us. They're going to come from everyone who picks up these tools and starts building.
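For readers curious what "building your own feeds" on the AT Protocol looks like under the hood: a custom feed is served by a feed generator, a service that answers the `app.bsky.feed.getFeedSkeleton` query with an ordered list of post URIs (a "skeleton") that the client's AppView then hydrates into full posts. Below is a minimal sketch of that response body in Python; the DIDs, record keys, and cursor value are made up for illustration, and a real generator would also handle pagination and request validation.

```python
import json

def feed_skeleton(post_uris, cursor=None):
    """Build the JSON body a feed generator returns for getFeedSkeleton.

    The skeleton is just an ordered list of at:// post URIs; the AppView
    fetches the actual post content. An optional cursor lets the client
    request the next page.
    """
    body = {"feed": [{"post": uri} for uri in post_uris]}
    if cursor is not None:
        body["cursor"] = cursor
    return json.dumps(body)

# Hypothetical example: two posts identified by their at:// URIs.
print(feed_skeleton(
    ["at://did:plc:abc123/app.bsky.feed.post/3xyz",
     "at://did:plc:def456/app.bsky.feed.post/3uvw"],
    cursor="1700000000::3uvw",
))
```

An agent like Attie would sit in front of logic like this, translating a natural-language prompt into the selection and ranking rules that decide which URIs go into the skeleton.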

United Kingdom

Apple Now Requires Device-Level Age Verification in the UK. Could the US Be Next? (gizmodo.com)

Apple unveiled new device-level age restrictions in the UK on Wednesday. "After downloading a new update, users will now have to confirm that they are 18 or older to access unrestricted features," reports Gizmodo.

"Users will be able to confirm their age with a credit card or by scanning an ID." For those underage or who have not confirmed their age, Apple will turn on Web Content Filter and Communication Safety, which will not only restrict access to certain apps or websites, but will also monitor messages, shared photo albums, AirDrop, and FaceTime calls for nudity. Apple didn't specify exactly which services and features are restricted for under-18 users, but the restrictions will likely track UK legislation...

The British government does not require Apple and other OS providers to institute device-level age checks, but it does restrict minor access to online pornography under the Online Safety Act, which passed in 2023. So far, that restriction has only been implemented at the website level, but UK officials have been worried about easy loopholes to evade the age restrictions, like VPNs.

The broader tech industry has been campaigning for some time to use device-level age checks instead in response to the rising tide of under-16 social media and internet bans around the world. Last month, in a landmark social media trial in California, Meta CEO Mark Zuckerberg also supported this idea, saying that conducting age verification "at the level of the phone is just a lot clearer than having every single app out there have to do this separately." Pornhub-operator Aylo had advocated for device-level restrictions in the UK as well, and even sent out letters to Apple, Google, and Microsoft in November asking for OS-level age verification...

The most obvious question: Could this be brought stateside?

Privacy

Iran-Linked Hackers Breach FBI Director's Personal Email (reuters.com)

An anonymous reader quotes a report from Reuters: Iran-linked hackers have broken into FBI Director Kash Patel's personal email inbox, publishing photographs of the director and other documents to the internet, the hackers and the bureau said on Friday. On their website, the hacker group Handala Hack Team said Patel "will now find his name among the list of successfully hacked victims." The hackers published a series of personal photographs of Patel sniffing and smoking cigars, riding in an antique convertible, and making a face while taking a picture of himself in the mirror with a large bottle of rum.

The FBI confirmed that Patel's emails had been targeted. In a statement, bureau spokesman Ben Williamson said, "we have taken all necessary steps to mitigate potential risks associated with this activity" and that the data involved was "historical in nature and involves no government information." Handala, which presents itself as a group of pro-Palestinian vigilante hackers, is considered by Western researchers to be one of several personas used by Iranian government cyberintelligence units. [...] Alongside the photographs of Patel, the hackers published a sample of more than 300 emails, which appear to show a mix of personal and work correspondence dating between 2010 and 2019.

Desktops (Apple)

Apple Discontinues Mac Pro (9to5mac.com)

Apple has discontinued the Mac Pro and says it has no plans for future models. "The 'buy' page on Apple's website for the Mac Pro now redirects to the Mac's homepage, where all references have been removed," reports 9to5Mac. From the report: The Mac Pro has lived many lives over the years. Apple released the current Mac Pro industrial design in 2019 alongside the Pro Display XDR (which was also discontinued earlier this month). That version of the Mac Pro was powered by Intel, and Apple refreshed it with the M2 Ultra chip in June 2023. It has gone without an update since then, languishing at its $6,999 price point even as Apple debuted the M3 Ultra chip in the Mac Studio last year.
