Encryption

Google Rolls Out Gmail End-To-End Encryption On Mobile Devices (bleepingcomputer.com) 25

Gmail's end-to-end encryption is now available on all Android and iOS devices, letting enterprise users send and read encrypted emails directly in the app without any extra tools. "This launch combines the highest level of privacy and data encryption with a user-friendly experience for all users, enabling simple encrypted email for all customers from small businesses to enterprises and public sector," Google announced in a blog post. BleepingComputer reports: Starting this week, encrypted messages will be delivered as regular emails to Gmail recipients' inboxes if they use the Gmail app. Recipients who don't have the Gmail mobile app and use other email services can read them in a web browser, regardless of the device and service they're using.

[...] This feature is now available for all client-side encryption (CSE) users with Enterprise Plus licenses and the Assured Controls or Assured Controls Plus add-on, once admins enable the Android and iOS clients in the CSE admin interface via the Admin Console. Gmail's end-to-end encryption (E2EE) feature is powered by the client-side encryption (CSE) technical control, which lets Google Workspace organizations protect sensitive documents and emails with encryption keys that they control and that are stored outside Google's servers.
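Conceptually, client-side encryption means the message is encrypted on the user's device with a key the organization (not Google) controls, so Google's servers only ever handle ciphertext. Here is a minimal sketch of that idea using AES-GCM from Python's `cryptography` package; the real CSE flow wraps keys through an external key service and packages mail as S/MIME, none of which is shown here:

```python
# Minimal illustration of the client-side encryption idea behind Gmail CSE:
# the plaintext is encrypted before it leaves the client, using a key the
# organization (not Google) controls. Real CSE wraps keys via an external
# key service and packages mail as S/MIME; none of that is shown here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_message(org_key: bytes, plaintext: str) -> tuple[bytes, bytes]:
    """Encrypt on the client; the server only ever sees nonce + ciphertext."""
    nonce = os.urandom(12)  # must be unique per message under the same key
    ciphertext = AESGCM(org_key).encrypt(nonce, plaintext.encode(), None)
    return nonce, ciphertext

def decrypt_message(org_key: bytes, nonce: bytes, ciphertext: bytes) -> str:
    """Decrypt on the recipient's client with the same org-held key."""
    return AESGCM(org_key).decrypt(nonce, ciphertext, None).decode()

key = AESGCM.generate_key(bit_length=256)  # held by the org's key service
n, ct = encrypt_message(key, "Quarterly numbers attached.")
assert decrypt_message(key, n, ct) == "Quarterly numbers attached."
```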

The Almighty Buck

CERN To Host Europe's Flagship Open Access Publishing Platform (home.cern) 30

CERN has confirmed it will host an expanded version of Open Research Europe, the EU-backed fee-free open access publishing platform that works to "keep knowledge in public hands." Research Professional News reports: A little over a year ago, 10 European research organizations announced that they would add their support to Open Research Europe, to broaden eligibility beyond only those researchers funded by the EU research program. Earlier this year, RPN reported that this group had expanded further and that Cern was set to host the broadened version of ORE, currently provided by the publisher F1000.

On March 26, Cern itself finally announced the news, saying it will "provide the technical and operational infrastructure" for the broader version. It said this will build on its "longstanding experience in developing and maintaining open science infrastructures and community-governed services." [...] In its own announcement, the Commission said ORE will have a budget of 17 million euros for 2026-31, with the EU providing 10 million euros.

Since it launched five years ago, ORE has published more than 1,200 articles. Cern said the platform is "expected to support a growing number of research outputs each year." Last month, experts told RPN they thought uptake of the increased eligibility would depend on how the newly participating national organizations engage with their communities. Eleven members of Science Europe, a group of major research funding and performing organizations, are part of the expansion.

Electronic Frontier Foundation

EFF Tells Publishers: Blocking the Internet Archive Won't Stop AI, But It Will Erase The Historical Record (eff.org) 27

"Imagine a newspaper publisher announcing it will no longer allow libraries to keep copies of its paper," writes EFF senior policy analyst Joe Mullin.

"That's effectively what's begun happening online in the last few months." The Internet Archive — the world's largest digital library — has preserved newspapers since it went online in the mid-1990s... But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web's traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit...

The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several — including the Times — are now suing AI companies over whether training models on copyrighted material violates the law. There's a strong case that such training is fair use. Whatever the outcome of those lawsuits, blocking nonprofit archivists is the wrong response.

Organizations like the Internet Archive are not building commercial AI systems. They are preserving a record of our history. Turning off that preservation in an effort to control AI access could essentially torch decades of historical documentation over a fight that libraries like the Archive didn't start, and didn't ask for. If publishers shut the Archive out, they aren't just limiting bots. They're erasing the historical record...

Even if courts place limits on AI training, the law protecting search and web archiving is already well established... There are real disputes over AI training that must be resolved in courts. But sacrificing the public record to fight those battles would be a profound, and possibly irreversible, mistake.

Programming

New 'Vibe Coded' AI Translation Tool Splits the Video Game Preservation Community (arstechnica.com) 43

An anonymous reader quotes a report from Ars Technica: Since Andrej Karpathy coined the term "vibe coding" just over a year ago, we've seen a rapid increase in both the capabilities and popularity of using AI models to throw together quick programming projects with less human time and effort than ever before. One such vibe-coded project, Gaming Alexandria Researcher, launched over the weekend as what coder Dustin Hubbard called an effort to help organize the hundreds of scanned Japanese gaming magazines he's helped maintain at clearinghouse Gaming Alexandria over the years, alongside machine translations of their OCR text.
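As described, the project pairs OCR of the scanned pages with machine translation of the extracted text. A minimal sketch of that general pipeline, assuming the real pytesseract and Pillow packages and a deliberately hypothetical translate() stub; this is not Gaming Alexandria Researcher's actual code:

```python
# Generic OCR-plus-machine-translation pipeline of the kind described.
# pytesseract and Pillow are real packages, but translate() is a
# hypothetical placeholder for whatever MT backend a project might call.
from PIL import Image
import pytesseract

def translate(japanese_text: str) -> str:
    # Hypothetical stub; swap in a real machine-translation call here.
    return "[machine translation placeholder] " + japanese_text

def process_scan(path: str) -> dict:
    page = Image.open(path)
    # Requires Tesseract's Japanese language data (jpn.traineddata)
    ocr_text = pytesseract.image_to_string(page, lang="jpn")
    return {"source": path, "ocr": ocr_text, "translation": translate(ocr_text)}
```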

A day after that project went public, though, Hubbard was issuing an apology to many members of the Gaming Alexandria community who loudly objected to the use of Patreon funds for an error-prone AI-powered translation effort. The hubbub highlights just how controversial AI tools remain for many online communities, even as many see them as ways to maximize limited funds and man-hours. "I sincerely apologize," Hubbard wrote in his apology post. "My entire preservation philosophy has been to get people access to things we've never had access to before. I felt this project was a good step towards that, but I should have taken more into consideration the issues with AI."
"I'm very, very disappointed to see [Gaming Alexandria], one of the foremost organizations for preserving game history, promoting the use of AI translation and using Patreon funds to pay for AI licenses," game designer and Legend of Zelda historian Max Nichols wrote in a post on Bluesky over the weekend. "I have cancelled my Patreon membership and will no longer promote the organization."

Nichols later deleted his original message (archived here), saying he was "uncomfortable with the scale of reposts and anger" it had generated in the community. However, he maintained his core criticism: that Gemini-generated translations inevitably introduce inaccuracies that make them unreliable for scholarly use.

In a follow-up, he also objected to Patreon funds being used to pay for AI tools that produce what he called "untrustworthy" translations, arguing they distort history and are not valid sources for research. "... It's worthless and destructive: these translations are like looking at history through a clownhouse mirror," he added.
Social Networks

Social Networks Agree to Be Rated On Their Teen Safety Efforts (yahoo.com) 14

Meta, TikTok, Snap and other social networks agreed this week to be rated on their teen safety efforts, reports the Los Angeles Times, "amid rising concern about whether the world's largest social media platforms are doing enough to protect the mental health of young people." The Mental Health Coalition, a collective of organizations focused on destigmatizing mental health issues, said Tuesday that it is launching standards and a new rating system for online platforms. For the Safe Online Standards (S.O.S.) program, an independent panel of global experts will evaluate companies on parameters including safety rules, design, moderation and mental health resources. TikTok, Snap and Meta — the parent company of Facebook and Instagram — will be the first companies to be graded. Discord, YouTube, Pinterest, Roblox and Twitch have also agreed to participate, the coalition said in a news release.

"These standards provide the public with a meaningful way to evaluate platform protections and hold companies accountable — and we look forward to more tech companies signing up for the assessments," Antigone Davis, vice president and global head of safety at Meta, said in a statement... The ratings will be color-coded, and companies that perform well on the tests will get a blue shield badge that signals they help reduce harmful content on the platform and their rules are clear. Those that fall short will receive a red rating, indicating they're not reliably blocking harmful content or lack proper rules. Ratings in other colors indicate whether the platforms have partial protection or whether their evaluations haven't been completed yet.

AI

New Bill in New York Would Require Disclaimers on AI-Generated News Content (niemanlab.org) 33

An anonymous reader shares a report: A new bill in the New York state legislature would require news organizations to label AI-generated material and mandate that humans review any such content before publication. On Monday, Senator Patricia Fahy (D-Albany) and Assemblymember Nily Rozic (D-NYC) introduced the bill, called The New York Fundamental Artificial Intelligence Requirements in News Act -- The NY FAIR News Act for short.

"At the center of the news industry, New York has a strong interest in preserving journalism and protecting the workers who produce it," said Rozic in a statement announcing the bill. A closer look at the bill shows a few regulations, mostly centered around AI transparency, both for the public and in the newsroom. For one, the law would demand that news organizations put disclaimers on any published content that is "substantially composed, authored, or created through the use of generative artificial intelligence."

United States

CIA Has Killed Off The World Factbook After Six Decades (cia.gov) 111

The CIA has shut down The World Factbook, one of its oldest and most recognizable public-facing intelligence publications, ending a run that began as a classified reference document in 1962 and evolved into a freely accessible digital resource that drew millions of views each year.

The agency offered no explanation for the decision. Originally titled The National Basic Intelligence Factbook, the publication was first released in unclassified form in 1971, was renamed a decade later, and moved online at CIA.gov in 1997. It served researchers, news organizations, teachers, students and international travelers. The site hosted more than 5,000 copyright-free photographs, some donated by CIA officers from their personal travel. Every page now redirects to a farewell announcement.
Security

Never-Before-Seen Linux Malware Is 'Far More Advanced Than Typical' (arstechnica.com) 27

An anonymous reader quotes a report from Ars Technica: Researchers have discovered a never-before-seen framework that infects Linux machines with a wide assortment of modules that are notable for the range of advanced capabilities they provide to attackers. The framework, referred to as VoidLink after a name found in its source code, features more than 30 modules that can be used to customize capabilities to meet attackers' needs for each infected machine. These modules can provide additional stealth and specific tools for reconnaissance, privilege escalation, and lateral movement inside a compromised network. The components can be easily added or removed as objectives change over the course of a campaign.

VoidLink can target machines within popular cloud services by detecting whether an infected machine is hosted inside AWS, GCP, Azure, Alibaba, or Tencent, and there are indications that its developers plan to add detection for Huawei, DigitalOcean, and Vultr in future releases. To determine which cloud service hosts the machine, VoidLink queries the respective vendor's instance-metadata API. Similar frameworks targeting Windows servers have flourished for years; they are less common on Linux machines. The feature set is unusually broad and is "far more advanced than typical Linux malware," said researchers from Check Point, the security firm that discovered VoidLink. Its creation may indicate that attackers are expanding their focus to Linux systems, cloud infrastructure, and application deployment environments as organizations move more workloads there.
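Detecting the hosting provider from inside a machine is a well-documented technique: each vendor runs an instance-metadata service reachable only from within its own cloud. A sketch of how such a check can work; the endpoints shown are the vendors' documented metadata services, but whether VoidLink's probes look exactly like this is an assumption:

```python
# Sketch of cloud-provider detection via instance-metadata endpoints, the
# general technique the researchers describe. The endpoints are the
# vendors' documented metadata services; VoidLink's exact logic is unknown.
import urllib.request

PROBES = {
    "AWS":     ("http://169.254.169.254/latest/meta-data/", {}),
    "GCP":     ("http://metadata.google.internal/computeMetadata/v1/",
                {"Metadata-Flavor": "Google"}),
    "Azure":   ("http://169.254.169.254/metadata/instance?api-version=2021-02-01",
                {"Metadata": "true"}),
    "Alibaba": ("http://100.100.100.200/latest/meta-data/", {}),
    "Tencent": ("http://metadata.tencentyun.com/latest/meta-data/", {}),
}

def detect_cloud(timeout: float = 0.5) -> str | None:
    """Return the first vendor whose metadata service answers, else None."""
    for vendor, (url, headers) in PROBES.items():
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, timeout=timeout):
                return vendor
        except OSError:
            continue  # endpoint unreachable: not hosted on this vendor
    return None

print(detect_cloud() or "no cloud metadata service detected")
```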
"VoidLink is a comprehensive ecosystem designed to maintain long-term, stealthy access to compromised Linux systems, particularly those running on public cloud platforms and in containerized environments," the researchers said in a separate post. "Its design reflects a level of planning and investment typically associated with professional threat actors rather than opportunistic attackers, raising the stakes for defenders who may never realize their infrastructure has been quietly taken over."

The researchers note that VoidLink requires no immediate action, since it has not been observed actively targeting systems; defenders should nonetheless remain vigilant.
Advertising

Vietnam Bans Unskippable Ads (phunuonline.com.vn) 50

Vietnam will begin enforcing new online advertising rules in February 2026 that ban forced video ads longer than five seconds and require that users be able to close ads with a single tap. "Furthermore, platforms must provide clear icons and instructions for users to report advertisements that violate the law, and allow them to opt out, turn off, or stop viewing inappropriate ads," reports a local news outlet (translated to English). "These reports must be received and processed promptly, and the results communicated to users as required." From the report: In cases where the entity posting the infringing advertisement cannot be identified or where specialized laws do not have specific regulations, the Ministry of Culture, Sports and Tourism is the focal agency to receive notifications and send requests to block or remove the advertisement to organizations and businesses providing online advertising services in Vietnam.

Advertisers, advertising service providers, and advertising transmission and distribution units are responsible for blocking and removing infringing advertisements within 24 hours of receiving a request from the competent authority. For advertisements that infringe on national security, the blocking and removal must be carried out immediately, no later than 24 hours.

In case of non-compliance, the Ministry of Culture, Sports and Tourism, in coordination with the Ministry of Public Security, will apply technical measures to block infringing advertisements and services and handle the matter according to the law. Telecommunications companies and Internet service providers must also implement technical measures to block access to infringing advertisements within 24 hours of receiving a request.

Censorship

US Bars Five Europeans It Says Pressured Tech Firms To Censor American Viewpoints Online (apnews.com) 169

An anonymous reader quotes a report from the Associated Press: The State Department announced Tuesday it was barring five Europeans it accused of leading efforts to pressure U.S. tech firms to censor or suppress American viewpoints. The Europeans, characterized by Secretary of State Marco Rubio as "radical" activists and "weaponized" nongovernmental organizations, fell afoul of a new visa policy announced in May to restrict the entry of foreigners deemed responsible for censorship of protected speech in the United States. "For far too long, ideologues in Europe have led organized efforts to coerce American platforms to punish American viewpoints they oppose," Rubio posted on X. "The Trump Administration will no longer tolerate these egregious acts of extraterritorial censorship."

The five Europeans were identified by Sarah Rogers, the under secretary of state for public diplomacy, in a series of posts on social media. [...] The five Europeans named by Rogers are: Imran Ahmed, chief executive of the Centre for Countering Digital Hate; Josephine Ballon and Anna-Lena von Hodenberg, leaders of HateAid, a German organization; Clare Melford, who runs the Global Disinformation Index; and former EU Commissioner Thierry Breton, who was responsible for digital affairs. Rogers in her post on X called Breton, a French business executive and former finance minister, the "mastermind" behind the EU's Digital Services Act, which imposes a set of strict requirements designed to keep internet users safe online. This includes flagging harmful or illegal content like hate speech. She referred to Breton warning Musk of a possible "amplification of harmful content" by broadcasting his livestream interview with Trump in August 2024 when he was running for president.

AI

Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power? (noemamag.com) 183

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities at University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..."

"When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent..." Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions...

We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Some key points:
  • "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."
  • "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."
  • "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is... "
  • "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."
  • "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."
  • "The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."
  • "The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..." [He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."
  • "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..."

"The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation.

"It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."


Encryption

Cryptologist DJB Criticizes Push to Finalize Non-Hybrid Security for Post-Quantum Cryptography (cr.yp.to) 21

In October cryptologist/CS professor Daniel J. Bernstein alleged that America's National Security Agency (and its UK counterpart GCHQ) was attempting to influence NIST to adopt weaker post-quantum cryptography standards without a "hybrid" approach that would've also included pre-quantum ECC.

Bernstein is of the opinion that "Given how many post-quantum proposals have been broken and the continuing flood of side-channel attacks, any competent engineering evaluation will conclude that the best way to deploy post-quantum [PQ] encryption for TLS, and for the Internet more broadly, is as double encryption: post-quantum cryptography on top of ECC." But he says he's seen it playing out differently:

By 2013, NSA had a quarter-billion-dollar-a-year budget to "covertly influence and/or overtly leverage" systems to "make the systems in question exploitable"; in particular, to "influence policies, standards and specification for commercial public key technologies". NSA is quietly using stronger cryptography for the data it cares about, but meanwhile is spending money to promote a market for weakened cryptography, the same way that it successfully created decades of security failures by building up the market for, e.g., 40-bit RC4 and 512-bit RSA and Dual EC.

I looked concretely at what was happening in IETF's TLS working group, compared to the consensus requirements for standards-development organizations. I reviewed how a call for "adoption" of an NSA-driven specification produced a variety of objections that weren't handled properly. ("Adoption" is a preliminary step before IETF standardization...) On 5 November 2025, the chairs issued "last call" for objections to publication of the document. The deadline for input is "2025-11-26", this coming Wednesday.
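The "double encryption" he argues for is typically realized as a hybrid key exchange: derive one shared secret via ECC (e.g., X25519) and one via a post-quantum KEM (e.g., ML-KEM), then feed both through a KDF so an attacker must break both primitives. A sketch of the combining step, using the real X25519 and HKDF APIs from Python's `cryptography` package and a clearly hypothetical stand-in for the ML-KEM half:

```python
# Sketch of hybrid ("double") key derivation: the session key depends on
# BOTH an ECC shared secret and a post-quantum KEM shared secret, so
# breaking either primitive alone is not enough. X25519 and HKDF are real
# APIs from the `cryptography` package; pq_encapsulate() is a hypothetical
# placeholder for an ML-KEM implementation.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def pq_encapsulate(pq_public_key: bytes) -> tuple[bytes, bytes]:
    """Hypothetical ML-KEM encapsulation: returns (ciphertext, shared_secret)."""
    return os.urandom(1088), os.urandom(32)  # placeholder values only

# Classical half: X25519 Diffie-Hellman
client = X25519PrivateKey.generate()
server = X25519PrivateKey.generate()
ecc_secret = client.exchange(server.public_key())

# Post-quantum half: KEM encapsulation against the server's PQ public key
_ct, pq_secret = pq_encapsulate(b"server-pq-public-key-placeholder")

# Combine: the session key is a KDF over the concatenated secrets, so an
# attacker must recover BOTH halves to learn the key.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid tls sketch",
).derive(ecc_secret + pq_secret)
```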
Bernstein also argues that the document is "out of scope" for the IETF TLS working group: This document doesn't serve any of the official goals in the TLS working group charter. Most importantly, this document is directly contrary to the "improve security" goal, so it would violate the charter even if it contributed to another goal... Half of the PQ proposals submitted to NIST in 2017 have been broken already... often with attacks having sufficiently low cost to demonstrate on readily available computer equipment. Further PQ software has been broken by implementation issues such as side-channel attacks.
He's also concerned about how that discussion is being handled: On 17 October 2025, they posted a "Notice of Moderation for Postings by D. J. Bernstein" saying that they would "moderate the postings of D. J. Bernstein for 30 days due to disruptive behavior effective immediately" and specifically that my postings "will be held for moderation and after confirmation by the TLS Chairs of being on topic and not disruptive, will be released to the list"...

I didn't send anything to the IETF TLS mailing list for 30 days after that. Yesterday [November 22nd] I finished writing up my new objection and sent that in. And, gee, after more than 24 hours it still hasn't appeared... Presumably the chairs "forgot" to flip the censorship button off after 30 days.

Thanks to alanw (Slashdot reader #1,822) for spotting the blog posts.
Programming

Microsoft and GitHub Preview New Tool That Identifies, Prioritizes, and Fixes Vulnerabilities With AI (thenewstack.io) 18

"Security, development, and AI now move as one," says Microsoft's director of cloud/AI security product marketing.

Microsoft and GitHub "have launched a native integration between Microsoft Defender for Cloud and GitHub Advanced Security that aims to address what one executive calls decades of accumulated security debt in enterprise codebases..." according to The New Stack: The integration, announced this week in San Francisco at the Microsoft Ignite 2025 conference and now available in public preview, connects runtime intelligence from production environments directly into developer workflows. The goal is to help organizations prioritize which vulnerabilities actually matter and use AI to fix them faster. "Throughout my career, I've seen vulnerability trends going up into the right. It didn't matter how good of a detection engine and how accurate our detection engine was, people just couldn't fix things fast enough," said Marcelo Oliveira, VP of product management at GitHub, who has spent nearly a decade in application security. "That basically resulted in decades of accumulation of security debt into enterprise code bases." According to industry data, critical and high-severity vulnerabilities constitute 17.4% of security backlogs, with a mean time to remediation of 116 days, said Andrew Flick, senior director of developer services, languages and tools at Microsoft, in a blog post. Meanwhile, applications face attacks as frequently as once every three minutes, Oliveira said.

The integration represents the first native link between runtime intelligence and developer workflows, said Elif Algedik, director of product marketing for cloud and AI security at Microsoft, in a blog post... The problem, according to Flick, comes down to three challenges: security teams drowning in alert fatigue while AI rapidly introduces new threat vectors that they have little time to understand; developers lacking clear prioritization while remediation takes too long; and both teams relying on separate, nonintegrated tools that make collaboration slow and frustrating... The new integration works bidirectionally. When Defender for Cloud detects a vulnerability in a running workload, that runtime context flows into GitHub, showing developers whether the vulnerability is internet-facing, handling sensitive data or actually exposed in production. This is powered by what GitHub calls the Virtual Registry, which creates code-to-runtime mapping, Flick said...

In the past, this alert would age in a dashboard while developers worked on unrelated fixes because they didn't know this was the critical one, he said. Now, a security campaign can be created in GitHub, filtering for runtime risk like internet exposure or sensitive data, notifying the developer to prioritize this issue.
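Neither company has published the scoring model behind this prioritization, but the described behavior (the same finding becomes more urgent when runtime context shows exposure) can be pictured with a simple sketch; every field name and weight here is invented for illustration and is not the product's actual model:

```python
# Hypothetical illustration of runtime-aware vulnerability prioritization
# as described in the article: the same finding ranks higher when runtime
# context says it is internet-facing or touches sensitive data. Field
# names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    severity: float               # e.g., CVSS base score, 0-10
    internet_facing: bool         # runtime context from production
    handles_sensitive_data: bool
    deployed: bool                # is the vulnerable code actually running?

def runtime_priority(f: Finding) -> float:
    score = f.severity
    if not f.deployed:
        return score * 0.1        # not running anywhere: far less urgent
    if f.internet_facing:
        score *= 2.0
    if f.handles_sensitive_data:
        score *= 1.5
    return score

backlog = [
    Finding("CVE-2025-0001", 9.8, False, False, False),
    Finding("CVE-2025-0002", 7.5, True, True, True),
]
backlog.sort(key=runtime_priority, reverse=True)
print([f.cve for f in backlog])   # the exposed, running workload comes first
```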

GitHub Copilot "now automatically checks dependencies, scans for first-party code vulnerabilities and catches hardcoded secrets before code reaches developers," the article points out — but GitHub's VP of product management says this takes things even further.

"We're not only helping you fix existing vulnerabilities, we're also reducing the number of vulnerabilities that come into the system when the level of throughput of new code being created is increasing dramatically with all these agentic coding agent platforms."
Businesses

GoFundMe Created 1.4 Million Donation Pages for Nonprofits Without Their Consent (abc7news.com) 66

San Francisco's local newscast ABC7 runs a consumer advocacy segment called "7 on Your Side". They received a disturbing call for help from Dave Dornlas, treasurer of a nonprofit supporting a local library: GoFundMe has taken it upon itself to create "nonprofit pages" for 1.4 million 501(c)(3) organizations using public IRS data along with information from trusted partners like the PayPal Giving Fund. "The fact that they would just on their own build pages for nonprofits that they've never spoken to is a problem," [Dornlas] said. "I'm a believer in opt-in, not opt-out...." Dornlas says he struggled to find anyone to contact from GoFundMe about this... Dave's other frustration is tied to the company's optional tipping feature on the platform. "GoFundMe also solicits a tip of 14.5%. In other words, 'We're doing this and we're great people. Give us 14.5% to do this' — which doesn't have to happen," Dornlas said. "That's what bothers me." When 7 On Your Side checked, the optional tip was actually set at 16.5%; the consumer must move the slider to adjust it... The tip would be in addition to the 2.2% transaction fee GoFundMe charges nonprofits, plus $0.30 per donation. That fee goes up to 2.9% for individual fundraisers.
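Using only the figures quoted above, the arithmetic on a hypothetical $100 donation looks like this (assuming, as GoFundMe's checkout normally works, that the optional tip is added on top of the donation rather than deducted from it):

```python
# Worked example using the figures quoted above: GoFundMe's 2.2% + $0.30
# nonprofit transaction fee, and the 16.5% default tip 7 On Your Side saw.
# Assumes the optional tip is added on top of the donation at checkout.
donation = 100.00
fee = donation * 0.022 + 0.30        # 2.2% + $0.30 per donation
tip = donation * 0.165               # default slider position; adjustable to 0

nonprofit_receives = donation - fee  # $97.50
donor_pays = donation + tip          # $116.50 if the slider isn't moved
gofundme_keeps = fee + tip           # $18.70

print(f"nonprofit: ${nonprofit_receives:.2f}, donor: ${donor_pays:.2f}, "
      f"GoFundMe: ${gofundme_keeps:.2f}")
```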

Now both GoFundMe pages of Dornlas's nonprofits have been removed from the site. Any organization can remove its page by clicking "unpublish" on the platform.

But GoFundMe's move drew strong criticism from the Center for Nonprofit Excellence (a Kentucky-based membership organization with over 500 members). GoFundMe's move, they say, creates "confusion for donors and supporters who are unsure of the legitimacy of the fundraising pages. In some cases, GoFundMe included incorrect information, outdated logos, and other inaccuracies that compromise and misrepresent nonprofits' brand, mission, strategy, and message."

And GoFundMe's processing fees and tips "ultimately result in fewer resources for nonprofits than if donors contributed directly through the organization." But there's more... GoFundMe has initiated SEO optimization as the default for the donation pages to improve their visibility when individuals search for information about nonprofits online. This could result in GoFundMe's pages ranking higher than the nonprofit's own website, pulling away potential donors and supporters...

Without adequate safeguards in place, nonprofits report serious issues, ranging from unauthorized individuals claiming donations and the inability to remove pages without first agreeing to GoFundMe's terms and conditions or sharing sensitive banking information.

The Center for Nonprofit Excellence has now joined with the National Council of Nonprofits — America's largest network of nonprofits, with over 25,000 members — to officially urge GoFundMe to immediately rectify the situation.

Thanks to long-time Slashdot reader Arrogant-Bastard for sharing the article.
EU

Austria's Ministry of Economy Has Migrated To a Nextcloud Platform In Shift Away From US Tech (zdnet.com) 10

An anonymous reader quotes a report from ZDNet: Even before Azure had a global failure this week, Austria's Ministry of Economy had taken a decisive step toward digital sovereignty. The Ministry did so by migrating 1,200 employees to a Nextcloud-based cloud and collaboration platform hosted on Austrian infrastructure. This shift away from proprietary, foreign-owned cloud services, such as Microsoft 365, to an open-source, European-based cloud service aligns with a growing trend among European governments and agencies. They want control over sensitive data and to declare their independence from US-based tech providers.

European companies are encouraging this trend. Many of them have joined forces in the newly created non-profit foundation, the EuroStack Initiative. This foundation's goal is "to organize action, not just talk, around the pillars of the initiative: Buy European, Sell European, Fund European." What's the motive behind these moves away from proprietary tech? Well, in Austria's case, Florian Zinnagl, CISO of the Ministry of Economy, Energy, and Tourism (BMWET), explained, "We carry responsibility for a large amount of sensitive data -- from employees, companies, and citizens. As a public institution, we take this responsibility very seriously. That's why we take a critical view of relying on cloud solutions from non-European corporations to process this information."

Austria's move and motivation echo similar efforts in Germany, Denmark, and other EU states and agencies. The organizations include the German state of Schleswig-Holstein, which abandoned Exchange and Outlook for open-source programs. Other agencies that have taken the same path away from Microsoft include the Austrian military, Danish government organizations, and the French city of Lyon. All of these organizations aim to keep data storage and processing within national or European borders to enhance security, comply with privacy laws such as the EU's General Data Protection Regulation (GDPR), and mitigate risks from potential commercial and foreign government surveillance.

Security

Ransomware Profits Drop As Victims Stop Paying Hackers (bleepingcomputer.com) 16

An anonymous reader quotes a report from BleepingComputer: The number of victims paying ransomware threat actors has reached a new low, with just 23% of breached companies giving in to attackers' demands. With some exceptions, the decline in payment resolution rates continues the trend Coveware has observed for the past six years. In the first quarter of 2024, the payment percentage was 28%; despite a brief uptick in the following period, it has continued to fall, reaching an all-time low in the third quarter of 2025.

One explanation is that organizations have implemented stronger, more targeted protections against ransomware, and that authorities are increasing pressure on victims not to pay the hackers. [...] Over the years, ransomware groups moved from pure encryption attacks to double extortion that came with data theft and the threat of a public leak. Coveware reports that more than 76% of the attacks it observed in Q3 2025 involved data exfiltration, which is now the primary objective for most ransomware groups. The company says that when it isolates the attacks that do not encrypt the data and only steal it, the payment rate plummets to 19%, which is also a record for that sub-category.

The average and median ransomware payments fell in Q3 compared to the previous quarter, reaching $377,000 and $140,000, respectively, according to Coveware. The shift may reflect large enterprises revising their ransom payment policies and recognizing that those funds are better spent on strengthening defenses against future attacks. The researchers also note that threat groups like Akira and Qilin, which accounted for 44% of all recorded attacks in Q3 2025, have switched focus to medium-sized firms that are currently more likely to pay a ransom.
"Cyber defenders, law enforcement, and legal specialists should view this as validation of collective progress," Coveware says. "The work that gets put in to prevent attacks, minimize the impact of attacks, and successfully navigate a cyber extortion -- each avoided payment constricts cyber attackers of oxygen."
AI

AI Assistants Misrepresent News Content 45% of the Time (bbc.co.uk) 112

An anonymous reader quotes a report from the BBC: New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants -- already a daily information gateway for millions of people -- routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly, in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

Key findings:
- 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems - missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
- Comparison of the BBC's results from earlier this year with this study shows some improvements, but still high levels of errors.
The team has released a News Integrity in AI Assistants Toolkit to help develop solutions to these problems and boost users' media literacy. They're also urging regulators to enforce laws on information integrity and continue independent monitoring of AI assistants.
Microsoft

Extortion and Ransomware Drive Over Half of Cyberattacks — Sometimes Using AI, Microsoft Finds (microsoft.com) 23

Microsoft said in a blog post this week that "over half of cyberattacks with known motives were driven by extortion or ransomware... while attacks focused solely on espionage made up just 4%."

And Microsoft's annual digital threats report found operations expanding even more through AI, with cybercriminals "accelerating malware development and creating more realistic synthetic content, enhancing the efficiency of activities such as phishing and ransomware attacks." [L]egacy security measures are no longer enough; we need modern defenses leveraging AI and strong collaboration across industries and governments to keep pace with the threat...

Over the past year, both attackers and defenders harnessed the power of generative AI. Threat actors are using AI to boost their attacks by automating phishing, scaling social engineering, creating synthetic media, finding vulnerabilities faster, and creating malware that can adapt itself... For defenders, AI is also proving to be a valuable tool. Microsoft, for example, uses AI to spot threats, close detection gaps, catch phishing attempts, and protect vulnerable users. As both the risks and opportunities of AI rapidly evolve, organizations must prioritize securing their AI tools and training their teams...

Amid the growing sophistication of cyber threats, one statistic stands out: more than 97% of identity attacks are password attacks. In the first half of 2025 alone, identity-based attacks surged by 32%. That means the vast majority of malicious sign-in attempts an organization might receive are via large-scale password guessing attempts. Attackers get usernames and passwords ("credentials") for these bulk attacks largely from credential leaks. However, credential leaks aren't the only place where attackers can obtain credentials. This year, we saw a surge in the use of infostealer malware by cybercriminals...

Luckily, the solution to identity compromise is simple. The implementation of phishing-resistant multifactor authentication (MFA) can stop over 99% of this type of attack even if the attacker has the correct username and password combination.

"Security is not only a technical challenge but a governance imperative..." Microsoft adds in their blog post. "Governments must build frameworks that signal credible and proportionate consequences for malicious activity that violates international rules." (The report also found that America is the #1 most-targeted country — and that many U.S. companies have outdated cyber defenses.)

But while "most of the immediate attacks organizations face today come from opportunistic criminals looking to make a profit," Microsoft writes that nation-state threats "remain a serious and persistent threat." More details from the Associated Press: Russia, China, Iran and North Korea have sharply increased their use of artificial intelligence to deceive people online and mount cyberattacks against the United States, according to new research from Microsoft. This July, the company identified more than 200 instances of foreign adversaries using AI to create fake content online, more than double the number from July 2024 and more than ten times the number seen in 2023.
Examples of foreign espionage cited by the article:
  • China is continuing its broad push across industries to conduct espionage and steal sensitive data...
  • Iran is going after a wider range of targets than ever before, from the Middle East to North America, as part of broadening espionage operations...
  • "[O]utside of Ukraine, the top ten countries most affected by Russian cyber activity all belong to the North Atlantic Treaty Organization (NATO) — a 25% increase compared to last year."
  • North Korea remains focused on revenue generation and espionage...

There was one especially worrying finding. The report found that critical public services are often targeted, partly because their tight budgets limit their incident response capabilities, "often resulting in outdated software.... Ransomware actors in particular focus on these critical sectors because of the targets' limited options. For example, a hospital must quickly resolve its encrypted systems, or patients could die, potentially leaving no other recourse but to pay."


Bitcoin

DOJ Seizes $15 Billion In Bitcoin From Massive 'Pig Butchering' Scam Based In Cambodia (cnbc.com) 70

The U.S. Department of Justice seized about $15 billion in bitcoin from wallets tied to Chen Zhi, founder of Cambodia's Prince Holding Group, who is accused of running one of the world's biggest "pig butchering" scams. Prosecutors say Zhi's network trafficked people into forced-labor scam compounds that defrauded victims worldwide through fake crypto investment schemes. CNBC reports: The seizure is the largest forfeiture action by the DOJ in history. An indictment charging the alleged pig butcher, Chen Zhi, was unsealed Tuesday in federal court in Brooklyn, New York. Zhi, who is also known as "Vincent," remains at large, according to the U.S. Attorney's Office for the Eastern District of New York. He was identified in court filings as the founder and chairman of Prince Holding Group, a multinational business conglomerate based in Cambodia, which prosecutors said grew "in secret ... into one of Asia's largest transnational criminal organizations." [...]

The scams duped people contacted via social media and messaging applications online into transferring cryptocurrency into accounts controlled by the scheme with false promises that the crypto would be invested and produce profits, according to the office. "In reality, the funds were stolen from the victims and laundered for the benefit of the perpetrators," the release said. "The scam perpetrators often built relationships with their victims over time, earning their trust before stealing their funds."

Prosecutors said that hundreds of people were trafficked and forced to work in the scam compounds, "often under the threat of violence." Zhi and a network of top executives in the Prince Group are accused of using political influence in multiple countries to protect their criminal enterprise and paid bribes to public officials to avoid actions by law enforcement authorities targeting the scheme, according to prosecutors.

Education

Microsoft To Provide Free AI Tools For Washington State Schools (geekwire.com) 25

theodp writes: GeekWire reports that Microsoft is bringing artificial intelligence to every public classroom in its home state -- and sparking new questions about its role in education. The Redmond tech giant on Thursday unveiled Microsoft Elevate Washington, a sweeping new initiative that will provide free access to AI-powered software and training for all 295 public school districts and 34 community and technical colleges across Washington state. The program is part of Microsoft Elevate, the company's broader $4 billion, five-year commitment to support schools and nonprofits with AI tools and training that was announced in July.

"This is our home," Microsoft President Brad Smith said at a launch event on the company's headquarters campus. "A big part of what we're doing today is investing in our home." Smith said Microsoft understands the unease around AI in classrooms but argued that waiting isn't an option. "I don't know that it will be possible to slow down the use of AI, even if someone wanted to," he said. In an interview with KING-TV Seattle, Smith added, "We're making a bigger commitment to this state than we are to any state in the country. [...] Above all else, we want to ensure that people can learn how to use the technology of tomorrow. That's the only way for our kids to succeed in the future."

The event on Thursday also included comedian Trevor Noah, the company's "chief questions officer," as well as Code.org CEO Hadi Partovi. Noah and Partovi both also appeared with Smith at the Microsoft Elevate launch event in July, where Smith told Partovi it was time to "switch hats" from coding to AI, adding that "the last 12 years have been about the Hour of Code [Code.org's flagship event, credited with pushing CS into K-12 classrooms], but the future involves the Hour of AI." Code.org last month committed to "engage 25M learners in an Hour of AI in school year '25/'26" at a meeting of the White House Task Force on AI Education that preceded a White House dinner for top execs from the nation's leading AI companies.
