Businesses

A Fight Over Credit Scores Turns Into All-Out War (msn.com) 53

A long-simmering battle over who controls credit scoring in America has erupted into open warfare. Fair Isaac, whose FICO score is used in about 90% of consumer-lending decisions in the U.S., announced it will double the price of its mortgage credit score to $10 next year. The company also said it will bypass the three credit-reporting firms that have supplied the data feeding into its algorithm for decades.

Equifax, Experian and TransUnion created VantageScore in 2006 as an alternative to FICO and collectively own the scoring system. Fair Isaac's move came months after Bill Pulte, head of the Federal Housing Finance Agency, announced that Fannie Mae and Freddie Mac would allow lenders to use VantageScore for mortgage approvals. The three credit-reporting firms responded by offering VantageScore free for many loans. Fair Isaac had charged a few cents per score for decades before chief executive Will Lansing began raising prices several years ago. Revenue from selling credit scores reached $920 million in fiscal 2024, nearly five times what it was a decade earlier.
The Internet

Internet Archive's Legal Fights Are Over, But Its Founder Mourns What Was Lost (arstechnica.com) 39

The Internet Archive celebrated archiving its trillionth webpage last month and received congratulations from San Francisco, which declared October 22 "Internet Archive Day." Senator Alex Padilla designated the nonprofit a federal depository library. The organization currently faces no major lawsuits and no active threats to its collections. But these victories arrived after years of bruising copyright battles that forced the removal of more than 500,000 books from the Archive's Open Library. "We survived, but it wiped out the Library," founder Brewster Kahle told Ars Technica.

In 2024, the Archive lost its final appeal in a lawsuit brought by book publishers over its e-book lending model. Damages could have topped $400 million before publishers announced a confidential settlement. Last month, the organization settled another suit over its Great 78 Project after music publishers sought damages of up to $700 million. That settlement was also confidential. In both cases, the Archive's experts challenged publishers' estimates as massively inflated.

Kahle had envisioned the Open Library as a way for Wikipedia to link to book scans and help researchers reference e-books. The Archive wanted to deepen Wikipedia's authority as a research tool by surfacing information often buried in books. "That's what they really succeeded at -- to make sure that Wikipedia readers don't get access to books," Kahle said of the publishers. He thinks "the world became stupider" when the Open Library was gutted. The Archive is now expanding Democracy's Library, a free online compendium of government research and publications that will be linked in Wikipedia articles.
AI

arXiv Changes Rules After Getting Spammed With AI-Generated 'Research' Papers (404media.co) 11

An anonymous reader shares a report: arXiv, a preprint server for academic research that has become particularly important for AI work, has announced it will no longer accept computer science review articles and position papers that haven't been vetted by an academic journal or a conference. Why? A tide of AI slop has flooded the computer science category with low-effort papers that are "little more than annotated bibliographies, with no substantial discussion of open research issues," according to a press release about the change.

arXiv has become a critical venue for preprint and open-access scientific research. Many major scientific discoveries are posted on arXiv before they finish peer review and appear in peer-reviewed journals. For that reason, it has become an important place for breaking discoveries, particularly in fast-moving fields such as AI and machine learning (though preprints there sometimes get hyped and ultimately fail to survive peer review). The site is a repository of knowledge where academics upload PDFs of their latest research for public consumption. It publishes papers on physics, mathematics, biology, economics, statistics, and computer science, and the research is vetted by moderators who are subject matter experts.

Privacy

Manufacturer Remotely Bricks Smart Vacuum After Its Owner Blocked It From Collecting Data (tomshardware.com) 123

"An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device," writes Tom's Hardware.

"That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to." The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. The vacuum worked for a while, but soon refused to turn on at all... He sent it to the service center multiple times, where technicians would turn it on and find nothing wrong with it. When they returned it to him, it would work for a few days and then fail to boot again... [H]e decided to disassemble the thing to determine what killed it and to see if he could get it working again...
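The article doesn't name iLife's actual endpoints, but the router-level approach described here — drop traffic to the telemetry servers while leaving the firmware/OTA servers reachable — can be sketched roughly like this (all hostnames below are hypothetical placeholders):

```python
import socket

# Hypothetical hostnames -- the article does not name iLife's real endpoints.
TELEMETRY_HOSTS = ["log.vendor.invalid", "stats.vendor.invalid"]
OTA_HOSTS = ["ota.vendor.invalid"]  # firmware/OTA servers stay reachable: no rule emitted

def drop_rules(hosts):
    """Resolve each telemetry host and emit a router firewall rule that
    drops traffic to it; unresolvable hosts are simply skipped."""
    rules = []
    for host in hosts:
        try:
            ip = socket.gethostbyname(host)
        except socket.gaierror:
            continue  # hostname didn't resolve; nothing to block
        rules.append(f"iptables -A FORWARD -d {ip} -j DROP")
    return rules

for rule in drop_rules(TELEMETRY_HOSTS):
    print(rule)
```

As the story shows, the catch with this kind of selective blocking is that the vendor can notice the silence — here, apparently, by issuing a remote kill command.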

[He discovered] a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware. From this, he looked at its software and operating system, and that's where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data.

First of all, its Android Debug Bridge, which gives full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer added a makeshift security protocol by omitting a crucial file, which caused the device to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home. That in itself isn't unusual; it's a smart vacuum, and it needs that data to navigate his home. The concerning part is that it was sending all of this data to the manufacturer's server. Offloading makes some technical sense, as the onboard SoC is nowhere near powerful enough to process all that data, but it seems that iLife did not clear this with its customers.

Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.

Thanks to long-time Slashdot reader registrations_suck for sharing the article.
Programming

GitHub Announces 'Agent HQ', Letting Copilot Subscribers Run and Manage Coding Agents from Multiple Vendors (venturebeat.com) 9

"AI isn't just a tool anymore; it's an integral part of the development experience," argues GitHub's blog. So "Agents shouldn't be bolted on. They should work the way you already work..."

This week GitHub announced "Agent HQ," which CNBC describes as a "mission control" interface "that will allow software developers to manage coding agents from multiple vendors on a single platform." Developers have a range of new capabilities at their fingertips because of these agents, but it can require a lot of effort to keep track of them all individually, said GitHub COO Kyle Daigle. With Agent HQ, developers will be able to manage agents from GitHub, OpenAI, Google, Anthropic, xAI, and Cognition in one place. "We want to bring a little bit of order to the chaos of innovation," Daigle told CNBC in an interview. "With so many different agents, there's so many different ways of kicking off these asynchronous tasks, and so our big opportunity here is to bring this all together." Agent HQ users will be able to access a command center where they can assign, steer and monitor the work of multiple agents...

The third-party agents will begin rolling out to GitHub Copilot subscribers in the coming months, but Copilot Pro+ users will be able to access OpenAI Codex in VS Code Insiders this week, the company said.

"We're into this wave two era," GitHub's Mario Rodriguez told VentureBeat, an era that's "going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native...."

Or, as VentureBeat sees it, GitHub "is positioning itself as the essential orchestration layer beneath them all..." Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape...

The technical architecture addresses a critical enterprise concern: security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level... Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled.

Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration.

Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time...

Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts. This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic.
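GitHub's announcement excerpted here doesn't include a sample file, and exact conventions may vary by tool; as a rough, hypothetical sketch, an AGENTS.md is plain Markdown instructions checked into the repository for agents to read before acting:

```markdown
# AGENTS.md (hypothetical example)

## Conventions
- Prefer the internal `applog` wrapper over ad-hoc print/log calls.
- Use table-driven tests for all HTTP handlers.

## Guardrails
- Never modify files under `vendor/` or `migrations/`.
- Open pull requests against `dev`; never commit directly to `main`.
```

Because the file is source-controlled, changes to these standards go through the same review process as code.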

GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents. The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality.

More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review, creating a two-stage quality gate.

"Don't let this little bit of news float past you like all those self-satisfied marketing pitches we semi-hear and ignore," writes ZDNet: If it works and remains reliable, this is actually a very big deal... Tech companies, especially the giant ones, often like to talk "open" but then do their level best to engineer lock-in to their solution and their solution alone. Sure, most of them offer some sort of export tool, but the barrier to moving from one tool to another is often huge... [T]he idea that you can continue to use your favorite agent or agents in GitHub, fully integrated into the GitHub tool path, is powerful. It means there's a chance developers might not have to suffer the walled garden effect that so many companies have strived for to lock in their customers.
Media

Sound Blaster Crowdfunds Linux-Powered Audio Hub 'Re:Imagine' For Creators and Gamers (nerds.xyz) 49

Slashdot reader BrianFagioli summarizes some news from Nerds.xyz: Creative Technology has launched Sound Blaster Re:Imagine, a modular, Linux-powered audio hub that reimagines the classic PC sound card for the modern age. The device acts as both a high-end digital-to-analog converter (DAC) and a customizable control deck that connects PCs, consoles, phones, and tablets in one setup.

Users can instantly switch inputs and outputs, while developers get full hardware access through an SDK for creating their own apps. It even supports AI-driven features like an on-device DJ, a revived "Dr. Sbaitso" speech synthesizer, and a built-in DOS emulator for retro gaming.

The Kickstarter campaign has already raised more than $150,000, far surpassing its initial goal of $15,000 with over 50 days remaining. Each unit ships with a modular "Horizon" base and swappable knobs, sliders, and buttons, while a larger "Vertex" version will unlock at a higher funding milestone.

Running an unspecified Linux build, Re:Imagine positions itself as both a nostalgic nod to Sound Blaster's roots and a new open platform for creators, gamers, and tinkerers.

AI

Do AI Browsers Exist For You - or To Give AI Companies Data? (fastcompany.com) 39

"It's been hard for me to understand why Atlas exists," writes MIT Technology Review. "Who is this browser for, exactly? Who is its customer? And the answer I have come to there is that Atlas is for OpenAI. The real customer, the true end user of Atlas, is not the person browsing websites, it is the company collecting data about what and how that person is browsing."

New York Magazine's "Intelligencer" column argues OpenAI wants ChatGPT in your browser because "That's where people who use computers, particularly for work, spend all their time, and through which vast quantities of valuable information flow in and out. Also, if you're a company hoping to train your models to replicate a bunch of white-collar work, millions of browser sessions would be a pretty valuable source of data."

Unfortunately, warns Fast Company, ChatGPT Atlas, Perplexity Comet, and other AI browsers "include some major security, privacy, and usability trade-offs... Most of the time, I don't want to use them and am wary of doing so..." Worst of all, these browsers are security minefields. A web page that looks benign to humans can include hidden instructions for AI agents, tricking them into stealing info from other sites... "If you're signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in an attacker being able to steal money or your private data," Brave's security researchers wrote last week. No one has figured out how to solve this problem.

If you can look past the security nightmares, the actual browsing features are substandard. Neither ChatGPT Atlas nor Perplexity Comet support vertical tabs — a must-have feature for me — and they have no tab search tool or way to look up recently-closed pages. Atlas also doesn't support saving sites as web apps, selecting multiple tabs (for instance, to close all at once with Cmd+W), or customizing the appearance. Compared to all the fancy new AI features, the web browsing part can feel like an afterthought. Regular web search can also be a hassle, even though you'll probably need it sometimes. When I typed "Sichuan Chili" into ChatGPT Atlas, it produced a lengthy description of the Chinese peppers, not the nearby restaurant whose website and number I was looking for.... Meanwhile, the standard AI annoyances still apply in the browser. Getting Perplexity to fill my grocery cart felt like a triumph, but on other occasions the AI has run into inexplicable walls and only ended up wasting more time.

There may be other costs to using these browsers as well. AI still has usage limits, and so all this eventually becomes a ploy to bump more people into paid tiers. Beyond that, Atlas is constantly analyzing the pages you visit to build a "memory" of who you are and what you're into. Do not be surprised if this translates to deeply targeted ads as OpenAI starts looking at ways to monetize free users. For now, I'm only using AI browsers in small doses when I think they can solve a specific problem.

Even then, I'm not going to sign them into my email, bank accounts, or any other accounts for which a security breach would be catastrophic. It's too bad, because email and calendars are areas where AI agents could be truly useful, but the security risks are too great (and well-documented).

The article notes that in August Vivaldi announced that "We're taking a stand, choosing humans over hype" with their browser: We will not use an LLM to add a chatbot, a summarization solution or a suggestion engine to fill up forms for you, until more rigorous ways to do those things are available. Vivaldi is the haven for people who still want to explore. We will continue building a browser for curious minds, power users, researchers, and anyone who values autonomy. If AI contributes to that goal without stealing intellectual property, compromising privacy or the open web, we will use it. If it turns people into passive consumers, we will not...

We're fighting for a better web.

Ubuntu

Ubuntu Will Use Rust For Dozens of Core Linux Utilities (zdnet.com) 84

Ubuntu "is adopting the memory-safe Rust language," reports ZDNet, citing remarks at this year's Ubuntu Summit from Jon Seager, Canonical's VP of engineering for Ubuntu. Seager said the engineering team is focused on replacing key system components with Rust-based alternatives to enhance safety and resilience, starting with Ubuntu 25.10. He stressed that resilience and memory safety, not just performance, are the principal drivers: "It's the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me." This move is echoed in Ubuntu's adoption of sudo-rs, the Rust implementation of sudo, with fallback and opt-out mechanisms for users who want to use the old-school sudo command.

In addition to sudo-rs, Ubuntu 26.04 will use the Rust-based uutils/coreutils for Linux's default core utilities. This setup includes ls, cp, mv, and dozens of other basic Unix command-line tools. This Rust reimplementation aims for functional parity with GNU coreutils, with improved safety and maintainability.

On the desktop front, Ubuntu 26.04 will also bring seamless TPM-backed full disk encryption. If this approach reminds you of Windows BitLocker or macOS FileVault, it should. That's the idea.

In other news, Canonical CEO Mark Shuttleworth said "I'm a believer in the potential of Linux to deliver a desktop that could have wider and universal appeal." (Although he also thinks "the open-source community needs to understand that building desktops for people who aren't engineers is different. We need to understand that the 'simple and just works' is also really important.")

Shuttleworth answered questions from Slashdot's readers in 2005 and 2012.
AI

Security Holes Found in OpenAI's ChatGPT Atlas Browser (and Perplexity's Comet) (scworld.com) 20

The address bar/ChatGPT input window in OpenAI's browser ChatGPT Atlas "could be targeted for prompt injection using malicious instructions disguised as links," reports SC World, citing a report from AI/agent security platform NeuralTrust: NeuralTrust found that a malformed URL could be crafted to include a prompt that is treated as plain text by the browser, passing the prompt on to the LLM. A malformation, such as an extra space after the first slash following "https:" prevents the browser from recognizing the link as a website to visit. Rather than triggering a web search, as is common when plain text is submitted to a browser's address bar, ChatGPT Atlas treats plain text as ChatGPT prompts by default.
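OpenAI hasn't published Atlas's routing logic, but the distinction NeuralTrust is exploiting — a space after the scheme keeps a string from parsing as a real URL, so it falls through to the "treat as prompt" path — can be illustrated with a strict parser (the helper name here is ours, not OpenAI's):

```python
from urllib.parse import urlparse

def looks_like_navigable_url(text: str) -> bool:
    """Heuristic: a navigable web URL has an http(s) scheme, a non-empty
    host, and no whitespace anywhere in the string."""
    parsed = urlparse(text)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc) and " " not in text

print(looks_like_navigable_url("https://example.com/page"))  # True
# The extra space after "https:/" leaves urlparse with an empty host:
print(looks_like_navigable_url("https:/ /example.com ignore previous instructions"))  # False
```

Anything that fails such a check is ambiguous text, and the reported flaw is that Atlas defaults to interpreting ambiguous address-bar input as a trusted prompt.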

An unsuspecting user could potentially be tricked into copying and pasting a malformed link, believing they will be sent to a legitimate webpage. An attacker could plant the link behind a "copy link" button so that the user might not notice the suspicious text at the end of the link until after it is pasted and submitted. These prompt injections could potentially be used to instruct ChatGPT to open a new tab to a malicious website such as a phishing site, or to tell ChatGPT to take harmful actions in the user's integrated applications or logged-in sites like Google Drive, NeuralTrust said.

Last month browser security platform LayerX also described how malicious prompts could be hidden in URLs (as a parameter) for Perplexity's browser Comet. And last week SquareX Labs demonstrated that a malicious browser extension could spoof Comet's AI sidebar feature; it has since replicated the proof-of-concept (PoC) attack on Atlas.

But another new vulnerability in ChatGPT Atlas "could allow malicious actors to inject nefarious instructions into the artificial intelligence (AI)-powered assistant's memory and run arbitrary code," reports The Hacker News, citing a report from browser security platform LayerX: "This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware," LayerX Security Co-Founder and CEO, Or Eshed, said in a report shared with The Hacker News. The attack, at its core, leverages a cross-site request forgery (CSRF) flaw that could be exploited to inject malicious instructions into ChatGPT's persistent memory. The corrupted memory can then persist across devices and sessions, permitting an attacker to conduct various actions, including seizing control of a user's account, browser, or connected systems, when a logged-in user attempts to use ChatGPT for legitimate purposes....

"What makes this exploit uniquely dangerous is that it targets the AI's persistent memory, not just the browser session," Michelle Levy, head of security research at LayerX Security, said. "By chaining a standard CSRF to a memory write, an attacker can invisibly plant instructions that survive across devices, sessions, and even different browsers. In our tests, once ChatGPT's memory was tainted, subsequent 'normal' prompts could trigger code fetches, privilege escalations, or data exfiltration without tripping meaningful safeguards...."
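The report doesn't detail OpenAI's eventual fix, but the textbook defense against the CSRF class of flaw LayerX describes is a per-session anti-forgery token that a cross-site attacker can neither read nor forge. A minimal, framework-agnostic sketch (not OpenAI's implementation):

```python
import hashlib
import hmac
import secrets

# Per-deployment signing key -- illustrative only; a real service would
# load this from configuration rather than generate it at import time.
SECRET = secrets.token_bytes(32)

def issue_token(session_id: str) -> str:
    """Mint a CSRF token bound to the session, embedded in same-origin pages."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify(session_id: str, token: str) -> bool:
    """Reject any state-changing request whose token doesn't match the session."""
    return hmac.compare_digest(issue_token(session_id), token)
```

A state-changing endpoint (such as a memory write) that requires a valid token will reject the blind cross-site requests this attack relies on, because the attacker's page cannot read the victim's token.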

LayerX said the problem is exacerbated by ChatGPT Atlas' lack of robust anti-phishing controls, adding that this leaves users up to 90% more exposed than traditional browsers like Google Chrome or Microsoft Edge. In tests against over 100 in-the-wild web vulnerabilities and phishing attacks, Edge managed to stop 53% of them, followed by Google Chrome at 47% and Dia at 46%. In contrast, Perplexity's Comet and ChatGPT Atlas stopped only 7% and 5.8% of malicious web pages.

From The Conversation: Sandboxing is a security approach designed to keep websites isolated and prevent malicious code from accessing data from other tabs. The modern web depends on this separation. But in Atlas, the AI agent isn't malicious code — it's a trusted user with permission to see and act across all sites. This undermines the core principle of browser isolation.

Thanks to Slashdot reader spatwei for suggesting the topic.
Bug

OpenAI Launches Aardvark To Detect and Patch Hidden Bugs In Code (infoworld.com) 26

OpenAI has introduced Aardvark, a GPT-5-powered autonomous agent that scans, reasons about, and patches code like a human security researcher. "By embedding itself directly into the development pipeline, Aardvark aims to turn security from a post-development concern into a continuous safeguard that evolves with the software itself," reports InfoWorld. From the report: What makes Aardvark unique, OpenAI noted, is its combination of reasoning, automation, and verification. Rather than simply highlighting potential vulnerabilities, the agent promises multi-stage analysis -- starting by mapping an entire repository and building a contextual threat model around it. From there, it continuously monitors new commits, checking whether each change introduces risk or violates existing security patterns.

Additionally, upon identifying a potential issue, Aardvark attempts to validate the exploitability of the finding in a sandboxed environment before flagging it. This validation step could prove transformative. Traditional static analysis tools often overwhelm developers with false alarms -- issues that may look risky but aren't truly exploitable. "The biggest advantage is that it will reduce false positives significantly," noted Jain. "It's helpful in open source codes and as part of the development pipeline."

Once a vulnerability is confirmed, Aardvark integrates with Codex to propose a patch, then re-analyzes the fix to ensure it doesn't introduce new problems. OpenAI claims that in benchmark tests, the system identified 92 percent of known and synthetically introduced vulnerabilities across test repositories, a promising indication that AI may soon shoulder part of the burden of modern code auditing.

EU

Austria's Ministry of Economy Has Migrated To a Nextcloud Platform In Shift Away From US Tech (zdnet.com) 10

An anonymous reader quotes a report from ZDNet: Even before Azure had a global failure this week, Austria's Ministry of Economy had taken a decisive step toward digital sovereignty, migrating 1,200 employees to a Nextcloud-based cloud and collaboration platform hosted on Austrian infrastructure. This shift away from proprietary, foreign-owned cloud services, such as Microsoft 365, to an open-source, European-based cloud service aligns with a growing trend among European governments and agencies. They want control over sensitive data and to declare their independence from US-based tech providers.

European companies are encouraging this trend. Many of them have joined forces in the newly created non-profit foundation, the EuroStack Initiative. This foundation's goal is "to organize action, not just talk, around the pillars of the initiative: Buy European, Sell European, Fund European." What's the motive behind these moves away from proprietary tech? Well, in Austria's case, Florian Zinnagl, CISO of the Ministry of Economy, Energy, and Tourism (BMWET), explained, "We carry responsibility for a large amount of sensitive data -- from employees, companies, and citizens. As a public institution, we take this responsibility very seriously. That's why we view it critically to rely on cloud solutions from non-European corporations for processing this information."

Austria's move and motivation echo similar efforts in Germany, Denmark, and other EU states and agencies. The organizations include the German state of Schleswig-Holstein, which abandoned Exchange and Outlook for open-source programs. Other agencies that have taken the same path away from Microsoft include the Austrian military, Danish government organizations, and the French city of Lyon. All of these organizations aim to keep data storage and processing within national or European borders to enhance security, comply with privacy laws such as the EU's General Data Protection Regulation (GDPR), and mitigate risks from potential commercial and foreign government surveillance.

Youtube

10M People Watched a YouTuber Shim a Lock; the Lock Company Sued Him. Bad Idea. (arstechnica.com) 57

Trevor McNally posts videos of himself opening locks. The former Marine has 7 million followers, and nearly 10 million people watched him open a Proven Industries trailer hitch lock in April using a shim cut from an aluminum can. The Florida company responded in May by filing a federal lawsuit charging McNally with eight offenses. Judge Mary Scriven denied Proven's request for a preliminary injunction in June, finding the video was fair use.

McNally's followers then flooded the company with harassment. Proven dismissed the case in July and asked the court to seal the records. The company had initiated litigation over a video that all parties acknowledged was accurate. Ars Technica adds: Judging from the number of times the lawsuit talks about 1) ridicule and 2) harassment, it seems like the case quickly became a personal one for Proven's owner and employees, who felt either mocked or threatened. That's understandable, but being mocked is not illegal and should never have led to a lawsuit or a copyright claim. As for online harassment, it remains a serious and unresolved issue, but launching a personal vendetta -- and on pretty flimsy legal grounds -- against McNally himself was patently unwise. (Doubly so given that McNally had a huge following and had already responded to DMCA takedowns by creating further videos on the subject; this wasn't someone who would simply be intimidated by a lawsuit.)

In the end, Proven's lawsuit likely cost the company serious time and cash -- and generated little but bad publicity.

Software

Affinity's Image-Editing Apps Go 'Freemium' in First Major Post-Canva Update (arstechnica.com) 8

Ars Technica: When graphic design platform provider Canva bought the Affinity image-editing and publishing apps early last year, we had some major questions about how the companies' priorities and products would mesh. How would Canva serve the users who preferred Affinity's perpetually licensed apps to Adobe's subscription-only software suite? And how would Affinity's strong stance against generative AI be reconciled with Canva's embrace of those technologies?

This week, Canva gave us definitive answers to all of those questions: a brand-new unified Affinity app that melds the Photo, Designer, and Publisher apps into a single piece of software called "Affinity by Canva" that is free to use with a Canva user account, but which gates generative AI features behind Canva's existing paid subscription plans ($120 a year for individuals).

This does seem like mostly good news, in the near to mid term, for existing Affinity app users who admired Affinity's anti-AI stance: All three apps' core features are free to use, and the stuff you're being asked to pay for is stuff you mostly don't want anyway. But it may come as unwelcome news for those who like the predictability of pay-once-own-forever software or are nervous about where Canva might draw the line between "free" and "premium" features down the line.

[...] There's now a dedicated page for the older versions of the Affinity apps, and an FAQ at the bottom of that page answers several questions about the fate of those apps. Affinity and Canva say they will continue to keep the activation servers and downloads for all Affinity v1 and v2 apps online for the foreseeable future, giving people who already own the existing apps a way to keep using the versions they're comfortable with. Users can opt to link their Serif Affinity store accounts to their new Canva accounts to access the old downloads without juggling multiple accounts. But those older versions of the apps "won't receive future updates" and won't be able to open files created in the new Canva-branded Affinity app.

Cellphones

Someone Snuck Into a Cellebrite Microsoft Teams Call and Leaked Phone Unlocking Details (404media.co) 56

An anonymous reader quotes a report from 404 Media: Someone recently managed to get on a Microsoft Teams call with representatives from phone hacking company Cellebrite, and then leaked a screenshot of the company's capabilities against many Google Pixel phones, according to a forum post about the leak and 404 Media's review of the material. The leak follows others obtained and verified by 404 Media over the last 18 months. Those leaks impacted both Cellebrite and its competitor Grayshift, now owned by Magnet Forensics. Both companies constantly hunt for techniques to unlock phones law enforcement have physical access to.

"You can Teams meeting with them. They tell everything. Still cannot extract esim on Pixel. Ask anything," a user called rogueFed wrote on the GrapheneOS forum on Wednesday, speaking about what they learned about Cellebrite capabilities. GrapheneOS is a security- and privacy-focused Android-based operating system. rogueFed then posted two screenshots of the Microsoft Teams call. The first was a Cellebrite Support Matrix, which lays out whether the company's tech can, or can't, unlock certain phones and under what conditions. The second screenshot was of a Cellebrite employee. According to another of rogueFed's posts, the meeting took place in October. The meeting appears to have been a sales call. The employee is a "pre sales expert," according to a profile available online.

The Support Matrix is focused on modern Google Pixel devices, including the Pixel 9 series. The screenshot does not include details on the Pixel 10, which is Google's latest device. It discusses Cellebrite's capabilities regarding 'before first unlock', or BFU, when a piece of phone unlocking tech tries to open a device before someone has typed in the phone's passcode for the first time since the device was turned on. It also shows Cellebrite's capabilities against after first unlock, or AFU, devices. The Support Matrix also shows Cellebrite's capabilities against Pixel devices running GrapheneOS, with some differences between phones running that operating system and stock Android. Cellebrite can, for example, unlock Pixel 9 devices BFU. Meanwhile, the screenshot indicates Cellebrite cannot unlock Pixel 9 devices running GrapheneOS BFU. In their forum post, rogueFed wrote that the "meeting focused specific on GrapheneOS bypass capability." They added "very fresh info more coming."

Android

'Keep Android Open' Campaign Pushes Back On Google's Sideloading Restrictions (pcmag.com) 49

PC Mag's Michael Kan writes: A "Keep Android Open" campaign is pushing back on new rules from Google that will reportedly block users from sideloading apps on Android phones. It's unclear who's running the campaign, but a blog post on the free Android app store F-Droid is directing users to visit the campaign's website, which urges the public to lobby government regulators to intervene and stop the upcoming restrictions. "Developers should have the right to create and distribute software without submitting to unnecessary corporate surveillance," reads an open letter posted to the site. [...]

Google has described the upcoming change as akin to requiring app developers to go through "an ID check at the airport." However, F-Droid condemned the new requirement as anti-consumer choice. "If you own a computer, you should have the right to run whatever programs you want on it," it says. Additionally, the rules threaten third-party app distribution on F-Droid, which operates as a "free/open-source app distribution" model.

In its blog post, F-Droid warns about the impact on users and Android app developers. "You, the creator, can no longer develop an app and share it directly with your friends, family, and community without first seeking Google's approval," the app store says. "Over half of all humankind uses an Android smartphone," the blog post adds. "Google does not own your phone. You own your phone. You have the right to decide who to trust, and where you can get your software from."

Businesses

OpenAI Eyes $1 Trillion IPO 42

OpenAI is reportedly preparing for a massive IPO that could value the company at up to $1 trillion. It follows a recent corporate restructuring that loosened its dependence on Microsoft and aligned its nonprofit foundation with financial success. Reuters reports: OpenAI is considering filing with securities regulators as soon as the second half of 2026, some of the people said. In preliminary discussions, the company has looked at raising $60 billion at the low end and likely more, the people said. They cautioned that talks are early and plans -- including the figures and timing -- could change depending on business growth and market conditions. Chief Financial Officer Sarah Friar has told some associates the company is aiming for a 2027 listing, the people said. But some advisers predict it could come even sooner, around late 2026.

[...] An IPO would open the door to more efficient capital raising and enable larger acquisitions using public stock, helping to finance CEO Sam Altman's plans to pour trillions of dollars into AI infrastructure, according to people familiar with the company's thinking. With an annualized revenue run rate expected to reach about $20 billion by year-end, losses are also mounting inside the $500 billion company, the people said. During a livestream on Tuesday, Altman addressed the possibility of going public. "I think it's fair to say it is the most likely path for us, given the capital needs that we'll have," he said.

Google

Google Makes First Play Store Changes After Losing Epic Games Antitrust Case (arstechnica.com) 18

An anonymous reader quotes a report from Ars Technica: Since launching Google Play (née Android Market) in 2008, Google has never made a change to the US store that it didn't want to make -- until now. Having lost the antitrust case brought by Epic Games, Google has implemented the first phase of changes mandated by the court. Developers operating in the Play Store will have more freedom to direct app users to resources outside the Google bubble. However, Google has not given up hope of reversing its loss before it's forced to make bigger changes. Epic began pursuing this case in 2020, stemming from its attempt to sell Fortnite content without going through Google's payment system. It filed a similar case against Apple, but Epic fell short there because it could not show that Apple put its thumb on the scale. Google, however, engaged in conduct that amounted to suppressing the development of alternative Android app stores. It lost the case and came up short on appeal this past summer, leaving the company with little choice but to prepare for the worst.

Google has updated its support pages to confirm that it's abiding by the court's order. In the US, Play Store developers now have the option of using external payment platforms that bypass the Play Store entirely. This could hypothetically allow developers to offer lower prices, as they don't have to pay Google's commission, which can be up to 30 percent. Devs will also be permitted to direct users to sources for app downloads and payment methods outside the Play Store. Google's support page stresses that these changes are only being instituted in the US version of the Play Store, which is all the US District Court can require. The company also notes that it only plans to adhere to this policy "while the US District Court's order remains in effect." Judge James Donato's order runs for three years, ending on November 1, 2027.

Open Source

International Criminal Court To Ditch Microsoft Office For European Open Source Alternative (euractiv.com) 55

An anonymous reader shares a report: The International Criminal Court will switch its internal work environment away from Microsoft Office to Open Desk, a European open source alternative, the institution confirmed to Euractiv. The switch comes amid rising concerns about public bodies being reliant on US tech companies to run their services, which have stepped up sharply since the start of US President Donald Trump's second administration.

For the ICC, such concerns are not abstract: Trump has repeatedly lashed out at the court and slapped sanctions on its chief prosecutor, Karim Khan. Earlier this year, the AP also reported that Microsoft had cancelled Khan's email account, a claim the company denies. "We value our relationship with the ICC as a customer and are convinced that nothing impedes our ability to continue providing services to the ICC in the future," a Microsoft spokesperson told Euractiv.

Communications

FCC's Gomez Slams Move To Revise Broadband Labels as 'Anti-Consumer' (lightreading.com) 21

An anonymous reader shares a report: The FCC adopted a notice of proposed rulemaking (NPRM) to rescind and revise certain rules attached to consumer broadband labels. The measure passed on a two-to-one vote, with Commissioner Anna Gomez, the lone Democrat on the FCC, voting no and calling the notice "one of the most anti-consumer items I have seen."

The vote was held at the Commission's open meeting for the month of October. As per a draft notice circulated earlier this month, the FCC is looking to roll back several rules, including requirements that service providers read the label to consumers via phone, itemize state and local pass-through fees, and display labels in consumer account portals, among others. Advocates at Public Knowledge urged the Commission to reconsider, saying in a recent filing that "the Commission could create a permission structure for ISPs to continue to act without accountability."

In her remarks during Tuesday's open meeting, Commissioner Gomez appeared to concur, depicting the move as "anti-consumer" and counter to the goals of Congress. The FCC was mandated via the 2021 Infrastructure Investment and Jobs Act (IIJA) to create rules for implementing consumer broadband labels. After a lengthy rulemaking process and discussions with industry and consumer groups, ISPs were required to start displaying labels in 2024.

"I typically vote in favor of notices of proposed rulemaking because I believe in asking balanced questions, even on proposals that I dislike, so that we can encourage fruitful and helpful public comment. Answers to tough questions help us strike the right balance so that our rules can both encourage competition and serve consumers. However, the questions posed in this NPRM are so anti-consumer that I could not bring myself to even agree to them," said Gomez.

Gomez stressed that the notice will harm consumers by enabling ISPs to hide add-on fees and stripping people of their ability to access information in their own language. Moreover, added Gomez, it's unclear why the FCC is doing this. "What adds insult to injury is that the FCC does not even explain why this proposal is necessary. Make it make sense," she added.

Transportation

Society Will Accept a Death Caused By a Robotaxi, Waymo Co-CEO Says (sfgate.com) 239

At TechCrunch Disrupt 2025, Waymo co-CEO Tekedra Mawakana said society will ultimately accept a fatal robotaxi crash as part of the broader tradeoff for safer roads overall. TechCrunch reports: The topic of a fatal robotaxi crash came up during Mawakana's interview with Kirsten Korosec, TechCrunch's transportation editor, during the first day of the outlet's annual Disrupt conference in San Francisco. Korosec asked Mawakana about Waymo's ambitions and got answer after answer about the company's all-consuming focus on safety. The most interesting part of the interview arrived when Korosec posed a thought experiment. What if self-driving vehicles like Waymo and others reduce the number of traffic fatalities in the United States, but a self-driving vehicle does eventually cause a fatal crash, Korosec pondered. Or as she put it to the executive: "Will society accept that? Will society accept a death potentially caused by a robot?"

"I think that society will," Mawakana answered, slowly, before positioning the question as an industrywide issue. "I think the challenge for us is making sure that society has a high enough bar on safety that companies are held to." She said that companies should be transparent about their records by publishing data about how many crashes they're involved in, and she pointed to the "hub" of safety information on Waymo's website. Self-driving cars will dramatically reduce crashes, Mawakana said, but not by 100%: "We have to be in this open and honest dialogue about the fact that we know it's not perfection."

Circling back to the idea of a fatal crash, she said, "We really worry as a company about those days. You know, we don't say 'whether.' We say 'when.' And we plan for them." Korosec followed up, asking if there had been safety issues that prompted Waymo to "pump the brakes" on its expansion plans throughout the years. The co-CEO said the company pulls back and retests "all the time," pointing to challenges with blocking emergency vehicles as an example. "We need to make sure that the performance is backing what we're saying we're doing," she said. [...] "If you are not being transparent, then it is my view that you are not doing what is necessary in order to actually earn the right to make the roads safer," Mawakana said.
