Privacy

Manufacturer Remotely Bricks Smart Vacuum After Its Owner Blocked It From Collecting Data (tomshardware.com) 123

"An engineer got curious about how his iLife A11 smart vacuum worked and monitored the network traffic coming from the device," writes Tom's Hardware.

"That's when he noticed it was constantly sending logs and telemetry data to the manufacturer — something he hadn't consented to." The user, Harishankar, decided to block the telemetry servers' IP addresses on his network, while keeping the firmware and OTA servers open. His smart gadget worked for a while, but soon after it simply refused to turn on... He sent it to the service center multiple times, where the technicians would turn it on and find nothing wrong with the vacuum. When they returned it to him, it would work for a few days and then fail to boot again... [H]e decided to disassemble the thing to determine what killed it and to see if he could get it working again...

[He discovered] a GD32F103 microcontroller to manage its plethora of sensors, including Lidar, gyroscopes, and encoders. He created PCB connectors and wrote Python scripts to control them with a computer, presumably to test each piece individually and identify what went wrong. From there, he built a Raspberry Pi joystick to manually drive the vacuum, proving that there was nothing wrong with the hardware. From this, he looked at its software and operating system, and that's where he discovered the dark truth: his smart vacuum was a security nightmare and a black hole for his personal data.

First of all, its Android Debug Bridge, which gives full root access to the vacuum, wasn't protected by any kind of password or encryption. The manufacturer added a makeshift security protocol by omitting a crucial file, which caused ADB to disconnect soon after booting, but Harishankar easily bypassed it. He then discovered that it used Google Cartographer to build a live 3D map of his home. That isn't unusual in itself. After all, it's a smart vacuum, and it needs that data to navigate around his home. The concerning thing is that it was sending all this data off to the manufacturer's servers. It makes sense for the device to offload that processing, as its onboard SoC is nowhere near powerful enough to handle it. However, it seems that iLife did not clear this with its customers.

Furthermore, the engineer made one disturbing discovery — deep in the logs of his non-functioning smart vacuum, he found a command with a timestamp that matched exactly the time the gadget stopped working. This was clearly a kill command, and after he reversed it and rebooted the appliance, it roared back to life.

Thanks to long-time Slashdot reader registrations_suck for sharing the article.
Programming

GitHub Announces 'Agent HQ', Letting Copilot Subscribers Run and Manage Coding Agents from Multiple Vendors (venturebeat.com) 9

"AI isn't just a tool anymore; it's an integral part of the development experience," argues GitHub's blog. So "Agents shouldn't be bolted on. They should work the way you already work..."

So this week GitHub announced "Agent HQ," which CNBC describes as a "mission control" interface "that will allow software developers to manage coding agents from multiple vendors on a single platform." Developers have a range of new capabilities at their fingertips because of these agents, but it can require a lot of effort to keep track of them all individually, said GitHub COO Kyle Daigle. Developers will now be able to manage agents from GitHub, OpenAI, Google, Anthropic, xAI and Cognition in one place with Agent HQ. "We want to bring a little bit of order to the chaos of innovation," Daigle told CNBC in an interview. "With so many different agents, there's so many different ways of kicking off these asynchronous tasks, and so our big opportunity here is to bring this all together." Agent HQ users will be able to access a command center where they can assign, steer and monitor the work of multiple agents...

The third-party agents will begin rolling out to GitHub Copilot subscribers in the coming months, but Copilot Pro+ users will be able to access OpenAI Codex in VS Code Insiders this week, the company said.

"We're into this wave two era," GitHub chief product officer Mario Rodriguez told VentureBeat, an era that's "going to be multimodal, it's going to be agentic and it's going to have these new experiences that will feel AI native...."

Or, as VentureBeat sees it, GitHub "is positioning itself as the essential orchestration layer beneath them all..." Just as the company transformed Git, pull requests and CI/CD into collaborative workflows, it's now trying to do the same with a fragmented AI coding landscape...

The technical architecture addresses a critical enterprise concern: Security. Unlike standalone agent implementations where users must grant broad repository access, GitHub's Agent HQ implements granular controls at the platform level... Agents operating through Agent HQ can only commit to designated branches. They run within sandboxed GitHub Actions environments with firewall protections. They operate under strict identity controls. [GitHub's] Rodriguez explained that even if an agent goes rogue, the firewall prevents it from accessing external networks or exfiltrating data unless those protections are explicitly disabled.

Beyond managing third-party agents, GitHub is introducing two technical capabilities that set Agent HQ apart from alternative approaches like Cursor's standalone editor or Anthropic's Claude integration. Custom agents via AGENTS.md files: Enterprises can now create source-controlled configuration files that define specific rules, tools and guardrails for how Copilot behaves. For example, a company could specify "prefer this logger" or "use table-driven tests for all handlers." This permanently encodes organizational standards without requiring developers to re-prompt every time... Native Model Context Protocol (MCP) support: VS Code now includes a GitHub MCP Registry. Developers can discover, install and enable MCP servers with a single click. They can then create custom agents that combine these tools with specific system prompts. This positions GitHub as the integration point between the emerging MCP ecosystem and actual developer workflows. MCP, introduced by Anthropic but rapidly gaining industry support, is becoming a de facto standard for agent-to-tool communication. By supporting the full specification, GitHub can orchestrate agents that need access to external services without each agent implementing its own integration logic.
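The article's examples of AGENTS.md rules ("prefer this logger," "use table-driven tests for all handlers") might be encoded along these lines. The file name comes from GitHub's announcement, but the specific rules and package names below are invented for illustration:

```markdown
# AGENTS.md -- organizational guardrails for Copilot (illustrative sketch)

## Coding standards
- Prefer the internal `acme-logger` package over ad-hoc console logging.
- Use table-driven tests for all handlers.

## Boundaries
- Never commit directly to `main`; open a pull request from a feature branch.
- Do not add third-party dependencies without an approved review entry.
```

Because the file is source-controlled alongside the code, these standards travel with the repository instead of being re-prompted by each developer.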

GitHub is also shipping new capabilities within VS Code itself. Plan Mode allows developers to collaborate with Copilot on building step-by-step project approaches. The AI asks clarifying questions before any code is written. Once approved, the plan can be executed either locally in VS Code or by cloud-based agents. The feature addresses a common failure mode in AI coding: Beginning implementation before requirements are fully understood. By forcing an explicit planning phase, GitHub aims to reduce wasted effort and improve output quality.

More significantly, GitHub's code review feature is becoming agentic. The new implementation will use GitHub's CodeQL engine, which previously focused largely on security vulnerabilities, to also identify bugs and maintainability issues. The code review agent will automatically scan agent-generated pull requests before human review. This creates a two-stage quality gate.

"Don't let this little bit of news float past you like all those self-satisfied marketing pitches we semi-hear and ignore," writes ZDNet: If it works and remains reliable, this is actually a very big deal... Tech companies, especially the giant ones, often like to talk "open" but then do their level best to engineer lock-in to their solution and their solution alone. Sure, most of them offer some sort of export tool, but the barrier to moving from one tool to another is often huge... [T]he idea that you can continue to use your favorite agent or agents in GitHub, fully integrated into the GitHub tool path, is powerful. It means there's a chance developers might not have to suffer the walled garden effect that so many companies have strived for to lock in their customers.
AI

Do AI Browsers Exist For You - or To Give AI Companies Data? (fastcompany.com) 39

"It's been hard for me to understand why Atlas exists," writes MIT Technology Review. "Who is this browser for, exactly? Who is its customer? And the answer I have come to there is that Atlas is for OpenAI. The real customer, the true end user of Atlas, is not the person browsing websites, it is the company collecting data about what and how that person is browsing."

New York Magazine's "Intelligencer" column argues OpenAI wants ChatGPT in your browser because "That's where people who use computers, particularly for work, spend all their time, and through which vast quantities of valuable information flow in and out. Also, if you're a company hoping to train your models to replicate a bunch of white-collar work, millions of browser sessions would be a pretty valuable source of data."

Unfortunately, warns Fast Company, ChatGPT Atlas, Perplexity Comet, and other AI browsers "include some major security, privacy, and usability trade-offs... Most of the time, I don't want to use them and am wary of doing so..." Worst of all, these browsers are security minefields. A web page that looks benign to humans can include hidden instructions for AI agents, tricking them into stealing info from other sites... "If you're signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in an attacker being able to steal money or your private data," Brave's security researchers wrote last week. No one has figured out how to solve this problem.

If you can look past the security nightmares, the actual browsing features are substandard. Neither ChatGPT Atlas nor Perplexity Comet support vertical tabs — a must-have feature for me — and they have no tab search tool or way to look up recently-closed pages. Atlas also doesn't support saving sites as web apps, selecting multiple tabs (for instance, to close all at once with Cmd+W), or customizing the appearance. Compared to all the fancy new AI features, the web browsing part can feel like an afterthought. Regular web search can also be a hassle, even though you'll probably need it sometimes. When I typed "Sichuan Chili" into ChatGPT Atlas, it produced a lengthy description of the Chinese peppers, not the nearby restaurant whose website and number I was looking for.... Meanwhile, the standard AI annoyances still apply in the browser. Getting Perplexity to fill my grocery cart felt like a triumph, but on other occasions the AI has run into inexplicable walls and only ended up wasting more time.

There may be other costs to using these browsers as well. AI still has usage limits, and so all this eventually becomes a ploy to bump more people into paid tiers. Beyond that, Atlas is constantly analyzing the pages you visit to build a "memory" of who you are and what you're into. Do not be surprised if this translates to deeply targeted ads as OpenAI starts looking at ways to monetize free users. For now, I'm only using AI browsers in small doses when I think they can solve a specific problem.

Even then, I'm not going to sign them into my email, bank accounts, or any other accounts for which a security breach would be catastrophic. It's too bad, because email and calendars are areas where AI agents could be truly useful, but the security risks are too great (and well-documented).

The article notes that in August Vivaldi announced that "We're taking a stand, choosing humans over hype" with their browser: We will not use an LLM to add a chatbot, a summarization solution or a suggestion engine to fill up forms for you, until more rigorous ways to do those things are available. Vivaldi is the haven for people who still want to explore. We will continue building a browser for curious minds, power users, researchers, and anyone who values autonomy. If AI contributes to that goal without stealing intellectual property, compromising privacy or the open web, we will use it. If it turns people into passive consumers, we will not...

We're fighting for a better web.

Privacy

Woman Wrongfully Accused by a License Plate-Reading Camera - Then Exonerated By Camera-Equipped Car (electrek.co) 174

CBS News investigates what happened when police thought they'd tracked down a "porch pirate" who'd stolen a package — and accused an innocent woman.

"You know why I'm here," the police sergeant tells Chrisanna Elser. "You know we have cameras in that town..." "It went right into, 'we have video of you stealing a package,'" Elser said... "Can I see the video?" Elser asked. "If you go to court, you can," the officer replied. "If you're going to deny it, I'm not going to extend you any courtesy...." [You can watch a video of the entire confrontation.] On her doorstep, the officer issued a summons, without ever looking at the surveillance video Elser had. "We can show you exactly where we were," she told him. "I already know where you were," he replied.

Her Rivian — equipped with multiple cameras — had recorded her entire route that day... It took weeks of her collecting her own evidence, building timelines, and submitting videos before someone listened. Finally, she received an email from the Columbine Valley police chief acknowledging her efforts in an email saying, "nicely done btw (by the way)," and informing her the summons would not be filed.

Elser also found the theft video (which the police officer refused to show her) on Nextdoor, reports Electrek. "The woman has the same color hair, but different facial and nose shape and apparent age than Elser, which is all reasonably apparent when viewing the video..."

But Elser does drive a green Rivian truck, which police knew had entered the neighborhood 20 times over the course of a month. (Though in the video the officer is told that a male driver in the same household passes through that neighborhood driving to and from work.) The problem may be their certainty — derived from Flock's network of cameras that automatically read license plates, "tracking movements of vehicles wherever they go..." The system has provoked concern from privacy and freedom focused organizations like the Electronic Frontier Foundation and American Civil Liberties Union. Flock also recently announced a partnership with Ring, seeking to use a network of doorbell cameras to track Americans in even more places.... [The police] didn't even have video of the truck in the area — merely tags of it entering... (it also left the area minutes later, indicating a drive through, rather than crawling through neighborhoods looking for packages — but police neglected to check the exit timestamps)... Elser has asked for an apology for [officer] Milliman's aggressive behavior during the encounter, but has heard nothing back from the department despite a call, email, and physical appearance at the police station.
The article points out that Rivian's "Road Cam" feature can be set to record footage of everything happening around the car, using its built-in driver-assist cameras. But if you want to record footage all the time, you'll need to plug in a USB-C external drive to store it. (It's ironic how different cameras recorded every part of this story — the theft, the police officer accusing the innocent woman, and that innocent woman's actual whereabouts.)

Electrek's take? "Citizens should not need to own a $70k+ truck, or even a $100 external hard drive, to keep track of everything they do in order to prove to power-tripping officers that they didn't commit a crime."
Software

Affinity's Image-Editing Apps Go 'Freemium' in First Major Post-Canva Update (arstechnica.com) 8

ArsTechnica: When graphic design platform provider Canva bought the Affinity image-editing and publishing apps early last year, we had some major questions about how the companies' priorities and products would mesh. How would Canva serve the users who preferred Affinity's perpetually licensed apps to Adobe's subscription-only software suite? And how would Affinity's strong stance against generative AI be reconciled with Canva's embrace of those technologies?

This week, Canva gave us definitive answers to all of those questions: a brand-new unified Affinity app that melds the Photo, Designer, and Publisher apps into a single piece of software called "Affinity by Canva" that is free to use with a Canva user account, but which gates generative AI features behind Canva's existing paid subscription plans ($120 a year for individuals).

This does seem like mostly good news, in the near to mid term, for existing Affinity app users who admired Affinity's anti-AI stance: All three apps' core features are free to use, and the stuff you're being asked to pay for is stuff you mostly don't want anyway. But it may come as unwelcome news for those who like the predictability of pay-once-own-forever software or are nervous about where Canva might draw the line between "free" and "premium" features down the road.

[...] There's now a dedicated page for the older versions of the Affinity apps, and an FAQ at the bottom of that page answers several questions about the fate of those apps. Affinity and Canva say they will continue to keep the activation servers and downloads for all Affinity v1 and v2 apps online for the foreseeable future, giving people who already own the existing apps a way to keep using the versions they're comfortable with. Users can opt to link their Serif Affinity store accounts to their new Canva accounts to access the old downloads without juggling multiple accounts. But those older versions of the apps "won't receive future updates" and won't be able to open files created in the new Canva-branded Affinity app.

AI

'AI Sets Up Kodak Moment For Global Consultants' (reuters.com) 16

An anonymous reader shares a column: As the AI boom develops, consultants are in a tricky spot. The pandemic, inflation and economic uncertainty have encouraged many of their big clients to tighten expenditure. The U.S. government, one of the biggest spenders, has been cancelling multiple billion-dollar contracts in an effort to conserve cash. In March, 10 of the largest consultants including Deloitte, Accenture, Booz Allen Hamilton, IBM and Guidehouse were targeted by the Department of Government Efficiency to justify their fees. As a result, the largest listed players' shares have collapsed by up to 30% in the past two years, against the S&P 500's 50% jump.

AI is, in some respects, a boon. In September, Accenture said AI had helped it cut 11,000 jobs, and CEO Julie Sweet plans further cuts among staff who cannot be retrained. Salesforce recently laid off 4,000 customer support workers. Microsoft has halted hiring in its consulting business. Unfortunately, big clients are cottoning on to the advantages too. One finance chief of a large UK company outlined the issue for Breakingviews via an illustrative example. Say an outsourced project costs the client $1 million to do themselves, and Accenture and the like have historically been able to do the same job for $200,000. With the advent of machine learning, companies can do the same work for just $10,000. This gives clients considerable leverage. If consultants won't lower their prices to near the relevant level, the client can find one who will. Or just do the job itself.
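The finance chief's numbers work out like this (a trivial worked version of the example above):

```python
in_house = 1_000_000   # project cost if the client does the work itself
consultant = 200_000   # what Accenture and the like historically charged
with_ai = 10_000       # what the same work now costs with machine learning

# The consultants' historical pitch: an 80% saving over in-house work.
old_saving = 1 - consultant / in_house
# The new leverage: AI undercuts the consultants' own price by 95%.
ai_discount = 1 - with_ai / consultant

print(f"{old_saving:.0%}, {ai_discount:.0%}")  # 80%, 95%
```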

Businesses

Electronic Arts' AI Tools Are Creating More Work Than They Save (businessinsider.com) 42

Electronic Arts has spent the past year pushing its nearly 15,000 employees to use AI for everything from code generation to scripting difficult conversations about pay. Employees in some areas must complete multiple AI training courses and use tools like the company's in-house chatbot ReefGPT daily.

The tools produce flawed code and hallucinations that employees then spend time correcting. Staff say the AI creates more work rather than less, according to Business Insider. They fix mistakes while simultaneously training the programs on their own work. Creative employees fear the technology will eventually eliminate demand for character artists and level designers. One recently laid-off senior quality-assurance designer says AI performed a key part of his job -- reviewing and summarizing feedback from hundreds of play testers. He suspects this contributed to his termination when about 100 colleagues were let go this past spring from the company's Respawn Entertainment studio.
Mozilla

Mozilla to Require Data-Collection Disclosure in All New Firefox Extensions (linuxiac.com) 18

"Mozilla is introducing a new privacy framework for Firefox extensions that will require developers to disclose whether their add-ons collect or transmit user data..." reports the blog Linuxiac: The policy takes effect on November 3, 2025, and applies to all new Firefox extensions submitted to addons.mozilla.org. According to Mozilla's announcement, extension developers must now include a new key in their manifest.json files. This key specifies whether an extension gathers any personal data. Even extensions that collect nothing must explicitly state "none" in this field to confirm that no data is being collected or shared.
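Per the announcement, the declaration lives in the Gecko-specific settings of manifest.json. A minimal sketch for an extension that collects nothing might look like the following; the `data_collection_permissions` key follows Mozilla's published scheme, but treat the exact shape as an approximation and check the add-ons documentation for the authoritative format:

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "browser_specific_settings": {
    "gecko": {
      "data_collection_permissions": {
        "required": ["none"]
      }
    }
  }
}
```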

This information will be visible to users at multiple points: during the installation prompt, on the extension's listing page on addons.mozilla.org, and in the Permissions and Data section of Firefox's about:addons page. In practice, this means users will be able to see at a glance whether a new extension collects any data before they install it.

Networking

Are Network Security Devices Endangering Orgs With 1990s-Era Flaws? (csoonline.com) 57

Critics question why basic flaws like buffer overflows, command injections, and SQL injections "remain prevalent in mission-critical codebases maintained by companies whose core business is cybersecurity," writes CSO Online. Benjamin Harris, CEO of cybersecurity/penetration testing firm watchTowr, tells them that "these are vulnerability classes from the 1990s, and security controls to prevent or identify them have existed for a long time. There is really no excuse." Enterprises have long relied on firewalls, routers, VPN servers, and email gateways to protect their networks from attacks. Increasingly, however, these network edge devices are becoming security liabilities themselves... Google's Threat Intelligence Group tracked 75 exploited zero-day vulnerabilities in 2024. Nearly one in three targeted network and security appliances, a strikingly high rate given the range of IT systems attackers could choose to exploit. That trend has continued this year, with similar numbers in the first 10 months of 2025, targeting vendors such as Citrix NetScaler, Ivanti, Fortinet, Palo Alto Networks, Cisco, SonicWall, and Juniper. Network edge devices are attractive targets because they are remotely accessible, fall outside endpoint protection monitoring, contain privileged credentials for lateral movement, and are not integrated into centralized logging solutions...

[R]esearchers have reported vulnerabilities in these systems for over a decade with little attacker interest beyond isolated incidents. That shifted over the past few years with a rapid surge in attacks, making compromised network edge devices one of the top initial access vectors into enterprise networks for state-affiliated cyberespionage groups and ransomware gangs. The COVID-19 pandemic contributed to this shift, as organizations rapidly expanded remote access capabilities by deploying more VPN gateways, firewalls, and secure web and email gateways to accommodate work-from-home mandates. The declining success rate of phishing is another factor... "It is now easier to find a 1990s-tier vulnerability in a border device where Endpoint Detection and Response typically isn't deployed, exploit that, and then pivot from there" [says watchTowr CEO Harris]...

Harris of watchTowr doesn't want to minimize the engineering effort it takes to build a secure system. But he feels many of the vulnerabilities discovered in the past two years should have been caught with automatic code analysis tools or code reviews, given how basic they have been. Some VPN flaws were "trivial to the point of embarrassing for the vendor," he says, while even the complex ones should have been caught by any organization seriously investing in product security... Another problem? These appliances contain a lot of legacy code, some of it 10 years old or more.
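For a sense of how basic these vulnerability classes are, here is a minimal, self-contained Python/sqlite3 sketch of SQL injection: the string-concatenation pattern that any code-analysis tool would flag, next to the parameterized fix. (This is a generic illustration, not code from any vendor mentioned above.)

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: user input is concatenated straight into the SQL text,
    # so the input can rewrite the query itself (the classic 1990s flaw).
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver binds the input as data, never as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 -- the payload matched every row
print(len(find_user_safe(conn, payload)))    # 0 -- no user is literally named that
```

The fix has been standard practice for decades, which is exactly Harris's point about there being "really no excuse."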

Attackers may need to chain together multiple hard-to-find vulnerabilities across multiple components, the article acknowledges. And "It's also possible that attack campaigns against network-edge devices are becoming more visible to security teams because they are looking into what's happening on these appliances more than they did in the past..."

The article ends with reactions from several vendors of network edge security devices.

Thanks to Slashdot reader snydeq for sharing the article.
Ubuntu

Finally, You Can Now be a 'Certified' Ubuntu Sys-Admin/Linux User (itsfoss.com) 50

Thursday Ubuntu-maker Canonical "officially launched Canonical Academy, a new certification platform designed to help professionals validate their Linux and Ubuntu skills through practical, hands-on assessments," writes the blog It's FOSS: Focusing on real-world scenarios, Canonical Academy aims to foster practical skills rather than theoretical knowledge. The end goal? Getting professionals ready for the actual challenges they will face on the job. The learning platform is already live with its first course offering, the System Administrator track (with three certification exams), which is tailored for anyone looking to validate their Linux and Ubuntu expertise.

The exams use cloud-based testing environments that simulate real workplace scenarios. Each assessment is modular, meaning you can progress through individual exams and earn badges for each one. Complete all the exams in this track to earn the full Sysadmin qualification... Canonical is also looking for community members to contribute as beta testers and subject-matter experts (SME). If you are interested in helping shape the platform or want to get started with your certification, you can visit the Canonical Academy website.

The sys-admin track offers exams for Linux Terminal, Ubuntu Desktop 2024, Ubuntu Server 2024, and "managing complex systems," according to an official FAQ. "Each exam provides an in-browser remote desktop interface into a functional Ubuntu Desktop environment running GNOME. From this initial node, you will be expected to troubleshoot, configure, install, and maintain systems, processes, and other general activities associated with managing Linux. The exam is a hybrid format featuring multiple choice, scenario-based, and performance-based questions..."

"Test-takers interested in the types of material covered on each exam can review links to tutorials and documentation on our website."

The FAQ advises test takers to use a Chromium-based browser, as Firefox "is NOT supported at this time... There is a known issue with keyboards and Firefox in the CUE.01 Linux 24.04 preview release at this time, which will be resolved in the CUE.01 Linux 24.10 exam release."
AI

AI Assistants Misrepresent News Content 45% of the Time (bbc.co.uk) 112

An anonymous reader quotes a report from the BBC: New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants -- already a daily information gateway for millions of people -- routinely misrepresent news content no matter which language, territory, or AI platform is tested. The intensive international study of unprecedented scope and scale was launched at the EBU News Assembly, in Naples. Involving 22 public service media (PSM) organizations in 18 countries working in 14 languages, it identified multiple systemic issues across four leading AI tools. Professional journalists from participating PSM evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

Key findings:
- 45% of all AI answers had at least one significant issue.
- 31% of responses showed serious sourcing problems - missing, misleading, or incorrect attributions.
- 20% contained major accuracy issues, including hallucinated details and outdated information.
- Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
- Comparison between the BBC's results earlier this year and this study shows some improvements but still high levels of errors.
The team has released a News Integrity in AI Assistants Toolkit to help develop solutions to these problems and boost users' media literacy. They're also urging regulators to enforce laws on information integrity and continue independent monitoring of AI assistants.
Operating Systems

OpenBSD 7.8 Released (phoronix.com) 24

OpenBSD 7.8 has been released, adding Raspberry Pi 5 support, enhanced AMD Secure Encrypted Virtualization (SEV-ES) capabilities, and expanded hardware compatibility, including new Qualcomm, Rockchip, and Apple ARM drivers. Phoronix reports: OpenBSD 7.8 also brings multiple improvements around AMD Secure Encrypted Virtualization (AMD SEV) support: a PSP ioctl for encrypting and measuring state for SEV-ES, a new VMD option to run guests in SEV-ES mode, and other SEV-ES enablement work as a precursor to SEV-SNP. AMD SEV-ES should now be able to start confidential virtual machines (VMs) when using the VMM/VMD hypervisor, as well as OpenBSD guests under KVM/QEMU.

OpenBSD 7.8 also improves compatibility of the FUSE file-system support with the Linux implementation, suspend/hibernate improvements, SMP improvements, updating to the Linux 6.12.50 DRM graphics drivers, several new Rockchip drivers, Raspberry Pi RP1 drivers, H.264 video support for the uvideo driver, and many network driver improvements.
The changelog and download page can be found via OpenBSD.org.
Space

Mystery Object From 'Space' Strikes United Airlines Flight Over Utah (wired.com) 58

UPDATE: It may have been a weather balloon...

An anonymous reader quotes a report from Wired: The National Transportation Safety Board confirmed Sunday that it is investigating an airliner that was struck by an object in its windscreen, mid-flight, over Utah. "NTSB gathering radar, weather, flight recorder data," the federal agency said on the social media site X. "Windscreen being sent to NTSB laboratories for examination." The strike occurred Thursday, during a United Airlines flight from Denver to Los Angeles. Images shared on social media showed that one of the two large windows at the front of a 737 MAX aircraft was significantly cracked. Related images also reveal a pilot's arm that has been cut multiple times by what appear to be small shards of glass.

The captain of the flight reportedly described the object that hit the plane as "space debris." This has not been confirmed, however. After the impact, the aircraft was diverted and landed safely at Salt Lake City International Airport. Images of the strike showed that an object made a forceful impact near the upper-right part of the window, damaging the metal frame. Because aircraft windows are multiple layers thick, with laminate in between, the window pane did not shatter completely. The aircraft was flying above 30,000 feet -- likely around 36,000 feet -- and the cockpit apparently maintained its cabin pressure.

Classic Games (Games)

Chess Influencer and Grandmaster Daniel Naroditsky Dies At 29 (chess.com) 48

U.S. Grandmaster and beloved chess commentator Daniel Naroditsky has tragically passed away at the age of 29. "The news has sent shockwaves around the chess community, which is grieving the loss of one of the most beloved and influential voices," reports Chess.com. From the report: The devastating news was first shared by Naroditsky's club, Charlotte Chess Center, on Monday, and confirmed by Chess.com with multiple sources: "It is with great sadness that we share the unexpected passing of Daniel Naroditsky. Daniel was a talented chess player, educator, and cherished member of the chess community. He was also a loving son, brother, and loyal friend. We ask for privacy for Daniel's family during this extremely difficult time. Let us honor Daniel by remembering his passion for chess and the inspiration he brought to us all."

Naroditsky, who was three weeks away from turning 30, had long been known as one of the United States' most talented players. He earned his grandmaster title in 2013 at the age of 18 and placed fifth among the highest-ranked juniors in 2015. His last FIDE rating was 2619, placing him among the top 150 players in the world and 17th in the United States. His peak rating was 2647, reached in 2017. He leaves a legacy spanning strong over-the-board competition and highly popular chess instruction and commentary on streaming platforms.

AI

Claude Code Gets a Web Version (arstechnica.com) 2

An anonymous reader quotes a report from Ars Technica: Anthropic has added web and mobile interfaces for Claude Code, its immensely popular command-line interface (CLI) agentic AI coding tool. The web interface appears to be well-baked at launch, but the mobile version is limited to iOS and is in an earlier stage of development. The web version of Claude Code can be given access to a GitHub repository. Once that's done, developers can give it general marching orders like "add real-time inventory tracking to the dashboard."

As with the CLI version, it gets to work, with updates along the way approximating where it's at and what it's doing. The web interface supports the recently implemented Claude Code capability to take suggestions or requested changes while it's in the middle of working on a task. (Previously, if you saw it doing something wrong or missing something, you often had to cancel and start over.) Developers can run multiple sessions at once and switch between them as needed; they're listed in a left-side panel in the interface.

Alongside this web and mobile rollout, Anthropic has also introduced a new sandboxing runtime to Claude Code that, among other things, aims to make the experience both more secure and lower friction. In the past, Claude Code asked permission before most changes and steps along the way. Now, it can instead be granted standing permissions for specific file-system folders and network servers. That means fewer approval steps, while also hardening the tool against prompt injection and other risks.
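Claude Code's documented settings file supports permission rules of roughly this shape; the specific rule strings and paths below are illustrative assumptions rather than Anthropic's exact schema for the new sandbox, so check the official settings documentation before relying on them:

```json
{
  "permissions": {
    "allow": [
      "Read(~/projects/**)",
      "Edit(~/projects/myapp/**)",
      "WebFetch(domain:api.github.com)"
    ],
    "deny": [
      "Read(.env)"
    ]
  }
}
```

Scoping access up front like this is what lets the agent skip per-change approval prompts while still being fenced off from everything outside the allowed folders and domains.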
You can learn more about "Claude Code on the web" through the company's blog and official YouTube channel.

Note: the new features are available in beta as a research preview, and they are available to Claude users with Pro or Max subscriptions.
IT

Louvre Museum Security 'Outdated and Inadequate' at Time of Heist (thetimes.com) 33

A Court of Accounts report written before Sunday's theft of crown jewels from the Louvre revealed the museum's security systems were outdated and inadequate [non-paywalled source]. The report noted a lack of basic CCTV equipment across multiple wings. Cameras had mainly been installed only when rooms were refurbished due to repeated postponements of scheduled modernization. In the Denon wing where the Apollo Gallery was targeted, a third of rooms had no CCTV cameras. Three-quarters of rooms in the Richelieu wing and nearly two-thirds in the Sully wing lacked cameras.

The thieves were caught on camera at one point but were masked and impossible to identify, according to Paris public prosecutor Laure Beccuau. The alarm system activated when thieves cut open display cases, but they threatened staff who left the area. Culture minister Rachida Dati confirmed new CCTV cameras would be installed. President Macron had earmarked $186.3 million to upgrade the Louvre's security systems under a renaissance plan launched in June.
Transportation

Desperate to Stop Waymo's Dead-End Detours, a San Francisco Resident Tried an Orange Cone with a Sign (sfgate.com) 89

"This is an attempt to stop Waymo cars from driving into the dead end," complains a home-made sign in San Francisco, "where they are forced to reverse and adversely affect the lives of the residents."

On an orange traffic post, the home-made sign declares "NO WAYMO — 8:00 p.m. to 8:00 a.m," with an explanation for the rest of the neighborhood. "Waymo comes at all hours of the night and up to 7 times per hour with flashing lights and screaming reverse sounds, waking people up and destroying the quality of life."

SFGate reports that 1,400 people on Reddit upvoted a photo of the sign's text: It delves into the bureaucratic mess — multiple requests to Waymo, conversations with engineers, and 311 [municipal services] tickets, which had all apparently gone ignored — before finally providing instructions for human drivers. "Please move [the cones] back after you have entered so we can continue to try to block the Waymo cars from entering and disrupting the lives of residents."

This isn't the first time Waymo's autonomous vehicles have disrupted San Francisco residents' peace. Last year, a fleet of the robotaxis created another sleepless fiasco in the city's SoMa neighborhood, honking at each other for hours throughout the night for two and a half weeks.

Others on Reddit shared the concern. "I live at an dead end street in Noe Valley, and these Waymos always stuck there," another commenter posted. "It's been bad for more than a year," agreed another comment. "People on the Internet think you're just a hater but it's a real issue with Waymos."

On Thursday "the sign remained at the corner of Lake Street and Second Avenue," notes SFGate. And yet something appeared to have shifted: "Waymo vehicles weren't allowing drop-offs or pickups on the street, though whether this was due to the home-printed plea, the cone blockage, or simply updating routes remains unclear."
Education

AI-Generated Lesson Plans Fall Short On Inspiring Students, Promoting Critical Thinking (theconversation.com) 50

An anonymous reader quotes a report from The Conversation: When teachers rely on commonly used artificial intelligence chatbots to devise lesson plans, it does not result in more engaging, immersive or effective learning experiences compared with existing techniques, we found in our recent study. The AI-generated civics lesson plans we analyzed also left out opportunities for students to explore the stories and experiences of traditionally marginalized people. The allure of generative AI as a teaching aid has caught the attention of educators. A Gallup survey from September 2025 found that 60% of K-12 teachers are already using AI in their work, with the most common reported use being teaching preparation and lesson planning. [...]

For our research, we began collecting and analyzing AI-generated lesson plans to get a sense of what kinds of instructional plans and materials these tools provide to teachers. We decided to focus on AI-generated lesson plans for civics education because it is essential for students to learn productive ways to participate in the U.S. political system and engage with their communities. To collect data for this study, in August 2024 we prompted three GenAI chatbots -- the GPT-4o model of ChatGPT, Google's Gemini 1.5 Flash model and Microsoft's latest Copilot model -- to generate two sets of lesson plans for eighth grade civics classes based on Massachusetts state standards. One was a standard lesson plan and the other a highly interactive lesson plan.

We garnered a dataset of 311 AI-generated lesson plans, featuring a total of 2,230 activities for civic education. We analyzed the dataset using two frameworks designed to assess educational material: Bloom's taxonomy and Banks' four levels of integration of multicultural content. Bloom's taxonomy is a widely used educational framework that distinguishes between "lower-order" thinking skills, including remembering, understanding and applying, and "higher-order" thinking skills -- analyzing, evaluating and creating. Using this framework to analyze the data, we found 90% of the activities promoted only a basic level of thinking for students. Students were encouraged to learn civics through memorizing, reciting, summarizing and applying information, rather than through analyzing and evaluating information, investigating civic issues or engaging in civic action projects.
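A coding exercise like the one described above can be sketched in a few lines. This is not the researchers' actual coding scheme; the verb lists and sample activities are illustrative assumptions, shown only to make the lower-order/higher-order split concrete:

```python
# Tag lesson-plan activities against Bloom's taxonomy by their leading verb.
# Verb lists and sample activities are illustrative, not the study's rubric.

LOWER_ORDER = {"remember", "understand", "apply", "memorize",
               "recite", "summarize", "list", "define"}
HIGHER_ORDER = {"analyze", "evaluate", "create", "investigate",
                "critique", "design"}

def classify(activity: str) -> str:
    """Return 'lower', 'higher', or 'unclassified' for one activity."""
    verb = activity.split()[0].lower()
    if verb in HIGHER_ORDER:
        return "higher"
    if verb in LOWER_ORDER:
        return "lower"
    return "unclassified"

activities = [
    "Summarize the three branches of government",
    "List the rights in the First Amendment",
    "Define separation of powers",
    "Analyze a Supreme Court ruling from two perspectives",
    "Recite the Preamble",
]

counts: dict[str, int] = {}
for a in activities:
    tag = classify(a)
    counts[tag] = counts.get(tag, 0) + 1

share_lower = counts.get("lower", 0) / len(activities)
print(f"lower-order share: {share_lower:.0%}")  # 4 of 5 here -> 80%
```

Applied to the study's 2,230 activities, tagging of this general kind is what yields the reported figure that 90% promoted only lower-order thinking.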

When examining the lesson plans using Banks' four levels of integration of multicultural content model (PDF), which was developed in the 1990s, we found that the AI-generated civics lessons featured a rather narrow view of history -- often leaving out the experiences of women, Black Americans, Latinos and Latinas, Asian and Pacific Islanders, disabled individuals and other groups that have long been overlooked. Only 6% of the lessons included multicultural content. These lessons also tended to focus on heroes and holidays rather than deeper explorations of understanding civics through multiple perspectives. Overall, we found the AI-generated lesson plans to be decidedly boring, traditional and uninspiring. If civics teachers used these AI-generated lesson plans as is, students would miss out on active, engaged learning opportunities to build their understanding of democracy and what it means to be a citizen.

Games

Video Game Union Workers Rally Against $55 Billion Saudi-Backed Private Acquisition of EA (eurogamer.net) 36

EA employees and the Communications Workers of America union have condemned the company's proposed $55 billion private acquisition -- backed by Saudi Arabia's Public Investment Fund and Jared Kushner's Affinity Partners -- "claiming they were not represented in the negotiations and any jobs lost as a result would 'be a choice, not a necessity, made to pad investors' pockets,'" reports Eurogamer. From the report: Following the announcement, there's been plenty of speculation around the future of EA and its multiple owned studios, split between EA Sports and EA Entertainment. Now, members of the United Videogame Workers union and the CWA have issued a formal response alongside a petition for regulators to scrutinize the deal. "EA is not a struggling company," the statement reads. "With annual revenues reaching $7.5 billion and $1 billion in profit each year, EA is one of the largest video game developers and publishers in the world."

This success has been driven by company workers, the union stated. "Yet we, the very people who will be jeopardized as a result of this deal, were not represented at all when this buyout was negotiated or discussed." Citing the number of layoffs across the industry since 2022, workers fear for "the future of our studios that are arbitrarily deemed 'less profitable' but whose contributions to the video game industry define EA's reputation." "If jobs are lost or studios are closed due to this deal, that would be a choice, not a necessity, made to pad investors' pockets - not to strengthen the company," the statement reads.

"Every time private equity or billionaire investors take a studio private, workers lose visibility, transparency, and power," it continues. "Decisions that shape our jobs, our art, and our futures are made behind closed doors by executives who have never written a line of code, built worlds, or supported live services. We are calling on regulators and elected officials to scrutinize this deal and ensure that any path forward protects jobs, preserves creative freedom, and keeps decision-making accountable to the workers who make EA successful." As such, workers have launched a petition in a "fight to make video games better for workers and players -- not billionaires". The statement concludes: "The value of video games is in their workers. As a unified voice, we, the members of the industry-wide video game workers' union UVW-CWA, are standing together and refusing to let corporate greed decide the future of our industry."

AI

Open Source GZDoom Community Splinters After Creator Inserts AI-Generated Code (arstechnica.com) 46

An anonymous reader quotes a report from Ars Technica: If you've even idly checked in on the robust world of Doom fan development in recent years, you've probably encountered one of the hundreds of gameplay mods, WAD files, or entire commercial games based on GZDoom. The open source Doom port -- which can trace its lineage back to the original launch of ZDoom in 1998 -- adds modern graphics rendering, quality-of-life additions, and incredibly deep modding features to the original Doom source code that John Carmack released in 1997. Now, though, the community behind GZDoom is publicly fracturing, with a large contingent of developers uniting behind a new fork called UZDoom. The move is in apparent protest of the leadership of GZDoom creator and maintainer Christoph Oelckers (aka Graf Zahl), who recently admitted to inserting untested AI-generated code into the GZDoom codebase.

"Due to some disagreements -- some recent; some tolerated for close to 2 decades -- with how collaboration should work, we've decided that the best course of action was to fork the project," developer Nash Muhandes wrote on the DoomWorld forums Wednesday. "I don't want to see the GZDoom legacy die, as do most all of us, hence why I think the best thing to do is to continue development through a fork, while introducing a different development model that highly favors transparent collaboration between multiple people." [...] Zahl defended the use of AI-generated snippets for "boilerplate code" that isn't key to underlying game features. "I surely have my reservations about using AI for project specific code," he wrote, "but this here is just superficial checks of system configuration settings that can be found on various websites -- just with 10x the effort required."

But others in the community were adamant that there's no place for AI tools in the workflow of an open source project like this. "If using code slop generated from ChatGPT or any other GenAI/AI chatbots is the future of this project, I'm sorry to say but I'm out," GitHub user Cacodemon345 wrote, summarizing the feelings of many other developers. In a GitHub bug report posted Tuesday, user the-phinet laid out the disagreements over AI-generated code alongside other alleged issues with Zahl's top-down approach to pushing out GZDoom updates.
