The Internet

Fury Over Discord's Age Checks Explodes After Shady Persona Test In UK (arstechnica.com)

Backlash intensified against Discord's age verification rollout after it briefly disclosed a UK age-verification test involving vendor Persona, contradicting earlier claims about minimal ID storage and transparency. Ars Technica explains: One of the major complaints was that Discord planned to collect more government IDs as part of its global age verification process. It shocked many that Discord would be so bold so soon after a third-party breach of a former age-check partner's services exposed 70,000 Discord users' government IDs.

Attempting to reassure users, Discord claimed that most users wouldn't have to show ID, instead relying on video selfies using AI to estimate ages, which raised separate privacy concerns. In the future, perhaps behavioral signals would override the need for age checks for most users, Discord suggested, seemingly downplaying the risk that sensitive data would be improperly stored. Discord didn't hide that it planned to continue requesting IDs for any user appealing an incorrect age assessment, and users weren't happy, since that is exactly how the prior breach happened. Responding to critics, Discord claimed that the majority of ID data was promptly deleted. Specifically, Savannah Badalich, Discord's global head of product policy, told The Verge that IDs shared during appeals "are deleted quickly -- in most cases, immediately after age confirmation."

It's unsurprising then that backlash exploded after Discord posted, and then weirdly deleted, a disclaimer on an FAQ about Discord's age assurance policies that contradicted Discord's hyped short timeline for storing IDs. An archived version of the page shows the note shared this warning: "Important: If you're located in the UK, you may be part of an experiment where your information will be processed by an age-assurance vendor, Persona. The information you submit will be temporarily stored for up to 7 days, then deleted. For ID document verification, all details are blurred except your photo and date of birth, so only what's truly needed for age verification is used."

Critics felt that Discord was obscuring not just how long IDs may be stored, but also the entities collecting information. Discord did not provide details on what the experiment was testing or how many users were affected, and Persona was not listed as a partner on its platform. Asked for comment, Discord told Ars that only a small number of users were included in the experiment, which ran for less than one month. That test has since concluded, Discord confirmed, and Persona is no longer an active vendor partnering with Discord. Moving forward, Discord promised to "keep our users informed as vendors are added or updated." While Discord seeks to distance itself from Persona, Rick Song, Persona's CEO [...] told Ars that all the data of verified individuals involved in Discord's test has been deleted.
Ars also notes that hackers "quickly exposed a 'workaround' to avoid Persona's age checks on Discord" and "found a Persona frontend exposed to the open internet on a U.S. government authorized server."

The Rage, an independent publication that covers financial surveillance, reported: "In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting -- and a parallel implementation that appears designed to serve federal agencies." While Persona does not have any government contracts, the exposed service "appears to be powered by an OpenAI chatbot," The Rage noted.

Hackers warned "that OpenAI may have created an internal database for Persona identity checks that spans all OpenAI users via its internal watchlistdb," seemingly exploiting the "opportunity to go from comparing users against a single federal watchlist, to creating the watchlist of all users themselves."
The Internet

A Game Studio's Fired Co-Founder Hijacked Its Domain Name, a New Lawsuit Alleges (aftermath.site)

Three co-founders of the game studio That's No Moon "are suing another co-founder for allegedly hijacking the company's website domain name," reports the gaming news site Aftermath, "taking the website offline and disabling employee access to email accounts, according to a new lawsuit." Tina Kowalewski, Taylor Kurosaki, and Nick Kononelos filed a complaint against co-founder and former CEO Michael Mumbauer on Tuesday in a California court. [Game studio] That's No Moon, which was founded in 2020 by veterans of Infinity Ward, Naughty Dog, and other AAA studios, said in its complaint that Mumbauer is looking to "cripple" the studio after being fired in 2022...

Mumbauer, according to the complaint, purchased the domain name, and several others, when the studio was founded; it said both parties agreed these would be controlled by the studio. Mumbauer allegedly still has access to the domains, and That's No Moon said he took control over the website on Jan. 6, disabled the studio's access, and turned off employees' ability to email external addresses. The team was locked out for two days as a four-person IT team worked to get the services back online. On the public-facing side, the website briefly redirected to the Travel Switzerland page, according to the complaint. That's No Moon's lawyers said the co-founders sent Mumbauer a letter on Jan. 7 demanding he "relinquish his unauthorized access." That's when, according to the complaint, the website started redirecting to a GoDaddy Auction site, where the domain was priced at $6,666,666; That's No Moon remarked in the complaint: "A number that [Mumbauer] may well have selected for its Satanic connotation."

As of Wednesday, Aftermath was able to access a public-facing That's No Moon website using both the original domain and the new one... The claims listed as part of this lawsuit are trademark infringement, cybersquatting, computer fraud, conversion, trespass to chattels, and breach of contract. That's No Moon also asked a judge for a temporary restraining order to prevent Mumbauer from continuing to access the domains. Mumbauer has not responded to Aftermath's request for comment. Mumbauer said, in an email to That's No Moon attorney Amit Rana published as part of the lawsuit, that he intends to file "a wrongful termination countersuit and will be seeking extensive damages...."

That's No Moon hasn't yet announced its first game, but has said the game is led by creative director Taylor Kurosaki and game director Jacob Minkoff. South Korean publisher Smilegate invested $100 million into the company, That's No Moon announced in 2021.

News

ACM To Make Its Entire Digital Library Open Access Starting January 2026 (acm.org)

The Association for Computing Machinery, the world's largest society of computing professionals, announced that all publications and related artifacts in the ACM Digital Library will become freely available to everyone starting January 2026. Authors will retain full copyright to their published work under the new arrangement, and ACM has committed to defending those works against copyright and integrity-related violations.

The transition follows what ACM described as extensive dialogue with authors, Special Interest Group leaders, editorial boards, libraries, and research institutions globally. Students, educators, and researchers at institutions of all sizes -- from well-resourced universities to emerging research communities -- will gain unrestricted access to the full catalog of ACM-published work. The Digital Library houses decades of computing research across journals, magazines, conference proceedings, and books.
ISS

Russia Left Without Access to ISS Following Structure Collapse During Thursday's Launch (nasaspaceflight.com)

After a successful November 27th launch to the International Space Station, Russia discovered that an accident had damaged the launch site's mobile maintenance cabin — a drone spotted it lying upside down in a flame trench. "The main issue with the structure collapse is that it puts Site 31/6 — the only Russian launch site capable of launching crew and cargo to the International Space Station (ISS) — out of service until the structure is fixed," reports the space-news site NASA Spaceflight. There are other Soyuz 2 rocket launch pads, but they are either located at an unsuitable latitude, like Plesetsk, or not certified for crewed flights, like Vostochny, or decommissioned and transferred to a museum, like Gagarin's Start at Baikonur. As a result, Russia is temporarily unable to launch Soyuz crewed spacecraft and Progress cargo ships to the ISS; the next launch (Progress MS-33) was scheduled for December 21....

When the rocket launched, a pressure difference was created between the space under the rocket, where gases from running engines are discharged, and the nook where the [144-ton] maintenance cabin was located. The resulting pressure difference pulled the service cabin out of the nook and threw it into the flame trench, where it fell upside down from a height of 20 m. Photos of the accident showed significant damage to the maintenance cabin, which, according to experts, is too extensive to allow for repairs. The only way to resume launches from Site 31/6 is to install a spare maintenance cabin or construct a new one.

Although the fallen structure was manufactured in the 1960s, two similar service cabins were built recently at the Tyazhmash heavy-engineering plant in Syzran for other Soyuz launch complexes at the Guiana Space Center and Vostochny Cosmodrome. Each cabin took around two years to produce, however, and that was without the pressure of an emergency.

"Various experts gave different possible estimates of the recovery time of the Site 31 launch complex: from several months to three years."
Google

Google Is Collecting Troves of Data From Downgraded Nest Thermostats

Even after disabling remote control and officially ending support for early Nest Learning Thermostats, Google is still receiving detailed sensor and activity data from these devices, including temperature changes, motion, and ambient light. The Verge reports: After digging into the backend, security researcher Cody Kociemba found that the first- and second-generation Nest Learning Thermostats are still sending Google information about manual temperature changes, whether a person is present in the room, if sunlight is hitting the device, and more. Kociemba made the discovery while participating in a bounty program created by FULU, a right-to-repair advocacy organization cofounded by electronics repair technician and YouTuber Louis Rossmann.

FULU challenged developers to come up with a solution to restore smart functionality to Nest devices no longer supported by Google, and that's exactly what Kociemba did with his open-source No Longer Evil project. But after cloning Google's API to create this custom software, he started receiving a trove of logs from customer devices, which he turned off. "On these devices, while they [Google] turned off access to remotely control them, they did leave in the ability for the devices to upload logs. And the logs are pretty extensive," Kociemba tells The Verge. [...] "I was under the impression that the Google connection would be severed along with the remote functionality, however that connection is not severed, and instead is a one-way street," Kociemba says.
AI

AI's $5 Trillion Cost Needs Every Debt Market, JPMorgan Says (bloomberg.com)

The furious push by AI hyperscalers to build out data centers will need about $1.5 trillion of investment-grade bonds over the next five years and extensive funding from every other corner of the market, according to an analysis by JPMorgan. From a report: "The question is not 'which market will finance the AI-boom?' Rather, the question is 'how will financings be structured to access every capital market?'" according to strategists led by Tarek Hamid.

Leveraged finance is primed to provide around $150 billion over the next half decade, they said. Even with funding from the investment-grade and high-yield bond markets, as well as up to $40 billion per year in data-center securitizations, the total will still fall short of demand, the strategists added. Private credit and governments could help cover a remaining $1.4 trillion funding gap, the report estimates. The bank calculates a tab of at least $5 trillion that could climb as high as $7 trillion, singlehandedly driving a reacceleration of growth in the bond and syndicated loan markets, the strategists wrote in a report Monday. The analysts project $300 billion in high-grade bonds going toward AI data centers next year. That could account for nearly one fifth of total issuance in that market, which a report from Barclays estimates will grow to $1.6 trillion.
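As a sanity check on the figures above, here is a short back-of-envelope tally (our own arithmetic, not JPMorgan's breakdown; treating the $1.4 trillion gap as the private-credit/government share is our reading of the report):

```python
# Tally of the debt sources named in the report, in $ billions over
# five years. Figures are taken from the summary above.
sources = {
    "investment-grade bonds": 1_500,
    "leveraged finance": 150,
    "data-center securitizations": 40 * 5,  # "up to $40B per year"
    "private credit / governments (gap)": 1_400,
}
debt_total = sum(sources.values())
print(f"Named debt sources: ${debt_total / 1000:.2f}T")  # $3.25T; the
# balance of the $5T+ tab would come from equity and operating cash flow.

# The projected $300B of high-grade AI issuance next year against a
# high-grade market Barclays estimates at $1.6T:
print(f"AI share of high-grade issuance: {300 / 1_600:.0%}")  # 19%
```

The 19% share is consistent with the strategists' "nearly one fifth" characterization.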

Security

FCC To Rescind Ruling That Said ISPs Are Required To Secure Their Networks (arstechnica.com)

The FCC plans to repeal a Biden-era ruling that required ISPs to secure their networks under the Communications Assistance for Law Enforcement Act, instead relying on voluntary cybersecurity commitments from telecom providers. FCC Chairman Brendan Carr said the ruling "exceeded the agency's authority and did not present an effective or agile response to the relevant cybersecurity threats." Carr said the vote scheduled for November 20 comes after "extensive FCC engagement with carriers" who have taken "substantial steps... to strengthen their cybersecurity defenses." Ars Technica reports: The FCC's January 2025 declaratory ruling came in response to attacks by China, including the Salt Typhoon infiltration of major telecom providers such as Verizon and AT&T. The Biden-era FCC found that the Communications Assistance for Law Enforcement Act (CALEA), a 1994 law, "affirmatively requires telecommunications carriers to secure their networks from unlawful access or interception of communications."

"The Commission has previously found that section 105 of CALEA creates an affirmative obligation for a telecommunications carrier to avoid the risk that suppliers of untrusted equipment will 'illegally activate interceptions or other forms of surveillance within the carrier's switching premises without its knowledge,'" the January order said. "With this Declaratory Ruling, we clarify that telecommunications carriers' duties under section 105 of CALEA extend not only to the equipment they choose to use in their networks, but also to how they manage their networks."
A draft of the order that will be voted on in November can be found here (PDF).
Businesses

Qualcomm Is Buying Arduino, Releases New Raspberry Pi-Esque Arduino Board (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Smartphone processor and modem maker Qualcomm is acquiring Arduino, the Italian company known mainly for its open source ecosystem of microcontrollers and the software that makes them function. In its announcement, Qualcomm said that Arduino would "[retain] its brand and mission," including its "open source ethos" and "support for multiple silicon vendors." Qualcomm didn't disclose what it would pay to acquire Arduino. The acquisition still needs regulatory approval and is subject to "other customary closing conditions."

The first fruit of this pending acquisition will be the Arduino Uno Q, a Qualcomm-based single-board computer with a Qualcomm Dragonwing QRB2210 processor installed. The QRB2210 includes a quad-core Arm Cortex-A53 CPU and a Qualcomm Adreno 702 GPU, plus Wi-Fi and Bluetooth connectivity, and combines that with a real-time microcontroller "to bridge high-performance computing with real-time control."
"Arduino will retain its independent brand, tools, and mission, while continuing to support a wide range of microcontrollers and microprocessors from multiple semiconductor providers as it enters this next chapter within the Qualcomm family," Qualcomm said in its press release. "Following this acquisition, the 33M+ active users in the Arduino community will gain access to Qualcomm Technologies' powerful technology stack and global reach. Entrepreneurs, businesses, tech professionals, students, educators, and hobbyists will be empowered to rapidly prototype and test new solutions, with a clear path to commercialization supported by Qualcomm Technologies' advanced technologies and extensive partner ecosystem."

CNBC notes in its reporting that this acquisition gives Qualcomm "direct access to the tinkerers, hobbyists and companies at the lowest levels of the robotics industry." From the report: Arduino products can't be used to build commercial products but, with chips preinstalled, they're popular for testing out a new idea or proving a concept. Qualcomm hopes that Arduino can help it gain loyalty and legitimacy among startups and builders as robots and other devices increasingly need more powerful chips for artificial intelligence. When some of those experiments become products, Qualcomm wants to sell them its chips commercially.
The Almighty Buck

Neon Pays Users To Record Their Phone Calls, Sell Data To AI Firms

Neon Mobile, now the No. 2 social networking app in Apple's U.S. App Store, pays users up to $30 per day to record their phone calls and sell the data to AI companies. The app claims to only capture one side of a call unless both parties use Neon, but its terms grant sweeping rights over recordings. TechCrunch reports: The app, Neon Mobile, pitches itself as a money-making tool offering "hundreds or even thousands of dollars per year" for access to your audio conversations. Neon's website says the company pays 30 cents per minute when you call other Neon users and up to $30 per day maximum for making calls to anyone else. The app also pays for referrals.
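Neon's advertised rates make for easy back-of-envelope math. A minimal sketch, using only the figures in the report (the annualized number assumes daily use, which is our assumption, not Neon's claim):

```python
# Hedged back-of-envelope for Neon's advertised payouts: $0.30/min for
# calls to other Neon users and a $30/day cap, per the report above.
RATE_PER_MIN = 0.30
DAILY_CAP = 30.00

def daily_payout(minutes: float) -> float:
    """Payout for one day of recorded Neon-to-Neon calling, capped at $30."""
    return min(minutes * RATE_PER_MIN, DAILY_CAP)

# 100 minutes/day is where the per-minute rate hits the daily cap.
assert daily_payout(100) == 30.00

# A heavy user recording calls every single day for a year:
print(round(daily_payout(120) * 365, 2))  # capped at $30/day
```

Even well short of daily use, a regular user lands in the "hundreds or even thousands of dollars per year" range the marketing cites.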

According to Neon's terms of service, the company's mobile app can capture users' inbound and outbound phone calls. However, Neon's marketing claims to only record your side of the call unless it's with another Neon user. That data is being sold to "AI companies," the company's terms of service state, "for the purpose of developing, training, testing, and improving machine learning models, artificial intelligence tools and systems, and related technologies."

Despite what Neon's privacy policy says, its terms include a very broad license to its user data, where Neon grants itself a: "...worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed." That leaves plenty of wiggle room for Neon to do more with users' data than it claims. The terms also include an extensive section on beta features, which have no warranty and may have all sorts of issues and bugs.
Peter Jackson, cybersecurity and privacy attorney at Greenberg Glusker, told TechCrunch: "Once your voice is over there, it can be used for fraud. Now, this company has your phone number and essentially enough information -- they have recordings of your voice, which could be used to create an impersonation of you and do all sorts of fraud."
Communications

SES Completes $3 Billion Acquisition of Intelsat, Expanding Global Satellite Fleet (ses.com)

"The Luxembourg-based satellite company SES has now completed its acquisition of the European-based satellite company Intelsat, giving the combined company 120 active satellites in a variety of low and high Earth orbits," writes longtime Slashdot reader schwit1. "Both companies are long established, with Intelsat initially founded in the mid-1960s as a consortium of 23 nations aimed at launching the first geosynchronous communications satellites over the Atlantic and Pacific serving most of the Old World and linked to the New. The merger is an attempt by both companies to compete with the new low-orbit constellations of SpaceX, Amazon, and China." From a press release: With a world-class network including approximately 90 geostationary (GEO), nearly 30 medium earth orbit (MEO) satellites, strategic access to low earth orbit (LEO) satellites, and an extensive ground network, SES can now deliver connectivity solutions utilizing complementary spectrum bands including C-, Ku-, Ka-, Military Ka-, X-band, and Ultra High Frequency. The expanded capabilities of the combined company will enable it to deliver premium-quality services and tailored solutions to its customers. The company's assets and networks, once fully integrated, will put SES in a strong competitive position to better serve the evolving needs of its customers including governments, aviation, maritime, and media across the globe. "Our focus is clear: to grow, to lead in high-potential markets, and to shape the future of our industry," said SES CEO Adel Al-Saleh in a statement. "This is a long-term play, and we are building with the future in mind -- growing year after year, expanding our capabilities, and creating lasting value for our customers and shareholders alike."

Fierce Network notes that the FCC is preparing to auction upper C-band spectrum (3.98-4.2 GHz), previously cleared in part by SES and Intelsat and now eyed for 5G expansion by Verizon and AT&T. With new legislative backing and industry pressure, including from CTIA and FCC Chairman Brendan Carr, the agency is being urged to act quickly to auction and open this spectrum for full-power wireless use.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com)

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.
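The amplified-oversight pattern can be illustrated with a toy sketch. The "models" below are trivial stand-ins we invented for illustration (real systems would be separate LLM instances), so this shows only the control flow the paper describes, not DeepMind's actual technique:

```python
# Toy illustration of amplified oversight: a second copy of a model
# reviews the first copy's output, and nothing ships without approval.
def model_a(task: str) -> str:
    # Stand-in for the model doing the work.
    return f"answer to {task}"

def model_b_review(task: str, answer: str) -> bool:
    # Stand-in for the overseer copy critiquing the output.
    return answer.startswith("answer to") and task in answer

def amplified_oversight(task: str) -> str:
    answer = model_a(task)
    if not model_b_review(task, answer):
        # The "off switch": reject output the overseer flags.
        raise RuntimeError("output rejected by overseer model")
    return answer

print(amplified_oversight("summarize the paper"))  # passes review
```

The value of the pattern is that the release decision never rests with the model that produced the output.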

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

ISS

Axiom Space and Red Hat Will Bring Edge Computing to the International Space Station (theregister.com)

Axiom Space and Red Hat will collaborate to launch Data Center Unit-1 (AxDCU-1) to the International Space Station this spring. It's a small data processing prototype (powered by lightweight, edge-optimized Red Hat Device Edge) that will demonstrate initial Orbital Data Center (ODC) capabilities.

"It all sounds rather grand for something that resembles a glorified shoebox," reports the Register. Axiom Space said: "The prototype will test applications in cloud computing, artificial intelligence, and machine learning (AI/ML), data fusion and space cybersecurity."

Space is an ideal environment for edge devices. Connectivity to datacenters on Earth is severely constrained, so the more processing that can be done before data is transmitted to a terrestrial receiving station, the better. Tony James, chief architect, Science and Space at Red Hat, said: "Off-planet data processing is the next frontier, and edge computing is a crucial component. With Red Hat Device Edge and in collaboration with Axiom Space, Earth-based mission partners will have the capabilities necessary to make real-time decisions in space with greater reliability and consistency...."

The Red Hat Device Edge software used by Axiom's device combines Red Hat Enterprise Linux, the Red Hat Ansible Platform, and MicroShift, a lightweight Kubernetes container orchestration service derived from Red Hat OpenShift. The plan is for Axiom Space to host hybrid cloud applications and cloud-native workloads on-orbit. Jason Aspiotis, global director of in-space data and security, Axiom Space, told The Register that the hardware itself is a commercial off-the-shelf unit designed for operation in harsh environments... "AxDCU-1 will have the ability to be controlled and utilized either via ground-to-space or space-to-space communications links. Our current plans are to maintain this device on the ISS. We plan to utilize this asset for at least two years."

The article notes that HPE has also "sent up a succession of Spaceborne computers — commercial, off-the-shelf supercomputers — over the years to test storage, recovery, and operational potential on long-duration missions." (They apparently use Red Hat Enterprise Linux.) "At the other end of the scale, the European Space Agency has run Raspberry Pi computers on the ISS for years as part of the AstroPi educational outreach program."

Axiom Space says its Orbital Data Center is designed to "reduce delays traditionally associated with orbital data processing and analysis." By utilizing Earth-independent cloud storage and edge processing infrastructure, Axiom Space ODCs will enable data to be processed closer to its source, spacecraft or satellites, bypassing the need for terrestrial-based data centers. This architecture alleviates reliance on costly, slow, intermittent, or contested network connections, enabling more secure and faster decision-making in space.

The goal is to allow Axiom Space and its partners to have access to real-time processing capabilities, laying the foundation for increased reliability and improved space cybersecurity with extensive applications. Use cases for ODCs include but are not limited to supporting Earth observation satellites with in-space and lower latency data storage and processing, AI/ML training on-orbit, multi-factor authentication and cyber intrusion detection and response, supervised autonomy, in-situ space weather analytics and off-planet backup & disaster recovery for critical infrastructure on Earth.

AI

Signal President Calls Out Agentic AI As Having 'Profound' Security and Privacy Issues (techcrunch.com)

Signal President Meredith Whittaker warned at SXSW that agentic AI poses significant privacy and security risks, as these AI agents require extensive access to users' personal data, likely processing it unencrypted in the cloud. TechCrunch reports: "So we can just put our brain in a jar because the thing is doing that and we don't have to touch it, right?" Whittaker mused. Then she explained the type of access the AI agent would need to perform these tasks, including access to our web browser and a way to drive it as well as access to our credit card information to pay for tickets, our calendar, and messaging app to send the text to your friends. "It would need to be able to drive that [process] across our entire system with something that looks like root permission, accessing every single one of those databases -- probably in the clear, because there's no model to do that encrypted," Whittaker warned.

"And if we're talking about a sufficiently powerful ... AI model that's powering that, there's no way that's happening on device," she continued. "That's almost certainly being sent to a cloud server where it's being processed and sent back. So there's a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data," Whittaker concluded.

If a messaging app like Signal were to integrate with AI agents, it would undermine the privacy of your messages, she said. The agent has to access the app to text your friends and also pull data back to summarize those texts. Her comments followed remarks she made earlier during the panel on how the AI industry had been built on a surveillance model with mass data collection. She said that the "bigger is better AI paradigm" -- meaning the more data, the better -- had potential consequences that she didn't think were good. With agentic AI, Whittaker warned we'd further undermine privacy and security in the name of a "magic genie bot that's going to take care of the exigencies of life," she concluded.
You can watch the full speech on YouTube.
Science

Evolution Journal Editors Resign En Masse (arstechnica.com) 38

An anonymous reader quotes a report from Ars Technica, written by Jennifer Ouellette: Over the holiday weekend, all but one member of the editorial board of Elsevier's Journal of Human Evolution (JHE) resigned "with heartfelt sadness and great regret," according to Retraction Watch, which helpfully provided an online PDF of the editors' full statement. It's the 20th mass resignation from a science journal since 2023 over various points of contention, per Retraction Watch, many in response to controversial changes in the business models used by the scientific publishing industry. "This has been an exceptionally painful decision for each of us," the board members wrote in their statement. "The editors who have stewarded the journal over the past 38 years have invested immense time and energy in making JHE the leading journal in paleoanthropological research and have remained loyal and committed to the journal and our authors long after their terms ended. The [associate editors] have been equally loyal and committed. We all care deeply about the journal, our discipline, and our academic community; however, we find we can no longer work with Elsevier in good conscience."

The editorial board cited several changes made over the last ten years that it believes are counter to the journal's longstanding editorial principles. These included eliminating support for a copy editor and a special issues editor, leaving it to the editorial board to handle those duties. When the board expressed the need for a copy editor, Elsevier's response, they said, was "to maintain that the editors should not be paying attention to language, grammar, readability, consistency, or accuracy of proper nomenclature or formatting." There is also a major restructuring of the editorial board underway that aims to reduce the number of associate editors by more than half, which "will result in fewer AEs handling far more papers, and on topics well outside their areas of expertise." Furthermore, there are plans to create a third-tier editorial board that functions largely in a figurehead capacity, after Elsevier "unilaterally took full control" of the board's structure in 2023 by requiring all associate editors to renew their contracts annually -- which the board believes undermines its editorial independence and integrity.

In-house production has been reduced or outsourced, and in 2023 Elsevier began using AI during production without informing the board, resulting in many style and formatting errors, as well as reversing versions of papers that had already been accepted and formatted by the editors. "This was highly embarrassing for the journal and resolution took six months and was achieved only through the persistent efforts of the editors," the editors wrote. "AI processing continues to be used and regularly reformats submitted manuscripts to change meaning and formatting and require extensive author and editor oversight during proof stage." In addition, the author page charges for JHE are significantly higher than even Elsevier's other for-profit journals, as well as broad-based open access journals like Scientific Reports. Not many of the journal's authors can afford those fees, "which runs counter to the journal's (and Elsevier's) pledge of equality and inclusivity," the editors wrote. The breaking point seems to have come in November, when Elsevier informed co-editors Mark Grabowski (Liverpool John Moores University) and Andrea Taylor (Touro University California College of Osteopathic Medicine) that it was ending the dual-editor model that has been in place since 1986. When Grabowski and Taylor protested, they were told the model could only remain if they took a 50 percent cut in their compensation.

Communications

Feds Warn SMS Authentication Is Unsafe (gizmodo.com) 88

An anonymous reader quotes a report from Gizmodo: Hackers aligned with the Chinese government have infiltrated U.S. telecommunications infrastructure so deeply that it allowed the interception of the unencrypted communications of a number of people, according to reports that first emerged in October. The operation, dubbed Salt Typhoon, apparently allowed hackers to listen to phone calls and nab text messages, and the penetration has been so extensive that the hackers haven't even been booted from the telecom networks yet. The Cybersecurity and Infrastructure Security Agency (CISA) issued guidance this week on best practices for protecting "highly targeted individuals," which includes a new warning (PDF) about text messages.

"Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider's network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals," the guidance, which has been posted online, reads. Not every service even allows for multi-factor authentication, and sometimes text messages are the only option. But when you have a choice, it's better to use phishing-resistant methods like passkeys or authenticator apps. CISA prefaces its guidance by insisting it's only really speaking about high-value targets.
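The reason authenticator apps avoid the carrier network entirely is that they compute one-time codes locally from a shared secret, per the TOTP algorithm (RFC 6238, built on RFC 4226's HOTP). A minimal standard-library sketch of how those codes are derived:

```python
import hmac
import hashlib
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a 64-bit counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # low nibble of last byte picks a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP keyed to the current 30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(key, int(t // step))

# RFC test vectors (ASCII key "12345678901234567890"): counter 1 -> "287082",
# which is also the 6-digit TOTP at T=59 seconds (window 59 // 30 == 1).
```

Because nothing travels over SMS, an attacker inside a telecom network has nothing to intercept; the remaining phishing risk (a user typing the code into a fake site) is what passkeys additionally eliminate.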
The telecommunications hack mentioned above has been called the "worst hack in our nation's history," according to Sen. Mark Warner (D-VA).
Security

Fired Employee Allegedly Hacked Disney World's Menu System to Alter Peanut Allergy Information (404media.co) 135

An anonymous reader shares a report: A disgruntled former Disney employee allegedly hacked repeatedly into third-party menu creation software used by Walt Disney World's restaurants and, according to a federal criminal complaint, changed allergy information on menus to say that foods containing peanuts were safe for people with allergies, added profanity to menus, and at one point changed all the fonts used on menus to Wingdings.

The suspect in the case, Michael Scheuer, broke into a proprietary menu creation and inventory system that was developed by a third-party company exclusively for Disney and is used to print menus for its restaurants, the complaint alleges. He allegedly did this soon after being fired by Disney, using passwords he still had access to on several different systems. Once inside the systems, he allegedly altered menus and, in one case, broke the software for several weeks.

"The threat actor manipulated the allergen information on menus by adding information to some allergen notifications that indicated certain menu items were safe for individuals with peanut allergies, when in fact they could be deadly to those with peanut allergies," the criminal complaint states. According to the complaint, the menus were caught by Disney after they were printed but before they were distributed to Disney restaurants. Disney's menus have extensive "allergy friendly" sections.

The Almighty Buck

Bill Gates Applauds Open Source Tools for 'Digital Public Infrastructure' (gatesnotes.com) 49

"It connects people, data, and money," Bill Gates wrote this week on his personal blog. But digital public infrastructure is also "revolutionizing the way entire nations serve their people, respond to crises, and grow their economies" — and the Gates Foundation sees it "as an important part of our efforts to help save lives and fight poverty in poor countries." Digital public infrastructure [or "DPI"]: digital ID systems that securely prove who you are, payment systems that move money instantly and cheaply, and data exchange platforms that allow different services to work together seamlessly... [W]ith the right investments, countries can use DPI to bypass outdated and inefficient systems, immediately adopt cutting-edge digital solutions, and leapfrog traditional development trajectories — potentially accelerating their progress by more than a decade. Countries without extensive branch banking can move straight to mobile banking, reaching far more people at a fraction of the cost. Similarly, digital ID systems can provide legal identity to millions who previously lacked official documentation, giving them access to a wide range of services — from buying a SIM card to opening a bank account to receiving social benefits like pensions.

I've heard concerns about DPI — here's how I think about them. Many people worry digital systems are a tool for government surveillance. But properly designed DPI includes safeguards against misuse and even enhances privacy... These systems also reduce the need for physical document copies that can be lost or stolen, and even create audit trails that make it easier to detect and prevent unauthorized access. The goal is to empower people, not restrict them. Then there's the fear that DPI will disenfranchise vulnerable populations like rural communities, the elderly, or those with limited digital literacy. But when it's properly designed and thoughtfully implemented, DPI actually increases inclusion — like in India, where millions of previously unbanked people now have access to financial services, and where biometric exceptions or assisted enrollment exist for people with physical disabilities or no fixed address.

Meanwhile, countries can use open-source tools — like MOSIP for digital identity and Mojaloop for payments — to build DPI that fosters competition and promotes innovation locally. By providing a common digital framework, they allow smaller companies and start-ups to build services without requiring them to create the underlying systems from scratch. Even more important, they empower countries to seek out services that address their own unique needs and challenges without forcing them to rely on proprietary systems.

"Digital public infrastructure is key to making progress on many of the issues we work on at the Gates Foundation," Bill writes, "including protecting children from preventable diseases, strengthening healthcare systems, improving the lives and livelihoods of farmers, and empowering women to control their financial futures.

"That's why we're so committed to DPI — and why we've committed $200 million over five years to supporting DPI initiatives around the world... The future is digital. Let's make sure it's a future that benefits everyone."
AI

California Newspaper Creates AI-Powered 'News Assistant' for Kamala Harris Info (sfchronicle.com) 154

After nearly 30 years of covering Kamala Harris, the San Francisco Chronicle is now letting ChatGPT do it. Sort of...

"We're introducing a new way to engage with our decades of coverage: an AI-powered tool designed to answer your questions about Harris' life, her journey through public service and her presidential campaign," they announced this week: Drawing from thousands of articles written, edited and published by Chronicle journalists since 1995, this tool aims to give readers informed answers about a politician who rose from the East Bay and is now campaigning to become one of the world's most powerful people.

Why don't we have a similar tool for Donald Trump, the Republican nominee for president? The answer isn't political. It's because we've been covering Harris since her career began in the Bay Area and have an archive of vetted articles to draw from. Our newsroom can't offer the same level of expertise when it comes to the former president.

The tool's answers are "drawn directly from decades of extensive reporting," according to a notice toward the bottom of the page. "The tool searches through thousands of Chronicle articles, with new stories added every hour as they are published, ensuring readers have access to the most up-to-date information." Our news assistant is powered by OpenAI's GPT-4o mini model, combined with OpenAI's text-embedding-3-large model, to deliver precise answers based on user queries. The Chronicle articles in this tool's corpus span from April 24, 1995, to the present, covering the length of Harris' career.

This corpus wouldn't be possible without the hard work of the Chronicle's journalists.

Questions go through OpenAI's moderation filter and a "relevance check." If a reader asks how to vote, "we redirect readers to appropriate resources including canivote.org..."
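The setup the Chronicle describes — embed the archive with text-embedding-3-large, embed the reader's question, hand the nearest articles to GPT-4o mini as context — is standard retrieval-augmented generation. A toy sketch of the retrieval step, using made-up three-dimensional vectors and hypothetical article IDs in place of real embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, corpus, k=2):
    """Return the k article IDs whose embeddings are closest to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [article_id for article_id, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" standing in for text-embedding-3-large output.
corpus = [
    ("harris-da-2004", [0.9, 0.1, 0.0]),
    ("harris-senate-2016", [0.7, 0.6, 0.1]),
    ("unrelated-sports", [0.0, 0.1, 0.9]),
]
query = [0.8, 0.3, 0.0]  # embedding of a reader question about Harris' career
print(top_k(query, corpus))  # the two Harris articles rank above the sports story
```

The retrieved article texts, not the vectors, are what get inserted into the model's prompt; "new stories added every hour" means the corpus of embeddings is simply re-extended as articles publish.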
Censorship

Russia Blocks OONI Explorer, a Large Open Dataset On Internet Censorship (ooni.org) 13

As of September 11th, Russia has blocked access to OONI Explorer, citing concerns over circumvention tools. This block affects Russian users' ability to access not only circumvention data but also the extensive dataset on global internet censorship that OONI provides. From a blog post: OONI Explorer is one of the largest open datasets on internet censorship around the world. We first launched this web platform back in 2016 with the goal of enabling researchers, journalists, and human rights defenders to investigate internet censorship based on empirical network measurement data that is contributed by OONI Probe users worldwide. Every day, we publish new measurements from around the world in real-time.

Today, OONI Explorer hosts more than 2 billion network measurements collected from 27,000 distinct networks in 242 countries and territories since 2012. Out of all countries, OONI Probe users in Russia contribute the second-largest volume of measurements (following the U.S., where OONI Probe users contribute the most measurements of any country). This has enabled us to study various cases of internet censorship in Russia, such as the blocking of Tor, the blocking of independent news media websites, and how internet censorship in Russia changed amid the war in Ukraine.

In this report, we share OONI data on the blocking of OONI Explorer in Russia.

Security

1.3 Million Android-Based TV Boxes Backdoored; Researchers Still Don't Know How (arstechnica.com) 28

An anonymous reader quotes a report from Ars Technica: Researchers still don't know the cause of a recently discovered malware infection affecting almost 1.3 million streaming devices running an open source version of Android in almost 200 countries. Security firm Doctor Web reported Thursday that malware named Android.Vo1d has backdoored the Android-based boxes by putting malicious components in their system storage area, where they can be updated with additional malware at any time by command-and-control servers. Google representatives said the infected devices are running operating systems based on the Android Open Source Project, a version overseen by Google but distinct from Android TV, a proprietary version restricted to licensed device makers.

Although Doctor Web has a thorough understanding of Vo1d and the exceptional reach it has achieved, company researchers say they have yet to determine the attack vector that has led to the infections. "At the moment, the source of the TV boxes' backdoor infection remains unknown," Thursday's post stated. "One possible infection vector could be an attack by an intermediate malware that exploits operating system vulnerabilities to gain root privileges. Another possible vector could be the use of unofficial firmware versions with built-in root access." The device models found infected by Vo1d are the R4, TV BOX, and KJ-SMART4KVIP.

One possible cause of the infections is that the devices are running outdated versions that are vulnerable to exploits that remotely execute malicious code on them. Versions 7.1, 10.1, and 12.1, for example, were released in 2016, 2019, and 2022, respectively. What's more, Doctor Web said it's not unusual for budget device manufacturers to install older OS versions in streaming boxes and make them appear more attractive by passing them off as more up-to-date models. Further, while only licensed device makers are permitted to modify Google's Android TV, any device maker is free to make changes to open source versions. That leaves open the possibility that the devices were infected in the supply chain and were already compromised by the time they were purchased by the end user.
"These off-brand devices discovered to be infected were not Play Protect certified Android devices," Google said in a statement. "If a device isn't Play Protect certified, Google doesn't have a record of security and compatibility test results. Play Protect certified Android devices undergo extensive testing to ensure quality and user safety."

Users can confirm whether their device runs Android TV OS via this link and by following the steps here.
