Transportation

Colorado's New Speed Camera System Makes Waze Nearly Useless (motor1.com) 200

Colorado is rolling out an average-speed camera system that tracks vehicles across multiple points instead of catching them at a single camera, making it much harder for drivers to dodge tickets with apps like Waze and Radarbot. Motor1 reports: The state's new automated vehicle identification systems (AVIS) use several cameras to calculate your average speed between them, and if it is 10 miles per hour or more over the limit, you get a ticket. No longer will you be able to slow down as you approach a camera and speed back up after passing it, not that you should be speeding on public roads in the first place.
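
The arithmetic behind such a system is simple: distance between cameras divided by elapsed time. Here is a minimal Python sketch, where the camera spacing, timestamps, and 65 mph limit are invented for illustration — only the 10-mph-over threshold comes from the article:

```python
from datetime import datetime

def average_speed_mph(distance_miles: float, t_first: datetime, t_second: datetime) -> float:
    # Average speed between two camera sightings of the same plate.
    hours = (t_second - t_first).total_seconds() / 3600
    return distance_miles / hours

def owes_ticket(avg_mph: float, limit_mph: float) -> bool:
    # Colorado's threshold: 10 mph or more over the posted limit.
    return avg_mph - limit_mph >= 10

t1 = datetime(2025, 4, 2, 8, 0, 0)
t2 = datetime(2025, 4, 2, 8, 4, 0)    # 4 minutes to cover 5 miles
avg = average_speed_mph(5.0, t1, t2)  # 75 mph
print(owes_ticket(avg, 65))           # True: 75 is exactly 10 over a 65 limit
```

Slowing down at each camera doesn't help here: only the total transit time between the two points matters.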

Colorado began deploying this new camera system after legislators changed the law in 2023, allowing AVIS for law enforcement use. The systems, installed on various roads and highways throughout the state, first began issuing warnings, but police began issuing tickets late last year.

The most recent section of road to fall under surveillance is a stretch of I-25 north of Denver, which brought the state's growing panopticon to our attention. It began issuing tickets on April 2. The Colorado Department of Transportation installed the cameras along a construction zone. The fine is $75 and zero points for exceeding the speed limit, and the police issue it to the vehicle's owner, regardless of who is driving.

Security

Infotainment, EV Charger Exploits Earn $1M at Pwn2Own Automotive 2026 (securityweek.com) 13

Trend Micro's Zero Day Initiative sponsored its third annual Pwn2Own Automotive competition in Tokyo this week, receiving 73 entries, the most ever for a Pwn2Own event.

"Under Pwn2Own rules, all disclosed vulnerabilities are reported to affected vendors through ZDI," reports Help Net Security, "with public disclosure delayed to allow time for patches." Infotainment platforms from Tesla, Sony, and Alpine were among the systems compromised during demonstrations. Researchers achieved code execution using techniques that included buffer overflows, information leaks, and logic flaws. One Tesla infotainment unit was compromised through a USB-based attack, resulting in root-level access. Electric vehicle charging infrastructure also received significant attention. Teams successfully demonstrated exploits against chargers from Autel, Phoenix Contact, ChargePoint, Grizzl-E, Alpitronic, and EMPORIA. Several attacks involved chaining multiple vulnerabilities to manipulate charging behavior or execute code on the device. These demonstrations highlighted how charging stations operate as network-connected systems with direct interaction with vehicles.
There are video recaps on the ZDI YouTube channel — apparently the Fuzzware.io researchers "were able to take over a Phoenix Contact EV charger over Bluetooth."

Three researchers also exploited Alpitronic's HYC50 fast charger with a classic TOCTOU (time-of-check-to-time-of-use) bug, according to the event's site, "and installed a playable version of Doom to boot." They earned $20,000 — part of the $1,047,000 awarded during the three-day event.
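
The event's site doesn't detail the charger exploit, but a TOCTOU bug has a well-known shape: a program validates a resource, then uses it later, leaving a window in which an attacker can swap it. A minimal Python sketch of the pattern — the file names and "signature check" here are entirely invented, not the real exploit:

```python
import tempfile, threading, time
from pathlib import Path

def vulnerable_install(update: Path) -> str:
    # CHECK: validate the update file...
    if update.read_text() == "signed-firmware":
        time.sleep(0.1)  # window between the check and the use
        # USE: ...then read it again to install. An attacker who swaps
        # the file inside this window gets unvalidated code installed.
        return update.read_text()
    return "rejected"

update = Path(tempfile.mkdtemp()) / "update.bin"
update.write_text("signed-firmware")

def attacker():
    time.sleep(0.05)  # slip in between the check and the use
    update.write_text("malicious-firmware")

t = threading.Thread(target=attacker)
t.start()
installed = vulnerable_install(update)
t.join()
print(installed)  # "malicious-firmware" — not what was validated
```

The standard fix is to eliminate the window: read the file once and validate and use that single copy.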

More coverage from SecurityWeek: The winner of the event, the Fuzzware.io team, earned a total of $215,500 for its exploits. The team received the highest individual reward: $60,000 for an Alpitronic HYC50 EV charger exploit delivered through the charging gun. ZDI described it as "the first public exploit of a supercharger".
China

China Tests a Supercritical CO2 Generator in Commercial Operation (cleantechnica.com) 44

"China recently placed a supercritical carbon dioxide power generator into commercial operation," writes CleanTechnica, "and the announcement was widely framed as a technological breakthrough." The system, referred to as Chaotan One, is installed at a steel plant in Guizhou province in mountainous southwest China and is designed to recover industrial waste heat and convert it into electricity. Each unit is reported to be rated at roughly 15 MW, with public statements describing configurations totaling around 30 MW. Claimed efficiency improvements range from 20% to more than 30% higher heat to power conversion compared with conventional steam based waste heat recovery systems. These are big numbers, typical of claims for this type of generator, and they deserve serious attention.
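
For a rough sense of what those claims imply, here is a back-of-the-envelope Python sketch, assuming a 20% baseline efficiency for conventional steam-based recovery (an assumed figure for illustration; only the 15 MW rating and the 20-30% relative gains come from the article):

```python
steam_eff = 0.20      # assumed efficiency of conventional steam-based recovery
gains = [0.20, 0.30]  # claimed relative improvements for the sCO2 system

for g in gains:
    sco2_eff = steam_eff * (1 + g)
    # Waste-heat input needed to deliver the rated 15 MW electric:
    heat_mw = 15 / sco2_eff
    print(f"+{g:.0%} gain: efficiency {sco2_eff:.0%}, needs {heat_mw:.1f} MW of waste heat")
```

In other words, a "30% higher" conversion rate would lift an assumed 20% cycle to 26%, cutting the heat input needed per megawatt of electricity — real, but incremental rather than transformative.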

China doing something first, however, has never been a reliable indicator that the thing will prove durable, economic, or widely replicable. China is large enough to try almost everything. It routinely builds first of a kind systems precisely because it can afford to learn by doing, discarding what does not work and scaling what does. This approach is often described inside China as crossing the river by feeling for stones. It produces valuable learning, but it also produces many dead ends. The question raised by the supercritical CO2 deployment is not whether China is capable of building it, but whether the technology is likely to hold up under real operating conditions for long enough to justify broad adoption.

A more skeptical reading is warranted because Western advocates of specific technologies routinely point to China's limited deployments as evidence that their preferred technologies are viable, when the scale of those deployments actually argues the opposite. China has built a single small modular reactor and a single experimental molten salt reactor, not fleets of them, despite having the capital, supply chains, and regulatory capacity to do so if they made economic sense... If small modular reactors or hydrogen transportation actually worked at scale and cost, China would already be building many more of them, and the fact that it is not should be taken seriously rather than pointing to very small numbers of trials compared to China's very large denominators...

What is notably absent from publicly available information is detailed disclosure of materials, operating margins, impurity controls, and maintenance assumptions. This is not unusual for early commercial deployments in China. It does mean that external observers cannot independently assess long term durability claims.

The article notes America's Energy Department funded a carbon dioxide turbine in Texas rated at roughly 10 MW electric that "reached initial power generation in 2024 after several years of construction and commissioning." But for both these efforts, the article warns that "early efficiency claims should be treated as provisional. A system that starts at 15 MW and delivers 13 MW after several years with rising maintenance costs is not a breakthrough. It is an expensive way to recover waste heat compared with mature steam based alternatives that already operate for decades with predictable degradation..."

"If both the Chinese and U.S. installations run for five years without significant reductions in performance and without high maintenance costs, I will be surprised. In that case, it would be worth revisiting this assessment and potentially changing my mind."

Thanks to long-time Slashdot reader cusco for sharing the article.
News

Denmark Says Russia Was Behind Two 'Destructive and Disruptive' Cyberattacks (theguardian.com) 56

The Danish government has accused Russia of being behind two "destructive and disruptive" cyberattacks in what it describes as "very clear evidence" of a hybrid war. From a report: The Danish Defence Intelligence Service (DDIS) announced on Thursday that Moscow was behind a cyberattack on a Danish water utility in 2024 and a series of distributed denial-of-service (DDoS) attacks on Danish websites in the lead-up to the municipal and regional council elections in November.

The first, it said, was carried out by the pro-Russian group known as Z-Pentest and the second by NoName057(16), which has links to the Russian state. "The Russian state uses both groups as instruments of its hybrid war against the west," DDIS said in a statement. "The aim is to create insecurity in the targeted countries and to punish those that support Ukraine. Russia's cyber operations form part of a broader influence campaign intended to undermine western support for Ukraine." It added: "The DDIS assesses that the Danish elections were used as a platform to attract public attention -- a pattern that has been observed in several other European elections."

Education

MIT Grieves Shooting Death of Renowned Director of Plasma Science Center (theguardian.com) 64

An anonymous reader quotes a report from the Guardian: The Massachusetts Institute of Technology (MIT) community is grieving after the "shocking" shooting death of the director of its plasma science and fusion center, according to officials. Nuno FG Loureiro, 47, had been shot multiple times at his home in the affluent Boston suburb of Brookline on Monday night when police said they received a call to investigate. Emergency responders brought Loureiro to a hospital, and the award-winning scientist was pronounced dead there Tuesday morning, the Norfolk county district attorney's office said in a statement.

The Boston Globe reported speaking with a neighbor of Loureiro who heard gunshots, found the academic lying on his back in the foyer of their building and then called for help alongside the victim's wife. The statement from the Norfolk district attorney's office said an investigation into Loureiro's slaying remained ongoing later Tuesday. But the agency did not immediately release any details about a possible suspect or motive in the killing, which gained widespread attention across academic circles, the US and in Loureiro's native Portugal.

Portugal's minister of foreign affairs announced Loureiro's death in a public hearing Tuesday, as CNN reported. Separately, MIT president Sally Kornbluth issued a university-wide letter expressing "great sadness" over the death of Loureiro, whose survivors include his wife. "This shocking loss for our community comes in a period of disturbing violence in many other places," said Kornbluth's letter, released after a weekend marred by deadly mass shootings at Brown University in Rhode Island -- about 50 miles away from MIT -- as well as on Australia's Bondi Beach. The letter concluded by providing a list of mental health resources, saying: "It's entirely natural to feel the need for comfort and support."

AI

Are Warnings of Superintelligence 'Inevitability' Masking a Grab for Power? (noemamag.com) 183

Superintelligence has become "a quasi-political forecast" with "very little to do with any scientific consensus, emerging instead from particular corridors of power." That's the warning from James O'Sullivan, a lecturer in digital humanities from University College Cork. In a refreshing 5,600-word essay in Noema magazine, he notes the suspicious coincidence that "The loudest prophets of superintelligence are those building the very systems they warn against..."

"When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." (For example, OpenAI CEO Sam Altman "seems determined to position OpenAI as humanity's champion, bearing the terrible burden of creating God-like intelligence so that it might be restrained.") "The superintelligence discourse functions as a sophisticated apparatus of power, transforming immediate questions about corporate accountability, worker displacement, algorithmic bias and democratic governance into abstract philosophical puzzles about consciousness and control... Media amplification plays a crucial role in this process, as every incremental improvement in large language models gets framed as a step towards AGI. ChatGPT writes poetry; surely consciousness is imminent..." Such accounts, often sourced from the very companies building these systems, create a sense of momentum that becomes self-fulfilling. Investors invest because AGI seems near, researchers join companies because that's where the future is being built and governments defer regulation because they don't want to handicap their domestic champions...

We must recognize this process as political, not technical. The inevitability of superintelligence is manufactured through specific choices about funding, attention and legitimacy, and different choices would produce different futures. The fundamental question isn't whether AGI is coming, but who benefits from making us believe it is... We do not yet understand what kind of systems we are building, or what mix of breakthroughs and failures they will produce, and that uncertainty makes it reckless to funnel public money and attention into a single speculative trajectory.

Some key points:
  • "The machines are coming for us, or so we're told. Not today, but soon enough that we must seemingly reorganize civilization around their arrival..."
  • "When we debate whether a future artificial general intelligence might eliminate humanity, we're not discussing the Amazon warehouse worker whose movements are dictated by algorithmic surveillance or the Palestinian whose neighborhood is targeted by automated weapons systems. These present realities dissolve into background noise against the rhetoric of existential risk..."
  • "Seen clearly, the prophecy of superintelligence is less a warning about machines than a strategy for power, and that strategy needs to be recognized for what it is... "
  • "Superintelligence discourse isn't spreading because experts broadly agree it is our most urgent problem; it spreads because a well-resourced movement has given it money and access to power..."
  • "Academic institutions, which are meant to resist such logics, have been conscripted into this manufacture of inevitability... reinforcing industry narratives, producing papers on AGI timelines and alignment strategies, lending scholarly authority to speculative fiction..."
  • "The prophecy becomes self-fulfilling through material concentration — as resources flow towards AGI development, alternative approaches to AI starve..."
  • "The dominance of superintelligence narratives obscures the fact that many other ways of doing AI exist, grounded in present social needs rather than hypothetical machine gods..." [He lists data sovereignty movements "that treat data as a collective resource subject to collective consent," as well as organizations like Canada's First Nations Information Governance Centre and New Zealand's Te Mana Raraunga, plus "Global South initiatives that use modest, locally governed AI systems to support healthcare, agriculture or education under tight resource constraints."] "Such examples... demonstrate how AI can be organized without defaulting to the superintelligence paradigm that demands everyone else be sacrificed because a few tech bros can see the greater good that everyone else has missed..."
  • "These alternatives also illuminate the democratic deficit at the heart of the superintelligence narrative. Treating AI at once as an arcane technical problem that ordinary people cannot understand and as an unquestionable engine of social progress allows authority to consolidate in the hands of those who own and build the systems..."

He's ultimately warning us about "politics masked as predictions..."

"The real political question is not whether some artificial superintelligence will emerge, but who gets to decide what kinds of intelligence we build and sustain. And the answer cannot be left to the corporate prophets of artificial transcendence because the future of AI is a political field — it should be open to contestation.

"It belongs not to those who warn most loudly of gods or monsters, but to publics that should have the moral right to democratically govern the technologies that shape their lives."


The Courts

Proctorio Settles Curious Lawsuit With Librarian Who Shared Public YouTube Videos (arstechnica.com) 20

Canadian librarian Ian Linkletter has ended a five-year legal battle with ed-tech firm Proctorio after being sued for sharing public YouTube help videos that exposed how the company's remote-proctoring AI works. Ars Technica reports: ... Together, the videos, the help center screenshot, and another screenshot showing course material describing how Proctorio works were enough for Proctorio to take Linkletter to court. The ed tech company promptly filed a lawsuit and obtained a temporary injunction by spuriously claiming that Linkletter shared private YouTube videos containing confidential information. Because the YouTube videos -- which were public but "unlisted" when Linkletter shared them -- had been removed, Linkletter did not have to delete the seven tweets that initially caught Proctorio's attention, but the injunction required that he remove two tweets, including the screenshots.

In the five years since, the legal fight dragged on, with no end in sight until last week, as Canadian courts tangled with copyright allegations that tested a recently passed law intended to shield Canadian rights to free expression, the Protection of Public Participation Act. To fund his defense, Linkletter said in a blog announcing the settlement that he invested his life savings "ten times over." Additionally, about 900 GoFundMe supporters and thousands of members of the Association of Administrative and Professional Staff at UBC contributed tens of thousands more. For the last year of the battle, a law firm, Norton Rose Fulbright, agreed to represent him on a pro bono basis, which Linkletter said "was a huge relief to me, as it meant I could defend myself all the way if Proctorio chose to proceed with the litigation."

The terms of the settlement remain confidential, but both Linkletter and Proctorio confirmed that no money was exchanged. For Proctorio, the settlement made permanent the injunction that restricted Linkletter from posting the company's help center or instructional materials. But it doesn't stop Linkletter from remaining the company's biggest critic, as "there are no other restrictions on my freedom of expression," Linkletter's blog noted. "I've won my life back!" Linkletter wrote, while reassuring his supporters that he's "fine" with how things ended. "It doesn't take much imagination to understand why Proctorio is a nightmare for students," Linkletter wrote. "I can say everything that matters about Proctorio using public information."

AI

What Happens When Humans Start Writing for AI? (theamericanscholar.org) 69

The literary magazine of the Phi Beta Kappa society argues "the replacement of human readers by AI has lately become a real possibility.

"In fact, there are good reasons to think that we will soon inhabit a world in which humans still write, but do so mostly for AI." "I write about artificial intelligence a lot, and lately I have begun to think of myself as writing for AI as well," the influential economist Tyler Cowen announced in a column for Bloomberg at the beginning of the year. He does this, he says, because he wants to boost his influence over the world, because he wants to help teach the AIs about things he cares about, and because, whether he wants to or not, he's already writing for AI, and so is everybody else. Large-language-model (LLM) chatbots such as ChatGPT and Claude are trained, in part, by reading the entire internet, so if you put anything of yourself online, even basic social-media posts that are public, you're writing for them.

If you don't recognize this fact and embrace it, your work might get left behind or lost. For 25 years, search engines knit the web together. Anyone who wanted to know something went to Google, asked a question, clicked through some of the pages, weighed the information, and came to an answer. Now, the chatbot genie does that for you, spitting the answer out in a few neat paragraphs, which means that those who want to affect the world needn't care much about high Google results anymore. What they really want is for the AI to read their work, process it, and weigh it highly in what it says to the millions of humans who ask it questions every minute.

How do you get it to do this? For that, we turn to PR people, always in search of influence, who are developing a form of writing (press releases and influence campaigns are writing) that's not so much search-engine-optimized as chatbot-optimized. It's important, they say, to write with clear structure, to announce your intentions, and especially to include as many formatted sections and headings as you can. In other words, to get ChatGPT to pay attention, you must write more like ChatGPT. It's also possible that, since LLMs understand natural language in a way traditional computer programs don't, good writing will be more privileged than the clickbait Google has succumbed to: One refreshing discovery PR experts have made is that the bots tend to prioritize information from high-quality outlets.

Tyler Cowen also wrote in his Bloomberg column that "If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance.... Give the AIs a sense not just of how you think, but how you feel — what upsets you, what you really treasure. Then future AI versions of you will come to life that much more, attracting more interest." Has AI changed the reasons we write? The Phi Beta Kappa magazine is left to consider the possibility that "power over a superintelligent beast and resurrection are nothing to sneeze at" — before offering another thought.

"The most depressing reason to write for AI is that unlike most humans, AIs still read. They read a lot. They read everything. Whereas, aided by an AI no more advanced than the TikTok algorithm, humans now hardly read anything at all..."
Programming

Will AI Bring an End to Top Programming Language Rankings? (ieee.org) 51

IEEE Spectrum ranks the popularity of programming languages — but is there a problem? Programmers "are turning away from many of these public expressions of interest. Rather than page through a book or search a website like Stack Exchange for answers to their questions, they'll chat with an LLM like Claude or ChatGPT in a private conversation." And with an AI assistant like Cursor helping to write code, the need to pose questions in the first place is significantly decreased. For example, across the total set of languages evaluated in the Top Programming Languages, the number of questions we saw posted per week on Stack Exchange in 2025 was just 22% of what it was in 2024...

However, an even more fundamental problem is looming in the wings... In the same way most developers today don't pay much attention to the instruction sets and other hardware idiosyncrasies of the CPUs that their code runs on, which language a program is vibe coded in ultimately becomes a minor detail... [T]he popularity of different computer languages could become as obscure a topic as the relative popularity of railway track gauges... But if an AI is soothing our irritations with today's languages, will any new ones ever reach the kind of critical mass needed to make an impact? Will the popularity of today's languages remain frozen in time?

That's ultimately the larger question: "How much abstraction and anti-foot-shooting structure will a sufficiently-advanced coding AI really need...?" [C]ould we get our AIs to go straight from prompt to an intermediate language that could be fed into the interpreter or compiler of our choice? Do we need high-level languages at all in that future? True, this would turn programs into inscrutable black boxes, but they could still be divided into modular testable units for sanity and quality checks. And instead of trying to read or maintain source code, programmers would just tweak their prompts and generate software afresh.

What's the role of the programmer in a future without source code? Architecture design and algorithm selection would remain vital skills... How should a piece of software be interfaced with a larger system? How should new hardware be exploited? In this scenario, computer science degrees, with their emphasis on fundamentals over the details of programming languages, rise in value over coding boot camps.

Will there be a Top Programming Language in 2026? Right now, programming is going through the biggest transformation since compilers broke onto the scene in the early 1950s. Even if the predictions that much of AI is a bubble about to burst come true, the thing about tech bubbles is that there's always some residual technology that survives. It's likely that using LLMs to write and assist with code is something that's going to stick. So we're going to be spending the next 12 months figuring out what popularity means in this new age, and what metrics might be useful to measure.

Having said that, IEEE Spectrum still ranks programming language popularity three ways — based on use among working programmers, demand from employers, and "trending" in the zeitgeist — using seven different metrics.

Their results? Among programmers, "we see that once again Python has the top spot, with the biggest change in the top five being JavaScript's drop from third place last year to sixth place this year. As JavaScript is often used to create web pages, and vibe coding is often used to create websites, this drop in the apparent popularity may be due to the effects of AI... In the 'Jobs' ranking, which looks exclusively at what skills employers are looking for, we see that Python has also taken 1st place, up from second place last year, though SQL expertise remains an incredibly valuable skill to have on your resume."
Earth

Could Wildfire Smoke Become America's Leading Climate Health Threat By 2050? (yahoo.com) 81

"New research suggests ash and soot from burning wildlands have caused more than 41,000 excess deaths annually from 2011 to 2020," reports the Los Angeles Times: By 2050, as global warming makes large swaths of North America hotter and drier, the annual death toll from smoke could reach between 68,000 and 71,000, without stronger preventive and public health measures...

In the span studied, millions of people were exposed to unhealthful levels of air pollution. When inhaled, this microscopic pollution not only aggravates people's lungs, it also enters the bloodstream, provoking inflammation that can induce heart attacks and stroke. For years, researchers have struggled to quantify the danger the smoke poses. In the paper published in Nature, they report it's far greater than public health officials may have recognized. Yet most climate assessments "don't often include wildfire smoke as a part of the climate-related damages. And it turns out, by our calculation, this is one of the most important climate impacts in the U.S."

The study also estimates a higher number of deaths than previous work in part because it projected mortality up to three years after a person has been exposed to wildfire smoke. It also illustrates the dangers of smoke drifting from fire-prone regions into wetter parts of the country, a recent phenomenon that has garnered more attention with large Canadian wildfires contributing to hazy skies in the Midwest and East Coast in the last several years. "Everybody is impacted across the U.S.," said Minghao Qiu [lead author and assistant professor at Stony Brook University]. "Certainly the Western U.S. is more impacted. But the Eastern U.S. is by no means isolated from this problem."

Social Networks

What Happens After the Death of Social Media? (noemamag.com) 112

"These are the last days of social media as we know it," argues a humanities lecturer from University College Cork exploring where technology and culture intersect, warning they could become lingering derelicts "haunted by bots and the echo of once-human chatter..."

"Whatever remains of genuine, human content is increasingly sidelined by algorithmic prioritization, receiving fewer interactions than the engineered content and AI slop optimized solely for clicks... " In recent years, Facebook and other platforms that facilitate billions of daily interactions have slowly morphed into the internet's largest repositories of AI-generated spam. Research has found what users plainly see: tens of thousands of machine-written posts now flood public groups — pushing scams, chasing clicks — with clickbait headlines, half-coherent listicles and hazy lifestyle images stitched together in AI tools like Midjourney... While content proliferates, engagement is evaporating. Average interaction rates across major platforms are declining fast: Facebook and X posts now scrape an average 0.15% engagement, while Instagram has dropped 24% year-on-year. Even TikTok has begun to plateau. People aren't connecting or conversing on social media like they used to; they're just wading through slop, that is, low-effort, low-quality content produced at scale, often with AI, for engagement.

And much of it is slop: Less than half of American adults now rate the information they see on social media as "mostly reliable" — down from roughly two-thirds in the mid-2010s... Platforms have little incentive to stem the tide. Synthetic accounts are cheap, tireless and lucrative because they never demand wages or unionize. Systems designed to surface peer-to-peer engagement are now systematically filtering out such activity, because what counts as engagement has changed. Engagement is now about raw user attention — time spent, impressions, scroll velocity — and the net effect is an online world in which you are constantly being addressed but never truly spoken to.

"These are the last days of social media, not because we lack content," the article suggests, "but because the attention economy has neared its outer limit — we have exhausted the capacity to care..." Social media giants have stopped growing exponentially, while a significant proportion of 18- to 34-year-olds even took deliberate mental health breaks from social media in 2024, according to an American Psychiatric Association poll. And "Some creators are quitting, too. Competing with synthetic performers who never sleep, they find the visibility race not merely tiring but absurd."

Yet his 5,000-word essay predicts social media's death rattle "will not be a bang but a shrug," since "the model is splintering, and users are drifting toward smaller, slower, more private spaces, like group chats, Discord servers and federated microblogs — a billion little gardens." Intentional, opt-in micro-communities are rising in their place — like Patreon collectives and Substack newsletters — where creators chase depth over scale, retention over virality. A writer with 10,000 devoted subscribers can potentially earn more and burn out less than one with a million passive followers on Instagram... Even the big platforms sense the turning tide. Instagram has begun emphasizing DMs, X is pushing subscriber-only circles and TikTok is experimenting with private communities. Behind these developments is an implicit acknowledgement that the infinite scroll, stuffed with bots and synthetic sludge, is approaching the limit of what humans will tolerate....

The most radical redesign of social media might be the most familiar: What if we treated these platforms as public utilities rather than private casinos...? Imagine social media platforms with transparent algorithms subject to public audit, user representation on governance boards, revenue models based on public funding or member dues rather than surveillance advertising, mandates to serve democratic discourse rather than maximize engagement, and regular impact assessments that measure not just usage but societal effects... This could take multiple forms, like municipal platforms for local civic engagement, professionally focused networks run by trade associations, and educational spaces managed by public library systems... We need to "rewild the internet," as Maria Farrell and Robin Berjon mentioned in a Noema essay.

We need governance scaffolding, shared institutions that make decentralization viable at scale... [R]eal change will come when platforms are rewarded for serving the public interest. This could mean tying tax breaks or public procurement eligibility to the implementation of transparent, user-controllable algorithms. It could mean funding research into alternative recommender systems and making those tools open-source and interoperable. Most radically, it could involve certifying platforms based on civic impact, rewarding those that prioritize user autonomy and trust over sheer engagement.

"Social media as we know it is dying, but we're not condemned to its ruins. We are capable of building better — smaller, slower, more intentional, more accountable — spaces for digital interaction, spaces..."

"The last days of social media might be the first days of something more human: a web that remembers why we came online in the first place — not to be harvested but to be heard, not to go viral but to find our people, not to scroll but to connect. We built these systems, and we can certainly build better ones."
The Military

Defense Department Reportedly Relies On Utility Written by Russian Dev (theregister.com) 58

A widely used Node.js utility called fast-glob, relied on by thousands of projects -- including over 30 U.S. Department of Defense systems -- is maintained solely by a Russian developer linked to Yandex. While there's no evidence of malicious activity, cybersecurity experts warn that the lack of oversight in such critical open-source projects leaves them vulnerable to potential exploitation by state-backed actors. The Register reports: US cybersecurity firm Hunted Labs reported the revelations on Wednesday. The utility in question is fast-glob, which is used to find files and folders that match specific patterns. Its maintainer goes by the handle "mrmlnc", and the GitHub profile associated with that handle identifies its owner as a Yandex developer named Denis Malinochkin living in a suburb of Moscow. A website associated with that handle also identifies its owner as the same person, as Hunted Labs pointed out.

Hunted Labs told us that it didn't speak to Malinochkin prior to publication of its report today, and that it found no ties between him and any threat actor. According to Hunted Labs, fast-glob is downloaded more than 79 million times a week and is currently used by more than 5,000 public projects in addition to the DoD systems and Node.js container images that include it. That's not to mention private projects that might use it, meaning that the actual number of at-risk projects could be far greater.

While fast-glob has no known CVEs, the utility has deep access to systems that use it, potentially giving Russia a number of attack vectors to exploit. Fast-glob could attack filesystems directly to expose and steal info, launch a DoS or glob-injection attack, include a kill switch to stop downstream software from functioning properly, or inject additional malware, a list Hunted Labs said is hardly exhaustive. [...] Hunted Labs cofounder Haden Smith told The Register that the ties are cause for concern. "Every piece of code written by Russians isn't automatically suspect, but popular packages with no external oversight are ripe for the taking by state or state-backed actors looking to further their aims," Smith told us in an email. "As a whole, the open source community should be paying more attention to this risk and mitigating it." [...]

Hunted Labs said that the simplest solution for the thousands of projects using fast-glob would be for Malinochkin to add additional maintainers and enhance project oversight, as the only other alternative would be for anyone using it to find a suitable replacement. "Open source software doesn't need a CVE to be dangerous," Hunted Labs said of the matter. "It only needs access, obscurity, and complacency," something we've noted before is an ongoing problem for open source projects. "This serves as another powerful reminder that knowing who writes your code is just as critical as understanding what the code does," Hunted Labs concluded.
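fast-glob itself is a Node.js package, but the glob-style pattern matching it provides can be illustrated with Python's standard-library glob module (used here purely as a stand-in sketch; fast-glob's own API differs, and the file names below are invented for the example):

```python
import glob
import os
import tempfile

# Build a small throwaway project tree to match against.
root = tempfile.mkdtemp()
for path in ("src/app.js", "src/lib/util.js", "README.md"):
    full = os.path.join(root, path)
    os.makedirs(os.path.dirname(full), exist_ok=True)
    open(full, "w").close()

# "src/**/*.js" recursively matches every .js file under src/ --
# the same kind of pattern fast-glob resolves for its callers.
matches = sorted(glob.glob(os.path.join(root, "src/**/*.js"), recursive=True))
print([os.path.relpath(m, root) for m in matches])
```

A library sitting at this layer sees every path an application asks it to expand, which is why Hunted Labs flags filesystem access as the core of the risk.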

Republicans

Republicans Investigate Wikipedia Over Allegations of Organized Bias (thehill.com) 173

An anonymous reader quotes a report from The Hill: Republicans on the House Oversight and Government Reform Committee opened a probe into alleged organized efforts to inject bias into Wikipedia entries and the organization's responses. Chair James Comer (R-Ky.) and Rep. Nancy Mace (R-S.C.), chair of the panel's subcommittee on cybersecurity, information technology, and government innovation, on Wednesday sent an information request on the matter to Maryana Iskander, chief executive officer of the Wikimedia Foundation, the nonprofit that hosts Wikipedia. The request, the lawmakers said in the letter (PDF), is part of an investigation into "foreign operations and individuals at academic institutions subsidized by U.S. taxpayer dollars to influence U.S. public opinion."

The panel is seeking documents and communications about Wikipedia volunteer editors who violated the platform's policies, as well as the Wikimedia Foundation's efforts to "thwart intentional, organized efforts to inject bias into important and sensitive topics." "Multiple studies and reports have highlighted efforts to manipulate information on the Wikipedia platform for propaganda aimed at Western audiences," Comer and Mace wrote in the letter. They referenced a report from the Anti-Defamation League about anti-Israel bias on Wikipedia that detailed a coordinated campaign to manipulate content related to the Israel-Palestine conflict and similar issues, as well as an Atlantic Council report on pro-Russia actors using Wikipedia to push pro-Kremlin and anti-Ukrainian messaging, which can influence how artificial intelligence chatbots are trained.

"[The Wikimedia] foundation, which hosts the Wikipedia platform, has acknowledged taking actions responding to misconduct by volunteer editors who effectively create Wikipedia's encyclopedic articles. The Committee recognizes that virtually all web-based information platforms must contend with bad actors and their efforts to manipulate. Our inquiry seeks information to help our examination of how Wikipedia responds to such threats and how frequently it creates accountability when intentional, egregious, or highly suspicious patterns of conduct on topics of sensitive public interest are brought to attention," Comer and Mace wrote. The lawmakers requested information about "the tools and methods Wikipedia utilizes to identify and stop malicious conduct online that injects bias and undermines neutral points of view on its platform," including documents and records about possible coordination of state actors in editing, the kind of accounts that have been subject to review, and the panel's analysis of data manipulation or bias.
"We welcome the opportunity to respond to the Committee's questions and to discuss the importance of safeguarding the integrity of information on our platform," a Wikimedia Foundation spokesperson said.
Moon

Asteroid 2024 YR4 Spared The Earth. What Happens if It Hits the Moon Instead in 2032? (cnn.com) 22

Remember asteroid 2024 YR4 (which at one point had a 1 in 32 chance of hitting Earth, before ending up at "impact probability zero")? CNN reports that asteroid is now "zooming beyond the reach of telescopes on its orbit around the sun."

"But as scientists wait for it to reappear, its revised trajectory is now drawing attention to another possible target: the moon." The latest observations of the asteroid in early June, before YR4 disappeared from view, have improved astronomers' knowledge of where it will be in seven years by almost 20%, according to NASA. That data shows that even with Earth avoiding direct impact, YR4 could still pose a threat in late 2032 by slamming into the moon. ["The asteroid's probability of impacting the Moon has slightly increased from 3.8% to 4.3%," writes NASA, and "it would not alter the Moon's orbit."]
CNN calls the probability "small but decent enough odds for scientists to consider how such a scenario might play out." The collision could create a bright flash that would be visible with the naked eye for several seconds, according to Wiegert, lead author of a recent paper submitted to the American Astronomical Society journals analyzing the potential lunar impact. The collision could create an impact crater on the moon estimated at 1 kilometer wide (0.6 miles wide), Wiegert said... It would be the largest impact on the moon in 5,000 years and could release up to 100 million kilograms (220 million pounds) of lunar rocks and dust, according to the modeling in Wiegert's study... Particles the size of large sand grains, ranging from 0.1 to 10 millimeters in size, of lunar material could reach Earth between a few days and a few months after the asteroid strike because they'll be traveling incredibly fast, creating an intense, eye-catching meteor shower, Wiegert said.

"There's absolutely no danger to anyone on the surface," Wiegert said. "We're not expecting large boulders or anything larger than maybe a sugar cube, and our atmosphere will protect us very nicely from that. But they're traveling faster than a speeding bullet, so if they were to hit a satellite, that could cause some damage...." Hundreds to thousands of impacts from millimeter-size debris could affect Earth's satellite fleet, meaning satellites could experience up to 10 years' equivalent of meteor debris exposure in a few days, Wiegert said... While a temporary loss of communication and navigation from satellites would create widespread difficulties on Earth, Wiegert said he believes the potential impact is something for satellite operators, rather than the public, to worry about.

"Any missions in low-Earth orbit could also be in the pathway of the debris, though the International Space Station is scheduled to be deorbited before any potential impact," reports CNN.

And they add that Wiegert also believes even small pieces of debris (tens of centimeters in size) "could present a hazard for any astronauts who may be present on the moon, or any structures they have built for research and habitation... The moon has no atmosphere, so the debris from the event could be widespread on the lunar surface, he added."
Data Storage

Internet Archive Now Livestreams History As It's Being Preserved (9to5mac.com) 2

The Internet Archive has begun livestreaming its microfiche digitization center on YouTube, showcasing the real-time preservation of fragile film cards into searchable public documents. The work is part of Democracy's Library, a global initiative to digitize and share millions of government records. 9to5Mac reports: The livestream was brought to life by Sophia Tung, who previously gained attention for her viral robotaxi depot stream. Her new video explains how and why this new livestream project came together [...].

The livestream features five scanning stations at work, with one shown in close-up as operators digitize microfiche cards in real time. Each card holds up to 100 pages of public records. High-resolution cameras capture the images, software stitches and crops the pages, and the results are made text-searchable and freely accessible through Democracy's Library. Live scanning takes place Monday through Friday, 7:30 a.m. to 3:30 p.m. PT, excluding U.S. holidays, with a second shift expected to begin soon.

Television

Netflix Says Its Ad Tier Now Has 94 Million Monthly Active Users 37

Netflix said its cheaper, ad-supported tier now has 94 million monthly active users -- an increase of more than 20 million since its last public tally in November. CNBC reports: The company and its peers have been increasingly leaning on advertising to boost the profitability of their streaming products. Netflix first introduced the ad-supported plan in November 2022. Netflix's ad-supported plan costs $7.99 per month, a steep discount from its least-expensive ad-free plan, at $17.99 per month. Netflix also said its cheapest tier reaches more 18- to 34-year-olds than any U.S. broadcast or cable network. "When you compare us to our competitors, attention starts higher and ends much higher," Netflix president of advertising Amy Reinhard said in a statement. "Even more impressive, members pay as much attention to mid-roll ads as they do to the shows and movies themselves."
Android

Maintainer of Linux Distro AnduinOS Revealed to Be Microsoft Employee (neowin.net) 37

After gaining attention from Neowin and DistroWatch last week, the sole maintainer behind AnduinOS 1.3 -- a Linux distribution styled to resemble Windows 11 -- decided to reveal himself. He turns out to be Anduin Xue, a Microsoft software engineer, who has been working on the project as a personal, non-commercial endeavor built on Ubuntu. Neowin reports: As a Software Engineer 2 at Microsoft (he doesn't work on Windows), Anduin Xue says he's financially stable and sees no need to commercialize AnduinOS. Explaining the financial aspects of the project, he said: "Many have asked why I don't accept donations, how I profit, and if I plan to commercialize AnduinOS. Truthfully, I haven't thoroughly considered these issues. It's not my main job, and I don't plan to rely on it for a living. Each month, I dedicate only a few hours to maintaining it. Perhaps in the future, I might consider providing enterprise solutions based on AnduinOS, but I won't compromise its original simplicity. It has always been about providing myself with a comfortably themed Ubuntu."

In our coverage of the AnduinOS 1.3 release last week, one commenter pointed out that the distro is from China. For some, this will raise issues, but Anduin Xue addressed this in his blog post, too, saying that the source code is available to the public. For this reason, he said, lacing the operating system with backdoors for the Chinese government would be "irrational and easily exposed." For those worried that the distribution may be abandoned, Anduin Xue said that he intends to continue supporting it and may even maintain it full-time if sponsorship or corporate cooperation emerges.

Google

What Happens When You Pay People Not to Use Google Search? (yahoo.com) 51

"A group of researchers says it has identified a hidden reason we use Google for nearly all web searches," reports the Washington Post. "We've never given other options a real shot." Their research experiment suggests that Google is overwhelmingly popular partly because we believe it's the best, whether that's true or not. It's like a preference for your favorite soda. And their research suggested that our mass devotion to googling can be altered with habit-changing techniques, including by bribing people to try search alternatives to see what they are like...

[A] group of academics — from Stanford University, the University of Pennsylvania and MIT — designed a novel experiment to try to figure out what might shake up Google's popularity. They recruited nearly 2,500 participants and remotely monitored their web searches on computers for months. The core of the experiment was paying some participants — most received $10 — to use Bing rather than Google for two weeks. After that period, the money stopped, and the participants had to pick either Bing or Google. The vast majority in the group of people who were paid to use Bing for 14 days chose to go back to Google once the payments stopped, suggesting a strong preference for Google even after trying an alternative. But a healthy number in that group — about 22 percent — chose Bing and were still using it many weeks later.

"I realized Bing was not as bad as I thought it was...." one study participant said — which an assistant professor in business economics and public policy at the University of Pennsylvania says is a nice summation of the study's findings.

"The researchers did not test other search engines," the article notes. But more importantly, the research caught the attention of some government officials: Colorado Attorney General Phil Weiser (D), who is leading the group of states that sued Google alongside the Justice Department, said the research helped inspire a demand by the states to fix Google's search monopoly. They asked a judge to require Google to bankroll a consumer information campaign about web search alternatives, including "short-term incentive payments."
On the basis of that, the article suggests "you could soon be paid to try Microsoft Bing or another alternative."

And in the meantime, the reporter writes, "I encourage you to join me in a two-week (unpaid) experiment mirroring the research: Change your standard search engine to something other than Google and see whether you like it. (And drop me a line to let me know how it went.) I'm going with DuckDuckGo, a privacy-focused web search engine that uses Bing's technology."
Social Networks

BlueSky Proposes 'New Standard' When Scraping Data for AI Training (techcrunch.com) 52

An anonymous reader shared this article from TechCrunch: Social network Bluesky recently published a proposal on GitHub outlining new options it could give users to indicate whether they want their posts and data to be scraped for things like generative AI training and public archiving.

CEO Jay Graber discussed the proposal earlier this week, while on-stage at South by Southwest, but it attracted fresh attention on Friday night, after she posted about it on Bluesky. Some users reacted with alarm to the company's plans, which they saw as a reversal of Bluesky's previous insistence that it won't sell user data to advertisers and won't train AI on user posts.... Graber replied that generative AI companies are "already scraping public data from across the web," including from Bluesky, since "everything on Bluesky is public like a website is public." So she said Bluesky is trying to create a "new standard" to govern that scraping, similar to the robots.txt file that websites use to communicate their permissions to web crawlers...

If a user indicates that they don't want their data used to train generative AI, the proposal says, "Companies and research teams building AI training sets are expected to respect this intent when they see it, either when scraping websites, or doing bulk transfers using the protocol itself."
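Graber's robots.txt comparison points at a mechanism that already exists for conventional crawlers: a machine-readable statement of permissions that well-behaved scrapers check before fetching. Python's standard-library urllib.robotparser shows how such a check works in practice (the user-agent names and paths below are illustrative assumptions, not part of Bluesky's actual proposal):

```python
from urllib import robotparser

# A robots.txt-style policy: one hypothetical AI crawler is barred
# entirely, while all other agents remain allowed.
policy = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(policy.splitlines())

# Well-behaved crawlers consult the policy before fetching -- the same
# honor-system expectation Bluesky's proposal places on AI scrapers.
print(rp.can_fetch("ExampleAIBot", "https://example.com/profile/alice"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/profile/alice"))  # True
```

As with robots.txt, nothing in such a standard technically prevents scraping; it only records intent that compliant companies are "expected to respect."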

Over on Threads someone had a different wish for our AI-enabled future. "I want to be able to conversationally chat to my feed algorithm. To be able to explain to it the types of content I want to see, and what I don't want to see. I want this to be an ongoing conversation as it refines what it shows me, or my interests change."

"Yeah I want this too," posted top Instagram/Threads executive Adam Mosseri, who said he'd talked about the idea with VC Sam Lessin. "There's a ways to go before we can do this at scale, but I think it'll happen eventually."
Space

Earth Safe From 'City-Killer' Asteroid 2024 YR4 34

Asteroid 2024 YR4, once considered a significant impact risk, has been reassigned to Torino Scale Level Zero and therefore poses no hazard to Earth. "The NASA JPL Center for Near-Earth Object Studies (CNEOS) now lists the 2024 YR4 impact probability as 0.00005 (0.005%) or 1-in-20,000 for its passage by Earth in 2032," Richard Binzel, Professor of Planetary Science at the Massachusetts Institute of Technology (MIT) and creator of the Torino scale exclusively told Space.com. "That's impact probability zero folks!" From the report: Discovered in Dec. 2024, 2024 YR4 quickly climbed to the top of NASA's Sentry Risk table, at one point having a 1 in 32 chance of hitting Earth. This elevated it to Level 3 on the Torino scale, a system used since 1999 to categorize potential Earth impact events. Level 3, which falls within the yellow band of the Torino Scale, is described as: "A close encounter, meriting attention by astronomers. Current calculations give a 1% or greater chance of collision capable of localized destruction."

This conforms to the second part of the Torino scale level 3 description, which states: "Most likely, new telescopic observations will lead to re-assignment to Level 0. Attention by public and by public officials is merited if the encounter is less than a decade away." "Asteroid 2024 YR4 has now been reassigned to Torino Scale Level Zero, the level for 'No Hazard' as additional tracking of its orbital path has reduced its possibility of intersecting the Earth to below the 1-in-1000 threshold," Binzel continued. "1-in-1000 is the threshold established for downgrading to Level 0 for any object smaller than 100 meters; YR4 has an estimated size of 164 feet (50 meters)."
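The downgrade rule Binzel describes is simple enough to state directly. A minimal sketch (the function name is hypothetical; it captures only the sub-100-meter, 1-in-1000 threshold quoted above):

```python
def reassigned_to_level_zero(impact_probability: float, diameter_m: float) -> bool:
    """Per Binzel, as quoted: an object smaller than 100 meters drops to
    Torino Scale Level 0 ('No Hazard') once its impact probability falls
    below the 1-in-1000 threshold."""
    return diameter_m < 100 and impact_probability < 1 / 1000

# 2024 YR4: roughly 50 meters across, now at a 1-in-20,000 (0.00005)
# impact probability -- comfortably below the downgrade threshold.
print(reassigned_to_level_zero(0.00005, 50))  # True
```

At its earlier 1-in-32 odds the same object would have stayed above the threshold, which is why it briefly sat at Level 3.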

[...] While 2024 YR4 poses no threat, it will still have a major scientific impact when it passes Earth in 2028 and again in 2032. On Dec. 17, 2028, the asteroid will come within 5 million miles of Earth. Then, on Dec. 22, 2032, 2024 YR4 will pass within just 167,000 miles of our planet. For context, the moon is 238,855 miles away.

Slashdot Top Deals