Power

Idaho Lab Produces World's First Molten Salt Fuel for Nuclear Reactors (energy.gov) 43

America's Energy Department runs a research lab in Idaho — and this week announced successful results from a ground-breaking experiment. "This is the first time in history that chloride-based molten salt fuel has been produced for a fast reactor," says Bill Phillips, the lab's technical lead for salt synthesis. He calls it "a major milestone for American innovation and a clear signal of our national commitment to advanced nuclear energy." Unlike traditional reactors that use solid fuel rods and water as a coolant, most molten salt reactors rely on liquid fuel — a mixture of salts containing fissile material. This design allows for higher operating temperatures, better fuel efficiency, and enhanced safety. It also opens the door to new applications, including compact nuclear systems for ships and remote installations.

"The Molten Chloride Fast Reactor represents a paradigm shift in the nuclear fuel cycle, and the Molten Chloride Reactor Experiment (MCRE) will directly inform the commercialization of that reactor," said Jeff Latkowski, senior vice president of TerraPower and program director for the Molten Chloride Fast Reactor. "Working with world-leading organizations such as INL to successfully synthesize this unique new fuel demonstrates how real progress in Gen IV nuclear is being made together."

"The implications for the maritime industry are significant," said Don Wood, senior technical advisor for MCRE. "Molten salt reactors could provide ships with highly efficient, low-maintenance nuclear power, reducing emissions and enabling long-range, uninterrupted travel. The technology could spark the rise of a new nuclear sector — one that is mobile, scalable and globally transformative."

More details from America's Energy Department: MCRE will require a total of 72 to 75 batches of fuel salt to go critical, making it the largest fuel production effort at INL since the operations of Experimental Breeder Reactor-II more than 30 years ago. The full-scale demonstration of the new fuel salt synthesis line for MCRE was made possible by a breakthrough in 2024. After years of testing, the team found the right recipe to convert 95 percent of uranium metal feedstock into 18 kilograms of uranium chloride fuel salt in only a few hours — a process that previously took more than a week to complete...
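The reported figures permit a rough mass balance. As a back-of-envelope sketch — assuming, purely for illustration, that the 18-kilogram product is pure uranium(III) chloride rather than the salt mixture the article doesn't fully specify:

```python
# Back-of-envelope check on the reported fuel-salt synthesis numbers.
# Assumption (not stated in the article): the 18 kg product is pure UCl3.
M_U = 238.03              # molar mass of uranium, g/mol
M_CL = 35.45              # molar mass of chlorine, g/mol
M_UCL3 = M_U + 3 * M_CL   # ~344.4 g/mol for uranium(III) chloride

u_mass_fraction = M_U / M_UCL3         # fraction of UCl3 mass that is uranium
u_in_product = 18.0 * u_mass_fraction  # kg of uranium in 18 kg of UCl3

# At the reported 95 percent conversion of uranium metal feedstock:
feedstock = u_in_product / 0.95        # kg of uranium metal implied

print(f"U mass fraction of UCl3: {u_mass_fraction:.3f}")  # ~0.691
print(f"U in 18 kg of UCl3:      {u_in_product:.1f} kg")  # ~12.4 kg
print(f"Implied feedstock:       {feedstock:.1f} kg")     # ~13.1 kg
```

Under that assumption, the "few hours" batch would start from roughly 13 kilograms of uranium metal; the actual MCRE salt chemistry may differ.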

After delivering the first batch of fuel salt this fall, the team anticipates delivering four additional batches by March of 2026. MCRE is expected to run in 2028 for approximately six months at INL in the Laboratory for Operation and Testing in the United States (LOTUS) test bed.

"With the first batch of fuel salt successfully created at INL, researchers will now conduct testing to better understand the physics of the process, with a goal of moving the process to a commercial scale over the next decade," says Cowboy State Daily.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Medicine

The Anxieties of Full-Body MRI Scans (Not Covered by Insurance) (yahoo.com) 75

Washington Post columnist Dana Milbank calls himself "a highly creative hypochondriac" — who just paid for an expensive MRI scan to locate abnormal spots as tiny as 2 millimeters.

He discusses the pros and cons of its "diffusion-weighted imaging" technology combined with the pattern recognition of AI, which theoretically "has the potential to save our lives by revealing budding cancers, silent aneurysms and other hidden would-be killers before they become deadly." But the scans cost $2,500 a pop and insurance won't pay. Worse, for every cancer these MRIs find, they produce a slightly greater number of false positives that require a biopsy, with the potential for infection and bleeding and emotional distress. Even when the scans don't produce a false positive, they almost always come up with some vague and disconcerting abnormality.... Will we feel better after viewing our insides? Or will we become anxious about things we hadn't even thought to worry about?

Part of living has always been in the mystery, in not knowing what tomorrow will bring. Now, because of sophisticated imaging, genome sequencing and other revolutionary screening tools, we can have predictability, or at least the illusion of it. But do we want that? The American College of Radiology says we do not. Its still-current 2023 statement says there is not "sufficient evidence" to recommend full-body screening, cautioning that the scan could lead to needless testing and expense. But David Larson, chair of ACR's Commission on Quality and Safety, told me that could change as more data comes in. "When people ask me, 'Would you recommend it?' I would say it depends on your tolerance for ambiguity," he said, giving the example of somebody found to have a borderline aortic aneurysm who is advised to wait and monitor it. If "that won't keep you up at night, then I wouldn't necessarily recommend against it...."

About 1 in 20 gets that dreaded call. A study Prenuvo presented earlier this year of 1,011 participants found that 4.9 percent of scans required a follow-up biopsy. Of all scans, 2.2 percent turned out to be actual cancer, and the other 2.7 percent were false positives. Of the 22 cancers the scans caught, 86 percent were in patients with no specific symptoms. But if finding something truly awful is rare, finding something abnormal is almost guaranteed. [Vikash Modi, Prenuvo's senior medical director of preventative medicine] said only 1 in 20 scans come back completely clean. The vast majority of patients wind up in the ambiguous realm where something may look suspicious but doesn't require urgent follow-up.
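The quoted percentages also imply how often a flagged scan turned out to be cancer. A quick check of the arithmetic, using only the figures in the article:

```python
# Sanity-check the Prenuvo study figures quoted above.
participants = 1011
biopsy_rate = 0.049      # 4.9% of scans led to a follow-up biopsy
cancer_rate = 0.022      # 2.2% of scans were actual cancer
false_pos_rate = 0.027   # 2.7% of scans were false positives

biopsies = round(participants * biopsy_rate)      # ~50 biopsies
cancers = round(participants * cancer_rate)       # ~22 cancers (matches the article)
false_pos = round(participants * false_pos_rate)  # ~27 false positives

# Positive predictive value: of scans flagged for biopsy, how many were cancer?
ppv = cancers / (cancers + false_pos)
print(f"{biopsies} biopsies, {cancers} cancers, PPV ~ {ppv:.0%}")  # ~45%
```

In other words, a bit under half of the biopsies the scans triggered actually found cancer — consistent with the article's point that false positives slightly outnumber true ones.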

He opted for the cheaper $1,000 torso scan, which the senior medical director calls "our bread-and-butter area," since 17 of the 22 cancers detected in one Prenuvo study were in that region, and it's where they often find cancers that otherwise wouldn't be discovered until they were incurable, like "that scary pancreatic stuff...."

Milbank's scan found 12 "abnormalities," including "a 2.5 mm pulmonary nodule in the right lower lobe" and "a 4.6 mm intraductal papillary mucinous neoplasm in the pancreatic tail" — but with 10 abnormalities labeled "minor" (and six being musculoskeletal wear-and-tear problems "I already knew about from the usual aches and pains.") Even the two "moderate" findings didn't sound that grim when I read on. The "indeterminant lesion" in my lung requires no follow-up, while the thing in my pancreas is "low-risk."... The "most interesting" finding was the pancreatic cyst, because, at this size and location, there's a 3 percent chance it will become cancerous in the next five years. But if annual follow-up scans of my pancreas (covered by insurance) show it's getting bigger, the cyst can be removed before it becomes cancer. For me, this made the MRI worthwhile. Sure, there was a 97 percent likelihood the cyst never would develop into a problem even if I hadn't learned about it. But now, with minimal inconvenience, I can eliminate that 3 percent risk of getting pancreatic cancer, the most lethal of major malignancies.
Portables

Why These Parents Want Schools to Stop Issuing iPads to Their Children (nbcnews.com) 48

What happened when a school in Los Angeles gave a sixth grader an iPad for use throughout the school day? "He used the iPad during school to watch YouTube and participate in Fortnite video game battles," reports NBC News.

His mother has now launched a coalition of parents called Schools Beyond Screens "organizing in WhatsApp groups, petition drives and actions at school board meetings and demanding meetings with district administrators, pressuring them to pull back on the school-mandated screen time." Los Angeles Unified is the first district of its size to face an organized — and growing — campaign by parents demanding that schools pull back on mandatory screen time. The discontent in Los Angeles Unified, the second-largest school district in the country, reflects a growing unease nationally about the amount of time children spend learning through screens in classrooms. While a majority of states prohibit children from using cellphones in class, 88% of schools provide students with personal devices, according to the National Center for Education Statistics, often Chromebook laptops or iPads. The parents hope getting a district that has over 409,000 students across nearly 800 schools to change how it approaches screen time would send a signal across public school districts to pull back from a yearslong effort to digitize classrooms....

[In the Los Angeles school district] Students in grade levels as low as kindergarten are provided iPads, and some schools require them to take the tablets home. Some teachers have allowed students to opt out of the iPad-based assignments, but other parents say they've been told that they can't. Parents can also opt their children out of having access to YouTube and several other Google products... The billion-dollar 2014 initiative to give tablet computers to everyone became a scandal after the bidding process appeared to heavily favor Apple, and it faced criticism once it became clear that students could bypass security protocols and that few teachers used the tablets. Currently, the district leaves it up to individual schools to decide whether they want students to take home iPads or Chromebooks every day and how much time they spend on them in class...

Around 300 parents attended listening sessions the district held last month about technology in the classroom. Nearly all who spoke criticized how much screen time schools gave their children in class, pointing to ways their behavior and grades suffered as students watched YouTube and played Minecraft... Several also asked district officials to explain why children as young as kindergartners were asked to sign a form to use devices in which they promised they would honor intellectual property law and refrain from meeting people in person whom they met online. "Is it possible for children to meet people over the internet on school-issued devices?" one father asked. The district officials declined to answer, saying it was meant to be a listening session.

In 2022, Los Angeles Unified started requiring students to complete benchmark assessments on educational software i-Ready, the article points out, which generates unique questions for each student. "But parents and teachers are unable to see what children are asked, in part because the company that makes the program considers them proprietary information..."

One teacher says his school's administrators are requiring him to use i-Ready even though it doesn't have any material for the science class he's actually teaching. He's also noticed some students will use answers from AI chatbots, bypassing the school's monitoring software by creating alternate user profiles. But the monitoring software company suggests the school misconfigured the software's settings, adding "More commonly, when students attempt to bypass filtering or monitoring, they do so by using proxies."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Privacy

India Reviews Telecom Industry Proposal For Always-On Satellite Location Tracking 24

India is weighing a proposal to mandate always-on satellite tracking in smartphones for precise government surveillance -- an idea strongly opposed by Apple, Google, Samsung, and industry groups. Reuters reports: For years, the [Prime Minister Narendra Modi's] administration has been concerned its agencies do not get precise locations when legal requests are made to telecom firms during investigations. Under the current system, the firms are limited to using cellular tower data that can only provide an estimated area location, which can be off by several meters.

The Cellular Operators Association of India (COAI), which represents Reliance's Jio and Bharti Airtel, has proposed that precise user locations should only be provided if the government orders smartphone makers to activate A-GPS technology -- which uses satellite signals and cellular data -- according to a June internal federal IT ministry email. That would require location services to always be activated in smartphones with no option for users to disable them. Apple, Samsung, and Alphabet's Google have told New Delhi that should not be mandated, said three of the sources who have direct knowledge of the deliberations.

A measure to track device-level location has no precedent anywhere else in the world, lobbying group India Cellular & Electronics Association (ICEA), which represents both Apple and Google, wrote in a confidential July letter to the government, which was viewed by Reuters. "The A-GPS network service ... (is) not deployed or supported for location surveillance," said the letter, which added that the measure "would be a regulatory overreach."

Earlier this week, Modi's government was forced to rescind an order requiring smartphone makers to preload a state-run cyber safety app on all devices after public backlash and privacy concerns.
Security

Microsoft 'Mitigates' Windows LNK Flaw Exploited As Zero-Day (bleepingcomputer.com) 25

joshuark shares a report from BleepingComputer: Microsoft has silently "mitigated" a high-severity Windows LNK vulnerability exploited by multiple state-backed and cybercrime hacking groups in zero-day attacks. Tracked as CVE-2025-9491, this security flaw allows attackers to hide malicious commands within Windows LNK files, which can be used to deploy malware and gain persistence on compromised devices. However, the attacks require user interaction to succeed, as they involve tricking potential victims into opening malicious Windows Shell Link (.lnk) files. Thus some element of social engineering, and a technically naive or gullible user (one who assumes Windows is secure, say), is required. [...]

As Trend Micro threat analysts discovered in March 2025, CVE-2025-9491 was already being widely exploited by 11 state-sponsored groups and cybercrime gangs, including Evil Corp, Bitter, APT37, APT43 (also known as Kimsuky), Mustang Panda, SideWinder, RedHotel, Konni, and others. Microsoft told BleepingComputer in March that it would "consider addressing" this zero-day flaw, even though it didn't "meet the bar for immediate servicing." As ACROS Security CEO and 0patch co-founder Mitja Kolsek found, Microsoft silently changed LNK file handling in the November updates in an apparent effort to mitigate the CVE-2025-9491 flaw. After installing last month's updates, users can now see all characters in the Target field when opening the Properties of LNK files, not just the first 260. As the movie The Ninth Gate put it: "silentium est aurum."
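The mitigation makes sense given how the trick worked: the Properties dialog previously showed only the first 260 characters of a shortcut's Target field, so whitespace padding could push a malicious argument out of view. A harmless illustration of the principle — the paths and payload name here are invented, not an actual exploit:

```python
# Illustrative only: shows how whitespace padding can hide text beyond the
# 260-character window the LNK Properties dialog used to display.
VISIBLE_LIMIT = 260

benign = r"C:\Windows\System32\cmd.exe"
hidden = "/c run_hidden_payload.bat"   # hypothetical hidden argument

# Pad past the display window, then append the hidden portion.
target = benign + " " * (VISIBLE_LIMIT - len(benign) + 1) + hidden

shown = target[:VISIBLE_LIMIT]   # what the old dialog would have displayed
print(repr(shown.rstrip()))      # only the benign path is visible
print(hidden in shown)           # False: hidden argument is past the window
print(hidden in target)          # True: but it's still in the real target
```

Showing the full Target string, as the November updates now do, removes the blind spot this padding relied on.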

Transportation

White House Rolls Back Fuel Economy Standards (caranddriver.com) 254

Longtime Slashdot reader sinij shares a report from Car and Driver: [T]he Trump administration announced less stringent Corporate Average Fuel Economy (CAFE) standards in an effort to bring down the price of new vehicles. The administration says that rules put in place by the Biden administration broke the law by going beyond the requirements mandated by Congress when the CAFE program was started. The new regulations will require automakers to meet an average fuel-economy figure of 34.5 mpg across 2031-model-year vehicles, instead of the 50.4 mpg that would have been required under the previous regulations. sinij comments: "This is a much-needed move as they also recently closed a number of loopholes, such as the assumed fuel-savings credit for engine start-stop technology, that made it more difficult to meet these goals. More so, a recent string of engine and transmission failures from multiple manufacturers shows that meeting fleet standards came at a very significant cost of reduced reliability."
Privacy

India Pulls Its Preinstalled iPhone App Demand 15

India has withdrawn its order requiring Apple and other smartphone makers to preinstall the government's Sanchar Saathi app after public backlash and privacy concerns. AppleInsider reports: On November 28, the India Ministry of Communication issued a secret directive to Apple and other smartphone manufacturers, requiring the preinstallation of a government-backed app. Less than a week later, the order has been rescinded. The withdrawal on Wednesday means Apple doesn't have to preload the Sanchar Saathi app onto iPhones sold in the country, in a way that couldn't be "disabled or restricted." [...]

In pulling back from the demand, the government insisted that the app had an "increasing acceptance" among citizens. There was a tenfold spike of new user registrations on Tuesday alone, with over 600,000 new users made aware of the app by the public debacle. India Minister of Communications Jyotiraditya Scindia took a moment to insist that concerns the app could be used for increased surveillance were unfounded. "Snooping is neither possible nor will it happen" with the app, Scindia claimed.

"This is a welcome development, but we are still awaiting the full text of the legal order that should accompany this announcement, including any revised directions under the Cyber Security Rules, 2024," said the Internet Freedom Foundation. It is treating the news with "cautious optimism, not closure," until formalities conclude. However, while promising, the backdown doesn't stop India from retrying something similar or another tactic in the future.
United States

New York Now Requires Retailers To Tell You When AI Sets Your Price (nytimes.com) 44

New York has become the first state in the nation to enact a law requiring retailers to disclose when AI and personal data are being used to set individualized prices [non-paywalled source] -- a measure that lawyers say will make algorithmic pricing "the next big battleground in A.I. regulation."

The law, enacted through the state budget, requires online retailers using personalized pricing to post a specific notice: "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA." The National Retail Federation sued to block enforcement on First Amendment grounds, arguing the required disclosure was "misleading and ominous," but federal judge Jed S. Rakoff allowed the law to proceed last month.

Uber has started displaying the notice to New York users. Spokesman Ryan Thornton called the law "poorly drafted and ambiguous" but maintained the company only considers geographic factors and demand in setting prices. At least 10 states have bills pending that would require similar disclosures or ban personalized pricing outright. California and federal lawmakers are considering complete bans.
The Military

Defense Contractors Lobby To Kill Military Right-to-Repair, Push Pay-Per-Use Data Model (theverge.com) 62

A bipartisan right-to-repair provision that would let the U.S. military fix its own equipment faces a serious threat from defense industry lobbyists who want to replace it with a pay-per-use model for accessing repair information. A source familiar with negotiations told The Verge that there are significant concerns that the language in the National Defense Authorization Act will be swapped out for a "data-as-a-service" alternative that would require the Department of Defense to pay contractors for access to technical repair data.

The provision, introduced by Sens. Elizabeth Warren (D-MA) and Tim Sheehy (R-MT) in their Warrior Right to Repair Act, passed the Senate in October and has support from Defense Secretary Pete Hegseth, the Army and the Navy. The National Defense Industrial Association published a white paper backing the data-as-a-service model, arguing it would protect contractors' intellectual property. Reps. Mike Rogers (R-AL) and Adam Smith (D-WA), who lead the House Armed Services Committee, outlined similar language in their SPEED Act. Rogers received more than $535,000 from the defense industry in 2024; Smith received over $310,550. The final NDAA is expected early next week.
AI

AI Can Technically Perform 12% of US Labor Market's Wage Value, MIT Simulation Finds (cnbc.com) 70

Researchers at MIT and Oak Ridge National Laboratory have built a simulation that models all 151 million American workers and their skills, then maps those skills against the capabilities of over 13,000 AI tools currently in production to see where the two overlap. The answer, according to their analysis: 11.7% of the US labor market's total wage value, or about $1.2 trillion, sits in tasks that AI systems can technically perform [PDF].

The researchers call this the Iceberg Index, and the name is deliberate. The visible AI disruption happening in tech jobs right now accounts for only 2.2% of labor market wage value. The remaining exposure lurks in cognitive and administrative work across finance, healthcare administration, and professional services, and unlike tech-sector disruption, it's spread across all fifty states rather than concentrated on the coasts.

Delaware and South Dakota show higher Iceberg Index values than California because their economies lean heavily on administrative and financial work. Ohio and Tennessee register modest tech-sector exposure but substantial hidden risk in the white-collar functions that support their manufacturing bases.

To validate the framework, the researchers compared their predictions against Anthropic's Economic Index tracking real-world AI usage from millions of Claude users. The two measures agreed on state categorizations 69% of the time, with particularly strong alignment at the extremes.

The Iceberg Index doesn't predict job losses or adoption timelines. It measures technical capability, the overlap between what AI can do and what occupations require. Traditional economic indicators like GDP and unemployment explain less than five percent of the variation in this skill-based exposure, which is partly why the researchers argue workforce planners need new metrics.
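The core metric — wage value sitting in tasks AI can technically perform — can be sketched as a simple wage-weighted overlap between job skills and tool capabilities. A toy version (the occupations, wages, and skills below are invented; the real index models 151 million workers against 13,000+ tools):

```python
# Toy sketch of a wage-weighted capability-overlap metric, in the spirit of
# the Iceberg Index. All occupations, wages, and skills here are invented.
occupations = {
    # name: (annual wage bill, skills the job requires)
    "records_clerk":  (40_000, {"data_entry", "scheduling"}),
    "nurse":          (70_000, {"patient_care", "scheduling"}),
    "report_analyst": (90_000, {"data_entry", "report_writing"}),
}
ai_capabilities = {"data_entry", "scheduling", "report_writing"}

total_wages = sum(wage for wage, _ in occupations.values())
# Count an occupation as exposed only if AI covers *all* of its skills.
exposed_wages = sum(
    wage for wage, skills in occupations.values()
    if skills <= ai_capabilities
)
print(f"Exposed share of wage value: {exposed_wages / total_wages:.1%}")
```

A partial-coverage variant would instead weight each occupation by the fraction of its skills AI covers; the 11.7 percent headline figure comes from the researchers' full simulation, not a binary sketch like this one.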
AI

OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide 125

An anonymous reader quotes a report from Ars Technica: Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen's suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot. The earliest look at OpenAI's strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen's "suicide coach." OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world's most engaging chatbot, parents argued.

But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring "the full picture" revealed by the teen's chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he'd begun experiencing suicidal ideation at age 11, long before he used the chatbot. "A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT," OpenAI's filing argued. [...] All the logs that OpenAI referenced in its filing are sealed, making it impossible to verify the broader context the AI firm claims the logs provide. In its blog, OpenAI said it was limiting the amount of "sensitive evidence" made available to the public, due to its intention to handle mental health-related cases with "care, transparency, and respect."

The Raine family's lead lawyer called OpenAI's response "disturbing."

"They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a 'beautiful suicide.' And OpenAI and Sam Altman have no explanation for the last hours of Adam's life, when ChatGPT gave him a pep talk and then offered to write a suicide note."

OpenAI is leaning on its usage policies to defend against this case, emphasizing that "ChatGPT users acknowledge their use of ChatGPT is 'at your sole risk'" and that Raine should never have been allowed to use the chatbot without parental consent.
EU

European Lawmakers Seek EU-Wide Minimum Age To Access AI Chatbots, Social Media (reuters.com) 26

The European Parliament has passed a non-binding resolution urging an EU-wide minimum age of 16 to access social media, video-sharing platforms, and AI chatbots, with parental consent allowed for ages 13-16 and a hard ban for anyone under 13. "It also proposes additional measures, including a ban on addictive design features that keep children hooked to screens and manipulative advertising and gambling-like elements," reports Reuters. Furthermore, the draft "calls for the outright blocking of websites that don't follow EU rules and to address AI tools that can create fake or inappropriate content."

The resolution "carries no legal weight" but reflects growing concern over AI companions and algorithm-driven platforms. "Any binding legislation would require formal proposals from the European Commission, followed by negotiations between EU member states and Parliament in a process that typically takes years to complete," notes the report.
The Internet

The Underwater Cables That Carry the Internet Are in Trouble (bloomberg.com) 39

The roughly 500 fiber-optic cables lying on the ocean floor carry more than 95% of all internet data -- not satellites, as many might assume -- and they face growing threats from natural disasters, terrorists and nation-states capable of disrupting global communications by dragging anchors or deploying submarines against the infrastructure.

The cables are protected by layers of copper, steel, and plastics, but they remain vulnerable at multiple points: earthquakes can disturb them on the seafloor, and the connections where cables meet land-based infrastructure present targets for bad actors. National actors including Russia, China and the US possess the capability to attack these cables.

A bipartisan Senate bill co-sponsored by Democrat Jeanne Shaheen and Republican John Barrasso is under consideration. The legislation would require a report to Congress within six months on Chinese and Russian sabotage efforts, mandate sanctions against foreign parties responsible for attacks, and direct the US to provide more resources for cable protection and repair.
Earth

Malaysia's Johor Bans Low-Tier Data Centers Over Water Strain (thestar.com.my) 26

Malaysia's Johor, one of Southeast Asia's fastest-growing data center hubs, has announced it will no longer approve applications for Tier 1 and Tier 2 data centers because of their enormous water consumption -- up to 50 million liters daily, or roughly 200 times what higher-tier facilities require.

The Malaysian state has approved 51 data center projects as of November 2025: 17 are already operational, 11 are under construction, and 23 received approval this year. The announcement follows concerns raised by a local politician who pointed to water supply disruptions in Georgia in the US after a data center began operations and protests in Uruguay over fears that data centers could affect farms.
Mozilla

Mozilla Announces 'TABS API' For Developers Building AI Agents (omgubuntu.co.uk) 10

"Fresh from announcing it is building an AI browsing mode in Firefox and laying the groundwork for agentic interactions in the Firefox 145 release, the corp arm of Mozilla is now flexing its AI muscles in the direction of those more likely to care," writes the blog OMG Ubuntu: If you're a developer building AI agents, you can sign up to get early access to Mozilla's TABS API, a "powerful web content extraction and transformation toolkit designed specifically for AI agent builders"... The TABS API enables devs to create agents to automate web interactions, like clicking, scrolling, searching, and submitting forms "just like a human". Real-time feedback and adaptive behaviours will, Mozilla say, offer "full control of the web, without the complexity."

As TABS is not powered by a Mozilla-backed LLM you'll need to connect it to your choice of third-party LLM for any relevant processing... Developers get 1,000 requests monthly on the free tier, which seems reasonable for prototyping personal projects. Complex agentic workloads may require more. Though pricing is yet to be locked in, the TABS API website suggests it'll cost ~$5 per 1000 requests. Paid plans will offer additional features too, like lower latency and, somewhat ironically, CAPTCHA solving so AI can 'prove' it's not a robot on pages gated to prevent automated activities.

Google, OpenAI, and other major AI vendors offer their own agentic APIs. Mozilla is pitching up late, but it plans to play differently. It touts a "strong focus on data minimisation and security", with scraped data treated ephemerally — i.e., not kept. As a distinction, that matters. AI agents can be given complex online tasks that involve all sorts of personal or sensitive data being fetched and worked with.... If you're minded to make one, perhaps without a motivation to asset-strip the common good, Mozilla's TABS API looks like a solid place to start.

Space

Are Astronomers Wrong About Dark Energy? (cnn.com) 30

An anonymous reader shared this report from CNN: The universe's expansion might not be accelerating but slowing down, a new study suggests. If confirmed, the finding would upend decades of established astronomical assumptions and rewrite our understanding of dark energy, the elusive force that counters the inward pull of gravity in our universe...

Last year, a consortium of hundreds of researchers using data from the Dark Energy Spectroscopic Instrument (DESI) in Arizona developed the largest-ever 3D map of the universe. The observations hinted that dark energy may be weakening over time, indicating that the universe's rate of expansion could eventually slow. Now, a study published November 6 in the journal Monthly Notices of the Royal Astronomical Society provides further evidence that dark energy might not be pushing on the universe with the same strength it used to. The DESI project's findings last year represented "a major, major paradigm change ... and our result, in some sense, agrees well with that," said Young-Wook Lee, a professor of astrophysics at Yonsei University in South Korea and lead researcher for the new study....

To reach their conclusions, the researchers analyzed a sample of 300 galaxies containing Type Ia supernovas and posited that the dimming of distant exploding stars was not only due to their moving farther away from Earth, but also due to the progenitor star's age... [Study coauthor Junhyuk Son, a doctoral candidate of astronomy at Yonsei University, said] "we found that their luminosity actually depends on the age of the stars that produce them — younger progenitors yield slightly dimmer supernovae, while older ones are brighter." Son said the team has a high statistical confidence — 99.99% — about this age-brightness relation, allowing them to use Type Ia supernovas more accurately than before to assess the universe's expansion... Eventually, if the expansion continues to slow down, the universe could begin to contract, ending in what astronomers imagine may be the opposite of the big bang — the big crunch. "That is certainly a possibility," Lee said. "Even two years ago, the Big Crunch was out of the question. But we need more work to see whether it could actually happen."

The new research proposes a radical revision of accepted knowledge, so, understandably, it is being met with skepticism. "This study rests on a flawed premise," Adam Riess, a professor of physics and astronomy at the Johns Hopkins University and one of the recipients of the 2011 Nobel Prize in physics, said in an email. "It suggests supernovae have aged with the Universe, yet observations show the opposite — today's supernovae occur where young stars form. The same idea was proposed years ago and refuted then, and there appears to be nothing new in this version." Lee, however, said Riess' claim is incorrect. "Even in the present-day Universe, Type Ia supernovae are found just as frequently in old, quiescent elliptical galaxies as in young, star-forming ones — which clearly shows that this comment is mistaken. The so-called paper that 'refuted' our earlier result relied on deeply flawed data with enormous uncertainties," he said, adding that the age-brightness correlation has been independently confirmed by two separate teams in the United States and China... "Extraordinary claims require extraordinary evidence," Dragan Huterer, a professor of physics at the University of Michigan in Ann Arbor, said in an email, noting that he does not feel the new research "rises to the threshold to overturn the currently favored model...."

The new Vera C. Rubin Observatory, which started operating this year, is set to help settle the debate. In early 2026 it will launch the Legacy Survey of Space and Time, an ultrawide, ultra-high-definition time-lapse record of the universe: by scanning the entire sky every few nights over 10 years, it will capture asteroids and comets, exploding stars, and distant galaxies as they change.

AI

AI Nutrition Tracking Stinks (theverge.com) 33

AI nutrition tracking features in popular fitness apps are producing wildly inaccurate calorie and macro counts despite promises to simplify food logging through automated photo analysis. The Verge tested AI-powered nutrition tools in Ladder, Oura Advisor, January and MyFitnessPal. Ladder's AI estimated the outlet's carefully measured 355-calorie breakfast at 780 calories and got the macro breakdown wrong even after the reviewer manually edited entries to include exact brands and amounts.

Oura Advisor routinely mistook matcha protein shakes for green smoothies. January misidentified barbecue sauce as teriyaki sauce and failed to detect mushrooms in a chicken dish. None of the apps could identify healthier ingredient swaps or accurately log ethnic foods. Oura classified a mix of edamame, quinoa and brown rice as mashed potatoes and white rice. Ladder logged dal makhani curry as chicken soup. The AI features require extensive manual corrections that negate any time savings from automated logging, the publication concluded in its scathing review.

Communications

IBM, Cisco Outline Plans For Networks of Quantum Computers By Early 2030s (reuters.com) 19

IBM and Cisco plan to link quantum computers over long distances by the early 2030s, "with the goal of demonstrating the concept is workable by the end of 2030," reports Reuters. "The move could pave the way for a quantum internet, though executives at the two companies cautioned that the networks would require technologies that do not currently exist and will have to be developed with the help of universities and federal laboratories." From the report: The challenge begins with a problem: Quantum computers like IBM's sit in massive cryogenic tanks that get so cold that atoms barely move. To get information out of them, IBM has to figure out how to transform information in stationary "qubits" -- the fundamental unit of information in a quantum computer -- into what Jay Gambetta, director of IBM Research and an IBM fellow, told Reuters are "flying" qubits that travel as microwaves.

But those flying microwave qubits will have to be turned into optical signals that can travel between Cisco switches on fiber-optic cables. The technology for that transformation -- called a microwave-optical transducer -- will have to be developed with the help of groups like the Superconducting Quantum Materials and Systems Center, led by the Fermi National Accelerator Laboratory near Chicago, among others. Along the way, Cisco and IBM will also publish open-source software to weave all the parts together.

Games

Roblox Blocks Children From Chatting To Adult Strangers (bbc.com) 52

Roblox is rolling out mandatory facial age verification for chat features to prevent children from communicating with adult strangers. The platform will restrict chat to verified age groups, expand parental controls, and become the first major gaming platform to require facial age checks for messaging. The BBC reports: Mandatory age checks will be introduced for accounts using chat features, starting in December for Australia, New Zealand and the Netherlands, then the rest of the globe from January. [...] Rani Govender, policy manager for child safety online at the NSPCC, said action had been needed because young people had been exposed to "unacceptable risks" on Roblox, "leaving many vulnerable to harm and online abuse."

The charity welcomed the platform's latest announcement but called on Roblox to "ensure they deliver change for children in practice and prevent adult perpetrators from targeting and manipulating young users." The platform averaged more than 80 million daily players in 2024, about 40% of them under the age of 13. [...]

Matt Kaufman, chief safety officer for Roblox, told a press briefing the age estimation technology is "pretty accurate." He claimed the system can estimate ages to "within one to two years" for users aged between five and 25. Currently it can be used voluntarily by anyone in the world.

Government

White House Prepares Executive Order To Block State AI Laws (politico.com) 81

An anonymous reader quotes a report from Politico: The White House is preparing to issue an executive order as soon as Friday that tells the Department of Justice and other federal agencies to prevent states from regulating artificial intelligence, according to four people familiar with the matter and a leaked draft of the order obtained by POLITICO. The draft document, confirmed as authentic by three people familiar with the matter, would create an "AI Litigation Task Force" at the DOJ whose "sole responsibility" would be to challenge state AI laws.

Government lawyers would be directed to challenge state laws on the grounds that they unconstitutionally regulate interstate commerce, are preempted by existing federal regulations or otherwise at the attorney general's discretion. The task force would consult with administration officials, including the special adviser for AI and crypto -- a role currently occupied by tech investor David Sacks.

The executive order, in the draft obtained by POLITICO, would also empower Commerce Secretary Howard Lutnick to publish a review of "onerous" state AI laws within 90 days and restrict federal broadband funds to states whose AI laws are found to be objectionable. It would direct the Federal Trade Commission to investigate whether state AI laws that "require alterations to the truthful outputs of AI models" are blocked by the FTC Act. And it would order the Federal Communications Commission to begin work on a reporting and disclosure standard for AI models that would preempt conflicting state laws.
