News

The Bizarre Enhancement Claims Rocking Ski Jumping (nytimes.com) 95

German newspaper Bild reported in January that some ski jumpers have been injecting their penises with hyaluronic acid ahead of the Milan Cortina Winter Olympics -- the theory being that temporarily enlarged genitalia would yield looser-fitting suits when measured by 3D scanners, and those looser suits could act like sails to produce longer jumps.

A study published last October in the scientific journal Frontiers found that a 2cm suit change translated to an extra 5.8 metres in jump distance. No specific athletes have been accused. The World Anti-Doping Agency said Thursday it would investigate if presented with evidence, noting its powers extend to banning practices that violate the "spirit of sport." The claims arrive as ski jumping already faces scrutiny -- two Norwegian coaches and an equipment manager received 18-month bans in January for illegally manipulating suit stitching.
Businesses

Pinterest Sacks Workers For Creating Tool To Track Layoffs (bbc.com) 74

Pinterest has sacked two engineers for tracking which workers lost their jobs in a recent round of layoffs. BBC: The company recently announced job cuts, with chief executive Bill Ready stating in an email he was "doubling down on an AI-forward approach," according to an employee who posted some of the memo on LinkedIn.

Pinterest told investors the move would impact about 15% of the workforce, or roughly 700 roles, without saying which teams or workers were affected. But then "two engineers wrote custom scripts improperly accessing confidential company information to identify the locations and names of all dismissed employees and then shared it more broadly," a company spokesperson told the BBC. "This was a clear violation of Pinterest policy and of their former colleagues' privacy," the spokesperson added.

The script written by the Pinterest engineers targeted the internal tools employees use to communicate, according to a person familiar with the firings who asked not to be identified. The person said the script generated alerts when employee accounts in tools such as the team communication platform Slack were removed or deactivated, giving some insight into who at the company was affected by the layoffs.
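The mechanics described amount to diffing snapshots of a user directory. A minimal sketch of that technique in Python (the data shapes and names are hypothetical, not Pinterest's actual script):

```python
def newly_deactivated(previous, current):
    """Return names of users who were active in the previous snapshot
    but are deactivated or missing in the current one."""
    gone = []
    for user_id, info in previous.items():
        if not info["active"]:
            continue  # already inactive before; not a new removal
        now = current.get(user_id)
        if now is None or not now["active"]:
            gone.append(info["name"])
    return sorted(gone)

before = {
    "U1": {"name": "Alice", "active": True},
    "U2": {"name": "Bob", "active": True},
    "U3": {"name": "Carol", "active": False},  # inactive in both snapshots
}
after = {
    "U1": {"name": "Alice", "active": True},
    "U2": {"name": "Bob", "active": False},    # account deactivated
    # U3 removed entirely -- also not a *new* deactivation
}
print(newly_deactivated(before, after))  # ['Bob']
```

Run periodically against a directory listing, a diff like this surfaces exactly the kind of information Pinterest says was improperly accessed.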

Science

Ultra-Processed Foods Should Be Treated More Like Cigarettes Than Food, Study Says (theguardian.com) 299

Ultra-processed foods (UPFs) have more in common with cigarettes than with fruit or vegetables, and require far tighter regulation, according to a new report. The Guardian: UPFs and cigarettes are engineered to encourage addiction and consumption, researchers from three US universities said, pointing to the parallels in widespread health harms that link both.

UPFs, which are widely available worldwide, are food products that have been industrially manufactured, often using emulsifiers or artificial colouring and flavours. The category includes soft drinks and packaged snacks such as crisps and biscuits. There are similarities in the production processes of UPFs and cigarettes, and in manufacturers' efforts to optimise the "doses" of products and how quickly they act on reward pathways in the body, according to the paper from researchers at Harvard, the University of Michigan and Duke University.

They draw on data from the fields of addiction science, nutrition and public health history to make their comparisons, published on 3 February in the healthcare journal the Milbank Quarterly. The authors suggest that marketing claims on the products, such as being "low fat" or "sugar free," are "health washing" that can stall regulation, akin to the advertising of cigarette filters in the 1950s as protective innovations that "in practice offered little meaningful benefit."

Open Source

'Vibe Coding Kills Open Source' (arxiv.org) 106

Four economists across Central European University, Bielefeld University and the Kiel Institute have built a general equilibrium model of the open-source software ecosystem and concluded that vibe coding -- the increasingly common practice of letting AI agents select, assemble and modify packages on a developer's behalf -- erodes the very funding mechanism that keeps open-source projects alive.

The core problem is a decoupling of usage from engagement. Tailwind CSS's npm downloads have climbed steadily, but its creator says documentation traffic is down about 40% since early 2023 and revenue has dropped close to 80%. Stack Overflow activity fell roughly 25% within six months of ChatGPT's launch. Open-source maintainers monetize through documentation visits, bug reports, and community interaction. AI agents skip all of that.

The model finds that the feedback loops once responsible for open source's explosive growth now run in reverse. Fewer maintainers can justify sharing code, variety shrinks, and average quality falls -- even as total usage rises. One proposed fix is a "Spotify for open source" model in which AI platforms redistribute subscription revenue to maintainers based on package usage. For the ecosystem to remain viable in the model, vibe-coding users would need to contribute at least 84% of what direct users generate, or roughly 84% of all revenue would have to come from sources independent of how users access the software.
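The redistribution scheme itself is simple pro-rata accounting. A sketch with hypothetical package names and figures (the paper does not specify an implementation):

```python
def distribute(pool, downloads):
    """Split a subscription revenue pool across packages,
    proportional to each package's usage count."""
    total = sum(downloads.values())
    return {pkg: pool * n / total for pkg, n in downloads.items()}

usage = {"pkg-a": 500_000, "pkg-b": 300_000, "pkg-c": 200_000}
shares = distribute(100_000.0, usage)
print(shares)  # {'pkg-a': 50000.0, 'pkg-b': 30000.0, 'pkg-c': 20000.0}
```

The hard part, as the paper's 84% threshold suggests, is not the accounting but getting AI platforms to fund the pool at a level that replaces what direct engagement used to generate.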
Education

China's Decades-Old 'Genius Class' Pipeline Is Quietly Fueling Its AI Challenge To the US (ft.com) 113

China's decades-old network of elite high-school "genius classes" -- ultra-competitive talent streams that pull an estimated 100,000 gifted teenagers out of regular schooling every year and run them through college-level science curricula -- has produced the core technical talent now building the country's leading AI and technology companies, the Financial Times reported Saturday.

Graduates of these programs include the founder of ByteDance, the leaders of e-commerce giants Taobao and PDD, the billionaire behind super-app Meituan, the brothers who started Nvidia rival Cambricon, and the core engineers behind large language models at DeepSeek and Alibaba's Qwen. DeepSeek's research team of more than 100 was almost entirely composed of genius-class alumni when the startup released its R1 reasoning model last year at a fraction of the cost of its international rivals.

The system traces to the mid-1980s, when China first sent students to the International Mathematical Olympiad and a handful of top high schools began creating dedicated competition-track classes. China now graduates around five million STEM majors annually -- compared to roughly half a million in the United States -- and in 2025, 22 of the 23 students it sent to the International Science Olympiads returned with gold medals. The computer science track has overtaken maths and physics as the most popular competition subject, a shift that accelerated after Beijing designated AI development a "key national growth strategy" in 2017.
AI

Is AI Really Taking Jobs? Or Are Employers Just 'AI-Washing' Normal Layoffs? (nytimes.com) 66

The New York Times lists other reasons a company lays off people. ("It didn't meet financial targets. It overhired. Tariffs, or the loss of a big client, rocked it...")

"But lately, many companies are highlighting a new factor: artificial intelligence. Executives, saying they anticipate huge changes from the technology, are making cuts now." A.I. was cited in the announcements of more than 50,000 layoffs in 2025, according to Challenger, Gray & Christmas, a research firm... Investors may applaud such pre-emptive moves. But some skeptics (including media outlets) suggest that corporations are disingenuously blaming A.I. for layoffs, or "A.I.-washing." As the market research firm Forrester put it in a January report: "Many companies announcing A.I.-related layoffs do not have mature, vetted A.I. applications ready to fill those roles, highlighting a trend of 'A.I.-washing' — attributing financially motivated cuts to future A.I. implementation...."

"Companies are saying that 'we're anticipating that we're going to introduce A.I. that will take over these jobs.' But it hasn't happened yet. So that's one reason to be skeptical," said Peter Cappelli, a professor at the Wharton School... Of course, A.I. may well end up transforming the job market, in tech and beyond. But a recent study... [by a senior research fellow at the Brookings Institution who studies A.I. and work] found that A.I. has not yet meaningfully shifted the overall market. Tech firms have cut more than 700,000 employees globally since 2022, according to Layoffs.fyi, which tracks industry job losses. But much of that was a correction for overhiring during the pandemic.

As unpopular as A.I. job cuts may be to the public, they may be less controversial than other reasons — like bad company planning.

Amazon CEO Andy Jassy has even said the reason for most of the company's layoffs was reducing bureaucracy, the article points out, although "Most analysts, however, believe Amazon is cutting jobs to clear money for A.I. investments, such as data centers."
Software

Backseat Software (mikeswanson.com) 98

Mike Swanson, commenting on modern software's intrusive, attention-seeking behavior: What if your car worked like so many apps? You're driving somewhere important...maybe running a little bit late. A few minutes into the drive, your car pulls over to the side of the road and asks:

"How are you enjoying your drive so far?"

Annoyed by the interruption, and even more behind schedule, you dismiss the prompt and merge back into traffic.

A minute later it does it again.

"Did you know I have a new feature? Tap here to learn more."

It blocks your speedometer with an overlay tutorial about the turn signal. It highlights the wiper controls and refuses to go away until you demonstrate mastery.

Ridiculous, of course.

And yet, this is how a lot of modern software behaves. Not because it's broken, but because we've normalized an interruption model that would be unacceptable almost anywhere else.

Privacy

An AI Toy Exposed 50K Logs of Its Chats With Kids To Anyone With a Gmail Account (wired.com) 21

An anonymous reader quotes a report from Wired: Earlier this month, Joseph Thacker's neighbor mentioned to him that she'd preordered a couple of stuffed dinosaur toys for her children. She'd chosen the toys, called Bondus, because they offered an AI chat feature that lets children talk to the toy like a kind of machine-learning-enabled imaginary friend. But she knew Thacker, a security researcher, had done work on AI risks for kids, and she was curious about his thoughts.

So Thacker looked into it. With just a few minutes of work, he and a web security researcher friend named Joel Margolis made a startling discovery: Bondu's web-based portal, intended to allow parents to check on their children's conversations and for Bondu's staff to monitor the products' use and performance, also let anyone with a Gmail account access transcripts of virtually every conversation Bondu's child users have ever had with the toy.

Without carrying out any actual hacking, simply by logging in with an arbitrary Google account, the two researchers immediately found themselves looking at children's private conversations, the pet names kids had given their Bondu, the likes and dislikes of the toys' toddler owners, their favorite snacks and dance moves. In total, Margolis and Thacker discovered that the data Bondu left unprotected -- accessible to anyone who logged in to the company's public-facing web console with their Google username -- included children's names, birth dates, family member names, "objectives" for the child chosen by a parent, and most disturbingly, detailed summaries and transcripts of every previous chat between the child and their Bondu, a toy practically designed to elicit intimate one-on-one conversation.

More than 50,000 chat transcripts were accessible through the exposed web portal. When the researchers alerted Bondu about the findings, the company acted to take down the console within minutes and relaunched it the next day with proper authentication measures.

"We take user privacy seriously and are committed to protecting user data," Bondu CEO Fateen Anam Rafid said in a statement. "We have communicated with all active users about our security protocols and continue to strengthen our systems with new protections," as well as hiring a security firm to validate its investigation and monitor its systems in the future.

Submission + - Trump reveals 'discombobulator' weapon was crucial to Venezuela raid (nypost.com)

Tablizer writes: Trump commented on the weapon when asked about reports this month that the Biden administration purchased a pulsed energy device suspected of being the type that caused “Havana Syndrome.”

That revelation followed on-the-ground accounts from Venezuela describing how Maduro’s gunmen were brought to their knees, “bleeding through the nose” and vomiting blood.

A self-identified member of the deposed strongman’s team of guards recounted afterward that “suddenly all our radar systems shut down without any explanation.”...

“At one point, they launched something; I don’t know how to describe it. It was like a very intense sound wave. Suddenly I felt like my head was exploding from the inside,” the witness said.

“We all started bleeding from the nose. Some were vomiting blood. We fell to the ground, unable to move. We couldn’t even stand up after that sonic weapon — or whatever it was.”

Power

Gasoline Out of Thin Air? It's a Reality! (jalopnik.com) 122

Can Aircela's machine "create gasoline using little more than electricity and the air that we breathe"? Jalopnik reports... The Aircela machine works through a three-step process. It captures carbon dioxide directly from the air... The machine also traps water vapor, and uses electrolysis to break water down into hydrogen and oxygen... The oxygen is released, leaving hydrogen and carbon dioxide, the building blocks of hydrocarbons. This mixture then undergoes a process known as direct hydrogenation of carbon dioxide to methanol, as documented in scientific papers.

Methanol is a useful, though dangerous, racing fuel, but the engine under your hood won't run on it, so it must be converted to gasoline. ExxonMobil has been studying the process of doing exactly that since at least the 1970s. It's another well-established process, and the final step the Aircela machine performs before dispensing the fuel through a built-in, ordinary gas pump. So while creating gasoline out of thin air sounds like something only a wizard alchemist in Dungeons & Dragons can do, each step of this process is grounded in science, and combining the steps in this manner means it can, and does, really work.

Aircela does not, however, promise free gasoline for all. There are some limitations to this process. A machine the size of Aircela's produces just one gallon of gas per day... The machine can store up to 17 gallons, according to Popular Science, so if you don't drive very much, you can fill up your tank, eventually... While the Aircela website does not list a price for the machine, The Autopian reports it's targeting a price between $15,000 and $20,000, with hopes of dropping the price once mass production begins. While certainly less expensive than a traditional gas station, it's still a bit of an investment to begin producing your own fuel. If you live or work out in the middle of nowhere, however, it could be close to or less than the cost of bringing gas to you, or driving all your vehicles into a distant town to fill up. You're also not limited to buying just one machine, as the system is designed to scale up to produce as much fuel as you need.

The main reason why this process isn't "something for nothing" is that it takes roughly twice as much electrical energy to make a gallon of gasoline as that gallon contains. As Aircela told The Autopian: "Aircela is targeting >50% end to end power efficiency. Since there is about 37kWh of energy in a gallon of gasoline we will require about 75kWh to make it. When we power our machines with standalone, off-grid, photovoltaic panels this will correspond to less than $1.50/gallon in energy cost."
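The arithmetic behind those figures is easy to check. A quick sketch using the numbers Aircela quotes; the per-kWh electricity price is an assumption backed out from their cost claim, not a figure they state directly:

```python
# Back-of-envelope check of Aircela's quoted numbers.
ENERGY_PER_GALLON_KWH = 37.0   # stated energy content of a gallon of gasoline
EFFICIENCY = 0.50              # targeted end-to-end power efficiency

electricity_per_gallon = ENERGY_PER_GALLON_KWH / EFFICIENCY
print(electricity_per_gallon)  # 74.0 kWh -- close to the quoted ~75 kWh

# Implied off-grid solar price (assumption): about $0.02/kWh
assumed_price_per_kwh = 0.02
print(round(electricity_per_gallon * assumed_price_per_kwh, 2))  # 1.48, i.e. ~$1.50/gallon
```

In other words, the "less than $1.50/gallon" claim only holds if their off-grid solar delivers electricity at around two cents per kilowatt-hour.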

Thanks to long-time Slashdot reader Quasar1999 for sharing the news.
Power

The Case Against Small Modular Nuclear Reactors (cnn.com) 146

Small modular nuclear reactors (or SMRs) are touted as "cheaper, safer, faster to build and easier to finance" than conventional nuclear reactors, reports CNN. Amazon has invested in X-Energy; earlier this month, Meta announced a deal with Oklo; and in Michigan last month, Holtec began the long formal licensing process with America's Nuclear Regulatory Commission for two SMRs next to a nuclear plant it hopes to reactivate. (And in 2024, California-based Kairos Power broke ground in Tennessee on an SMR "demo" reactor.)

But "The reality, as ever, is likely to be messier and experts are sounding notes of caution..." All the arguments in favor of SMRs overlook a fundamental issue, said Edwin Lyman, director of nuclear power safety at the Union of Concerned Scientists: They are too expensive. Despite all the money swilling around the sector, "it's still not enough," he told CNN. Nuclear power cannot compete on cost with alternatives, both fossil fuels and increasingly renewable energy, he said.

Some SMRs also have an issue with fuel. The more unconventional designs, those cooled by salt or gas, often require a special type of fuel called high-assay low-enriched uranium, known as HALEU (pronounced hay-loo). The amounts available are limited and the supply chain has been dominated by Russia, despite efforts to build up a domestic supply. It's a major risk, said Nick Touran [a nuclear engineer and independent consultant]. The biggest challenge nuclear has is competing with natural gas, he said, and relying on a "luxury, super expensive fuel may not be the best way" to do that. There is still stigma around nuclear waste, too. SMR companies say smaller reactors mean less nuclear waste, but 2022 research from Stanford University suggested some SMRs could actually generate more waste, in part because they are less fuel efficient...

As companies race to prove SMRs can meet the hype, experts appear to be divided in their thinking. For some, SMRs are an expensive — and potentially dangerous — distraction, with timelines that stretch so far into the future they cannot be a genuine answer to soaring needs for clean power right now.

Nuclear engineer and consultant Touran told CNN the small reactors are "a technological solution to a financial problem. No venture capitalists can say, like, 'oh, sure, we'll build a $30 billion plant.' But, if you're down into hundreds of millions, maybe they can do it."

Submission + - AI isn't getting smarter. We are getting dumber. (newatlas.com)

schwit1 writes: The point the op-ed makes is fundamental: AI cannot add anything to the information it has. It might be able to compile that information well, but its analysis is always going to be limited because it has no true creative spirit. It is merely a software program, albeit a very sophisticated one.

This quote from the essay will give you the sense:

Maybe you just use AI to clarify your thoughts. Turn the mottle of ideas in your head into coherent communicable paragraphs. It's OK, you say, because you’re reviewing the results, and often editing the output. You’re ending up with exactly what you want to say, just in a form and style that’s better than any way you could have put it yourself.

But is what you end up with really your thoughts? And what if everyone started doing that?

Stripping the novelty and personality out of all communication; turning every one of our interactions into homogeneous robotic engagements? Every birthday greeting becomes akin to a printed Hallmark card. Every eulogy turns into a stamp-card sentiment. Every email follows the auto-response template suggested by the browser.

We do this long enough and eventually we begin to lose the ability to communicate our inner thoughts to others. Our minds start to think in terms of LLM prompts. All I need is the gist of what I want to say, and the system fills in the blanks.


AI

The Risks of AI in Schools Outweigh the Benefits, Report Says (npr.org) 33

This month saw results from a yearlong global study of "potential negative risks that generative AI poses to students." The study (by the Brookings Institution's Center for Universal Education) also suggests how to prevent risks and maximize benefits: After interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, a close review of over 400 studies, and a Delphi panel, we find that at this point in its trajectory, the risks of utilizing generative AI in children's education overshadow its benefits.

"At the top of Brookings' list of risks is the negative effect AI can have on children's cognitive growth," reports NPR — "how they learn new skills and perceive and solve problems." The report describes a kind of doom loop of AI dependence, where students increasingly off-load their own thinking onto the technology, leading to the kind of cognitive decline or atrophy more commonly associated with aging brains... As one student told the researchers, "It's easy. You don't need to (use) your brain." The report offers a surfeit of evidence to suggest that students who use generative AI are already seeing declines in content knowledge, critical thinking and even creativity. And this could have enormous consequences if these young people grow into adults without learning to think critically...

Survey responses revealed deep concern that use of AI, particularly chatbots, "is undermining students' emotional well-being, including their ability to form relationships, recover from setbacks, and maintain mental health," the report says. One of the many problems with kids' overuse of AI is that the technology is inherently sycophantic — it has been designed to reinforce users' beliefs... Winthrop offers an example of a child interacting with a chatbot, "complaining about your parents and saying, 'They want me to wash the dishes — this is so annoying. I hate my parents.' The chatbot will likely say, 'You're right. You're misunderstood. I'm so sorry. I understand you.' Versus a friend who would say, 'Dude, I wash the dishes all the time in my house. I don't know what you're complaining about. That's normal.' That right there is the problem."

AI did have some advantages, the article points out: The report says another benefit of AI is that it allows teachers to automate some tasks: "generating parent emails ... translating materials, creating worksheets, rubrics, quizzes, and lesson plans" — and more. The report cites multiple research studies that found important time-saving benefits for teachers, including one U.S. study that found that teachers who use AI save an average of nearly six hours a week and about six weeks over the course of a full school year...

AI can also help make classrooms more accessible for students with a wide range of learning disabilities, including dyslexia. But "AI can massively increase existing divides" too, [warns Rebecca Winthrop, one of the report's authors and a senior fellow at Brookings]. That's because the free AI tools that are most accessible to students and schools can also be the least reliable and least factually accurate... "[T]his is the first time in ed-tech history that schools will have to pay more for more accurate information. And that really hurts schools without a lot of resources."

The report calls for more research — and makes several recommendations (including "holistic" learning and "AI tools that teach, not tell"). But this may be its most important recommendation: "Provide a clear vision for ethical AI use that centers human agency..."

"We find that AI has the potential to benefit or hinder students, depending on how it is used."

Submission + - New, faster solution to removing PFAS (the 'forever chemicals') from water (theguardian.com)

Bruce66423 writes: 'New filtration technology developed by Rice University may absorb some Pfas "forever chemicals" at 100 times the rate previously possible, which could dramatically improve pollution control and speed remediation.

'Researchers also say they have also found a way to destroy Pfas, though both technologies face a steep challenge in being deployed on an industrial scale.

'A new peer-reviewed paper details a layered double hydroxide (LDH) material made from copper and aluminum that absorbs long-chain Pfas up to 100 times faster than commonly used filtration systems.'

Medicine

'Active' Sitting Is Better For Brain Health (sciencealert.com) 40

alternative_right shares a report from ScienceAlert: A systematic review of 85 studies has now found good reason to differentiate between 'active' sitting, like playing cards or reading, and 'passive' sitting, like watching TV. [...] "Total sitting time has been shown to be related to brain health; however, sitting is often treated as a single entity, without considering the specific type of activity," explains public health researcher Paul Gardiner from the University of Queensland in Australia. "Most people spend many hours sitting each day, so the type of sitting really matters ... These findings show that small everyday choices -- like reading instead of watching television -- may help keep your brain healthier as you age."

Across numerous studies, Gardiner and colleagues found that active sitting activities, like reading, playing card games, and using a computer, showed "overwhelmingly positive associations with cognitive health, enhancing cognitive functions such as executive function, situational memory, and working memory." Meanwhile, passive sitting was most consistently associated with negative cognitive outcomes, including increased risk of dementia.
The study was published in the Journal of Alzheimer's Disease.
