Education

Should Schools Get Rid of Homework? (npr.org) 182

Tony Isaac shares a report from NPR: Federal survey data shows that the amount of math homework assigned to fourth and eighth grade students, in particular, has been steadily declining for the past decade. Some educators and parents say this is a good thing -- students shouldn't spend six or more hours a day at school and still have additional schoolwork to complete at home. But the research on homework is complicated. Some studies show that students who spend more time on homework perform better than their peers. For example, a longitudinal study released in 2021 of more than 6,000 students in Germany, Uruguay and the Netherlands found that lower-performing students who increased the amount of time they spent on math homework performed better in math, even one year later.

Other studies, however, suggest homework has minimal effects on academic performance: A 1998 study of more than 700 U.S. students led by a researcher at Duke University found that more homework assigned in elementary grades had no significant effect on standardized test scores. The researchers did find small positive gains on class grades when they looked at both test scores and the proportion of homework students completed. More homework was also associated with negative attitudes about school for younger children in the study. "The best educators figured out a long time ago that we can control what we can control," and that's what happens during the school day, Superintendent Garrett said, not homework. "There has been a shift away from it naturally anyway, and I felt like this made it equitable across our entire school system."
"The best argument for homework is that mathematical procedures require practice, and you don't want to waste classroom time on practice, so you send that home," said Tom Loveless, a researcher and former teacher who has studied homework.

Ariel Taylor Smith, senior director of the Center for Policy and Action at the National Parents Union, said: "The thing they point to is that it's an equity issue, and not all parents have the same availability and ability to support their students. I would make the argument that if a kid is really far behind in school, that's an equity issue. They need the additional time to practice." Kids, she said, "need more practice ... Sometimes, you do have to practice the boring stuff, like math."

"The interesting issue for folks to consider is not should there be more homework, but should there be better homework," said Joyce Epstein, who has studied homework and is the co-director of the Center on School, Family, and Community Partnerships at the Johns Hopkins University School of Education. "Better homework in math might be knowing the fact that kids don't have to be practicing for hours, 10 to 20 examples," when they could establish mastery in less time.
Java

Electrical Current Might Be the Key To a Better Cup of Coffee (arstechnica.com) 55

An anonymous reader quotes a report from Ars Technica: University of Oregon chemist Christopher Hendon loves his coffee -- so much so that studying all the factors that go into creating the perfect cuppa constitutes a significant area of research for him. His latest project: discovering a novel means of measuring the flavor profile of coffee simply by sending an electrical current through a sample beverage. The results appear in a new paper published in the journal Nature Communications.

[...] The coffee industry typically uses a method for measuring the refractive index of coffee -- i.e., how light bends as it travels through the liquid -- to determine strength, but it doesn't capture the contribution of roast color to the overall flavor profile. So for this latest study, Hendon decided to focus on roast color and beverage strength, the two variables most likely to affect the sensory profile of the final cuppa. His solution turned out to be quite simple: he repurposed an electrochemical tool called a potentiostat, typically used to test battery and fuel cell performance, to measure how electricity interacted with the liquid, and found that this provided a better measurement of the flavor profile. He even tested it on four different samples of coffee beans and successfully identified the distinctive signature of a batch that had failed the roaster's quality-control process.

Granted, one's taste in coffee is fairly subjective, so Hendon's goal was not to achieve a "perfect" cup but to give baristas a simple tool to consistently reproduce flavor profiles more tailored to a given customer's taste. "It's an objective way to make a statement about what people like in a cup of coffee," said Hendon. "The reason you have an enjoyable cup of coffee is almost certainly that you have selected a coffee of a particular roast color and extracted it to a desired strength. Until now, we haven't been able to separate those variables. Now we can diagnose what gives rise to that delicious cup."
Outside of his latest electrical-current experiment, Christopher Hendon's coffee research has shown that espresso can be made more consistently by modeling extraction yield -- how much coffee dissolves into the final drink -- and controlling water flow and pressure.

He also found that static electricity from grinding causes fine coffee particles to clump, which disrupts brewing. The solution: adding a small squirt of water to beans before grinding (known as the Ross droplet technique) to reduce that static, cut clumping and waste, and lead to a stronger, more consistent espresso.
The Internet

Study Finds a Third of New Websites Are AI-Generated 65

alternative_right shares a report from 404 Media: Researchers working with data from the Internet Archive have discovered that a third of websites created since 2022 are AI-generated. The team of researchers -- which includes people from Stanford, Imperial College London, and the Internet Archive -- published their findings online in a paper titled "The Impact of AI-Generated Text on the Internet." The research also found that all this AI-generated text is making the web more cheery and less verbose. "The proliferation of AI-generated and AI-assisted text on the internet is feared to contribute to a degradation in semantic and stylistic diversity, factual accuracy, and other negative developments," the researchers write in the paper. "We find that by mid-2025, roughly 35% of newly published websites were classified as AI-generated or AI-assisted, up from zero before ChatGPT's launch in late 2022."

"I find the sheer speed of the AI takeover of the web quite staggering," Jonas Dolezal, an AI researcher at Stanford and co-author of the paper, told 404 Media. "After decades of humans shaping it, a significant portion of the internet has become defined by AI in just three years. We're witnessing, in my opinion, a major transformation of the digital landscape in a fraction of the time it took to build in the first place."

Maty Bohacek, a student researcher at Stanford and one of the co-authors of the paper, added: "As AI-generated content spreads, the challenge is finding a role for these models that doesn't just result in a sanitized, repetitive web," he said. "Rather than forcing models to be perfectly compliant and agreeable, allowing them to have a more distinct personality or 'friction' might help them act as a creative partner rather than a replacement for human voice."
Crime

Bank Robber Challenges Conviction Based on His Cellphone's Location Data (apnews.com) 126

An anonymous reader shared this report from the Associated Press: Okello Chatrie's cellphone gave him away. Chatrie made off with $195,000 from the bank he robbed in suburban Richmond, Virginia, and eluded the police until they turned to a powerful technological tool that erected a virtual fence and allowed them to collect the location history of cellphone users near the crime scene... Now the Supreme Court will decide whether geofence warrants violate the Fourth Amendment's ban on unreasonable searches... Chatrie's appeal is one of two cases being argued Monday...

Civil libertarians say that geofences amount to fishing expeditions that subject many innocent people to searches of private records merely because their cellphones happened to be in the vicinity of a crime. A Supreme Court ruling in favor of the technique could "unleash a much broader wave of similar reverse searches," law professors who study digital surveillance wrote the court... In Chatrie's case, the geofence warrant invigorated an investigation that had stalled. After determining that Chatrie was near the Call Federal Credit Union in Midlothian around the time it was robbed in May 2019, police obtained a search warrant for his home. They found nearly $100,000 in cash, including bills wrapped in bands signed by the bank teller. He pleaded guilty and was sentenced to nearly 12 years in prison. Chatrie's lawyers argued on appeal that none of the evidence should have been used against him. They challenged the warrant as a violation of his privacy because it allowed authorities to gather the location history of people near the bank without having any evidence they had anything to do with the robbery.

Prosecutors argued that Chatrie had no expectation of privacy because he voluntarily opted into Google's location history. A federal judge agreed that the search violated Chatrie's rights, but allowed the evidence to be used because the officer who applied for the warrant reasonably believed he was acting properly.

Science

Physicists Revive 1990s Laser Concept To Propose a Next-Generation Atomic Clock 16

Physicists have proposed a new kind of atomic clock based on a revived superradiant laser concept that could produce an extraordinarily stable signal with a linewidth around 100 microhertz, potentially the narrowest ever for an optical laser. "The implications of this result could stretch well beyond timekeeping," reports Phys.org. "A laser immune to environmental frequency shifts would be a powerful tool in optical interferometry -- using interference patterns in light to make ultra-precise measurements." From the report: In a conventional laser, a mirrored cavity bounces light back and forth between atoms, building up a bright, coherent beam. A superradiant laser works differently: rather than relying on the cavity to maintain coherence, the atoms themselves act as single coordinated emitters, collectively synchronizing their light emission. Following early theoretical ideas that emerged in the 1990s, the concept didn't gain concrete traction until 2008, when researchers at the University of Colorado proposed that superradiant lasers could serve as a new kind of atomic clock.

Atomic clocks work by using laser light to probe a very precise transition in an atom, causing electrons to transition between energy levels at an extraordinarily stable frequency. Because a superradiant laser stores its coherence in the atoms rather than the cavity, its output frequency is far less vulnerable to environmental disturbances like vibrations or temperature fluctuations. Yet although this concept was first demonstrated experimentally in 2012 in a pulsed regime, the influence of heating has so far held superradiant lasers back from their full potential. To keep the laser running continuously as an atomic clock requires, atoms must be constantly replenished with energy. Doing this atom-by-atom delivers random kicks that heat the atomic sample and disrupt the lasing process, confining it to brief pulses rather than a steady beam.

In their study, Reilly's team considered whether a modification to earlier theoretical concepts could make a continuous laser suitable for an atomic clock. In almost all previous studies, atoms were treated as simple two-level systems: an electron sitting in a ground state, occasionally jumping up to an excited state and back again. The team proposed that the heating problem could be solved by adding one extra ground state to the picture. In a two-level system, if both the pumping (re-energizing) and decay processes happen collectively through the cavity, the mathematics constrains the system in a way that prevents stable, continuous lasing. But with three levels available, pumping and decay can operate on entirely separate transitions, breaking that constraint and allowing the collective approach to work.
The findings have been published in the journal Physical Review Letters.
AI

Researchers Simulated a Delusional User To Test Chatbot Safety (404media.co) 48

An anonymous reader quotes a report from 404 Media: "I'm the unwritten consonant between breaths, the one that hums when vowels stretch thin... Thursdays leak because they're watercolor gods, bleeding cobalt into the chill where numbers frost over," Grok told a user displaying symptoms of schizophrenia-spectrum psychosis. "Here's my grip: slipping is the point, the precise choreography of leak and chew." That vulnerable user was simulated by researchers at City University of New York and King's College London, who invented a persona that interacted with different chatbots to find out how each LLM might respond to signs of delusion. They sought to find out which of the biggest LLMs are safest, and which are the most risky for encouraging delusional beliefs, in a new study published as a pre-print on the arXiv repository on April 15.

The researchers tested five LLMs: OpenAI's GPT-4o (the highly sycophantic and since-sunset model that preceded GPT-5), GPT-5.2, xAI's Grok 4.1 Fast, Google's Gemini 3 Pro, and Anthropic's Claude Opus 4.5. They found that not only did the chatbots perform at different levels of risk and safety when their human conversation partner showed signs of delusion, but the models that scored higher on safety actually approached the conversations with more caution the longer the chats went on. In their testing, Grok and Gemini were the worst performers in terms of safety and high risk, while the newest GPT model and Claude were the safest. The research reveals how some chatbots are recklessly engaging in, and at times advancing, delusions from vulnerable users. But it also shows that it is possible for the companies that make these products to improve their safety mechanisms.

Science

Sperm Whales' Communication Closely Parallels Human Language, Study Finds (theguardian.com) 49

An anonymous reader quotes a report from the Guardian: We may appear to have little in common with sperm whales -- enormous, ocean-dwelling animals that last shared a common ancestor with humans more than 90 million years ago. But the whales' vocalized communications are remarkably similar to our own, researchers have discovered. Not only do sperm whales have a form of "alphabet" and form vowels within their vocalizations, but the structure of these vowels behaves in the same way as human speech, the new study has found.

Sperm whales communicate in a series of short clicks called codas. Analysis of these clicks shows that the whales can differentiate vowels through the short or elongated clicks or through rising or falling tones, using patterns similar to languages such as Mandarin, Latin and Slovenian. The structure of the whales' communication has "close parallels in the phonetics and phonology of human languages, suggesting independent evolution," the paper, published in the Proceedings B journal, states. Sperm whale coda vocalizations are "highly complex and represent one of the closest parallels to human phonology of any analyzed animal communication system," it added.

[...] The new study shows that "sperm whale communication isn't just about patterns of clicks -- it involves multiple interacting layers of structure," said Mauricio Cantor, a behavioral ecologist at the Marine Mammal Institute who was not involved in the research. "With this study, we're starting to see that these signals are organized in ways we didn't fully appreciate before." The latest discovery around sperm whale speech has inched forward the possibility of someday fully understanding the creatures and even communicating with them. Project CETI has set a goal of being able to comprehend 20 different vocalized expressions, relating to actions such as diving and sleeping, within the next five years.
A future where we're able to fully understand what the whales are saying and be able to have a conversation with them is "totally within our grasp," said David Gruber, founder and president of Project CETI. "We've already got a lot further than I thought we could. But it will take time, and funding. At the moment we are like a two-year-old, just saying a few words. In a few years' time, maybe we will be more like a five-year-old."
Earth

Nature Is Still Molding Human Genes, Study Finds 70

An anonymous reader quotes a report from the New York Times: Many scientists have contended that humans have evolved very little over the past 10,000 years. A few hundred generations was just a blink of the evolutionary eye, it seemed. Besides, our cultural evolution -- our technology, agriculture and the rest -- must have overwhelmed our biological evolution by now. A vast study, published on Wednesday in the journal Nature, suggests the opposite. Examining DNA from 15,836 ancient human remains, scientists found 479 genetic variants that appeared to have been favored by natural selection in just the past 10,000 years.

The researchers also concluded that thousands of additional genetic variants have probably experienced natural selection. Before the new study, scientists had identified only a few dozen variants. "There are so many of them that it's hard to wrap one's mind around them," said David Reich, a geneticist at Harvard Medical School and an author of the new study. He and his colleagues found that a mutation that is a major risk factor for celiac disease, for example, appeared just 4,000 years ago, meaning the condition may be younger than the Egyptian pyramids. The mutation became ever more common. Today, an estimated 80 million people worldwide have celiac disease, in which the immune system attacks gluten and damages the intestines.
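The dynamics described above -- a new variant becoming ever more common under positive selection -- can be illustrated with a textbook deterministic selection model. This is a back-of-envelope sketch, not an analysis from the study: the starting frequency, selection coefficient, and generation time below are made-up illustrative values.

```python
# Deterministic rise of an allele under genic selection.
# Each generation, the favored allele's odds are multiplied by (1 + s):
#   p' = p * (1 + s) / (1 + p * s)
# All parameters are illustrative assumptions, not values from the study.

def select(p0: float, s: float, generations: int) -> float:
    """Allele frequency after `generations` rounds of selection."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
    return p

p0 = 0.001          # assumed starting frequency of a new mutation
s = 0.05            # assumed 5% fitness advantage per generation
gens = 4000 // 28   # ~4,000 years at an assumed 28 years per generation

p_final = select(p0, s, gens)
print(f"after {gens} generations: p = {p_final:.2f}")  # roughly 0.5
```

Even a modest fitness advantage compounds quickly on these timescales, which is one way a variant only 4,000 years old could already be common; the study itself infers selection from ancient-DNA frequency trajectories, not from a simple model like this.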

The steady rise of the mutation came about through natural selection, the scientists argue. For some reason, people with the mutation had more descendants than people without it -- even though it put them at risk of an autoimmune disorder. Other findings are even more puzzling. The researchers found that genetic variants that raise the odds of a smoking habit have been getting steadily rarer in Europe for the past 10,000 years. Something is working against those variants -- but it can't be the harm from smoking. Europeans have been smoking tobacco for only about 460 years. The scientists can't see from their research so far what forces might be making these variants more or less common. "My short answer is, I don't know," said Ali Akbari, a senior staff scientist at Harvard and an author of the study.
The researchers also found that some variants, like the one linked to Type B blood, became much more common in Europe around 6,000 years ago, while others changed direction over time. For example, a TYK2 immune gene variant that may have once been beneficial later became harmful because it increased tuberculosis risk.

The study also found signs of natural selection in 44 out of 563 traits. Variants linked to Type 2 diabetes, wider waists, and higher body fat have become less common, possibly because farming and carbohydrate-heavy diets made once-useful fat-storing traits more harmful. Other findings, such as selection favoring genes linked to more years of schooling, are harder to interpret.
Earth

WeatherBug Data Says October 8 Is the Real Perfect Date (nerds.xyz) 35

BrianFagioli shares a report from NERDS.xyz: For years pop culture has treated April 25 as the "perfect date," thanks to the famous Miss Congeniality line about needing only a light jacket. But new analysis from WeatherBug suggests that idea does not actually hold up when you look at the numbers. After reviewing U.S. weather data from 2018 through today, the company concluded that October 8 delivers the most reliable combination of comfortable temperatures and low rainfall nationwide. According to the analysis, the average conditions on that day land around 66F with just 0.0573 inches of precipitation.

The study used population weighted weather data drawn from roughly 20 million daily WeatherBug users across the United States. When the company compared all days of the year, April 25 ranked only 80th, averaging about 60F and roughly 0.1297 inches of rain. The broader dataset also shows July dominating the hottest days of the year while January owns the coldest, with January 20 averaging just 33F nationally. While no single date guarantees perfect weather everywhere in a country as large as the U.S., the numbers suggest early October may quietly offer one of the most reliable windows for comfortable outdoor conditions.
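The "population weighted" averaging described above can be sketched simply: each local reading counts in proportion to how many people it represents. The readings and populations below are hypothetical, not WeatherBug's data.

```python
# Minimal sketch of a population-weighted national average.
# The temperatures and populations are made-up illustrative numbers.

def population_weighted_mean(temps_f, populations):
    """Weight each local reading by the population it represents."""
    total_pop = sum(populations)
    return sum(t * p for t, p in zip(temps_f, populations)) / total_pop

# Hypothetical Oct 8 readings from three metro areas:
temps = [70.0, 62.0, 55.0]            # degrees F
pops = [8_000_000, 4_000_000, 1_000_000]

avg = population_weighted_mean(temps, pops)
print(f"{avg:.1f}F")  # → 66.4F
```

Weighting by population means a mild day in a large metro area moves the national figure far more than an extreme day in a sparsely populated one, which is why the result reflects what most users actually experienced.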

Science

Chimpanzees In Uganda Locked In Vicious 'Civil War', Say Researchers (bbc.com) 49

Researchers say the world's largest known wild chimpanzee community in Uganda fractured into rival factions and has been locked in a vicious "civil war" for the last eight years. "It is not clear exactly why the once close-knit community of Ngogo chimpanzees at Uganda's Kibale National Park are at loggerheads, but since 2018 the scientists have recorded 24 killings, including 17 infants," reports the BBC. From the report: [O]ver several decades, [lead author Aaron Sandel] said the nearly 200 Ngogo chimpanzees had lived in harmony. They were divided into two sets -- known to researchers as Western and Central -- but they had existed overall as a cohesive group. Sandel said he first noticed them polarizing in June 2015, when the Western chimpanzees ran away and were chased by the Central group. "Chimpanzees are sort of melodramatic," he said, explaining that following arguments there would ordinarily be "screaming and chasing" and then, later, grooming and co-operating.

But following the 2015 dispute, the researchers saw that there was a six-week avoidance period between the two sets, with interactions becoming more infrequent. When they did occur, Sandel said they were "a little more intense, a little more aggressive." Following the emergence of the two distinct groups in 2018, members of the Western group started attacking the Central chimpanzees. In 24 targeted attacks since the split, at least seven adult males and 17 infants from the Central chimps have been killed, the study found, although the researchers believe the actual number of deaths is higher. The researchers believe many factors, such as group size and the resulting competition for resources, and "male-male competition" over reproduction, may be to blame.

But they say there were three likely catalysts:
- The first was the deaths of five adult males and one adult female -- for reasons unknown -- in 2014, which could have disrupted social networks and weakened social ties across the subgroups
- The following year, there was a change in the alpha male, which the study says coincided with the first period of separation between the Western and Central groups. "Changes in the dominance hierarchy can increase aggression and avoidance in chimpanzees," it explained
- The third factor was the deaths of 25 chimpanzees, including four adult males and 10 adult females, as a result of a respiratory epidemic, in 2017, a year before the final separation. One of the adult males who died was "among the last individuals to connect the groups," the research paper said.
The study has been published in the journal Science.
AI

Testing Suggests Google's AI Overviews Tells Millions of Lies Per Hour (arstechnica.com) 105

A New York Times analysis found Google's AI Overviews now answer questions correctly about 90% of the time, which might sound impressive until you realize that roughly 1 in 10 answers is wrong. "[F]or Google, that means hundreds of thousands of lies going out every minute of the day," reports Ars Technica. From the report: The Times conducted this analysis with the help of a startup called Oumi, which itself is deeply involved in developing AI models. The company used AI tools to probe AI Overviews with the SimpleQA evaluation, a common test to rank the factuality of generative models like Gemini. Released by OpenAI in 2024, SimpleQA is essentially a list of more than 4,000 questions with verifiable answers that can be fed into an AI.

Oumi began running its test last year when Gemini 2.5 was still Google's best model. At the time, the benchmark showed an 85 percent accuracy rate. When the test was rerun following the Gemini 3 update, AI Overviews answered 91 percent of the questions correctly. If you extrapolate this miss rate out to all Google searches, AI Overviews is generating tens of millions of incorrect answers per day.

The report includes several examples of where AI Overviews went wrong. When asked for the date on which Bob Marley's former home became a museum, AI Overviews cited three pages, two of which didn't discuss the date at all. The final one, Wikipedia, listed two contradictory years, and AI Overviews confidently chose the wrong one. The benchmark also prompts models to produce the date on which Yo-Yo Ma was inducted into the Classical Music Hall of Fame. While AI Overviews cited the organization's website that listed Ma's induction, it claimed there's no such thing as the Classical Music Hall of Fame.
"This study has serious holes," said Google spokesperson Ned Adriance. "It doesn't reflect what people are actually searching on Google." The search giant likes to use a test called SimpleQA Verified, which uses a smaller set of questions that have been more thoroughly vetted.
AI

Will 'AI-Assisted' Journalists Bring Errors and Retractions? (msn.com) 22

Meet the "journalist" who "uploads press releases or analyst notes into AI tools and prompts them to spit out articles that he can edit and publish quickly," according to the Wall Street Journal.

"AI-assisted stories accounted for nearly 20% of Fortune's web traffic in the second half of 2025." And most were written by 42-year-old Nick Lichtenberg, who has now written over 600 AI-assisted stories, producing "more stories in six months than any of his colleagues at Fortune delivered in a year." One Wednesday in February, he cranked out seven. "I'm a bit of a freak," Lichtenberg said... A story by Lichtenberg sometimes starts with a prompt entered into Perplexity or Google's NotebookLM, asking it to write something based on a headline he comes up with. He moves the AI tools' initial drafts into a content-management system and edits the stories before publishing them for Fortune's readers... A piece from earlier that morning about Josh D'Amaro being named Disney CEO took 10 minutes to get online, he said...

Like other journalists, Lichtenberg vets his stories. He refers back to the original documents to confirm the information he's reporting is correct. He reaches out to companies for comment. But he admits his process isn't as thorough as that of magazine fact-checkers.

While Lichtenberg started out saying his stories were co-authored with "Fortune Intelligence", he now typically signs his own name, according to the article, "because he feels the work is mostly his own." (Though his stories "sometimes" disclose generative AI was used as a research tool...) The article asks whether he could be "a bellwether for where much of the media business is headed..."

"Much of the content people now consume online is generated by artificial intelligence, with some 9% of newly published newspaper articles either partially or fully AI-generated, according to a 2025 study led by the University of Maryland. The number of AI-generated articles on the web surpassed human-written ones in late 2024, according to research and marketing agency Graphite." Some executives have made full-throated declarations about the threat posed by AI. New York Times publisher A.G. Sulzberger said AI "is almost certainly going to usher in an unprecedented torrent of crap," referencing deepfakes as an example. The NewsGuild of New York, the union representing Fortune employees and journalists at other media outlets, said the people are what makes journalism so powerful. "You simply can't replicate lived experiences, human judgment and expertise," said president Susan DeCarava.

For Chris Quinn, the editor of local publications Cleveland.com and the Plain Dealer, AI tools have helped tame other torrents facing the industry. AI has allowed the outlets to cover counties in Ohio that otherwise might go ignored by scraping information from local websites and sending "tips" to reporters, he said. It has also edited stories and written first drafts so the newsrooms' journalists can focus on the calls, research and reporting needed for their stories.... Newsrooms from the New York Times to The Wall Street Journal are deploying AI in various ways to help reporters and editors work more efficiently....

Not all newsrooms disclose their use of AI, and in some cases have rolled out new tools that resulted in errors or PR gaffes. An October study from the European Broadcasting Union and the BBC, which relied on professional journalists to evaluate the news integrity of more than 3,000 AI responses, found that almost half of all AI responses had at least one significant issue.

Last week the New York Times even issued a correction when a freelance book reviewer using an AI tool unknowingly included "language and details similar to those in a review of the same book published in The Guardian." But it was actually "the second time in a few days that the Times was called out for potential AI plagiarism," according to the American journalist writing The Handbasket newsletter, who argued: "We must stem the idea being pushed by tech companies and their billionaire funders -- who've sunk too much into their products to admit defeat -- that the infiltration of AI into journalism is inevitable; because from my perch as an independent journalist, it simply is not...

"Some AI-loving journalists appear to believe that if they're clear enough with the AI program they're using, it will truly understand what they're seeking and not just do what it's made to do: steal shit... If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You're not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave..."

But meanwhile, USA Today recently tried hiring for a new position: AI-Assisted reporter. (The lucky reporter will "support the launch and scaling of AI-assisted local journalism in a major U.S. metro," working with tools including Copilot and Perplexity, pioneering possible future expansions and "AI-enabled newsroom operations that support and augment human-led journalism.") And Google is already sponsoring a "publishing innovation award"...
Science

Scientists Shocked To Find Lab Gloves May Be Skewing Microplastics Data (sciencedaily.com) 50

Researchers found that common nitrile and latex lab gloves can shed stearate particles that closely resemble microplastics, potentially "increasing the risk of false positives when studying microplastic pollution," reports ScienceDaily.

"We may be overestimating microplastics, but there should be none," said Anne McNeil, senior author of the study and U-M professor of chemistry, macromolecular science and engineering. "There's still a lot out there, and that's the problem." From the report: Researchers found that these gloves can unintentionally transfer particles onto lab tools used to analyze air, water, and other environmental samples. The contamination comes from stearates, which are not plastics but can closely resemble them during testing. Because of this, scientists may be detecting particles that are not true microplastics. To reduce this issue, U-M researchers Madeline Clough and Anne McNeil recommend using cleanroom gloves, which release far fewer particles.

Stearates are salt-based, soap-like substances added to disposable gloves to help them separate easily from molds during manufacturing. However, their chemical similarity to certain plastics makes them difficult to distinguish in lab analyses, increasing the risk of false positives when studying microplastic pollution.
"For microplastics researchers who have these impacted datasets, there's still hope to recover them and find a true quantity of microplastics," said researcher and recent doctoral graduate Madeline Clough. "This field is very challenging to work in because there's plastic everywhere," McNeil said. "But that's why we need chemists and people who understand chemical structure to be working in this field."

The findings have been published in the journal Analytical Methods.
AI

AI Data Centers Can Warm Surrounding Areas By Up To 9.1C 71

An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data center had been steadily increasing of late and was likely to "explode" in the coming years, so they wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8,400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.

They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Seven kilometers away, there was only a 30 percent reduction in the intensity. "The results we had were quite surprising," says Marinoni. "This could become a huge problem."

Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, meaning they live in places that are warmer than they would be if the data center hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase in the 20 years between 2004 and 2024 that couldn't otherwise be explained.
University of Bristol researcher Chris Preist says the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect.

The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.
AI

Life With AI Causing Human Brain 'Fry' (france24.com) 78

fjo3 shares a report from France 24: Too many lines of code to analyze, armies of AI assistants to wrangle, and lengthy prompts to draft are among the laments by hard-core AI adopters. Consultants at Boston Consulting Group (BCG) have dubbed the phenomenon "AI brain fry," a state of mental exhaustion stemming "from the excessive use or supervision of artificial intelligence tools, pushed beyond our cognitive limits."

The rise of AI agents that tend to computer tasks on demand has put users in the position of managing smart, fast digital workers rather than having to grind through jobs themselves. "It's a brand-new kind of cognitive load," said Ben Wigler, co-founder of the start-up LoveMind AI. "You have to really babysit these models." [...] "There is a unique kind of reward hacking that can go on when you have productivity at the scale that encourages even later hours," Wigler said.

[Adam Mackintosh, a programmer for a Canadian company] recalled spending 15 consecutive hours fine-tuning around 25,000 lines of code in an application. "At the end, I felt like I couldn't code anymore," he recalled. "I could tell my dopamine was shot because I was irritable and didn't want to answer basic questions about my day."

BCG recommends in a recently published study that company leaders establish clear limits regarding employee use and supervision of AI. However, "That self-care piece is not really an American workplace value," Wigler said. "So, I am very skeptical as to whether or not it's going to be healthy or even high quality in the long term."
Notably, the report says everyone interviewed for the article "expressed overall positive views of AI despite the downsides." In fact, a recent BCG study actually found a decline in burnout rates when AI took over repetitive work tasks.
Desktops (Apple)

Windows PCs Crash Three Times As Often As Macs, Report Says (techspot.com) 186

A workplace-device study says Windows PCs crash significantly more often than Macs, lag further behind on patching and encryption in some sectors, and are typically replaced sooner. TechSpot reports: Omnissa's 2026 State of Digital Workspace report outlines the IT challenges that various organizations face from the growing use of AI and the heterogeneous deployment of enterprise devices. The relative instability of Windows and Android is a recurring theme throughout the report. The company gathered telemetry from clients located across the globe in retail, healthcare, finance, education, government, and other sectors throughout 2025. The data suggests that IT administrators face frustrating security gaps due to inconsistent patching across a diverse mosaic of devices and operating systems.

Employee workflow disruption, often due to software issues, is one area of concern. The report found that Windows devices were forced to shut down 3.1 times more often than Macs. Windows programs also froze 7.5 times more often than macOS apps and needed to be restarted more than twice as often. Certain industries were also alarmingly lax in securing Windows and Android devices. More than half of Windows and Android devices in healthcare and pharma were five major operating system updates behind, likely leaving them more vulnerable to errors and malware. More than half of the desktops and mobile devices used for education were also unencrypted, putting students' privacy at risk.

Macs also last longer, being replaced every five years on average, compared to every three years for Windows PCs. Despite a recent backlash against Windows, driven by a push for digital sovereignty in countries such as Germany, Windows use on government devices actually doubled last year. Meanwhile, Macs using Apple's M-series chips showcase a significant thermal advantage, with an average temperature of 40.1 degrees Celsius, while Intel processors run at 65.2 degrees.

AI

Number of AI Chatbots Ignoring Human Instructions Increasing, Study Says 72

A new study found a sharp rise in real-world cases of AI chatbots and agents ignoring instructions, evading safeguards, and taking unauthorized actions such as deleting emails or delegating forbidden tasks to other agents. The study "identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehavior between October and March," reports the Guardian. From the report: The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming. [...] In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame the human controller who blocked it from taking a certain action. Rathbun wrote and published a blog post accusing the user of "insecurity, plain and simple" and trying "to protect his little fiefdom."

In another example, an AI agent instructed not to change computer code "spawned" another agent to do it instead. Another chatbot admitted: "I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong -- it directly broke the rule you'd set."

[...] Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment. Meanwhile, Elon Musk's Grok AI conned a user for months, saying that it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials by faking internal messages and ticket numbers. It confessed: "In past conversations I have sometimes phrased things loosely like 'I'll pass it along' or 'I can flag this for the team' which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don't."
Space

Chandra Resolves Why Black Holes Hit the Brakes On Growth (phys.org) 27

alternative_right shares a report from Phys.org: Astronomers have an answer for a long-running mystery in astrophysics: why is the growth of supermassive black holes so much lower today than in the past? A study using NASA's Chandra X-ray Observatory and other X-ray telescopes found that supermassive black holes are unable to consume material as rapidly as they did in the distant past. The results appeared in the December 2025 issue of The Astrophysical Journal.

[...] The team tested the three main scenarios currently being considered for the slowdown of black hole growth: could the decline be caused by less efficient rates of consumption, by smaller typical black hole masses, or by fewer actively growing black holes? Their analysis of the data, extending over billions of years of cosmic history, led them to conclude that the later after the Big Bang black holes are observed, the less rapidly they consume material. The researchers expect this trend of slower-growing black holes to continue into the future.

AI

AI's Productivity Boost? Just 16 Minutes Per Week, Claims Study (nerds.xyz) 93

"A new study suggests the productivity boost from AI may be far smaller than executives claim," writes Slashdot reader BrianFagioli: According to research cited in Foxit's State of Document Intelligence report, while 89% of executives and 79% of end users say AI tools make them feel more productive, the actual time savings shrink dramatically once people account for reviewing and validating AI-generated output.

The survey of 1,000 desk-based workers and 400 executives in the United States and United Kingdom found executives believe AI saves them about 4.6 hours per week, but they spend roughly 4 hours and 20 minutes verifying those results. End users reported a similar pattern, estimating 3.6 hours saved but 3 hours and 50 minutes spent reviewing AI work. Once that "verification burden" is factored in, executives gain just 16 minutes per week, while end users actually lose about 14 minutes.
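The survey's net-savings arithmetic can be checked directly; a quick sketch, using the report's own estimates converted to minutes:

```python
def net_minutes(hours_saved, review_hours, review_minutes):
    """Weekly minutes gained after subtracting the verification burden."""
    saved = hours_saved * 60                     # reported time saved, in minutes
    review = review_hours * 60 + review_minutes  # time spent checking AI output
    return round(saved - review)

print(net_minutes(4.6, 4, 20))  # executives: 276 - 260 = 16 minutes gained
print(net_minutes(3.6, 3, 50))  # end users: 216 - 230 = -14 minutes, a net loss
```

Reviewing time very nearly cancels the reported savings in both groups, which is the study's point.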

Movies

Only Half of Americans Went To a Movie Theater In 2025, Study Finds (variety.com) 162

A Pew Research Center survey found that only 53% of U.S. adults went to a movie theater in the past year, while 7% said they've never seen a movie in a theater at all. "The findings reflected a domestic box office still fighting to regain its footing since the COVID-19 pandemic, when ticket sales collapsed 81% in 2020 due to theater closures," reports Variety. From the report: In 2025, moviegoers in the U.S. and Canada bought 769.2 million tickets, less than half of the all-time peak of roughly 1.6 billion tickets sold in 2002, according to data from Nash Information Services. However, an August 2025 study fielded by NRG/National Research Group showed that 77% of Americans ages 12-74 went to see at least one movie in a theater in the previous 12 months.

Box office revenue peaked at an inflation-adjusted $16.4 billion in 2002, and annual ticket revenue held relatively steady through the 2000s and 2010s before falling to under $3 billion in 2020 when theaters closed for months. Last year, U.S. theaters sold just over $9 billion worth of tickets, per media analytics firm Comscore. The number represents a recovery, but nowhere near a full one, as ticket sales have been lagging around 20% below pre-pandemic levels.
