AI

Cornell Researchers Develop Invisible Light-Based Watermark To Detect Deepfakes

Cornell University researchers have developed an "invisible" light-based watermarking system that embeds unique codes into the physical light that illuminates the subject during recording, allowing any camera to capture authentication data without special hardware. By comparing these coded light patterns against recorded footage, analysts can spot deepfake manipulations, offering a more resilient verification method than traditional file-based watermarks. TechSpot reports: Programmable light sources such as computer monitors, studio lighting, or certain LED fixtures can be embedded with coded brightness patterns using software alone. Standard non-programmable lamps can be adapted by fitting them with a compact chip -- roughly the size of a postage stamp -- that subtly fluctuates light intensity according to a secret code. The embedded code consists of tiny variations in lighting frequency and brightness that are imperceptible to the naked eye. Michael explained that these fluctuations are designed based on human visual perception research. Each light's unique code effectively produces a low-resolution, time-stamped record of the scene under slightly different lighting conditions. [Abe Davis, an assistant professor] refers to these as code videos.

"When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos," Davis said. "And if someone tries to generate fake video with AI, the resulting code videos just look like random variations." By comparing the coded patterns against the suspect footage, analysts can detect missing sequences, inserted objects, or altered scenes. For example, content removed from an interview would appear as visual gaps in the recovered code video, while fabricated elements would often show up as solid black areas. The researchers have demonstrated the use of up to three independent lighting codes within the same scene. This layering increases the complexity of the watermark and raises the difficulty for potential forgers, who would have to replicate multiple synchronized code videos that all match the visible footage.
The concept is called noise-coded illumination and was presented on August 10 at SIGGRAPH 2025 in Vancouver, British Columbia.
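The verification step lends itself to a toy illustration. This is not the researchers' actual pipeline, just a minimal sketch of the idea: a secret pseudorandom code modulates light brightness at an imperceptible depth, and correlating a recording's per-frame brightness against that code separates genuinely lit footage from footage that never carried the code. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical secret code: pseudorandom per-frame brightness offsets,
# far below the contrast a viewer would notice.
n_frames = 2000
code = rng.choice([-1.0, 1.0], size=n_frames)

# Simulated brightness over time: ordinary scene variation plus the coded light.
scene = 100.0 + rng.normal(0, 1.0, size=n_frames)
depth = 0.5  # imperceptible modulation depth
recorded = scene + depth * code

def code_correlation(frames, code):
    """Correlate a brightness signal with the known code.

    Genuine footage recorded under the coded light yields a correlation
    near the modulation depth; footage with no embedded code (e.g. an
    AI-generated fake) correlates near zero."""
    frames = frames - frames.mean()
    return float(np.dot(frames, code) / len(code))

genuine = code_correlation(recorded, code)
fake = code_correlation(100.0 + rng.normal(0, 1.0, n_frames), code)
print(genuine, fake)  # genuine lands near depth (0.5); fake near 0
```

The same principle extends to the paper's "code videos": each pixel is demodulated this way over time, so edits show up as regions whose correlation with the known code breaks down.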
Medicine

Low Dose of Lithium Reverses Alzheimer's Symptoms In Mice (newscientist.com)

An anonymous reader quotes a report from New Scientist: People with Alzheimer's disease have lower levels of lithium in their brains, and giving lithium to mice with symptoms of the condition reverses cognitive decline. Together, the findings suggest that lithium deficiency could be a driver of Alzheimer's disease and that low-dose lithium medications could help treat it. [...] [Bruce Yankner at Harvard University] and his colleagues analyzed levels of 27 metals in the brains of 285 people after they died, 94 of whom were diagnosed with Alzheimer's disease and 58 of whom had mild cognitive impairment, a precursor of the condition. The other participants showed no signs of cognitive decline at the time of their death.

Lithium levels in the prefrontal cortex -- a brain region crucial for memory and decision-making -- were about 36 percent lower, on average, in people with Alzheimer's disease than in those without any cognitive decline. For those with mild cognitive impairment, lithium levels were about 23 percent lower. "We suspect that's due to a number of environmental factors: dietary intake, genetics and so forth," says Yankner. Yet there seemed to be another reason, too. In those with Alzheimer's disease, clumps of proteins called amyloid plaques contained nearly three times the amount of lithium as plaque-free regions of their brain. "Lithium becomes sequestered in these plaques," says Yankner. "We have two things going on. There is impaired uptake of lithium [in the brain] very early on and then, as the disease progresses, the lithium that is in the brain is further diminished by being bound to amyloid."

To understand how this influences cognition, the team genetically engineered 22 mice to develop Alzheimer's-like symptoms and reduced their lithium intake by 92 percent. After about eight months, the animals performed significantly worse on multiple memory tests compared with 16 mice on a standard diet. It took lithium-deficient mice around 10 seconds longer to find a hidden platform in a water maze, for example, even after six days of training. Their brains also contained nearly two and a half times as many amyloid plaques. Genetic analysis of brain cells from the lithium-deficient mice showed increased activity in genes related to neurodegeneration and Alzheimer's. They also had more brain inflammation and their immune cells were less able to clear away amyloid plaques, changes also seen in people with Alzheimer's disease.

The team then screened different lithium compounds for their ability to bind to amyloid and found that lithium orotate -- a naturally occurring compound in the body formed by combining lithium with orotic acid -- appeared to be the least likely to get trapped within plaques. Nine months of treatment with this compound significantly reduced plaques in mice with Alzheimer's-like symptoms, and they also performed as well on memory tests as normal mice. These results suggest lithium orotate could be a promising treatment for Alzheimer's.
The findings have been published in the journal Nature.
Privacy

'Facial Recognition Tech Mistook Me For Wanted Man' (bbc.co.uk)

Bruce66423 shares a report from the BBC: A man who is bringing a High Court challenge against the Metropolitan Police after live facial recognition technology wrongly identified him as a suspect has described it as "stop and search on steroids." Shaun Thompson, 39, was stopped by police in February last year outside London Bridge Tube station. Privacy campaign group Big Brother Watch said the judicial review, due to be heard in January, was the first legal case of its kind against the "intrusive technology." The Met, which announced last week that it would double its live facial recognition technology (LFR) deployments, said it was removing hundreds of dangerous offenders and remained confident its use is lawful. LFR maps a person's unique facial features, and matches them against faces on watch-lists. [...]
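In outline, systems like LFR reduce each face to an embedding vector and score it against every watch-list entry; the match threshold directly sets the trade-off between missed suspects and false matches like the one Mr Thompson describes. A minimal sketch with made-up three-dimensional vectors (real systems use learned embeddings with hundreds of dimensions, and the threshold value here is purely illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_watchlist(probe, watchlist, threshold=0.8):
    """Return (best_id, score) if any watch-list face clears the threshold,
    else (None, score). A lower threshold catches more suspects but also
    produces more false matches against innocent passers-by."""
    best_id, best = None, -1.0
    for face_id, emb in watchlist.items():
        s = cosine_similarity(probe, emb)
        if s > best:
            best_id, best = face_id, s
    return (best_id, best) if best >= threshold else (None, best)

# Toy embeddings: similar vectors stand in for similar-looking faces.
watchlist = {"suspect_A": [0.9, 0.1, 0.4], "suspect_B": [0.1, 0.95, 0.2]}
print(match_watchlist([0.88, 0.12, 0.42], watchlist))  # matches suspect_A
print(match_watchlist([0.3, 0.3, 0.9], watchlist))     # below threshold: no match
```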

Mr Thompson said his experience of being stopped had been "intimidating" and "aggressive." "Every time I come past London Bridge, I think about that moment. Every single time." He described how he had been returning home from a shift in Croydon, south London, with the community group Street Fathers, which aims to protect young people from knife crime. As he passed a white van, he said police approached him and told him he was a wanted man. "When I asked what I was wanted for, they said, 'that's what we're here to find out'." He said officers asked him for his fingerprints, but he refused, and he was let go only after about 30 minutes, after showing them a photo of his passport.

Mr Thompson says he is bringing the legal challenge because he is worried about the impact LFR could have on others, particularly if young people are misidentified. "I want structural change. This is not the way forward. This is like living in Minority Report," he said, referring to the science fiction film where technology is used to predict crimes before they're committed. "This is not the life I know. It's stop and search on steroids. "I can only imagine the kind of damage it could do to other people if it's making mistakes with me, someone who's doing work with the community."
Bruce66423 comments: "I suspect a payout of 10,000 pounds for each false match that is acted on would probably encourage more careful use, perhaps with a second payout of 100,000 pounds if the same person is victimized again."
Privacy

Despite Breach and Lawsuits, Tea Dating App Surges in Popularity (www.cbc.ca)

The women-only app Tea now "faces two class action lawsuits filed in California" in response to a recent breach, reports NPR — even as the company is now boasting it has more than 6.2 million users.

A spokesperson for Tea told the CBC it's "working to identify any users whose personal information was involved" in a breach of 72,000 images (including 13,000 verification photos and images of government IDs) and a later breach of 1.1 million private messages. Tea said they will be offering those users "free identity protection services." The company said it removed the ID requirement in 2023, but data that was stored before February 2024, when Tea migrated to a more secure system, was accessed in the breach... [Several sites have pointed out Tea's current privacy policy is telling users selfies are "deleted immediately."]

Tea was reportedly intended to launch in Canada on Friday, according to information previously posted on the App Store, but as of this week the launch date is now in February 2026. Tea didn't respond to CBC's questions about the apparent delay. Yet even amid the current turmoil, Tea's waitlist has ballooned to 1.5 million women, all eager to join, the company posted on Wednesday. A day later, Tea posted in its Instagram stories that it had approved "well over" 800,000 women into the app that day alone.

So, why is it so popular, despite the drama and risks?

Tea tapped into a perceived weakness of other dating apps, according to an associate health studies professor at Ontario's Western University interviewed by the CBC, who thinks users should avoid Tea, at least until its security is restored.

Tech blogger John Gruber called the incident "yet another data point for the argument that any 'private messaging' feature that doesn't use E2EE isn't actually private at all." (And later Gruber notes Tea's apparent absence at the top of the charts in Google's Play Store. "I strongly suspect that, although Google hasn't removed Tea from the Play Store, they've delisted it from discovery other than by searching for it by name or following a direct link to its listing.")

Besides anonymous discussions about specific men, Tea also allows its users to perform background and criminal record checks, according to NPR, as well as reverse image searches. But the recent breach, besides threatening the safety of its users, also "laid bare the anonymous, one-sided accusations against the men in their dating pools." The CBC points out there's a men's rights group on Reddit now urging civil lawsuits against Tea as part of a plan to get the app shut down. And "Cleveland lawyer Aaron Minc, who specializes in cases involving online defamation and harassment, told The Associated Press that his firm has received hundreds of calls from people upset about what's been posted about them on Tea."

Yet in response to Tea's latest Instagram post, "The comments were almost entirely from people asking Tea to approve them, so they could join the app."
United States

'Chuck E. Cheese' Handcuffed and Arrested in Florida, Charged with Using a Stolen Credit Card (nbcnews.com)

NBC News reports: Customers watched in disbelief as Florida police arrested a Chuck E. Cheese employee — in costume portraying the pizza-hawking rodent — and accused him of using a stolen credit card, officials said Thursday.... "I grabbed his right arm while giving the verbal instruction, 'Chuck E, come with me Chuck E,'" Tallahassee police officer Jarrett Cruz wrote in the report.
After a child's birthday party in June at Chuck E. Cheese, the child's mother had "spotted fraudulent charges at stores she doesn't frequent," according to the article — and she recognized a Chuck E. Cheese employee when reviewing a store's security footage. But when a police officer interviewed the employee — and then briefly left the restaurant — they returned to discover that their suspect "was gone but a Chuck E. Cheese mascot was now in the restaurant."

Police officer Cruz told the mascot "not to make a scene" before the officer and his partner "exerted minor physical effort" to handcuff him, police said... The officers read the mouse his Miranda warnings before he insisted he never stole anyone's credit card, police said.... Officers found the victim's Visa card in [the costume-wearing employee's] left pocket and a receipt from a smoke shop where one of the fraudulent purchases was made, police said.
He was booked on charges of "suspicion of larceny, possession of another person's ID without consent and fraudulent use of a credit card two or more times," according to the article. He was released after posting a $6,500 bond.

Thanks to long-time Slashdot reader destinyland for sharing the news.
Crime

How Gmail Server Evidence Led to a Jury Verdict of $23.2 Million For Wrongful Death (andrewwatters.com)

Long-time Slashdot reader wattersa is a lawyer in Redwood City, California, and a Slashdot reader since 1998. In 2022 he shared the remarkable story of a three-year missing person investigation that was ultimately solved with a subpoena to Google. A murder victim appeared to have sent an email at a time which would exonerate the chief suspect. But a closer inspection of that email's IP addresses revealed it was actually sent from a hotel where the suspect was staying. ("Although Google does not include the originating IP address in the email headers, it turns out that they retain the IP address for some unknown length of time...")

Today wattersa brings this update: The case finally went to trial in July 2025, where I testified about the investigation along with an expert witness on computer networking. The jury took three hours to return a verdict against the victim's husband for wrongful death in the amount of $23.2 million, with a special finding that he caused the death of his wife.

The defendant is a successful mechanical engineer at an energy company, but is walking free because he is Canadian and no one can prosecute him in the U.S., since Taiwan and the U.S. don't have an extradition treaty with each other.

It was an interesting case and I look forward to using it as a model in other missing person cases.

The Internet

Scammers Use Google Ads To Inject Phony Help Lines On Apple, Microsoft Sites (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Tech support scammers have devised a method to inject their fake phone numbers into webpages when a target's web browser visits official sites for Apple, PayPal, Netflix, and other companies. The ruse, outlined in a post on Wednesday from security firm Malwarebytes, threatens to trick users into calling the malicious numbers even when they think they're taking measures to prevent falling for such scams. One of the more common pieces of security advice is to carefully scrutinize the address bar of a browser to ensure it's pointing to an organization's official website. The ongoing scam is able to bypass such checks.

The unknown actors behind the scam begin by buying Google ads that appear at the top of search results for Microsoft, Apple, HP, PayPal, Netflix, and other sites. While Google displays only the scheme and host name of the site the ad links to (for instance, https://www.microsoft.com/), the ad appends parameters to the path to the right of that address. When a target clicks on the ad, it opens a page on the official site. The appended parameters then inject fake phone numbers into the page the target sees.

Google requires ads to display the official domain they link to, but the company allows parameters to be added to the right of it that aren't visible. The scammers take advantage of this by adding strings to the right of the hostname: the parameters aren't displayed in the Google ad, so a target has no obvious reason to suspect anything is amiss, yet once the ad is clicked, they inject a fake phone number into the page served from the correct hostname. The technique works on most browsers and against most websites. Malwarebytes.com was among the sites affected until recently, when the site began filtering out the malicious parameters.
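The defense Malwarebytes adopted, filtering out malicious parameters, amounts to refusing to reflect unrecognized query strings into page content. A minimal sketch of that idea, with a hypothetical whitelist, URL, and parameter names:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical whitelist: the only query parameters this site actually uses.
# Anything else (e.g. an attacker-appended phone number) is dropped before
# any page template ever sees it.
ALLOWED_PARAMS = {"q", "lang", "page"}

def strip_untrusted_params(url):
    """Rebuild a URL keeping only whitelisted query parameters."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in ALLOWED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

ad_url = "https://www.example.com/support?q=help&phone=1-800-555-0199&banner=CallNow"
print(strip_untrusted_params(ad_url))
# -> https://www.example.com/support?q=help
```

The key design choice is a whitelist rather than a blacklist: scammers can invent endless new parameter names, but a site knows the finite set it legitimately accepts.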

Education

'Ghost' Students are Enrolling in US Colleges Just to Steal Financial Aid (apnews.com)

Last week the U.S. Education Department announced that "the rate of fraud through stolen identities has reached a level that imperils the federal student aid programs."

Or, as the Associated Press suggests: Online classes + AI = financial aid fraud. "In some cases, professors discover almost no one in their class is real..." Fake college enrollments have been surging as crime rings deploy "ghost students" — chatbots that join online classrooms and stay just long enough to collect a financial aid check... Students get locked out of the classes they need to graduate as bots push courses over their enrollment limits.

And victims of identity theft who discover loans fraudulently taken out in their names must go through months of calling colleges, the Federal Student Aid office and loan servicers to try to get the debt erased. [Last week], the U.S. Education Department introduced a temporary rule requiring students to show colleges a government-issued ID to prove their identity... "The rate of fraud through stolen identities has reached a level that imperils the federal student aid program," the department said in its guidance to colleges.

An Associated Press analysis of fraud reports obtained through a public records request shows California colleges in 2024 reported 1.2 million fraudulent applications, which resulted in 223,000 suspected fake enrollments. Other states are affected by the same problem, but with 116 community colleges, California is a particularly large target. Criminals stole at least $11.1 million in federal, state and local financial aid from California community colleges last year that could not be recovered, according to the reports... Scammers frequently use AI chatbots to carry out the fraud, targeting courses that are online and allow students to watch lectures and complete coursework on their own time...

Criminal cases around the country offer a glimpse of the schemes' pervasiveness. In the past year, investigators indicted a man accused of leading a Texas fraud ring that used stolen identities to pursue $1.5 million in student aid. Another person in Texas pleaded guilty to using the names of prison inmates to apply for over $650,000 in student aid at colleges across the South and Southwest. And a person in New York recently pleaded guilty to a $450,000 student aid scam that lasted a decade.

Fortune found one community college that "wound up dropping more than 10,000 enrollments representing thousands of students who were not really students," according to the school's president. The scope of the ghost-student plague is staggering. Jordan Burris, vice president at identity-verification firm Socure and former chief of staff in the White House's Office of the Federal Chief Information Officer, told Fortune more than half the students registering for classes at some schools have been found to be illegitimate. Among Socure's client base, between 20% and 60% of student applicants are ghosts... At one college, more than 400 different financial-aid applications could be traced back to a handful of recycled phone numbers. "It was a digital poltergeist effectively haunting the school's enrollment system," said Burris.

The scheme has also proved incredibly lucrative. According to a Department of Education advisory, about $90 million in aid was doled out to ineligible students, and some $30 million was traced to dead people whose identities were used to enroll in classes. The issue has become so dire that the DOE announced this month it had found nearly 150,000 suspect identities in federal student-aid forms and is now requiring higher-ed institutions to validate the identities of first-time applicants for Free Application for Federal Student Aid (FAFSA) forms...

Maurice Simpkins, president and cofounder of AMSimpkins, says he has identified international fraud rings operating out of Japan, Vietnam, Bangladesh, Pakistan, and Nairobi that have repeatedly targeted U.S. colleges... In the past 18 months, schools blocked thousands of bot applicants because they originated from the same mailing address; had hundreds of similar emails with a single-digit difference, or had phone numbers and email addresses that were created moments before applying for registration.
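Several of the screening heuristics described above, such as flagging batches of emails that differ by a single digit, reduce to simple normalization and grouping. A toy sketch of the single-digit-difference check, with invented applicant data and a made-up threshold:

```python
import re
from collections import defaultdict

def digit_mask(email):
    """Normalize an address by masking digits, so 'user01@x.com' and
    'user07@x.com' collapse to the same key ('user##@x.com')."""
    return re.sub(r"\d", "#", email.lower())

def flag_suspicious(applications, threshold=3):
    """Flag groups of applicants whose emails differ only in their digits."""
    groups = defaultdict(list)
    for app in applications:
        groups[digit_mask(app["email"])].append(app["email"])
    return {k: v for k, v in groups.items() if len(v) >= threshold}

# Hypothetical applicant data: five near-identical addresses plus one genuine one.
apps = [{"email": f"ghoststudent{i:02d}@example.com"} for i in range(5)]
apps += [{"email": "real.person@example.com"}]
print(flag_suspicious(apps))
# flags the five ghoststudentNN addresses; the lone real address passes
```

Real fraud screens layer many such signals (shared mailing addresses, phone numbers created moments before registration, IP reputation) rather than relying on any single one.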

Fortune shares this story from the higher education VP at IT consulting firm Voyatek. "One of the professors was so excited their class was full, never before being 100% occupied, and thought they might need to open a second section. When we worked with them as the first week of class was ongoing, we found out they were not real people."
ISS

NASA Delays Commercial Crew Launch To Assess ISS Air Leak (cbsnews.com)

NASA and Axiom Space have indefinitely delayed the Axiom-4 launch to the International Space Station due to concerns about a persistent air leak in the Russian PrK vestibule of the aging Zvezda module. "The PrK serves as a passageway between the station's Zvezda module and spacecraft docked at its aft port," notes CBS News. From the report: In a blog post, NASA said cosmonauts aboard the station "recently performed inspections of the pressurized module's interior surfaces, sealed some additional areas of interest, and measured the current leak rate. Following this effort, the segment now is holding pressure." The post went on to say the Axiom-4 delay will provide "additional time for NASA and (the Russian space agency) Roscosmos to evaluate the situation and determine whether any additional troubleshooting is necessary."

Launched in July 2000 atop a Russian Proton rocket, Zvezda was the third module to join the growing space station, providing a command center for Russian cosmonauts, crew quarters, the aft docking port and two additional ports now occupied by airlock and research modules. The leakage was first noticed in 2019, and has been openly discussed ever since by NASA during periodic reviews and space station news briefings. The leak rate has varied, but has stayed in the neighborhood of 1 to 2 pounds per day. "The station is not young," astronaut Mike Barratt said last November during a post-flight news conference. "It's been up there for quite a while, and you expect some wear and tear, and we're seeing that in the form of some cracks that have formed." The Russians have made a variety of attempts to patch a suspect crack and other possible sources of leakage, but air has continued to escape into space.

In November, Bob Cabana, a former astronaut and NASA manager who chaired the agency's ISS Advisory Committee, said U.S. and Russian engineers "don't have a common understanding of what the likely root cause is, or the severity of the consequences of these leaks." "The Russian position is that the most probable cause of the PrK cracks is high cyclic fatigue caused by micro vibrations," Cabana said. "NASA believes the PrK cracks are likely multi-causal, including pressure and mechanical stress, residual stress, material properties and environmental exposures. The Russians believe that continued operations are safe, but they can't prove to our satisfaction that they are, and the US believes that it's not safe, but we can't prove that to the Russian satisfaction that that's the case."

As an interim step, the hatch leading to the PrK and the station's aft docking compartment is closed during daily operations and only opened when the Russians need to unload a visiting Progress cargo ship. And as an added precaution on NASA's part, whenever the hatch to the PrK and docking compartment is open, a hatch between the Russian and U.S. segments of the station is closed. "We've taken a very conservative approach to close a hatch between the US side and the Russian side during those time periods," Barratt said. "It's not a comfortable thing, but it is the best agreement between all the smart people on both sides. And it's something that we crew live with and enact." Cabana said last year that the Russians do not believe "catastrophic disintegration of the PrK is realistic (but) NASA has expressed concerns about the structural integrity of the PrK and the possibility of a catastrophic failure."

NASA

NASA Pulls the Plug on Jupiter-Moon Lander, So Scientists Propose Landing It on Saturn (gizmodo.com)

"NASA engineers have spent the past decade developing a rugged, partially autonomous lander designed to explore Europa, one of Jupiter's most intriguing moons," reports Gizmodo.

But though NASA "got cold feet over the project," the engineers behind it now suggest the probe could instead explore Enceladus, the sixth-largest moon of Saturn: Europa has long been a prime target in the search for extraterrestrial biology because scientists suspect it harbors a subsurface ocean beneath its icy crust, potentially teeming with microbial life. But the robot — packed with radiation shielding, cutting-edge software, and ice-drilling appendages — won't be going anywhere anytime soon.

In a recent paper in Science Robotics, engineers at NASA's Jet Propulsion Laboratory (JPL) outlined the design and testing of what was once the Europa Lander prototype, a four-legged robotic explorer built to survive the brutal surface conditions of the Jovian moon. The robot was designed to walk — as opposed to roll — analyze terrain, collect samples, and drill into Europa's icy crust — all with minimal guidance from Earth, due to the major communication lag between our planet and the moon 568 million miles (914 million kilometers) away. Designed to operate autonomously for hours at a time, the bot came equipped with stereoscopic cameras, a robotic arm, LED lights, and a suite of specialized materials tough enough to endure harsh radiation and bone-chilling cold....

According to the team, the challenges of getting to Europa — its radiation exposure, immense distance, and short observation windows — proved too daunting for NASA's higher-ups. And that's before you take into consideration the devastating budget cuts planned by the Trump administration, which would see the agency's science funding fall from $7.3 billion to $3.9 billion. The lander, once the centerpiece of a bold astrobiology initiative, is now essentially mothballed.

But the engineers aren't giving up. They're now lobbying for the robot to get a second shot — on Enceladus, Saturn's ice-covered moon, which also boasts a subsurface ocean and has proven more favorable for robotic exploration. Enceladus is still frigid, but has lower radiation and better access windows than Europa.

Security

ASUS Router Backdoors Affect 9,000 Devices, Persist After Firmware Updates

An anonymous reader quotes a report from SC Media: Thousands of ASUS routers have been compromised with malware-free backdoors in an ongoing campaign to potentially build a future botnet, GreyNoise reported Wednesday. The threat actors abuse security vulnerabilities and legitimate router features to establish persistent access without the use of malware, and these backdoors survive both reboots and firmware updates, making them difficult to remove.

The attacks, which researchers suspect are conducted by highly sophisticated threat actors, were first detected by GreyNoise's AI-powered Sift tool in mid-March and disclosed Thursday after coordination with government officials and industry partners. Sekoia.io also reported the compromise of thousands of ASUS routers in their investigation of a broader campaign, dubbed ViciousTrap, in which edge devices from other brands were also compromised to create a honeypot network. Sekoia.io found that the ASUS routers were not used to create honeypots, and that the threat actors gained SSH access using the same port, TCP/53282, identified by GreyNoise in their report.
The backdoor campaign affects multiple ASUS router models, including the RT-AC3200, RT-AC3100, GT-AC2900, and Lyra Mini.

GreyNoise advises users to perform a full factory reset and manually reconfigure any potentially compromised device. To identify a breach, users should check for SSH access on TCP port 53282 and inspect the authorized_keys file for unauthorized entries.
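The first half of that check, confirming whether anything is listening on TCP 53282, can be automated. A minimal sketch, assuming you run it from inside your LAN against the router's address; inspecting authorized_keys still has to be done manually through the router's admin interface or SSH console:

```python
import socket

SUSPECT_PORT = 53282  # TCP port GreyNoise observed the backdoor using

def port_open(host, port=SUSPECT_PORT, timeout=3.0):
    """Return True if the host accepts TCP connections on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (replace with your router's LAN address):
# if port_open("192.168.1.1"):
#     print("Port 53282 is open: factory-reset and reconfigure the router.")
```

Note that an open port here is a strong warning sign but not proof of compromise, and a closed port doesn't rule it out, which is why GreyNoise's advice for suspect devices is a full factory reset rather than a firmware update.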
The Courts

AI of Dead Arizona Road Rage Victim Addresses Killer In Court (theguardian.com)

An anonymous reader quotes a report from The Guardian: Chris Pelkey was killed in a road rage shooting in Chandler, Arizona, in 2021. Three and a half years later, Pelkey appeared in an Arizona court to address his killer. Sort of. "To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances," says a video recording of Pelkey. "In another life, we probably could have been friends. I believe in forgiveness, and a God who forgives. I always have, and I still do," Pelkey continues, wearing a grey baseball cap and sporting the same thick red and brown beard he wore in life.

Pelkey was 37 years old, devoutly religious and an army combat veteran. Horcasitas shot Pelkey at a red light in 2021 after Pelkey exited his vehicle and walked back towards Horcasitas's car. Pelkey's appearance from beyond the grave was made possible by artificial intelligence in what could be the first use of AI to deliver a victim impact statement. Stacey Wales, Pelkey's sister, told local outlet ABC-15 that she had a recurring thought when gathering more than 40 impact statements from Chris's family and friends. "All I kept coming back to was, what would Chris say?" Wales said. [...]

Wales and her husband fed an AI model videos and audio of Pelkey to try to come up with a rendering that would match the sentiments and thoughts of a still-alive Pelkey, something that Wales compared with a "Frankenstein of love" to local outlet Fox 10. Judge Todd Lang responded positively to the AI usage. Lang ultimately sentenced Horcasitas to 10 and a half years in prison on manslaughter charges. "I loved that AI, thank you for that. As angry as you are, as justifiably angry as the family is, I heard the forgiveness," Lang said. "I feel that that was genuine." Also in favor was Pelkey's brother John, who said that he felt "waves of healing" from seeing his brother's face, and believes that Chris would have forgiven his killer. "That was the man I knew," John said.

Moon

Can Solar Wind Make Water on the Moon? A NASA Experiment Shows Maybe (space.com)

"Future moon astronauts may find water more accessible than previously thought," writes Space.com, citing a new NASA-led experiment: Because the moon lacks a magnetic field like Earth's, the barren lunar surface is constantly bombarded by energetic particles from the sun... Li Hsia Yeo, a planetary scientist at NASA's Goddard Space Flight Center in Maryland, led a lab experiment observing the effects of simulated solar wind on two samples of loose regolith brought to Earth by the Apollo 17 mission... To mimic conditions on the moon, the researchers built a custom apparatus that included a vacuum chamber, where the samples were placed, and a tiny particle accelerator, which the scientists used to bombard the samples with hydrogen ions for several days.

"The exciting thing here is that with only lunar soil and a basic ingredient from the sun — which is always spitting out hydrogen — there's a possibility of creating water," Yeo said in a statement. "That's incredible to think about." Supporting this idea, observations from previous moon missions have revealed an abundance of hydrogen gas in the moon's tenuous atmosphere. Scientists suspect that solar-wind-driven heating facilitates the combination of hydrogen atoms on the surface into hydrogen gas, which then escapes into space. This process also has a surprising upside, the new study suggests. Leftover oxygen atoms are free to bond with new hydrogen atoms formed by repeated bombardment of the solar wind, prepping the moon for more water formation on a renewable basis.

The findings could help assess how sustainable water on the moon is, as the sought-after resource is crucial both for life support and as propellant for rockets. The team's study was published in March in the journal JGR Planets.

NASA created a fascinating animation showing how water is released from the Moon during meteor showers. (In 2016 scientists discovered that when specks of comet debris vaporize on impact, they create shock waves in the lunar soil that can sometimes breach the dry upper layer, releasing water molecules from the hydrated layer below...)
Education

Should College Application Essays Be Banned? (substack.com) 128

While college applicants are often required to write a personal essay for their applications, political scientist/author/academic Yascha Mounk argues that's "a deeply unfair way to select students for top colleges, one that is much more biased against the poor than standardized tests." The college essay wrongly encourages students to cast themselves as victims, to exaggerate the adversity they've faced, and to turn genuinely upsetting experiences into the focal point of their self-understanding. The college essay, dear reader, should be banned and banished and burned to the ground.

There are many tangible, "objective" reasons to oppose making personal statements a key part of the admissions process. Perhaps the most obvious is that they have always been the easiest part of the system to game. While rich parents can hire SAT tutors, they can't sit the standardized test in their offspring's stead; they can, however, easily write the admissions essay for their kid or hire a "college consultant" who "works with" the applicant to "improve" that essay. Even if rich parents don't cheat in those ways, their class position gives rich kids a huge advantage in the exercise... [W]riting a good admissions essay is to a large extent an exercise in demonstrating one's good taste — and the ability to do so has always depended on being fluent in the unspoken norms of an elite community...

Many on the left oppose standardized tests on the grounds that they have a class bias, and that hiring a tutor can make you perform better at them. But studies on the subject consistently suggest that the class bias of personal essays is far stronger than the class bias of standardized tests.... But the thing I truly hate about the college essay is not that it is part of a system that keeps deserving kids out of top colleges while rewarding privileged kids who (to add insult to injury) get to flatter themselves that they have been selected for showcasing such superior personality in their 750-word statements composed by their college consultant or ghostwritten by ChatGPT... [W]hat I truly hate about the college essay is the way in which it shapes the lives of high school students and encourages the whole elite stratum of society — including some of its most affluent, privileged and sheltered members — to conceive of themselves in terms of the hardships they have supposedly suffered...

[I]t is the bizarre spectacle of those kids from comparatively privileged backgrounds being effectively coerced by the admissions system to self-exoticize as products of great hardship which I find to be truly unseemly... And this is why I suspect that the seemingly innocuous institution of the college essay is more deeply damaging — to the high school experience, to the self-conception of millions of Americans, and even to the country's ability to sustain a trusted elite — than it appears... [I]t drains the souls of teenagers and encourages a deeply pernicious brand of fakery and breeds widespread mistrust in social elites.

The college essay is absurd and unfair and — ironically — unforgivably cringe. It's time to put an end to its strange hold over American society, and liberate us all from its tyranny.

AI

Police Using AI Personas to Infiltrate Online Activist Spaces, Records Reveal (wired.com) 77

samleecole shares a report from 404 Media and Wired: American police departments near the United States-Mexico border are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, and suspected drug and human traffickers, according to internal documents, contracts, and communications 404 Media obtained via public records requests. Massive Blue, the New York-based company that is selling police departments this technology, calls its product Overwatch, which it markets as an "AI-powered force multiplier for public safety" that "deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels." According to a presentation obtained by 404 Media, Massive Blue is offering cops these virtual personas that can be deployed across the internet with the express purpose of interacting with suspects over text messages and social media. [...]

While the documents don't describe every technical aspect of how Overwatch works, they do give a high-level overview of what it is. The company describes a tool that uses AI-generated images and text to create social media profiles that can interact with suspected drug traffickers, human traffickers, and gun traffickers. After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services. The documents we obtained don't explain how Massive Blue determines who is a potential suspect based on their social media activity. Salzwedel, of Pinal County, said "Massive Blue's solutions crawl multiple areas of the Internet, and social media outlets are just one component. We cannot disclose any further information to preserve the integrity of our investigations." [...] Besides scanning social media and engaging suspects with AI personas, the presentation says that Overwatch can use generative AI to create "proof of life" images of a person holding a sign with a username and date written on it in pen.

United Kingdom

UK Laws Are Not 'Fit For Social Media Age' (independent.co.uk) 48

An anonymous reader quotes a report from the New York Times: British laws restricting what the police can say about criminal cases are "not fit for the social media age" (source paywalled; alternative source), a government committee said in a report released Monday in Britain that highlighted how unchecked misinformation stoked riots last summer. Violent disorder, fueled by the far right, affected several towns and cities for days after a teenager killed three girls on July 29 at a Taylor Swift-themed dance class in Southport, England. In the hours after the stabbings, false claims that the attacker was an undocumented Muslim immigrant spread rapidly online. In a report looking into the riots, a parliamentary committee said a lack of information from the authorities after the attack "created a vacuum where misinformation was able to grow." The report blamed decades-old British laws, aimed at preventing jury bias, that stopped the police from correcting false claims. By the time the police announced the suspect was British-born, those false claims had reached millions.

The Home Affairs Committee, which brings together lawmakers from across the political spectrum, published its report after questioning police chiefs, government officials and emergency workers over four months of hearings. Axel Rudakubana, who was sentenced to life in prison for the attack, was born and raised in Britain by a Christian family from Rwanda. A judge later found there was no evidence he was driven by a single political or religious ideology, but was obsessed with violence. [...] The committee's report acknowledged that it was impossible to determine "whether the disorder could have been prevented had more information been published." But it concluded that the lack of information after the stabbing "created a vacuum where misinformation was able to grow, further undermining public confidence," and that the law on contempt was not "fit for the social media age."

Networking

Cloudflare Accused of Blocking Niche Browsers (palemoon.org) 162

Long-time Slashdot reader BenFenner writes: For the third time in recent memory, CloudFlare has blocked large swaths of niche browsers and their users from accessing web sites that CloudFlare gate-keeps. In the past these issues have been resolved quickly (within a week) and apologies issued with promises to do better. (See 2024-03-11, 2024-07-08, and 2025-01-30.)

This time around it has been over six weeks and CloudFlare has been unable or unwilling to fix the problem on their end, effectively stalling any progress on the matter with various tactics including asking browser developers to sign overarching NDAs.

That last link is an update posted today by Pale Moon's main developer: Our current situation remains unchanged: CloudFlare is still blocking our access to websites through the challenges, and the captcha/turnstile continues to hang the browser until our watchdog terminates the hung script after which it reloads and hangs again after a short pause (but allowing users to close the tab in that pause, at least). To say that this upsets me is an understatement. Other than deliberate intent or absolute incompetence, I see no reason for this to endure. Neither of those options are very flattering for CloudFlare.

I wish I had better news.

In a comment, Slashdot reader BenFenner shares a list posted by Pale Moon's developer of reportedly affected browsers:
  • Pale Moon
  • Basilisk
  • Waterfox
  • Falkon
  • SeaMonkey
  • Various Firefox ESR flavors
  • Thorium (on some systems)
  • Ungoogled Chromium
  • K-Meleon
  • LibreWolf
  • MyPal 68
  • Otter browser

Slashdot reader Z00L00K speculates that "this is some kind of anti-bot measure that fails. I suspect that the reason for them wanting a NDA to be signed is to prevent ways to circumvent the anti-bot measures..."


Education

'I Used to Teach Students. Now I Catch ChatGPT Cheats' (thewalrus.ca) 241

Philosophy/ethics professor Troy Jollimore looks at the implications of a world where many students are submitting AI-generated essays. ("Sometimes they will provide quotations, giving page numbers that, as often as not, do not seem to correspond to anything in the actual world...") Ideally if the students write the essays themselves, "some of them start to feel it. They begin to grasp that thinking well, and in an informed manner, really is different from thinking poorly and from a position of ignorance. That moment, when you start to understand the power of clear thinking, is crucial.

"The trouble with generative AI is that it short-circuits that process entirely." One begins to suspect that a great many students wanted this all along: to make it through college unaltered, unscathed. To be precisely the same person at graduation, and after, as they were on the first day they arrived on campus. As if the whole experience had never really happened at all. I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters. It's not just the sheer volume of assignments that appear to be entirely generated by AI — papers that show no sign the student has listened to a lecture, done any of the assigned reading, or even briefly entertained a single concept from the course...

It's other things too... The students who beg you to reconsider the zero you gave them in order not to lose their scholarship. (I want to say to them: Shouldn't that scholarship be going to ChatGPT?) It's also, and especially, the students who look at you mystified. The use of AI already seems so natural to so many of them, so much an inevitability and an accepted feature of the educational landscape, that any prohibition strikes them as nonsensical. Don't we instructors understand that today's students will be able, will indeed be expected, to use AI when they enter the workforce? Writing is no longer something people will have to do in order to get a job.

Or so, at any rate, a number of them have told me. Which is why, they argue, forcing them to write in college makes no sense. That mystified look does not vanish — indeed, it sometimes intensifies — when I respond by saying: Look, even if that were true, you have to understand that I don't equate education with job training.

What do you mean? they might then ask.

And I say: I'm not really concerned with your future job. I want to prepare you for life...

My students have been shaped by a culture that has long doubted the value of being able to think and write for oneself — and that is increasingly convinced of the power of a machine to do both for us. As a result, when it comes to writing their own papers, they simply disregard it. They look at instructors who levy such prohibitions as irritating anachronisms, relics of a bygone, pre-ChatGPT age.... As I go on, I find that more of the time, energy, and resources I have for teaching are dedicated to dealing with this issue. I am doing less and less actual teaching, more and more policing. Sometimes I try to remember the last time I actually looked forward to walking into a classroom. It's been a while.

Privacy

India Grants Tax Officials Sweeping Digital Access Powers (indiatimes.com) 16

India's income tax department will gain powers to access citizens' social media accounts, emails and other digital spaces beginning April 2026 under the new income tax bill, in a significant expansion of its search and seizure authority.

The legislation, which has raised privacy concerns among legal experts, allows tax officers to "gain access by overriding the access code" to computer systems and "virtual digital spaces" if they suspect tax evasion.

The bill broadly defines virtual digital spaces to include email servers, social media accounts, online investment accounts, banking platforms, and cloud servers.

"The expansion raises significant concerns regarding constitutional validity, potential state overreach, and practical enforcement," Sonam Chandwani, Managing Partner at KS Legal and Associates, told Indian newspaper Economic Times.
China

China's Supreme Court Calls For Crackdown on Paper Mills (nature.com) 17

China's highest court has called for a crackdown on the activities of paper mills, businesses that churn out fraudulent or poor-quality manuscripts and sell authorships. Nature: Some researchers are cautiously optimistic that the court's guidance will help curb the use of these services, while others think the impact will be minimal. "This is the first time the supreme court has issued guidance on paper mills and on scientific fraud," says Wang Fei, who studies research-integrity policy at Dalian University of Technology in China.

Paper mills sell suspect research and authorships to researchers who want journal articles to burnish their CVs. They are a significant contributor to overall research misconduct, particularly in China. Last month, the Supreme People's Court published a set of guiding opinions on technology innovation. Among the list of 25 articles, one called for lower courts to crack down on 'paper industry chains,' and for research fraud to be severely punished.
Further reading:
Research Reveals Data on Which Institutions Are Retraction Hotspots;
Paper Mills Have Flooded Science With 400,000 Fake Studies, Experts Warn.
