United States

Strange Wild Pigs in California - What Turned Their Flesh Blue? (yahoo.com) 56

A professional trapper had one question about the wild pig he'd found in California. Why was its flesh blue? The Los Angeles Times explains: [California's Department of Fish and Wildlife] is now warning trappers and hunters to keep an eye out for possibly contaminated wildlife in the area, and not to consume the tainted meat, over concerns the blue meat is a sign that the animal may have consumed poison.... The startling find of wild pigs with bright blue tissue in Monterey County suggests the animals have been exposed to the anticoagulant rodenticide diphacinone, a popular poison used by farmers and agriculture companies to control the population of rats, mice, squirrels and other small animals, according to a statement from the California Department of Fish and Wildlife. "Hunters should be aware that the meat of game animals, such as wild pig, deer, bear and geese, might be contaminated if that game animal has been exposed to rodenticides," said Ryan Bourbor, pesticide investigations coordinator with the state agency.
Diphacinone has been prohibited in California since 2024 (with exceptions for government agencies or their certified Vector Control Technicians).

The state's Fish and Wildlife department says anyone who finds wildlife with blue fat or tissue should contact the state's wildlife officials.

Thanks to long-time Slashdot reader Bruce66423 for sharing the news.
Medicine

Smartwatches Offer Little Insight Into Stress Levels, Researchers Find (theguardian.com) 58

An anonymous reader quotes a report from The Guardian: They are supposed to monitor you throughout the working day and help make sure that life is not getting on top of you. But a study has concluded that smartwatches cannot accurately measure your stress levels -- and may think you are overworked when really you are just excited. Researchers found almost no relationship between the stress levels reported by the smartwatch and the levels that participants said they experienced. However, recorded fatigue levels had a very slight association with the smartwatch data, while sleep had a stronger correlation.
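
As a rough illustration of the comparison the researchers made (not their actual method or data), here is a minimal Java sketch that computes Pearson's correlation between paired smartwatch stress scores and self-reports; all numbers are hypothetical:

public class StressCorrelation {
    // Pearson correlation coefficient between two equal-length samples.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n; // n times the covariance
        double vx = sxx - sx * sx / n;  // n times the variance of x
        double vy = syy - sy * sy / n;  // n times the variance of y
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        double[] watch = {42, 55, 38, 61, 47, 52}; // hypothetical device stress scores
        double[] self = {3, 2, 4, 2, 3, 4};        // hypothetical self-reports, 1-5 scale
        System.out.printf("r = %.2f%n", pearson(watch, self));
    }
}

An r near zero, which is what the study reports, means the device scores carry essentially no linear information about how stressed participants actually felt.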

Eiko Fried, an author of the study, said the correlation between the smartwatch and self-reported stress scores was "basically zero." He added: "This is no surprise to us given that the watch measures heart rate and heart rate doesn't have that much to do with the emotion you're experiencing -- it also goes up for sexual arousal or joyful experiences." He noted that his Garmin had previously told him he was stressed when he was working out in the gym and when excitedly talking to a friend he had not seen for a while at a wedding. "The findings raise important questions about what wearable data can or can't tell us about mental states," said Fried. "Be careful and don't live by your smartwatch -- these are consumer devices, not medical devices."
The research has been published in the Journal of Psychopathology and Clinical Science.
The Courts

AI Industry Horrified To Face Largest Copyright Class Action Ever Certified (arstechnica.com) 188

An anonymous reader quotes a report from Ars Technica: AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement. Last week, Anthropic petitioned (PDF) to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said.

If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine. Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued. "One district court's errors should not be allowed to decide the fate of a transformational GenAI company like Anthropic or so heavily influence the future of the GenAI industry generally," Anthropic wrote. "This Court can and should intervene now."
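
The arithmetic behind those warnings: US statutory damages for willful copyright infringement reach $150,000 per work, so 7 million claimed works imply a theoretical ceiling of 7,000,000 x $150,000 = $1.05 trillion, and even the $750-per-work statutory minimum across the full class would exceed $5 billion.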

In a court filing Thursday, the Consumer Technology Association and the Computer and Communications Industry Association backed Anthropic, warning the appeals court that "the district court's erroneous class certification" would threaten "immense harm not only to a single AI company, but to the entire fledgling AI industry and to America's global technological competitiveness." According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of "emboldened" claimants forcing enormous settlements will chill investments in AI. "Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic," industry groups argued, concluding that "as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies."

Google

Google Tests AI-Powered Google Finance (blog.google) 12

Google announced Friday it will roll out an AI-powered redesign of Google Finance over the coming weeks in the United States. The update adds natural language query processing for financial research questions with comprehensive AI responses including relevant links, advanced charting tools with technical indicators and candlestick charts, expanded market data covering commodities and additional cryptocurrencies, and a live news feed displaying real-time headlines.
Google

Google Says AI Search Features Haven't Hurt Web Traffic Despite Industry Reports (blog.google) 14

Google says total organic click volume from its search engine to websites has remained "relatively stable year-over-year" despite the introduction of AI Overviews, contradicting third-party reports of dramatic traffic declines. The company reports average click quality has increased, with users less likely to immediately return to search results after clicking through to websites. Google attributes stable traffic patterns to users conducting more searches and asking longer, more complex questions since AI features launched, while AI Overviews display more links per page than traditional results.
China

Lyft Will Use Chinese Driverless Cars In Britain and Germany (techcrunch.com) 24

An anonymous reader quotes a report from the New York Times: China's automakers have teamed up with software companies to go global with their driverless cars, which are poised to claim a big share of a growing market as Western manufacturers are still preparing to compete. The industry in China is expanding despite tariffs imposed last year by the European Union on electric cars, and despite some worries in Europe about the security implications of relying on Chinese suppliers. Baidu, one of China's biggest software companies, said on Monday that it would supply Lyft, an American ride-hailing service, with self-driving cars assembled by Jiangling Motors of China (source paywalled; alternative source). Lyft is expected to begin operating them next year in Germany and Britain, subject to regulatory approval, the companies said.

The announcement comes three months after Uber and Momenta, a Chinese autonomous driving company, announced their own plans to begin offering self-driving cars in an unspecified European city early next year. Momenta will soon provide assisted driving technology to the Chinese company IM Motors for its cars sold in Britain. While Momenta has not specified the model that Uber will be using, it has already signaled it will choose a Chinese model. In China, "the pace of development and the pressure to deliver at scale push companies to improve quickly," said Gerhard Steiger, the chairman of Momenta Europe. China's state-controlled banking system has been lending money at low interest rates to the country's electric car industry in a bid for global leadership. [...]

Expanding robotaxi services to new cities, not to mention new countries, is not easy. While the individual cars do not have drivers, they typically require one controller for every several cars to handle difficulties and answer questions from users. And the cars often need to be specially programmed for traffic conditions unique to each city. Lyft and Baidu nonetheless said that they had plans for "the fleet scaling to thousands of vehicles across Europe in the following years."

Government

Swedish PM Under Fire For Using AI In Role 26

Sweden's Prime Minister Ulf Kristersson has come under fire after admitting that he frequently uses AI tools like ChatGPT for second opinions on political matters. The Guardian reports: ... Kristersson, whose Moderate party leads Sweden's center-right coalition government, said he used tools including ChatGPT and the French service LeChat. His colleagues also used AI in their daily work, he said. Kristersson told the Swedish business newspaper Dagens industri: "I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions."

Tech experts, however, have raised concerns about politicians using AI tools in such a way, and the Aftonbladet newspaper accused Kristersson in an editorial of having "fallen for the oligarchs' AI psychosis." Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI. "Naturally it is not security sensitive information that ends up there. It is used more as a ballpark," he said.

But Virginia Dignum, a professor of responsible artificial intelligence at Umea University, said AI was not capable of giving a meaningful opinion on political ideas, and that it simply reflects the views of those who built it. "The more he relies on AI for simple things, the bigger the risk of an overconfidence in the system. It is a slippery slope," she told the Dagens Nyheter newspaper. "We must demand that reliability can be guaranteed. We didn't vote for ChatGPT."
Security

CrowdStrike Investigated 320 North Korean IT Worker Cases In the Past Year (cyberscoop.com) 11

An anonymous reader quotes a report from CyberScoop: North Korean operatives seeking and gaining technical jobs with foreign companies kept CrowdStrike busy, accounting for almost one incident response case or investigation per day in the past year, the company said in its annual threat hunting report released Monday. "We saw a 220% year-over-year increase in the last 12 months of Famous Chollima activity," Adam Meyers, senior vice president of counter adversary operations, said during a media briefing about the report. "We see them almost every day now," he said, referring to the state-sponsored group of North Korean technical specialists that has crept into the workforce of Fortune 500 companies and small-to-midsized organizations across the globe.

CrowdStrike's threat-hunting team investigated more than 320 incidents involving North Korean operatives gaining remote employment as IT workers during the one-year period ending June 30. CrowdStrike researchers found that Famous Chollima fueled that pace of activity with an assist from generative artificial intelligence tools that helped North Korean operatives maneuver workflows and evade detection during the hiring process. "They use generative AI across all stages of their operation," Meyers said. The insider threat group used generative AI to draft resumes, create false identities, build tools for job research, mask their identity during video interviews and answer questions or complete technical coding assignments, the report found. CrowdStrike said North Korean tech workers also used generative AI on the job to help with daily tasks and manage various communications across multiple jobs -- sometimes three to four -- they worked simultaneously.

Threat hunters observed other significant shifts in malicious activity during the past year, including a 27% year-over-year increase in hands-on-keyboard intrusions -- 81% of which involved no malware. Cybercrime accounted for 73% of all interactive intrusions during the one-year period. CrowdStrike continues to find and add more threat groups and clusters of activity to its matrix of cybercriminals, nation-state attackers and hacktivists. The company identified 14 new threat groups or individuals in the past six months, Meyers said. "We're up to over 265 named adversary groups that we track, and then 150 what we call malicious activity clusters," otherwise unnamed threat groups or individuals under development, Meyers said.

AI

Disney Struggles With How to Use AI - While Retaining Copyrights and Avoiding Legal Issues (msn.com) 29

Disney "cloned" Dwayne Johnson when filming a live-action Moana, reports the Wall Street Journal, using an AI process that they were ultimately afraid to use: Under the plan they devised, Johnson's similarly buff cousin Tanoai Reed — who is 6-foot-3 and 250 pounds — would fill in as a body double for a small number of shots. Disney would work with AI company Metaphysic to create deepfakes of Johnson's face that could be layered on top of Reed's performance in the footage — a "digital double" that effectively allowed Johnson to be in two places at once... Johnson approved the plan, but the use of a new technology had Disney attorneys hammering out details over how it could be deployed, what security precautions would protect the data and a host of other concerns. They also worried that the studio ultimately couldn't claim ownership over every element of the film if AI generated parts of it, people involved in the negotiations said. Disney and Metaphysic spent 18 months negotiating on and off over the terms of the contract and work on the digital double. But none of the footage will be in the final film when it's released next summer...

Interviews with more than 20 current and former employees and partners present an entertainment giant torn between the inevitability of AI's advance and concerns about how to use it. Progress has at times been slowed by bureaucracy and hand-wringing over the company's social contract with its fans, not to mention its legal contract with unions representing actors, writers and other creative partners... For Disney, protecting its characters and stories while also embracing new AI technology is key. "We have been around for 100 years and we intend to be around for the next 100 years," said the company's legal chief, Horacio Gutierrez, in an interview. "AI will be transformative, but it doesn't need to be lawless...." [As recently as June, a Disney/Comcast Universal lawsuit had argued that Midjourney "is the quintessential copyright free-rider and a bottomless pit of plagiarism."]

Concerns about bad publicity were a big reason that Disney scrapped a plan to use AI in Tron: Ares — a movie set for release in October about an AI-generated soldier entering the real world. Since the movie is about artificial intelligence, executives pitched the idea of actually incorporating AI into one of the characters... as a buzzy marketing strategy, according to people familiar with the matter. A writer would provide context on the animated character — a sidekick to Jeff Bridges' lead role named Bit — to a generative AI program. Then on screen, the AI program, voiced by an actor, would respond to questions as Bit as cameras rolled. But with negotiations with unions representing writers and actors over contracts happening at the same time, Disney dismissed the idea, and executives internally were told that the company couldn't risk the bad publicity, the people said...

Disney's own history speaks to how studios have navigated technological crossroads before. When Disney hired Pixar to produce a handful of graphic images for its 1989 hit The Little Mermaid, executives kept the incorporation a secret, fearing backlash from fans if they learned that not every frame of the animated film had been hand-drawn. Such knowledge, executives feared, might "take away the magic."

Disney invested $1.5 billion in Fortnite creator Epic Games, according to the article, and is planning a world in Fortnite where gamers can interact with Marvel superheroes and creatures from Avatar. But "an experiment to allow gamers to interact with an AI-generated Darth Vader was fraught. Within minutes of launching the AI bot, gamers had figured out a way to make it curse in James Earl Jones's signature baritone." (Though Epic patched the workaround within 30 minutes.)

But the article spells out another concern for Disney executives. "If a Fortnite gamer creates a Darth Vader and Spider-Man dance that goes viral on YouTube, who owns that dance?"
Programming

The Toughest Programming Question for High School Students on This Year's CS Exam: Arrays 65

America's nonprofit College Board lets high school students take college-level classes — including a computer programming course that culminates with a 90-minute test. But students did better on questions about If-Then statements than they did on questions about arrays, according to the head of the program. Long-time Slashdot reader theodp explains: Students exhibited "strong performance on primitive types, Boolean expressions, and If statements; 44% of students earned 7-8 of these 8 points," says program head Trevor Packard. But students were challenged by "questions on Arrays, ArrayLists, and 2D Arrays; 17% of students earned 11-12 of these 12 points."

"The most challenging AP Computer Science A free-response question was #4, the 2D array number puzzle; 19% of students earned 8-9 of the 9 points possible."

You can see that question here. ("You will write the constructor and one method of the SumOrSameGame class... Array elements are initialized with random integers between 1 and 9, inclusive, each with an equal chance of being assigned to each element of puzzle...") Although to be fair, it was the last question on the test — appearing on page 16 — so maybe some students just didn't get to it.
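
For a concrete sense of the 2D-array skills being tested, here is a minimal Java sketch of just the constructor described in that prompt. The class name comes from the released question, but the field name and constructor signature are assumptions, not the College Board's rubric solution:

public class SumOrSameGame {
    private int[][] puzzle;

    // Fill the puzzle with random integers from 1 to 9, inclusive,
    // each value equally likely for each element.
    public SumOrSameGame(int rows, int cols) {
        puzzle = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                // Math.random() returns a value in [0, 1), so this
                // expression yields each integer 1..9 with equal chance.
                puzzle[r][c] = (int) (Math.random() * 9) + 1;
            }
        }
    }
}

Note the off-by-one trap: (int) (Math.random() * 9) alone yields 0 through 8, and forgetting the + 1 is exactly the kind of range error these questions tend to probe.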

theodp shares a sample Java solution and an Excel VBA solution (which includes a visual presentation).

There are tests in 38 subjects — but CS and Statistics are the subjects where the highest number of students earned the test's lowest-possible score (1 out of 5). That end of the graph also includes notoriously difficult subjects like Latin, Japanese Language, and Physics.

There's also a table showing scores for the last 23 years, with fewer than 67% of students achieving a passing grade (3+) for the first 11 years. But in 2013 and 2017, more than 67% of students achieved that passing grade, and the percentage has stayed above that line ever since (except for 2021), vacillating between 67% and 70.4%.

2018: 67.8%
2019: 69.6%
2020: 70.4%
2021: 65.1%
2022: 67.6%
2023: 68.0%
2024: 67.2%
2025: 67.0%
Privacy

Despite Breach and Lawsuits, Tea Dating App Surges in Popularity (www.cbc.ca) 39

The women-only app Tea now "faces two class action lawsuits filed in California" in response to a recent breach, reports NPR — even as the company is now boasting it has more than 6.2 million users.

A spokesperson for Tea told the CBC it's "working to identify any users whose personal information was involved" in a breach of 72,000 images (including 13,000 verification photos and images of government IDs) and a later breach of 1.1 million private messages. Tea said they will be offering those users "free identity protection services." The company said it removed the ID requirement in 2023, but data that was stored before February 2024, when Tea migrated to a more secure system, was accessed in the breach... [Several sites have pointed out Tea's current privacy policy is telling users selfies are "deleted immediately."]

Tea was reportedly intended to launch in Canada on Friday, according to information previously posted on the App Store, but as of this week the launch date is now in February 2026. Tea didn't respond to CBC's questions about the apparent delay. Yet even amid the current turmoil, Tea's waitlist has ballooned to 1.5 million women, all eager to join, the company posted on Wednesday. A day later, Tea posted in its Instagram stories that it had approved "well over" 800,000 women into the app that day alone.

So, why is it so popular, despite the drama and risks?

Tea tapped into a perceived weakness of other dating apps, according to an associate health studies professor at Ontario's Western University interviewed by the CBC, who thinks users should avoid Tea, at least until its security is restored.

Tech blogger John Gruber called the incident "yet another data point for the argument that any 'private messaging' feature that doesn't use E2EE isn't actually private at all." (And later Gruber notes Tea's apparent absence at the top of the charts in Google's Play Store. "I strongly suspect that, although Google hasn't removed Tea from the Play Store, they've delisted it from discovery other than by searching for it by name or following a direct link to its listing.")
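
The reasoning behind Gruber's point: without end-to-end encryption, message contents sit on the operator's servers in a form the operator (or an intruder) can read, so a server-side breach exposes plaintext; with E2EE, only the conversation participants hold the decryption keys, and a breached server yields nothing but ciphertext.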

Besides anonymous discussions about specific men, Tea also allows its users to perform background and criminal record checks, according to NPR, as well as reverse image searches. But the recent breach, besides threatening the safety of its users, also "laid bare the anonymous, one-sided accusations against the men in their dating pools." The CBC points out there's a men's rights group on Reddit now urging civil lawsuits against Tea as part of a plan to get the app shut down. And "Cleveland lawyer Aaron Minc, who specializes in cases involving online defamation and harassment, told The Associated Press that his firm has received hundreds of calls from people upset about what's been posted about them on Tea."

Yet in response to Tea's latest Instagram post, "The comments were almost entirely from people asking Tea to approve them, so they could join the app."
The Internet

Public ChatGPT Queries Are Getting Indexed By Google and Other Search Engines (techcrunch.com) 31

An anonymous reader quotes a report from TechCrunch: It's a strange glimpse into the human mind: If you filter search results on Google, Bing, and other search engines to only include URLs from the domain "https://chatgpt.com/share," you can find strangers' conversations with ChatGPT. Sometimes, these shared conversation links are pretty dull — people ask for help renovating their bathroom, understanding astrophysics, and finding recipe ideas. In another case, one user asks ChatGPT to rewrite their resume for a particular job application (judging by this person's LinkedIn, which was easy to find based on the details in the chat log, they did not get the job). Someone else is asking questions that sound like they came out of an incel forum. Another person asks the snarky, hostile AI assistant if they can microwave a metal fork (for the record: no), but they continue to ask the AI increasingly absurd and trollish questions, eventually leading it to create a guide called "How to Use a Microwave Without Summoning Satan: A Beginner's Guide."
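
Mechanically, the filtering described above uses a standard search operator; a query along these lines (search terms hypothetical) restricts results to shared-conversation URLs:

site:chatgpt.com/share resume rewrite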

ChatGPT does not make these conversations public by default. A conversation would be appended with a "/share" URL only if the user deliberately clicks the "share" button on their own chat and then clicks a second "create link" button. The service also declares that "your name, custom instructions, and any messages you add after sharing stay private." After clicking through to create a link, users can toggle whether or not they want that link to be discoverable. However, users may not anticipate that other search engines will index their shared ChatGPT links, potentially betraying personal information (my apologies to the person whose LinkedIn I discovered).
According to OpenAI, these chats were indexed as part of an experiment. "ChatGPT chats are not public unless you choose to share them," an OpenAI spokesperson told TechCrunch. "We've been testing ways to make it easier to share helpful conversations, while keeping users in control, and we recently ended an experiment to have chats appear in search engine results if you explicitly opted in when sharing."

A Google spokesperson also weighed in, telling TechCrunch that the company has no control over what gets indexed. "Neither Google nor any other search engine controls what pages are made public on the web. Publishers of these pages have full control over whether they are indexed by search engines."
Security

In Search of Riches, Hackers Plant 4G-Enabled Raspberry Pi In Bank Network (arstechnica.com) 54

Hackers from the group UNC2891 attempted a high-tech bank heist by physically planting a 4G-enabled Raspberry Pi inside a bank's ATM network, using advanced malware hidden with a never-before-seen Linux bind mount technique to evade detection. "The trick allowed the malware to operate similarly to a rootkit, which uses advanced techniques to hide itself from the operating system it runs on," reports Ars Technica. Although the plot was uncovered before the hackers could hijack the ATM switching server, the tactic showcased a new level of sophistication in cyber-physical attacks on financial institutions. The security firm Group-IB, which detailed the attack in a report on Wednesday, didn't say where the compromised switching equipment was located or how attackers managed to plant the Raspberry Pi. Ars Technica reports: To maintain persistence, UNC2891 also compromised a mail server because it had constant Internet connectivity. The Raspberry Pi and the mail server backdoor would then communicate by using the bank's monitoring server as an intermediary. The monitoring server was chosen because it had access to almost every server within the data center. As Group-IB was initially investigating the bank's network, researchers noticed some unusual behaviors on the monitoring server, including an outbound beaconing signal every 10 minutes and repeated connection attempts to an unknown device. The researchers then used a forensic tool to analyze the communications. The tool identified the endpoints as a Raspberry Pi and the mail server but was unable to identify the process names responsible for the beaconing.
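
For context on why a bind mount can hide a process: tools like ps and top enumerate processes by reading /proc/<pid>, so if an attacker bind-mounts an empty or decoy directory over a process's /proc entry, anything inspecting that path sees the decoy instead of the real metadata, and the process effectively vanishes from standard listings without any kernel-level rootkit being installed.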

The researchers then captured the system memory as the beacons were sent. The review identified the process as lightdm, a process associated with the open source LightDM display manager. The process appeared to be legitimate, but the researchers found it suspicious because the LightDM binary was installed in an unusual location. After further investigation, the researchers discovered that the processes of the custom backdoor had been deliberately disguised in an attempt to throw researchers off the scent.

[Group-IB Senior Digital Forensics and Incident Response Specialist Nam Le Phuong] explained: "The backdoor process is deliberately obfuscated by the threat actor through the use of process masquerading. Specifically, the binary is named "lightdm", mimicking the legitimate LightDM display manager commonly found on Linux systems. To enhance the deception, the process is executed with command-line arguments resembling legitimate parameters -- for example, lightdm --session child 11 19 -- in an effort to evade detection and mislead forensic analysts during post-compromise investigations. These backdoors were actively establishing connections to both the Raspberry Pi and the internal Mail Server."

Education

ChatGPT's New Study Mode Is Designed To Help You Learn, Not Just Give Answers 29

An anonymous reader quotes a report from Ars Technica: The rise of large language models like ChatGPT has led to widespread concern that "everyone is cheating their way through college," as a recent New York magazine article memorably put it. Now, OpenAI is rolling out a new "Study Mode" that it claims is less about providing answers or doing the work for students and more about helping them "build [a] deep understanding" of complex topics.

Study Mode isn't a new ChatGPT model but a series of "custom system instructions" written for the LLM "in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning," OpenAI said. Instead of the usual summary of a subject that stock ChatGPT might give -- which one OpenAI employee likened to "a mini textbook chapter" -- Study Mode slowly rolls out new information in a "scaffolded" structure. The mode is designed to ask "guiding questions" in the Socratic style and to pause for periodic "knowledge checks" and personalized feedback to make sure the user understands before moving on. It's unknown how many students will use this guided learning tool instead of just asking ChatGPT to generate answers from the start.
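
OpenAI hasn't published the instructions themselves, but custom system instructions of this kind generally read like the following sketch, which is purely illustrative and not OpenAI's actual text:

You are a patient tutor, not an answer engine. Begin by asking what the
student already knows and what they need this for. Teach in small steps,
and pose a short knowledge-check question before moving on. If the student
asks directly for the answer, respond with a guiding hint instead.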

In an early hands-off demo attended by Ars Technica, Study Mode responded to a request to "teach me about game theory" by first asking about the user's overall familiarity with the subject and what they'll be using the information for. ChatGPT introduced a short overview of some core game theory concepts, then paused to ask a question before providing a relevant real-world example. In another example involving a classic "train traveling at speed" math problem, Study Mode resisted multiple simulated attempts by the frustrated "student" to simply ask for the answer and instead tried to gently redirect the conversation to how the available information could be used to generate that answer. An OpenAI representative told Ars that Study Mode will eventually provide direct solutions if asked repeatedly, but the default behavior is more tuned to a Socratic tutoring style.
OpenAI said it drew inspiration for Study Mode from "power users" and collaborated with pedagogy experts and college students to help refine its responses. As for whether the mode can be trusted, OpenAI told Ars that "the risk of hallucination is lower with Study Mode because the model processes information in smaller chunks, calibrating along the way."

The current Study Mode prompt does, however, result in some "inconsistent behavior and mistakes across conversations," the company warned.
Earth

Researchers Quietly Planned a Test to Dim Sunlight Over 3,900 Square Miles (politico.com) 81

California researchers planned a multimillion-dollar test of salt water-spraying equipment that could one day be used to dim the sun's rays — over a 3,900-square-mile area off the west coasts of North America, Chile or south-central Africa. E&E News calls it part of a "secretive" initiative backed by "wealthy philanthropists with ties to Wall Street and Silicon Valley" — and a piece of the "vast scope of research aimed at finding ways to counter the Earth's warming, work that has often occurred outside public view." "At such scales, meaningful changes in clouds will be readily detectable from space," said a 2023 research plan from the [University of Washington's] Marine Cloud Brightening Program. The massive experiment would have been contingent upon the successful completion of the thwarted pilot test on the carrier deck in Alameda, according to the plan.... Before the setback in Alameda, the team had received some federal funding and hoped to gain access to government ships and planes, the documents show.

The university and its partners — a solar geoengineering research advocacy group called SilverLining and the scientific nonprofit SRI International — didn't respond to detailed questions about the status of the larger cloud experiment. But SilverLining's executive director, Kelly Wanser, said in an email that the Marine Cloud Brightening Program aimed to "fill gaps in the information" needed to determine if the technologies are safe and effective. In the initial experiment, the researchers appeared to have disregarded past lessons about building community support for studies related to altering the climate, and instead kept their plans from the public and lawmakers until the testing was underway, some solar geoengineering experts told E&E News. The experts also expressed surprise at the size of the planned second experiment....

The program does not "recommend, support or develop plans for the use of marine cloud brightening to alter weather or climate," Sarah Doherty, an atmospheric and climate science professor at the university who leads the program, said in a statement to E&E News. She emphasized that the program remains focused on researching the technology, not deploying it. There are no "plans for conducting large-scale studies that would alter weather or climate," she added.

"More than 575 scientists have called for a ban on geoengineering development," according to the article, "because it 'cannot be governed globally in a fair, inclusive, and effective manner.'" But "Some scientists believe that the perils of climate change are too dire to not pursue the technology, which they say can be safely tested in well-designed experiments... " "If we really were serious about the idea that to do any controversial topic needs some kind of large-scale consensus before we can research the topic, I think that means we don't research topics," David Keith, a geophysical sciences professor at the University of Chicago, said at a think tank discussion last month... "The studies that the program is pursuing are scientifically sound and would be unlikely to alter weather patterns — even for the Puerto Rico-sized test, said Daniele Visioni, a professor of atmospheric sciences at Cornell University. Nearly 30 percent of the planet is already covered by clouds, he noted.
Thanks to Slashdot reader fjo3 for sharing the news.
Privacy

Astronomer Hires Coldplay Lead Singer's Ex-Wife as 'Temporary' Spokesperson: Gwyneth Paltrow (bbc.com) 153

The "Chief People Officer" of dataops company Astronomer resigned this week from her position after apparently being caught on that "Kiss Cam" at a Coldplay concert with the company's CEO, reports the BBC. That CEO has also resigned, with Astronomer appointing their original co-founder and chief product officer as the new interim CEO.

UPDATE (7/26): In an unexpected twist, Astronomer put out a new video Friday night starring... Gwyneth Paltrow.

Actress/businesswoman Paltrow "was married to Coldplay's frontman Chris Martin for 13 years," reports CBS News. In the video posted Friday, Paltrow says she was hired by Astronomer as a "very temporary" spokesperson.

"Astronomer has gotten a lot of questions over the last few days," Paltrow begins, "and they wanted me to answer the most common ones..."

As the question "OMG! What the actual f" begins appearing on the screen, Paltrow responds "Yes, Astronomer is the best place to run Apache Airflow, unifying the experience of running data, ML, and AI pipelines at scale. We've been thrilled so many people have a newfound interest in data workflow automation." (Paltrow also mentions the company's upcoming Beyond Analytics dataops conference in September.)

Astronomer is still grappling with unintended fame after the "Kiss Cam" incident. ("Either they're having an affair or they're just very shy," Coldplay's lead singer had said during the viral video, in which the startled couple hurries to hide off-camera). The incident raised privacy concerns, as it turns out both people in the video were in fact married to someone else, though the singer did earlier warn the crowd "we're going to use our cameras and put some of you on the big screen," according to CNN. The New York Post notes the woman's now-deleted LinkedIn account showed that she has also served as an "advisory board member" at her husband's company since September of 2020. The Post cites a source close to the situation who says the woman's husband "was in Asia for a few weeks," returning to America right as the video went viral. Kristin and Andrew Cabot married sometime after her previous divorce was finalized in 2022. The source said there had been little indication of any trouble in paradise before the Coldplay concert video went viral. "The family is now saying they have been having marriage troubles for several months and were discussing separating..."
The video had racked up 127 million views by yesterday, notes Newsweek, adding that the U.K. tabloid the Daily Mail apparently took photos outside the woman's house, reporting that she does not appear to be wearing a wedding ring.
Medicine

FDA's New Drug Approval AI Is Generating Fake Studies (gizmodo.com) 41

An anonymous reader quotes a report from Gizmodo: Robert F. Kennedy Jr., the Secretary of Health and Human Services, has made a big push to get agencies like the Food and Drug Administration to use generative artificial intelligence tools. In fact, Kennedy recently told Tucker Carlson that AI will soon be used to approve new drugs "very, very quickly." But a new report from CNN confirms all our worst fears. Elsa, the FDA's AI tool, is spitting out fake studies.

CNN spoke with six current and former employees at the FDA, three of whom have used Elsa for work that they described as helpful, like creating meeting notes and summaries. But three of those FDA employees told CNN (paywalled) that Elsa just makes up nonexistent studies, something commonly referred to in AI as "hallucinating." The AI will also misrepresent research, according to these employees. "Anything that you don't have time to double-check is unreliable. It hallucinates confidently," one unnamed FDA employee told CNN. [...] Kennedy's Make America Healthy Again (MAHA) commission issued a report back in May that was later found to be filled with citations for fake studies. An analysis from the nonprofit news outlet NOTUS found that at least seven studies cited didn't even exist, with many more misrepresenting what was actually said in a given study. We still don't know if the commission used Elsa to generate that report.

FDA Commissioner Marty Makary initially deployed Elsa across the agency on June 2, and an internal slide leaked to Gizmodo bragged that the system was "cost-effective," only costing $12,000 in its first week. Makary said that Elsa was "ahead of schedule and under budget" when he first announced the AI rollout. But it seems like you get what you pay for. If you don't care about the accuracy of your work, Elsa sounds like a great tool for allowing you to get slop out the door faster, generating garbage studies that could potentially have real consequences for public health in the U.S. CNN notes that if an FDA employee asks Elsa to generate a one-paragraph summary of a 20-page paper on a new drug, there's no simple way to know if that summary is accurate. And even if the summary is more or less accurate, what if there's something within that 20-page report that would be a big red flag for any human with expertise? The only way to know for sure if something was missed or if the summary is accurate is to actually read the report. The FDA employees who spoke with CNN said they tested Elsa by asking basic questions like how many drugs of a certain class have been approved for children. Elsa confidently gave wrong answers, and while it apparently apologized when it was corrected, a robot being "sorry" doesn't really fix anything.

The Courts

After $380 Million Hack, Clorox Sues Its 'Service Desk' Vendor For Simply Giving Out Passwords (arstechnica.com) 89

An anonymous reader quotes a report from Ars Technica: Hacking is hard. Well, sometimes. Other times, you just call up a company's IT service desk and pretend to be an employee who needs a password reset, an Okta multifactor authentication reset, and a Microsoft multifactor authentication reset... and it's done. Without even verifying your identity. So you use that information to log in to the target network and discover a more trusted user who works in IT security. You call the IT service desk back, acting like you are now this second person, and you request the same thing: a password reset, an Okta multifactor authentication reset, and a Microsoft multifactor authentication reset. Again, the desk provides it, no identity verification needed. So you log in to the network with these new credentials and set about planting ransomware or exfiltrating data in the target network, eventually doing an estimated $380 million in damage. Easy, right?

According to The Clorox Company, which makes everything from lip balm to cat litter to charcoal to bleach, this is exactly what happened to it in 2023. But Clorox says that the "debilitating" breach was not its fault. It had outsourced the "service desk" part of its IT security operations to the massive services company Cognizant -- and Clorox says that Cognizant failed to follow even the most basic agreed-upon procedures for running the service desk. In the words of a new Clorox lawsuit, Cognizant's behavior was "all a devastating lie," it "failed to show even scant care," and it was "aware that its employees were not adequately trained."

"Cognizant was not duped by any elaborate ploy or sophisticated hacking techniques," says the lawsuit, using italics to indicate outrage emphasis. "The cybercriminal just called the Cognizant Service Desk, asked for credentials to access Clorox's network, and Cognizant handed the credentials right over. Cognizant is on tape handing over the keys to Clorox's corporate network to the cybercriminal -- no authentication questions asked." [...] The new lawsuit, filed in California state courts, wants Cognizant to cough up millions of dollars to cover the damage Clorox says it suffered after weeks of disruption to its factories and ordering systems. (You can read a brief timeline of the disruption here.)

Supercomputing

Scientists Make 'Magic State' Breakthrough After 20 Years (livescience.com) 38

An anonymous reader quotes a report from Live Science: In a world first, scientists have demonstrated an enigmatic phenomenon in quantum computing that could pave the way for fault-tolerant machines that are far more powerful than any supercomputer. The process, called "magic state distillation," was first proposed 20 years ago, but its use in logical qubits has eluded scientists ever since. It has long been considered crucial for producing the high-quality resources, known as "magic states," needed to fulfill the full potential of quantum computers. [...] Now, however, scientists with QuEra say they have demonstrated magic state distillation in practice for the first time on logical qubits. They outlined their findings in a new study published July 14 in the journal Nature.

In the study, using the Gemini neutral-atom quantum computer, the scientists distilled five imperfect magic states into a single, cleaner magic state. They performed this separately on a Distance-3 and a Distance-5 logical qubit, demonstrating that it scales with the quality of the logical qubit. "A greater distance means better logical qubits. A Distance-2, for instance, means that you can detect an error but not correct it. Distance-3 means that you can detect and correct a single error. Distance-5 would mean that you can detect and correct up to two errors, and so on, and so on," [explained Yuval Boger, chief commercial officer at QuEra who was not personally involved in the research]. "So the greater the distance, the higher fidelity of the qubit is -- and we liken it to distilling crude oil into a jet fuel."
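
In general, a code of distance d can detect up to d - 1 errors and correct up to floor((d - 1) / 2) of them, which is why Distance-3 corrects a single error while Distance-5 corrects two.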

As a result of the distillation process, the fidelity of the final magic state exceeded that of any input. This proved that fault-tolerant magic state distillation worked in practice, the scientists said. This means that a quantum computer that uses both logical qubits and high-quality magic states to run non-Clifford gates is now possible. "We're seeing sort of a shift from a few years ago," Boger said. "The challenge was: can quantum computers be built at all? Then it was: can errors be detected and corrected? Us and Google and others have shown that, yes, that can be done. Now it's about: can we make these computers truly useful? And to make one computer truly useful, other than making them larger, you want them to be able to run programs that cannot be simulated on classical computers."

Piracy

Cloudflare Starts Blocking Pirate Sites For UK Users 36

An anonymous reader quotes a report from TorrentFreak: Internet service providers BT, Virgin Media, Sky, TalkTalk, EE, and Plusnet account for the majority of the UK's residential internet market and as a result, blocking injunctions previously obtained at the High Court often list these companies as respondents. These so-called "no fault" injunctions stopped being adversarial a long time ago; ISPs indicate in advance they won't contest a blocking order against various pirate sites, and typically that's good enough for the Court to issue an order with which they subsequently comply. For more than 15 years, this has led to blocking being carried out as close to users as possible, with ISPs' individual blocking measures doing the heavy lifting. A new wave of blocking targeting around 200 pirate site domains came into force yesterday but with the unexpected involvement of a significant new player.

In the latest wave of blocking that seems to have come into force yesterday, close to 200 pirate domains requested by the Motion Picture Association were added to one of the longest pirate site blocking lists in the world. The big change is the unexpected involvement of Cloudflare, which for some users attempting to access the domains added yesterday, displays the [Error 451 -- Unavailable for Legal Reasons] notice ... As stated in the notice, Error 451 is returned when a domain is blocked for legal reasons, in this case reasons specific to the UK. [...] In this case there's no indication of who requested the blocking order, or the authority that issued it. However, from experience we know that the request was made by the studios of the Motion Picture Association and for the same reason the High Court in London was the issuing authority. [...] The issue lies with dynamic injunctions; while a list of domains will appear in the original order (which may or may not be made available), when the MPA concludes that other domains that appear subsequently are linked to the same order, those can be blocked too, but the details are only rarely made public.
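
Error 451 is a standardized HTTP status code (RFC 7725), and the RFC recommends identifying the party implementing the block with a Link header. A blocked request might return a response along these lines (the URL here is a placeholder, not the actual notice):

HTTP/1.1 451 Unavailable For Legal Reasons
Link: <https://example.com/legal-order>; rel="blocked-by"
Content-Type: text/html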

From information obtained independently, one candidate is an original order obtained in December 2022 which requested blocking of domains with well known pirate brands including 123movies, fmovies, soap2day, hurawatch, sflix, and onionplay. This leads directly to another unusual issue. The notice linked from Cloudflare doesn't directly concern Cloudflare. The studios sent the notice to Google after Google agreed to voluntarily remove those domains from its search indexes, if it was provided with a copy of relevant court orders. Notices like these were supplied and the domains were deindexed, and the practice has continued ever since. That raises questions about the nature of Cloudflare's involvement here and why it links to the order sent to Google; notices sent to Cloudflare are usually submitted to Lumen by Cloudflare itself. That doesn't appear to be the case here.
"Domains blocked by Sky, BPI and others, don't appear to be affected," notes TorrentFreak. "All relate to sites targeted by the MPA, and the majority if not all trigger malware warnings of a very serious kind, either immediately upon visiting the sites, or shortly after."

"At least in the short term, if Cloudflare is blocking a domain in the UK, moving on is strongly advised."
