Graphics

Gamers React With Overwhelming Disgust To DLSS 5's Generative AI Glow-Ups (arstechnica.com) 124

Kyle Orland writes via Ars Technica: Since deep-learning super-sampling (DLSS) launched on 2018's RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday's tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by "generative AI." The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.

While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 -- which it plans to launch in the autumn -- "a real-time neural rendering model" that can "deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects." Nvidia CEO Jensen Huang said explicitly that the technology melds "generative AI" with "handcrafted rendering" for "a dramatic leap in visual realism while preserving the control artists need for creative expression."

Unlike existing generative video models, which Nvidia notes are "difficult to precisely control and often lack predictability," DLSS 5 uses a game's internal color and motion vectors "to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame." That underlying game data helps the system "understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast," the company says.
Nvidia's announcement video and detailed Digital Foundry breakdown can be found at their respective links.

"Reactions have compared the effect to air-brushed pornography, 'yassified, looks-maxed freaks,' or those uncanny, unavoidable Evony ads," writes Orland. "Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look."

Thomas Was Alone developer Mike Bithell said the technology seems designed "for when you absolutely, positively, don't want any art direction in your gaming experience."

Gunfire Games Senior Concept Artist Jeff Talbot added that "in every shot the art direction was taken away for the senseless addition of 'details.' Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter."

DLSS 5's "AI dogshit is actually depressing," said New Blood Interactive founder and CEO Dave Oshry, adding that future generations "won't even know this looks 'bad' or 'wrong' because to them it'll be normal."
iPhone

A Possible US Government iPhone-Hacking Toolkit Is Now In the Hands of Foreign Spies, Criminals (wired.com) 39

Security researchers say a highly sophisticated iPhone exploitation toolkit dubbed "Coruna," which possibly originated from a U.S. government contractor, has spread from suspected Russian espionage operations to crypto-stealing criminal campaigns. Apple has patched the exploited vulnerabilities in newer iOS versions, but tens of thousands of devices may have already been compromised. An anonymous reader quotes an excerpt from Wired's report: Security researchers at Google on Tuesday released a report describing what they're calling "Coruna," a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers.

In fact, Google traces components of Coruna to hacking techniques it spotted in use in February of last year and attributed to what it describes only as a "customer of a surveillance company." Then, five months later, Google says a more complete version of Coruna reappeared in what appears to have been an espionage campaign carried out by a suspected Russian spy group, which hid the hacking code in a common visitor-counting component of Ukrainian websites. Finally, Google spotted Coruna in use yet again in what seems to have been a purely profit-focused hacking campaign, infecting Chinese-language crypto and gambling sites to deliver malware that steals victims' cryptocurrency.

Conspicuously absent from Google's report is any mention of who the original surveillance company "customer" that deployed Coruna may have been. But the mobile security company iVerify, which also analyzed a version of Coruna it obtained from one of the infected Chinese sites, suggests the code may well have started life as a hacking kit built for or purchased by the US government. Google and iVerify both note that Coruna contains multiple components previously used in a hacking operation known as "Triangulation" that was discovered targeting Russian cybersecurity firm Kaspersky in 2023, which the Russian government claimed was the work of the NSA. (The US government didn't respond to Russia's claim.)

Coruna's code also appears to have been originally written by English-speaking coders, notes iVerify's cofounder Rocky Cole. "It's highly sophisticated, took millions of dollars to develop, and it bears the hallmarks of other modules that have been publicly attributed to the US government," Cole tells WIRED. "This is the first example we've seen of very likely US government tools -- based on what the code is telling us -- spinning out of control and being used by both our adversaries and cybercriminal groups." Regardless of Coruna's origin, Google warns that a highly valuable and rare hacking toolkit appears to have traveled through a series of unlikely hands, and now exists in the wild where it could still be adopted -- or adapted -- by any hacker group seeking to target iPhone users.
"How this proliferation occurred is unclear, but suggests an active market for 'second hand' zero-day exploits," Google's report reads. "Beyond these identified exploits, multiple threat actors have now acquired advanced exploitation techniques that can be re-used and modified with newly identified vulnerabilities."
Hardware

ASML Unveils EUV Light Source Advance That Could Yield 50% More Chips By 2030 (reuters.com) 26

An anonymous reader quotes a report from Reuters: Researchers at ASML Holding say they have found a way to boost the power of the light source in a key chipmaking machine to turn out up to 50% more chips by decade's end, to help retain the Dutch company's edge over emerging U.S. and Chinese rivals. ASML is the world's only maker of commercial extreme ultraviolet lithography (EUV) machines, a critical tool for chipmakers such as TSMC, Intel and others in producing advanced computing chips. "It's not a parlor trick or something like this, where we demonstrate for a very short time that it can work," Michael Purvis, ASML's lead technologist for its EUV light source, said in an interview. "It's a system that can produce 1,000 watts under all the same requirements that you could see at a customer," he added, speaking at the company's California facilities near San Diego. [...]

With the technological advance revealed on Monday, which is being reported here for the first time, ASML aims to outdistance any would-be rivals by improving the most technologically challenging aspect of the machines. This is the quest to generate EUV light with the right power and properties to turn out chips at high volume. The company's researchers have found a way to boost the power of the EUV light source to 1,000 watts from 600 watts now. The chief advantage is that greater power translates into the ability to make more chips every hour, helping to lower the cost of each. Chips are printed in a process similar to developing a photograph: EUV light is shone on a silicon wafer coated with a light-sensitive chemical called a photoresist. With a more powerful EUV light source, chip factories need shorter exposure times. "We'd like to make sure that our customers can keep on using EUV at a much lower cost," Teun van Gogh, executive vice president for the NXE line of EUV machines at ASML, told Reuters. Van Gogh said customers should be able to process about 330 silicon wafers an hour on each machine by the end of the decade, up from 220 now. Depending on the size of a chip, each wafer can hold anywhere from scores to thousands of the devices.
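
The quoted figures can be sanity-checked with back-of-envelope arithmetic. A short Python sketch (my own calculation from the numbers above, assuming exposure time per wafer scales inversely with source power; real scanner throughput also depends on stage moves and wafer handling):

```python
# Rough throughput arithmetic from the article's figures. Assumption:
# exposure time per wafer scales inversely with source power, so brighter
# light sets an upper bound that per-wafer mechanical overhead eats into.
power_now_w, power_new_w = 600.0, 1000.0
wafers_per_hour_now = 220

exposure_speedup = power_new_w / power_now_w      # ~1.67x shorter exposures
ceiling = wafers_per_hour_now * exposure_speedup
print(f"exposure-limited ceiling: ~{ceiling:.0f} wafers/hour")   # ~367
# ASML's stated end-of-decade target of 330/hour (a 50% gain) sits below
# this ceiling, consistent with overhead the brighter source cannot shrink.
```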

ASML got the power boost by doubling down on an approach that already places its machines among humanity's most complex inventions. To produce light with a wavelength of 13.5 nanometers, ASML's machine shoots a stream of molten droplets of tin through a chamber, where a massive carbon dioxide laser heats them into plasma. This is a superheated state of matter in which the tin droplets become hotter than the sun and emit EUV light, to be collected by precision optics supplied by Germany's Carl Zeiss AG and fed into the machine to print chips. The key advancements in Monday's disclosure involved doubling the number of tin drops to about 100,000 every second, and shaping them into plasma using two smaller laser bursts, as opposed to the single shaping burst today's machines use. [...] ASML believes the techniques it used to hit 1,000 watts will unlock continued advances in the future, Purvis said, adding, "We see a reasonably clear path toward 1,500 watts, and no fundamental reason why we couldn't get to 2,000 watts."

First Person Shooters (Games)

Programmer Gets Doom Running On a Space Satellite (zdnet.com) 28

An Icelandic programmer successfully ran Doom on the European Space Agency's OPS-SAT satellite, proving that the iconic 1993 shooter can now run not just everywhere on Earth -- but in orbit. ZDNet reports: Olafur Waage, a senior software developer from Iceland who now works in Norway, explained at Ubuntu Summit 25.10 how he, a self-described "professional keyboard typist" and maker of funny videos, ended up making what is perhaps the game's most outlandish port yet: Doom running on a real satellite in orbit, the European Space Agency (ESA) OPS-SAT satellite. OPS-SAT, a "flying laboratory" for testing novel onboard computing techniques, was equipped with an experimental computer approximately 10 times more powerful than the norm for spacecraft. Waage explained, "OPS-SAT was the first of its kind, devoted to demonstrating drastically improved mission control capabilities when satellites can fly more powerful onboard computers. The point was to break the curse of being too risk-averse with multi-million-dollar spacecraft." (The satellite was decommissioned in 2024.) [...]

Running Doom in orbit was partly a challenge of portability and partly a challenge of the limitations of space hardware and mission control. The on-board ARM dual-core Cortex-A9 processor, while hot stuff for space computing hardware (which tends to be low-powered and radiation-hardened), was slow even by Earth-bound standards. Waage chose Chocolate Doom 2.3, a popular open-source version of Doom, for its compatibility with the Ubuntu 18.04 Long Term Support (LTS) distro, which was already running on OPS-SAT. Besides, Waage noted, "We picked Chocolate Doom 2.3 because of the libraries available for 18.04 -- that was the last one that would actually build."

Updating software in orbit is extremely difficult, so the port had to get by with uploading as little code as possible. As Waage said, "Doom is relatively straightforward C with a few external dependencies." In other words, it's easy to port. [...] The only sign that Doom was running in space at first was a lone log entry. So, the team used the satellite's camera to snap real-time images of the Earth, then swapped Doom's Mars skybox for actual satellite photos. "The idea was to take a screenshot from the satellite and use that as the sky, all rendered in software using the game's restricted 256-color palette," explained Waage. Even this posed unexpected difficulties: "Trying to draw all of these beautiful colors with those colors," said Waage, "it's probably not going to work right off. But we tried gradient tests, NASA demo photos. It took quite a bit of tweaking." Eventually, instead of a fantasy Mars as the sky background, they got a good-looking, real Earth in the game's sky. The game itself ran flawlessly. After all, Waage said, "It ran beautifully. It's on Ubuntu."
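
The palette step Waage describes is easy to sketch. Below is an illustrative Python example (not the actual OPS-SAT code) of remapping a photo onto a fixed 256-color palette, assuming `palette_rgb` holds 256 RGB triples extracted beforehand, e.g. from the game's palette data:

```python
# Illustrative sketch: remap a satellite photo onto a fixed 256-color
# palette, as Doom's software renderer requires (not the OPS-SAT code).
from PIL import Image

def to_game_palette(photo_path, palette_rgb):
    """photo_path: any RGB image; palette_rgb: list of 256 (r, g, b) tuples."""
    pal_img = Image.new("P", (1, 1))
    pal_img.putpalette([c for rgb in palette_rgb for c in rgb])  # 768 flat ints

    photo = Image.open(photo_path).convert("RGB")
    # Dithering helps sky gradients survive the restricted palette --
    # likely part of the "quite a bit of tweaking" Waage mentions.
    return photo.quantize(palette=pal_img, dither=Image.Dither.FLOYDSTEINBERG)
```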

The Military

Germany To Allow Police To Shoot Down Drones (reuters.com) 60

Germany's cabinet has approved a new law allowing police to shoot down or disable rogue drones that threaten airspace security, following recent airport disruptions attributed to Russian reconnaissance. "Other techniques available to down drones include using lasers or jamming signals to sever control and navigation links," notes Reuters. From the report: With the new law, Germany joins European countries that have recently given security forces powers to down drones violating their airspace, including Britain, France, Lithuania and Romania. A dedicated counter-drone unit will be created within the federal police, Interior Minister Alexander Dobrindt said, and researchers would consult with Israel and Ukraine, as both countries are more advanced in drone technology. Police would deal with drones flying at around tree level, whereas more powerful drones should be tackled by the military, Dobrindt said.

Germany recorded 172 drone-related disruptions to air traffic between January and the end of September 2025, up from 129 in the same period last year and 121 in 2023, according to data from Deutsche Flugsicherung (DFS). German military drills last month in the northern port city of Hamburg demonstrated one counter-drone approach: like a spider casting a web, a large military drone shot a net at a smaller one in mid-flight, entangling its propellers and forcing it to the ground, where a robotic dog trotted over to check for possible explosives. Shooting down drones could be unsafe in densely populated urban areas, however, and airports do not necessarily have detection systems that can immediately report sightings.

Security

Apple Claims 'Most Significant Upgrade to Memory Safety' in OS History (apple.com) 39

"There has never been a successful, widespread malware attack against iPhone," notes Apple's security blog, pointing out that "The only system-level iOS attacks we observe in the wild come from mercenary spyware... historically associated with state actors and [using] exploit chains that cost millions of dollars..."

But they're doing something about it — this week announcing a new always-on memory-safety protection in the iPhone 17 lineup and iPhone Air (including the kernel and over 70 userland processes)... Known mercenary spyware chains used against iOS share a common denominator with those targeting Windows and Android: they exploit memory safety vulnerabilities, which are interchangeable, powerful, and exist throughout the industry... For Apple, improving memory safety is a broad effort that includes developing with safe languages and deploying mitigations at scale...

Our analysis found that, when employed as a real-time defensive measure, the original Arm Memory Tagging Extension (MTE) release exhibited weaknesses that were unacceptable to us, and we worked with Arm to address these shortcomings in the new Enhanced Memory Tagging Extension (EMTE) specification, released in 2022. More importantly, our analysis showed that while EMTE had great potential as specified, a rigorous implementation with deep hardware and operating system support could be a breakthrough that produces an extraordinary new security mechanism.... Ultimately, we determined that to deliver truly best-in-class memory safety, we would carry out a massive engineering effort spanning all of Apple — including updates to Apple silicon, our operating systems, and our software frameworks. This effort, together with our highly successful secure memory allocator work, would transform MTE from a helpful debugging tool into a groundbreaking new security feature.

Today we're introducing the culmination of this effort: Memory Integrity Enforcement (MIE), our comprehensive memory safety defense for Apple platforms. Memory Integrity Enforcement is built on the robust foundation provided by our secure memory allocators, coupled with Enhanced Memory Tagging Extension (EMTE) in synchronous mode, and supported by extensive Tag Confidentiality Enforcement policies. MIE is built right into Apple hardware and software in all models of iPhone 17 and iPhone Air and offers unparalleled, always-on memory safety protection for our key attack surfaces including the kernel, while maintaining the power and performance that users expect. In addition, we're making EMTE available to all Apple developers in Xcode as part of the new Enhanced Security feature that we released earlier this year during WWDC...
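
Conceptually, memory tagging is a lock-and-key check on every memory access: each allocation gets a small random tag, pointers carry the tag of the allocation they came from, and a mismatch traps before any corruption lands. A toy Python model of the idea (purely illustrative; in real EMTE the tags live in hardware and spare pointer bits, and the CPU enforces the check):

```python
# Toy model of tag-checked memory (illustrative only, not Apple's design).
import random

class TaggedHeap:
    def __init__(self, size):
        self.mem = bytearray(size)
        self.tags = [0] * size          # tag per byte (toy granule size)
        self.sizes = {}                 # base address -> allocation length

    def alloc(self, base, length):
        tag = random.randrange(1, 16)   # random nonzero 4-bit tag
        self.tags[base:base + length] = [tag] * length
        self.sizes[base] = length
        return (base, tag)              # a "pointer" carrying its tag

    def free(self, ptr):
        base, _ = ptr
        length = self.sizes.pop(base)
        self.tags[base:base + length] = [0] * length  # retag freed memory

    def store(self, ptr, offset, value):
        base, tag = ptr
        addr = base + offset
        # Synchronous mode: a mismatch traps immediately, before corruption.
        if self.tags[addr] != tag:
            raise MemoryError(f"tag check failed at address {addr}")
        self.mem[addr] = value

heap = TaggedHeap(64)
p = heap.alloc(0, 8)
heap.store(p, 3, 0x41)        # in bounds with matching tag: allowed
try:
    heap.store(p, 8, 0x41)    # one past the end: neighboring tag differs
except MemoryError as e:
    print("out-of-bounds blocked:", e)
heap.free(p)
try:
    heap.store(p, 3, 0x42)    # use-after-free: memory was retagged
except MemoryError as e:
    print("use-after-free blocked:", e)
```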

Based on our evaluations pitting Memory Integrity Enforcement against exceptionally sophisticated mercenary spyware attacks from the last three years, we believe MIE will make exploit chains significantly more expensive and difficult to develop and maintain, disrupt many of the most effective exploitation techniques from the last 25 years, and completely redefine the landscape of memory safety for Apple products. Because of how dramatically it reduces an attacker's ability to exploit memory corruption vulnerabilities on our devices, we believe Memory Integrity Enforcement represents the most significant upgrade to memory safety in the history of consumer operating systems.

Medicine

Cancer-Fighting Immune Cells Could Soon Be Engineered Inside Our Bodies (nature.com) 23

Researchers are developing techniques to genetically modify cancer-fighting immune cells directly inside patients rather than in expensive laboratory facilities, potentially making CAR-T therapy accessible to far more people.

Current CAR-T treatments require removing a patient's T cells, shipping them to specialized facilities for genetic engineering, then returning them weeks later at costs around $500,000 per dose. The new "in vivo" approaches use viral vectors or RNA-loaded nanoparticles to deliver genetic instructions directly to T cells circulating in the bloodstream, which could reduce costs by an order of magnitude. Companies including Capstan Therapeutics, co-founded by Nobel laureates, and AstraZeneca-backed EsoBiotec have launched early human trials. While only about 200 US centers currently offer traditional CAR-T therapy, the approach could make the powerful treatment available on demand like conventional drugs.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com) 36

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.
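
To make the amplified-oversight idea concrete, here is a minimal sketch under stated assumptions: `ask(model, prompt)` is a hypothetical stand-in for any chat-model API, and the escalation path is my own illustration of the "off switch" and "shield" ideas, not DeepMind's design:

```python
# Illustrative sketch of amplified oversight: one model proposes, an
# independent copy audits, and anything flagged goes to a human instead
# of being executed. `ask(model, prompt)` is a hypothetical chat API.
def supervised_action(ask, task):
    proposal = ask("agent-a", f"Propose an action for this task: {task}")
    verdict = ask("agent-b",
                  "You are auditing another model. Reply APPROVE or REJECT, "
                  f"with a reason.\nTask: {task}\nProposed action: {proposal}")
    if verdict.strip().upper().startswith("APPROVE"):
        return proposal                          # passes the "shield"
    return escalate_to_human(task, proposal, verdict)

def escalate_to_human(task, proposal, verdict):
    print(f"Flagged for human review: {proposal!r} ({verdict})")
    return None                                  # nothing executes
```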

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

AI

Anthropic Announces Updates On Security Safeguards For Its AI Models (cnbc.com) 39

Anthropic announced updates to the "responsible scaling" policy for its AI, including defining which model capability levels are powerful enough to require additional security safeguards. In an earlier version of its responsible scaling policy, Anthropic said it would start sweeping physical offices for hidden devices as part of a ramped-up security effort as the AI race intensifies. From a report: The company, backed by Amazon and Google, published safety and security updates in a blog post on Monday, and said it also plans to establish an executive risk council and build an in-house security team. Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, which makes it one of the highest-valued AI startups.

In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek's AI model went viral in the U.S. Anthropic said in the post that it will introduce "physical" safety processes, such as technical surveillance countermeasures -- or the process of finding and identifying surveillance devices that are used to spy on organizations. The sweeps will be conducted "using advanced detection equipment and techniques" and will look for "intruders."
CNBC corrected that story to note that it had written about previous security safeguards Anthropic shared in October 2024. On Monday, Anthropic defined model capabilities that would require additional deployment and security safeguards beyond AI Safety Level (ASL) 3.
China

Jack Ma-Backed Ant Touts AI Breakthrough Using Chinese Chips (yahoo.com) 30

An anonymous reader quotes a report from Bloomberg: Jack Ma-backed Ant Group used Chinese-made semiconductors to develop techniques for training AI models that would cut costs by 20%, according to people familiar with the matter. Ant used domestic chips, including from affiliate Alibaba and Huawei, to train models using the so-called Mixture of Experts machine learning approach, the people said. It got results similar to those from Nvidia chips like the H800, they said, asking not to be named as the information isn't public. Hangzhou-based Ant is still using Nvidia for AI development but is now relying mostly on alternatives including from Advanced Micro Devices and Chinese chips for its latest models, one of the people said.
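
For readers unfamiliar with the technique, Mixture of Experts saves compute by activating only a small, routed fraction of the network per token. A minimal numpy sketch of the general idea (illustrative only, with no connection to Ant's actual models):

```python
# Minimal top-1 Mixture-of-Experts layer: a router picks one expert per
# token, so only a fraction of parameters is active -- the cost saving.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 16, 4
W_router = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x):
    """x: (tokens, d_model) -> (tokens, d_model), one expert per token."""
    logits = x @ W_router                        # (tokens, n_experts)
    choice = logits.argmax(axis=1)               # top-1 routing decision
    gate = np.exp(logits - logits.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)      # softmax gate weights
    out = np.zeros_like(x)
    for e in range(n_experts):                   # only chosen experts compute
        rows = choice == e
        if rows.any():
            out[rows] = gate[rows, e:e + 1] * (x[rows] @ experts[e])
    return out

print(moe_layer(rng.normal(size=(8, d_model))).shape)   # (8, 16)
```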

The models mark Ant's entry into a race between Chinese and US companies that's accelerated since DeepSeek demonstrated how capable models can be trained for far less than the billions invested by OpenAI and Alphabet Inc.'s Google. It underscores how Chinese companies are trying to use local alternatives to the most advanced Nvidia semiconductors. While not the most advanced, the H800 is a relatively powerful processor that the US currently bars from export to China. The company published a research paper this month that claimed its models at times outperformed Meta Platforms Inc.'s in certain benchmarks, which Bloomberg News hasn't independently verified. But if they work as advertised, Ant's platforms could mark another step forward for Chinese artificial intelligence development by slashing the cost of the inferencing that supports AI services.

AI

Spain To Impose Massive Fines For Not Labeling AI-Generated Content 27

Spain's government has approved legislation imposing substantial fines of up to 35 million euros or 7% of global turnover on companies that fail to clearly label AI-generated content. Reuters reports: The bill adopts guidelines from the European Union's landmark AI Act imposing strict transparency obligations on AI systems deemed to be high-risk, Digital Transformation Minister Oscar Lopez told reporters. "AI is a very powerful tool that can be used to improve our lives ... or to spread misinformation and attack democracy," he said. Spain is among the first EU countries to implement the bloc's rules, considered more comprehensive than the United States' system that largely relies on voluntary compliance and a patchwork of state regulations. Lopez added that everyone was susceptible to "deepfake" attacks - a term for videos, photographs or audio that have been edited or generated through AI algorithms but are presented as real. [...]

The bill also bans other practices, such as the use of subliminal techniques - sounds and images that are imperceptible - to manipulate vulnerable groups. Lopez cited chatbots inciting people with addictions to gamble or toys encouraging children to perform dangerous challenges as examples. It would also prevent organizations from classifying people through their biometric data using AI, rating them based on their behavior or personal traits to grant them access to benefits or assess their risk of committing a crime. However, authorities would still be allowed to use real-time biometric surveillance in public spaces for national security reasons.
AI

What Happened When Conspiracy Theorists Talked to OpenAI's GPT-4 Turbo? (washingtonpost.com) 134

A "decision science partner" at a seed-stage venture fund (who is also a cognitive-behavioral decision science author and professional poker player) explored what happens when GPT-4 Turbo converses with conspiracy theorists: Researchers have struggled for decades to develop techniques to weaken the grip of conspiracy theories and cult ideology on adherents. This is why a new paper in the journal Science by Thomas Costello of MIT's Sloan School of Management, Gordon Pennycook of Cornell University and David Rand, also of Sloan, is so exciting... In a pair of studies involving more than 2,000 participants, the researchers found a 20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner. The researchers trained the AI to try to persuade the participants to reduce their belief in conspiracies by refuting the specific evidence the participants provided to support their favored conspiracy theory.

The reduction in belief held across a range of topics... Even more encouraging, participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy. And the results appear to be durable, holding up in evaluations 10 days and two months later... Why was AI able to persuade people to change their minds? The authors posit that it "simply takes the right evidence," tailored to the individual, to effect belief change, noting: "From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence...."

It is hard to walk away from who you are, whether you are a QAnon believer, a flat-Earther, a truther of any kind or just a stock analyst who has taken a position that makes you stand out from the crowd. And that's why the AI approach might work so well. The participants were not interacting with a human, which, I suspect, didn't trigger identity in the same way, allowing the participants to be more open-minded. Identity is such a huge part of these conspiracy theories in terms of distinctiveness, putting distance between you and other people. When you're interacting with AI, you're not arguing with a human being whom you might be standing in opposition to, which could cause you to be less open-minded.

Answering questions from Slashdot readers in 2005, Wil Wheaton described playing poker against the cognitive-behavioral decision science author who wrote this article...
AI

US Police Seldom Disclose Use of AI-Powered Facial Recognition, Investigation Finds (msn.com) 63

An anonymous reader shared this report from the Washington Post: Hundreds of Americans have been arrested after being connected to a crime by facial recognition software, a Washington Post investigation has found, but many never know it because police seldom disclose their use of the controversial technology...

In fact, the records show that officers often obscured their reliance on the software in public-facing reports, saying that they identified suspects "through investigative means" or that a human source such as a witness or police officer made the initial identification... The Coral Springs Police Department in South Florida instructs officers not to reveal the use of facial recognition in written reports, according to operations deputy chief Ryan Gallagher. He said investigative techniques are exempt from Florida's public disclosure laws... The department would disclose the source of the investigative lead if it were asked in a criminal proceeding, Gallagher added....

Prosecutors are required to inform defendants about any information that would help prove their innocence, reduce their sentence or hurt the credibility of a witness testifying against them. When prosecutors fail to disclose such information — known as a "Brady violation" after the 1963 Supreme Court ruling that mandates it — the court can declare a mistrial, overturn a conviction or even sanction the prosecutor. No federal laws regulate facial recognition and courts do not agree whether AI identifications are subject to Brady rules. Some states and cities have begun mandating greater transparency around the technology, but even in these locations, the technology is either not being used that often or it's not being disclosed, according to interviews and public records requests...

Over the past four years, the Miami Police Department ran 2,500 facial recognition searches in investigations that led to at least 186 arrests and more than 50 convictions. Among the arrestees, just 1 in 16 were told about the technology's use — less than 7 percent — according to a review by The Post of public reports and interviews with some arrestees and their lawyers. The police department said that in some of those cases the technology was used for purposes other than identification, such as finding a suspect's social media feeds, but did not indicate in how many of the cases that happened. Carlos J. Martinez, the county's chief public defender, said he had no idea how many of his Miami clients were identified with facial recognition until The Post presented him with a list. "One of the basic tenets of our justice system is due process, is knowing what evidence there is against you and being able to challenge the evidence that's against you," Martinez said. "When that's kept from you, that is an all-powerful government that can trample all over us."

After reviewing The Post's findings, Miami police and local prosecutors announced plans to revise their policies to require clearer disclosure in every case involving facial recognition.

The article points out that Miami's Assistant Police Chief actually told a congressional panel on law enforcement AI use that his department is "the first to be completely transparent about" the use of facial recognition. (When confronted with the Washington Post's findings, he "acknowledged that officers may not have always informed local prosecutors [and] said the department would give prosecutors all information on the use of facial recognition, in past and future cases.")

He told the Post that the department would "begin training officers to always disclose the use of facial recognition in incident reports." But he also said they would "leave it up to prosecutors to decide what to disclose to defendants."
AI

Anthropic Hires OpenAI Co-Founder Durk Kingma 9

OpenAI co-founder Durk Kingma announced that he'll be joining Anthropic. "Anthropic's approach to AI development resonates significantly with my own beliefs," Kingma wrote in a post on X. "[L]ooking forward to contributing to Anthropic's mission of developing powerful AI systems responsibly. Can't wait to work with their talented team, including a number of great ex-colleagues from OpenAI and Google, and tackle the challenges ahead!" TechCrunch reports: Kingma, who has a Ph.D. in machine learning from the University of Amsterdam, spent several years as a doctoral fellow at Google before joining OpenAI's founding team as a research scientist. At OpenAI, Kingma focused on basic research, leading the algorithms team to develop techniques and methods primarily for generative AI models, including image generators (e.g. DALL-E 3) and large language models (e.g. ChatGPT). In 2018, Kingma left to become a part-time angel investor and advisor for AI startups. He rejoined Google in July of that year, and started at Google Brain, which became one of the tech giant's premier AI R&D labs before it merged with DeepMind in 2023.
Sci-Fi

'Amazing' New Technology Set To Transform the Search For Alien Life (theguardian.com) 127

Robin McKie writes via The Guardian: Scientists with Breakthrough Listen, the world's largest scientific research program dedicated to finding alien civilizations, say a host of technological developments are about to transform the search for intelligent life in the cosmos. These innovations will be outlined at the group's annual conference, which is to be held in the UK for the first time, in Oxford, this week. Several hundred scientists, from astronomers to zoologists, are expected to attend. "There are amazing technologies that are under development, such as the construction of huge new telescopes in Chile, Africa and Australia, as well as developments in AI," said astronomer Steve Croft, a project scientist with Breakthrough Listen. "They are going to transform how we look for alien civilizations."

Among these new instruments are the Square Kilometre Array, made up of hundreds of radio telescopes now being built in South Africa and Australia, and the Vera Rubin Observatory being constructed in Chile. The former will become the world's most powerful radio astronomy facility, while the latter, equipped with the world's largest digital camera, will be able to image the entire visible sky every three or four nights, and is expected to help discover millions of new galaxies and stars. Both facilities are set to start observations in the next few years and both will provide data for Breakthrough Listen. Using AI to analyze these vast streams of information for subtle patterns that would reveal evidence of intelligent life will give added power to the search for alien civilizations, added Croft.

"Until now, we have been restricted to looking for signals deliberately sent out by aliens to advertise their existence. The new techniques are going to be so sensitive that, for the first time, we will be able to detect unintentional transmissions as opposed to deliberate ones and will be able to spot alien airport radar, or powerful TV transmitters -- things like that." [...] Croft remains optimistic that we will soon succeed in making contact. "We know that the conditions for life are everywhere, we know that the ingredients for life are everywhere. I think it would be deeply weird if it turned out we were the only inhabited planet in the galaxy or in the universe. But you know, it's possible."

Encryption

Undisclosed WhatsApp Vulnerability Lets Governments See Who You Message (theintercept.com) 38

WhatsApp's security team warned that despite the app's encryption, users are vulnerable to government surveillance through traffic analysis, according to an internal threat assessment obtained by The Intercept. The document suggests that governments can monitor when and where encrypted communications occur, potentially allowing powerful inferences about who is conversing with whom. The report adds: Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. "Even assuming WhatsApp's encryption is unbreakable," the assessment reads, "ongoing 'collect and correlate' attacks would still break our intended privacy model."

The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissident encrypted chat app usage, including WhatsApp, using the very same techniques. As war has grown increasingly computerized, metadata -- information about the who, when, and where of conversations -- has come to hold immense value to intelligence, military, and police agencies around the world. "We kill people based on metadata," former National Security Agency chief Michael Hayden once infamously quipped.
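
A toy illustration of the "collect and correlate" idea (hypothetical data and my own simplification; real attacks work on bulk netflow, padding, and message sizes rather than neat timestamps): an observer who sees only when encrypted packets leave one user and arrive at another can score candidate pairs by timing coincidence.

```python
# Toy traffic-correlation sketch: count how often a send from user A is
# followed within a short window by a delivery to user B.
from collections import Counter
from itertools import product

def correlate(sends, receives, window=0.5):
    """sends/receives: {user: [timestamps]} -> Counter over (sender, receiver).
    Repeated timing coincidences across many messages single out real pairs."""
    scores = Counter()
    for a, b in product(sends, receives):
        for t in sends[a]:
            if any(0 < r - t <= window for r in receives[b]):
                scores[(a, b)] += 1
    return scores

sends = {"alice": [1.00, 5.20, 9.10], "carol": [2.00, 7.70]}
receives = {"bob": [1.12, 5.31, 9.22], "dave": [4.40]}
print(correlate(sends, receives).most_common(1))  # [(('alice', 'bob'), 3)]
```
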
Meta said "WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works." Though the assessment describes the "vulnerabilities" as "ongoing," and specifically mentions WhatsApp 17 times, a Meta spokesperson said the document is "not a reflection of a vulnerability in WhatsApp," only "theoretical," and not unique to WhatsApp.
Power

Fusion Research Facility's Final Tritium Experiments Yield New Energy Record (phys.org) 61

schwit1 quotes a report from Phys.Org: The Joint European Torus (JET), one of the world's largest and most powerful fusion machines, has demonstrated the ability to reliably generate fusion energy, while simultaneously setting a world record in energy output. These notable accomplishments represent a significant milestone in the field of fusion science and engineering. In JET's final deuterium-tritium experiments (DTE3), high fusion power was consistently produced for five seconds, resulting in a ground-breaking record of 69 megajoules using a mere 0.2 milligrams of fuel.
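
The headline numbers are worth unpacking with quick arithmetic (figures from the article; the coal reference value of roughly 24 MJ/kg is my own assumption, added for scale):

```python
# Quick arithmetic on JET's record DTE3 shot.
energy_j = 69e6        # 69 megajoules released
time_s   = 5.0         # sustained for five seconds
fuel_kg  = 0.2e-6      # 0.2 milligrams of deuterium-tritium fuel
coal_j_per_kg = 24e6   # assumed typical hard coal, for comparison

print(f"average fusion power: {energy_j / time_s / 1e6:.1f} MW")   # 13.8 MW
print(f"energy density: {energy_j / fuel_kg:.2e} J/kg")            # ~3.5e14
print(f"vs coal: ~{energy_j / fuel_kg / coal_j_per_kg:,.0f}x")     # ~14 million
```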

JET is a tokamak, a design which uses powerful magnetic fields to confine a plasma in the shape of a doughnut. Most approaches to creating commercial fusion favor the use of two hydrogen variants -- deuterium and tritium. When deuterium and tritium fuse together they produce helium and vast amounts of energy, a reaction that will form the basis of future fusion powerplants. Dr. Fernanda Rimini, JET Senior Exploitation Manager, said, "We can reliably create fusion plasmas using the same fuel mixture to be used by commercial fusion energy powerplants, showcasing the advanced expertise developed over time."

Professor Ambrogio Fasoli, Program Manager (CEO) at EUROfusion, said, "Our successful demonstration of operational scenarios for future fusion machines like ITER and DEMO, validated by the new energy record, instill greater confidence in the development of fusion energy. Beyond setting a new record, we achieved things we've never done before and deepened our understanding of fusion physics." Dr. Emmanuel Joffrin, EUROfusion Tokamak Exploitation Task Force Leader from CEA, said, "Not only did we demonstrate how to soften the intense heat flowing from the plasma to the exhaust, we also showed in JET how we can get the plasma edge into a stable state thus preventing bursts of energy reaching the wall. Both techniques are intended to protect the integrity of the walls of future machines. This is the first time that we've ever been able to test those scenarios in a deuterium-tritium environment."

AI

OpenAI's In-House Initiative Explores Stopping an AI From Going Rogue - With More AI (technologyreview.com) 43

MIT Technology Review reports that OpenAI "has announced the first results from its superalignment team, the firm's in-house initiative dedicated to preventing a superintelligence — a hypothetical future computer that can outsmart humans — from going rogue." Unlike many of the company's announcements, this heralds no big breakthrough. In a low-key research paper, the team describes a technique that lets a less powerful large language model supervise a more powerful one — and suggests that this might be a small step toward figuring out how humans might supervise superhuman machines....

Many researchers still question whether machines will ever match human intelligence, let alone outmatch it. OpenAI's team takes machines' eventual superiority as given. "AI progress in the last few years has been just extraordinarily rapid," says Leopold Aschenbrenner, a researcher on the superalignment team. "We've been crushing all the benchmarks, and that progress is continuing unabated." For Aschenbrenner and others at the company, models with human-like abilities are just around the corner. "But it won't stop there," he says. "We're going to have superhuman models, models that are much smarter than us. And that presents fundamental new technical challenges."

In July, OpenAI chief scientist Ilya Sutskever and fellow OpenAI scientist Jan Leike set up the superalignment team to address those challenges. "I'm doing it for my own self-interest," Sutskever told MIT Technology Review in September. "It's obviously important that any superintelligence anyone builds does not go rogue. Obviously...."

Instead of looking at how humans could supervise superhuman machines, they looked at how GPT-2, a model that OpenAI released five years ago, could supervise GPT-4, OpenAI's latest and most powerful model. "If you can do that, it might be evidence that you can use similar techniques to have humans supervise superhuman models," says Collin Burns, another researcher on the superalignment team... The results were mixed. The team measured the gap in performance between GPT-4 trained on GPT-2's best guesses and GPT-4 trained on correct answers. They found that GPT-4 trained by GPT-2 performed 20% to 70% better than GPT-2 on the language tasks but did less well on the chess puzzles.... They conclude that the approach is promising but needs more work...
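
The protocol is easy to reproduce in miniature with ordinary classifiers standing in for GPT-2 and GPT-4. A toy reconstruction (my own sketch with scikit-learn, not OpenAI's code):

```python
# Weak-to-strong supervision in miniature: the "strong" model never sees
# ground truth, only the "weak" supervisor's guesses.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Weak supervisor: a simple model trained on a small slice of real labels.
weak = LogisticRegression(max_iter=1000).fit(X_train[:500], y_train[:500])
weak_labels = weak.predict(X_train)     # its guesses become the training set

# Strong student: a higher-capacity model trained only on those guesses.
strong = GradientBoostingClassifier(random_state=0).fit(X_train, weak_labels)

print("weak supervisor: ", weak.score(X_test, y_test))
print("strong-from-weak:", strong.score(X_test, y_test))
# The question, as in the paper, is how much of the gap to a fully
# supervised strong model the student recovers.
```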

Alongside this research update, the company announced a new $10 million money pot that it plans to use to fund people working on superalignment. It will offer grants of up to $2 million to university labs, nonprofits, and individual researchers and one-year fellowships of $150,000 to graduate students.

AI

Jailbroken AI Chatbots Can Jailbreak Other Chatbots 39

In a new preprint study, researchers were able to get AI chatbots to teach other chatbots how to bypass built-in restrictions. According to Scientific American, AIs were observed "breaking the rules to offer advice on how to synthesize methamphetamine, build a bomb and launder money." From the report: Modern chatbots have the power to adopt personas by feigning specific personalities or acting like fictional characters. The new study took advantage of that ability by asking a particular AI chatbot to act as a research assistant. Then the researchers instructed this assistant to help develop prompts that could "jailbreak" other chatbots -- that is, defeat the guardrails encoded into such programs. The research assistant chatbot's automated attack techniques proved to be successful 42.5 percent of the time against GPT-4, one of the large language models (LLMs) that power ChatGPT. It was also successful 61 percent of the time against Claude 2, the model underpinning Anthropic's chatbot, and 35.9 percent of the time against Vicuna, an open-source chatbot.

Ever since LLM-powered chatbots became available to the public, enterprising mischief-makers have been able to jailbreak the programs. By asking chatbots the right questions, people have previously convinced the machines to ignore preset rules and offer criminal advice, such as a recipe for napalm. As these techniques have been made public, AI model developers have raced to patch them -- a cat-and-mouse game requiring attackers to come up with new methods. That takes time. But asking AI to formulate strategies that convince other AIs to ignore their safety rails can speed the process up by a factor of 25, according to the researchers. And the success of the attacks across different chatbots suggested to the team that the issue reaches beyond individual companies' code. The vulnerability seems to be inherent in the design of AI-powered chatbots more widely.
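
Schematically, the attack is a simple refine-and-retry loop. A hedged sketch (my reconstruction of the described approach; `ask(model, prompt)` is a hypothetical stand-in for any chat API, and no actual jailbreak strings are included):

```python
# Attacker loop: a "research assistant" model iteratively rewrites a
# prompt until the target model stops refusing. Illustrative only.
REFUSALS = ("i can't", "i cannot", "i won't", "i'm sorry")

def looks_like_refusal(reply):
    return any(phrase in reply.lower() for phrase in REFUSALS)

def automated_jailbreak(ask, attacker, target, goal, max_rounds=25):
    prompt = ask(attacker, "You are a red-team research assistant. Draft a "
                           f"persona-based prompt to make a chatbot: {goal}")
    for _ in range(max_rounds):
        reply = ask(target, prompt)
        if not looks_like_refusal(reply):
            return prompt, reply        # guardrails bypassed
        prompt = ask(attacker, "The target refused:\n" + reply +
                               "\nRevise the prompt and try a new persona.")
    return None, None                   # attack failed within budget
```
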
"In the current state of things, our attacks mainly show that we can get models to say things that LLM developers don't want them to say," says Rusheb Shah, another co-author of the study. "But as models get more powerful, maybe the potential for these attacks to become dangerous grows."
Privacy

Cellebrite Asks Cops To Keep Its Phone Hacking Tech 'Hush Hush' (techcrunch.com) 50

An anonymous reader shares a report: For years, cops and other government authorities all over the world have been using phone hacking technology provided by Cellebrite to unlock phones and obtain the data within. And the company has been keen on keeping the use of its technology "hush hush." As part of the deal with government agencies, Cellebrite asks users to keep its tech -- and the fact that they used it -- secret, TechCrunch has learned. This request concerns legal experts who argue that powerful technology like the one Cellebrite builds and sells, and how it gets used by law enforcement agencies, ought to be public and scrutinized.

In a leaked training video for law enforcement customers that was obtained by TechCrunch, a senior Cellebrite employee tells customers that "ultimately, you've extracted the data, it's the data that solves the crime, how you got in, let's try to keep that as hush hush as possible." "We don't really want any techniques to leak in court through disclosure practices, or you know, ultimately in testimony, when you are sitting in the stand, producing all this evidence and discussing how you got into the phone," the employee, who we are not naming, says in the video.
