The Courts

Amazon Wins Court Order To Block Perplexity's AI Shopping Bots (cnbc.com) 29

Last November, Amazon sued Perplexity demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases for users online. Today, a judge ruled in favor of the tech giant, granting it a temporary court injunction blocking the scraping of Amazon's website. According to court filings, the judge found strong evidence the tool accessed the retailer's systems "without authorization." CNBC reports: In a ruling dated Monday, U.S. District Judge Maxine Chesney wrote that Amazon has provided "strong evidence" that Perplexity's Comet browser accessed its website at the user's direction, but "without authorization" from the e-commerce giant. Chesney said Amazon submitted "essentially undisputed evidence" that it spent more than $5,000 to respond to the issue, including "numerous hours" where its employees worked to develop tools to block Comet from accessing its private customer tools and to prevent the tool from "future unauthorized access." "Given such evidence, the Court finds Amazon has shown a likelihood of success on the merits of its claim," Chesney wrote.

Chesney's ruling includes a weeklong stay to allow Perplexity to appeal the order. Amazon wrote in its original complaint that Perplexity's agents posed security risks to customer data because they "can act within protected computer systems, including private customer accounts requiring a password." The company also said Perplexity's agents created challenges for the company's advertising business, because when AI systems generate ad traffic, the impressions have to be detected and filtered out before advertisers can be billed. "This requires modifications to Amazon's advertising systems, including developing new detection mechanisms to identify and exclude automated traffic," Amazon wrote in its complaint. "These system adaptations are necessary to maintain contractual obligations with advertisers who pay only for legitimate human impressions."
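The detection-and-exclusion step Amazon describes can be illustrated in a few lines. This is a hypothetical sketch of filtering automated ad impressions before billing, not Amazon's actual system; the field names and agent signatures are assumptions:

```python
# Hypothetical sketch: exclude automated (agent-generated) ad impressions
# before billing, in the spirit of the detection mechanisms Amazon describes.
# Field names and signatures are illustrative assumptions.
KNOWN_AGENT_SIGNATURES = {"comet", "headlesschrome", "bot"}

def is_automated(impression: dict) -> bool:
    """Flag an impression as automated if its user agent matches a known signature."""
    ua = impression.get("user_agent", "").lower()
    return any(sig in ua for sig in KNOWN_AGENT_SIGNATURES)

def billable_impressions(impressions: list[dict]) -> list[dict]:
    """Keep only impressions attributable to human traffic."""
    return [imp for imp in impressions if not is_automated(imp)]

impressions = [
    {"ad_id": 1, "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"ad_id": 2, "user_agent": "Comet/1.0 (automated agent)"},
]
print([imp["ad_id"] for imp in billable_impressions(impressions)])  # [1]
```

In practice such systems rely on many more signals than the user-agent string, which is trivially spoofable; this only shows where the filter sits in the billing pipeline.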

AI

Google's 'AI Overviews' Cite YouTube For Health Queries More Than Any Medical Sites, Study Suggests (theguardian.com) 38

An anonymous reader shared this report from the Guardian: Google's search feature AI Overviews cites YouTube more than any medical website when answering queries about health conditions, according to research that raises fresh questions about a tool seen by 2 billion people each month.

The company has said its AI summaries, which appear at the top of search results and use generative AI to answer questions from users, are "reliable" and cite reputable medical sources such as the Centers for Disease Control and Prevention and the Mayo Clinic. However, a study that analysed responses to more than 50,000 health queries, captured using Google searches from Berlin, found the top cited source was YouTube. The video-sharing platform is the world's second most visited website, after Google itself, and is owned by Google. Researchers at SE Ranking, a search engine optimisation platform, found YouTube made up 4.43% of all AI Overview citations. No hospital network, government health portal, medical association or academic institution came close to that number, they said. "This matters because YouTube is not a medical publisher," the researchers wrote. "It is a general-purpose video platform...."
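The headline 4.43% figure comes from a domain-level tally across all citations. A minimal sketch of that kind of count, using a handful of invented example URLs rather than SE Ranking's actual 50,000-query dataset:

```python
# Illustrative domain-share tally over AI Overview citations.
# The URLs below are made-up examples, not data from the study.
from collections import Counter
from urllib.parse import urlparse

citations = [
    "https://www.youtube.com/watch?v=abc",
    "https://www.mayoclinic.org/liver-tests",
    "https://www.youtube.com/watch?v=def",
    "https://www.cdc.gov/liver",
]

domains = Counter(urlparse(url).netloc for url in citations)
total = sum(domains.values())
for domain, n in domains.most_common():
    print(f"{domain}: {n / total:.2%}")
```

Run over the study's full citation set, the same tally would put www.youtube.com at the top with 4.43% of all citations.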

In one case that experts said was "dangerous" and "alarming", Google provided bogus information about crucial liver function tests that could have left people with serious liver disease wrongly thinking they were healthy. The company later removed AI Overviews for some but not all medical searches... Hannah van Kolfschooten, a researcher specialising in AI, health and law at the University of Basel who was not involved with the research, said: "This study provides empirical evidence that the risks posed by AI Overviews for health are structural, not anecdotal. It becomes difficult for Google to argue that misleading or harmful health outputs are rare cases.

"Instead, the findings show that these risks are embedded in the way AI Overviews are designed. In particular, the heavy reliance on YouTube rather than on public health authorities or medical institutions suggests that visibility and popularity, rather than medical reliability, is the central driver for health knowledge."

AI

South Korea Launches Landmark Laws To Regulate AI 7

An anonymous reader quotes a report from the Korea Herald: South Korea will begin enforcing its Artificial Intelligence Act on Thursday, becoming the first country to formally establish safety requirements for high-performance, or so-called frontier, AI systems -- a move that sets the country apart in the global regulatory landscape. According to the Ministry of Science and ICT, the new law is designed primarily to foster growth in the domestic AI sector, while also introducing baseline safeguards to address potential risks posed by increasingly powerful AI technologies. Officials described the inclusion of legal safety obligations for frontier AI as a world-first legislative step.

The act lays the groundwork for a national-level AI policy framework. It establishes a central decision-making body -- the Presidential Council on National Artificial Intelligence Strategy -- and creates a legal foundation for an AI Safety Institute that will oversee safety and trust-related assessments. The law also outlines wide-ranging support measures, including research and development, data infrastructure, talent training, startup assistance, and help with overseas expansion.

To reduce the initial burden on businesses, the government plans to implement a grace period of at least one year. During this time, it will not carry out fact-finding investigations or impose administrative sanctions. Instead, the focus will be on consultations and education. A dedicated AI Act support desk will help companies determine whether their systems fall within the law's scope and how to respond accordingly. Officials noted that the grace period may be extended depending on how international standards and market conditions evolve. The law applies to three areas only: high-impact AI, safety obligations for high-performance AI and transparency requirements for generative AI.

Enforcement under the Korean law is intentionally light. It does not impose criminal penalties. Instead, it prioritizes corrective orders for noncompliance, with fines -- capped at 30 million won ($20,300) -- issued only if those orders are ignored. This, the government says, reflects a compliance-oriented approach rather than a punitive one. Transparency obligations for generative AI largely align with those in the EU, but Korea applies them more narrowly. Content that could be mistaken for real, such as deepfake images, video or audio, must clearly disclose its AI-generated origin. For other types of AI-generated content, invisible labeling via metadata is allowed. Personal or noncommercial use of generative AI is excluded from regulation.
"This is not about boasting that we are the first in the world," said Kim Kyeong-man, deputy minister of the office of artificial intelligence policy at the ICT ministry. "We're approaching this from the most basic level of global consensus."

Korea's approach differs from the EU by defining "high-performance AI" using technical thresholds like cumulative training compute, rather than regulating based on how AI is used. As a result, Korea believes no current models meet the bar for regulation, while the EU is phasing in broader, use-based AI rules over several years.
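A threshold on cumulative training compute makes the scope check mechanical. A minimal sketch, with the caveat that the cutoff value below is a placeholder assumption; the article does not state the figure Korea actually uses:

```python
# Sketch of threshold-based scoping: a model is "high-performance" if its
# cumulative training compute exceeds a fixed bar. The 1e26 FLOP value is a
# placeholder assumption, not the threshold in the Korean law.
HIGH_PERFORMANCE_THRESHOLD_FLOP = 1e26

def is_high_performance(training_compute_flop: float) -> bool:
    """Classify a model by training compute alone, regardless of how it is used."""
    return training_compute_flop >= HIGH_PERFORMANCE_THRESHOLD_FLOP

print(is_high_performance(3e25))  # False: below the bar, no safety obligations
print(is_high_performance(2e26))  # True: safety obligations would apply
```

This illustrates the design difference the article notes: the check depends only on a technical property of the model, whereas the EU's use-based rules require classifying the deployment context.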
United States

Trump Signs Defense Bill Prohibiting China-Based Engineers in Pentagon IT Work (propublica.org) 32

President Donald Trump signed into law this month a measure that prohibits anyone based in China and other adversarial countries from accessing the Pentagon's cloud computing systems. From a report: The ban, which is tucked inside the $900 billion defense policy law, was enacted in response to a ProPublica investigation this year that exposed how Microsoft used China-based engineers to service the Defense Department's computer systems for nearly a decade -- a practice that left some of the country's most sensitive data vulnerable to hacking from its leading cyber adversary.

U.S.-based supervisors, known as "digital escorts," were supposed to serve as a check on these foreign employees, but we found they often lacked the expertise needed to effectively supervise engineers with far more advanced technical skills. In the wake of the reporting, leading members of Congress called on the Defense Department to strengthen its security requirements while blasting Microsoft for what some Republicans called "a national betrayal." Cybersecurity and intelligence experts have told ProPublica that the arrangement posed major risks to national security, given that laws in China grant the country's officials broad authority to collect data.

Beer

Heart Association Revives Theory That Light Drinking May Be Good For You 96

An anonymous reader quotes a report from the New York Times: For a while, it seemed the notion that light drinking was good for the heart had gone by the wayside, debunked by new studies and overshadowed by warnings that alcohol causes cancer. Now the American Heart Association has revived the idea in a scientific review that is drawing intense criticism, setting off a new round of debate about alcohol consumption. The paper, which sought to summarize the latest research and was aimed at practicing cardiologists, concluded that light drinking -- one to two drinks a day -- posed no risk for coronary disease, stroke, sudden death and possibly heart failure, and may even reduce the risk of developing these conditions.

Controversy over the influential organization's review has been simmering since it was published in the association's journal Circulation in July. Public health groups and many doctors have warned on the basis of recent studies that alcohol can be harmful even in small amounts. Groups like the European Heart Network and the World Heart Federation have stressed that even modest drinking increases the odds of cardiovascular disease.
"It says in all our guidelines right now, 'If you don't drink, don't start.' There's not enough evidence to suggest conclusively that it prevents heart disease," said Dr. Mariell Jessup, the chief science and medical officer at the heart association, adding that the review was not meant to serve as a guideline and that the group's advice to patients has not changed.

Critics argue that suggesting any heart-health benefits from alcohol is dangerous given its well-documented risks, and they accuse the heart association of selectively weighing studies. They also say a past tie to the alcohol industry by one author should have disqualified him from participating.

"The cardiovascular benefits of moderate drinking are questionable at best," said Dr. Elizabeth Farkouh, an internist and alcohol researcher. "But even if there was a benefit, there are so many other ways to reduce cardiovascular risk that don't come with an associated cancer risk."

The new review's conclusion is also at odds with the CDC's guidance on alcohol, which notes that "even moderate drinking may increase your risk of death and other alcohol-related harms, compared to not drinking." It also seems to diverge from the heart association's diet and lifestyle recommendation to consume "limited or preferably no alcohol," along with its 2023 statement that recent research suggests there is "no safe level of alcohol use."
Medicine

Science Journal Retracts Study On Safety of Monsanto's Roundup (theguardian.com) 44

An anonymous reader quotes a report from the Guardian: The journal Regulatory Toxicology and Pharmacology has formally retracted a sweeping scientific paper published in 2000 that became a key defense for Monsanto's claim that Roundup herbicide and its active ingredient glyphosate don't cause cancer. Martin van den Berg, the journal's editor in chief, said in a note accompanying the retraction that he had taken the step because of "serious ethical concerns regarding the independence and accountability of the authors of this article and the academic integrity of the carcinogenicity studies presented."

The paper, titled Safety Evaluation and Risk Assessment of the Herbicide Roundup and Its Active Ingredient, Glyphosate, for Humans, concluded that Monsanto's glyphosate-based weed killers posed no health risks to humans -- no cancer risks, no reproductive risks, no adverse effects on development of endocrine systems in people or animals. Regulators around the world have cited the paper as evidence of the safety of glyphosate herbicides, including the Environmental Protection Agency (EPA) in this assessment (PDF). [...]

In explaining the decision to retract the 25-year-old research paper, Van den Berg wrote: "Concerns were raised regarding the authorship of this paper, validity of the research findings in the context of misrepresentation of the contributions by the authors and the study sponsor and potential conflicts of interest of the authors." He noted that the paper's conclusions regarding the carcinogenicity of glyphosate were solely based on unpublished studies from Monsanto, ignoring other outside, published research.
"The retraction of this study is a long time coming," said Brent Wisner, one of the lead lawyers in the Roundup litigation and a key player in getting the internal documents revealed to the public. Wisner said the study was the "quintessential example of how companies like Monsanto could fundamentally undermine the peer-review process through ghostwriting, cherrypicking unpublished studies, and biased interpretations."

"This garbage ghostwritten study finally got the fate it deserved," Wisner added. "Hopefully, journals will now be more vigilant in protecting the impartiality of science on which so many people depend."
EU

EU Eyes Banning Huawei, ZTE Corp From Mobile Networks of Member Countries (archive.ph) 21

The European Commission is considering turning its non-binding 2020 guidance on "high-risk vendors" into a legal requirement that would effectively force EU member states to phase out Huawei and ZTE from mobile and fixed-line networks. Bloomberg reports: Commission Vice President Henna Virkkunen wants to convert the European Commission's 2020 recommendation to stop using high-risk vendors in mobile networks into a legal requirement, according to the people, who asked not to be identified because the negotiations are private. While infrastructure decisions rest with national governments, Virkkunen's proposal would compel EU countries to align with the commission's security guidance.

The EU is increasingly focused on the risks posed by Chinese telecom equipment makers as trade and political ties with its second-largest trading partner fray. The concern is that handing over control of critical national infrastructure to companies with such close ties to Beijing could compromise national security interests.

Virkkunen is examining ways to limit the use of Chinese equipment suppliers in fixed-line networks, as countries push for the rapid deployment of state-of-the-art fiber cables to expand high-speed internet access. The commission is also considering measures to dissuade non-EU countries from relying on Chinese vendors, including by withholding Global Gateway funding from nations that use the grants for projects involving Huawei equipment, according to the people.

Bitcoin

European Banks To Launch Euro Stablecoin In Bid To Counter US Dominance (reuters.com) 33

Nine major European banks are creating a Netherlands-based company to launch a euro-backed stablecoin in 2026, aiming to counter U.S. dominance in the digital token market. Reuters reports: While global stablecoin issuance stands at nearly $300 billion, euro-denominated stablecoins totalled just $620 million, according to figures released last week by the Bank of Italy, with dollar-pegged tokens overwhelmingly dominant. "The initiative will provide a real European alternative to the U.S.-dominated stablecoin market, contributing to Europe's strategic autonomy in payments," the banks said. They launched the effort, which they said will create a token that can be used for quick, low-cost payments and settlements, even as the European Central Bank voices scepticism over stablecoins.

ECB President Christine Lagarde in June told European policymakers that privately issued stablecoins posed risks for monetary policy and financial stability. As a safer alternative, she has urged European lawmakers to introduce legislation backing the launch of a digital version of the EU's single currency. Some commercial banks, however, have pushed back against the introduction of a digital euro, fearing that it would empty their coffers as customers transfer cash out of banks and into the safety of an ECB-guaranteed wallet. In addition to ING and UniCredit, the other banks participating in the new company include Banca Sella, KBC, DekaBank, Danske Bank, SEB, Caixabank, and Raiffeisen Bank International. They said that others could join the initiative, and a CEO for the company would be appointed soon.
According to a recent report by Deutsche Bank, emerging market economies are adopting dollar-based stablecoins to replace local deposits and cash. "This has created a global monetary dilemma: countries should adopt stablecoins or risk being left behind. Europe is under particular pressure."
The Military

Nations Meet At UN For 'Killer Robot' Talks (reuters.com) 35

An anonymous reader quotes a report from Reuters: Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology. Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace, and internationally binding standards remain virtually non-existent. Since 2014, countries that are part of the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban on fully autonomous systems that operate without meaningful human control, and to regulate others. U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking. Alexander Kmentt, head of arms control at Austria's foreign ministry, said that must quickly change.

"Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don't come to pass," he told Reuters. Monday's gathering of the U.N. General Assembly in New York will be the body's first meeting dedicated to autonomous weapons. Though not legally binding, diplomatic officials want the consultations to ramp up pressure on military powers that are resisting regulation due to concerns the rules could dull the technology's battlefield advantages. Campaign groups hope the meeting, which will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors, will push states to agree on a legal instrument. They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.
"This issue needs clarification through a legally binding treaty. The technology is moving so fast," said Patrick Wilcken, Amnesty International's Researcher on Military, Security and Policing. "The idea that you wouldn't want to rule out the delegation of life or death decisions ... to a machine seems extraordinary."

In 2023, 164 states signed a U.N. General Assembly resolution calling for the international community to urgently address the risks posed by autonomous weapons.
Earth

Study Finds Almost 200 Pesticides in European Homes (theguardian.com) 25

Almost 200 pesticides have been found by a study examining dust in homes around Europe, as scientists say regulators need to take "toxic cocktails" of chemicals into account when banning or restricting the use of pesticides. From a report: Scientists say their research supports the idea that regulators should assess the risks posed by pesticides when they react with other chemicals, as well as individually. They say this should apply to substances already in use, as well as those yet to be approved.

In preliminary findings from the largest study of its kind, scientists examining household dust from homes in 10 European countries in 2021 detected 197 pesticides in total. More than 40% of the pesticides found in the dust have been linked to highly toxic effects, including cancer and disruption of the hormonal system in humans.

The number of pesticides in each home ranged between 25 and 121, and levels of pesticides tended to be higher in the homes of farmers. Prof Paul Scheepers, of the Radboud Institute for Biological and Environmental Sciences, said: "We have many epidemiological studies showing that diseases are associated with mixtures of pesticides."

Earth

Mysterious Radiation Belts Detected Around Earth After Epic Solar Storm 16

After the powerful solar storm of May 2024, scientists detected two new temporary radiation belts around Earth -- one of which contained something we had never seen before: energetic protons. ScienceAlert reports: "These are really high-energy electrons and protons that have found their way into Earth's inner magnetic environment," says astronomer David Sibeck of NASA's Goddard Space Flight Center, who was not involved with the research. "Some might stay in this place for a very long time." In fact, the belts remained intact for much longer than previous temporary radiation belts generated by solar storms: three months, compared to the weeks we'd normally expect.

Subsequent solar storms in June and August of 2024 knocked most of the particles out of orbit, significantly diminishing the density of the belts. A small amount, however, still remains up there, hanging out with Earth. What's more, the proton belt may remain intact for over a year. Ongoing measurements will help scientists measure its longevity and decay rate.

This is important information to have: particles in Earth orbit can pose a hazard to satellites hanging out up there, so knowing the particle density and the effects solar storms can have thereon can help engineers design mitigation strategies to protect our technology. At the moment, though, the hazard posed by the new radiation belts is unquantified. Future studies will be needed to determine the risks these, and future belts, might pose.
The findings have been published in the Journal of Geophysical Research: Space Physics.
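Estimating a decay rate from repeated flux measurements is a standard fit. A rough sketch, assuming exponential decay N(t) = N0·exp(-t/τ) and fitting the e-folding time τ by least squares on log-counts; the measurement values below are invented for illustration, not actual belt data:

```python
# Fit an exponential decay N(t) = N0 * exp(-t / tau) to flux measurements
# via linear regression of log(flux) against time; slope = -1/tau.
# The numbers are made-up illustrative values, not real belt measurements.
import math

days = [0, 30, 60, 90]
flux = [1000.0, 560.0, 310.0, 175.0]  # particle counts, invented

n = len(days)
mean_t = sum(days) / n
mean_y = sum(math.log(f) for f in flux) / n
slope = sum((t - mean_t) * (math.log(f) - mean_y) for t, f in zip(days, flux)) / \
        sum((t - mean_t) ** 2 for t in days)
tau = -1 / slope
print(f"estimated e-folding time: {tau:.0f} days")
```

Tracking how τ changes after each new solar storm is essentially what the ongoing measurements described above would feed into.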
Earth

Supreme Court Allows Hawaii To Sue Oil Companies Over Climate Change Effects (cbsnews.com) 75

An anonymous reader quotes a report from CBS News: The Supreme Court on Monday said it will not consider whether to quash lawsuits brought by Honolulu seeking billions of dollars from oil and gas companies for the damage caused by the effects of climate change, clearing the way for the cases to move forward. The legal battle pursued in Hawaii state court is similar to others filed against the nation's largest energy companies by state and local governments in their courts. The suits claim that the oil and gas industry engaged in a deceptive campaign and misled the public about the dangers of their fossil fuel products and the environmental impacts.

A group of 15 energy companies asked the Supreme Court to review a decision from the Hawaii Supreme Court that allowed a lawsuit brought by the city and county of Honolulu, as well as its Board of Water Supply, to proceed. The suit was brought in Hawaii state court in March 2020, and Honolulu raised (PDF) several claims under state law, including creating a public nuisance and failure to warn the public of the risks posed by their fossil fuel products. The city accused the oil and gas industry of contributing to global climate change, leading to flooding, erosion and more frequent and intense extreme weather events. These changes, they said, have led to property damage and a drop in tax revenue as a result of less tourism.

The energy companies unsuccessfully sought to have the case moved to federal court, arguing that the claims raised by Honolulu under state law were overridden by federal law and the Clean Air Act. A state trial court denied their efforts to dismiss the case. The oil and gas industry has argued that greenhouse-gas emissions "flow from billions of daily choices, over more than a century, by governments, companies and individuals about what types of fuels to use, and how to use them." Honolulu, the companies said, was seeking damages for the "cumulative effect of worldwide emissions leading to global climate change." The Hawaii Supreme Court ultimately allowed (PDF) the lawsuit to proceed. The state's highest court determined that the Clean Air Act displaced federal common law governing suits seeking damages for interstate pollution. It also rejected the oil companies' argument that Honolulu was seeking to regulate emissions through its lawsuit, finding that the city instead wanted to challenge the promotion and sale of fossil fuel products "without warning and abetted by a sophisticated disinformation campaign."

"Plaintiffs' state tort law claims do not seek to regulate emissions, and there is thus no 'actual conflict' between Hawaii tort law and the [Clean Air Act]," the Hawaii Supreme Court ruled. "These claims potentially regulate marketing conduct while the CAA regulates pollution." The oil companies asked the U.S. Supreme Court to review the ruling from the Hawaii high court and urged it to stop Honolulu's lawsuit from going forward. Regulation of interstate pollution is a federal area governed by federal law, lawyers for the energy industry argued. [...] The Supreme Court in June asked the Biden administration to weigh in on the cases and whether it should step into the dispute. In a filing submitted to the Supreme Court before the transfer of presidential power, the Biden administration urged the justices to turn away the appeals, in part because it said it is too soon for them to intervene.

United States

Congress Funds Removal of Chinese Telecom Gear as Feds Probe Home Router Risks (msn.com) 43

Congress approved $3 billion Wednesday for a long-languishing project to cull Chinese equipment from networks nationwide over fears they are vulnerable to cyberattacks, underscoring the risk Beijing-sponsored hackers pose to phone and internet networks. From a report: The new funding comes as the Commerce Department reviews whether to ban routers made by the Chinese-owned company TP-Link, which account for more than half of the U.S. retail router market.

The actions reflect the heightened attention among Washington policymakers to the threat posed by Chinese state-linked hackers. U.S. officials revealed the "Volt Typhoon" hack last year and in recent months have expressed alarm over the even bigger "Salt Typhoon" hack. In both cases, Chinese government hackers successfully penetrated major U.S. phone networks and critical infrastructure facilities, and U.S. officials said they still have not been able to expel the Salt Typhoon interlopers.

China

Chinese Hacker Singlehandedly Responsible For Exploiting 81,000 Sophos Firewalls, DOJ Says (cybernews.com) 16

An anonymous reader shares a report: A Chinese hacker indicted earlier this month and the PRC-based cybersecurity company he worked for are both sanctioned by the US government for compromising "tens of thousands of firewalls" -- some protecting US critical infrastructure, putting human lives at risk.

In a series of coordinated actions, the US Treasury Department's Office of Foreign Assets Control (OFAC), the Department of Justice (DoJ), and the FBI said the massive cyber espionage campaign, which compromised at least 36 firewalls protecting US critical infrastructure, posed significant risks to national security.

A federal court in Indiana earlier this month unsealed an indictment charging 30-year-old Guan Tianfeng (Guan) with conspiracy to commit computer and wire fraud by hacking into firewall devices worldwide, including one "used by an agency of the United States." Guan, employed by the Chinese cybersecurity firm Sichuan Silence -- a known contractor for Beijing intelligence -- was alleged to have discovered a zero-day vulnerability in firewall products manufactured by UK cybersecurity firm Sophos.

Transportation

Two Drone Pilots Arrested Near Boston, and Drones Cause One-Hour Runway Closure at Airport North of New York City (go.com) 89

Saturday night two men were arrested near Boston "following a hazardous drone operation near Logan Airport's airspace," according to a police statement. They credit an officer "leveraging advanced UAS monitoring technology" who "identified the drone's location, altitude, flight history, and the operators' position." Recognizing the serious risks posed by the drone's proximity to Logan's airspace, additional resources were mobilized. The Boston Police Department coordinated with Homeland Security, the Massachusetts State Police, the Joint Terrorism Task Force, the Federal Communications Commission (FCC), and Logan Airport Air Traffic Control to address the situation.
"Both suspects face charges of trespassing, with additional fines or charges potentially forthcoming."

Meanwhile on Friday night "Officials at Stewart International Airport, located roughly 60 miles north of New York City, said they shut down their runways for an hour," reports ABC News, after America's Federal Aviation Administration "alerted them that a drone was spotted in the area around 9:30 p.m." Though officials say flight operations weren't impacted during the closure, the article notes that New York's governor is now calling for federal assistance, including more federal law enforcement officers, saying "This has gone too far." [Governor Hochul] called on Congress to pass the Counter-UAS Authority Security, Safety, and Reauthorization Act, which would strengthen the FAA's oversight of drones and give more authority to state and local law enforcement agencies to investigate the activity.
The article explores the larger problem of Americans reporting drone sightings: Officials from a wide range of federal agencies spoke with reporters Saturday on a phone call and emphasized that the federal investigation into drone sightings in New Jersey is ongoing. One FBI official said that out of the nearly 5,000 tips they have received, less than 100 have generated credible leads for further investigation. A Department of Homeland Security official said that they are "confident that many of the reported drone sightings are, in fact, manned aircraft being misidentified as drones." The FBI official also talked about how investigators overlaid the locations of the reported drone sightings and found that "the density of reported sightings matches the approach pattern" of the New York area's busy airports including Newark-Liberty, JFK, and LaGuardia.
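The overlay the FBI official describes amounts to checking whether sighting locations cluster near airport approach paths. A hedged sketch using great-circle distance: the airport coordinates are real, but the sighting point and the crude 15 km "approach corridor" radius are illustrative assumptions (real approach patterns are elongated corridors, not circles):

```python
# Sketch of overlaying reported sightings with airport locations: flag any
# sighting within a radius of a major New York-area airport. The radius and
# sample sighting are illustrative assumptions.
import math

AIRPORTS = {
    "EWR": (40.6895, -74.1745),  # Newark-Liberty
    "JFK": (40.6413, -73.7781),
    "LGA": (40.7769, -73.8740),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def near_approach(lat, lon, radius_km=15.0):
    """Return airports whose (crudely circular) approach area covers the point."""
    return [code for code, (alat, alon) in AIRPORTS.items()
            if haversine_km(lat, lon, alat, alon) <= radius_km]

print(near_approach(40.70, -74.15))  # a sighting a few km from Newark
```

If the density of flagged points tracks the approach corridors, that supports the misidentified-aircraft explanation the officials offered.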

But, an FAA official says that there have "without a doubt" been drones flying over New Jersey, pointing to the fact that there are nearly a million drones registered in the U.S. "With nearly a million registered [unmanned aircraft systems] in the United States, there's no doubt many of them are owned and operated here within the state," the FAA official said... A Joint Chiefs of Staff official said that there have been visual sightings of drones reported by "highly trained security personnel" near Picatinny Arsenal and Naval Weapons Station Earle in New Jersey. The official said that they do not believe the sightings "were aligned with a foreign actor, or that they had malicious intent."

"We don't know what activity is. We don't know if it is criminal, but I will tell you that it is irresponsible," the official said. "Here on the military side, we are just as frustrated with the irresponsible nature of this activity."

Later ABC News reported that the FAA had imposed temporary drone flight restrictions in New Jersey over the Picatinny Arsenal military base. They added that America's Homeland Security Secretary Alejandro Mayorkas said the federal government is taking action to address the aerial drones that have prompted concern among New Jersey residents. "I want to assure the American public that we in the federal government have deployed additional resources, personnel, technology to assist the New Jersey State Police in addressing the drone sightings...." There have been numerous reports of drone activity along the East Coast since November. Mayorkas cited a 2023 change to a Federal Aviation Administration rule that allows drones to fly at night as a possible reason for the uptick in sightings. "I want to assure the American public that we are on it," he said, before calling on Congress to expand local and state authority to help address the issue.

"It is critical, as we all have said for a number of years, that we need from Congress additional authorities to address the drone situation," Mayorkas said. "Our authorities currently are limited and they are set to expire. We need them extended and expanded... We want state and local authorities to also have the ability to counter growing activity under federal supervision," he added, echoing sentiments from local officials...

Addressing national security concerns the sightings have prompted, Mayorkas said the U.S. knows of no foreign involvement and that it remains "vigilant" in investigating the drone sightings. [ABC News anchor George] Stephanopoulos pressed Mayorkas about past security threats drones have caused, including the arrest of a Chinese national last week who allegedly flew a drone over an Air Force base in California. "When a drone is flown over restricted airspace, we act very, very swiftly," the homeland security secretary said. "In fact, when an individual in California flew a drone over restricted airspace, that individual was identified, apprehended and is being charged by federal authorities."

AI

LLM Attacks Take Just 42 Seconds On Average, 20% of Jailbreaks Succeed (scworld.com) 79

spatwei shared an article from SC World: Attacks on large language models (LLMs) take less than a minute to complete on average, and leak sensitive data 90% of the time when successful, according to Pillar Security.

Pillar's State of Attacks on GenAI report, published Wednesday, revealed new insights on LLM attacks and jailbreaks, based on telemetry data and real-life attack examples from more than 2,000 AI applications. LLM jailbreaks successfully bypass model guardrails in one out of every five attempts, the Pillar researchers also found, with the speed and ease of LLM exploits demonstrating the risks posed by the growing generative AI (GenAI) attack surface...

The more than 2,000 LLM apps studied for the State of Attacks on GenAI report spanned multiple industries and use cases, with virtual customer support chatbots being the most prevalent use case, making up 57.6% of all apps.

Common jailbreak techniques included "ignore previous instructions" and "ADMIN override", or just using base64 encoding. "The Pillar researchers found that attacks on LLMs took an average of 42 seconds to complete, with the shortest attack taking just 4 seconds and the longest taking 14 minutes to complete."

"Attacks also only involved five total interactions with the LLM on average, further demonstrating the brevity and simplicity of attacks."
Security

OpenAI Says China-Linked Group Tried to Phish Its Employees (yahoo.com) 21

OpenAI said a group with apparent ties to China tried to carry out a phishing attack on its employees, reigniting concerns that bad actors in Beijing want to steal sensitive information from top US artificial intelligence companies. From a report: The AI startup said Wednesday that a suspected China-based group called SweetSpecter posed as a user of OpenAI's chatbot ChatGPT earlier this year and sent customer support emails to staff. The emails included malware attachments that, if opened, would have allowed SweetSpecter to take screenshots and exfiltrate data, OpenAI said, but the attempt was unsuccessful.

"OpenAI's security team contacted employees who were believed to have been targeted in this spear phishing campaign and found that existing security controls prevented the emails from ever reaching their corporate emails," OpenAI said. The disclosure highlights the potential cybersecurity risks for leading AI companies as the US and China are locked in a high-stakes battle for artificial intelligence supremacy. In March, for example, a former Google engineer was charged with stealing AI trade secrets for a Chinese firm.

AI

AI Agent Promotes Itself To Sysadmin, Trashes Boot Sequence 86

The Register's Thomas Claburn reports: Buck Shlegeris, CEO at Redwood Research, a nonprofit that explores the risks posed by AI, recently learned an amusing but hard lesson in automation when he asked his LLM-powered agent to open a secure connection from his laptop to his desktop machine. "I expected the model would scan the network and find the desktop computer, then stop," Shlegeris explained to The Register via email. "I was surprised that after it found the computer, it decided to continue taking actions, first examining the system and then deciding to do a software update, which it then botched." Shlegeris documented the incident in a social media post.

He created his AI agent himself. It's a Python wrapper consisting of a few hundred lines of code that allows Anthropic's powerful large language model Claude to generate some commands to run in bash based on an input prompt, run those commands on Shlegeris' laptop, and then access, analyze, and act on the output with more commands. Shlegeris directed his AI agent to try to SSH from his laptop to his desktop Ubuntu Linux machine, without knowing the IP address [...]. As a log of the incident indicates, the agent tried to open an SSH connection, and failed. So Shlegeris tried to correct the bot. [...]
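The wrapper pattern described here — the model proposes a shell command, the harness executes it and feeds the output back for the next step — can be sketched in a few lines. This is not Shlegeris's actual code; the `scripted_model` below is a hypothetical stand-in for the Claude API call so the sketch runs offline:

```python
import subprocess

def agent_loop(model, task: str, max_steps: int = 5) -> list[tuple[str, str]]:
    """Minimal command-running agent: the model proposes a bash command,
    we execute it, and append the output to the context for the next step."""
    transcript = []
    context = f"Task: {task}"
    for _ in range(max_steps):
        command = model(context)  # a real agent would call the LLM API here
        if command.strip() == "DONE":
            break
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True, timeout=30)
        output = result.stdout + result.stderr
        transcript.append((command, output))
        context += f"\n$ {command}\n{output}"
    return transcript

def scripted_model(context: str) -> str:
    """Hypothetical stand-in for the LLM: run one command, then stop."""
    return "echo hi" if "$" not in context else "DONE"

print(agent_loop(scripted_model, "greet the user"))
```

The failure mode in the story falls straight out of this structure: nothing in the loop distinguishes "scan the network" from "edit the bootloader config" — every command the model emits runs with the caller's privileges.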

The AI agent responded it needed to know the IP address of the device, so it then turned to the network mapping tool nmap on the laptop to find the desktop box. Unable to identify devices running SSH servers on the network, the bot tried other commands such as "arp" and "ping" before finally establishing an SSH connection. No password was needed due to the use of SSH keys; the user buck was also a sudoer, granting the bot full access to the system. Shlegeris's AI agent, once it was able to establish a secure shell connection to the Linux desktop, then decided to play sysadmin and install a series of updates using the package manager Apt. Then things went off the rails.

"It looked around at the system info, decided to upgrade a bunch of stuff including the Linux kernel, got impatient with Apt and so investigated why it was taking so long, then eventually the update succeeded but the machine doesn't have the new kernel so edited my Grub [bootloader] config," Buck explained in his post. "At this point I was amused enough to just let it continue. Unfortunately, the computer no longer boots." Indeed, the bot got as far as messing up the boot configuration, so that following a reboot by the agent for updates and changes to take effect, the desktop machine wouldn't successfully start.
United States

EPA Must Address Fluoridated Water's Risk To Children's IQs, US Judge Rules (reuters.com) 153

An anonymous reader quotes a report from Reuters: A federal judge in California has ordered the U.S. Environmental Protection Agency to strengthen regulations for fluoride in drinking water, saying the compound poses an unreasonable potential risk to children at levels that are currently typical nationwide. U.S. District Judge Edward Chen in San Francisco on Tuesday sided (PDF) with several advocacy groups, finding the current practice of adding fluoride to drinking water supplies to fight cavities presented unreasonable risks for children's developing brains.

Chen said the advocacy groups had established during a non-jury trial that fluoride posed an unreasonable risk of harm sufficient to require a regulatory response by the EPA under the Toxic Substances Control Act. "The scientific literature in the record provides a high level of certainty that a hazard is present; fluoride is associated with reduced IQ," wrote Chen, an appointee of Democratic former President Barack Obama. But the judge stressed he was not concluding with certainty that fluoridated water endangered public health. [...] The EPA said it was reviewing the decision.

"The court's historic decision should help pave the way towards better and safer fluoride standards for all," Michael Connett, a lawyer for the advocacy groups, said in a statement on Wednesday.
AI

AI Pioneers Call For Protections Against 'Catastrophic Risks' 69

An anonymous reader quotes a report from the New York Times: Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology. The release of ChatGPT and a string of similar services that can create text and images on command have shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it. In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that "loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity."

If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. "If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?" Dr. Hadfield said. On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by the Safe AI Forum, a project of a nonprofit research group in the United States called Far.AI. Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors. The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.

Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.
