AI

New Research Reveals AI Lacks Independent Learning, Poses No Existential Threat (neurosciencenews.com) 129

ZipNada writes: New research reveals that large language models (LLMs) like ChatGPT cannot learn independently or acquire new skills without explicit instructions, making them predictable and controllable. The study dispels fears of these models developing complex reasoning abilities, emphasizing that while LLMs can generate sophisticated language, they are unlikely to pose existential threats. However, the potential misuse of AI, such as generating fake news, still requires attention. The study, published today as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024) -- the premier international conference in natural language processing -- reveals that LLMs have a superficial ability to follow instructions and excel at language proficiency, but they have no potential to master new skills without explicit instruction. This means they remain inherently controllable, predictable and safe. "The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the new study on the 'emergent abilities' of LLMs.

Professor Iryna Gurevych added: "... our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news."
Earth

Should We Fight Climate Change by Releasing Sulfur Dioxide into the Stratosphere? (japantimes.co.jp) 288

A professor in the University of Chicago's department of geophysical sciences "believes that by intentionally releasing sulfur dioxide into the stratosphere, it would be possible to lower temperatures worldwide," reports the New York Times.

He's not the only one promoting the idea. "Harvard University has a solar geoengineering program that has received grants from Microsoft co-founder Bill Gates, the Alfred P. Sloan Foundation and the William and Flora Hewlett Foundation. It's being studied by the Environmental Defense Fund along with the World Climate Research Program.... But many scientists and environmentalists fear that it could result in unpredictable calamities." Because it would be used in the stratosphere and not limited to a particular area, solar geoengineering could affect the whole world, possibly scrambling natural systems, like creating rain in one arid region while drying out the monsoon season elsewhere. Opponents worry it would distract from the urgent work of transitioning away from fossil fuels. They object to intentionally releasing sulfur dioxide, a pollutant that would eventually move from the stratosphere to ground level, where it can irritate the skin, eyes, nose and throat and can cause respiratory problems. And they fear that once begun, a solar geoengineering program would be difficult to stop...

Keith, a professor in the University of Chicago's department of geophysical sciences, countered that the risks posed by solar geoengineering are well understood, not as severe as portrayed by critics and dwarfed by the potential benefits. If the technique slowed the warming of the planet by even just 1 degree Celsius, or 1.8 degrees Fahrenheit, over the next century, Keith said, it could help prevent millions of heat-related deaths each decade...

Opponents of solar geoengineering cite several main risks. They say it could create a "moral hazard," mistakenly giving people the impression that it is not necessary to rapidly reduce fossil fuel emissions. The second main concern has to do with unintended consequences. "This is a really dangerous path to go down," said Beatrice Rindevall, the chair of the Swedish Society for Nature Conservation, which opposed the experiment. "It could shock the climate system, could alter hydrological cycles and could exacerbate extreme weather and climate instability." And once solar geoengineering began to cool the planet, stopping the effort abruptly could result in a sudden rise in temperatures, a phenomenon known as "termination shock." The planet could experience "potentially massive temperature rise in an unprepared world over a matter of five to 10 years, hitting the Earth's climate with something that it probably hasn't seen since the dinosaur-killing impactor," Pierrehumbert said. On top of all this, there are fears about rogue actors using solar geoengineering and concerns that the technology could be weaponized. Not to mention the fact that sulfur dioxide can harm human health.

Keith is adamant that those fears are overblown. And while there would be some additional air pollution, he claims the risk is negligible compared to the benefits.

The opposition is making it hard to even conduct tests, according to the article — like when Keith "wanted to release a few pounds of mineral dust at an altitude of roughly 20 kilometers and track how the dust behaved as it floated across the sky."

The experiment was called off after opposition from numerous groups — including Greta Thunberg and an organization representing Indigenous people who felt the experiment was disrespecting nature.
China

US To Issue Proposed Rules Limiting Chinese Vehicle Software in August (reuters.com) 31

The U.S. Commerce Department plans to issue proposed rules on connected vehicles next month and expects to impose limits on some software made in China and other countries deemed adversaries, a senior official said Tuesday. From a report: "We're looking at a few components and some software - not the whole car - but it would be some of the key driver components of the vehicle that manage the software and manage the data around that car that would have to be made in an allied country," said export controls chief Alan Estevez at a forum in Colorado.

In May, Commerce Secretary Gina Raimondo said her department planned to issue proposed rules on Chinese connected vehicles this autumn. She had said the Biden administration could take "extreme action" and ban Chinese connected vehicles or impose restrictions on them, after the administration launched a probe in February into whether Chinese vehicle imports posed national security risks.

AI

OpenAI Says It Has Begun Training a New Flagship AI Model (nytimes.com) 40

OpenAI said on Tuesday that it has begun training a new flagship AI model that would succeed the GPT-4 technology that drives its popular online chatbot, ChatGPT. From a report: The San Francisco start-up, which is one of the world's leading A.I. companies, said in a blog post that it expects the new model to bring "the next level of capabilities" as it strives to build "artificial general intelligence," or A.G.I., a machine that can do anything the human brain can do. The new model would be an engine for A.I. products including chatbots, digital assistants akin to Apple's Siri, search engines and image generators.

OpenAI also said it was creating a new Safety and Security Committee to explore how it should handle the risks posed by the new model and future technologies. "While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment," the company said. OpenAI is aiming to move A.I. technology forward faster than its rivals, while also appeasing critics who say the technology is becoming increasingly dangerous, helping to spread disinformation, replace jobs and even threaten humanity. Experts disagree on when tech companies will reach artificial general intelligence, but companies including OpenAI, Google, Meta and Microsoft have steadily increased the power of A.I. technologies for more than a decade, demonstrating a noticeable leap roughly every two to three years.

United States

US Will Investigate National Security Risks Posed By Chinese-made 'Smart Cars' (nbcnews.com) 68

Citing potential national security risks, the Biden administration says it will investigate Chinese-made "smart cars" that can gather sensitive information about Americans driving them. From a report: The probe could lead to new regulations aimed at preventing China from using sophisticated technology in electric cars and other so-called connected vehicles to track drivers and their personal information. Officials are concerned that features such as driver assistance technology could be used to effectively spy on Americans.

While the action stops short of a ban on Chinese imports, President Joe Biden said he is taking unprecedented steps to safeguard Americans' data. "China is determined to dominate the future of the auto market, including by using unfair practices," Biden said in a statement Thursday. "China's policies could flood our market with its vehicles, posing risks to our national security. I'm not going to let that happen on my watch." Biden and other officials noted that China has imposed wide-ranging restrictions on American autos and other foreign vehicles.
Commerce Secretary Gina Raimondo said connected cars "are like smart phones on wheels" and pose a serious national security risk.
AI

Microsoft AI Engineer Says Company Thwarted Attempt To Expose DALL-E 3 Safety Problems (geekwire.com) 78

Todd Bishop reports via GeekWire: A Microsoft AI engineering leader says he discovered vulnerabilities in OpenAI's DALL-E 3 image generator in early December allowing users to bypass safety guardrails to create violent and explicit images, and that the company impeded his previous attempt to bring public attention to the issue. The emergence of explicit deepfake images of Taylor Swift last week "is an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL-E 3 from public use and reported my concerns to Microsoft," writes Shane Jones, a Microsoft principal software engineering lead, in a letter Tuesday to Washington state's attorney general and Congressional representatives.

404 Media reported last week that the fake explicit images of Swift originated in a "specific Telegram group dedicated to abusive images of women," noting that at least one of the AI tools commonly used by the group is Microsoft Designer, which is based in part on technology from OpenAI's DALL-E 3. "The vulnerabilities in DALL-E 3, and products like Microsoft Designer that use DALL-E 3, makes it easier for people to abuse AI in generating harmful images," Jones writes in the letter to U.S. Sens. Patty Murray and Maria Cantwell, Rep. Adam Smith, and Attorney General Bob Ferguson, which was obtained by GeekWire. He adds, "Microsoft was aware of these vulnerabilities and the potential for abuse."

Jones writes that he discovered the vulnerability independently in early December. He reported the vulnerability to Microsoft, according to the letter, and was instructed to report the issue to OpenAI, the Redmond company's close partner, whose technology powers products including Microsoft Designer. He writes that he did report it to OpenAI. "As I continued to research the risks associated with this specific vulnerability, I became aware of the capacity DALL-E 3 has to generate violent and disturbing harmful images," he writes. "Based on my understanding of how the model was trained, and the security vulnerabilities I discovered, I reached the conclusion that DALL-E 3 posed a public safety risk and should be removed from public use until OpenAI could address the risks associated with this model."

On Dec. 14, he writes, he posted publicly on LinkedIn urging OpenAI's non-profit board to withdraw DALL-E 3 from the market. He informed his Microsoft leadership team of the post, according to the letter, and was quickly contacted by his manager, saying that Microsoft's legal department was demanding that he delete the post immediately, and would follow up with an explanation or justification. He agreed to delete the post on that basis but never heard from Microsoft legal, he writes. "Over the following month, I repeatedly requested an explanation for why I was told to delete my letter," he writes. "I also offered to share information that could assist with fixing the specific vulnerability I had discovered and provide ideas for making AI image generation technology safer. Microsoft's legal department has still not responded or communicated directly with me." "Artificial intelligence is advancing at an unprecedented pace. I understand it will take time for legislation to be enacted to ensure AI public safety," he adds. "At the same time, we need to hold companies accountable for the safety of their products and their responsibility to disclose known risks to the public. Concerned employees, like myself, should not be intimidated into staying silent."
The full text of Jones' letter can be read here (PDF).
Transportation

GPS Interference Now a Major Flight Safety Concern For Airline Industry (theregister.com) 41

An anonymous reader quotes a report from The Register: Europe's aviation safety body is working with the airline industry to counter a danger posed by interference with GPS signals -- now seen as a growing threat to the safety of air travel. The European Union Aviation Safety Agency (EASA) and the International Air Transport Association (IATA) held a recent workshop on incidents where people spoofed and jammed satellite navigation systems, and concluded these pose a "significant challenge" to safety. Mitigating the risks posed by such actions will require measures to be enacted in the short term as well as medium and long term timescales, the two bodies said. They want to start by sharing information about the incidents and any potential remedies.

In Europe, this information sharing will occur through the European Occurrence Reporting scheme and EASA's Data4Safety program. Given the global nature of the problem, a broader solution would be better, but this would have to be pursued at a later date, EASA said. Inevitably, another of the measures involves retaining traditional navigation aids to ensure there is a conventional backup for GNSS navigation, while a third calls for guidance from aircraft manufacturers to airlines and other aircraft operators to ensure they know how to manage jamming and spoofing situations. As a further measure, EASA said it will inform all relevant stakeholders, which includes airlines, air navigation service providers, airports and the air industry, about recorded incidents.

Interference with global navigation systems can take one of two forms: jamming requires nothing more than transmitting a radio signal strong enough to drown out those from GPS satellites, while spoofing is more insidious and involves transmitting fake signals that fool the receiver into calculating its position incorrectly. According to EASA, jamming and spoofing incidents have increasingly threatened the integrity of location services across Eastern Europe and the Middle East in recent years. [...] The IATA said that coordinated action is needed, including sharing of safety data and a commitment from nations to retain traditional navigation systems as backup.
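The jamming/spoofing distinction above points at one of the simplest receiver-side sanity checks: a spoofed position solution often "teleports," implying a ground speed no aircraft can fly. A minimal illustrative sketch of that idea (this is not an EASA-specified mitigation, and the speed threshold below is an invented figure):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def plausible_fix(prev_fix, new_fix, dt_s, max_speed_kmh=1100.0):
    """Reject a GPS fix whose implied ground speed is impossible.

    prev_fix and new_fix are (lat, lon) tuples; dt_s is the time between
    fixes in seconds. A spoofed signal frequently jumps the computed
    position, which shows up as an absurd implied speed.
    """
    dist_km = haversine_km(*prev_fix, *new_fix)
    implied_kmh = dist_km / (dt_s / 3600.0)
    return implied_kmh <= max_speed_kmh

# A fix that jumps thousands of km in 10 seconds is clearly not genuine:
print(plausible_fix((52.0, 4.0), (30.0, 31.0), 10.0))    # False
print(plausible_fix((52.0, 4.0), (52.02, 4.01), 10.0))   # True
```

Real avionics cross-check GNSS against inertial and radio navigation rather than relying on a single heuristic like this, which is why the bodies above stress retaining conventional backups.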

Privacy

Mobile Device Ambient Light Sensors Can Be Used To Spy On Users (ieee.org) 11

"The ambient light sensors present in most mobile devices can be accessed by software without any special permissions, unlike permissions required for accessing the microphone or the cameras," writes longtime Slashdot reader BishopBerkeley. "When properly interrogated, the data from the light sensor can reveal much about the user." IEEE Spectrum reports: While that may not seem to provide much detailed information, researchers have already shown these sensors can detect light intensity changes that can be used to infer what kind of TV programs someone is watching, what websites they are browsing or even keypad entries on a touchscreen. Now, [Yang Liu, a PhD student at MIT] and colleagues have shown in a paper in Science Advances that by cross-referencing data from the ambient light sensor on a tablet with specially tailored videos displayed on the tablet's screen, it's possible to generate images of a user's hands as they interact with the tablet. While the images are low-resolution and currently take impractically long to capture, he says this kind of approach could allow a determined attacker to infer how someone is using the touchscreen on their device. [...]

"The acquisition time in minutes is too cumbersome to launch simple and general privacy attacks on a mass scale," says Lukasz Olejnik, an independent security researcher and consultant who has previously highlighted the security risks posed by ambient light sensors. "However, I would not rule out the significance of targeted collections for tailored operations against chosen targets." But he also points out that, following his earlier research, the World Wide Web Consortium issued a new standard that limited access to the light sensor API, which has already been adopted by browser vendors.

Liu notes, however, that there are still no blanket restrictions for Android apps. In addition, the researchers discovered that some devices directly log data from the light sensor in a system file that is easily accessible, bypassing the need to go through an API. The team also found that lowering the resolution of the images could bring the acquisition times within practical limits while still maintaining enough detail for basic recognition tasks. Nonetheless, Liu agrees that the approach is too complicated for widespread attacks. And one saving grace is that it is unlikely to ever work on a smartphone as the displays are simply too small. But Liu says their results demonstrate how seemingly harmless combinations of components in mobile devices can lead to surprising security risks.

Medicine

Chemicals of 'Concern' Found In Philips Breathing Machines (propublica.org) 43

In 2021, Philips pulled its popular sleep apnea machines and ventilators off the shelves after discovering that an industrial foam built into the devices to reduce noise could release toxic particles and fumes into the masks worn by patients. "But as Philips publicly pledged to send out replacements, supervisors inside the company's headquarters near Pittsburgh were quietly racing to manage a new crisis that threatened the massive recall and posed risks to patients all over again," reports ProPublica. "Tests by independent laboratories retained by Philips had found that a different foam used by the company -- material fitted inside the millions of replacement machines -- was also emitting dangerous chemicals, including formaldehyde, a known carcinogen."

"Though Philips has said the machines are safe, ProPublica and the Pittsburgh Post-Gazette obtained test results and other internal records that reveal for the first time how scientists working for the company grew increasingly alarmed and how infighting broke out as the new threat reached the highest levels of the Pittsburgh operation. The findings also underscore an unchecked pattern of corporate secrecy that began long before Philips decided to use the new foam." From the report: The company had previously failed to disclose complaints about the original foam in its profitable breathing machines, a polyester-based polyurethane material that was found to degrade in heat and humidity. Former patients and others have described hundreds of deaths and thousands of cases of cancer in government reports. After the introduction of the new foam in 2021, this one made of silicone, the company again held back details about the problem from the public even as it sent out replacement machines with the new material to customers around the world.

One of the devices was the DreamStation 2, a newly released continuous positive airway pressure, or CPAP, machine promoted as one of the company's primary replacements. Federal regulators were alerted to the concern more than two years ago but said in a news release at the time that the company was carrying out additional tests on the foam and that patients should keep using their replacements until more details were available. The Food and Drug Administration has not provided new information on the test results since then, and it is still unclear whether the material is safe. That leaves millions of people in the United States alone caught in the middle, including those with sleep apnea, which causes breathing to stop and start through the night and can lead to heart attacks, strokes and sudden death.

The new foam isn't the only problem: An internal investigation at Philips launched in the months after the recall found that water was condensing in the circuitry of the DreamStation 2, creating a new series of safety risks. "Loss of therapy, thermal events, and shock hazards," the investigation concluded. The FDA issued an alert about overheating last month, warning that the devices could produce "fire, smoke, burns, and other signs of overheating" and advising patients to keep the machines away from carpet, fabric and "other flammable materials." Philips has said that customers could continue using the devices if they followed safety instructions. ...

Iphone

Apple Blocks 'Beeper Mini', Citing Security Concerns. But Beeper Keeps Trying (engadget.com) 90

A 16-year-old high school student reverse engineered Apple's messaging protocol, leading to the launch of an interoperable Android app called "Beeper Mini".

But on Friday the Verge reported that "less than a week after its launch, the app started experiencing technical issues when users were suddenly unable to send and receive blue bubble messages." Reached for comment, Beeper CEO Eric Migicovsky did not deny that Apple has successfully blocked Beeper Mini. "If it's Apple, then I think the biggest question is... if Apple truly cares about the privacy and security of their own iPhone users, why would they stop a service that enables their own users to now send encrypted messages to Android users, rather than using unsecure SMS...? Beeper Mini is here today and works great. Why force iPhone users back to sending unencrypted SMS when they chat with friends on Android?"
Apple says they're unable to verify that end-to-end encryption is maintained when messages are sent through unauthorized channels, according to a statement quoted by TechCrunch: "At Apple, we build our products and services with industry-leading privacy and security technologies designed to give users control of their data and keep personal information safe. We took steps to protect our users by blocking techniques that exploit fake credentials in order to gain access to iMessage. These techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks. We will continue to make updates in the future to protect our users."
Beeper responded on X: We stand behind what we've built. Beeper Mini keeps your messages private, and boosts security compared to unencrypted SMS. For anyone who claims otherwise, we'd be happy to give our entire source code to a mutually agreed-upon third party to evaluate the security of our app.
Ars Technica adds: On Saturday, Migicovsky notified Beeper Cloud (desktop) users that iMessage was working again for them, after a long night of fixes. "Work continues on Beeper Mini," Migicovsky wrote shortly after noon Eastern time.
Engadget notes: The Beeper Mini team has apparently been working around the clock to resolve the outage affecting the new "iMessage on Android" app, and says a fix is "very close." And once the fix rolls out, users' seven-day free trials will be reset so they can start over fresh.
Meanwhile, at around 9 p.m. EST, Beeper CEO Eric Migicovsky posted on X that "For 3 blissful days this week, iPhone and Android users enjoyed high quality encrypted chats. We're working hard to return to that state."
Space

Airbus Introduces 'Detumbler' Device To Address Satellite Tumbling In Low Earth Orbit (spacedaily.com) 23

Airbus has launched an innovative "detumbler" device designed to mitigate the risks posed by tumbling satellites in space. Space Daily reports: The Detumbler, a brainchild of Airbus and supported by the French Space Agency CNES under their Tech4SpaceCare initiative, was unveiled on Saturday, November 11. This magnetic damping device, weighing approximately 100 grams, is engineered to be attached to satellites nearing the end of their operational lives. Its purpose is to prevent these satellites from tumbling, a common issue in orbital flight dynamics, especially in LEO. The device features a central rotor wheel and magnets that interact with the Earth's magnetic field, effectively damping unwanted motion.

Airbus' development of the Detumbler commenced in 2021. Its operational principle is simple yet innovative. When a satellite functions normally, the rotor behaves akin to a compass, aligning with the Earth's magnetic field. However, if the satellite begins to tumble, the movement of the rotor induces eddy currents, creating a friction torque that dampens this motion. The design of the Detumbler involves a stator housing, complete with a bottom plate and top cover, along with the rotor comprising the central axle, rotor wheel, and magnets.
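Airbus has not published the Detumbler's performance figures, but eddy-current damping is, to first order, a torque proportional to and opposing the angular rate, so a tumble decays exponentially. A rough back-of-the-envelope sketch (the inertia and damping constant below are invented for illustration, not Airbus data):

```python
import math

def tumble_rate(omega0_deg_s, c_damp, inertia, t_s):
    """Angular rate under a linear eddy-current damping torque.

    Illustrative model: torque = -c * omega, so
    I * d(omega)/dt = -c * omega  =>  omega(t) = omega0 * exp(-c * t / I).
    """
    return omega0_deg_s * math.exp(-c_damp * t_s / inertia)

# Hypothetical small satellite: I = 10 kg m^2, c = 1e-4 N m s/rad,
# tumbling at 5 deg/s. Time for the tumble rate to halve is (I/c) * ln 2:
t_half = (10.0 / 1e-4) * math.log(2)
print(round(t_half / 86400, 1), "days")  # ~0.8 days
```

The exponential form means a passive damper like this never needs power or commanding; it just trades tumble energy for heat, which is what makes it attractive for already-dead satellites.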

Tumbling satellites, particularly those in LEO, pose a significant challenge for future active debris removal missions. Dead satellites naturally tend to tumble due to orbital flight dynamics. The introduction of the Airbus Detumbler could revolutionize this scenario, making satellites easier to capture during debris-clearing missions and enhancing the overall safety and sustainability of space operations.
Airbus is expected to perform an in-orbit demonstration of the Detumbler in early 2024.
United Kingdom

Tech Groups Fear New Powers Will Allow UK To Block Encryption (ft.com) 40

Tech groups have called on ministers to clarify the extent of proposed powers that they fear would allow the UK government to intervene and block the rollout of new privacy features for messaging apps. FT: The Investigatory Powers Amendment Bill, which was set out in the King's Speech on Tuesday, would oblige companies to inform the Home Office in advance about any security or privacy features they want to add to their platforms, including encryption. At present, the government has the power to force telecoms companies and messaging platforms to supply data on national security grounds and to help with criminal investigations.

The new legislation was designed to "recalibrate" those powers to respond to risks posed to public safety by multinational tech companies rolling out new services that "preclude lawful access to data," the government said. But Meredith Whittaker, president of private messaging group Signal, urged ministers to provide more clarity on what she described as a "bellicose" proposal amid fears that, if enacted, the new legislation would allow ministers and officials to veto the introduction of new safety features. "We will need to see the details, but what is being described suggests an astonishing level of technically confused government over-reach that will make it nearly impossible for any service, homegrown or foreign, to operate with integrity in the UK," she told the Financial Times.

AI

US, China and 26 Other Nations Agree To Co-operate Over AI Development (ft.com) 15

Twenty-eight countries including the US, UK and China have agreed to work together to ensure artificial intelligence is used in a "human-centric, trustworthy and responsible" way, in the first global commitment of its kind. From a report: The pledge forms part of a communique signed by major powers including Brazil, India and Saudi Arabia, at the inaugural AI Safety Summit. The two-day event, hosted and convened by British prime minister Rishi Sunak at Bletchley Park, started on Wednesday. Called the Bletchley Declaration, the document recognises the "potential for serious, even catastrophic, harm" to be caused by advanced AI models, but adds such risks are "best addressed through international co-operation." Other signatories include the EU, France, Germany, Japan, Kenya and Nigeria.

The communique represents the first global statement on the need to regulate the development of AI, but at the summit there are expected to be disagreements about how far such controls should go. Country representatives attending the event include Hadassa Getzstain, Israeli chief of staff at the ministry of innovation, science and technology, and Wu Zhaohui, Chinese vice minister for technology. Gina Raimondo, US commerce secretary, gave an opening speech at the summit and announced a US safety institute to evaluate the risks of AI. This comes on the heels of a sweeping executive order by President Joe Biden, announced on Monday, and intended to curb the risks posed by the technology.

Transportation

Cruise Suspends All Driverless Operations Nationwide (apnews.com) 139

GM's autonomous vehicle unit Cruise is now suspending driverless operations all across America.

The move comes just days after California regulators revoked Cruise's license for driverless vehicles, declaring that Cruise's AVs posed "an unreasonable risk to public safety" and "are not safe for the public's operation," and arguing that Cruise had misrepresented information related to its safety. And the Associated Press reports that Cruise "is also being investigated by U.S. regulators after receiving reports of potential risks to pedestrians and passengers." Human-supervised operations of Cruise's autonomous vehicles, or AVs, will continue — including under California's indefinite suspension...

Earlier this month, a Cruise robotaxi notably ran over a pedestrian who had been hit by another vehicle driven by a human. The pedestrian became pinned under a tire of the Cruise vehicle after it came to a stop — and then was pulled for about 20 feet (six meters) as the car attempted to move off the road. The DMV and others have accused Cruise of not initially sharing all video footage of the accident, but the robotaxi operator pushed back — saying it disclosed the full video to state and federal officials. In a Tuesday statement, Cruise said it is cooperating with regulators investigating the October 2 accident — and that its engineers are working on a way for its robotaxis to improve their response "to this kind of extremely rare event." Still, some are skeptical of Cruise's response to the accident and point to lingering questions. Bryant Walker Smith, a University of South Carolina law professor who studies automated vehicles, wants to know "who knew what when?" at Cruise, and maybe GM, following the accident.

Also earlier this month, the National Highway Traffic Safety Administration [or NHTSA] announced that it was investigating Cruise's autonomous vehicle division after receiving reports of incidents where vehicles may not have used proper caution around pedestrians in roadways, including crosswalks. The NHTSA's Office of Defects Investigation said it received two reports involving pedestrian injuries from Cruise vehicles. It also identified two additional incidents from videos posted to public websites, noting that the total number is unknown.

In December of last year, the NHTSA opened a separate probe into reports of Cruise robotaxis that stopped too quickly or unexpectedly quit moving, potentially stranding passengers. That investigation was prompted by three rear-end collisions that reportedly took place after Cruise AVs braked hard. According to an October 20 letter that was made public Thursday, since beginning this probe the NHTSA has received five other reports of Cruise AVs unexpectedly braking with no obstacles ahead. Each case involved AVs operating without human supervision and resulted in rear-end collisions.

Cruise emphasized on Twitter/X that their nationwide suspension of driverless testing "isn't related to any new on-road incidents." Instead, "We have decided to proactively pause driverless operations across all of our fleets while we take time to examine our processes, systems, and tools and reflect on how we can better operate in a way that will earn public trust."

Their announcement began by stressing that "The most important thing for us right now is to take steps to rebuild public trust."
Google

AI Risk Must Be Treated As Seriously As Climate Crisis, Says Google DeepMind Chief 120

An anonymous reader quotes a report from The Guardian: The world must treat the risks from artificial intelligence as seriously as the climate crisis and cannot afford to delay its response, one of the technology's leading figures has warned. Speaking as the UK government prepares to host a summit on AI safety, Demis Hassabis said oversight of the industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC). Hassabis, the British chief executive of Google's AI unit, said the world must act immediately in tackling the technology's dangers, which included aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.

"We must take the risks of AI as seriously as other major global challenges, like climate change," he said. "It took the international community too long to coordinate an effective global response to this, and we're living with the consequences of that now. We can't afford the same delay with AI." Hassabis, whose unit created the revolutionary AlphaFold program that depicts protein structures, said AI could be "one of the most important and beneficial technologies ever invented." However, he told the Guardian a regime of oversight was needed and governments should take inspiration from international structures such as the IPCC.

"I think we have to start with something like the IPCC, where it's a scientific and research agreement with reports, and then build up from there." He added: "Then what I'd like to see eventually is an equivalent of a Cern for AI safety that does research into that -- but internationally. And then maybe there's some kind of equivalent one day of the IAEA, which actually audits these things." The International Atomic Energy Agency (IAEA) is a UN body that promotes the secure and peaceful use of nuclear technology in an effort to prevent proliferation of nuclear weapons, including via inspections. However, Hassabis said none of the regulatory analogies used for AI were "directly applicable" to the technology, though "valuable lessons" could be drawn from existing institutions.
Hassabis said the world was a long time away from "god-like" AI being developed but "we can see the path there, so we should be discussing it now."

He said current AI systems "aren't of risk but the next few generations may be when they have extra capabilities like planning and memory and other things ... They will be phenomenal for good use cases but also they will have risks."
EU

Alibaba Accused of 'Possible Espionage' At European Hub (ft.com) 38

An anonymous reader quotes a report from the Financial Times: Belgium's intelligence service has been monitoring Alibaba's main logistics hub in Europe for espionage following suspicions Beijing has been exploiting its growing economic presence in the west. European governments have been increasing scrutiny of the alleged security and economic risks posed by Chinese companies, which has been part of a wider reassessment of the EU's traditional openness to trade with China. In specific reference to Alibaba's logistics arm at the cargo airport in Liege, Belgium's security services told the Financial Times they were working to detect "possible espionage and/or interference activities" carried out by Chinese entities "including Alibaba".

Alibaba, which denies any wrongdoing, signed an agreement with Belgium in 2018 to open the hub in Liege, Europe's fifth-largest cargo airport, ploughing 100 million euros of investment into the ailing economy of the French-speaking Walloon region. But almost two years on from the site being opened, the Belgian State Security Service (VSSE) has continued monitoring Alibaba's operations following intelligence assessments, said people familiar with the matter. One area of scrutiny includes the introduction of software systems that collate sensitive economic information. The security service said the presence of Alibaba "constitutes a point of attention for the VSSE" because of legislation forcing Chinese companies to share their data with Chinese authorities and intelligence services. "China has the intent and capacity to use this data for non-commercial purposes," the agency said.

Concerns about potential espionage at the site were first raised before the hub was built, including in the Belgian parliament. At the time China strongly denied the "unprovoked insinuations" over exaggerated "so-called security risks of Chinese companies." The VSSE's statement to the FT indicates its concerns over espionage still remain after the opening of the hub. [...] "The main concern is that this platform, alongside a couple of other logistical platforms that the Chinese have been proposing to European countries, is giving them a lot of insights into supply chains and into eventual vulnerabilities," said Jonathan Holslag, a professor at the Vrije Universiteit Brussel. According to a person familiar with Alibaba's relations to China's government, the logistics centers are expected to pass on information about local sentiment and report data about European trade and logistics to Beijing's authorities.
"The site in Liege is the only European logistics center run by Alibaba's logistics spin-off Cainiao," reports the FT. The company is reportedly able to access data about merchants, products, transport details and flows. It may also be able to access information about final customers.
Earth

Invasive Species Cost Humans $423 Billion Each Year and Threaten World's Diversity (theguardian.com) 57

Invasive species are costing the world at least $423bn every year and have become a leading threat to the diversity of life on Earth, according to a UN assessment. From a news report: From invasive mice that eat seabird chicks in their nests to non-native grasses that helped fuel and intensify last month's deadly fires in Hawaii, at least 3,500 harmful invasive species have been recorded globally in every region, spread by human travel and trade. Their impact is destructive for humans and wildlife, sometimes causing extinctions and permanently damaging the healthy functioning of an ecosystem.

Leading scientists say the threat posed by invasive species is underappreciated, underestimated and sometimes unacknowledged, with more than 37,000 alien species now known to be introduced around the world and about 200 establishing themselves each year. While not all will become invasive, experts say there are significant tools to mitigate their spread and impact, protecting and restoring ecosystems in the process.

"Invasive alien species are a major threat to biodiversity and can cause irreversible damage to nature, including local and global species extinctions, and also threaten human wellbeing," wrote Prof Helen Roy, Prof Anibal Pauchard and Prof Peter Stoett, who led the research. "It would be an extremely costly mistake to regard biological invasions only as someone else's problem," said Pauchard. "Although the specific species that inflict damage vary from place to place, these are risks and challenges with global roots but very local impacts facing people in every country, from all backgrounds and in every community -- even Antarctica is being affected."

Australia

Australian Senate Committee Recommends Government Ban on TikTok Be Extended To WeChat (apnews.com) 10

An Australian Senate committee has recommended a ban on the Chinese-owned video-sharing app TikTok from federal government devices be extended to China's most popular social media platform, WeChat. From a report: The Committee on Foreign Interference through Social Media also recommended in a report late Tuesday that social media giants such as Facebook and Twitter should become more transparent or be fined. Committee chair James Paterson said on Wednesday the report's recommendations would make Australia a more difficult target for the serious foreign interference risks that the nation faced. "It tackles both the problems posed by authoritarian-headquartered social media platforms like TikTok and WeChat and Western-headquartered social media platforms being weaponized by the actions of authoritarian governments including Facebook, YouTube and Twitter," Paterson told reporters.
Businesses

Amazon Claims It Isn't a 'Very Large Online Platform' To Evade EU Rules (arstechnica.com) 48

An anonymous reader quotes a report from Ars Technica: Amazon doesn't want to comply with Europe's Digital Services Act, and to avoid the rules the company is arguing that it doesn't meet the definition of a Very Large Online Platform under EU law. Amazon filed an appeal at the EU General Court to challenge the European Commission decision that Amazon meets the criteria and must comply with the new regulations. "We agree with the EC's objective and are committed to protecting customers from illegal products and content, but Amazon doesn't fit this description of a 'Very Large Online Platform' (VLOP) under the DSA and therefore should not be designated as such," Amazon said in a statement provided to Ars today.

The Digital Services Act includes content moderation requirements, transparency rules, and protections for minors. Targeted advertising based on profiling toward children will no longer be permitted, for example. Amazon argued that the new law is supposed to "address systemic risks posed by very large companies with advertising as their primary revenue and that distribute speech and information," and not businesses that are primarily retail-based. "The vast majority of our revenue comes from our retail business," Amazon said. Amazon also claims it's unfair that some retailers with larger businesses in individual countries weren't on the list of 19 companies that must comply with the Digital Services Act. The rules only designate platforms with over 45 million active users in the EU as of February 17.

Amazon said it is "not the largest retailer in any of the EU countries where we operate, and none of these largest retailers in each European country has been designated as a VLOP. If the VLOP designation were to be applied to Amazon and not to other large retailers across the EU, Amazon would be unfairly singled out and forced to meet onerous administrative obligations that don't benefit EU consumers." Those other companies Amazon referred to include Poland's Allegro or the Dutch Bol.com, according to a Bloomberg report. Neither of those platforms appears to have at least 45 million active users.
A summary of the appeal provided by Amazon claimed the designation "is based on a discriminatory criterion and disproportionately violates the principle of equal treatment and the applicant's fundamental rights." In response, the EC said that "it would defend its position in court and added that Amazon still must comply with the rules by end of August, regardless of the appeal," Bloomberg wrote.

"The scope of the DSA is very clear and is defined to cover all platforms that expose their users to content, including the sale of products or services, which can be illegal," the commission said in statement reported by Bloomberg. "For marketplaces as for social networks, very wide user reach increases the risks and the platforms' responsibilities to address them."
Privacy

Stop Using Google Analytics, Warns Sweden's Privacy Watchdog (techcrunch.com) 18

Sweden's data protection watchdog has issued a couple of fines in relation to exports of European users' data via Google Analytics, which it found to breach the bloc's privacy rulebook owing to risks posed by U.S. government surveillance. It has also warned other companies against use of Google's tool. From a report: The fines -- just over $1.1 million for Swedish telco Tele2 and less than $30,000 for local online retailer CDON -- are notable as they are the first such fines following a raft of strategic privacy complaints targeting Google Analytics (and Facebook Connect) back in August 2020.

The regulator found that so-called supplementary measures applied by Google to European users' data sent to the U.S. for processing were insufficient to raise the level of protection to the required legal standard. That includes Google's use of IP address truncation (an anonymization measure): in the Tele2 case, the regulator said the company did not clarify whether the truncation was performed before or after the transfer of the data to the U.S., and so had failed to demonstrate there is "no potential access to the entire IP address before the last octet is truncated." The watchdog also found breaches of the bloc's General Data Protection Regulation (GDPR) rules on transfers to third countries in the case of two other companies' use of Google Analytics, Coop and Dagens Industri, but did not issue fines in those cases.
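The truncation technique at issue is straightforward: the final octet of an IPv4 address is zeroed out before the address is stored, so the full address is never retained. As a purely illustrative sketch (not Google's actual implementation, whose internals and timing relative to the U.S. transfer are exactly what the regulator questioned):

```python
def truncate_last_octet(ip: str) -> str:
    """Zero out the last octet of an IPv4 address.

    This is the general last-octet anonymization technique the
    regulator's decision refers to; the key legal question was not
    the technique itself but *where* it runs -- before or after the
    data leaves the EU.
    """
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError(f"not an IPv4 address: {ip!r}")
    octets[-1] = "0"
    return ".".join(octets)


print(truncate_last_octet("203.0.113.42"))  # prints "203.0.113.0"
```

Because the last octet covers 256 possible hosts, truncation coarsens an address to a small network block rather than fully anonymizing it, which is one reason regulators treat it as only a "supplementary measure."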
