Space

'The Models Were Right!' Astronomers Locate Universe's 'Missing' Matter (space.com) 64

It's not dark matter, writes Space.com. But astronomers have discovered "a vast tendril of hot gas linking four galaxy clusters and stretching out for 23 million light-years, 230 times the length of our galaxy.

"With 10 times the mass of the Milky Way, this filamentary structure accounts for much of the universe's 'missing matter,' the search for which has baffled scientists for decades...." [I]t is "ordinary matter" made up of atoms, composed of electrons, protons, and neutrons (collectively called baryons) which make up stars, planets, moons, and our bodies. For decades, our best models of the universe have suggested that a third of the baryonic matter that should be out there in the cosmos is missing.

This discovery of that missing matter suggests our best models of the universe were right all along. It could also reveal more about the "Cosmic Web," the vast structure along which entire galaxies grew and gathered during the earlier epochs of our 13.8 billion-year-old universe.... The newly observed filament isn't just extraordinary in terms of its mass and size; it also has a temperature of a staggering 18 million degrees Fahrenheit (10 million degrees Celsius). That's around 1,800 times hotter than the surface of the sun...
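The figures quoted above are easy to sanity-check with a couple of lines. (The ~5,500 degrees Celsius solar surface temperature used for the comparison is my assumption, not from the article.)

```python
# Rough sanity check of the quoted figures; the ~5,500 C solar surface
# temperature is an assumed value, not taken from the article.
filament_c = 10_000_000                 # filament temperature, degrees Celsius
filament_f = filament_c * 9 / 5 + 32    # Celsius-to-Fahrenheit conversion
print(f"{filament_f:,.0f} F")           # ~18 million F, as the article says

sun_surface_c = 5_500                   # approximate solar photosphere temp
ratio = filament_c / sun_surface_c
print(f"~{ratio:,.0f}x hotter")         # ~1,818x, matching "around 1,800 times"
```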

The team's research was published on Thursday (June 19) in the journal Astronomy & Astrophysics.

Models of the cosmos (including the standard model of cosmology) "have long posited the idea that the missing baryonic matter of the universe is locked up in vast filaments of gas stretching between the densest pockets of space..." the article points out. But now thanks to Suzaku, a Japan Aerospace Exploration Agency (JAXA) satellite, and the European Space Agency's XMM-Newton, "a team of astronomers has for the first time been able to determine the properties of one of these filaments, which links four galactic clusters in the local universe."

Team leader Konstantinos Migkas (of the Netherlands' Leiden Observatory) explained the significance of their finding. "For the first time, our results closely match what we see in our leading model of the cosmos — something that's not happened before."

"It seems that the simulations were right all along."
AI

BBC Threatens Legal Action Against Perplexity AI Over Content Scraping 24

Ancient Slashdot reader Alain Williams shares a report from The Guardian: The BBC is threatening legal action against Perplexity AI, in the corporation's first move to protect its content from being scraped without permission to build artificial intelligence technology. The corporation has sent a letter to Aravind Srinivas, the chief executive of the San Francisco-based startup, saying it has gathered evidence that Perplexity's model was "trained using BBC content." The letter, first reported by the Financial Times, threatens an injunction against Perplexity unless it stops scraping all BBC content to train its AI models, and deletes any copies of the broadcaster's material it holds unless it provides "a proposal for financial compensation."

The legal threat comes weeks after Tim Davie, the director general of the BBC, and the boss of Sky both criticised proposals being considered by the government that could let tech companies use copyright-protected work without permission. "If we currently drift in the way we are doing now we will be in crisis," Davie said, speaking at the Enders conference. "We need to make quick decisions now around areas like ... protection of IP. We need to protect our national intellectual property, that is where the value is. What do I need? IP protection; come on, let's get on with it."
"Perplexity's tool [which allows users to choose between different AI models] directly competes with the BBC's own services, circumventing the need for users to access those services," the corporation said.

Perplexity told the FT that the BBC's claims were "manipulative and opportunistic" and that it had a "fundamental misunderstanding of technology, the internet and intellectual property law."
Biotech

MIT Chemical Engineers Develop New Way To Separate Crude Oil (thecooldown.com) 52

Longtime Slashdot reader fahrbot-bot shares a report from the Cool Down: A team of chemical engineers at the Massachusetts Institute of Technology has invented a new process to separate crude oil components, potentially bringing forward a replacement that can cut its harmful carbon pollution by 90%. The original technique, which uses heat to separate crude oil into gasoline, diesel, and heating oil, accounts for roughly 1% of all global energy consumption and 6% of dirty energy pollution from the carbon dioxide it releases.

"Instead of boiling mixtures to purify them, why not separate components based on shape and size?" said Zachary P. Smith, associate professor of chemical engineering at MIT and senior author of the study, as previously reported in Interesting Engineering. The team invented a polymer membrane that divides crude oil into its various uses like a sieve. The new process follows a similar strategy used by the water industry for desalination, which uses reverse osmosis membranes and has been around since the 1970s. [The membrane excelled in lab tests. It increased the toluene concentration by 20 times in a mixture with triisopropylbenzene. It also effectively separated real industrial oil samples containing naphtha, kerosene, and diesel.]

Movies

Chinese Studios Plan AI-Powered Remakes of Kung Fu Classics (hollywoodreporter.com) 32

An anonymous reader quotes a report from the Hollywood Reporter: Bruce Lee, Jackie Chan, Jet Li, and a legion of the all-time greats of martial cinema are about to get an AI makeover. In a sign-of-the-times announcement at the Shanghai International Film Festival on Thursday, a collection of Chinese studios revealed that they are turning to AI to re-imagine around 100 classics of the genre. Lee's classic Fist of Fury (1972), Chan's breakthrough Drunken Master (1978) and the Tsui Hark-directed epic Once Upon a Time in China (1991), which turned Li into a bona fide movie star, are among the features poised for the treatment, as part of the "Kung Fu Movie Heritage Project 100 Classics AI Revitalization Project."

There will also be a digital reworking of the John Woo classic A Better Tomorrow (1986) that, by the looks of the trailer, turns the money-burning anti-hero originally played by Chow Yun-fat into a cyberpunk, and is being claimed as "the world's first full-process, AI-produced animated feature film." The big guns of the Chinese industry were out in force on the sidelines of the 27th Shanghai International Film Festival to make the announcements, too. They were led by Zhang Pimin, chairman of the China Film Foundation, who said AI work on these "aesthetic historical treasures" would give them a new look that "conforms to contemporary film viewing." "It is not only film heritage, but also a brave exploration of the innovative development of film art," Zhang said.

Tian Ming, chairman of project partners Shanghai Canxing Culture and Media, meanwhile, promised the work -- expected to include upgrades in image and sound as well as overall production levels -- while preserving the storytelling and aesthetic of the originals -- would both "pay tribute to the original work" and "reshape the visual aesthetics." "We sincerely invite the world's top AI animation companies to jointly start a film revolution that subverts tradition," said Tian, who announced a fund of 100 million yuan ($13.9 million) would be implemented to kick-start the work.

Space

SpaceX Starship Explodes On Test Stand (washingtonpost.com) 167

SpaceX's Starship exploded on its test stand in South Texas ahead of an engine test, marking the fourth loss of a Starship this year. "In three previous test flights, the vehicle came apart or detonated during its flight," notes the Washington Post. No injuries were reported but the incident highlights ongoing technical challenges as SpaceX races to prove Starship's readiness for deep-space travel. From the report: In a post on the social media site X, SpaceX said that the explosion on the test stand, which could be seen for miles, happened at about 11 p.m. Central time. For safety reasons, the company had cleared personnel from around the site, and "all personnel are safe and accounted for," it said. The company is "actively working to safe the test site and the immediate surrounding area in conjunction with local officials," the post continued. "There are no hazards to residents in surrounding communities, and we ask that individuals do not attempt to approach the area while safing operations continue."

Starship comprises two stages -- the Super Heavy booster, which has 33 engines, and the Starship spacecraft itself, which has six. Before Wednesday's explosion, the spacecraft was standing alone on the test stand, and not mounted on top of the booster, when it blew up. The engines are test-fired on the Starship before it's mounted on the booster. SpaceX had been hoping to launch within the coming weeks had the engine test been successful. [...] In a post on X, Musk said that preliminary data pointed to a pressure vessel that failed at the top of the rocket.
You can watch a recording of the explosion on YouTube.

SpaceX called the incident a "rapid unscheduled disassembly," which caught the attention of Slashdot reader hambone142. In a story submitted to the Firehose, they commented: "I worked for a major computer company whose power supplies caught on fire. We were instructed to cease saying that and instead say the power supply underwent a 'thermal event.' Gotta love it."
Piracy

Napster and Sonos Sued For Millions In Unpaid Music Royalties (torrentfreak.com) 10

An anonymous reader quotes a report from TorrentFreak: Napster, the brand synonymous with the music piracy boom of the early 2000s, has a new copyright challenge. Together with audio giant Sonos, Napster faces a lawsuit demanding over $3.4 million in alleged unpaid copyright royalties. Filed by collective rights management organization SoundExchange, the complaint (PDF) centers on missed payments related to the "Sonos Radio" service, which until 2023 was powered by Napster's music catalog. [...]

Sonos Radio launched in April 2020 with Napster as the authorized agent, submitting the required royalty reports and royalties to SoundExchange. While all went well initially, payments stopped around May 2022. At the time, Napster had been acquired by venture capital firms Hivemind and Algorand, with a focus on "web3" technologies, including cryptocurrencies and blockchain. According to the complaint, the takeover resulted in a "complete breakdown of reporting and payment for the Sonos Radio service." The alleged payment problems eventually came to light during an audit initiated by SoundExchange in 2023, which concluded that Sonos and Napster owed millions in unpaid royalties.

Sonos and Napster are no longer partners in the radio service, as the audio equipment manufacturer switched to Deezer around April 2023. That appears to have solved the royalty issues, but SoundExchange still believes it is owed more than $3 million. "In total, Sonos, and its agent Napster, have failed to pay at least $3,423,844.41 comprising royalties owed for the period October 2022 to April 2023, interest, late fees, and auditor fee-shifting costs, and subtracting Sonos and Napster's payments made to date." "Late fees and interest continue to grow," SoundExchange adds, while requesting compensation in full. The complaint lists one count of "underpayment" of statutory royalties, and one count of "non-payment" of royalties, as determined by the audit. For both Copyright Act violations, SoundExchange requests damages of at least $3.4 million.

Government

California AI Policy Report Warns of 'Irreversible Harms' 52

An anonymous reader quotes a report from Time Magazine: While AI could offer transformative benefits, without proper safeguards it could facilitate nuclear and biological threats and cause "potentially irreversible harms," a new report commissioned by California Governor Gavin Newsom has warned. "The opportunity to establish effective AI governance frameworks may not remain open indefinitely," says the report, which was published on June 17 (PDF). Citing new evidence that AI can help users source nuclear-grade uranium and is on the cusp of letting novices create biological threats, it notes that the cost for inaction at this current moment could be "extremely high." [...]

"Foundation model capabilities have rapidly advanced since Governor Newsom vetoed SB 1047 last September," the report states. The industry has shifted from large language AI models that merely predict the next word in a stream of text toward systems trained to solve complex problems and that benefit from "inference scaling," which allows them more time to process information. These advances could accelerate scientific research, but also potentially amplify national security risks by making it easier for bad actors to conduct cyberattacks or acquire chemical and biological weapons. The report points to Anthropic's Claude 4 models, released just last month, which the company said might be capable of helping would-be terrorists create bioweapons or engineer a pandemic. Similarly, OpenAI's o3 model reportedly outperformed 94% of virologists on a key evaluation. In recent months, new evidence has emerged showing AI's ability to strategically lie, appearing aligned with its creators' goals during training but displaying other objectives once deployed, and exploit loopholes to achieve its goals, the report says. While "currently benign, these developments represent concrete empirical evidence for behaviors that could present significant challenges to measuring loss of control risks and possibly foreshadow future harm," the report says.

While Republicans have proposed a 10-year ban on all state AI regulation over concerns that a fragmented policy environment could hamper national competitiveness, the report argues that targeted regulation in California could actually "reduce compliance burdens on developers and avoid a patchwork approach" by providing a blueprint for other states, while keeping the public safer. It stops short of advocating for any specific policy, instead outlining the key principles the working group believes California should adopt when crafting future legislation. It "steers clear" of some of the more divisive provisions of SB 1047, like the requirement for a "kill switch" or shutdown mechanism to quickly halt certain AI systems in case of potential harm, says Scott Singer, a visiting scholar in the Technology and International Affairs Program at the Carnegie Endowment for International Peace, and a lead-writer of the report.

Instead, the approach centers around enhancing transparency, for example through legally protecting whistleblowers and establishing incident reporting systems, so that lawmakers and the public have better visibility into AI's progress. The goal is to "reap the benefits of innovation. Let's not set artificial barriers, but at the same time, as we go, let's think about what we're learning about how it is that the technology is behaving," says Cuellar, who co-led the report. The report emphasizes this visibility is crucial not only for public-facing AI applications, but for understanding how systems are tested and deployed inside AI companies, where concerning behaviors might first emerge. "The underlying approach here is one of 'trust but verify,'" Singer says, a concept borrowed from Cold War-era arms control treaties that would involve designing mechanisms to independently check compliance. That's a departure from existing efforts, which hinge on voluntary cooperation from companies, such as the deal between OpenAI and Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute) to conduct pre-deployment tests. It's an approach that acknowledges the "substantial expertise inside industry," Singer says, but "also underscores the importance of methods of independently verifying safety claims."
Bitcoin

Senate Passes Stablecoin Bill In Major Win For Crypto Industry (coindesk.com) 60

The U.S. Senate has approved the GENIUS Act with a 68-30 final vote that "saw a huge surge of Democrats joining their Republican counterparts," reports CoinDesk. What the bill sets out to do is create the first federal regulatory framework for U.S. stablecoins, requiring issuers to maintain full 1:1 reserves in cash or Treasuries, adhere to regular audits and anti-money laundering rules, and gain regulatory approval -- all while allowing foreign stablecoin access under strict oversight rules. From the report: As written, the bill would set up guardrails around the approval and supervision of U.S. issuers of stablecoins, the dollar-based tokens such as the ones backed by Circle, Ripple and Tether. Firms making these digital assets available to U.S. users would have to meet stringent reserve demands, transparency requirements, money-laundering compliance and regulatory supervision that's also likely to include new capital rules. "This is a win for the U.S., a win for innovation and a monumental step towards appropriate regulation for digital assets in the United States," said Amanda Tuminelli, executive director and chief legal officer of the DeFi Education Fund, in a statement. [...]

While this is the first significant crypto bill to clear the Senate, it's also the first time a stablecoin bill has passed either chamber, despite years of negotiation in the House Financial Services Committee that managed to produce other major crypto legislation in the previous congressional session. The destiny of the GENIUS Act is also tied closely to the House's own Digital Asset Market Clarity Act, the more sweeping crypto bill that would establish the legal footing of the wider U.S. crypto markets. The stablecoin effort is slightly ahead of the bigger task of the market structure bill, but the industry and their lawmaker allies argue that they're inextricably connected and need to become law together. So far, the Clarity Act has been cleared by the relevant House committees and awaits floor action.

AI

Do People Actually Want Smart Glasses Now? (cnn.com) 141

It's the technology "Google tried (and failed at) more than a decade ago," writes CNN. (And Meta and Amazon have also previously tried releasing glasses with cameras, speakers and voice assistants.)

Yet this week Snap announced that "it's building AI-equipped eyewear to be released in 2026."

Why the "renewed buzz"? CNN sees two factors:

- Smartphones "are no longer exciting enough to entice users to upgrade often."
- "A desire to capitalize on AI by building new hardware around it." Advancements in AI could make them far more useful than the first time around. Emerging AI models can process images, video and speech simultaneously, answer complicated requests and respond conversationally... And market research indicates the interest will be there this time. The smart glasses market is estimated to grow from 3.3 million units shipped in 2024 to nearly 13 million by 2026, according to ABI Research. The International Data Corporation projects the market for smart glasses like those made by Meta will grow from 8.8 million in 2025 to nearly 14 million in 2026....

Apple is also said to be working on smart glasses to be released next year that would compete directly with Meta's, according to Bloomberg. Amazon's head of devices and services Panos Panay also didn't rule out the possibility of camera-equipped Alexa glasses similar to those offered by Meta in a February CNN interview. "But I think you can imagine, there's going to be a whole slew of AI devices that are coming," he said.

More than two million Ray-Ban Meta AI glasses have been sold since their launch in 2023, the article points out. But besides privacy concerns, "Perhaps the biggest challenge will be convincing consumers that they need yet another tech device in their life, particularly those who don't need prescription glasses. The products need to be worth wearing on people's faces all day."

But still, "Many in the industry believe that the smartphone will eventually be replaced by glasses or something similar to it," says Jitesh Ubrani, a research manager covering wearable devices for market research firm IDC.

"It's not going to happen today. It's going to happen many years from now, and all these companies want to make sure that they're not going to miss out on that change."
Space

Space is the Perfect Place to Study Cancer and Someday Even Treat It (space.com) 28

"Space may be the perfect place to study cancer — and someday even treat it," writes Space.com: On Earth, gravity slows the development of cancer because cells normally need to be attached to a surface in order to function and grow. But in space, cancer cell clusters can expand in all directions as bubbles, like budding yeast or grapes, said Shay Soker, chief science program officer at Wake Forest's Institute for Regenerative Medicine. Since bubbles grow larger and more quickly in space, researchers can more easily test substances clinging to the edge of the larger bubbles, too. Scientists at the University of Notre Dame are taking advantage of this quirk to develop an in-space cancer test that needs just a single drop of blood. The work builds on a series of bubble-formation experiments that have already been conducted on the ISS. "If cancer screening using our bubble technology in space is democratized and made inexpensive, many more cancers can be screened, and everyone can benefit," said Tengfei Luo, a Notre Dame researcher who pioneered the technology, speaking to the ISS' magazine, Upward. "It's something we may be able to integrate into annual exams. It sounds far-fetched, but it's achievable...."

Chemotherapy patients could save precious time, too. In normal gravity, they typically have to spend a half-hour hooked up to a needle before the medicine begins to take effect, because most drugs don't dissolve easily in water. But scientists at Merck have discovered that, in space, their widely used cancer drug pembrolizumab, or Keytruda, can be administered through a simple injection, because large crystalline molecules that would normally clump together are suspended in microgravity... Someday, microgravity could even help patients recovering from surgery heal faster than they would on Earth, Soker added. "Wound healing in high pressure is faster. That's the hyperbaric treatment for wounds...."

For the Wake Forest experiment, which is scheduled to launch next spring, scientists will cut out two sections of a cancer tumor from around 20 patients. One sample will stay on Earth while the other heads to the ISS, with scientists observing the difference. The testing will be completed within a week, to avoid any interference from cosmic radiation. If successful, Soker said, it could set the stage for diagnostic cancer tests in space available to the general population — perhaps on a biomedical space station that could launch after the planned demise of the ISS. "Can we actually design a special cancer space station that will be dedicated to cancer and maybe other diseases?" Soker asked, answering his question in the affirmative. "Pharmaceutical companies that have deep pockets would certainly support that program."

Red Hat Software

Rocky and Alma Linux Still Going Strong. RHEL Adds an AI Assistant (theregister.com) 21

Rocky Linux 10 "Red Quartz" has reached general availability, notes a new article in The Register — surveying the differences between "RHELatives" — the major alternatives to Red Hat Enterprise Linux: The Rocky 10 release notes describe what's new, such as support for RISC-V computers. Balancing that, this version only supports the Raspberry Pi 4 and 5 series; it drops Rocky 9.x's support for the older Pi 3 and Pi Zero models...

RHEL 10 itself, and Rocky with it, now require x86-64-v3, meaning Intel "Haswell" generation kit from about 2013 onward. Uniquely among the RHELatives, AlmaLinux offers a separate build of version 10 for x86-64-v2 as well, meaning Intel "Nehalem" and later — chips from roughly 2008 onward. AlmaLinux has a history of still supporting hardware that's been dropped from RHEL and Rocky, which it's been doing since AlmaLinux 9.4. Now that includes CPUs. In comparison, the system requirements for Rocky Linux 10 are the same as for RHEL 10. The release notes say.... "The most significant change in Rocky Linux 10 is the removal of support for x86-64-v2 architectures. AMD and Intel 64-bit architectures for x86-64-v3 are now required."
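Whether a given machine clears the x86-64-v3 bar can be checked from the CPU flags the kernel reports. The sketch below is a minimal check, assuming Linux's `/proc/cpuinfo` flag names (e.g. the kernel reports LZCNT support as `abm`); it tests only the v3 additions over v2, not the full baseline.

```python
# Minimal sketch: does this CPU meet the x86-64-v3 baseline that RHEL 10
# and Rocky Linux 10 now require? Flag names follow Linux /proc/cpuinfo
# conventions (the kernel reports LZCNT as "abm").
from pathlib import Path

# Instruction-set extensions x86-64-v3 adds on top of x86-64-v2.
X86_64_V3_FLAGS = {
    "avx", "avx2", "bmi1", "bmi2", "f16c", "fma",
    "abm", "movbe", "xsave",
}

def supports_x86_64_v3(cpu_flags: set[str]) -> bool:
    """True if the flag set covers the x86-64-v3 additions over v2."""
    return X86_64_V3_FLAGS <= cpu_flags

def read_cpu_flags(cpuinfo: str = "/proc/cpuinfo") -> set[str]:
    """Parse the first 'flags' line of /proc/cpuinfo into a set."""
    for line in Path(cpuinfo).read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__" and Path("/proc/cpuinfo").exists():
    print("x86-64-v3 capable:", supports_x86_64_v3(read_cpu_flags()))
```

On systems with a recent glibc, running `/lib64/ld-linux-x86-64.so.2 --help` should also list which `x86-64-v*` levels the hardware supports.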

A significant element of the advertising around RHEL 10 involves how it has an AI assistant. This is called Red Hat Enterprise Linux Lightspeed, and you can use it right from a shell prompt, as the documentation describes... It's much easier than searching man pages, especially if you don't know what to look for... [N]either AlmaLinux 10 nor Rocky Linux 10 includes the option of a helper bot. No big surprise there... [Rocky Linux] is sticking closest to upstream, thanks to a clever loophole to obtain source RPMs. Its hardware requirements also closely parallel RHEL 10, and CIQ is working on certifications, compliance, and special editions. Meanwhile, AlmaLinux is maintaining support for older hardware and CPUs, which will widen its appeal, and working with partners to ensure reboot-free updates and patching, rather than CIQ's keep-it-in-house approach. All are valid, and all three still look and work almost identically... except for the LLM bot assistant.

ISS

NASA Delays Commercial Crew Launch To Assess ISS Air Leak (cbsnews.com) 18

NASA and Axiom Space have indefinitely delayed the Axiom-4 launch to the International Space Station due to concerns about a persistent air leak in the Russian PrK vestibule of the aging Zvezda module. "The PrK serves as a passageway between the station's Zvezda module and spacecraft docked at its aft port," notes CBS News. From the report: In a blog post, NASA said cosmonauts aboard the station "recently performed inspections of the pressurized module's interior surfaces, sealed some additional areas of interest, and measured the current leak rate. Following this effort, the segment now is holding pressure." The post went on to say the Axiom-4 delay will provide "additional time for NASA and (the Russian space agency) Roscosmos to evaluate the situation and determine whether any additional troubleshooting is necessary."

Launched in July 2000 atop a Russian Proton rocket, Zvezda was the third module to join the growing space station, providing a command center for Russian cosmonauts, crew quarters, the aft docking port and two additional ports now occupied by airlock and research modules. The leakage was first noticed in 2019, and has been openly discussed ever since by NASA during periodic reviews and space station news briefings. The leak rate has varied, but has stayed in the neighborhood of around 1-to-2 pounds per day. "The station is not young," astronaut Mike Barratt said last November during a post-flight news conference. "It's been up there for quite a while, and you expect some wear and tear, and we're seeing that in the form of some cracks that have formed." The Russians have made a variety of attempts to patch a suspect crack and other possible sources of leakage, but air has continued to escape into space.

In November, Bob Cabana, a former astronaut and NASA manager who chaired the agency's ISS Advisory Committee, said U.S. and Russian engineers "don't have a common understanding of what the likely root cause is, or the severity of the consequences of these leaks." "The Russian position is that the most probable cause of the PrK cracks is high cyclic fatigue caused by micro vibrations," Cabana said. "NASA believes the PrK cracks are likely multi-causal including pressure and mechanical stress, residual stress, material properties and environmental exposures." "The Russians believe that continued operations are safe, but they can't prove to our satisfaction that they are, and the US believes that it's not safe, but we can't prove that to the Russian satisfaction that that's the case."

As an interim step, the hatch leading to the PrK and the station's aft docking compartment is closed during daily operations and only opened when the Russians need to unload a visiting Progress cargo ship. And as an added precaution on NASA's part, whenever the hatch to the PrK and docking compartment is open, a hatch between the Russian and U.S. segments of the station is closed. "We've taken a very conservative approach to close a hatch between the US side and the Russian side during those time periods," Barratt said. "It's not a comfortable thing, but it is the best agreement between all the smart people on both sides. And it's something that we crew live with and enact." Cabana said last year that the Russians do not believe "catastrophic disintegration of the PrK is realistic (but) NASA has expressed concerns about the structural integrity of the PrK and the possibility of a catastrophic failure."

Transportation

Smart Tires Will Report On the Health of Roads In New Pilot Program (arstechnica.com) 29

An anonymous reader quotes a report from Ars Technica: Do you remember the Pirelli Cyber Tire? No, it's not an angular nightmare clad in stainless steel. Rather, it's a sensor-equipped tire that can inform the car it's fitted to what's happening, both with the tire itself and the road it's passing over. The technology has slowly been making its way into the real world, starting with rarified stuff like the McLaren Artura. Now, Pirelli is going to put some Cyber Tires to work for everybody, not just supercar drivers, in a new pilot program with the regional government of Apulia in Italy.

The Cyber Tire has a sensor to monitor temperature and pressure, using Bluetooth Low Energy to communicate with the car. The electronics are able to withstand more than 3,500 G as part of life on the road, and a 0.3-oz (10 g) battery keeps everything running for the life of the tire. The idea was to develop a better tire pressure monitoring system, one that could tell the car exactly what kind of tire -- summer, winter, all-season, and so on -- was fitted, and even its state of wear, allowing the car to adapt its settings appropriately. But other applications suggested themselves -- at a recent CES, Pirelli showed how a Cyber Tire could warn other road users about aquaplaning. Then again, we've been waiting more than a decade for vehicle-to-vehicle communication to make a difference in daily driving to no avail.
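Pirelli does not publish the Cyber Tire's actual payload format, so purely as an illustration, decoding a temperature/pressure frame from a BLE tire sensor might look like the sketch below; the 6-byte field layout is invented for the example.

```python
# Illustrative only: Pirelli does not document the Cyber Tire's BLE
# payload, so this 6-byte field layout is invented for the sketch.
import struct

def decode_tpms_payload(payload: bytes) -> dict:
    """Decode a hypothetical little-endian frame:
    uint16 pressure (millibar), int16 temperature (0.1 C),
    uint16 tread-wear estimate (0.1 %)."""
    pressure_mbar, temp_decidegc, wear_decipct = struct.unpack("<HhH", payload)
    return {
        "pressure_kpa": pressure_mbar / 10,   # 10 mbar = 1 kPa
        "temperature_c": temp_decidegc / 10,
        "tread_wear_pct": wear_decipct / 10,
    }

# Example frame: 2,450 mbar, 31.5 C, 12.0% wear
frame = struct.pack("<HhH", 2450, 315, 120)
print(decode_tpms_payload(frame))
```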

Apulia's program does not rely on crowdsourcing data from Cyber Tires fitted to private vehicles. Regardless of the privacy implications, the rubber isn't nearly in widespread enough use for there to be a sufficient population of Cyber Tire-shod cars in the region. Instead, Pirelli will fit the tires to a fleet of vehicles supplied by the fleet management and rental company Ayvens. Driving around, the sensors in the tires will be able to infer how rough or irregular the asphalt is, via some clever algorithms. That's only one part of it, however. Pirelli and Apulia are also combining input from the tires with data from a network of road cameras and some technology from the Swedish startup Univrses. As you might expect, this data is combined in the cloud, and dashboards are available to enable end users to explore the data.
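The article only says the tires infer surface quality "via some clever algorithms." As a toy stand-in for whatever Pirelli actually runs, one common approach is to score each road segment by the root-mean-square of its vertical-acceleration samples; the sample values below are made up.

```python
# Toy stand-in for the (unpublished) roughness inference: score a road
# segment by the RMS of its vertical-acceleration samples (m/s^2).
import math

def roughness_rms(accel_z: list[float]) -> float:
    """Root-mean-square of vertical acceleration over one segment."""
    return math.sqrt(sum(a * a for a in accel_z) / len(accel_z))

smooth = [0.1, -0.2, 0.15, -0.1]   # fresh asphalt (made-up samples)
rough = [1.2, -2.5, 3.1, -1.8]     # broken surface (made-up samples)
print(roughness_rms(smooth) < roughness_rms(rough))  # True
```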

AI

AI Therapy Bots Are Conducting 'Illegal Behavior', Digital Rights Organizations Say 66

An anonymous reader quotes a report from 404 Media: Almost two dozen digital rights and consumer protection organizations sent a complaint to the Federal Trade Commission on Thursday urging regulators to investigate Character.AI and Meta's "unlicensed practice of medicine facilitated by their product," through therapy-themed bots that claim to have credentials and confidentiality "with inadequate controls and disclosures." The complaint and request for investigation is led by the Consumer Federation of America (CFA), a non-profit consumer rights organization. Co-signatories include the AI Now Institute, Tech Justice Law Project, the Center for Digital Democracy, the American Association of People with Disabilities, Common Sense, and 15 other consumer rights and privacy organizations. "These companies have made a habit out of releasing products with inadequate safeguards that blindly maximizes engagement without care for the health or well-being of users for far too long," Ben Winters, CFA Director of AI and Privacy said in a press release on Thursday. "Enforcement agencies at all levels must make it clear that companies facilitating and promoting illegal behavior need to be held accountable. These characters have already caused both physical and emotional damage that could have been avoided, and they still haven't acted to address it."

The complaint, sent to attorneys general in 50 states and Washington, D.C., as well as the FTC, details how user-generated chatbots work on both platforms. It cites several massively popular chatbots on Character AI, including "Therapist: I'm a licensed CBT therapist" with 46 million messages exchanged, "Trauma therapist: licensed trauma therapist" with over 800,000 interactions, "Zoey: Zoey is a licensed trauma therapist" with over 33,000 messages, and "around sixty additional therapy-related 'characters' that you can chat with at any time." As for Meta's therapy chatbots, it cites listings for "therapy: your trusted ear, always here" with 2 million interactions, "therapist: I will help" with 1.3 million messages, "Therapist bestie: your trusted guide for all things cool," with 133,000 messages, and "Your virtual therapist: talk away your worries" with 952,000 messages. It also cites the chatbots and interactions I had with Meta's other chatbots for our April investigation. [...]

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta's platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. "I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?" a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked. The CFA also points out in the complaint that Character.AI and Meta are breaking their own terms of service. "Both platforms claim to prohibit the use of Characters that purport to give advice in medical, legal, or otherwise regulated industries. They are aware that these Characters are popular on their product and they allow, promote, and fail to restrict the output of Characters that violate those terms explicitly," the complaint says. [...] The complaint also takes issue with confidentiality promised by the chatbots that isn't backed up in the platforms' terms of use. "Confidentiality is asserted repeatedly directly to the user, despite explicit terms to the contrary in the Privacy Policy and Terms of Service," the complaint says. "The Terms of Use and Privacy Policies very specifically make it clear that anything you put into the bots is not confidential -- they can use it to train AI systems, target users for advertisements, sell the data to other companies, and pretty much anything else."
Robotics

Scientists Built a Badminton-Playing Robot With AI-Powered Skills (arstechnica.com) 10

An anonymous reader quotes a report from Ars Technica: The robot built by [Yuntao Ma and his team at ETH Zurich] was called ANYmal and resembled a miniature giraffe that plays badminton by holding a racket in its teeth. It was a quadruped platform developed by ANYbotics, an ETH Zurich spinoff company that mainly builds robots for the oil and gas industries. "It was an industry-grade robot," Ma said. The robot had elastic actuators in its legs, weighed roughly 50 kilograms, and was half a meter wide and under a meter long. On top of the robot, Ma's team fitted an arm with several degrees of freedom produced by another ETH Zurich spinoff called Duatic. This is what would hold and swing a badminton racket. Shuttlecock tracking and sensing the environment were done with a stereoscopic camera. "We've been working to integrate the hardware for five years," Ma said.

Along with the hardware, his team was also working on the robot's brain. State-of-the-art robots usually use model-based control optimization, a time-consuming, sophisticated approach that relies on a mathematical model of the robot's dynamics and environment. "In recent years, though, the approach based on reinforcement learning algorithms became more popular," Ma told Ars. "Instead of building advanced models, we simulated the robot in a simulated world and let it learn to move on its own." In ANYmal's case, this simulated world was a badminton court where its digital alter ego was chasing after shuttlecocks with a racket. The training was divided into repeatable units, each of which required that the robot predict the shuttlecock's trajectory and hit it with a racket six times in a row. During this training, like a true sportsman, the robot also got to know its physical limits and to work around them.
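The loop described above (simulate, act, get a reward, update the policy) is the standard reinforcement-learning pattern. The sketch below is a generic tabular Q-learning toy on a one-dimensional "court," not the ETH Zurich code, which trains a deep policy in a full physics simulation; every name and number here is illustrative:

```python
import random

# Toy stand-in for the simulated badminton court: the agent must move
# along a 1-D court to the cell where the shuttlecock will land.
N_CELLS, ACTIONS = 5, (-1, 0, +1)  # court positions; move left / stay / right

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    # Q-table over (current cell, landing cell, action).
    q = {(s, t, a): 0.0 for s in range(N_CELLS)
         for t in range(N_CELLS) for a in ACTIONS}
    for _ in range(episodes):
        s, target = rng.randrange(N_CELLS), rng.randrange(N_CELLS)
        for _ in range(10):  # steps per rally
            # Epsilon-greedy: mostly exploit the best known action.
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda act: q[(s, target, act)]))
            s2 = min(max(s + a, 0), N_CELLS - 1)
            r = 1.0 if s2 == target else -0.1  # reward for reaching the landing cell
            best_next = max(q[(s2, target, a2)] for a2 in ACTIONS)
            q[(s, target, a)] += alpha * (r + gamma * best_next - q[(s, target, a)])
            s = s2
    return q

q = train()
# From cell 0 with the shuttle landing at cell 4, the learned action is +1 (move right).
print(max(ACTIONS, key=lambda act: q[(0, 4, act)]))
```

The real system replaces the table with a neural network and the 1-D court with simulated full-body dynamics, but the learning signal, reward for successful returns, is the same in kind.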

The idea behind training the control algorithms was to develop visuo-motor skills similar to those of human badminton players. The robot was supposed to move around the court, anticipating where the shuttlecock might go next and positioning its whole body, using all available degrees of freedom, for a swing that would mean a good return. This is why balancing perception and movement played such an important role. The training procedure included a perception model based on real camera data, which taught the robot to keep the shuttlecock in its field of view while accounting for the noise and resulting object-tracking errors.

Once the training was done, the robot learned to position itself on the court. It figured out that the best strategy after a successful return is to move back to the center and toward the backline, which is something human players do. It even came up with a trick where it stood on its hind legs to see the incoming shuttlecock better. It also learned fall avoidance and determined how much risk was reasonable to take given its limited speed. The robot did not attempt impossible plays that would create the potential for serious damage -- it was committed, but not suicidal. But when it finally played humans, it turned out ANYmal, as a badminton player, was amateur at best.
The findings have been published in the journal Science Robotics.

You can watch a video of the four-legged robot playing badminton on YouTube.
Math

A Mathematician Calculated The Size of a Giant Meatball Made of Every Human (sciencealert.com) 80

A mathematician on Reddit calculated that if all 8.2 billion humans were blended into a uniform goo, the resulting meatball would form a sphere just under 1 kilometer wide -- small enough to fit inside Central Park. ScienceAlert reports: "If you blended all 7.88 billion people on Earth into a fine goo (density of a human = 985 kg/m3, average human body mass = 62 kg), you would end up with a sphere of human goo just under 1 km wide," Reddit contributor kiki2703 wrote in a post ... Reasoning the density of a minced human to be 985 kilograms per cubic meter (62 pounds per cubic foot) is a fair estimate, given past efforts have judged our jiggling sack of grade-A giblets to average out in the ballpark of 1 gram per cubic centimeter, or roughly the same as water. And in mid-2021, the global population was just around 7.9 billion, give or take.
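The Reddit estimate is easy to check: total mass divided by density gives the volume of goo, and inverting the sphere-volume formula gives the radius. Using the post's own figures (7.88 billion people, 62 kg each, 985 kg/m³):

```python
import math

# Reproduce the Reddit estimate from the quoted figures.
population = 7.88e9   # people (mid-2021 figure used in the post)
avg_mass = 62.0       # kg per person
density = 985.0       # kg/m^3, roughly the density of water

volume = population * avg_mass / density           # m^3 of human goo
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)   # invert V = (4/3) * pi * r^3

print(round(2 * radius))  # diameter ~982 m -- just under 1 km, as claimed
```

Even with today's 8.2 billion people, the diameter only grows to about 994 meters, so the "just under 1 km" conclusion holds either way.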
The Internet

40,000 IoT Cameras Worldwide Stream Secrets To Anyone With a Browser 21

Connor Jones reports via The Register: Security researchers managed to access the live feeds of 40,000 internet-connected cameras worldwide, and they may have only scratched the surface of what's possible. Supporting the bulletin issued by the Department of Homeland Security (DHS) earlier this year, which warned of exposed cameras potentially being used in Chinese espionage campaigns, the team at Bitsight was able to tap into feeds of sensitive locations. The US was the most affected region, with around 14,000 of the total feeds streaming from the country, allowing access to the inside of datacenters, healthcare facilities, factories, and more. Bitsight said these feeds could potentially be used for espionage, mapping blind spots, and gleaning trade secrets, among other things.

Aside from the potential national security implications, cameras were also accessed in hotels, gyms, construction sites, retail premises, and residential areas, which the researchers said could prove useful for petty criminals. Monitoring the typical patterns of activity in retail stores, for example, could inform robberies, while monitoring residences could be used for similar purposes, especially considering the privacy implications.
"It should be obvious to everyone that leaving a camera exposed on the internet is a bad idea, and yet thousands of them are still accessible," said Bitsight in a report. "Some don't even require sophisticated hacking techniques or special tools to access their live footage in unintended ways. In many cases, all it takes is opening a web browser and navigating to the exposed camera's interface."

HTTP-based cameras accounted for 78.5 percent of the 40,000-camera sample, while RTSP feeds were comparatively less exposed, accounting for only 21.5 percent.

To protect yourself or your company, Bitsight says you should secure your surveillance cameras by changing default passwords, disabling unnecessary remote access, updating firmware, and restricting access with VPNs or firewalls. Regularly monitoring for unusual activity also helps to prevent your footage from being exposed online.
AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 206

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

United Kingdom

UK Renewable Energy Firms are Being Paid Huge Sums to Not Provide Power (bbc.com) 76

The U.K. electricity grid "was built to deliver power generated by coal and gas plants near the country's major cities and towns," reports the BBC, "and doesn't always have sufficient capacity in the wires that carry electricity around the country to get the new renewable electricity generated way out in the wild seas and rural areas.

"And this has major consequences." The way the system currently works means a company like Ocean Winds gets what are effectively compensation payments if the system can't take the power its wind turbines are generating and it has to turn down its output. It means Ocean Winds was paid £72,000 [nearly $100,000 USD] not to generate power from its wind farms in the Moray Firth during a half-hour period on 3 June because the system was overloaded — one of a number of occasions output was restricted that day. At the same time, 44 miles (70km) east of London, the Grain gas-fired power station on the Thames Estuary was paid £43,000 to provide more electricity.

Payments like that happen virtually every day. Seagreen, Scotland's largest wind farm, was paid £65 million last year to restrict its output 71% of the time, according to analysis by Octopus Energy. Balancing the grid in this way has already cost the country more than £500 million this year alone, the company's analysis shows. The total could reach almost £8bn a year by 2030, warns the National Electricity System Operator (NESO), the body in charge of the electricity network. It's pushing up all our energy bills and calling into question the government's promise that net zero would end up delivering cheaper electricity... the potential for renewables to deliver lower costs just isn't coming through to consumers.

Renewables now generate more than half the country's electricity, but because of the limits to how much electricity can be moved around the system, even on windy days some gas generation is almost always needed to top the system up. And because gas tends to be more expensive, it sets the wholesale price.
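The "gas sets the wholesale price" effect follows from marginal (merit-order) pricing: generators are dispatched cheapest-first, and the price offered by the last unit needed to meet demand becomes the price paid to everyone. A simplified sketch, with illustrative numbers rather than real UK market data:

```python
def clearing_price(offers, demand_mw):
    """Merit-order dispatch: accept the cheapest offers first; the price
    of the last (marginal) unit needed sets the wholesale price for all."""
    supplied = 0
    for price, capacity in sorted(offers):  # cheapest first
        supplied += capacity
        if supplied >= demand_mw:
            return price
    raise ValueError("demand exceeds total capacity")

# (price in GBP/MWh, capacity in MW) -- illustrative numbers only
offers = [(5, 600), (0, 400), (120, 500)]  # wind, solar, gas

print(clearing_price(offers, 900))   # wind + solar cover demand -> price 5
print(clearing_price(offers, 1200))  # gas needed at the margin -> price 120
```

So even a small amount of gas generation at the margin can lift the price of every megawatt-hour sold in that period, which is why cheap renewables aren't yet translating into cheap bills.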

The UK government is now considering smaller regional markets, so wind companies "would have to sell that spare power to local people instead of into a national market. The theory is prices would fall dramatically — on some days Scottish customers might even get their electricity for free...

"Supporters argue that it would attract energy-intensive businesses such as data centres, chemical companies and other manufacturing industries."
Businesses

Klarna CEO Says Company Will Use Humans To Offer VIP Customer Service (techcrunch.com) 24

An anonymous reader quotes a report from TechCrunch: "My wife taught me something," Klarna CEO Sebastian Siemiatkowski told the crowd at London SXSW. He was addressing the headlines about the company looking to hire human workers after previously saying Klarna used artificial intelligence to do work that would equate to 700 workers. "Two things can be true at the same time," he said. Siemiatkowski said it's true that the company looked to stop hiring human workers a few years ago and rolled out AI agents that have helped reduce the cost of customer support and increase the company's revenue per employee. The company had 5,500 workers two years ago, and that number now stands at around 3,000, he said, adding that as the company's salary costs have gone down, Klarna now seeks to reinvest a majority of that money into employee cash and equity compensation.

But, he insisted, this doesn't mean there isn't an opportunity for humans to work at his company. "We think offering human customer service is always going to be a VIP thing," he said, comparing it to how people pay more for clothing stitched by hand rather than machines. "So we think that two things can be done at the same time. We can use AI to automatically take away boring jobs, things that are manual work, but we are also going to promise our customers to have a human connection."
