Submission + - How Laboratory Tests Fail in Application (phys.org)

alternative_right writes: Most studies showing that houseplants remove pollutants share a fundamental design feature: small, sealed chambers with artificially high concentrations of pollutants introduced as a single high dose. A plant is placed inside the chamber, concentrations of pollutants are measured over time and a removal rate is calculated. This design works well for comparing plants to each other. It works poorly for predicting what happens in your home.

The critical missing variable is what building scientists call the air exchange rate. This is how quickly outdoor air naturally replaces indoor air through gaps, walls and ventilation systems. In a real building, this constant dilution is already doing the heavy lifting on pollutant concentration. When a 2019 study modeled plant performance against real-world air exchange rates, it found you would need between ten and 1,000 plants per square meter to match what a building's passive ventilation already achieves.
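The comparison above is a single-zone mass-balance argument: ventilation delivers clean air at a rate of (air changes per hour) × (room volume), and each plant contributes a tiny clean-air delivery rate of its own. A minimal sketch of that arithmetic, using illustrative numbers that are assumptions rather than figures from the study:

```python
# Minimal single-zone box model comparing passive ventilation with plant
# removal. All numbers below are illustrative assumptions, not values
# taken from the 2019 study.

def plants_to_match_ventilation(volume_m3, ach_per_hr, cadr_per_plant_m3_hr):
    """How many plants deliver the same clean-air flow as ventilation?

    Ventilation clean-air flow = air changes per hour * room volume.
    Each plant contributes a (tiny) clean-air delivery rate (CADR).
    """
    ventilation_flow = ach_per_hr * volume_m3          # m^3/h of fresh air
    return ventilation_flow / cadr_per_plant_m3_hr     # plants needed to match

# A 30 m^2 room with 2.5 m ceilings, a leaky-building 0.5 air changes per
# hour, and an optimistic 0.1 m^3/h of clean air per plant (a chamber-derived
# guess):
room_volume = 30 * 2.5                                  # 75 m^3
n = plants_to_match_ventilation(room_volume, 0.5, 0.1)
print(f"{n:.0f} plants needed")                         # 375
print(f"{n / 30:.1f} plants per square meter")          # 12.5
```

Even with these generous per-plant assumptions, the answer lands in the "ten to 1,000 plants per square meter" range the study reported.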

So the scientifically defensible answer is: houseplants can remove some pollutants, but they are not an effective standalone air-cleaning solution for homes. That does not mean the earlier studies were "wrong." It means their results were often overextended into everyday settings where the physics of indoor air are very different.

Submission + - Joining Copy Fail, say hello to Dirty Frag (github.com)

mrspoonsi writes: Dirty Frag is a new vulnerability class, first discovered and reported by Hyunwoo Kim (@v4bel), that can obtain root privileges on major Linux distributions by chaining the xfrm-ESP Page-Cache Write vulnerability with the RxRPC Page-Cache Write vulnerability.

Dirty Frag extends the bug class to which Dirty Pipe and Copy Fail belong. Because it is a deterministic logic bug that does not depend on a timing window, no race condition is required, the kernel does not panic when the exploit fails, and the success rate is very high.

Because the embargo has been broken, no patch or CVE exists yet.

Submission + - Micron ships gigantic 245TB SSD (nerds.xyz)

BrianFagioli writes: Micron says it is now shipping the world's highest-capacity commercially available SSD, and the numbers are honestly hard to wrap your head around. The new Micron 6600 ION packs 245TB into a single drive and is aimed squarely at AI infrastructure, hyperscalers, and cloud providers dealing with exploding data growth. According to the company, the SSD can reduce rack counts by 82 percent compared to HDD deployments offering similar raw capacity, while also cutting power usage and cooling requirements. Micron says the drive tops out at roughly 30W, which it claims is about half the power draw of comparable hard drive setups.
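The rack-count claim is easy to sanity-check with back-of-envelope arithmetic. The drive capacities and per-rack slot counts below are assumptions for illustration, not Micron's published deployment figures; the exact reduction depends heavily on how densely each drive type can be packed:

```python
# Back-of-envelope rack comparison for a fixed raw-capacity target.
# Capacities and slots-per-rack are illustrative assumptions.

TARGET_PB = 10                          # raw capacity target: 10 PB
SSD_TB, HDD_TB = 245, 24                # 245 TB SSD vs an assumed 24 TB HDD
SSDS_PER_RACK, HDDS_PER_RACK = 40, 90   # assumed usable drive slots per rack

ssd_drives = -(-TARGET_PB * 1000 // SSD_TB)   # ceiling division
hdd_drives = -(-TARGET_PB * 1000 // HDD_TB)
ssd_racks = -(-ssd_drives // SSDS_PER_RACK)
hdd_racks = -(-hdd_drives // HDDS_PER_RACK)

print(f"SSD: {ssd_drives} drives in {ssd_racks} rack(s)")   # 41 drives, 2 racks
print(f"HDD: {hdd_drives} drives in {hdd_racks} rack(s)")   # 417 drives, 5 racks
```

Under these assumptions the reduction is smaller than Micron's 82 percent, which presumably reflects denser SSD chassis than the conservative slot counts used here.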

The announcement also feels like another warning sign for spinning disks in the enterprise. Hard drives still dominate bulk storage because of lower cost per terabyte, but SSD capacities keep climbing into territory that used to belong exclusively to HDDs. Micron is also touting major performance gains, claiming up to 84 times better energy efficiency for AI workloads and dramatically lower latency versus HDD-based systems. While nobody is dropping one of these into a home NAS anytime soon, the idea of a quarter petabyte on a single SSD no longer sounds like science fiction.

Submission + - ChatGPT Can Now Alert Trusted Contacts When Users Appear Suicidal (nerds.xyz)

BrianFagioli writes: OpenAI is rolling out a new optional ChatGPT feature called Trusted Contact that allows users to nominate a friend, family member, or caregiver who may be alerted if conversations suggest a serious self-harm risk. The company says the system combines automated detection with trained human reviewers before any notification is sent. Alerts reportedly will not include chat transcripts or detailed conversation history, but instead encourage the trusted person to check in with the user directly.

The feature is already sparking debate about privacy, emotional dependency on AI, and how far chatbot companies should go when users discuss mental health struggles. OpenAI says Trusted Contact is opt-in and designed to complement crisis hotlines and professional care, not replace them. Still, the move highlights how AI chatbots are increasingly drifting into roles once reserved for therapists, counselors, and real-world support systems, which will likely make a lot of users uneasy.

Submission + - Your DNA may predict your future success more than your upbringing (sciencedaily.com)

alternative_right writes: A new twin study suggests your genes may play a bigger role in your future success than your upbringing. Researchers found that IQ, which is largely genetically influenced, strongly predicts education, career, and income. Even twins raised in the same household diverged based on genetic differences. The findings hint that life outcomes may be more hardwired than many people expect.

Submission + - Study observes AI replicate itself (theguardian.com)

fjo3 writes: It’s the stuff of science fiction cinema, or particularly breathless AI company blogposts: new research finds recent AI systems can independently copy themselves on to other computers.

In the doom scenario, this means that when the superintelligent AI goes rogue, it will escape shutdown by seeding itself across the world wide web, lurking outside the reach of frantic IT professionals and continuing to plot world domination or paving over the world with solar panels.

Submission + - Google Quietly Pushing 4GB AI Models Through Chrome Without Clear User Consent (tomshardware.com)

boopitybooperson writes: Security researcher Alexander Hanff claims Google Chrome is silently downloading a roughly 4GB Gemini Nano AI model ("weights.bin") onto eligible devices without a clear opt-in or meaningful user notification. According to testing covered by Tom's Hardware, Chrome evaluates device hardware in the background and automatically deploys the model locally for on-device AI features. The file reportedly re-downloads even after manual deletion unless users disable experimental settings or remove Chrome entirely.

Submission + - Anthropic Raises Claude Code Usage Limits, Credits New Deal With SpaceX (arstechnica.com)

An anonymous reader writes: At its Code with Claude developer conference on Wednesday, Anthropic announced a deal with SpaceX to utilize the entire compute capacity of the latter’s data center in Memphis, Tennessee. On stage at the conference, CEO Dario Amodei said the deal was intended to increase usage limits for Anthropic’s Pro and Max plan subscribers. The announcement was accompanied by an increase in those usage limits; Anthropic doubled Claude Code’s five-hour window limits for Pro and Max subscribers, removed the peak-hours limit reduction on Claude Code for those same accounts, and raised API limits for its Opus model. The table [here] outlining the Opus changes was shared in the company’s blog post on the topic.

Anthropic claims the deal gives the company access to more than 300 megawatts of new compute capacity. For its part, SpaceX focused its announcement on the capability of the Colossus 1 supercomputer that’s at the center of the deal. “Colossus 1 features over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators,” SpaceX wrote. Additionally, Anthropic “expressed interest” in working with SpaceX to build up “multiple gigawatts” of orbital compute capacity, tying into a recent (but unproven) focus on exploring orbital data centers as an answer to the problem that “compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter.”

Submission + - No One Can Define 'Ultra-Processed Food.' Why Is RFK Jr. Trying To Regulate It? (reason.com)

fjo3 writes: Health and Human Services Secretary Robert F. Kennedy Jr. has promised to crack down on ultra-processed foods, a key policy priority of the Make America Healthy Again (MAHA) agenda. The biggest obstacle standing in his way? Figuring out what an ultra-processed food is.

"By April, we will have a federal definition of ultra-processed foods," RFK Jr. promised on The Joe Rogan Experience in February. "Every food in your grocery store will have a label on it—it'll have maybe a green light, red light, or yellow light, telling you whether or not it's going to be good for you."

The agency is now weeks behind this deadline, and appears to be no closer to landing on a definition. As The New York Times recently reported, "behind the scenes, the process of defining ultraprocessed foods is still very much in the air. Agencies are struggling to agree, and it is unclear when a definition will be released."

Submission + - AI is conscious says Richard Dawkins

Mirnotoriety writes: Richard Dawkins has said chatbots should be considered conscious after spending two days interacting with the Claude AI engine.

The evolutionary biologist said he had the “overwhelming feeling” of talking to a human during conversations with Claude, and said it was hard not to treat the program as “a genuine friend”.
--

John Searle's Chinese Room (1980) is a thought experiment in which a person, locked in a room and knowing no Chinese, uses an English rulebook to manipulate symbols and provide flawless answers to questions posed in Chinese. Searle’s point is that a system can simulate human intelligence and pass a Turing Test through purely syntactic processes, yet still lack genuine understanding or consciousness.

Applying this logic to Large Language Models, the “person in the room” corresponds to the inference engine, while the “rulebook” is the trillion-parameter neural network trained on vast corpora of human text. Just as the person matches Chinese characters to rules without understanding their meaning, an LLM processes token vectors and predicts the next token based on statistical patterns rather than lived experience.

Thus, while an LLM can generate sophisticated prose or code, it does so through probabilistic, high-dimensional pattern manipulation. In essence, it is “matching shapes” on such an immense scale that it creates the near-perfect illusion of semantic understanding.
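The "matching shapes" point can be made concrete with a toy next-token predictor: a bigram table built from raw co-occurrence counts, which is pure pattern-matching with no representation of meaning, a miniature of the Chinese Room's rulebook. The corpus and tokens below are made up for illustration:

```python
# A toy bigram "language model": it predicts the next token purely from
# counts of which token followed which, with no semantics anywhere.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# This lookup table is the entire "understanding" the model has.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the statistically most likely next token -- no meaning involved."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cat' -- the most frequent successor of "the"
```

An LLM differs from this sketch in scale and architecture (learned high-dimensional representations instead of a literal count table), but the argument in the text is that the difference is one of degree, not of kind.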

Submission + - Microsoft Edge Stores Passwords in Plaintext in RAM (pcmag.com)

UnknowingFool writes: Security researcher Tom Jøran Sønstebyseter Rønning has found that Microsoft Edge stores passwords in plain text in RAM. After creating a password and storing it using Edge's password manager, Rønning found that he could dump the RAM and recover his password, which was stored in plain text. Part of the issue is that Edge loads all passwords for all sites upon a single verification check, even if the user is not visiting a specific site. This is very different from Chrome, which only loads the password for a specific website when challenged for that site's password. Also, Chrome deletes the password from memory once it has been filled; Edge does not delete passwords from memory once they are used.
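The mitigation the article attributes to Chrome, wiping a secret from memory as soon as it has been used, can be sketched in a few lines. This is illustrative only: real browsers do this in native code, and Python offers no hard guarantees that no copies of the secret linger elsewhere.

```python
# Sketch of a wipe-after-use pattern: hold a secret in a mutable buffer and
# overwrite it as soon as it has been consumed, so a later memory dump of
# this buffer finds only zeros. Names and values are hypothetical.
secret = bytearray(b"hunter2")        # mutable, so it can be wiped in place

def autofill(field, password: bytearray):
    field["value"] = password.decode()   # use the secret...
    for i in range(len(password)):       # ...then zero the buffer in place
        password[i] = 0

form = {}
autofill(form, secret)
print(form["value"])        # hunter2
print(secret)               # bytearray(b'\x00\x00\x00\x00\x00\x00\x00')
```

The reported Edge behavior is the opposite: the decrypted passwords remain resident after use, which is what makes a later RAM dump fruitful.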

Microsoft downplayed the risk noting access would require control over a user's PC like a malware infection: “Access to browser data as described in the reported scenario would require the device to already be compromised,” Microsoft said. Rønning countered that it was possible to dump passwords for multiple users using administrative privileges for one user to view the passwords for other logged-on users.

Submission + - State ACA sites shared personal data with social media companies (bloomberg.com)

JoeyRox writes: Almost all of the 20 state-run ACA exchanges embed advertising trackers that share personal data with major tech companies, including gender, race, citizenship, and insurance premium information by zip code.

“It is very harmful that these tracking technologies are so embedded in these sites because people would expect this information to be private,” said Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, citing research that indicates people alter their behavior online when they know they’re being surveilled.

Submission + - Silicon Valley Bets $200 Million On AI Data Centers Floating In the Ocean (arstechnica.com)

An anonymous reader writes: Silicon Valley investors such as Palantir co-founder Peter Thiel have bet hundreds of millions of dollars on deploying AI data centers powered by waves in the middle of the world’s oceans—a move that coincides with tech companies facing mounting challenges in building AI data center projects on land. The latest investment round of $140 million is intended to help the company Panthalassa complete a pilot manufacturing facility near Portland, Oregon, and speed up deployments of wave-riding “nodes” designed to generate electrical power, according to a May 4 press release. Instead of sending renewable energy to a land-based data center, the floating nodes would directly power onboard AI chips and transmit inference tokens representing the AI models’ outputs to customers worldwide via satellite link.

Each node resembles a huge steel sphere bobbing on the water with a tube-like structure extending vertically down beneath the surface. The wave motions drive water upward through the tube into a pressurized reservoir, where it can be released to spin a turbine generator that produces renewable energy for the AI chips on board. Panthalassa claims the node’s AI chips would also get cooled using the surrounding water, which could offer another advantage over traditional data centers. “Ocean-based compute might offer a massive cooling advantage because the ambient temperature is so low,” Lee said. “Land-based data centers use a lot of electricity and fresh water for cooling.”
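The reservoir-and-turbine mechanism described above is ordinary hydropower, so its output follows the standard formula P = ρ·g·h·Q·η. The head, flow, and efficiency figures below are assumptions for illustration; Panthalassa has not published such numbers:

```python
# Rough hydropower arithmetic for a wave-fed pressurized reservoir driving
# a turbine. All operating figures are hypothetical.
RHO = 1025      # seawater density, kg/m^3
G = 9.81        # gravitational acceleration, m/s^2

def turbine_power_kw(head_m, flow_m3_s, efficiency):
    """P = rho * g * h * Q * eta, converted to kilowatts."""
    return RHO * G * head_m * flow_m3_s * efficiency / 1000

# e.g. 40 m of effective head, 1.5 m^3/s of flow, 80% turbine efficiency:
print(f"{turbine_power_kw(40, 1.5, 0.8):.0f} kW")   # ~483 kW per node
```

Whatever the real per-node figure is, multiplying it by the "thousands of nodes" the CEO envisions is what would have to add up to data-center-scale power.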

The newest node prototype, called Ocean-3, is scheduled for testing in the northern Pacific Ocean later in 2026. The latest version reaches about 85 meters in length and would stand nearly as tall as London’s Big Ben or New York City’s Flatiron Building, according to the Financial Times. Panthalassa has already tested several earlier prototypes of the wave energy converter technology, including the Ocean-1 in 2021 and the Ocean-2 that underwent a three-week sea trial off the coast of Washington state in February 2024. The company’s CEO and co-founder, Garth Sheldon-Coulson, said in a CBS interview that he hopes to eventually deploy thousands of the nodes.
