Science

Smelling This One Specific Scent Can Boost the Brain's Gray Matter (sciencealert.com) 42

"According to a new study, wearing the right kind of perfume or cologne can enlarge your brain's gray matter," writes ScienceAlert Researchers from Kyoto University and the University of Tsukuba in Japan asked 28 women to wear a specific rose scent oil on their clothing for a month, with another 22 volunteers enlisted as controls who put on plain water instead. Magnetic resonance imaging ( MRI) scans showed boosts in the gray matter volume of the rose scent participants.

While an increase in brain volume doesn't necessarily translate into more thinking power, the findings could have implications for neurodegenerative conditions such as dementia. "This study is the first to show that continuous scent inhalation changes brain structure," write the researchers in their published paper. We've seen scents like this improve memory and cognitive performance, but here the team wanted to try a longer-term experiment to see how triggering our sense of smell might lead to measurable changes in brain structure...

It's difficult to pin down exactly what's causing this boost in gray matter. One possibility raised by the researchers is that the rose scent is actually labeled as unpleasant by the brain, with the subsequent emotional regulation responsible for the posterior cingulate cortex (PCC) working harder and increasing in size. The researchers hope that the findings could be useful in the development of aromatherapies that boost mental health and brain plasticity...

The research was published in the Brain Research Bulletin.

Power

Wave Energy Projects Have Come a Long Way After 10 Years (eurekalert.org) 44

They offer "a self-sustaining power solution for marine regions," according to a newly published 41-page review after "pioneering use in wave energy harvesting in 2014". Ten years later, researchers have developed several structures for these "triboelectric nanogenerators" (TENGs) to "facilitate their commercial deployment." But there's a lack of "comprehensive summaries and performance evaluations".

So the review "distills a decade of blue-energy research into six design pillars" for next-generation technology, writes EurekaAlert, which points the way "to self-powered ocean grids, distributed marine IoT, and even hydrogen harvested from the sea itself..." By "translating chaotic ocean motion into deterministic electron flow," the team "turns every swell, gust and glint of sunlight into dispatchable power — ushering in an era where the sea itself becomes a silent, self-replenishing power plant."

Some insights:

- Multilayer stacks, origami folds and magnetic-levitation frames push volumetric power density...three orders of magnitude above first-generation prototypes.

- Frequency-complementary couplings of TENG, EMG and PENG create full-spectrum harvesters that deliver 117% power-conversion efficiency in real waves.

- Pendulum, gear and magnetic-multiplier mechanisms translate chaotic 0.1-2 Hz swells into stable high-frequency oscillations, multiplying average power 14-fold.

- Resonance-tuned structures now span 0.01-5 Hz, locking onto shifting wave spectra across seasons and sea states.

- Spherical, dodecahedral and tensegrity architectures harvest six-degree-of-freedom motion, eliminating orientational blind spots.

- Single devices co-harvest wave, wind and solar inputs, powering self-charging buoys that cut battery replacement to zero...

Another new wave energy project is moving forward, according to the blog Renewable Energy World: Eco Wave Power, an onshore wave energy technology company, announced that its U.S. pilot project at the Port of Los Angeles has successfully completed operational testing and achieved a new milestone: the lowering of its floaters into the water for the first time. The moment, broadcast live by Good Morning America, follows the finalization of all installation works at the project site, including full installation of all wave energy floaters; connection of hydraulic pipes and supporting infrastructure; and placement of the onshore energy conversion unit.

With installation completed, Eco Wave Power has now officially entered the operational phase of its U.S. expansion... [Inna Braverman, founder and CEO of Eco Wave Power] said "This pilot station is a vital step in demonstrating how wave energy can be harnessed using existing marine infrastructure, while laying the groundwork for full-scale commercialization in the United States...." Eco Wave Power's patented onshore wave energy system attaches floaters to existing marine structures. The up-and-down motion of the waves drives hydraulic cylinders, which send pressurized fluid to a land-based energy conversion unit that generates electricity... The U.S. Department of Energy's National Renewable Energy Laboratory estimates that wave energy has the potential to generate over 1,400 terawatt-hours per year — enough to power approximately 130 million homes.

Eco Wave Power's 404.7 MW global project pipeline also includes upcoming operational sites in Taiwan, India, and Portugal, alongside its grid-connected station in Israel.

Long-time Slashdot reader PongoX11 also brings word of a company building a "simple" floating rig to turn wave motion into electricity, calling it "a steel can that moves water around" and wondering if "This one might work!"

The news site TechEBlog points out that "Unlike old-school wave energy systems with clunky mechanical parts, Ocean-2 rocks a modular, flexible setup that rolls with the ocean's flow." At about 10 meters wide [30 feet wide, and 260 feet long!], it is made from materials designed to (hopefully) withstand the ocean's abuse, over some maintenance cycle. It's designed for the deep ocean, so solving this technically is the first big challenge. Figuring out how to use/monetize all that cheap energy out in the middle of nowhere will be the next.
"Ocean-2 works with the ocean, not against it, so we can generate power without messing up marine life," said Panthalassa's CEO, Dr. Elena Martinez, according to TechEBlog: Tests in Puget Sound, done with Everett Ship Repair, showed it pumping out up to 50 kilowatts in decent conditions — enough juice for a small coastal town. "We're thinking big," Martinez said in a press release. "Ocean-2 is just the start, but we're already planning bigger arrays that could crank out gigawatts..." Looking forward, Panthalassa sees Ocean-2 as part of a massive wave energy network. By 2030, they're aiming to roll out arrays that could power whole coastal cities, cutting down on fossil fuel use.

Google

Google Has Eliminated 35% of Managers Overseeing Small Teams in Past Year, Exec Says (cnbc.com) 30

Google has eliminated more than one-third of its managers overseeing small teams, an executive told employees last week, as the company continues its focus on efficiencies across the organization. From a report: "Right now, we have 35% fewer managers, with fewer direct reports" than at this time a year ago, said Brian Welle, vice president of people analytics and performance, according to audio of an all-hands meeting reviewed by CNBC. "So a lot of fast progress there."

At the meeting, employees asked Welle and other executives about job security, "internal barriers" and Google's culture after several recent rounds of layoffs, buyouts and reorganizations. Welle said the idea is to reduce bureaucracy and run the company more efficiently. "When we look across our entire leadership population -- that's managers, directors and VPs -- we want them to be a smaller percentage of our overall workforce over time," he said.

Education

South Korea Bans Phones in School Classrooms Nationwide (bbc.com) 25

South Korea has passed a bill banning the use of mobile phones and smart devices during class hours in schools -- becoming the latest country to restrict phone use among children and teens. From a report: The law, which comes into effect from the next school year in March 2026, is the result of a bipartisan effort to curb smartphone addiction, as more research points to its harmful effects. Lawmakers, parents and teachers argue that smartphone use is affecting students' academic performance and takes away time they could have spent studying.

Submission + - Hosting.com acquires Rocket.net to expand global WordPress hosting business (nerds.xyz)

BrianFagioli writes: Hosting.com has acquired Rocket.net, bringing the fast-growing managed WordPress hosting company under its corporate umbrella. The move gives hosting.com a proven SaaS platform and a strong brand in WordPress hosting, while Rocket.net gains the capital and global reach of a much larger player. Financial details of the deal were not disclosed.

Rocket.net will continue to operate under its own name, but it is now part of hosting.com's family of brands. As part of the deal, Rocket.net founder and CEO Ben Gabler has been appointed Chief Product Officer at hosting.com, where he will lead product and software engineering across the entire company.

Here at NERDS.xyz, we use Rocket.net ourselves, and we absolutely love it. The service has been rock solid for us, with top-tier performance and customer support that actually feels personal. That kind of reliability and responsiveness is rare in the hosting world.

Rocket.net has been on a rapid rise since its founding in 2020. In 2025, it ranked 167th on the Inc. 5000 list of the fastest-growing companies in the United States. The company has also boasted a 98 percent customer satisfaction rate while powering some of the largest WordPress sites in the world.

For hosting.com, the acquisition strengthens its ability to serve a wider range of customers. The company, founded in 2019, already operates more than 20 data centers, powers over 3 million websites, and serves 600,000 customers worldwide with a team of 900 employees.

Jessica Frick, a veteran of Automattic and Pressable, will continue to run Rocket.net as General Manager, reporting to Gabler. "I'm beyond excited by our new partnership with hosting.com. It will immediately put the Rocket.net platform in front of hundreds of thousands of hosting.com customers," she said.

The Rocket.net platform will now be rolled out across hosting.com's global footprint, including the USA, UK, Germany, and Singapore, as well as new regions such as Mexico, the UAE, and Australia.

Both companies stress that their commitment to WordPress and open source will remain intact. Hosting.com already sponsors global WordCamps and encourages employees to contribute to the WordPress project, while Rocket.net has long positioned itself as a champion of the open web.

In plain terms, folks, Rocket.net is now owned by hosting.com. The brand lives on, but the acquisition means Rocket.net is no longer an independent company.

AMD

AMD Blames Motherboard Makers For Burnt-Out CPUs (arstechnica.com) 35

An anonymous reader quotes a report from Ars Technica: AMD's X3D-series Ryzen chips have become popular with PC gamers because games in particular happen to benefit disproportionately from the chips' extra 64MB of L3 cache memory. But that extra memory occasionally comes with extra headaches. Not long after they were released earlier this year, some early adopters started having problems with their CPUs, ranging from failure to boot to actual physical scorching and burnout -- the problems were particularly common for users of the 9800X3D processor in ASRock motherboards, and one Reddit thread currently records 157 incidents of failure for that CPU model across various ASRock boards.

In an interview with the Korean language website Quasar Zone (via Tom's Hardware), AMD executives David McAfee and Travis Kirsch acknowledged the problems and pointed to the most likely culprit: motherboard makers who don't follow AMD's recommended specifications. Some manufacturers have historically shipped their AMD and Intel motherboards with elevated default power settings in the interest of squeezing a bit more performance out of the chips -- but those adjustments can also cause problems in some cases, especially for higher-end CPUs.

Python

Survey Finds More Python Developers Like PostgreSQL, AI Coding Agents - and Rust for Packages (jetbrains.com) 85

More than 30,000 Python developers from around the world answered questions for the Python Software Foundation's annual survey — and PSF Fellow Michael Kennedy tells the Python community what they've learned in a new blog post. Some highlights: Most still use older Python versions despite benefits of newer releases... Many of us (15%) are running on the very latest released version of Python, but more likely than not, we're using a version a year old or older (83%). [Although less than 1% are using "Python 3.5 or lower".] The survey also indicates that many of us are using Docker and containers to execute our code, which makes this 83% or higher number even more surprising... You simply choose a newer runtime, and your code runs faster. CPython has been extremely good at backward compatibility. There's rarely significant effort involved in upgrading... [He calculates some cloud users are paying between $420,000 and $5.6 million more in compute costs.] If your company realizes you are burning an extra $0.4M-$5M a year because you haven't gotten around to spending the day it takes to upgrade, that'll be a tough conversation...
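Kennedy's cost argument is plain multiplication. A minimal sketch of that back-of-envelope math, where the fleet spend and the speedup are illustrative assumptions, not figures from the survey:

```python
# Back-of-envelope upgrade math. Both figures below are assumptions for
# illustration; they are not numbers from the survey.
annual_compute_cost = 4_000_000  # dollars per year spent on compute (assumed)
speedup_fraction = 0.10          # assumed throughput gain from a newer CPython

# A workload that runs ~10% faster needs ~10% less compute for the same work.
savings = annual_compute_cost * speedup_fraction
print(f"${savings:,.0f} per year saved by upgrading")  # $400,000 per year saved by upgrading
```

Scale the assumed spend up or down and the savings scale with it, which is how a day of upgrade work can be worth anywhere from $0.4M to several million.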

Rust is how we speed up Python now... The Python Language Summit of 2025 revealed that "Somewhere between one-quarter and one-third of all native code being uploaded to PyPI for new projects uses Rust", indicating that "people are choosing to start new projects using Rust". Looking into the survey results, we see that Rust usage grew from 27% to 33% for binary extensions to Python packages... [The blog post later advises Python developers to learn to read basic Rust, "not to replace Python, but to complement it," since Rust "is becoming increasingly important in the most significant portions of the Python ecosystem."]

PostgreSQL is the king of Python databases, and it's only growing, going from 43% to 49%. That's +14% relative growth year over year, which is remarkable for a 28-year-old open-source project... [E]very single database in the top six grew in usage year over year. This is likely another indicator that web development itself is growing again, as discussed above...

[N]early half of the respondents (49%) plan to try AI coding agents in the coming year. Program managers at major tech companies have stated that they almost cannot hire developers who don't embrace agentic AI. The productivity delta between those using it and those who avoid it is simply too great (estimated at about 30% greater productivity with AI).

It's their eighth annual survey (conducted in collaboration with JetBrains last October and November). But even though Python is 34 years old, it's still evolving. "In just the past few months, we have seen two new high-performance typing tools released," notes the blog post. (The ty and Pyrefly typecheckers — both written in Rust.) And Python 3.14 will be the first version of Python to completely support free-threaded Python... Just last week, the steering council and core developers officially accepted this as a permanent part of the language and runtime... Developers and data scientists will have to think more carefully about threaded code with locks, race conditions, and the performance benefits that come with it. Package maintainers, especially those with native code extensions, may have to rewrite some of their code to support free-threaded Python so they themselves do not enter race conditions and deadlocks.

There is a massive upside to this as well. I'm currently writing this on the cheapest Apple Mac Mini M4. This computer comes with 10 CPU cores. That means until this change manifests in Python, the maximum performance I can get out of a single Python process is 10% of what my machine is actually capable of. Once free-threaded Python is fully part of the ecosystem, I should get much closer to maximum capacity with a standard Python program using threading and the async and await keywords.
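The scaling claim can be sketched with ordinary `threading` code. On today's GIL builds the threads below compute the right answer but take turns on one core; on a free-threaded (PEP 703) build they can run on all cores at once. The workload and names are illustrative:

```python
import threading

def busy_sum(n):
    # CPU-bound work: serialized by the GIL on today's builds, but able to
    # run in parallel on a free-threaded (PEP 703) build of Python.
    return sum(i * i for i in range(n))

N, WORKERS = 100_000, 4  # illustrative sizes
results = [0] * WORKERS

def worker(idx):
    results[idx] = busy_sum(N)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(r == busy_sum(N) for r in results))  # True: same answer either way
```

The code is identical under both runtimes; only the wall-clock time changes, which is why free-threading can land without a rewrite for pure-Python code (native extensions are another story).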

Some other notable findings from the survey:
  • Data science is now over half of all Python. This year, 51% of all surveyed Python developers are involved in data exploration and processing, with pandas and NumPy being the tools most commonly used for this.
  • Exactly 50% of respondents have less than two years of professional coding experience! And 39% have less than two years of experience with Python (even in hobbyist or educational settings)...
  • "The survey tells us that one-third of devs contributed to open source. This manifests primarily as code and documentation/tutorial additions."

Firefox

Firefox 142's Link Previews Have a New Option: AI-Generated Summaries (theregister.com) 73

"Good news, everyone! The new version of Mozilla's browser now makes even more extensive use of AI," writes the Register, "providing summaries of linked content and offering developers the ability to add LLM support to extensions." Firefox 142 brings some visible shininess, but due to the combination of regional restrictions and Mozilla's progressive rollout system, not everybody can see all the features just yet... Not geofenced but subject to phased rollout are link previews, for various native-English-speaking regions. Hover over, long-press, or right-click a link and pick Preview Link, and a summary should appear. Mozilla's summary says: "Previews can optionally include AI-generated key points, which are processed on your device to protect your privacy."
"Link Previews is gradually rolling out to ensure performance and quality," Firefox says in their release notes, "and is now available in en-US, en-CA, en-GB, en-AU for users with more than 3 GB of available RAM." (The notes also add a welcome for "the developers who contributed their first code change to Firefox in this release, 20 of whom were brand new volunteers!")

The Register notes that Firefox 142 also gives developers the ability to add LLM support to extensions using wllama, a Wasm binding interfacing with llama.cpp, which lets you run Meta's Llama LLM and other models, locally or in the cloud.

Submission + - Google TV and Android TV apps must support 64-bit starting August 2026 (nerds.xyz)

BrianFagioli writes: Google is preparing to bring its television platforms in line with the rest of Android. Starting August 1, 2026, both Google TV and Android TV will require app updates that include native code to provide 64-bit support. The move follows similar requirements for phones and tablets, and it paves the way for upcoming 64-bit TV devices.

For developers, the change means any new app or update with native code must include an arm64 version alongside the existing 32-bit variant. Apps that fail to do so will not be accepted on Google Play for TV devices. Google says the shift will bring better performance, shorter start times, and enable new experiences on future hardware.

If an app targets Android 15 or higher, it must also be compatible with 16 KB memory page sizes, another requirement meant to ensure smoother performance across modern systems. While 32-bit support will continue, apps can no longer be 32-bit only. Developers will need to ship both versions using App Bundle ABI splits.

Google stresses that this requirement only impacts apps containing native code, such as those with .so libraries. Tools like the APK Analyzer can help identify whether an app includes these files. To test the changes, developers can use the Google TV emulator for Apple Silicon Macs or physical hardware like the Nvidia Shield, which supports both 32-bit and 64-bit userspaces. Pixel phones running Android 7 or newer can also be configured to mimic TV resolution and DPI for sideload testing.

The company recommends developers begin checking their apps now and updating code where needed. By August 2026, all TV apps with native code must be ready for 64-bit and 16 KB page size compliance in order to remain on Google Play.

Facebook

Whistleblower Alleges Meta Artificially Boosted Shops Ads Performance (adweek.com) 8

An anonymous reader quotes a report from Adweek: Meta wanted advertisers to believe its ecommerce ad product, Shops ads, was outperforming the competition, per a whistleblower complaint filed in a U.K. court. The former employee alleges the social media giant artificially inflated return on ad spend (ROAS) by counting shipping fees as revenue, subsidizing bids in ad auctions, and applying undisclosed discounts. The complaint, viewed by ADWEEK, was filed with the London Central Employment Tribunal on Wednesday (August 20) by Samujjal Purkayastha, a former product manager on Meta's Shops ads team. The document claims Meta artificially inflated performance metrics to push brands toward its fledgling ecommerce ad product.

The company's motivation, the complaint says, was in part to combat Apple's 2021 privacy changes that cut the troves of iOS tracking information that had long powered Meta's ad machine. Meta's former chief financial officer (CFO), David Wehner, said the changes would cost "on the order of $10 billion" in losses during the company's Q4 2021 earnings call. User purchases on Facebook or Instagram Shops pages would provide more first-party data, however. Purkayastha, who joined Meta (then Facebook) in 2020 as a product manager on the Facebook Artificial Intelligence Applied Research team, was reassigned to the Shops Ads team in March 2022 and remained at the company until Feb. 19, 2025, when he was terminated.

He alleged that during internal reviews in early 2024, Meta data scientists found the return on ad spend (ROAS) from Shops ads had been inflated between 17% and 19%. This discrepancy stemmed from Meta counting shipping fees and taxes as part of a sale, even though that money never went to merchants, he alleged. The company's other ad products exclude those figures, in line with competitors like Google, the complaint reads. Without including the fees and taxes, Shops ads performed no better than Meta's traditional ads, Purkayastha claimed. "This was significant," the complaint reads. "In addition to the ROAS performance metric being overstated by nearly a fifth, it meant that, rather than having exceeded our primary target, the Shops Ads team had in fact missed it once the figure was reduced to take account of the artificial inflation."

Purkayastha raised these concerns with senior leadership in multiple meetings between 2022 and 2024, and is now seeking interim relief through his employment tribunal filing to have his former position reinstated.
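The mechanics of the alleged inflation are simple to reproduce. A sketch with made-up numbers (none are from the complaint), showing how counting shipping fees and taxes as revenue moves ROAS by roughly the fifth described above:

```python
def roas(attributed_revenue, ad_spend):
    # Return on ad spend: revenue attributed to the ads / what was spent on them.
    return attributed_revenue / ad_spend

# Illustrative figures only -- not numbers from the complaint.
ad_spend = 100.0
merchandise_revenue = 500.0  # money that actually reaches merchants
shipping_and_tax = 90.0      # pass-through money that does not

honest = roas(merchandise_revenue, ad_spend)
inflated = roas(merchandise_revenue + shipping_and_tax, ad_spend)
print(f"{honest:.1f}x vs {inflated:.1f}x -> {inflated / honest - 1:.0%} inflation")
# 5.0x vs 5.9x -> 18% inflation
```

With these assumed figures the metric lands inside the 17%-19% range the data scientists reportedly found, without any change in what merchants actually earned.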

A Meta spokesperson told ADWEEK the company is "actively defending these proceedings," adding that "allegations related to the integrity of our advertising practices are without merit and we have full confidence in our performance review processes."

Science

Most Air Cleaning Devices Have Not Been Tested On People (theconversation.com) 54

A new review of nearly 700 studies on portable air cleaners found that over 90% of them were tested in empty spaces, not on people, leaving major gaps in evidence about whether these devices actually prevent infections or if they might even cause harm by releasing chemicals like ozone or formaldehyde. The Conversation reports: Many respiratory viruses, such as COVID-19 and influenza, can spread through indoor air. Technologies such as HEPA filters, ultraviolet light and special ventilation designs -- collectively known as engineering infection controls -- are intended to clean indoor air and prevent viruses and other disease-causing pathogens from spreading. Along with our colleagues across three academic institutions and two government science agencies, we identified and analyzed every research study evaluating the effectiveness of these technologies published from the 1920s through 2023 -- 672 of them in total.

These studies assessed performance in three main ways: Some measured whether the interventions reduced infections in people; others used animals such as guinea pigs or mice; and the rest took air samples to determine whether the devices reduced the number of small particles or microbes in the air. Only about 8% of the studies tested effectiveness on people, while over 90% tested the devices in unoccupied spaces.

We found substantial variation across different technologies. For example, 44 studies examined an air cleaning process called photocatalytic oxidation, which produces chemicals that kill microbes, but only one of those tested whether the technology prevented infections in people. Another 35 studies evaluated plasma-based technologies for killing microbes, and none involved human participants. We also found 43 studies on filters incorporating nanomaterials designed to both capture and kill microbes -- again, none included human testing.

AI

MIT Report: 95% of Generative AI Pilots at Companies Are Failing (fortune.com) 93

The GenAI Divide: State of AI in Business 2025, a new report published by MIT's NANDA initiative, reveals that while generative AI holds promise for enterprises, most initiatives to drive rapid revenue growth are falling flat. Fortune: Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research -- based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments -- paints a clear divide between success stories and stalled projects.

To unpack these findings, I spoke with Aditya Challapally, the lead author of the report, and a research contributor to project NANDA at MIT. "Some large companies' pilots and younger startups are really excelling with generative AI," Challapally said. Startups led by 19- or 20-year-olds, for example, "have seen revenues jump from zero to $20 million in a year," he said. "It's because they pick one pain point, execute well, and partner smartly with companies who use their tools," he added.

But for 95% of companies in the dataset, generative AI implementation is falling short. The core issue? Not the quality of the AI models, but the "learning gap" for both tools and organizations. While executives often blame regulation or model performance, MIT's research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don't learn from or adapt to workflows, Challapally explained.

Intel

Intel is Getting a $2 Billion Investment From SoftBank (cnbc.com) 23

Intel and SoftBank announced on Monday that the Japanese conglomerate will make a $2 billion investment in the embattled chipmaker. SoftBank will pay $23 per share for Intel's common stock. The investment is a vote of confidence in Intel, which has not been able to take advantage of the AI boom in advanced semiconductors and has spent heavily to stand up a manufacturing business that has yet to secure a significant customer.

"Masa and I have worked closely together for decades, and I appreciate the confidence he has placed in Intel with this investment," Intel CEO Lip-Bu Tan said in a statement. Intel shares lost 60% of their value last year, their worst performance in the company's more than half-century on the public market.

Security

Phishing Training Is Pretty Pointless, Researchers Find (scworld.com) 151

"Phishing training for employees as currently practiced is essentially useless," writes SC World, citing the presentation of two researchers at the Black Hat security conference: In a scientific study involving thousands of test subjects, eight months and four different kinds of phishing training, the average improvement rate of falling for phishing scams was a whopping 1.7%. "Is all of this focus on training worth the outcome?" asked researcher Ariana Mirian, a senior security researcher at Censys and recently a Ph.D. student at U.C. San Diego, where the study was conducted. "Training barely works..."

[Research partner Christian Dameff, co-director of the U.C. San Diego Center for Healthcare Cybersecurity] and Mirian wanted scientifically rigorous, real-world results. (You can read their academic paper here.) They enrolled more than 19,000 employees of the UCSD Health system and randomly split them into five groups, each member of which would see something different when they failed a phishing test randomly sent once a month to their workplace email accounts... Over the eight months of testing, however, there was little difference in improvement among the four groups that received different kinds of training. Those groups did improve a bit over the control group's performance — by the aforementioned 1.7%...

[A]bout 30% of users clicked on a link promising information about a change in the organization's vacation policy. Almost as many fell for one about a change in workplace dress code... Another lesson was that given enough time, almost everyone falls for a phishing email. Over the eight months of the experiment, just over 50% failed at least once.
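That last observation is compounding probability at work: if each monthly test is failed independently with probability p, the chance of failing at least once over n months is 1 - (1 - p)^n. A sketch, where the per-test rate is an assumed figure chosen to match the article's roughly 50%-over-eight-months outcome, not one reported by the study:

```python
def cumulative_failure(p, n):
    # Probability of failing at least one of n independent phishing tests,
    # each failed with probability p.
    return 1 - (1 - p) ** n

p_monthly = 0.083  # assumed per-test failure rate (illustrative)
print(f"{cumulative_failure(p_monthly, 8):.0%}")  # 50%
```

Even a modest per-email failure rate compounds quickly, which is why "given enough time, almost everyone falls for a phishing email."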

Thanks to Slashdot reader spatwei for sharing the article.

Open Source

Remember the Companies Making Vital Open Source Contributions (infoworld.com) 22

Matt Asay answered questions from Slashdot readers in 2010 as the then-COO of Canonical. Today he runs developer marketing at Oracle (after holding similar positions at AWS, Adobe, and MongoDB).

And this week Asay contributed an opinion piece to InfoWorld reminding us of open source contributions from companies where "enlightened self-interest underwrites the boring but vital work — CI hardware, security audits, long-term maintenance — that grassroots volunteers struggle to fund." [I]f you look at the Linux 6.15 kernel contributor list (as just one example), the top contributor, as measured by change sets, is Intel... Another example: Take the last year of contributions to Kubernetes. Google (of course), Red Hat, Microsoft, VMware, and AWS all headline the list. Not because it's sexy, but because they make billions of dollars selling Kubernetes services... Some companies (including mine) sell proprietary software, and so it's easy to mentally bucket these vendors with license fees or closed cloud services. That bias makes it easy to ignore empirical contribution data, which indicates open source contributions on a grand scale.

Asay notes Oracle's many contributions to Linux: In the [Linux kernel] 6.1 release cycle, Oracle emerged as the top contributor by lines of code changed across the entire kernel... [I]t's Oracle that patches memory-management structures and shepherds block-device drivers for the Linux we all use. Oracle's kernel work isn't a one-off either. A few releases earlier, the company topped the "core of the kernel" leaderboard in 5.18, and it hasn't slowed down since, helping land the Maple Tree data structure and other performance boosters. Those patches power Oracle Cloud Infrastructure (OCI), of course, but they also speed up Ubuntu on your old ThinkPad. Self-interested contributions? Absolutely. Public benefit? Equally absolute.

This isn't just an Oracle thing. When we widen the lens beyond Oracle, the pattern holds. In 2023, I wrote about Amazon's "quiet open source revolution," showing how AWS was suddenly everywhere in GitHub commit logs despite the company's earlier reticence. (Disclosure: I used to run AWS' open source strategy and marketing team.) Back in 2017, I argued that cloud vendors were open sourcing code as on-ramps to proprietary services rather than end-products. Both observations remain true, but they miss a larger point: Motives aside, the code flows and the community benefits.

If you care about outcomes, the motives don't really matter. Or maybe they do: It's far more sustainable to have companies contributing because it helps them deliver revenue than to contribute out of charity. The former is durable; the latter is not.

There's another practical consideration: scale. "Large vendors wield resources that community projects can't match."

Asay closes by urging readers to "Follow the commits" and "embrace mixed motives... the point isn't sainthood; it's sustainable, shared innovation. Every company (and really every developer) contributes out of some form of self-interest. That's the rule, not the exception. Embrace it." Going forward, we should expect to see even more counterintuitive contributor lists. Generative AI is turbocharging code generation, but someone still has to integrate those patches, write tests, and shepherd them upstream. The companies with the most to lose from brittle infrastructure — cloud providers, database vendors, silicon makers — will foot the bill. If history is a guide, they'll do so quietly.
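Asay's advice to "follow the commits" can be acted on directly. As a rough sketch (not the methodology behind the kernel contributor lists cited above, which use change sets and lines changed), commits in any local git clone can be tallied by contributor email domain as a proxy for corporate affiliation:

```python
# Sketch: tally commits by contributor email domain, a rough proxy for the
# per-company contribution stats the article cites. Works against any local
# git clone (e.g. linux.git); the date below is an arbitrary example.
import subprocess
from collections import Counter

def count_domains(emails):
    """Map a list of author emails to a Counter of email domains."""
    return Counter(e.rsplit("@", 1)[-1] for e in emails)

def contributions_by_domain(repo_path, since="2025-01-01"):
    """Tally commits by contributor email domain in a local git clone."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--format=%ae"],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_domains(out.split())

# Example: print the top ten contributing domains
# for domain, n in contributions_by_domain(".").most_common(10):
#     print(f"{n:6d}  {domain}")
```

Note that email domains undercount corporate contributions, since many engineers commit from personal addresses; the kernel's own statistics map authors to employers by hand.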
Power

Virtual Power Plants: Where Home Batteries are Saving Americans from Blackouts (msn.com) 123

Puerto Rico expects 93 different power outages this summer, reports the Washington Post.

But they also note that "roughly 1 in 10 Puerto Rican homes now have a battery and solar array for backup power" which have also "become a crucial source of backup power for the entire island grid." A network of 69,000 home batteries can generate as much electricity as a small natural gas turbine during an emergency, temporarily covering about 2 percent of the island's energy needs when things go wrong... "It has very, very certainly prevented more widespread outages," said Daniel Haughton, [transmission and distribution planning director for Puerto Rico's grid operator]. "In the instances that we had to [cut power], it was for a much shorter duration: A four-hour outage became a one- or two-hour outage."

Puerto Rico's experience offers a glimpse into the future for the rest of the United States, where batteries are starting to play a big role in keeping the lights on. Authorities in Texas, California and New England have credited home batteries with preventing blackouts during summer energy crunches. As power grids across the country groan under the increasing strain of new data centers, factories and EVs, batteries offer a way for homeowners to protect themselves — and all of their neighbors — from the threat of outages. Batteries have been booming in the U.S. since 2022, when Congress created generous installation tax credits for homeowners and power companies.

Home batteries generally come as an option alongside rooftop solar panels, according to Christopher Rauscher, head of grid services and electrification for Sunrun, a company that installs both. More than 70 percent of the people who hire Sunrun to put up solar panels also get a battery. With the tax credits — and the money saved on rising electricity costs — solar panels and batteries make financial sense for most American homes, according to a study Stanford University scientists published Aug. 1. About 60 percent of homes would save money in the long run with solar panels and batteries...

Those batteries can have broader benefits, too. Utilities pay customers hundreds of dollars a year to sign their batteries up to form "virtual power plants," which send electricity to the grid whenever power plants can't keep up with demand. California's network of home batteries can now add 535 megawatts of electricity in an emergency — about half as much power as a nuclear power plant... [H]omeowners can make thousands of dollars a year lowering their energy bills, selling solar power back to the grid or enrolling their batteries in a virtual power plant, depending on their power company's policies and state regulations. "Over time, you would get the full payback for your system and basically get your backup for free," said Ram Rajagopal, an associate professor of civil and environmental engineering who co-authored the Stanford study.
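The "get your backup for free" payback logic can be sketched with purely hypothetical numbers — the system cost, bill savings, and VPP payments below are placeholders, not figures from the article or the Stanford study:

```python
# Rough payback sketch for a solar-plus-battery system enrolled in a
# virtual power plant. All dollar figures are hypothetical placeholders.

def payback_years(system_cost, tax_credit_rate,
                  annual_bill_savings, annual_vpp_payment):
    """Years until cumulative savings cover the net (post-credit) cost."""
    net_cost = system_cost * (1 - tax_credit_rate)
    annual_return = annual_bill_savings + annual_vpp_payment
    return net_cost / annual_return

# e.g. a $25,000 system with a 30% installation credit, $1,800/yr in
# bill savings, and $500/yr in VPP payments:
# payback_years(25_000, 0.30, 1_800, 500)  ->  about 7.6 years
```

A real estimate would also account for financing costs, battery degradation, and electricity price escalation, all of which shift the break-even point.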

AI

OpenAI's GPT-5 Sees a Big Surge in Enterprise Use (cnbc.com) 33

ChatGPT now has nearly 700 million weekly users, OpenAI says. But after launching GPT-5 last week, critics bashed its less-intuitive feel, reports CNBC, "ultimately leading the company to restore its legacy GPT-4o to paying chatbot customers."

Yet GPT-5 was always about cracking the enterprise market "where rival Anthropic has enjoyed a head start," they write. And one week in, "startups like Cursor, Vercel, and Factory say they've already made GPT-5 the default model in certain key products and tools, touting its faster setup, better results on complex tasks, and a lower price." Some companies said GPT-5 now matches or beats Claude on code and interface design, a space Anthropic once dominated. Box, another enterprise customer, has been testing GPT-5 on long, logic-heavy documents. CEO Aaron Levie told CNBC the model is a "breakthrough," saying it performs with a level of reasoning that prior systems couldn't match...

Still, the economics are brutal. The models are expensive to run, and both OpenAI and Anthropic are spending big to lock in customers, with OpenAI on track to burn $8 billion this year. That's part of why both Anthropic and OpenAI are courting new capital... GPT-5 is significantly cheaper than Anthropic's top-end Claude Opus 4.1 — by a factor of seven and a half, in some cases — but OpenAI is spending huge amounts on infrastructure to sustain that edge. For OpenAI, it's a push to win customers now, get them locked in and build a real business on the back of that loyalty...

GPT-5 API usage has surged since launch, with the model now processing more than twice as much coding and agent-building work, and reasoning use cases jumping more than eightfold, said a person familiar with the matter who requested anonymity in order to discuss company data. Enterprise demand is rising sharply, particularly for planning and multi-step reasoning tasks.

GPT-5's traction over the past week shows how quickly loyalties can shift when performance and price tip in OpenAI's favor. AI-powered coding platform Qodo recently tested GPT-5 against top-tier models including Gemini 2.5, Claude Sonnet 4, and Grok 4, and said in a blog post that it led in catching coding mistakes. The model was often the only one to catch critical issues, such as security bugs or broken code, suggesting clean, focused fixes and skipping over code that didn't need changing, the company said. Weaknesses included occasional false positives and some redundancy.

JetBrains has also adopted GPT-5 as the default for its AI Assistant and for its new no-code tool Kineto, according to the article.

But Anthropic is still enjoying a great year too, with its annualized revenue growing 17x year-over-year (according to "a person familiar with the matter who requested anonymity").
Medicine

Aging Can Spread Through Your Body Via a Single Protein, Study Finds (phys.org) 18

alternative_right shares a report from Phys.org: Take note of the name: ReHMGB1. A new study pinpoints this protein as being able to spread the wear and tear that comes with time as it quietly travels through the bloodstream. This adds significantly to our understanding of aging. The researchers were able to identify ReHMGB1 as a critical messenger passing on the senescence signal by analyzing different types of human cells grown in the lab and conducting a variety of tests on mice. When ReHMGB1 transmission was blocked in mice with muscle injuries, muscle regeneration happened more quickly, while the animals showed improved physical performance, fewer signs of cellular aging, and reduced systemic inflammation. The findings have been published in the journal Metabolism.
Transportation

Global EV Sales Up 27% In 2025 (cleantechnica.com) 144

An anonymous reader quotes a report from CleanTechnica: In a sharp rebuke to the anti-electrification agenda in the US, global EV sales are up 27% over last year, with some legacy automakers -- but not all -- indicating the potential for a successful transition to electric mobility. CleanTechnica has spilled much ink on the pace of plug-in hybrid and full EV adoption, and the latest report from the UK firm Rho Motion (a branch of the price reporting agency Benchmark Mineral Intelligence) adds some fresh insights.

Covering the first seven months of 2025, earlier today Rho Motion totaled up more than 10.7 million EVs sold for a "robust" 27% increase over the same period last year, with China leading the pack by a wide margin. Europe also contributed to the overall robustness: Germany and the UK racked up impressive gains, with Italy also turning in a mentionable performance. "The European EV market has grown by 30% year-to-date, with strong momentum in both battery electric vehicles (BEVs) and plug-in hybrids (PHEVs), up 30% and 32% respectively," Rho Motion summarized.

"In contrast, North America's growth has been muted so far in 2025, with the US facing policy headwinds and Canada seeing a slowdown," Rho Motion Data Manager Charles Lester observed. "We expect a short-term lift in US demand ahead of the IRA consumer tax credit deadline in September, followed by a likely dip," Lester added. That short-term lift won't help North America catch up to Europe [...]

Rho Motion's EV sales snapshot shows the recent gains:

Global: 10.7 million, +27%
China: 6.5 million, +29%
Europe: 2.3 million, +30%
North America: 1.0 million, +2%
Rest of World: 0.9 million, +42%
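The regional figures in the snapshot can be cross-checked against the global total in a few lines (the numbers are taken directly from Rho Motion's figures above):

```python
# Sanity-check that Rho Motion's regional EV sales (millions of units,
# Jan-Jul 2025) sum to the reported global total of 10.7 million.
regions = {"China": 6.5, "Europe": 2.3, "North America": 1.0, "Rest of World": 0.9}
total = sum(regions.values())
assert abs(total - 10.7) < 0.05  # matches the reported global figure

# Implied Jan-Jul 2024 baseline from the reported +27% growth rate:
baseline = 10.7 / 1.27
print(f"{total:.1f}M sold, vs ~{baseline:.1f}M in the same period last year")
```

The rounded regional figures happen to sum exactly to the global total here; with one-decimal rounding that won't always hold, so the tolerance in the check is deliberate.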

Slashdot Top Deals