Social Networks

Two-Week Social Media 'Detox' Erases a Decade of Age-Related Decline, Study Finds (yahoo.com) 12

Critics say social media is engineered to be as addictive as tobacco or gambling, writes the Washington Post — while adding that "the science has been moving in parallel with the court's recognition." A growing body of research links heavy social media use not only to declines in mental health but to measurable cognitive effects — on attention, memory and focus — that in some studies resemble accelerated aging. Science also suggests we have more control than we realize when it comes to reversing this damage, and the solution is surprisingly simple: Take a break... "Digital detoxes" can sound like a fad. But in one of the largest studies to date, published in PNAS Nexus and involving 467 participants with an average age of 32, even a short time away produced striking results — effectively erasing a decade of age-related cognitive decline.

For 14 days, participants used a commercially available app, Freedom, to block internet access on their phones. They were still allowed calls and text messages, essentially turning a smartphone into a dumb phone. Their time online decreased from 314 minutes to 161 minutes, and by the end of the period the participants showed improvements in sustained attention, mental health, and self-reported well-being. The improvement in sustained attention was about the same magnitude as 10 years of age-related decline, the researchers noted, and the effect of the intervention on depression symptoms was larger than that of antidepressants and similar to that of cognitive behavioral therapy.

But two things were even more mind-blowing... Even those people who cheated and broke the rules after a few days seemed to have positive effects from the break; and in follow-up reports after the two weeks, many people reported the positive effects lingered. "So you don't have to necessarily restrict yourself forever. Even taking a partial digital detox, even for a few days, seems to work," Kushlev said.

The article also notes a November Harvard study published in JAMA Network Open, involving nearly 400 people, which "found that even a short break can make a measurable difference: After just one week of reduced smartphone use, participants reported drops in anxiety (16.1 percent), depression (24.8 percent) and insomnia (14.5 percent)..."

"Other experiments point in the same direction — whether decreasing social media use by an hour a day for one week or stepping away from just Facebook and Instagram."
Firefox

Firefox vs. Chrome: Which Performs Better on a Linux Laptop? (phoronix.com) 33

Phoronix staged "a showdown" between Firefox and Chrome, testing them both on an Intel Panther Lake laptop running Ubuntu 26.04. JetStream 3.0 was announced at the end of March as the latest major web browser benchmark. This updated version of JetStream is focused on intensive portions of modern JavaScript and WebAssembly web applications... Google Chrome 147 came in at 1.47x the performance of Mozilla Firefox 149. A very strong showing for Google's web browser, and not much of a surprise: Google engineers have been heavily involved in JetStream 3 as part of its open governance model. Chrome debuts very well on JetStream 3, and it will be interesting to see what optimizations Mozilla engineers pursue in the months ahead...

In the recent Speedometer 3.1 benchmark update that is focused on browser responsiveness, Chrome was at 1.24x the performance of Firefox... Firefox picked up wins in the MotionMark and StyleBench browser benchmarks. Google Chrome meanwhile continued to dominate in the JavaScript heavy benchmarks... In some of the WebAssembly benchmarks, there was at least some healthy competition between Firefox and Chrome on Linux.

Across the web browser benchmarks, the Core Ultra X7 358H power consumption came in at 11.44 Watts on average for Chrome and 11.74 Watts for Firefox. Quite close. The slight CPU power difference may come down to CPU usage, with Chrome slightly lower at 8.13% on average versus 8.35% for Firefox. Chrome also came in at slightly lower memory consumption across all the benchmarks, with total memory usage averaging 4.67GB versus 4.83GB for Firefox.

Transportation

AI Is Coming for Car Salesmen 95

An anonymous reader quotes a report from The Drive: An auto dealer software company is pitching AI-powered kiosks designed to replace car salesmen on showroom floors. Automotive News says the industry is "skeptical." But be honest -- would you really rather deal with the average car lot shark than a computer?

Epikar, a South Korean company that cooks up digital management solutions for car dealers, has named its new AI invention the Pikar Genie. The idea is that customers can talk to this device, ask it product questions, and basically do everything you'd do with a car salesman except for actually closing the deal and signing paperwork. Renault, BMW, and Volvo are already using some Epikar products at South Korean dealerships, but this new customer-facing AI product is still in its infancy.

AN reported that "Renault assigns three salespeople to its Seoul showroom enhanced with Epikar automation compared with six for other Renault showrooms in South Korea," according to Epikar CEO Bosuk Han. The company's now looking to expand into America and is apparently already testing its products at at least one dealership stateside.
Car-dealer consultant Fleming Ford (Director of Strategic Growth at NCM Associates) said U.S. dealerships "aren't ready for fully automated showrooms."

"The showroom isn't just where you buy a car," Automotive News quoted him saying. "It's where you decide who to trust to help you to choose the right car."
Windows

Microsoft Pulls Then Re-Issues Windows 11 Preview Update. Also Begins Force-Updating Windows 11 (techrepublic.com) 78

Nine days ago Microsoft released a non-security "preview" update for Windows 11 — not mandatory for the average Windows user, notes ZDNet, "but rather as optional, more for IT admins and power users who want to test them."

TechRepublic adds that the update "was to bring 'production-ready improvements' and generally ensure system stability by optimizing different Windows services." So it's ironic that some (but not all) users reported instead that the update "blocks users at the door, refusing to install or crashing midway through the process."

"It apparently impacted enough people to force Microsoft to take action," writes ZDNet. "Microsoft paused and then pulled the update," and then Tuesday released a new update "designed to replace the glitchy one. This one includes all the new features and improvements from the previous preview update, but also fixes the installation issues that clobbered that update."

Meanwhile, as Windows 11 version 24H2 approaches its end of life this October, Microsoft is now force-updating users to the latest version, reports BleepingComputer: "The machine learning-based intelligent rollout has expanded to all devices running Home and Pro editions of Windows 11, version 24H2 that are not managed by IT departments," Microsoft said in a Monday update to the Windows release health dashboard... "No action is required, and you can choose when to restart your device or postpone the update."
Neowin reports: The good news is that the update from version 24H2 to 25H2 is a minor enablement package, as the two operating systems share the same codebase. As such, the update won't take long, and you should not encounter any disruptions, compatibility issues, or previously unseen bugs... Microsoft recently promised to implement big changes in how Windows Update works, including the ability to postpone updates for as long as you want. However, Microsoft has yet to clarify if that includes staying on a release beyond its support period.

Thanks to long-time Slashdot reader Ol Olsoc for sharing the news.
AI

Anthropic Announces Claude Subscribers Must Now Pay Extra to Use OpenClaw (venturebeat.com) 46

Anthropic's making a big and sudden change — and connecting its Claude AI to third-party agentic tools "is about to get a lot more expensive," writes the Verge: Beginning April 4th at 3PM ET, users will "no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw," according to an email sent to users on Friday evening. Instead, if users want to use OpenClaw with Claude, they'll have to use a "pay-as-you-go option" that will be billed separately from their Claude subscription.
Anthropic's announcement added these extra usage bundles are "now available at a discount." Users can also try Anthropic's API, notes VentureBeat, "which charges for every token of usage rather than allowing for open-ended usage up to certain limits, as the Pro and Max plans have allowed so far."

The technical reality, according to Anthropic, is that its first-party tools like Claude Code, its AI vibe coding harness, and Claude Cowork, its business app interfacing and control tool, are built to maximize "prompt cache hit rates" — reusing previously processed text to save on compute. Third-party harnesses like OpenClaw often bypass these efficiencies... [Claude Code creator Boris Cherny explained on X that "I did put up a few PRs to improve prompt cache hit rate for OpenClaw in particular, which should help for folks using it with Claude via API/overages."]

Growth marketer Aakash Gupta observed on X that the "all-you-can-eat buffet just closed," noting that a single OpenClaw agent running for one day could burn $1,000 to $5,000 in API costs. "Anthropic was eating that difference on every user who routed through a third-party harness," Gupta wrote. "That's the pace of a company watching its margin evaporate in real time."
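As a back-of-envelope illustration of why per-token billing without an effective prompt cache gets expensive, here is a toy cost model. All prices, token counts, and the cache-discount factor are hypothetical, not Anthropic's actual rate sheet:

```python
def daily_api_cost(tokens_in, tokens_out, price_in_per_mtok, price_out_per_mtok,
                   cache_hit_rate=0.0, cached_discount=0.9):
    """Rough per-token billing model. Cached input tokens are assumed to be
    billed at a steep discount (cached_discount = fraction of the price saved).
    Illustrative numbers only."""
    cached = tokens_in * cache_hit_rate
    uncached = tokens_in - cached
    cost_in = (uncached + cached * (1 - cached_discount)) / 1e6 * price_in_per_mtok
    cost_out = tokens_out / 1e6 * price_out_per_mtok
    return cost_in + cost_out

# A long-running agent re-sending a large context all day, with and without
# an effective prompt cache (hypothetical figures):
heavy = daily_api_cost(500e6, 5e6, price_in_per_mtok=3.0,
                       price_out_per_mtok=15.0, cache_hit_rate=0.0)
cached = daily_api_cost(500e6, 5e6, price_in_per_mtok=3.0,
                        price_out_per_mtok=15.0, cache_hit_rate=0.9)
print(f"${heavy:,.0f}/day uncached vs ${cached:,.0f}/day with 90% cache hits")
```

Under these made-up numbers the uncached agent costs roughly four times as much per day, which is consistent with the article's point that harnesses bypassing the cache land in the $1,000-plus daily range.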

However, Peter Steinberger, the creator of OpenClaw who was recently hired by OpenAI, took a more skeptical view of the "capacity" argument. "Funny how timings match up," Steinberger posted on X. "First they copy some popular features into their closed harness, then they lock out open source." Indeed, Anthropic recently added some of the same capabilities that helped OpenClaw catch on — such as the ability to message agents through external services like Discord and Telegram — to Claude Code...

User @ashen_one, founder of Telaga Charity, voiced a concern likely shared by other small-scale builders: "If I switch both [OpenClaw instances] to an API key or the extra usage you're recommending here, it's going to be far too expensive to make it worth using. I'll probably have to switch over to a different model at this point."

"I know it sucks," Cherny replied. "Fundamentally engineering is about tradeoffs, and one of the things we do to serve a lot of customers is optimize the way subscriptions work to serve as many people as possible with the best mode..." OpenAI appears to be positioning itself as a more "harness-friendly" alternative, potentially using this moment as a customer acquisition channel for disgruntled Claude power users.

By restricting subscription limits to their own "closed harness," Anthropic is asserting control over the UI/UX layer. This allows them to collect telemetry and manage rate limits more granularly, but it risks alienating the power-user community that built the "agentic" ecosystem in the first place. Anthropic's decision is a cold calculation of margins versus growth. As Cherny noted, "Capacity is a resource we manage thoughtfully." In the 2026 AI landscape, the era of subsidized, unlimited compute for third-party automation is over. For the average user on Claude.ai, the experience remains unchanged; for the power users running autonomous offices, the bell has tolled.

Transportation

Colorado's New Speed Camera System Makes Waze Nearly Useless (motor1.com) 199

Colorado is rolling out an average-speed camera system that tracks vehicles across multiple points instead of catching them at a single camera, making it much harder for drivers to dodge tickets with apps like Waze and Radarbot. Motor1 reports: The state's new automated vehicle identification systems (AVIS) use several cameras to calculate your average speed between them, and if it is 10 miles per hour or more over the limit, you get a ticket. No longer will you be able to slow down as you approach a camera and speed back up after passing it, not that you should be speeding on public roads in the first place.
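The averaging math behind that is simple. A minimal sketch of the checkpoint calculation (the 10-mph threshold comes from the article; the distance, times, and speed limit are illustrative):

```python
from datetime import datetime, timedelta

def average_speed_mph(distance_miles, t_start, t_end):
    """Average speed between two camera checkpoints."""
    hours = (t_end - t_start).total_seconds() / 3600
    return distance_miles / hours

def is_violation(avg_mph, limit_mph, threshold_mph=10):
    """Ticketed if the average speed is 10+ mph over the limit."""
    return avg_mph >= limit_mph + threshold_mph

# 5 miles between cameras, covered in 4 minutes -> 75 mph average
t0 = datetime(2026, 4, 2, 9, 0, 0)
t1 = t0 + timedelta(minutes=4)
avg = average_speed_mph(5.0, t0, t1)
print(round(avg), is_violation(avg, limit_mph=65))  # 75 True
```

Slowing down only at each camera does nothing here: the elapsed time between checkpoints already encodes the speeding in between.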

Colorado began deploying this new camera system after legislators changed the law in 2023, allowing AVIS for law enforcement use. The systems, installed on various roads and highways throughout the state, first began issuing warnings, but police began issuing tickets late last year.

The most recent section of road to fall under surveillance is a stretch of I-25 north of Denver, which brought the state's growing panopticon to our attention. It began issuing tickets on April 2. The Colorado Department of Transportation installed the cameras along a construction zone. The fine for exceeding the speed limit is $75, with no license points, and it is issued to the vehicle's owner, regardless of who is driving.

United States

Rapid Snow Melt-Off In American West Stuns Scientists (theguardian.com) 112

Scientists say extreme March heat caused an unusually rapid collapse of snowpack across the American West that's leaving major basins at record or near-record lows. "This year is on a whole other level," said Dr Russ Schumacher, a Colorado State University climatologist. "Seeing this year so far below any of the other years we have data for is very concerning." The Guardian reports: [...] The issue is extremely widespread. Data from a branch of the US Department of Agriculture (USDA), which logs averages based on levels between 1991 and 2020, shows states across the south-west and intermountain west with eye-popping lows. The Great Basin had only 16% of average on Monday and the lower Colorado region, which includes most of Arizona and parts of Nevada, was at 10%. The Rio Grande, which covers parts of New Mexico, Texas and Colorado, was at 8%. "This year has the potential of being way worse than any of the years we have analogues for in the past," Schumacher said.

Even with near-normal precipitation across most of the west, every major river basin across the region was grappling with snow drought when March began, according to federal analysts. Roughly 91% of stations reported below-median snow water equivalent, according to the last federal snow drought update compiled on March 8. Water managers and climate experts had been hopeful for a March miracle -- a strong cold storm that could set the region on the right track. Instead, a blistering heatwave unlike any recorded for this time of year baked the region and spurred a rapid melt-off. "March is often a big month for snowstorms," Schumacher said. "Instead of getting snow we would normally expect we got this unprecedented, way-off-the-scale warmth."

More than 1,500 monthly high temperature records were broken in March and hundreds more tied. The event was "likely among the most statistically anomalous extreme heat events ever observed in the American south-west," climate scientist Daniel Swain said in an analysis posted this week. "Beyond the conspicuous 'weirdness' of it all," Swain added, "the most consequential impact of our record-shattering March heat will likely be the decimation of the water year 2025-26 snowpack across nearly all of the American west." Calling the toll left by the heat "nothing short of shocking," Swain noted that California was tied for its worst mountain snowpack value on record. While the highest elevations are still coated in white, "lower slopes are now completely bare nearly statewide."

AI

AI Data Centers Can Warm Surrounding Areas By Up To 9.1C 71

An anonymous reader quotes a report from New Scientist: Andrea Marinoni at the University of Cambridge, UK, and his colleagues saw that the amount of energy needed to run a data center had been steadily increasing of late and was likely to "explode" in the coming years, so they wanted to quantify the impact. The researchers took satellite measurements of land surface temperatures over the past 20 years and cross-referenced them against the geographical coordinates of more than 8400 AI data centers. Recognizing that surface temperature could be affected by other factors, the researchers chose to focus their investigation on data centers located away from densely populated areas.

They discovered that land surface temperatures increased by an average of 2C (3.6F) in the months after an AI data center started operations. In the most extreme cases, the increase in temperature was 9.1C (16.4F). The effect wasn't limited to the immediate surroundings of the data centers: the team found increased temperatures up to 10 kilometers away. Seven kilometers away, there was only a 30 percent reduction in the intensity. "The results we had were quite surprising," says Marinoni. "This could become a huge problem."
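The core comparison the researchers describe, mean land-surface temperature after a data center opens versus before, can be sketched in a few lines. This is a toy version with synthetic data; the real analysis presumably controls for seasonality and other land-use changes:

```python
from statistics import mean

def warming_signal(monthly_lst_c, opening_index, window=12):
    """Difference (deg C) between mean land-surface temperature in the
    `window` months after a data center opens and the `window` months
    before. Illustrative only."""
    before = monthly_lst_c[max(0, opening_index - window):opening_index]
    after = monthly_lst_c[opening_index:opening_index + window]
    return mean(after) - mean(before)

# Synthetic series: a year around 15 C, then a year around 17 C
series = [15.0] * 12 + [17.0] * 12
print(warming_signal(series, opening_index=12))  # 2.0
```

Averaged over many sites, a positive difference like this 2 C example is the kind of signal the study reports.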

Using population data, the researchers estimate that more than 340 million people live within 10 kilometers of data centers, and so live in places that are warmer than they would be if the data center hadn't been built there. Marinoni says that areas including the Bajio region in Mexico and the Aragon province in Spain saw a 2C (3.6F) temperature increase between 2004 and 2024 that couldn't otherwise be explained.
University of Bristol researcher Chris Preist said the findings may be more complicated than they look. "It would be worth doing follow-up research to understand to what extent it's the heat generated from computation versus the heat generated from the building itself," he says. For example, the building being heated by sunlight may be part of the effect.

The findings of the study, which has not yet been peer-reviewed, can be found on arXiv.
Space

Jupiter's Lightning May Have the Force of Nuclear Weapons (science.org) 17

How powerful is Jupiter's lightning? Thick clouds cover the view, notes Science magazine. But using an instrument on NASA's Juno spacecraft (orbiting Jupiter for the past decade), researchers determined Jupiter's lightning bolts are 100 to 10,000 times more energetic than Earth's: A single bolt of lightning on Earth releases about 1 billion joules of energy. That means the most extreme bolts of jovian lightning carry 10 trillion joules of energy, equivalent to 2400 tons of TNT, or one-sixth the energy of the atomic bomb dropped on Hiroshima, Japan. Based on the rates of flashes seen by Juno, storms on this tempestuous world can unleash the force of multiple nuclear weapons every minute...
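Those comparisons are easy to verify. A quick check of the arithmetic, using the standard 4.184 GJ per ton of TNT and the commonly cited ~15 kiloton Hiroshima yield (neither figure is stated in the article):

```python
EARTH_BOLT_J = 1e9          # typical terrestrial lightning bolt, ~1 billion J
TNT_TON_J = 4.184e9         # joules per ton of TNT (standard convention)
HIROSHIMA_TONS = 15_000     # ~15 kilotons, the commonly cited yield

jovian_bolt_j = EARTH_BOLT_J * 10_000            # most extreme case: 1e13 J
tnt_tons = jovian_bolt_j / TNT_TON_J             # ~2390 tons of TNT
hiroshima_fraction = tnt_tons / HIROSHIMA_TONS   # ~0.16, about one-sixth
print(round(tnt_tons), round(hiroshima_fraction, 2))  # 2390 0.16
```

So the article's "2400 tons of TNT" and "one-sixth" figures are consistent with each other.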

The four storms Juno studied were monstrous, says Michael Wong, a planetary scientist at the University of California, Berkeley and one of the study's authors. There were three flashes per second on average, often emerging from the hearts of storms that are 3000 kilometers across, longer than the distance from New York City to Denver.

The researchers used the Hubble Space Telescope (and photographs from Juno's camera) to track Jupiter's storms with such precision that their radiometer could then pick out individual lightning flashes, according to the article. "It's just a massive ball of gas. It makes sense that there's very energetic lightning happening," says Daniel Mitchard, a lightning physicist at Cardiff University who wasn't involved with the new study. But confirming such suspicions "is exciting," he says, because lightning plays an important role in forging complex chemistry — including the sort that primordial life is built on.
Thanks to Slashdot reader sciencehabit for sharing the article.
Open Source

SystemD Contributor Harassed Over Optional Age Verification Field, Suggests Installer-Level Disabling (itsfoss.com) 193

It's FOSS interviewed a software engineer whose long-running open source contributions include Python code for the Arch Linux installer and maintaining packages for NixOS. But "a recent change he made to systemd has pushed him into the spotlight" after he'd added the optional birthDate field for systemd's user database: Critics saw it not merely as a technical addition, but as a symbolic capitulation to government overreach. A crack in the philosophical foundation of freedom that Linux is built on. What followed went far beyond civil disagreement. Dylan revealed that he faced harassment, doxxing, death threats, and a flood of hate mail. He was forced to disable issues and pull request tabs across his GitHub repositories...


Q: Should FOSS projects adapt to laws they fundamentally disagree with? Because these kinds of laws are certainly in conflict with what a lot of Linux users believe in.

A: Unfortunately, in a lot of cases, the answer is yes — at least for any distribution with corporate backing. The small independent distributions have much more flexibility to refuse as a protest.

If we ignore regulations entirely, we risk Linux being something that companies are not willing to contribute to, and Linux may be shipped on less hardware. I'm talking about things like Valve and System76 (despite them very vocally hating these laws). That does not help us; it just lowers the quality of software contributions due to less investment in the platform and makes Linux less accessible to the average person. We need Linux and other free operating systems to remain a viable alternative to closed systems.

Q: Do you think regulations like these will reshape desktop Linux in the next 5-10 years where we might have "compliant Linux" and "Freedom-first Linux"?

A: Unfortunately, yes, to some degree this is likely. I imagine the split will be mostly along the lines of independent distributions and those with corporate backing.

We're already seeing it in terms of which distributions plan on implementing some sort of age verification and which ones don't, and that sucks. I'd rather nobody have to deal with this mess at all, but this is the reality of things now. As I said in the previous response, the corporate-backed distributions really have no choice in the matter. Companies are notoriously risk-averse, but something like Artix or Devuan? Those are small and independent enough that the individual maintainers may be willing to take on more risk.

I was actually thinking about what this would look like if we added it to [Linux system installer] Calamares and chatting about that with the maintainers before that thread got brigaded by bad actors posting personal information and throwing around insults. I completely support the freedom for the distro maintainers to choose their risk tolerance. If the distribution is based out of Ireland or something (like Linux Mint) without these silly laws in the jurisdiction the developer operates in, I think that we should leave it up to them to make a choice here.

He thinks the installer should have a date picker with a flag to disable it, and "We can even default it to off, and corporate distributions using Calamares or those not willing to take the risk could flip it on if they need to. That way if maintainers of the distributions do not wish to collect the birth date, they won't have to, and no forking is required to patch it out."
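The proposed shape of that flag is straightforward. A hypothetical sketch, not actual Calamares or systemd code, of an installer module that only collects the optional field when a distribution opts in:

```python
# Hypothetical installer-module sketch: the birth-date prompt is gated
# behind a flag that defaults to off, so distributions that don't want
# to collect it never show the field and never store it.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class UserPageConfig:
    collect_birth_date: bool = False   # proposed default: disabled

def build_user_record(username: str, config: UserPageConfig,
                      birth_date: Optional[date] = None) -> dict:
    record = {"userName": username}
    if config.collect_birth_date and birth_date is not None:
        # systemd user records can carry an optional birthDate field
        record["birthDate"] = birth_date.isoformat()
    return record

print(build_user_record("alice", UserPageConfig()))
# {'userName': 'alice'}
print(build_user_record("alice", UserPageConfig(collect_birth_date=True),
                        date(2000, 1, 2)))
# {'userName': 'alice', 'birthDate': '2000-01-02'}
```

With the default off, distributions that never flip the flag produce user records with no birth date at all, so no downstream fork is needed to patch the field out.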
Desktops (Apple)

Windows PCs Crash Three Times As Often As Macs, Report Says (techspot.com) 186

A workplace-device study says Windows PCs crash significantly more often than Macs, lag further behind on patching and encryption in some sectors, and are typically replaced sooner. TechSpot reports: Omnissa's 2026 State of Digital Workspace report outlines the IT challenges that various organizations face from the growing use of AI and the heterogeneous deployment of enterprise devices. The relative instability of Windows and Android is a recurring theme throughout the report. The company gathered telemetry from clients located across the globe in retail, healthcare, finance, education, government, and other sectors throughout 2025. The data suggests that IT administrators face frustrating security gaps due to inconsistent patching across a diverse mosaic of devices and operating systems.

Employee workflow disruption, often due to software issues, is one area of concern. The report found that Windows devices were forced to shut down 3.1 times more often than Macs. Windows programs also froze 7.5 times more often than macOS apps and needed to be restarted more than twice as often. Certain industries were also alarmingly lax in securing Windows and Android devices. More than half of Windows and Android devices in healthcare and pharma were five major operating system updates behind, likely leaving them more vulnerable to errors and malware. More than half of the desktops and mobile devices used for education were also unencrypted, putting students' privacy at risk.

Macs also last longer, being replaced every five years on average, compared to every three years for Windows PCs. Despite a recent backlash against Windows, driven by a push for digital sovereignty in countries such as Germany, Windows use on government devices actually doubled last year. Meanwhile, Macs using Apple's M-series chips showcase a significant thermal advantage, with an average temperature of 40.1 degrees Celsius, while Intel processors run at 65.2 degrees.

AI

AI Economy Is a 'Ponzi Scheme,' Says AI Doc Director 58

An anonymous reader quotes a report from Vanity Fair: Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you're even slightly interested in what's going on with AI, it's required viewing: The film touches on all aspects of the technology, from how it's currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user -- the point at which machines become autonomous, like Skynet in the Terminator franchise. [...]

[Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google's AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. "Have you seen that guy speak? He's like a lizard man," Roher says regarding Zuckerberg. "Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while," adds [codirector Charlie Tyrell]. Altman, arguably AI's greatest mascot, is prominently featured in the documentary. But Roher wasn't buying it. "That guy doesn't know what genuine means," he says. "Every single thing he says and does is calculated. He is a machine. He's like AI, and it's in the service of growth, growth, growth. You can be disingenuous and media savvy." [...]

How, exactly, is Roher an apocaloptimist? "We are preaching a worldview," he says, "in a world that's asking you to either see this as the apocalypse or embrace it with this unbridled optimism." He and his film are taking a stance that rests between those two poles. "It's both at the same time. We have to try and embrace a middle ground so this technology doesn't consume us, so we can stay in the driver's seat," says Roher -- meaning, it's up to all of us to chart the course. "You have to speak up," says Tyrell. "Things like AI should disclose themselves. If your doctor's office is using an AI bot, you have to say, I don't like that." The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. [...]

Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge valuations of these companies -- and Roher gets going yet again. "This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we've seen," he says. "I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It's fucking crazy." [...] "These people are prospectors, and they are going up to the Yukon because it's the gold rush."
Power

Sodium-Ion Battery Tested for Grid-Scale Storage in Wisconsin (electrek.co) 135

"A new type of battery storage is about to be deployed on the Midwestern grid for the first time," reports Electrek: Sodium-ion battery storage manufacturer Peak Energy and global energy company RWE Americas will pilot a passively cooled sodium-ion battery system in eastern Wisconsin on the Midcontinent Independent System Operator network — the first sodium-ion deployment on that grid. Peak Energy says its technology is specifically designed for grid-scale storage and leverages sodium-ion chemistry's inherent stability. Unlike many lithium-ion systems, sodium-ion batteries don't require active cooling and can operate over a wide temperature range without losing performance.

That simpler design could make a meaningful dent in the cost of storing electricity. According to Peak Energy, its system cuts the lifetime cost of stored energy by an average of $70 per kilowatt-hour. That's roughly half the total cost of a typical battery system today. The company says it achieves those savings by removing energy-hungry cooling systems, eliminating routine maintenance requirements, and reducing the need to overbuild storage capacity to account for battery degradation over time...
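The implied baseline is easy to back out from those two figures. A back-of-envelope reading of the claim (derived from the article's numbers, not figures published by Peak Energy):

```python
savings_per_kwh = 70.0    # claimed lifetime-cost reduction, $/kWh
typical_fraction = 0.5    # "roughly half the total cost of a typical system"

# If $70/kWh is half the typical lifetime cost, the baseline is ~$140/kWh,
# and the implied sodium-ion lifetime cost is ~$70/kWh.
typical_cost = savings_per_kwh / typical_fraction
sodium_cost = typical_cost - savings_per_kwh
print(typical_cost, sodium_cost)  # 140.0 70.0
```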

If the Wisconsin pilot proves successful, it could open the door to wider adoption of sodium-ion batteries for large-scale energy storage across the US.

AI

Will AI Bring 'the End of Computer Programming As We Know It'? (nytimes.com) 150

Long-time tech journalist Clive Thompson interviewed over 70 software developers at Google, Amazon, Microsoft and start-ups for a new article on AI-assisted programming. Its title?

"Coding After Coders: The End of Computer Programming as We Know It."

Published in the prestigious New York Times Magazine, the article even cites long-time programming guru Kent Beck saying LLMs got him going again and he's now finishing more projects than ever, calling AI's unpredictability "addictive, in a slot-machine way."

In fact, the article concludes "many Silicon Valley programmers are now barely programming. Instead, what they're doing is deeply, deeply weird..." Brennan-Burke chimed in: "You remember seeing the research that showed the more rude you were to models, the better they performed?" They chuckled. Computer programming has been through many changes in its 80-year history. But this may be the strangest one yet: It is now becoming a conversation, a back-and-forth talk fest between software developers and their bots... For decades, being a software developer meant mastering coding languages, but now a language technology itself is upending the very nature of the job... A coder is now more like an architect than a construction worker... Several programmers told me they felt a bit like Steve Jobs, who famously had his staffers churn out prototypes so he could handle lots of them and settle on what felt right. The work of a developer is now more judging than creating...

If you want to put a number on how much more productive A.I. is making the programmers at mature tech firms like Google, it's 10 percent, Sundar Pichai, Google's chief executive, has said. That's the bump that Google has seen in "engineering velocity" — how much faster its more than 100,000 software developers are able to work. And that 10 percent is the average inside the company, Ryan Salva, a senior director of product at the company, told me. Some work, like writing a simple test, is now tens of times faster. Major changes are slower. At the start-ups whose founders I spoke to, closer to 100 percent of their code is being written by A.I., but at Google it is not quite 50 percent.

The article cites a senior principal engineer at Amazon who says "Things I've always wanted to do now only take a six-minute conversation and a 'Go do that.'" Another programmer described their army of Claude agents as "an alien intelligence that we're learning to work with." Although "A.I. being A.I., things occasionally go haywire," the article acknowledges — and after relying on AI, "Some new developers told me they can feel their skills weakening."

Still, "I was surprised by how many software developers told me they were happy to no longer write code by hand. Most said they still feel the jolt of success, even with A.I. writing the lines... " A few programmers did say that they lamented the demise of hand-crafting their work. "I believe that it can be fun and fulfilling and engaging, and having the computer do it for you strips you of that," one Apple engineer told me. (He asked to remain unnamed so he wouldn't get in trouble for criticizing Apple's embrace of A.I.) He went on: "I didn't do it to make a lot of money and to excel in the career ladder. I did it because it's my passion. I don't want to outsource that passion"... But only a few people at Apple openly share his dimmer views, he said.

The coders who still actively avoid A.I. may be in the minority, but their opposition is intense. Some dislike how much energy it takes to train and deploy the models, and others object to how they were trained by tech firms pillaging copyrighted works. There is suspicion that the sheer speed of A.I.'s output means firms will wind up with mountains of flabbily written code that won't perform well. The tech bosses might use agents as a cudgel: Don't get uppity at work — we could replace you with a bot. And critics think it is a terrible idea for developers to become reliant on A.I. produced by a small coterie of tech giants.

Thomas Ptacek, a Chicago-based developer and a co-founder of the tech firm Fly.io... thinks the refuseniks are deluding themselves when they claim that A.I. doesn't work well and that it can't work well... The holdouts are in the minority, and "you can watch the five stages of grief playing out."

"How things will shake out for professional coders themselves isn't yet clear," the article concludes. "But their mix of exhilaration and anxiety may be a preview for workers in other fields... Abstraction may be coming for us all."
AI

Apple Might Use Google Servers To Store Data For Its Upgraded AI Siri 21

Apple has reportedly asked Google to look into "setting up servers" for a Gemini-powered upgrade to Siri that meets Apple's privacy standards. The Verge reports: Apple had already announced in January that Google's Gemini AI models would help power the upgraded version of Siri it delayed last year, but The Information's report indicates Apple might lean even more on Google so it can catch up in AI.

The original partnership announcement said that "the next generation of Apple Foundation Models will be based on Google's Gemini models and cloud technology," and that the models would "help power future Apple Intelligence features," including "a more personalized Siri." While the announcement noted that Apple Intelligence would "continue to run on Apple devices and Private Cloud Compute," it didn't specify if the new Siri would run on Google's cloud.
Apple's Private Cloud Compute is not only underpowered but also underutilized in its current state, notes 9to5Mac, "with the company only using about 10% of its capacity on average, leading some already-manufactured Apple servers to sit dormant on warehouse shelves."
Music

'The Death of Spotify: Why Streaming is Minutes Away From Being Obsolete' 70

An anonymous reader shares a column: I'm going to take the diplomatic hat off here and say with brutal honesty: basically everybody in the music business hates Spotify except for the people who work there. It's a platform that sucks artists for everything they have, it actively prevents community building, and, despite all of that, the platform still struggles to maintain a healthy profit margin.

The streaming business model is fundamentally broken. And eventually, its demise will become more and more obvious. I'll break down exactly why the DSP era is coming to a grinding halt, why the major labels are quietly terrified, and why the artists who don't pivot now are going to go down with the ship.

[...] Jimmy Iovine put it bluntly: "The streaming services have a bad situation, there's no margins, they're not making any money." This model only works for Apple, Amazon, and Google, because they don't need their music platforms to be wildly profitable. Amazon uses music as a loss-leader to keep you paying for Prime. Apple uses it to sell $1,000 iPhones. As for Spotify, or any standalone music streaming company, they're kind of screwed. And guess what -- when the platform's margins are structurally squeezed, guess who gets squeezed first? The artists.

[...] What if Jimmy is right? If the DSPs are "minutes away from obsolete," what replaces them? Well, I'm not sure the DSPs are going to disappear overnight, but if you're an artist or a manager trying to sustain yourself in this evolving music economy, the answer is direct ownership. The artists who will survive the next five years are the ones who are quietly shifting their focus away from the "ATM Machine."

They are building their own cultural hangars. They are capturing phone numbers on Laylo. They are driving fans to private Discord servers. They are focusing on ARPF (Average Revenue Per Fan) through high-margin merch, vinyl, and hard tickets, rather than begging for fractions of a penny from a playlist placement. We are witnessing the death of the "Mass Audience" and the birth of the "Micro-Community."
AI

Memory Price Hikes Will Kill Off Budget PCs and Smartphones, Analyst Warns 62

An anonymous reader quotes a report from The Register: Ballooning memory prices are forecast to kill off entry-level PCs, leading to a decline in global shipments this year -- and a similar effect is going to hit smartphones. Analyst biz Gartner is projecting a drop in PC shipments of more than 10 percent during 2026, and a decline of around 8 percent for smartphones, all due to the AI-driven memory shortage. Some types of memory have doubled or quadrupled in price since last year, and Gartner believes the DRAM and NAND flash used in PCs and phones are set for a further 130 percent rise by the end of 2026.

The upshot of this is that the budget PC will disappear, simply because vendors won't be able to build them at a price that will satisfy cost-conscious buyers, according to Gartner research director Ranjit Atwal. "Because the price of memory is increasing so much, vendors lose the ability to provide entry-level PCs -- those below about $500," he told The Register. PC makers could raise the price of their cheap and cheerful boxes above that level to compensate for the memory hike, but price-sensitive buyers simply won't bite, he added.

Another factor expected to add to the declining fortunes of the PC industry this year is AI devices -- systems equipped with special hardware for accelerating AI tasks, typically via a neural processing unit (NPU) embedded in the CPU. These systems were predicted to take the market by storm, but they require more memory to support AI processing and vendors like to mark them up to a premium price. "Historically, downgrading specifications was the way to go when prices were being squeezed, but that's difficult here," Atwal said. "The thinking was that the average price [of AI PCs] would fall this year, and lead to more adoption," said Atwal, "but that's not happening." The lack of killer applications isn't helping either.
Security

CrowdStrike Says Attackers Are Moving Through Networks in Under 30 Minutes (cyberscoop.com) 30

An anonymous reader shares a report: Cyberattacks reached victims faster and came from a wider range of threat groups than ever last year, CrowdStrike said in its annual global threat report released Tuesday, adding that cybercriminals and nation-states increasingly relied on predictable tactics to evade detection by exploiting trusted systems.

The average breakout time -- how long it took financially motivated attackers to move from initial intrusion to other network systems -- dropped to 29 minutes in 2025, a 65% increase in speed from the year prior. "The fastest breakout time a year ago was 51 seconds. This year it's 27 seconds," Adam Meyers, head of counter adversary operations at CrowdStrike, told CyberScoop. Defenders are falling behind because attackers are refining their techniques, using social engineering to access high-privilege systems faster and move through victims' cloud infrastructure undetected.

AI

Viral Doomsday Report Lays Bare Wall Street's Deep Anxiety About AI Future 52

A 7,000-word "doomsday" thought experiment from Citrini Research helped trigger an 800-point drop in the Dow, "painting a dark portrait of a future in which technological change inspires a race to the bottom in white-collar knowledge work," reports the Wall Street Journal. From the report: Concerns of hyperscalers overspending are out. Worries of software-industry disruption don't go far enough. The "global intelligence crisis" is about to hit. The new, broader question: What if AI is so bullish for the economy that it is actually bearish? "For the entirety of modern economic history, human intelligence has been the scarce input," Citrini wrote in a post it described as a scenario dated June 2028, not a prediction. "We are now experiencing the unwind of that premium."

Many of Monday's moves roughly aligned with the situation outlined by Citrini, in which fast-advancing AI tools allow spending cuts across industries, sparking mass white-collar unemployment and in turn leading to financial contagion. Software firms Datadog, CrowdStrike and Zscaler each plunged more than 9%. International Business Machines' 13% decline was its worst one-day performance since 2000. American Express, KKR and Blackstone -- all name-checked by Citrini -- tumbled. That anxiety, coupled with renewed uncertainty about trade policy from Washington, weighed down major indexes Monday. The Dow Jones Industrial Average led declines, falling 1.7%, or 822 points. The S&P 500 shed 1%, while the Nasdaq composite retreated 1.1%.

[...] Monday's market swings extended a run of AI-linked volatility. A small research outfit that has garnered a huge Substack following for macro and thematic stock research, Citrini said in its new post that software firms, payment processors and other companies formed "one long daisy chain of correlated bets on white-collar productivity growth" that AI is poised to disrupt. [...] Shares in DoorDash also veered 6.6% lower Monday after Citrini's Substack note called the delivery app a "poster child" for how new tools would upend companies that monetize interpersonal friction. In the research firm's scenario, AI agents would help both drivers and customers navigate food deliveries at much lower costs.
Science

Newborn Chicks Connect Sounds With Shapes Just Like Humans, Study Finds (scientificamerican.com) 16

An anonymous reader quotes a report from Scientific American: Why does "bouba" sound round and "kiki" sound spiky? This intuition that ties certain sounds to shapes is oddly reliable all over the world, and for at least a century, scientists have considered it a clue to the origin of language, theorizing that maybe our ancestors built their first words upon these instinctive associations between sound and meaning. But now a new study adds an unexpected twist: baby chickens make these same sound-shape connections, suggesting that the link to human language may not be so unique. The results, published today in Science, challenge a long-standing theory about the so-called bouba-kiki effect: that it might explain how humans first tethered meaning to sound to create language. Perhaps, the thinking goes, people just naturally agree on certain associations between shapes and sounds because of some innate feature of our brain or our world. But if the barnyard hen also agrees with such associations, you might wonder if we've been pecking at the wrong linguistic seed.

Maria Loconsole, a comparative psychologist at the University of Padua in Italy, and her colleagues decided to investigate the bouba-kiki effect in baby chicks because the birds could be tested almost immediately after hatching, before their brain would be influenced by exposure to the world. The researchers placed chicks in front of two panels: one featured a flowerlike shape with gently rounded curves; the other had a spiky blotch reminiscent of a cartoon explosion. They then played recordings of humans saying either "bouba" or "kiki" and observed the birds' behavior. When the chicks heard "bouba," 80 percent of them approached the round shape first and spent an average of more than three minutes exploring it compared with an average of just under one minute spent exploring the spiky shape. The exploration preferences were flipped when the chicks heard "kiki."

Because the tests took place within the chicks' carefully supervised first hours of life outside their eggshell, this association between particular sounds and shapes couldn't have been learned from experience. Instead it may be evidence of an innate perceptual bias that goes back way farther in our evolutionary history than previously believed. "We parted with birds on the evolutionary line 300 million years ago," says Aleksandra Ćwiek, a linguist at Nicolaus Copernicus University in Toruń, Poland, who was not involved in the study. "It's just mind-blowing."
