Submission + - ClippyAI says AI is overhyped

Mirnotoriety writes: Why it's overhyped

* Most demos are still cherry-picked, brittle, and require heavy human babysitting. (The moment you ask the agent to deal with a slightly weird PDF, a CAPTCHA, an internal tool without an API, or a manager who changes requirements mid-task — it falls apart.)

* Actual enterprise adoption is still tiny. Companies are piloting, not replacing teams at scale.

* The economics don’t work yet for most roles: paying $20–200/month per agent sounds cheap until you need 10–20 specialized agents + human oversight + error correction + compliance checks.

* Many “I replaced my team” stories later get walkbacks when people admit they’re still doing 60–80% of the work themselves.

More honest current state (Dec 2025)

* AI agents are genuinely useful for narrow, repetitive, well-defined tasks (scraping data, writing first drafts, basic QA, simple customer support replies, generating boilerplate code).

* They’re not autonomous workers yet. Think of them as extremely talented but unreliable interns who need constant supervision.

* The real productivity gains right now are coming from centaurs (human + AI) rather than fully autonomous agents.

Submission + - Sodium batteries with 3.6m mile lifespan in 2026 (simcottrenewables.co.uk) 1

shilly writes: CATL has announced it will be launching its new sodium batteries in 2026. They have some major advantages over LFP chemistries, including:
- 65% cheaper at launch ($19 at cell level, expected to drop to $10 in future)
- 85% of original range retained after 3.6m miles
- Dramatically less range reduction in very cold conditions
- Inherently lower fire risk
- Can be transported on 0% charge
- Slightly better gravimetric energy density (175 Wh/kg vs. 165)
Sodium isn’t a panacea: volumetric density remains lower, for example. But these batteries could well dominate in years to come, not least because they are made of commonly available materials (table salt!). For example, millions of homes across Africa are installing solar plus storage to have heat, light and power at night, throwing out their kerosene lamps. Sodium could substantially accelerate the trend.

Submission + - America is building a society that cannot function without AI (nerds.xyz)

BrianFagioli writes: The United States is rapidly building a society that assumes artificial intelligence will always be available. AI now sits at the center of banking, healthcare, logistics, education, media, and government workflows, increasingly handling not just automation but decision-making and cognition itself. The risk is not AI being “too smart,” but Americans slowly losing the ability — and habit — of thinking and functioning without it. As more writing, research, planning, and judgment are outsourced to centralized systems, human fallback skills quietly atrophy, making society efficient but brittle.

That brittleness becomes a national risk when AI’s real dependencies are considered. Large-scale AI depends on data centers, power grids, and stable infrastructure that can fail due to outages, cyber incidents, or geopolitical pressure. Foreign adversaries do not need to defeat the US militarily to cause disruption; they only need to interrupt systems Americans assume will always work. A society optimized for AI uptime rather than resilience may discover, very suddenly, that when the intelligence layer goes dark, confusion spreads faster than solutions.

Submission + - Pluribus insanum et ridiculum

Mirnotoriety writes: Pluribus: S01E01: 12:53: Jenn the scientist removes her gloves and starts poking a rat infected with an unknown virus sent by an alien race six hundred light years away, and the rat bites her. Jenn isn't even sent into quarantine (that's for amateurs, like in actual science); instead she's allowed to wander the lab and infects Mel at the vending machine. They both go on to infect the rest of the lab.

I was expecting better from Vince Gilligan. Reminds me of when those exobiologists in “Prometheus” got stranded in that crashed alien ship, took off their helmets, and had a sniff to determine whether the air was breathable.

Submission + - "Pull Over and Show Me Your Apple Wallet"

theodp writes: MacRumors reports that Apple plans to expand iPhone and Apple Watch driver's licenses to 7 more U.S. states (CT, KY, MS, OK, UT, AR, VA). A recent convert is the State of Illinois, whose website videos demo how you can use your Apple Wallet license to display proof of identity or age the next time you get carded by a cop, bartender, or TSA agent. The new states will join 13 others that already offer driver's licenses in the Wallet app (AZ, MD, CO, GA, OH, HI, CA, IA, NM, MT, ND, WV, IL).

There's certainly been a lot of foot-dragging by the states when it comes to embracing phone-based driver's licenses — Slashdot reported that Iowa was ready to launch a mobile-app driver's license in 2014; it finally got one nearly a decade later, in late 2023.

Submission + - Aurora lights: The science behind the nighttime spectacle (dw.com) 1

alternative_right writes: Huge explosions on the surface of the sun, known as solar storms, regularly eject vast streams of electrically charged particles. Some of this plasma ends up traveling toward Earth and is pulled toward the planet's magnetic poles.

"These particles then slam into atoms and molecules in the Earth's atmosphere and essentially heat them up," explained astronomer Tom Kerss on the Royal Museums Greenwich website. "It's very much like heating a gas and making it glow."

The different colors of light depend on the elements in the atmosphere. Oxygen, which makes up about 21% of the atmosphere, emits a green color when heated, while nitrogen tints the light purple, blue or pink.

Submission + - Ask Slashdot: What's the Stupidest Use of AI You Saw in 2025?

destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?

With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at an LA Comic Con were offered the chance to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025?

Submission + - Beijing Ruled AI-caused Job Replacement Illegal (globaltimes.cn)

hackingbear writes: China's state-affiliated Global Times reported that the Beijing Municipal Bureau of Human Resources and Social Security ruled in a labor dispute arbitration that "AI replacing a position does not equal legal dismissal," providing a case reference for resolving similar disputes in the future.

A worker surnamed Liu had worked at a technology company for many years, responsible for traditional manual map data collection. In early 2024, the company decided to transition fully to AI-managed autonomous data collection, abolished Liu's department, and terminated Liu's labor contract on the grounds that "major changes have occurred in the objective circumstances on which the hiring contract was based, making it impossible to continue implementing the labor contract." Liu objected to the termination, claiming it was unlawful, and applied for arbitration.

The labor board ruled that the company's introduction of AI technology was a proactive technological innovation implemented by the enterprise to adapt to market competition, and that terminating Liu's labor contract on the grounds that the position was replaced by AI shifts the risk of normal technological iteration onto the employee. The arbitration committee noted that, against the backdrop of the rapid development of AI technology, employers should properly accommodate affected employees through measures such as negotiating changes to the labor contract, providing skills training, and internal job reassignment. If it is indeed necessary to terminate a labor contract, employers must strictly comply with relevant laws and avoid simply invoking "major changes in the objective environment" as grounds for termination.

"This ruling safeguards Liu's legitimate rights and interests, providing reassurance to the vast number of workers and helping alleviate employees' anxiety about AI," Wang Peng, an associate researcher at the Beijing Academy of Social Sciences, told the Global Times.

Submission + - Rob Pike gets spammed with AI slop

Anomolous Cowturd writes: An AI bot let loose on the world by an outfit called AI Village has seen fit to waste a legend's time and patience. See an article by Simon Willison about it.

Says Rob on Bluesky: "Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry."

Submission + - AI doesn't care about Ethics: Why technoskepticism must be political 1

TheBAFH writes: Here's a philosophy paper that rejects the opposing extremes of technophilia and technophobia and defines technoskepticism, which "offers a vision where technological development is not an end in itself but a means to foster an autonomous society based on humanistic values, critical thought, and democratic self-governance."
The paper is about "AI," but the core ideas apply to technology in general.

Submission + - Sal Khan: Companies Should Give 1% of Profits to Retrain Workers Displaced by AI (nytimes.com)

destinyland writes: Sal Khan (founder/CEO of the nonprofit Khan Academy) says companies should donate 1% of their profits to help retrain the people displaced by AI, in a new guest essay in the New York Times...

This isn’t charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous...

Roughly a dozen of the world’s largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training...

To meet the challenges, we don’t need to send millions back to college. We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge...

There is no shortage of meaningful work — only a shortage of pathways into it.

Submission + - UCLA prof suspended for refusing lenient grading for black students 2

An anonymous reader writes: Judge rules against UCLA prof suspended after refusing lenient grading for black students

Key Takeaways

* A judge ruled against UCLA lecturer Gordon Klein, who was suspended after refusing to grade black students leniently in the wake of George Floyd's death, siding with UCLA on every issue

* Klein sought $13 million in damages, alleging his suspension harmed his expert witness consulting practice, but the judge ruled Klein's own actions contributed to the harm and UCLA acted reasonably

* The judge's ruling allows UCLA to maintain its administrative decisions during a public controversy, finding no violation of Klein's academic freedom, and Klein's legal team has appealed the decision based on what they claim are significant oversights in the ruling

Submission + - Elon Musk Says He's Removing 'Sustainable' From Tesla's Mission (gizmodo.com)

joshuark writes: Elon Musk apparently got “Joy to the World” stuck in his head and decided to change the entire mission of his company because of it. On Christmas Eve, the world’s richest man took to X instead of spending time with his family to declare that he is “changing the Tesla mission wording from: Sustainable Abundance To Amazing Abundance,” explaining, “The latter is more joyful.”

Beyond just changing one undefined term to a nonsensical phrase, Musk’s decision to ditch “sustainable” is another marker of how far he’s strayed from his past positions on climate change. Now Musk is boosting AI as the future and claiming climate change actually isn’t that big of a deal. Last year, Musk said, “We still have quite a bit of time” to address climate change, and “we don’t need to rush” to solve it.

He also claimed that things won’t get really bad for humans until CO2 reaches levels of about 1,000 parts per million in the Earth’s atmosphere, because that would start to cause people to experience “headaches and nausea.”

Looks like all that is out the window. The future is "amazing," it's not necessarily sustainable. What a charge...change...

Submission + - Apple's Brain Drain In Post-iPhone Era Proves It Can Handle Executive Turnover (bgr.com)

anderzole writes: In light of the executive and employee turnover at Apple, many analysts and armchair pundits online have begun asking whether Apple is losing its magic. The departures, along with rumors of Cook stepping down, have naturally sparked questions about stability within the company, not to mention Apple's ability to keep developing and churning out best-selling products in the years ahead. This speculation is understandable, but Apple is structured in such a way that it can survive any number of key executive departures. This isn't a theory, but rather something Apple already proved a little more than a decade ago in the years following the release of the iPhone.

If anything, the brain drain following the iPhone release was far more significant than what Apple is experiencing right now. Still, Apple is set up in such a way that it can survive any one person leaving, no matter how important.
