Submission + - Ask Slashdot: What's the Stupidest Use of AI You Saw in 2025?

destinyland writes: What's the stupidest use of AI you encountered in 2025? Have you been called by AI telemarketers? Forced to do job interviews with a glitching AI?

With all this talk of "disruption" and "inevitability," this is our chance to have some fun. Personally, I think 2025's worst AI "innovation" was the AI-powered web browsers that eat web pages and then spit out a slop "summary" of what you would've seen if you'd actually visited the page. But there've been other AI projects that were just exquisitely, quintessentially bad...

— Two years after the death of Suzanne Somers, her husband recreated her with an AI-powered robot.

— Disneyland imagineers used deep reinforcement learning to program a talking robot snowman.

— Attendees at an LA Comic Con were offered that chance to to talk to an AI-powered hologram of Stan Lee for $20.

— And of course, as the year ended, the Wall Street Journal announced that a vending machine run by Anthropic's Claude AI had been tricked into giving away hundreds of dollars in merchandise for free, including a PlayStation 5, a live fish, and underwear.

What did I miss? What "AI fails" will you remember most about 2025?

Submission + - Beijing Ruled AI-caused Job Replacement Illegal (globaltimes.cn)

hackingbear writes: China's state-affiliated Global Times reported that Beijing Municipal Bureau of Human Resources and Social Security ruled in a labor dispute arbitration that "AI replacing a position does not equal to legal dismissal," providing a case reference for resolving similar cases in the future. A worker with surname Liu had worked in a technology company for many years, responsible for traditional manual map data collection. In early 2024, the company decided to full transition to AI-managed autonomous data collection, abolishing Liu's department, and terminated Liu's labor contract on the grounds that "major changes have occurred in the objective circumstance on which the hiring contract was based, making it impossible to continue implementing the labor contract." Liu objected to the firm's termination, claiming it was unlawful and applied for arbitration. The labor board ruled that the company's introduction of AI technology was a proactive technological innovation implemented by the enterprise to adapt to market competition, and that termination of Liu's labor contract on the grounds that the position was replaced by AI shifts the risk of normal technological iteration onto the employee. The arbitration committee noted that, against the backdrop of the rapid development of AI technology, employers should properly accommodate affected employees through measures such as negotiating changes to the labor contract, providing skills training, and internal job reassignment. If it is indeed necessary to terminate the labor contract, employers must strictly comply with relevant laws and avoid simply applying "major changes in the objective environment" as grounds for termination. "This ruling safeguards Liu's legitimate rights and interests, providing reassurance to the vast number of workers, helping alleviate employees' anxiety about AI," Wang Peng, an associate researcher at the Beijing Academy of Social Sciences, told the Global Times.

Submission + - Rob Pike gets spammed with AI slop

Anomolous Cowturd writes: An AI bot let loose on the world by an outfit called AI Village has seen fit to waste a legend's time and patience. See an article by Simon Willison about it.

Says Rob on Bluesky: "Fuck you people. Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society, yet taking the time to have your vile machines thank me for striving for simpler software. Just fuck you. Fuck you all. I can't remember the last time I was this angry."

Submission + - AI doesn't care about Ethics: Why technoskepticism must be political 1

TheBAFH writes: Here's a philosophy paper that rejects the opposing extreme positions of technophilia and technophobia and defines technoskepticism which "offers a vision where technological development is not an end in itself but a means to foster an autonomous society based on humanistic values, critical thought, and democratic self-governance. "
The paper is about "AI", but the core ideas apply in technology in general.

Submission + - Legend of Zelda: Twilight Princess has been decompiled

willtro writes: The Legend of Zelda: Twilight Princess, one of the entries of the classic Nintendo franchise for the Wii, is currently being decompiled by a team of developers on GitHub.

The project currently sits at around 61% decompiled with the goal of supporting every retail version. Their GitHub readme says the goal is not to produce a PC port:

This project itself is not, and will not, produce a port, to PC or any other platform. It is a decompilation of the original game code, which can be compiled back into a binary identical to the original.

Submission + - Author of LAFD Palisades fire report declined to endorse final version (latimes.com)

An anonymous reader writes: He called it ‘highly unprofessional’

The author of the LAFD’s after-action report on the Palisades fire declined to endorse the report and said the document has undergone “substantial modifications and contains significant deletions of information.”

AKA, Coverup.

Submission + - Sal Khan: Companies Should Give 1% of Profits to Retrain Workers Displaced by AI (nytimes.com)

destinyland writes: Sal Kahn (founder/CEO of the nonprofit Khan Academy), says companies should donate 1% of their profits to help retrain the people displaced by AI, in a new guest essay in the New York Times...

This isn’t charity. It is in the best interest of these companies. If the public sees corporate profits skyrocketing while livelihoods evaporate, backlash will follow — through regulation, taxes or outright bans on automation. Helping retrain workers is common sense, and such a small ask that these companies would barely feel it, while the public benefits could be enormous...

Roughly a dozen of the world’s largest corporations now have a combined profit of over a trillion dollars each year. One percent of that would create a $10 billion annual fund that, in part, could create a centralized skill training platform on steroids: online learning, ways to verify skills gained and apprenticeships, coaching and mentorship for tens of millions of people. The fund could be run by an independent nonprofit that would coordinate with corporations to ensure that the skills being developed are exactly what are needed. This is a big task, but it is doable; over the past 15 years, online learning platforms have shown that it can be done for academic learning, and many of the same principles apply for skill training...

To meet the challenges, we don’t need to send millions back to college. We need to create flexible, free paths to hiring, many of which would start in high school and extend through life. Our economy needs low-cost online mechanisms for letting people demonstrate what they know. Imagine a model where capability, not how many hours students sit in class, is what matters; where demonstrated skills earn them credit and where employers recognize those credits as evidence of readiness to enter an apprenticeship program in the trades, health care, hospitality or new categories of white-collar jobs that might emerge...

There is no shortage of meaningful work — only a shortage of pathways into it.

Submission + - KOREA BUILDS 500B-PARAMETER AI TO BREAK US, CHINA DEPENDENCE (nerds.xyz)

BrianFagioli writes: South Korea has unveiled a massive 519B-parameter AI model designed as national infrastructure, not a consumer chatbot. Backed by SK Telecom and major universities, the system is framed as a sovereign âoeteacher modelâ meant to power smaller AIs, validate domestic chips, and reduce reliance on American and Chinese platforms.

Submission + - TIME FEELS BROKEN? TIKTOK MAY BE TO BLAME⦠(nerds.xyz)

BrianFagioli writes: TikTok users are increasingly claiming that time feels âoeoff,â with many saying December vanished and Christmas arrived without warning. Some online are framing the sensation as a timeline shift or a break in reality itself. A new opinion piece argues the cause is far more mundane: short-form video and streaming culture may be flattening memory, erasing shared experiences, and compressing how time is perceived.

The article points to TikTokâ(TM)s endless scroll, lack of natural stopping points, and constant novelty as factors that prevent the brain from forming clear memory anchors. Combined with the decline of synchronized TV viewing and shared cultural moments, the result is a growing sense that time is accelerating — even though the clock has not changed. The author suggests that restoring structure, rituals, and uninterrupted experiences can make time feel âoenormalâ again.

Submission + - CloudAI is the inevitable future :o

Mirnotoriety writes: CloudAI is the inevitable future of enterprise computing, in which your entire IT fiefdom hangs cheerfully off a single strategic VM image running on some distant, humming cluster you’ll never see outside of a data center you will never visit and cannot pronounce.

Multiple cloned instances lurk behind a load balancer, scattered across at least two availability zones or hosts, all wrapped in layers of redundancy and recovery plans that look stunning in slide decks and almost never get tested on purpose. When that whole majestic cluster face-plants at once, it is not a failure; it is your official invitation to embrace the next evolutionary stage: a multi-cloud architecture.

In this brave new world, multi-cloud means painstakingly re-implementing your snowflake stack across several different providers, each with its own console, IAM model, pricing page, and carefully incompatible managed services and pricing roulette wheel.

You don’t just get resilience; you get three different ways to be locked in, three sets of dashboards, and failover runbooks that read like an incident-themed choose-your-own-adventure. Every outage becomes a live-fire exam in which the only question is: “Which cloud is on fire today, and where did we put the runbook this time?”

Naturally, no modern buzzword bingo card is complete without hybrid cloud. That’s where you heroically wheel hardware back into the racks you proudly decommissioned last strategy cycle. So you can pay for on-prem tin and cloud bills, all in perfect cost-synergy. Your provider then bolts an edge node onto your ISP, shaving a few milliseconds off latency while grafting a fresh recurring charge onto your invoice. Every expense; hardware, colo, bandwidth, licenses, consultants, and the quiet sobbing of your operations team, flows serenely downstream to you the customer as nature intended.

And best of all, all this pageantry unfolds before anyone has seriously started designing and implementing the actual “cloud” solution for your business. That part comes later, after the contracts are signed, the edge nodes are installed, the multi-cloud vision is announced at the all-hands, and the architecture diagram has gone through six rounds of rebranding. Only then do you finally ask the small, charmingly retro question:

“So what problem were we trying to solve again?”

Submission + - UCLA prof suspended for refusing lenient grading for black students 1

An anonymous reader writes: Judge rules against UCLA prof suspended after refusing lenient grading for black students

Key Takeaways

* A judge ruled against UCLA lecturer Gordon Klein, who was suspended after refusing to grade black students leniently in the wake of George Floyd's death, siding with UCLA on every issue

* Klein sought $13 million in damages, alleging his suspension harmed his expert witness consulting practice, but the judge ruled Klein's own actions contributed to the harm and UCLA acted reasonably

* The judge's ruling allows UCLA to maintain its administrative decisions during a public controversy, finding no violation of Klein's academic freedom, and Klein's legal team has appealed the decision based on what they claim are significant oversights in the ruling

Submission + - Elon Musk Says He's Removing 'Sustainable' From Tesla's Mission (gizmodo.com)

joshuark writes: Elon Musk apparently got “Joy to the World” stuck in his head and decided to change the entire mission of his company because of it. On Christmas Eve, the world’s richest man took to X instead of spending time with his family to declare that he is “changing the Tesla mission wording from: Sustainable Abundance To Amazing Abundance,” explaining, “The latter is more joyful.”

Beyond just changing one undefined term to a nonsensical phrase, Musk’s decision to ditch “sustainable” is another marker of how far he’s strayed from his past positions on climate change. Now Musk is boosting AI as the future and claiming climate change actually isn’t that big of a deal. Last year, Musk said, “We still have quite a bit of time” to address climate change, and “we don’t need to rush” to solve it.

He also claimed that things won’t get really bad for humans until CO2 reaches levels of about 1,000 parts per million in the Earth’s atmosphere, because that would start to cause people to experience “headaches and nausea.”

Looks like all that is out the window. The future is "amazing," it's not necessarily sustainable. What a charge...change...

Submission + - The moral critic of the AI industry—a Q&A with Holly Elmore (foommagazine.org)

Gazelle Bay writes: Since AI was first conceived of as a serious technology, some people wondered whether it might bring about the end of humanity. For some, this concern was simply logical. Human individuals have caused catastrophes throughout history, and powerful AI, which would not be bounded in the same way, might therefore pose even worse dangers.

In recent times, as the capabilities of AI have grown larger, one might have thought that its existential risks would also have become more obvious in nature. And in some ways, they have. It is increasingly easy to see how AI could pose severe risks now that it is being endowed with agency, for example, or being put in control of military weaponry.

On the other hand, the existential risks of AI have become more murky. Corporations increasingly sell powerful AI as just another consumer technology. They talk blandly about giving it the capability to improve itself, without setting any boundaries. They perform safety research, even while racing to increase performance. And, while they might acknowledge existential risks of AI, in some cases, they tend to disregard serious problems with other, closely related technologies.

The rising ambiguity of the AI issue has led to introspection and self-questioning in the AI safety community, chiefly concerned about existential risks for humanity. Consider what happened in November, when a prominent researcher named Joe Carlsmith, who had worked at the grantmaking organization called Open Philanthropy (recently renamed as Coefficient Giving), announced that he would be joining the leading generative AI company, Anthropic.

There was one community member on Twitter/X, named Holly Elmore, who provided a typically critical commentary: "Sellout," she wrote, succinctly.

Submission + - Apple's Brain Drain In Post-iPhone Era Proves It Can Handle Executive Turnover (bgr.com)

anderzole writes: In light of the executive and employee turnover Apple, many analysts and armchair pundits online have begun asking if Apple is losing its magic. The departures, along with rumors of Cook stepping down, have naturally sparked questions about stability within the company, not to mention Apple's ability to keep churning out and developing best-selling products in the years ahead. This speculation is understandable, but Apple is structured in a way such that it can survive any number of key executive departures. This isn't a theory, but rather something that Apple already proved a little more than a decade ago in the years following the release of the iPhone.

If anything, the brain drain following the iPhone release was far more significant than what Apple is experiencing right now. Still, Apple is set up in such a way that it can survive any one person leaving, no matter how important.

Submission + - Digital Sovereignty in Europe (theregister.com)

mspohr writes: Europe’s quest for digital sovereignty is hampered by a 90 per cent dependency on US cloud infrastructure, claims Cristina Caffarra, a competition expert and a driving force behind the Eurostack initiative.

While Brussels champions policy initiatives and American tech giants market their own ‘sovereign’ solutions, a handful of public authorities in Austria, Germany, and France, alongside the International Criminal Court in The Hague, are taking concrete steps to regain control over their IT.
These cases provide a potential blueprint for a continent grappling with its technological autonomy, while simultaneously revealing the deep-seated legal and commercial challenges that make true independence so difficult to achieve.

The core of the problem lies in a direct and irreconcilable legal conflict. The US CLOUD Act of 2018 allows American authorities to compel US-based technology companies to provide requested data, regardless of where that data is stored globally. This places European organizations in a precarious position, as it directly clashes with Europe's own stringent privacy regulation, the General Data Protection Regulation (GDPR).

Austria's Federal Ministry for Economy, Energy and Tourism is a case in point. The ministry recently completed a migration of 1,200 employees to the European open-source collaboration platform Nextcloud, but the project was not a migration away from an existing US cloud provider. It was a deliberate choice not to adopt one.

The primary driver was not cost, but sovereignty. "It was never about saving money," Zinnagl adds. "It was about maintaining control over our own data and our own systems."

The decision has triggered a ripple effect, as several other Austrian ministries have since begun implementing Nextcloud. For Zinnagl and Ollrom, this proves that one organization willing to take the first step can inspire others to follow.

Their advice to other European governments is clear: be brave, involve management, and start. "You don't achieve digital sovereignty overnight," Ollrom tells The Register. "You have to do this in many steps, but you have to start with the first step. Don't just talk about it, but execute it."

Submission + - DeepSeek-R1 Exposes AI Weakness: Security Degrades With Ideological Trigger (techreport.com)

An anonymous reader writes: CrowdStrike found DeepSeek-R1’s code security collapses when politically sensitive keywords are present, even when those words have nothing to do with the task. Vulnerability rates jumped by nearly 50%. The failure isn’t a jailbreak or hallucination: it’s alignment leaking into technical reasoning. Political guardrails appear encoded into the model weights themselves.

Submission + - Coup in Paris: How an AI-generated video caused Macron a major headache (euronews.com) 2

alternative_right writes: Alongside the message, a compelling video showcasing a swirling helicopter, military personnel, crowds and — what appears to be — a news anchor delivering a piece to camera.

"Unofficial reports suggest that there has been a coup in France, led by a colonel whose identity has not been revealed, along with the possible fall of Emmanuel Macron. However, the authorities have not issued a clear statement," she says.

Except, nothing about this video is authentic: it was created with AI.

After discovering the video, Macron asked Pharos — France's official portal for signalling online illicit content — to call Facebook's parent company Meta, to get the fake video removed.

But that request was turned down, as the platform claimed it did not violate its “rules of use."

Submission + - Apple's App Course Runs $20,000 a Student. Is It Really Worth It? (wired.com)

An anonymous reader writes: Two years ago, Lizmary Fernandez took a detour from studying to be an immigration attorney to join a free Apple course for making iPhone apps. The Apple Developer Academy in Detroit launched as part of the company’s $200 million response to the Black Lives Matter protests and aims to expand opportunities for people of color in the country’s poorest big city. But Fernandez found the program’s cost-of-living stipend lacking—“A lot of us got on food stamps,” she says—and the coursework insufficient for landing a coding job. “I didn’t have the experience or portfolio,” says the 25-year-old, who is now a flight attendant and preparing to apply to law school. “Coding is not something I got back to.”

Since 2021, the academy has welcomed over 1,700 students, a racially diverse mix with varying levels of tech literacy and financial flexibility. About 600 students, including Fernandez, have completed its 10-month course of half-days at Michigan State University, which cosponsors the Apple-branded and Apple-focused program. WIRED reviewed contracts and budgets and spoke with officials and graduates for the first in-depth examination of the nearly $30 million invested in the academy over the past four years—almost 30 percent of which came from Michigan taxpayers and the university’s regular students. As tech giants begin pouring billions of dollars into AI-related job training courses across the country, the Apple academy offers lessons on the challenges of uplifting diverse communities.

[...] The program gives out iPhones and MacBooks and spends an estimated $20,000 per student, nearly twice as much as state and local governments budget for community colleges. [...] About 70 percent of students graduate, which [Sarah Gretter, the academy leader for Michigan State] describes as higher than typical for adult education. She says the goal is for them to take “a next step,” whether a job or more courses. Roughly a third of participants are under 25, and virtually all of them pursue further schooling. [...] About 71 percent of graduates from the last two years went onto full-time jobs across a variety of industries, according to academy officials. Amy J. Ko, a University of Washington computer scientist who researches computing education, calls under 80 percent typical for the coding schools she has studied but notes that one of her department’s own undergraduate programs has a 95 percent job placement rate.

Slashdot Top Deals