Submission + - Cory Doctorow explains how legalising reverse engineering would end enshittification (theguardian.com)

Bruce66423 writes: 'Donald Trump’s tariffs have opened up a new possibility for the technology we have become increasingly dependent on. Today, nearly all of our tech comes from US companies, and it arrives as a prix fixe meal. If you want to talk with your friends on a Meta platform, you have to let Meta’s Mark Zuckerberg eavesdrop on your conversations. If you want to have a phone that works, you have to let Apple’s Tim Cook suck 30p out of every pound you spend and give him a veto over which software you can run. If you want to search the web, you have to let Google’s Sundar Pichai know what colour underwear you’ve got on.

'This is a genuinely odd place for digital computers to have got to. Every computer in your life, from your mobile phone to your smart speaker to your laptop to your TV, is theoretically capable of running all programmes, including the ones the manufacturers would really prefer you stay away from. This means that there are no prix fixe menus in technology – everything can be had à la carte. Thanks to the infinite flexibility of computers, every 10-foot fence a US tech boss installs in a digital product you rely on invites a programmer to supply you with a four-metre ladder so you can scamper nimbly over it. However, we adopted laws – at the insistence of the US trade rep – that prohibit programmers from helping you alter the devices you own, in legal ways, if the manufacturer objects. This is one thing that leads to what I refer to as the enshittification of technology.

'There is only one reason the world isn’t bursting with wildly profitable products and projects that disenshittify the US’s defective products: its (former) trading partners were bullied into passing an “anti-circumvention” law that bans the kind of reverse-engineering that is the necessary prelude to modifying an existing product to make it work better for its users (at the expense of its manufacturer). But the Trump tariffs change all that. The old bargain – put your own tech sector in chains, expose your people to our plunder of their data and cash, and in return, the US won’t tariff your exports – is dead'

Submission + - Should Real-World Examples be Required for Standards and Other Mandates?

theodp writes: If someone wants to impose standards, forms, documentation requirements, and other mandates on others, it seems only fair that they should be able to — and required to — demonstrate it in action first, right? Without real-world examples of what is considered 'good', people are essentially asked to sign off on a black box without a clear idea of what is being demanded, how much work it may entail, and in the end how worthwhile it even may be.

Surprisingly, that's not how things tend to play out in practice in industry, academia, and other organizations. A case in point is the proposed new Computer Science + AI Standards for pre-kindergarten to high school students assembled by a consortium of educators, tech-backed nonprofits, and tech industry advisors that aims to shape how CS+AI is taught in classrooms. A Friday morning LinkedIn post from the Computer Science Teachers Association reminds educators that they have 72 hours to "help us improve them [the standards] by reviewing and completing our feedback form by 9am ET on Monday, January 12."

Under development since 2023, the 247-page standards document is chock full of students-should-be-able-to pronouncements for all grade levels but offers no concrete examples of what that looks like in practice in terms of acceptable student deliverables or teacher lesson plans — e.g., "Students should be able to create a functional, rule-based AI for a Non-Playable Character (NPC) using programming or visual scripting. Students’ implementation must be based on a recognized AI method (e.g., finite-state machine, behavior tree)."

As Ross Perot once said, the devil is in the details. So, in a world where more and more people specialize in governance, risk, and compliance jobs that involve specifying mandates for others to comply with, shouldn't it be a red flag if they can't show real-world examples of how to satisfy those mandates? If you require it, shouldn't you be able to demonstrate it? Otherwise, doesn't it signal that the mandate hasn’t been validated? And open the door to being told “that’s not what I meant” for those left to guess at what was meant?

Submission + - CES Worst in Show Awards Call Out The Tech Making Things Worse (ifixit.com)

chicksdaddy writes: CES, the Consumer Electronics Show, isn’t just about shiny new gadgets, as AP reports (https://apnews.com/article/ces-worst-show-ai-0ce7fbc5aff68e8ff6d7b8e6fb7b007d): this year brought back the fifth annual Worst in Show anti-awards (https://www.worstinshowces.com/), calling out the most harmful, wasteful, invasive, and unfixable tech at the Las Vegas show. The coalition behind the awards — including Repair.org, iFixit, EFF, PIRG, Secure Repairs and others — put the spotlight on products that miss the point of innovation and make life worse for users.

2026 Worst in Show winners include:
  Overall (and Repairability): Samsung’s AI-packed Family Hub fridge — overengineered, hard to fix, and trying to do everything but keep food cold.
  Privacy: Amazon Ring AI — expanding surveillance with features like facial recognition and mobile towers.
  Security: Merach UltraTread treadmill — AI fitness coach that also hoovers up sensitive data with weak security guarantees — including a Privacy Policy that declares the company "cannot guarantee the security of your personal information" (!!)
  Environmental Impact: Lollipop Star — a single-use music-playing electronic lollipop that epitomizes needless e-waste.
  Enshittification: Bosch eBike Flow App — pushing lock-in and digital restrictions that make gear worse over time.
  “Who Asked For This?”: Bosch Personal AI Barista — voice-assistant coffee maker that nobody really wanted.
  People’s Choice: Lepro Ami AI Companion — an overhyped “soulmate” cam that creeps more than comforts.

The message? Not all tech is progress. Some products add needless complexity, threaten privacy, or throw sustainability out the window — and the industry’s watchdogs are calling them out.

Submission + - GLP-1 Medication Gains Are Lost After Stopping Use (bmj.com)

Supp0rtLinux writes: Scientists at the University of Oxford examined multiple studies following people after they discontinued GLP-1-based obesity medications. Former users typically regained close to a pound a month, and they regained weight faster than people who had shed their weight through positive lifestyle changes alone (calorie reduction, healthier dietary choices, exercise, etc.).

Personally, I need to lose a few pounds, but I would rather do it through diet and exercise than a pill; mostly for the personal reward/encouragement factor, but also for the other overall health benefits. The bigger concern with dropping GLP-1 medications, though, could be the fallout for those who saw unrelated, off-label benefits related to addiction tendencies. We've read how many obesity drugs don't just suppress appetite but also curb addictive behaviors (smoking, alcohol consumption, sex addiction, other compulsive activities, etc.). The question is: if you go off the drugs, do the other vices return as well? Or, since those are more habit-driven, do the benefits persist? Does using a pill long enough to break a habit produce lasting results, or will you relapse, much as former users regain lost weight?

Submission + - Musk lawsuit over OpenAI for-profit conversion can head to trial, US judge says (reuters.com)

schwit1 writes: US District Judge Yvonne Gonzalez Rogers:
"There is plenty of evidence suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained."

The backstory:
Elon co-founded OpenAI in 2015 and contributed roughly $38 million, about 60% of its early funding, based on assurances it would remain a nonprofit dedicated to public benefit.

Musk left in 2018. Since then, OpenAI cut multibillion-dollar deals with Microsoft and restructured toward for-profit status.

The accusation:
Elon alleges Sam Altman and Greg Brockman plotted the for-profit switch to enrich themselves, betraying OpenAI's founding mission.

OpenAI's response:
They called Elon "a frustrated commercial competitor seeking to slow down a mission-driven market leader."

The judge disagreed. Now a jury will decide.

Submission + - Jurassic Park Was Right: Mosquitoes Really Can Carry Libraries of Animal DNA (sciencealert.com)

alternative_right writes: Mosquito meals really can provide a thorough ecological snapshot of the area they buzz about, new research from the University of Florida finds.

"They say Jurassic Park inspired a new generation of paleontologists, but it inspired me to study mosquitoes," says entomologist Lawrence Reeves.

Reeves, fellow entomologist Hannah Atsma, and their colleagues caught more than 50,000 individual mosquitoes, representing 21 different species, across a 10,900-hectare protected reserve in central Florida over eight months.

Based on the blood contained in a few thousand females, the researchers found that mosquitoes' blood meals can reveal the presence of "the smallest frogs to the largest cows."

Submission + - 'Fish Mouth' Filter Removes 99% of Microplastics From Laundry Waste (sciencealert.com) 1

alternative_right writes: The ancient evolution of fish mouths could help solve a modern source of plastic pollution.

Inspired by these natural filtration systems, scientists in Germany have invented a way to remove 99 percent of plastic particles from water. It's based on how some fish filter-feed to eat microscopic prey.

Submission + - Ready, Fire, Aim: As Schools Embrace AI, Skeptics Raise Concerns

theodp writes: "Fueled partly by American tech companies, governments around the globe are racing to deploy generative A.I. systems and training in schools and universities," reports the NY Times. "In early November, Microsoft said it would supply artificial intelligence tools and training to more than 200,000 students and educators in the United Arab Emirates. Days later, a financial services company in Kazakhstan announced an agreement with OpenAI to provide ChatGPT Edu, a service for schools and universities, for 165,000 educators in Kazakhstan. Last month, xAI, Elon Musk’s artificial intelligence company, announced an even bigger project with El Salvador: developing an A.I. tutoring system, using the company’s Grok chatbot, for more than a million students in thousands of schools there."

"In the United States, where states and school districts typically decide what to teach, some prominent school systems recently introduced popular chatbots for teaching and learning. In Florida alone, Miami-Dade County Public Schools, the nation’s third-largest school system, rolled out Google’s Gemini chatbot for more than 100,000 high school students. And Broward County Public Schools, the nation’s sixth-biggest school district, introduced Microsoft’s Copilot chatbot for thousands of teachers and staff members."

"Teachers currently have few rigorous studies to guide generative A.I. use in schools. Researchers are just beginning to follow the long-term effects of A.I. chatbots on teenagers and schoolchildren. 'Lots of institutions are trying A.I.,' said Drew Bent, the education lead at Anthropic. 'We’re at a point now where we need to make sure that these things are backed by outcomes and figure out what’s working and what’s not working.'"

Submission + - UK company sends factory with 1,000C (1,832F) furnace into space

yuvcifjt writes: Cardiff-based company Space Forge has launched a microwave-sized "factory", ForgeStar-1, into orbit and successfully powered its onboard furnace to about 1,000C (1,832F). Their goal is to manufacture higher-purity semiconductors in microgravity and vacuum conditions, which allow atoms to align more perfectly and reduce contamination compared with Earth-based production. The plasma demonstration confirms that the extreme conditions needed for gas-phase crystal growth can now be created and controlled on an autonomous platform in low Earth orbit, enabling production of ultra-pure seed material. CEO Josh Western says space-made semiconductors could be up to 4,000 times purer and used in a range of electronics. The company plans to build a larger factory (ForgeStar-2) capable of producing material for about 10,000 chips and to test re-entry recovery using a heat shield to return materials to Earth.
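As a sanity check on the temperature figures (standard unit conversions, not from the article): 1,000C is 1,832F, while 1,273 is the Kelvin value, which suggests a Fahrenheit/Kelvin mix-up in earlier reporting. A minimal sketch:

```python
# Standard temperature conversions; nothing here is specific to Space Forge.
def c_to_f(celsius: float) -> float:
    """Celsius to Fahrenheit."""
    return celsius * 9 / 5 + 32

def c_to_k(celsius: float) -> float:
    """Celsius to Kelvin."""
    return celsius + 273.15

print(c_to_f(1000))         # 1832.0 (Fahrenheit)
print(round(c_to_k(1000)))  # 1273 (Kelvin)
```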

Submission + - NYC Inauguration Bans Raspberry Pi, Flipper Zero Devices (adafruit.com)

ptorrone writes: The January 1, 2026 NYC mayoral inauguration prohibits attendees from bringing specific brand-name devices, explicitly banning Raspberry Pi single-board computers and the Flipper Zero, listed alongside weapons, explosives, and drones. Rather than restricting behaviors or capabilities like signal interference or unauthorized transmitters, the policy names two widely used educational and testing tools while allowing smartphones and laptops that are far more capable. Critics argue this device-specific ban creates confusion, encourages selective enforcement, and reflects security theater rather than a clear, capability-based public safety framework. New York has handled large-scale events more pragmatically before.

Submission + - Cheap Solar Is Transforming Lives and Economies Across Africa (nytimes.com)

An anonymous reader writes: South Africans ... have found a remedy for power cuts that have plagued people in the developing world for years. Thanks to swiftly falling prices of Chinese-made solar panels and batteries, they now draw their power from the sun. These aren’t the tiny, old-school solar lanterns that once powered a lightbulb or TV in rural communities. Today, solar and battery systems are deployed across a variety of businesses — auto factories and wineries, gold mines and shopping malls. And they are changing everyday life, trade and industry in Africa’s biggest economy. This has happened at startling speed. Solar has risen from almost nothing in 2019 to roughly 10 percent of South Africa’s electricity-generating capacity.

No longer do South Africans depend entirely on giant coal-burning plants that have defined how people worldwide got their electricity for more than a century. That’s forcing the nation’s already beleaguered electric utility to rethink its business as revenues evaporate. Joel Nana, a project manager with Sustainable Energy Africa, a Cape Town-based organization, called it “a bottom-up movement” to sidestep a generations-old problem. “The broken system is unreliable electricity, expensive electricity or no electricity at all,” he said. “We’ve been living in this situation forever.” What’s happening in South Africa is repeating across the continent. Key to this shift: China’s ambition to lead the world in clean energy.

Submission + - ClippyAI says AI is overhyped

Mirnotoriety writes: Why it's overhyped

Most demos are still cherry-picked, brittle, and require heavy human babysitting. (The moment you ask the agent to deal with a slightly weird PDF, a CAPTCHA, an internal tool without an API, or a manager who changes requirements mid-task — it falls apart.)

* Actual enterprise adoption is still tiny. Companies are piloting, not replacing teams at scale.

* The economics don’t work yet for most roles: paying $20–200/month per agent sounds cheap until you need 10–20 specialized agents + human oversight + error correction + compliance checks.

* Many “I replaced my team” stories are later walked back when people admit they’re still doing 60–80% of the work themselves.

More honest current state (Dec 2025)

* AI agents are genuinely useful for narrow, repetitive, well-defined tasks (scraping data, writing first drafts, basic QA, simple customer support replies, generating boilerplate code).

* They’re not autonomous workers yet. Think of them as extremely talented but unreliable interns who need constant supervision.

* The real productivity gains right now are coming from centaurs (human + AI) rather than fully autonomous agents.
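The economics point above can be sketched as a back-of-envelope cost model. The per-agent prices ($20–200/month) and agent counts (10–20) come from the post; the oversight hours and hourly rate are purely illustrative assumptions:

```python
# Rough monthly cost of an agent fleet: subscriptions plus human oversight.
# Figures passed in below are illustrative, not real vendor pricing.
def monthly_agent_cost(n_agents: int, per_agent_usd: float,
                       oversight_hours: float = 0.0,
                       hourly_rate_usd: float = 0.0) -> float:
    """Total monthly cost: agent subscriptions + human supervision time."""
    return n_agents * per_agent_usd + oversight_hours * hourly_rate_usd

# Cheapest case from the post: 10 agents at $20/month, no oversight counted.
low = monthly_agent_cost(10, 20)
# Upper end: 20 agents at $200/month plus 40 oversight hours at $75/hr
# (the oversight figures are hypothetical).
high = monthly_agent_cost(20, 200, oversight_hours=40, hourly_rate_usd=75)
print(low, high)  # 200.0 7000.0
```

Even this crude model shows a 35x spread between the headline subscription price and a realistic fully-loaded cost, which is the gap the post is pointing at.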

Submission + - Sodium batteries with 3.6m mile lifespan in 2026 (simcottrenewables.co.uk) 1

shilly writes: CATL has announced it will be launching its new sodium batteries in 2026. They have some major advantages over LFP chemistries, including:
- 65% cheaper at launch ($19 at cell level, expected to drop to $10 in future)
- Retains 85% of range after 3.6m miles
- Dramatically less range reduction in very cold conditions
- Inherently lower fire risk
- Can be transported at 0% charge
- Slightly better gravimetric density (175 Wh/kg vs 165 Wh/kg for LFP)
Sodium isn’t a panacea: volumetric density remains lower, for example. But these batteries could well dominate in years to come, not least because they are made of commonly available materials (table salt!). For example, millions of homes across Africa are putting in solar plus storage to have heat, light and power at night, throwing out their kerosene. Sodium could substantially accelerate the trend.
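The gravimetric figures quoted above imply a modest cell-mass saving for sodium over LFP. A rough sketch, assuming a hypothetical 60 kWh pack and ignoring cell-to-pack overhead:

```python
# Cell mass needed for a given pack energy at a given gravimetric density.
# Densities (175 vs 165 Wh/kg) are from the summary; the 60 kWh pack size
# is an illustrative assumption, and pack-level overhead is ignored.
def cell_mass_kg(pack_wh: float, wh_per_kg: float) -> float:
    return pack_wh / wh_per_kg

PACK_WH = 60_000  # hypothetical 60 kWh pack
sodium_kg = cell_mass_kg(PACK_WH, 175)  # ~343 kg of sodium cells
lfp_kg = cell_mass_kg(PACK_WH, 165)     # ~364 kg of LFP cells
print(round(sodium_kg), round(lfp_kg))
```

The ~20 kg difference is small; the cost, cold-weather, and safety advantages listed above are the more decisive factors, while the lower volumetric density cuts the other way.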

Submission + - America is building a society that cannot function without AI (nerds.xyz)

BrianFagioli writes: The United States is rapidly building a society that assumes artificial intelligence will always be available. AI now sits at the center of banking, healthcare, logistics, education, media, and government workflows, increasingly handling not just automation but decision-making and cognition itself. The risk is not AI being “too smart,” but Americans slowly losing the ability — and habit — of thinking and functioning without it. As more writing, research, planning, and judgment are outsourced to centralized systems, human fallback skills quietly atrophy, making society efficient but brittle.

That brittleness becomes a national risk when AI’s real dependencies are considered. Large-scale AI depends on data centers, power grids, and stable infrastructure that can fail due to outages, cyber incidents, or geopolitical pressure. Foreign adversaries do not need to defeat the US militarily to cause disruption; they only need to interrupt systems Americans assume will always work. A society optimized for AI uptime rather than resilience may discover, very suddenly, that when the intelligence layer goes dark, confusion spreads faster than solutions.
