Comment Re:Epic video (Score 1) 69

Consumers are borderline unable to do anything about enshittification: they can only refuse to buy things that are already shitty, or refuse to buy things that can be made shitty. Both options lock them out of some form of modern comfort, which is precisely why enshittification is so damn effective. Whoever came up with it won't spend an eternity in hell; the Devil himself won't let them in because they are too evil.

I have found that sticking to principles works fairly well. I don't always get the "new shiny" but so very often I didn't need it anyway. Sometimes I get the new thing, but I give up the part that ultimately leads to sadness. A couple of examples:

I needed a thermostat with multiple remote sensors in order to better balance heating in my house, and I needed it to have a "circulation" setting to run the fan even when the heat wasn't running. The best product I could find for this was the Ecobee Smart Premium with remote sensors. But it has an Internet-based cloud link that *must* be activated in order to configure it, and it sets up an account with your address (and possibly uploads your WiFi SSID and password - how would you know whether it did or not?), and it uploads thermostat usage along with the "presence" information from the temperature sensors (they sense presence so that a room can be taken out of the temperature balancing equation if it is not occupied). Not information I want maintained in some corporate data store for their amusement/exploitation, or that of whoever breaks into their systems. So - I bought it, configured my phone as a mobile hotspot with a temporary SSID, created an account but gave them the address of the fire station 2 miles away, configured my thermostat, then killed off the SSID and Internet connectivity by removing the hotspot from my phone. I get the functionality I needed from the device, but without giving up my data.

I like using digital music playback in my home, with one music library and multiple players distributed among the rooms in my house. There are lots of "cloud" services that are happy to sell me this capability as long as I create an account on their corporate servers and they (presumably) get to collect all kinds of usage information, even if they don't charge me for the service, or I have to pay a monthly subscription fee (which in some cases, I do pay - but I don't like "having to pay" just to use stuff I ostensibly own). I don't use any of those solutions that require the cloud connection. Instead, I have the Lyrion Music Server, which requires no cloud account but can (if I choose to) use a wide variety of streaming services OR host all of my music on my own server. I mainly play my own music, but occasionally use Pandora, and more often listen to Radio Paradise. It was a little more work to set up, but it is a genuinely consumer-oriented system that gives me choice and avoids monthly subscriptions.

I'm also interested in small-scale, useful "home automation." There are lots of cloud-based services in this arena, with lots of enshittification opportunities and very problematic capture of very personal information (imagine a log of your daily activities as recorded by which lights/devices you turn on and when throughout the day, what rooms you occupy, and when you leave home and return, down to the granularity of each occupant of your home in some cases. You can even see when someone gets up in the middle of the night to pee if the bathroom light is part of the system). And... many of the vendors want to lock you into their ecosystem - their set of lights and blinds and whatever. So I look for the individual systems that buck these trends. I found INSTEON protocol devices that I can integrate with a Universal Devices controller, with no cloud accounts required. Eventually Home Assistant became a thing, and in addition to not requiring a cloud account it starts to link multiple disparate home automation systems... so now my lighting/switch controls in the Universal Devices controller can integrate with my Lyrion Music system, and the Ecobee thermostat, through its HomeKit interface, is integrated into the system as well, without having to use Ecobee's corporate servers or expose my private habits and routines to external consumption and scrutiny.

There *are* people and projects that buck the enshittification trends. Find them, use them, help them grow if necessary, participate in their communities, and help other people see that helpful and fun technology doesn't have to come with lock-in, endless monthly fees, and loss of privacy.

Comment Re:Oops (Score 4, Interesting) 51

LOL, Indeed.

What happens if the alleged scofflaws from whom the crypto was seized successfully defend themselves from the government's claims? Does the government have to return the value they seized?

Does the government get to still require the alleged scofflaws to pony up their tax debts now that the crypto the government seized has vanished (and hence can no longer be redeemed to pay off the debt)?

Comment Re:Yes, and yes (Score 2) 46

Children are subject to experiments in learning all the time. Some are actually managed as experiments, while others are sold as the "latest, greatest educational solution" to school districts, which then implement them with no testing whatsoever.

Open classrooms come to mind (https://www.ebsco.com/research-starters/education/open-classrooms), with school districts buying in and spending $millions to construct school buildings based on those principles, only to find that they were not everything promised. Another inadequately tested educational advancement was the widespread introduction of laptops and the complete conversion of instruction from non-laptop-based to entirely laptop-based, without any proof of benefit. And online LLM-AI tools have been an "experiment" driven by the pupils themselves.

On the positive side, Khan Academy has been used in many school districts, including as an experimental classroom instruction method as I described, but usually in a more careful, thoughtful way.

I don't see how there is a testbed for trialing educational technology other than existing schools and pupils. Responsible testing uses actual pupils, but is done in a controlled fashion in a single classroom, for a single subject. Expansion of programs should be based on evidence of positive results. But with sweeping changes in society, inaction by an educational system can be just as harmful as poorly thought-out action.

So what would you propose be done?

Comment Re:Yes, and yes (Score 2) 46

A couple of ideas...

Flip the classroom/homework model, as some have done with online tools like Khan Academy. Kids learn with texts, computers, whatever outside of class. In class they do exercises, either in a controlled computer environment or with no computer props at all.

Perform assessments with no computer support; write in a blue book, fill out multiple choice on a bubble sheet (for ease of grading), or do problems/show work on regular paper.

Say "f*ck it," throw up our hands, tell kids they can either learn and get real jobs, or cheat their asses off and become homeless, and stand out of the way.

One problem that comes up in educational environments is a confusion between "learning to use the tool" and "learning." It happened with computers - somehow "becoming computer literate" turned into "use computers for all learning", which (according to studies) has coincided with absolutely no positive change in student learning. "Learning to use AI" is also a worthy goal, but students using AI to do all of their work is like going to the gym and using levers/motors to move all the weights. Most of us have machines that help us perform our physical labors, so actual physical capability isn't as important for many roles as it used to be, but having absolutely no physical capability because of machine use for everything is a disaster.

A big problem for society is that kids are the wrong people to be making the choice. Many/most will go for the perceived "easy" solution without much ability to understand the future ramifications; that seems to be built into the human brain. Yet post-adolescent maturation alters the human brain such that it is much less pliable/adaptable in the ways that learning some material requires. So by the time they figure out they should have done more/learned more in K-12, that ship has sailed. Never mind that it would be a very expensive drag on society to envision a "second childhood" from age 20 to 30 for people to learn what they should have learned from age 6 to 16. It could be done, though.

Maybe bring back child labor: require kids to perform real work of some sort (not work them to death, but give them a daily useful job) until they reach age 15 or so, then offer them education. They might just figure out that they need to really learn things, rather than cheat their way through school, if they don't want to do mindless work for the rest of their lives.

Comment Re:Hyperbole (Score 1) 70

I recently donated to Goodwill (December 2025) a Sony Bravia TV from the mid-2000s that still worked well (40", 16x9 format, ATSC, 1080p). I bought a Sony Bravia 8 TV last November (65", 16x9 format etc.); it can't reliably start up with a Sony Theater Bar 8 sound bar; it randomly switches back to "TV speakers". I bought it for picture quality (which is nice) but expected the basics to "just work". I'm disappointed. I wonder if their attention was already wandering away from ensuring the high quality for which they had a well-justified reputation.

Comment Animal/human cognition versus artificial cognition (Score 1) 105

This is nascent thinking in need of further elaboration and development:

Evolution selectively adapts biological organisms to fit well into environments over time
This is a natural design process
Natural design has created some amazing biological mechanisms with capabilities that outperform artificial mechanisms when all tradeoffs are considered (including *ilities and efficiencies)
One of the things that natural design has created is a biological entity capable of artificial design
Artificial design uses cognition to achieve much larger "edit distances" between product releases, resulting in faster but riskier product evolution
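The "edit distance" metaphor here borrows from the string-editing version (Levenshtein distance): the minimum number of single-symbol insertions, deletions, and substitutions between two versions. A minimal sketch of that metric (the function name and examples are mine, purely illustrative):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via classic dynamic programming."""
    m, n = len(a), len(b)
    # dp[i][j] = edits needed to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete everything
    for j in range(n + 1):
        dp[0][j] = j  # insert everything
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

In the metaphor, natural selection takes many tiny steps (distance 1-ish per generation), while artificial design routinely jumps a large distance between releases - faster, but with more ways to land on something broken.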

Layering and modularity are two system design principles that are present in both natural and artificial designs; this suggests that they are universal design principles - good mechanisms, whether biological or mechanical, whether the product of natural design or artificial design, tend to use layering and modularity as design principles

Large Language Model AI *models* human language, and does so quite well
Large Language Model AI does not appear to model human minds
LLM-AI does not have the layering or modularity of the human mind

Humans developed mechanisms that performed functions similar to biologics
Humans have developed mechanisms that outperform biologics in narrow ways
Creating mechanisms that outperform biologics in all ways (where tradeoffs are part of the requirements) appears to be exceedingly difficult

Humans use machines to accomplish useful work
Machines can outperform humans in some ways
Humans that try/are forced to keep up with machines that outperform them in certain ways fail/burn out

The similarities between switching systems and the hardware of the animal central nervous system are intriguing
Early thinking around switching systems was that an artificial central nervous system might emerge from a big enough artificial switching system
No switching system, no matter how large, seems to have ever spontaneously developed any behavior similar to an animal central nervous system

Analysis of switching system capabilities resulted in design principles that were used to develop "thinking machines" that perform functions similar to animal central nervous systems
Humans have developed "thinking machines" that outperform biological/animal central nervous systems in narrow ways
Creating "thinking machines" that outperform biologics in all ways (in both ultimate capabilities as well as tradeoffs with *ilities and efficiencies) has so far eluded humans
Humans have proven very capable of harnessing "thinking machines" to perform computational tasks where the superior performance of the "thinking machines" is integrated well into achieving human objectives

The result of careful study of the primary functional elements of the animal central nervous system (neurons) was the creation of artificial neural network systems (ANSs), using "computational engines" as a foundational layer for the mathematical computations that underlie the ANS models
ANSs were initially seen as presaging the rapid onset of artificial intelligence
Early ANSs were capable of some amazing feats, but fell far short of creating an "artificial intelligence" (as it was called at the time, now referred to as "artificial general intelligence")
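The foundational unit of those ANSs can be sketched in a few lines: a single artificial neuron is just a weighted sum pushed through a nonlinearity. The weights and bias below are hand-picked by me to approximate logical AND; this is a textbook illustration, not a claim about any specific system:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Hand-tuned to behave like AND: output near 1 only when both inputs are 1.
print(neuron([1.0, 1.0], weights=[6.0, 6.0], bias=-9.0))  # close to 1
print(neuron([1.0, 0.0], weights=[6.0, 6.0], bias=-9.0))  # close to 0
```

The "computational engine" layer in real systems does exactly this arithmetic, just for billions of such units at once; the feats and the shortfalls both come from how those units are wired and trained, not from the unit itself.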

Repeated attempts to realize emerging theories of animal central nervous system function using artificial computation engines have fallen short of the performance of biologics
Recent efforts to use ANS to model human language (LLM-AI) have resulted in highly capable language systems
LLM-AI can outperform human language capabilities in some ways, but falls short of besting human language performance in all ways

LLM-AI is currently highly dependent on human cognition to produce accurate, well-crafted language outputs

Human social activity is critically dependent on human language
Many human objectives are critically dependent on social behavior among small to large numbers of people
LLM-AI's language abilities have great potential for enhancing human social activity

Animal central nervous systems co-evolved with animal biological mechanisms
The computation/processing that takes place in an animal central nervous system is "embodied"
Animal central nervous systems of sufficient complexity use layering and modularity in their design
Animal central nervous system layering and modularity is "embodied"
The human mind is based on a highly complex animal central nervous system
The human mind uses "embodied" layering and modularity in its design

Past experience in artificial design outcomes (mechanisms, switching systems, computational capabilities) suggests that although artificial intelligence language models can perform as well as and even better than natural design systems (humans in this case) in some narrow areas, it would be highly surprising for them to outperform humans in all ways, especially when *ilities and efficiency tradeoffs are factors

The presence of embodied, layered, and modular elements in natural neural systems that operate at highly developed animal (especially human) performance levels, and the absence of extensive use of these elements in LLM-AIs, suggests that LLM-AIs do not "have what it takes" to achieve human performance levels in all ways

The idea that scaling up LLM-AI implementations (even ignoring *ilities and efficiencies) will somehow result in an artificial general intelligence roughly parallels earlier thinking that scaling up switching systems would result in artificial general intelligence, with just as little understanding of how that might happen

Too little is known about the intermediate layers of human cognition - above the neural circuits and observable structure, but below the observable behavior - to have more than an extremely remote possibility that mashing what we do know together in various ways over a time-scale vastly shorter than evolutionary time will result in artificial general intelligence

It seems even less likely that an artificial general intelligence brought about in such a way would outperform human intelligence in all ways, including *ilities and efficiencies

Comment Re:The detail is helpful (Score 1) 105

I agree. This is the first article I've seen that is neither over-hyped ("LLMs will rule the world!") nor all gloom and doom ("AI is shit" - yet, at the same time, "AI will eliminate all work and humans will starve to death").

The article contains a nuanced view that explores how the Internet affected things from 1999 to now (essentially) and makes an analogy that speaks to how LLM AI might play out now. This may not be the end of the analysis, but it is a lot more insightful than what has come before it. I especially appreciated the breakdown of job roles in a way that contained a meaningful explanation of why some job roles are impacted more quickly and completely than others based on the "bundle of tasks" contained within each job role.

Comment Re: If you walk up to my door (Score 1) 106

Perhaps. But you don't have the right to erect a series of cameras over a large area and use them to surveil a member of the public - or do you? The problem comes with networks of cameras that are connected together and operated by a single entity, the same as the problems that develop from "advertising" on the Internet that accomplishes its function by tracking every site that every individual connects with and creating a large searchable database of everyone's behavior.

Comment Policies should specify "whats" not "hows" (Score 2) 52

Just like project requirements, policies should specify "whats" and not "hows". What outcomes are desirable, and what are not. Also like project requirements, the views of all stakeholders should be reflected in the policies.

Regulatory frameworks that specify the "hows" are more likely to result in meaningless compliance as game-playing organizations seek to maximize returns under the rules. Regulatory frameworks created mainly using input from major players (because they are "experienced") are more likely to align with how those major players want to do business than what concerns really need to be addressed.

One major concern I see with "AI" is the potential for harmful behavior that is excused because "the AI did it". Fortunately, there have been some legal rulings where that defense didn't hold water. A policy that makes it clear that organizations can't escape claims of harm just because a computational system judged to be using "AI" is involved would clarify that the organization is responsible for what the organization does, whether through its people or its systems.

Another major concern I see with "AI" is the creation of dramatically unequal juxtapositions of people/human effort against human-like effort that is really computationally driven in situations where expectations are based on human-effort versus human-effort. An "AI" LLM, for example, can spout vast quantities of human-like output (some percentage of which is bullshit) which can overwhelm the abilities of a real human to understand and respond to in real time. Behavioral norms that are based on real humans interacting with real humans will be upset by real humans interacting with computational systems unless it is made clear that those norms cannot be upheld in those circumstances.

I'm sure that a group of people could identify more potential harms than just these two. I've cited them here as examples and not an enumeration of all concerns.

If someone is going to really develop a policy framework or even policies, then a substantial amount of original thinking based on first principles and identification of the "whats" of actual harms needs to be undertaken. Telling an organization clearly that "if your AI kills someone (or produces outcomes of lesser but still significant harm) you will be held responsible" is much better than telling that organization "you must reduce risk by using red teams to evaluate systems before putting them into production".
