Comment Animal/human cognition versus artificial cognition (Score 1) 105

This is nascent thinking in need of further elaboration and development:

Evolution selectively adapts biological organisms to fit well into environments over time
This is a natural design process
Natural design has created some amazing biological mechanisms with capabilities that outperform artificial mechanisms when all tradeoffs are considered (including *ilities and efficiencies)
One of the things that natural design has created is a biological entity capable of artificial design
Artificial design uses cognition to achieve much larger "edit distances" between product releases, resulting in faster but riskier product evolution
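The "edit distance" analogy above has a precise counterpart in computing: the number of single-step changes needed to turn one artifact into another. A minimal sketch of the classic Levenshtein dynamic program, purely illustrative:

```python
# Levenshtein edit distance: minimum number of single-character
# insertions, deletions, and substitutions to turn `a` into `b`.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]
```

In the analogy, natural design takes many tiny steps (small edit distances between "releases"), while artificial design can jump much farther in one release.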

Layering and modularity are two system design principles present in both natural and artificial designs. That they appear in both suggests they are universal design principles: good mechanisms, whether biological or mechanical, whether the product of natural design or artificial design, tend to be layered and modular
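The layering-and-modularity idea can be sketched as a toy stack in which each module exposes a narrow interface and talks only to the layer directly beneath it (all names here are hypothetical, not from any real system):

```python
# Toy layered stack: each layer wraps the payload and delegates downward.
class Physical:
    def send(self, bits: str) -> str:
        return f"wire<{bits}>"

class Link:
    def __init__(self, below):
        self.below = below  # only dependency: the layer beneath
    def send(self, frame: str) -> str:
        return self.below.send(f"frame[{frame}]")

class Transport:
    def __init__(self, below):
        self.below = below
    def send(self, payload: str) -> str:
        return self.below.send(f"seg({payload})")

stack = Transport(Link(Physical()))
print(stack.send("hello"))  # each layer adds its own wrapping
```

Because each layer depends only on the interface below it, any one module can be swapped out without redesigning the whole stack, which is the practical payoff of the principle in both biological and engineered systems.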

Large Language Model AI *models* human language, and does so quite well
Large Language Model AI does not appear to model human minds
LLM-AI does not have the layering or modularity of the human mind

Humans have developed mechanisms that perform functions similar to those of biologics
Humans have developed mechanisms that outperform biologics in narrow ways
Creating mechanisms that outperform biologics in all ways (where tradeoffs are part of the requirements) appears to be exceedingly difficult

Humans use machines to accomplish useful work
Machines can outperform humans in some ways
Humans who try, or are forced, to keep up with machines that outperform them in certain ways tend to fail or burn out

The similarities between switching systems and the hardware of the animal central nervous system are intriguing
Early thinking around switching systems was that an artificial central nervous system might emerge from a big enough artificial switching system
No switching system, no matter how large, seems to have ever spontaneously developed any behavior similar to that of an animal central nervous system

Analysis of switching system capabilities resulted in design principles that were used to develop "thinking machines" that perform functions similar to animal central nervous systems
Humans have developed "thinking machines" that outperform biological/animal central nervous systems in narrow ways
Creating "thinking machines" that outperform biologics in all ways (in both ultimate capabilities as well as tradeoffs with *ilities and efficiencies) has so far eluded humans
Humans have proven very capable of harnessing "thinking machines" to perform computational tasks where the superior performance of the "thinking machines" is integrated well into achieving human objectives

The result of careful study of the primary functional elements of the animal central nervous system (neurons) was the creation of artificial neural network systems (ANSs) using "computational engines" as a foundational layer for the mathematical computations that underlie the ANS models
ANSs were initially seen as presaging the rapid onset of artificial intelligence
Early ANSs were capable of some amazing feats, but fell far short of creating an "artificial intelligence" (as it was called at the time, now referred to as "artificial general intelligence")
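For reference, the foundational computation in these systems is strikingly simple: a weighted sum passed through a nonlinearity. A minimal sketch of a single artificial neuron, with weights and activation chosen arbitrarily for illustration:

```python
import math

# One artificial neuron: weighted sum of inputs plus a bias,
# squashed through a sigmoid activation to the range (0, 1).
def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid
```

Everything else in an ANS is layers upon layers of this operation, which is why a "computational engine" for fast arithmetic is the natural foundation beneath the models.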

Repeated attempts to realize emerging theories of animal central nervous system function using artificial computation engines have fallen short of the performance of biologics
Recent efforts to use ANS to model human language (LLM-AI) have resulted in highly capable language systems
LLM-AI can outperform human language capabilities in some ways, but falls short of besting human language performance in all ways

LLM-AI is currently highly dependent on human cognition to produce accurate, well-crafted language outputs

Human social activity is critically dependent on human language
Many human objectives are critically dependent on social behavior among small to large numbers of people
The use of LLM-AI's language abilities has great potential for enhancing human social activity

Animal central nervous systems co-evolved with animal biological mechanisms
The computation/processing that takes place in an animal central nervous system is "embodied"
Animal central nervous systems of sufficient complexity use layering and modularity in their design
Animal central nervous system layering and modularity is "embodied"
The human mind is based on a highly complex animal central nervous system
The human mind uses "embodied" layering and modularity in its design

Past experience with artificial design outcomes (mechanisms, switching systems, computational capabilities) suggests that although artificial intelligence language models can perform as well as, or even better than, natural design systems (humans in this case) in some narrow areas, it would be highly surprising for them to outperform humans in all ways, especially when *ilities and efficiency tradeoffs are factors

The presence of embodied, layered, and modular elements in natural neural systems that operate at highly-developed animal (especially human) performance levels, and the absence of extensive use of these elements in LLM-AIs, suggests that LLM-AIs do not "have what it takes" to achieve human performance levels in all ways

The idea that scaling up LLM-AI implementations (even ignoring *ilities and efficiencies) will somehow result in an artificial general intelligence roughly parallels earlier thinking that scaling up switching systems would result in artificial general intelligence, with just as little understanding of how that might happen

Too little is known about the intermediate layers of human cognition - above the neural circuits and observable structure, but below the observable behavior - to have more than an extremely remote possibility that mashing what we do know together in various ways over a time-scale vastly shorter than evolutionary time will result in artificial general intelligence

It seems even less likely that an artificial general intelligence brought about in such a way would outperform human intelligence in all ways, including *ilities and efficiencies

Comment Re:The detail is helpful (Score 1) 105

I agree. This is the first article I've seen that isn't either over-hyped ("LLMs will rule the world!") or all gloom and doom ("AI is shit" yet at the same time "AI will eliminate all work and humans will starve to death").

The article contains a nuanced view that explores how the Internet affected things from 1999 to now (essentially) and makes an analogy that speaks to how LLM AI might play out now. This may not be the end of the analysis, but it is a lot more insightful than what has come before it. I especially appreciated the breakdown of job roles in a way that contained a meaningful explanation of why some job roles are impacted more quickly and completely than others based on the "bundle of tasks" contained within each job role.

Comment Re: If you walk up to my door (Score 1) 106

Perhaps. But you don't have the right to erect a series of cameras over a large area and use them to surveil a member of the public - or do you? The problem comes with a network of cameras connected together and operated by a single entity, much like the problems that arise from Internet "advertising" that accomplishes its function by tracking every site every individual visits and building a large searchable database of everyone's behavior.

Comment Policies should specify "whats" not "hows" (Score 2) 52

Just like project requirements, policies should specify "whats" and not "hows". What outcomes are desirable, and what are not. Also like project requirements, the views of all stakeholders should be reflected in the policies.

Regulatory frameworks that specify the "hows" are more likely to result in meaningless compliance as game-playing organizations seek to maximize returns under the rules. Regulatory frameworks created mainly using input from major players (because they are "experienced") are more likely to align with how those major players want to do business than what concerns really need to be addressed.

One major concern I see with "AI" is the potential for harmful behavior that is excused because "the AI did it". Fortunately, there have been some legal rulings where that defense didn't hold water. A policy that makes it clear that organizations can't escape claims of harm just because a computational system judged to be using "AI" is involved would clarify that the organization is responsible for what the organization does, whether through its people or its systems.

Another major concern I see with "AI" is the creation of dramatically unequal juxtapositions of people/human effort against human-like effort that is really computationally driven, in situations where expectations are based on human effort versus human effort. An "AI" LLM, for example, can spout vast quantities of human-like output (some percentage of which is bullshit), which can overwhelm the ability of a real human to understand and respond in real time. Behavioral norms based on real humans interacting with real humans will be upset by real humans interacting with computational systems unless it is made clear that those norms cannot be upheld in those circumstances.

I'm sure that a group of people could identify more potential harms than just these two. I've cited them here as examples and not an enumeration of all concerns.

If someone is going to really develop a policy framework or even policies, then a substantial amount of original thinking based on first principles and identification of the "whats" of actual harms needs to be undertaken. Telling an organization clearly that "if your AI kills someone (or produces outcomes of lesser but still significant harm) you will be held responsible" is much better than telling that organization "you must reduce risk by using red teams to evaluate systems before putting them into production".

Comment In what way can an AI be an "employee"? (Score 1) 71

Current "AI" technology, including LLMs, consists of automated computational systems, not people. In what way would such a system be considered an "employee"? Would it:
  • Go through a job interview process to be hired?
  • Fill out a W-4 form to determine withholding?
  • Show proof of work eligibility in the US?
  • Sign up for benefits?
  • Receive compensation that becomes that entity's property?
  • Accrue sick leave and vacation time?
  • Participate in one-on-ones with a supervisor to determine how well it is functioning in the organization?
  • Be eligible for promotion/demotion?
  • Be subject to a PIP or other due process before termination for non-performance?
  • Participate in labor relations actions?

An employee is (American Heritage Dictionary) "a person who works for another in return for financial or other compensation; one employed by another; an individual who provides labor to a company or another person."

Somehow I don't think that an AI/LLM is going to be an employee any time soon.

Comment Cost, risk, reward... (Score 1) 54

I use cash as a payment tool when it suits me. Each payment method has its cost, risks, and rewards. I use cash:

  • where sellers offer a cash discount and I'm not likely to need the protections a credit card provides
  • for smaller transactions (under $20 or so) especially where credit/debit drags me into a "How much do you want to tip" dialog on an electronic screen
  • for private-party buying/selling transactions (garage sales, Craigslist, Facebook marketplace, street fairs) where cash eliminates trust concerns on both sides
  • randomly, if for no other reason than to f*** with anyone who is trying to profile me through electronic records

I am amused at people who consider themselves sophisticated because they never touch cash. I'm frustrated at how the electronic payment industry has inserted itself into virtually every transaction, adding a couple of percent onto costs for everyone. Cash has an important place in business, and I don't want to see it eliminated as a tool - so I "use it" as a preventive against "lose it".

Comment Re:What is this? (Score 4, Informative) 17

I read through the article. I expected it to detail the ways in which Apple was able to profit from tracking users while the other companies were not. There wasn't much there. The only issue presented was the problem other companies were having with the fact that Apple made user tracking a choice that the user makes, and (I guess) most users choose no tracking.

It seems like these companies found a sympathetic government that they could "lobby" to try to change Apple's decision to build their devices so that users are in control of whether or not they can be tracked. And the German government seems sympathetic to the idea that because these companies' "business model" is based on tracking, they should not be denied their tracking.

I thought European countries were the epitome of data privacy? Color me confused now. There must be more of an argument somewhere in there. Otherwise, the German ministers can go sit on a Panzer.

Comment Re:Not only Tesla driver UI wise (Score 1) 249

Shitty(?) RAV4 driver here. The RAV4 has some enshittification, primarily in the raft of "Connect" services Toyota seems to think people will buy. (I haven't subscribed to any of them.) But I think you paint with too broad a brush. Where I live, practically every other vehicle is either a Subaru or a Toyota, with many of the Toyotas being RAV4s. So statistically speaking, if shitty drivers are evenly distributed between makes/models, you'll see more "shitty RAV4 drivers" than many other makes/models.

As for the physical controls, the RAV4 doesn't do too badly. All of the HVAC controls are physical, and many other functions can be controlled through steering wheel buttons or voice commands, at least on a mid-grade model.

With respect to the "radio", I have rarely tried to tune in a radio station through the screen controls while driving. 1) I carry a large music collection with me on my mobile phone, and I generally listen to that while I'm driving. The interface between the phone and the car remembers where I am playing my music such that when I get in and turn the car on, my music starts playing where it left off the last time I was driving. If I want to change it, I use the voice command interface on my phone to select something else from my collection. 2) the RAV4 itself has a "voice command" capability (at least in the mid-grade model that I have), and much of the car can be operated by saying "Hey, Toyota" and then a command. When I want to listen to a particular radio station, I say "Hey, Toyota - tune to [frequency] [band]" (e.g., "Hey, Toyota - tune to 92.9 FM").

When I'm driving, eyes on the road and hands on the wheel is a priority. The shift from physical controls on the radio itself to steering wheel controls and then voice activation has been an interesting evolution, but I'm adaptable. And, at least in the Toyota, the volume control remains a physical knob, on the left-hand side of the "media center". Others may vary in their approach.

Comment Re:Don't be silly, nobody needs $5000 in cash (Score 1) 211

I like the idea of the cash (or other asset) being destroyed, as that does mitigate the obvious conflict of interest. I would add that in the event the person in question subsequently does prove the cash/asset was not obtained through illegal activity, that they get made whole. That would put a penalty on the LEOs being wrong.

Neither is as good as just getting rid of civil asset forfeiture. No one should be deprived of their cash/assets except through due process of law.

Comment Re:Don't be silly, nobody needs $5000 in cash (Score 1) 211

You might not be able to imagine circumstances where someone would carry more than 5,000 euros/$5,000 (by order of magnitude these are the same value). But that's just your perspective, and at least in the United States it is a fairly narrow view not shared by everyone.

A few years ago (mid-2000s) I found a Jeep I wanted to buy offered for sale in Kentucky. I lived in Maryland. I got a one-way ticket to Louisville, rented a car at the airport, and drove to the seller's house where I inspected the Jeep then bought it. I drove it back to Maryland.

The price for the Jeep was somewhere in the neighborhood of $6,800 (I don't remember exactly how much). The seller wasn't keen on checks, personal or cashier's (both can be fraudulent), so I flew with the money in my pocket. Buying used cars is a very common reason for traveling with fairly substantial chunks of cash - no intention to deceive or evade necessary.

Comment Everything is negotiable (Score 1) 36

Everything is negotiable. Just tell them no, $2,500 isn't enough. If they really want the content, they will eventually offer more. If you are happy with the $2,500, then take it. What is important here is that the expectation that the author needs to approve the use of their work in this way gets firmly embedded in culture as well as law, and this seems to support that. Unless, of course, all of the "no" answers are ignored and the material is ingested anyway (through some kind of "programming error" or "freak accident").

I personally don't want to read AI-generated books or articles unless the AI can produce truly compelling content. So far that hasn't been demonstrated. Whether the knowledge of artificial production alone is enough to cause rejection of material no matter how compelling it is intrigues me as a question to be answered.

Comment Any parent who would... (Score 1) 12

Any parent who would let their pre-teen hang out in an inner city pool hall should feel very comfortable with their kids having unfettered access to communication platforms that put them in contact with anonymous individuals.

Any game/social media/comm platform that wants to get $$ from services provided to kids should be able to satisfy parents that there is no way anonymous individuals can chat them up. Letting kids play with their friends from their own neighborhood is risky enough; it provides a good training ground for kids to learn that not everyone has their best interests at heart while not letting things get too far off the rails.

This isn't a censorship issue. Kids can exercise all the free speech they want with their friends and neighbors. As they grow older and hopefully understand the risks, they can take on totally anonymous communications mediated through blind communication channels if they want.
