Comment Re:Stock buybacks, perhaps? (Score 1) 28

I'm not sure it has much effect on the value of the company... at least not in a very predictable way.
The buyback directly reduces the market cap (the cash spent leaves the balance sheet), but it also directly increases the EPS (the same earnings are divided among fewer shares).
I imagine it comes out to about a wash. What it does do, however, is create a great opportunity to enrich the people in charge of the buyback when the share price rises, at the expense of the corporation's viability, which feels like a breach of fiduciary duty to all the other shareholders.
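A toy arithmetic sketch (all numbers invented) of the two opposing effects- the cash spent shrinks the market cap, while the reduced share count lifts EPS:

```python
# Toy illustration (all numbers invented): a buyback moves
# market cap and EPS in opposite directions.
earnings = 1_000_000_000        # annual earnings, $
shares = 500_000_000            # shares outstanding
price = 40.0                    # share price, $

market_cap = shares * price     # $20B
eps = earnings / shares         # $2.00

# Company spends $1B of cash buying back shares at market price.
buyback_cash = 1_000_000_000
bought = buyback_cash / price   # 25M shares retired

new_shares = shares - bought
new_market_cap = market_cap - buyback_cash  # cash leaves the balance sheet
new_eps = earnings / new_shares             # same earnings, fewer shares

print(f"EPS: {eps:.2f} -> {new_eps:.2f}")   # 2.00 -> 2.11
print(f"Market cap: {market_cap / 1e9:.0f}B -> {new_market_cap / 1e9:.0f}B")
```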

But then again, maybe a majority of shareholders would agree to it, since their share prices are going to increase. Let's be real: few people invest for the sake of actually owning a piece of a corporation. They want those gains. They'll never be The Greater Fool.

Comment Re:Really? (Score 1) 142

not just with the wod.

Oh ya?

I jest.
Of course the real problem is that you didn't understand the tool you were playing with.
You weren't using GPT-5, you were using GPT-5 mini.
I can replicate the behavior on GPT-5 mini without any problem. On GPT-5, I tried 5 times, and not once did it invent a word.

Selecting a specifically dumbed down distilled model as evidence of an overarching concept is problematic for several reasons. Would you like me to explain them to you?

Comment Re:Really? (Score 1) 142

What's the difference between weather, and a simulation of the weather?
Simple- the weather can get you wet. The simulation can't.
But what if I place you in a box, and let the simulation pour water on you, or blow air in your face? Is it real then?

Words- are they real, or are they artificial? Either way- an LLM produces them- they are not simulated. They are real.
If you and an LLM produce the same words, are you real, and it fake?
Are its words fake, but yours real?

You're quick to try to claim that people who reject your magical-brain hypothesis can't tell the difference, but the fact is- you can't even define the difference in any way that passes scientific muster. When you know, you know, amirite?

The problem here is that you're trying to cram all that it is to be human into one word that it simply does not fit into.
My 21-year-old niece is intelligent enough to understand this- why aren't you? Are you fake, and she real?

Comment Re:Really? (Score 1) 142

why do you keep using human sounding terms like self-attention?

Why do you think every word that applies to a human is "human sounding"?

Do dogs have attention?
Do they become confused?
These are not "human sounding" words- they are words describing the behavior of something that considers.

you keep arguing for non-human thought while using human like terms.

You keep trying to redefine words to be anthropocentric.

make up your mind, is it human like or not?

Not remotely. Neither is the word attention limited to them, or the word intelligence.

if not then stop using human like terms

The cascade of firing neurons that occurs when your attention shifts can be called attention, so I think we're just fine here.

you are deliberately mixing terms and then claiming other keep applying human qualities to things, your bullshit is evident

I'm deliberately correctly using terms and refusing to let you claim them as exclusive.
Your illiteracy is evident.

Comment Re:Really? (Score 1) 142

LLMs are fundamentally doing comparison and creating output based on comparative weights.

And you think your brain is doing differently?

The human-like jargon you like, self-attention, is just about the scope of comparisons at any given time.

Indeed. And yet it is critical for forming the difference between a simple Markovian model and a model whose state space is the vocabulary raised to the power of the context length- that is to say, as big as yours.
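Back-of-envelope, with assumed but typical numbers (a 50k-token vocabulary, a 4,096-token context window), the gap between the two state spaces:

```python
import math

# Rough arithmetic (vocab size and context length are assumed,
# typical of modern LLMs): distinct states a first-order Markov
# model conditions on, vs. a full-context self-attention model.
vocab = 50_000        # tokens in the vocabulary
context = 4_096       # context window length

markov_states = vocab                               # one prior token
context_states_log10 = context * math.log10(vocab)  # log10(vocab ** context)

print(f"Markov states: {markov_states:,}")
print(f"Full-context states: ~10^{context_states_log10:,.0f}")
```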

This is a bunch of statistical analysis, calculated math, poor quality lossy multiplication at that.

Your neurons are a bunch of signals reaching an action potential and firing. Lossy triggering is literally part of the functional behavior.

Comparison is one kind of judgement, ad hominem attacks are another kind of judgement - the kind that shows your humanness.

Pet peeve- call them insults.
An "ad hominem attack" is just a pretentious way of saying "insult", attempting to piggyback off of the tu quoque fallacy, to which it has no relation.
The argument being made here is pretty funny though.
If something can evaluate multiple facets of a thing, it's human intelligence?
Well that's no problem- your favorite LLM evaluates ~1000 different facets of every token it considers.
Are 1000 different judgements human?
What about 1000^100000, as it does for an entire context window?

Having only one kind of judgement does not create intelligence.

Oh this is neat. Where can I find this definition of intelligence?

Your fancy calculator that does parlor tricks is impressive but it does not think and it is not intelligent.

Your neat sloppy mess of neurons that does parlor tricks is impressive, but it does not think, and it is not intelligent.

Comment Re:Really? (Score 1) 142

If we define thinking to be a "human endeavor", sure, then LLMs do not think.

The problem, of course, is that there are non-anthropocentric definitions, and it meets them just fine.
Intelligence does not include sound judgement- it includes judgement, period.
If it only included sound judgement, you'd be declaring that a large body of humans lack intelligence, to which I say- you're one of them.

Comment Re:Really? (Score 1) 142

Where did you make that claim? And why did you see the need to state the obvious.

It was implicit in the demonstration that the argument being given holds for humans as well, who are obviously "intelligent".
Why did I need to state the obvious? Because of you and that dipshit who says shit like:

Binary systems built on silicon are fundamentally different than human biology.

You have claimed an implementation-specific requirement for intelligence, while calling my mockery of such thinking "stating the obvious."

You just did prove it. But since you can't understand, its lost on you.

More blathering dumbshittery. I proved nothing. You, however, have demonstrated that you can't even think through the logical conclusions of the word vomit that emits from you. Perhaps you're an LLM.

Comment Re:Wrong Name (Score 1) 142

Ya, says the person with the fucked karma, lol.

There is no language parser in an LLM- you're just.... well, fucking wrong.
So fucking wrong, it almost feels wrong to call you fucking stupid- because you had to put thought into coming up with that completely wrong fucking thing.

You're just sad all around.

Comment Re:Really? (Score 1) 142

LLMs are essentially a sophisticated pattern recognition algorithm.

No, they're not.
The fact is, we do not know "how" they work except at the very base level.
We use gradient descent to adjust the weights in a very large collection of MLPs with a self-attention mechanism, and they're able to produce text.
Beyond that, we have to evaluate their behavior empirically.
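For the curious, that base level looks roughly like this- a minimal sketch of scaled dot-product self-attention with random stand-in weights (shapes and values invented, nothing like a full transformer):

```python
import numpy as np

# Minimal sketch of scaled dot-product self-attention: each token's
# representation is recomputed as a weighted mix of every token's,
# with the projection matrices learned by gradient descent in the
# real thing (random stand-ins here).
rng = np.random.default_rng(0)

seq_len, d_model = 4, 8                    # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))

Wq = rng.normal(size=(d_model, d_model))   # query projection
Wk = rng.normal(size=(d_model, d_model))   # key projection
Wv = rng.normal(size=(d_model, d_model))   # value projection

q, k, v = x @ Wq, x @ Wk, x @ Wv

scores = q @ k.T / np.sqrt(d_model)        # token-vs-token comparisons
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1

out = weights @ v                          # each row: a mix of all tokens
print(weights.shape, out.shape)            # (4, 4) (4, 8)
```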

Based on their training, they compose sequences of tokens that approximate what would be expected in response to a prompt.

This is correct, but misleadingly limited.
Based on your training, you compose words that would be expected in response to a prompt.

Models generalize. It's what's in the middle of the prompt and the answer that matters. You're trying to assert that it doesn't "think", while being wholly unable to define "think" in a way that isn't anthropocentric.

AI is to intelligence, as a movie is to motion.

To your big-I anthrointelligence that you have defined to mean your subjective human experience, and defined their internal experience to be not that- sure, yes, I agree with that statement.
It's an entirely fucking useless statement- but a statement it remains.

When watching a movie, there is a very convincing appearance of motion, but in fact, nothing on the screen is actually moving. It can be so convincing that viewers using 3D glasses might instinctively recoil when an object appears to fly towards them. But there is no actual motion.

This is the simulation vs. reality argument- and it's flat out logically wrong.
Intelligence is not a physical thing that can be simulated. It is a classification of action. LLMs can, in fact, act.

The characters have no intent, though humans assign intent to what the "characters" are saying and doing. The point is, it's an illusion. And in the same way, AI is an illusion, a fancy (and very useful) parlor trick.

Except this is a philosophical argument, not a physical one.
Next you'll tell me Achilles can't possibly beat the Tortoise.

Comment Re:Really? (Score 1) 142

understand (v):
interpret or view (something) in a particular way.

I know very well how an LLM works, which is why I can tell you that they don't provide answers to prompts.
The prompt is part of the same multi-billion-term calculation that their knowledge is. That's what self-attention is.

I think you've just demonstrated that an LLM understands more than you do.

Comment Re:Really? (Score 1) 142

Binary systems built on silicon are fundamentally different than human biology.

A sword and a stick are fundamentally different as well, and yet they can both be used to inflict damage. Certainly you can do better than an anthropocentric argument.... Right?

A giant computer system that uses a megawatt of power to assemble a coherent sentence

I hardly see how that's relevant, unless you're trying to argue that only efficient thinking counts as thinking?

which the computer does not understand

understand (v):
interpret or view (something) in a particular way.
I'd love to see your proof that an LLM doesn't "understand" something.

is nothing like a human.

You're trying to argue that only humans have intelligence, aren't you?

You claim we don't know how we work and at the same time you claim to have reinvented that which you don't understand.

This is an unsurprising claim from you. First- it's false. I claimed no such thing.
I claimed that intelligence is separate from implementation.

You're not a very intelligent person.

Comment Re:perhaps correct, but a load of bullshit (Score 1) 142

Of course it is.

With enough language ingested, you get the patterns behind the language- the knowledge.
That is why LLMs can communicate in a completely invented (within this context) language with ease.

You clearly have no fucking idea what you're talking about here- why the hell are you chucking your vomit all over this thread?
