Comment Re:Raise the costs even more! (Score 1) 18
They seem to have bought into the SMR hype, despite nobody having even demonstrated a viable prototype commercial design yet.
They need to be able to see outside, and glass is the most reliable way to facilitate that.
My experience with HP^H^H support
Fixed.
I assume the AI will be just as useless, but faster.
Also more tolerant of you treating them like a moron, and more tolerant of you being a moron.
But ya, in terms of actual support for your problem? I'd say they're about even.
not just with the wod.
Oh ya?
I jest.
Of course the real problem is that you didn't understand the tool you were playing with.
You weren't using GPT-5, you were using GPT-5 mini.
I can replicate the behavior on GPT-5 mini without a problem. On GPT-5, I tried 5 times, and not once did it invent a word.
Selecting a specifically dumbed-down, distilled model as evidence for an overarching claim is problematic for several reasons. Would you like me to explain them to you?
why do you keep using human sounding terms like self-attention?
Why do you think every word that applies to a human is "human sounding"?
Do dogs have attention?
Do they become confused?
These are not "human sounding" words- they are words describing the behavior of something that considers.
you keep arguing for non-human thought while using human like terms.
You keep trying to redefine words to be anthropocentric.
make up your mind, is it human like or not?
Not remotely. Neither the word "attention" nor the word "intelligence" is limited to humans.
if not then stop using human like terms
The cascade of firing neurons that occurs when your attention shifts can be called attention, so I think we're just fine here.
you are deliberately mixing terms and then claiming others keep applying human qualities to things, your bullshit is evident
I'm deliberately correctly using terms and refusing to let you claim them as exclusive.
Your illiteracy is evident.
LLMs are fundamentally doing comparison and creating output based on comparative weights.
And you think your brain is doing something different?
The human-like jargon you like, self-attention, is just about the scope of comparisons at any given time.
Indeed. And yet it is what makes the difference between a simple Markovian model and a model whose state space is vocabulary * context- that is to say, as big as yours.
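For anyone following along, that "scope of comparisons" isn't hand-waving- here's a toy sketch of single-head scaled dot-product self-attention in numpy. Sizes and names are illustrative assumptions, not any real model's configuration:

    # Toy single-head scaled dot-product self-attention (numpy). Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8                        # 4 tokens, 8 "facets" per token (tiny, for illustration)
    x = rng.normal(size=(seq_len, d_model))        # token embeddings

    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv

    scores = Q @ K.T / np.sqrt(d_model)            # every token compared against every other token
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)   # softmax over each row
    out = weights @ V                              # each output token mixes the entire context

The point: the output for any one token depends on the whole context, not just the previous token- which is exactly why this isn't a simple Markov chain over single words.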
This is a bunch of statistical analysis, calculated math, poor quality lossy multiplication at that.
Your neurons are a bunch of signals reaching an action potential and firing. Lossy triggering is literally part of the functional behavior.
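And if you want "signals reaching an action potential and firing" in the same terms, a leaky integrate-and-fire neuron- a standard textbook abstraction, with made-up constants- is about ten lines:

    # Leaky integrate-and-fire neuron: textbook abstraction, illustrative constants.
    def simulate(inputs, threshold=1.0, leak=0.9):
        v, spikes = 0.0, []
        for i in inputs:
            v = leak * v + i          # integrate the incoming signal; the leak makes it lossy
            if v >= threshold:        # action potential reached
                spikes.append(1)
                v = 0.0               # reset after firing
            else:
                spikes.append(0)
        return spikes

    print(simulate([0.3, 0.4, 0.5, 0.1, 0.9]))    # -> [0, 0, 1, 0, 0]

Thresholded, lossy accumulation- calculated math, if you like.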
Comparison is one kind of judgement, ad hominem attacks are another kind of judgement - the kind that shows your humanness.
Pet peeve- call them insults.
An "ad hominem attack" is just a pretentious way of saying "insult", attempting to piggy back off of the tu quqoe fallacy, of which it has no relation.
The argument being made here is pretty funny though.
If something can evaluate multiple facets of a thing, it's human intelligence?
Well that's no problem- your favorite LLM evaluates ~1000 different facets of every token it considers.
Are 1000 different judgements human?
What about 1000^100000, as it does for an entire context window?
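Back-of-envelope, with round numbers that are assumptions rather than any specific model's spec (~1000 dimensions per token, a 100k-token window):

    # Rough scale, using assumed round numbers rather than any particular model's spec.
    import math

    d_model = 1_000        # assumed "facets" per token
    context = 100_000      # assumed tokens in a full context window

    multiply_adds = context * context * d_model           # one attention pass's comparison work
    print(f"~{multiply_adds:.0e} multiply-adds per attention layer")   # ~1e+13

    digits = context * math.log10(d_model)                # size of 1000^100000
    print(f"1000^100000 has about {digits:,.0f} digits")  # ~300,000 digits

If "lots of simultaneous judgements" is the bar, it clears it by a comfortable margin.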
Having only one kind of judgement does not create intelligence.
Oh this is neat. Where can I find this definition of intelligence?
Your fancy calculator that does parlor tricks is impressive but it does not think and it is not intelligent.
Your neat sloppy mess of neurons that does parlor tricks is impressive, but it does not think, and it is not intelligent.
Where did you make that claim? And why did you see the need to state the obvious?
It was implicit in the demonstration that the argument being given holds for humans as well, who are obviously "intelligent".
Why did I need to state the obvious? Because of you and that dipshit who say shit like:
Binary systems built on silicon are fundamentally different than human biology.
You have claimed an implementation-specific requirement for intelligence, while calling my mockery of such thinking "stating the obvious."
You just did prove it. But since you can't understand, it's lost on you.
More blathering dumbshittery. I proved nothing. You, however, have demonstrated that you can't even think through the logical conclusions of the word vomit you emit. Perhaps you're an LLM.
LLMs are essentially a sophisticated pattern recognition algorithm.
No, they're not.
The fact is, we do not know "how" they work except at the very base level.
We use gradient descent to move weights in a very large collection of MLPs with a self-attention mechanism, and they're able to produce text.
Beyond that, we have to evaluate their behavior empirically.
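The "very base level" fits in a few lines. Here's a caricature of a single training step- toy linear model, one made-up example, everything an assumption except the mechanism itself:

    # Caricature of one training step: nudge weights downhill on a loss.
    # Real LLMs do the same thing with billions of weights across MLPs and attention blocks.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 8))                            # the "weights"
    x, target = rng.normal(size=8), rng.normal(size=8)     # one toy training example

    for step in range(100):
        pred = W @ x                                       # forward pass
        err = pred - target
        loss = (err ** 2).mean()                           # how wrong we are
        grad = 2 * np.outer(err, x) / err.size             # gradient of the loss w.r.t. W
        W -= 0.01 * grad                                   # gradient descent update
    print(round(float(loss), 4))                           # loss has shrunk

That's the part we understand. Why the resulting weights generalize the way they do is the part we have to probe empirically.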
Based on their training, they compose sequences of tokens that approximate what would be expected in response to a prompt.
This is correct, but misleadingly limited.
Based on your training, you compose words that would be expected in response to a prompt.
Models generalize. It's what happens between the prompt and the answer that matters. You're trying to assert that it doesn't "think", while being wholly unable to define "think" in a way that isn't anthropocentric.
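Mechanically, "composing sequences of tokens" is just a loop around the model- the loop is trivial, and the model() call is where all the work happens. Stubbed-out sketch, toy vocabulary, uniform probabilities, all assumptions:

    # The generation loop is trivial; the interesting part is hidden inside model().
    import random

    def model(tokens):
        # Stub: in a real LLM this is the attention/MLP stack scoring every candidate token.
        vocab = ["the", "cat", "sat", "down", "."]
        return vocab, [1.0 / len(vocab)] * len(vocab)      # uniform, purely illustrative

    def generate(prompt, n=5):
        tokens = prompt.split()
        for _ in range(n):
            vocab, probs = model(tokens)                   # score every candidate next token
            tokens.append(random.choices(vocab, weights=probs)[0])   # pick one
        return " ".join(tokens)

    print(generate("the cat"))

Whether the distribution the real model() produces counts as "thinking" is exactly the thing you haven't managed to define.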
AI is to intelligence, as a movie is to motion.
For your big-I anthrointelligence- which you have defined to mean your subjective human experience, and have defined the model's internal experience as not being that- sure, yes, I agree with that statement.
It's an entirely fucking useless statement- but a statement it remains.
When watching a movie, there is a very convincing appearance of motion, but in fact, nothing on the screen is actually moving. It can be so convincing that viewers using 3D glasses might instinctively recoil when an object appears to fly towards them. But there is no actual motion.
This is the simulation vs. reality argument- and it's flat out logically wrong.
Intelligence is not a physical thing that can be simulated. It is a classification of action. LLMs can, in fact, act.
The characters have no intent, though humans assign intent to what the "characters" are saying and doing. The point is, it's an illusion. And in the same way, AI is an illusion, a fancy (and very useful) parlor trick.
Except this is a philosophical argument, not a physical one.
Next you'll tell me Achilles can't possibly beat the Tortoise.
What's the difference between a computer salesman and a used car salesman? A used car salesman knows when he's lying.