Comment Re:What is thinking? (Score 1) 249
Every token produced by an LLM is an extrapolation.
There are certainly models (like BERT) that have been specifically trained on interpolation, but GPTs are not.
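To make the distinction concrete, here's a toy sketch of the two training objectives (illustrative only — toy data, no real model): a BERT-style masked LM predicts a token hidden *inside* the sequence using context from both sides (interpolation), while a GPT-style causal LM predicts the *next* token from only what came before (extrapolation).

```python
# Toy illustration (not real model code) of the two training objectives.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# BERT-style masked LM: hide an interior token, predict it from both sides.
masked_pos = 2  # "sat" is hidden; the context is bidirectional
mlm_context = tokens[:masked_pos] + ["[MASK]"] + tokens[masked_pos + 1:]
mlm_target = tokens[masked_pos]

# GPT-style causal LM: at every step, predict token i from tokens 0..i-1 only.
clm_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

print(mlm_context, "->", mlm_target)
for context, target in clm_pairs:
    print(context, "->", target)
```

Every one of the causal pairs is a prediction past the edge of what the model has seen so far — which is the sense in which every GPT token is an extrapolation.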
So really, what point were you trying to make?
That your argument was bad. And my point was correct.
You're trying to reframe your argument (which is good, because it was really bad) without invoking shit like NCAP (again, good).
I wasn't taking a dig at BYD- they're cool cars. I was taking a dig at your shit argument.
Like it or not, that was on-topic, because you made it part of the topic when you made the trash-tier argument.
So when AC said "which the computer does not understand" we all have to settle for your cherry-picked definition of "understand"?
That is how definitions work. Were you never taught how to use a fucking dictionary?
Words have multiple definitions, and if one of them is satisfied, the usage is correct.
If you claim that a word does not apply, and one of the definitions does apply- then you are wrong.
Calling it cherry picking is absurd- it's called "how to use a dictionary".
I mean, I can say that's not how definitions work because I've picked "definition" to mean "the formal proclamation of a Roman Catholic dogma," and then call you a dipshit if you try to object.
That is precisely how definitions work.
You are completely correct that it does not satisfy that definition of the word.
It however does satisfy another definition of the word, which means the usage is correct.
It's honestly amazing that I need to take you back to 4th grade English, but here we fucking go.
Let's apply your understanding of a dictionary.
To do this, we will try to determine if you're a person.
Now you might cherry-pick 1a: "human, individual" — but since in your dumbshit universe we can pick any definition that doesn't match and use it to preclude the word from correct usage, I'll point out that you most clearly are not 5a: "one of the three modes of being in the Trinitarian Godhead."
Ergo, by your truly fucking IQ-of-65 reasoning abilities, you are not a person.
Congratulations, dipshit.
How in the fuck has someone with a UID that low survived so long being that fucking stupid? Are you maybe suffering from dementia?
Slow clap, my dim-witted friend. Slow clap.
It's like you want to be right and you will be mad if you aren't.
That's you trying to deflect the argument to a tu quoque.
I'm right because I'm right. Your fantasy of my emotions isn't relevant.
Current AI only works at all because we humans have translated most things into a digital representation, and by going to the AI site we put our queries into a digital representation.
This isn't relevant to the discussion in the slightest.
I've been over this with you- an LLM does not have a concept of a query.
That you can put a query into its latent space and get a result is a feature of the network as a result of its training- but its context is not some kind of query space.
AI cannot operate at all in an analog world as humans do.
What an absurd claim.
LLMs take inputs into their context, and produce outputs.
They can be (and are) quite literally used to operate in the real "analog" world.
Also you keep saying there are humans who can't read my notes, but I think you mean they would not *understand* my notes.
No, I mean read.
Concepts like "understanding" are traps. You have defined them to be properties of some magical property you perceive your brain as having.
If input A -> FSM Y -> output B applies for 2 separate FSMs (And I invite you to demonstrate that your brain is not an FSM- but spoiler- you can't. Nobody ever has, and it's aphysical and unscientific to believe so) then we can satisfy any rational definition of "understanding".
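Here's a toy illustration of that point (my own hypothetical example, not anything from the thread): two machines with completely different internals that compute the same input -> output mapping. Under any behavioral definition of "understanding," neither has a stronger claim to it than the other.

```python
# Two FSM implementations with different internals but identical
# input -> output behavior (here: parity of 1-bits in a binary string).

def fsm_table(bits: str) -> str:
    # Explicit transition-table FSM with states "even" / "odd".
    state = "even"
    table = {("even", "0"): "even", ("even", "1"): "odd",
             ("odd", "0"): "odd",  ("odd", "1"): "even"}
    for b in bits:
        state = table[(state, b)]
    return state

def fsm_counter(bits: str) -> str:
    # Entirely different machinery: count the 1s and reduce mod 2.
    return "odd" if bits.count("1") % 2 else "even"

# Same input A, same output B, for every input — despite different guts.
for s in ["", "1", "1011", "0000"]:
    assert fsm_table(s) == fsm_counter(s)
print("identical behavior")
```

If you want to say only one of these "understands" parity, you need a definition that appeals to something other than behavior — which is the magical-property trap.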
They would still be able to read the words that my notes are composed of.
So an illiterate person cannot think?
They would still see the boxes in the diagrams I hand drew.
So can an LLM, once the image is processed by another model into embeddings that the LLM can understand. (These are not words or tokens- while the output layer of an LLM produces tokens, the network itself does not work with tokens; it is trained to turn its latent space into tokens.)
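A minimal sketch of that pipeline, with made-up dimensions (the names and sizes here are illustrative, not any particular model's): a vision encoder turns the image into patch embeddings, a learned projection maps those into the LLM's embedding space, and they're concatenated with the text embeddings. No tokens are involved for the image at any point.

```python
# Sketch (numpy, hypothetical dimensions) of a multimodal input pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_patches, d_vision, d_model = 16, 512, 768  # made-up sizes

patch_embeddings = rng.standard_normal((n_patches, d_vision))  # vision encoder output
projection = rng.standard_normal((d_vision, d_model)) * 0.02   # learned adapter
image_embeddings = patch_embeddings @ projection               # now in the LLM's space

text_embeddings = rng.standard_normal((5, d_model))            # embedded text prompt
llm_input = np.concatenate([image_embeddings, text_embeddings])  # what the LLM attends over
print(llm_input.shape)  # (21, 768)
```

The hand-drawn boxes and arrows arrive as rows of that matrix, same as the words do.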
They would still understand, when there is a word and then an arrow pointing to something, that the word relates to what the arrow points to. An AI is totally lost unless I can represent the concept of an arrow digitally somehow.
Incorrect. You can write a word, draw an arrow to a thing, and the LLM will be able to understand that without problem. Try it.
In fact now that I write this I think people are amazed by AI because they have been immersed in a digital world for so long they are forgetting what it means to be human.
No. Because our eyes are open, and we're not in denial.
Your argument is based on bad logic, and ignorance.
You claim LLMs cannot do things they demonstrably can do.
That is why you are wrong.
What I can't understand is the nature of that ignorance. All the tools are available to you to correct it. The denial appears willful.
No, our understanding of transformers is not like our understanding of neurons.
Yes, it is.
Given your severe lack of communication skills and your self-delusion of being the only authority on this topic, I will discontinue this discussion. It's futile trying to discuss with someone who thinks name calling is the best way to establish their position of superior knowledge on a topic, while clearly being ignorant of anything outside their own opinion.
And your severe lack of a fucking clue made discussion with you analogous to pissing in the wind anyway.
Name calling is just fun. You trying to use it to disengage is just you copping out.
I dunno... $52 billion a year for something that, in reality, only acts as a glorified search over an FAQ?
Google makes $350B a year.
What real world money making applications has this thing done?
You just said what it's done.
They have like 35 million paying customers pulling in ~$4B a year.
That's a far cry from $52B, but their growth is also obscene.
Of course the vast majority are free users. This is the case for all products like this. They'll hit $10B a year by next year unless there's a collapse in growth.
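For what it's worth, the back-of-envelope on those figures (all approximate, taken from the claim above) works out to a plausible per-seat price:

```python
# Sanity check of the numbers above (approximate figures).
paying_customers = 35_000_000
annual_revenue = 4_000_000_000  # ~$4B/year

per_customer_month = annual_revenue / paying_customers / 12
print(f"${per_customer_month:.2f}/customer/month")  # roughly $9.52
```

Call it ~$10/month average across tiers, which is in the right ballpark for a consumer subscription.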
If they could fix the hallucination problem (they can't), then they could trust it to do real tasks (like really handling orders, etc.), and then it would be worth the valuation. But they can't!
I think you probably shouldn't judge based on what you're seeing on the free shit models.
People are using it today in businesses. Agentic workflows are all over the fucking place.
We have AI to summarize the AI slop articles.
I'm the same, always use a good credit card, but the occasional hassle is worth it. The prices are 1/10th the Amazon ones.