
Comment Re:Really? (Score 1) 272

Incorrect.

Your SQL is broken down into tokens- keywords, operators, and query values- which are parsed against the language's grammar; the engine then scans the database to satisfy the parsed query.
You are trying to gloss everything into a compressed understanding that works in your head. It's simple, and that probably works for you- but it's wrong.
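To make that first stage concrete, here is a minimal sketch using the third-party sqlparse library; the query string is just an invented example:

    # Minimal sketch of SQL lexing, using the sqlparse library.
    # The query is an invented example; any statement will do.
    import sqlparse

    sql = "SELECT name, room FROM guests WHERE hotel_id = 42;"
    statement = sqlparse.parse(sql)[0]

    # Each leaf token carries a type (Keyword, Name, Comparison, ...)
    # and a value; this token stream is what gets parsed against the
    # grammar before anything touches the database.
    for token in statement.flatten():
        if not token.is_whitespace:
            print(token.ttype, repr(token.value))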

Comment Re:Really? (Score 1) 272

The hotel sleeve contains handwriting (my name and room number), and the rest of the text is a mixture of standard print, graphically designed cursive, logos with text set in circular shapes, etc.

It's a perfect test, because it's the kind of thing OCR will fail at, and this will succeed at.
Handwriting is easy in comparison.

Comment Re:Really? (Score 1) 272

That's what they are to you.

Behind the scenes, they're appended to a buffer called a context window, which serves essentially as the model's "short term memory".
To the LLM you're talking to, they're just a series of very large, 1000-dimensional vectors embedded from the context via the self-attention layers. Within the network itself, they're not words or tokens, but embeddings: semantic concepts, relationships, inferred meanings, anything it could learn from them and from similar configurations of them in its training.
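A minimal sketch of that context-window buffer, with placeholder names (generate() is a stub standing in for whatever model call you actually use):

    # Sketch of a context-window buffer (the "short term memory").
    MAX_WORDS = 4096  # real windows are measured in tokens, not words

    def generate(prompt: str) -> str:
        return "(model reply)"  # stub for the real model call

    def chat_turn(context: list[str], user_message: str) -> str:
        context.append(f"User: {user_message}")
        # Drop the oldest turns once the window is full- this is why
        # long conversations "forget" their beginnings.
        while sum(len(m.split()) for m in context) > MAX_WORDS:
            context.pop(0)
        reply = generate("\n".join(context))
        context.append(f"Assistant: {reply}")
        return reply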

It has no built-in concept of a query. It is not a search engine searching a database.
It's quite literally nothing but an obscenely large nonlinear function with a literally astronomical configuration space.
At the end of all that, a decoder layer turns the final hidden states into token probabilities, and the application you're using turns those back into actual text via a sampling method.
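You can watch the whole pipeline with a small open model. A minimal sketch with the Hugging Face transformers library and GPT-2 (ancient and tiny, but the stages are the same for any GPT):

    # text -> tokens -> hidden states -> logits -> sampled text.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    with torch.no_grad():
        out = model(ids, output_hidden_states=True)

    # Final-layer embeddings: one vector per token in the context.
    print(out.hidden_states[-1].shape)        # (1, seq_len, 768)

    # Logits for the next token, softmaxed into probabilities,
    # then one sampling method: draw from the distribution.
    probs = torch.softmax(out.logits[0, -1], dim=-1)
    next_id = torch.multinomial(probs, num_samples=1)
    print(tokenizer.decode(next_id))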

The fact that your input gets treated as a query is precisely because the model is so fucking good. It has a concept of questions. It knows how they're typically arranged in a sentence. It has also been trained to be helpful to you- to answer questions you might have.

Comment Re:Really? (Score 1) 272

Any multimodal LLM can read your notes, and cleanly interpret and describe diagrams you have drawn on them.
Qwen3 VL is my current favorite open-weight multimodal model.
That said, I'm quite certain the SOTA models (ChatGPT, Gemini, et al.) all can as well.

Not only do these models outperform traditional OCR at reading, but they can also describe other details like layouts in any kind of format you like, including re-creating the image they're looking at in HTML or things like that.
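If you want to try this locally, here's a minimal sketch using the Ollama Python client; the model tag is an assumption- substitute whatever vision-capable model you actually have pulled:

    # Ask a local multimodal model to read and describe a photo.
    # Assumes the Ollama daemon is running; "qwen3-vl" is a placeholder
    # tag- use whatever vision model you have pulled locally.
    import ollama

    response = ollama.chat(
        model="qwen3-vl",
        messages=[{
            "role": "user",
            "content": "Read all the text in this photo, then describe its layout.",
            "images": ["hotel_sleeve.jpg"],  # path to your own photo
        }],
    )
    print(response["message"]["content"])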

I will, as an example, ask Qwen3 VL (32B, FP16- the dense model, not the sparse) what it sees while holding open a recent hotel access sleeve and card from a conference in Las Vegas.
Its response.
As you can see, they can read quite fine. And this is an open-weights model, running entirely locally on my laptop. And it's old as hell (at least in LLM terms).

I'm unsure why it's my job to tell you what you could have tried, with the knowledge you already have (that, uhhh, ChatGPT... exists), to see this for yourself.

I'll grant you that I don't use the paid models very often (and certainly not for something local models can do fine), and that free-tier distilled SOTA models might even suck at this.
Paid models are going to be better than the LLMs you can run locally, though, and they can read notes with ease- even notes I don't find remotely legible- like the room number on that sleeve, which can only be described as chickenscratch.

Comment Re:Really? (Score 1) 272

There are no queries.
Tokenized context is turned into vectors by the self-attention mechanism, and this latent representation is fed into the feed-forward half of the block, with a new set of attention vectors computed at each transformer layer.

The network functionally transforms the context from latent space into a set of logits (unnormalized scores that a softmax turns into token probabilities).
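In code, one such layer is just this shape. A bare-bones PyTorch sketch- real models add causal masking, more heads, and different norm placements:

    # One transformer layer: self-attention, then the feed-forward
    # half, each wrapped in a residual connection and a layer norm.
    import torch
    import torch.nn as nn

    class TransformerLayer(nn.Module):
        def __init__(self, d_model: int = 512, d_ff: int = 2048):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, num_heads=8,
                                              batch_first=True)
            self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                    nn.Linear(d_ff, d_model))
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            a, _ = self.attn(x, x, x)     # new attention vectors, every layer
            x = self.norm1(x + a)
            return self.norm2(x + self.ff(x))

    x = torch.randn(1, 16, 512)           # (batch, sequence, embedding dim)
    print(TransformerLayer()(x).shape)    # latent in, latent out: no query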

There is nothing that a "query" is run "against". There is no query, just the context and the astronomically large n-dimensional space that it gets transformed into.
There is no query.
If the network interprets its context as a query, it will do what it has been trained to do with it- and that is answer it, in the case of a common "chat bot".

This isn't a semantic argument. You simply do not understand how they work. You've come up with a way that makes sense to you, but it's completely wrong.
The ridiculous part of this is that you have formulated strong opinions based on your ignorance.

Comment Re:Really? (Score 1) 272

It's not in the slightest.

It's a matter of using your ability to think, which you're refusing to do here.
A reasoning LLM would do better than you here.

Your carnival ride seeks not to simulate weather. It seeks to simulate environmental effects- and to that end, it's quite a bit more successful. It has no model of weather. Rather, you are butchering the definition of "simulation" by misidentifying what is being simulated.

Comment Re:Really? (Score 1) 272

I was simply responding to the definition *you* provided, which was:

I know you were.
But you then tried to use something that is not a simulation to demonstrate that the difference isn't so simple.
A carnival ride is not a simulation of the weather. It's a carnival ride with some environmental effects added to it. At no point does it try to simulate weather.
Ergo, you cannot use it as proof that the distinction is not simple.

Now you are moving the goalposts and making the distinction not so simple.

No, I just need to use smaller words with you, I think.

Of course I can't win the debate, you've already made up your mind. That doesn't make you right.

You can't win the debate any more than I can. It doesn't make either of us right. That's what you need to acknowledge.
You're trying to project the subjective into the objective, and it's simply not possible.

Comment Re:Citation needed [Re:Blaming the victim] (Score 1) 108

Stop trying to move the goalposts.
You said:

The victim had signed the terms of service years ago for the Disney+ streaming service, and Disney declared that since he agreed to those terms for watching videos, he was bound by them for eating at a restaurant.

I said:

Slightly misleading.

1) They also agreed to the terms when they purchased their tickets- though they likely did not read them, that makes them no less binding.

At that point, you could have said, "ya- fair- they saw the terms twice, actually, but neither related to the restaurant."
Had you said that, I would have replied with: "Correct- but it's far more adjacent than Disney+, and leaving it out is misleading."
But you didn't say that.
You said:

Unless you have a citation for that, I will continue to believe the facts as stated in the article we are discussing [npr.org] and not the "facts" that you just made up.

I think you realize the stupidity of your reply now, and so I'll grant you this:
You're correct- the T&C agreed to upon purchase of the tickets isn't any more relevant than the T&C agreed to for Disney+.
However, neither does Disney own the restaurants in Disney Springs. They are independent.
In-park would have been a different story. Which brings us back to:

2) The restaurant was independently owned, and Disney is included as "the owner of the property that the restaurant is on".

Why drain a turnip for $1M, when you can drain Disney for $100M?

Comment Re:Citation needed [Re:Blaming the victim] (Score 1) 108

You aren't digging your way out of this, dipshit.
The conversation went as such:

1) They also agreed to the terms when they purchased their tickets- though they likely did not read them, that makes them no less binding.

You replied with:

Unless you have a citation for that, I will continue to believe the facts as stated in the article we are discussing [npr.org] and not the "facts" that you just made up.

The "citation for that" was in your own fucking link.

Nobody was talking about "tickets to the restaurant" since that is not a thing.
Back into the short bus, fucker.

Comment Re:What is thinking? (Score 1) 272

An interesting take, but we may be diverging in context.
Every token produced by an LLM is an extrapolation.
There are certainly models (like BERT) that have been specifically trained on interpolation, but GPTs are not.
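The difference is easy to see with two Hugging Face pipelines: a masked model fills a hole between context on both sides (interpolation), while a GPT only ever continues to the right (extrapolation):

    # Interpolation vs. extrapolation, with small stand-in models.
    from transformers import pipeline

    # BERT was trained to fill in a masked token *between* two-sided context.
    fill = pipeline("fill-mask", model="bert-base-uncased")
    print(fill("The doctor told me to [MASK] more water.")[0]["token_str"])

    # GPT-style models only ever predict the *next* token from left context.
    gen = pipeline("text-generation", model="gpt2")
    print(gen("The doctor told me to", max_new_tokens=5)[0]["generated_text"])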

Comment Re:No One Mentions (Score 1) 108

So really, what point were you trying to make?

That your argument was bad. And my point was correct.

You're trying to reframe your argument (which is good, because it was really bad) without invoking shit like NCAP (again, good).

I wasn't taking a dig at BYD- they're cool cars. I was taking a dig at your shit argument.
Like it or not, that was on-topic, because you made it part of the topic when you made the trash-tier argument.

Comment Re:Really? (Score 1) 272

No- it makes it a carnival ride. The gross simplification was on your part, in calling a carnival ride a "simulation", when in fact it's nothing even remotely close to one.

Our job here is to demonstrate the difference between a faithful simulation manifested in the world and what you call the "real".
Demonstrating that an unfaithful simulation is not real is not hard.

Honestly, this is a trick question- and you won't win this debate. Minds have been trying to come up with a good answer for this for 2500 years with no success. You're not going to be the one that cracks that nut.

But once you realize that, you can liberate your mind from the biases that lead you to keep reaching illogical conclusions in the face of things that exist today in the real world.

Comment Re:Really? (Score 1) 272

So when AC said "which the computer does not understand" we all have to settle for your cherry-picked definition of "understand"?

That is how definitions work. Were you never taught how to use a fucking dictionary?
Words have multiple definitions, and if one of them is satisfied, the usage is correct.
If you claim that a word does not apply, and one of the definitions does apply- then you are wrong.
Calling it cherry picking is absurd- it's called "how to use a dictionary".

I mean I can say that is not how definitions work because I've picked definition to mean "the formal proclamation of a Roman Catholic dogma" and then call you a dipshit if you try to object.

That is precisely how definitions work.
You are completely correct that it does not satisfy that definition of the word.
It however does satisfy another definition of the word, which means the usage is correct.

It's honestly amazing that I need to take you back to 4th grade English, but here we fucking go.
Let's apply your understanding of a dictionary.

To do this, we will try to determine if you're a person.
Now, you might cherry-pick 1a: "human, individual". But since in your dumbshit universe we can pick any definition that doesn't match and use it to preclude the word from correct usage, I point out that you most clearly are not 5a: "one of the three modes of being in the Trinitarian Godhead".
Ergo, by your truly fucking IQ-of-65 reasoning abilities, you are not a person.

Congratulations, dipshit.
How in the fuck has someone with a UID that low survived so long being that fucking stupid? Are you maybe suffering from dementia?

Comment Re:Citation needed [Re:Blaming the victim] (Score 1) 108

Holy shit.

You're a fucking moron.
From the article you just linked:

Disney said Piccolo agreed to similar language again when purchasing park tickets online in September 2023. Whether he actually read the fine print at any point, it added, is "immaterial."

Slow clap, my dim-witted friend. Slow clap.
