Comment Re:Back when (Score 1) 43

I briefly used it on BlackBerry (OS 7? It's been a while). Not long after they acquired Torch, their browser was the best in the mobile space, even beating out the best desktop browsers in features and standards compliance. IIRC, they were one of the first with WebGL support. (iOS Safari kinda had it, but you needed to jailbreak to get it.) After that, there really wasn't any reason to use Opera on anything other than a dumb phone with J2ME.

For the record, I've never posted to Slashdot from the toilet, from Opera or any other browser. If it takes you so long to shit that you get bored and need a distraction, see a doctor.

Comment Re:Environmental issues are exaggerated (Score 1) 117

Sure, 0.1% is not as much as other uses, but that doesn't mean it's an insignificant amount. A broken leg is not nearly as bad as two severed arms, but that doesn't make it a trivial problem. A bomb that blows up an apartment building is still national news even though it's nothing compared to a nuclear bomb.

Thus it makes more sense to be concerned with those higher percentage usages, no?

This same ridiculous argument can be used to dismiss all but the single largest problem:

Alice: "x is a problem."
Bob: "y is a bigger problem than x, so it makes more sense to be concerned about y!"
Carol: "z is a bigger problem than y, so it makes more sense to be concerned with z!"

Golf courses are a serious problem. So are data centers. Pretending otherwise is as absurd as it is dishonest.

Comment Re:Environmental issues are exaggerated (Score 1) 117

I put just as much effort, if not more, into my replies as was put into the posts to which I reply. Using a small percentage to make a massive quantity appear insignificant? That's not only lazy, it's dishonest. You got a significantly better reply from me than you deserved.

If you want better replies, write better posts. If you want courteous replies, don't be a dishonest scumbag. It's really that simple.

Comment Re: psychiatrist for AI (Score 0) 78

This is "absolutely without question" incorrect. One of the most useful properties of LLMs is demonstrated in-context learning capabilities where a good instruction tuned model is able to learn from conversations and information provided to it without modifying model weights.

Your ignorance is showing. The model does not change as it's used. Full stop. Like many other terms related to LLMs, "in-context learning" is deeply misleading. Remove the wishful thinking and it boils down to "changes to the input cause changes to the output", which is obvious and not at all interesting.

Who cares?

People who care about facts and reality, not their preferred science-fiction delusion. I highlight the deterministic nature of the model proper and where the random element is introduced in the larger process to dispel some of the typical magical thinking you see from ignorant fools like you. The model does not and cannot behave in the ways that morons like you imagine.

This is pure BS, key value matrices are maintained throughout.

Do you get off on humiliation? While some caching is done as an optimization, it has absolutely no effect on the output. Give the same input at any point to a completely different instance of the model and you'll get the exact same results.

Again with determinism nonsense.

LOL! You think that the model isn't deterministic? Again, the only thing the model does is produce a list of next-token probabilities. It does this deterministically. The only non-deterministic part here is the final token selection, which is done probabilistically.

That you believe otherwise suggests that you're either even more ignorant than I thought possible, or you think that LLMs or NNs are magical. What a fucking joke you are.

These word games are pointless.

The only one playing 'word games' here is you, ignorant troll.

Comment Re: psychiatrist for AI (Score 2, Informative) 78

He's not nice, but he's also not wrong. You have some very odd ideas about what LLMs do.

LLMs absolutely, without question, do not learn the way you seem to think they do. They do not learn from having conversations. They do not learn by being presented with text in a prompt, though if your experience is limited to chatbots, you could be forgiven for mistakenly thinking that was the case. Neural networks are not artificial brains. They have no mechanism by which they can 'learn by experience'. They 'learn' by having an external program modify their weights in response to the difference between their output and the expected output for a given input.

It might also interest you to know that the model itself is completely deterministic. Given an input, it will always produce the same output. The trick is that the model doesn't actually produce a next token, but a list of probabilities for the next token. The actual token is selected probabilistically, which is why you'll get different responses despite the model being completely deterministic. The model retains no internal state, so you could pass the partial output to a completely different instance of the same model and it wouldn't matter.
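To make that split concrete (deterministic distribution, probabilistic selection), here's a minimal toy sketch. `next_token_probs` is a made-up stand-in for a real forward pass, not any actual model:

```python
import random

# Hypothetical toy stand-in for a model's forward pass: a pure,
# deterministic function from context to next-token probabilities.
def next_token_probs(context):
    # A real model computes this from its (frozen) weights; same
    # context in, same distribution out, every single time.
    return {"cat": 0.5, "dog": 0.3, "ferret": 0.2}

def pick_token(context, rng):
    probs = next_token_probs(context)
    tokens, weights = zip(*probs.items())
    # The ONLY non-deterministic step: sampling from the distribution.
    return rng.choices(tokens, weights=weights, k=1)[0]

# The distribution never varies; only the draw from it does.
assert next_token_probs("The pet is a") == next_token_probs("The pet is a")
```

Everything up to the final draw is a pure function of the input; swap the sampler for argmax and the whole pipeline becomes deterministic end to end.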

I vividly remember a newspaper article that said AI performed better if you asked it to think things through and work it out step by step.

LLMs do not and cannot reason, including so-called 'reasoning' models. Output improves with a 'step by step' response because you end up with more relevant text in context. It really is that simple. Remember that each token is produced essentially in isolation. The model doesn't work out a solution first and carefully craft a response; it produces tokens one at a time, without retaining any internal state between them. Imagine a few hundred people writing a response, where each person sees only the prompt and the partial output on their turn and can only suggest a few potential next words and their rank, with the actual next word selected probabilistically. LLMs work a bit like that, but without the benefit of understanding.
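That 'one token at a time, no carried state' loop can be sketched in a few lines. Again, `next_token_probs` here is a hypothetical toy lookup table, not a real model:

```python
import random

# Toy next-token distribution: a deterministic function of the
# visible tokens only (here, just the last one). Purely illustrative.
def next_token_probs(tokens):
    last = tokens[-1] if tokens else "<s>"
    table = {"<s>": {"the": 1.0},
             "the": {"cat": 0.6, "dog": 0.4}}
    return table.get(last, {"<eos>": 1.0})

def generate(prompt, steps, rng):
    tokens = list(prompt)
    for _ in range(steps):
        # Each step is a fresh pass over the full visible context;
        # nothing is remembered from the previous iteration.
        probs = next_token_probs(tokens)
        choices, weights = zip(*probs.items())
        tokens.append(rng.choices(choices, weights=weights, k=1)[0])
    return tokens
```

Because each step depends only on the token sequence produced so far, you could hand the partial output to a second identical instance mid-generation and carry on; nothing would change.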

I think LLMs resemble the phonological loop a bit.

I assure you that they do not. Not even a little bit.

Pretty sure at some point self awareness is needed to stabilize the output.

You probably realize by now that this is just silly nonsense.

The bloody thing hallucinates for Christ's sake!

That's a very misleading term. The model isn't on mushrooms. (Remember that the model proper is completely deterministic.) A so-called 'hallucination' in an LLM's output just means that the output is factually incorrect. As LLMs do not operate on facts and concepts but on statistical relationships between tokens, there is no operational difference between a 'correct' response and a 'hallucination'. Both kinds of output are produced the same way, by the same process. A 'hallucination' isn't the model malfunctioning, but an entirely expected result of the model operating correctly.

Comment Re:The Roblox FUD in the USA has to stop (Score 1) 51

The real problem is this absurd implicit assumption that every childless moron and politician makes that every kid has a 1950s-style middle class nuclear family with educated and involved parents.

"Parents should be the ones who..." Well, lots of kids don't have parents. Lots of kids just have one. Lots of kids have parents who don't have the means (money, intelligence, education, support, etc) to raise them in your mythical ideal way. Lots of kids have parents who abuse them, traffic them, ignore them... Lots of kids are stuck in a system that is all but completely indifferent to them. Others are stuck in a system that is designed to funnel them into private prisons.

...and that's all the defending of Republicans that I can stomach for this week

Yeah, I figured that's where you got that bullshit.

Comment Re:The Roblox FUD in the USA has to stop (Score 2) 51

I figure they ignore the pedo problem because the company also preys on kids, just in different ways.

parental oversight can pretty easily eliminate that threat

Unlikely. Kids can access that cesspit in countless ways from countless places. It's not like they can only access Roblox from the family PC in the living room... Also, not all kids have families or guardians that are interested or capable of effectively monitoring their internet use. Hell, some kids have families that explicitly traffic their children, a horror that is a lot more common than you'd expect.
