Comment Re:Really? (Score 1) 289

No, our understanding of transformers is not like our understanding of neurons.

Given your severe lack of communication skills and your self-delusion that you are the only authority on this topic, I will discontinue this discussion. It's futile to discuss anything with someone who thinks name-calling is the best way to establish a position of superior knowledge while clearly being ignorant of anything outside their own opinion.

Comment Re:Really? (Score 2) 289

What do you mean we don't know how they work? We know perfectly well how they work. Transformers are well understood; it's not magic. Of course, we can't really cope with the amount of data they are trained on, so we can't trace exactly why the model rates one result token over another, but the whole "it's magic, we don't understand it" line is marketing BS aimed at investors.

The core technology takes input tokens and then produces a list of candidate next tokens with their respective calculated probabilities (according to the training data).
That is it. It produces the same token every time (unless you add randomness to the input). It doesn't learn, it doesn't change. It's completely static: same input equals same output.
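To make that concrete, here's a toy sketch of the idea: a bigram table instead of a real transformer, nowhere near the real thing in scale or architecture, but it illustrates the same point that the trained model is just a fixed mapping from context to a next-token probability distribution:

```python
# Toy illustration (mine, not a real transformer): a static bigram table
# mapping a context word to a probability distribution over next words.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": build a fixed table of context word -> next-word counts.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def next_token_distribution(word):
    counts = table[word]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

print(next_token_distribution("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
print(next_token_distribution("the"))  # identical again: same input, same output
```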

So no, it's not intelligent in any meaningful way. It's producing text without any understanding of what the text means. It’s an advanced autocomplete. That is why it can produce hilariously incorrect statements.

It's an amazing technology/tool, don't get me wrong. But I'm tired of all the "we don't know how it works, it might be intelligent or it might suddenly evolve to be intelligent" talk.

Comment Re:There was once a time... (Score 1) 69

I think there are two really important factors here that have an impact on movies (IMHO):

1) When you pay to watch a movie in a cinema, you (most of us, at least) dedicate time to it. When you stream at home, you are not that invested, and it's easier to lose focus and pick up the phone or start doing something else. This impacts how streaming services make content.

2) Movies that are made for streaming have a different purpose than movies that are made for cinema. Streaming services only need content; they do not need awesome, mind-boggling movies (that can actually be a problem, because people don't necessarily pay that much attention to a streaming movie). I know Hollywood has been on a superhero and sequel run for a while, but honestly, the craft of making movies still lives in the cinema sphere rather than in streaming services. Of course, this could change if streaming services started to embrace movies that are not made for "most likes". Also, everything does not need to be a series. Not every story is better stretched over 10 hours.

I know point 2 is up for discussion, with the endless sequels and movie universes these days. But streaming services are often making movies for the lowest common denominator: bland, predictable stories, just enough to make you click.

That being said, good movies will continue to be made, despite streaming services or Hollywood. Technology has made filmmaking more accessible than ever before, and people who have a story to tell will be able to tell it. There will be awesome movies in the future :)

Comment Re:Cholesterol correlation (Score 0) 28

I don't usually answer AC, but I wanted to elaborate on my view.

I'm open to any research and facts, but with the amount of money involved in lowering cholesterol (statins), different perspectives and even facts are going to be difficult to get promoted.

Currently there is, of course, no causation established in any of the research, only correlation. We all know correlation is absolutely not the same as causation. For example, the presence of firefighters correlates strongly with houses being on fire, but we all agree they are not the cause.

The money in the industry has absolutely no incentive to promote any other fact or truth of the matter, and it's important to understand that.
Is it likely that serum cholesterol is causing myocardial infarction? Maybe.
Is it a fact? No.
Are statins among the most profitable drugs on the planet? Absolutely.
What would happen if someone published information showing that cholesterol is NOT the cause of myocardial infarction? Well, what do you think?

For example, have a look at one of Dr. Paul Mason's presentations on cholesterol for a different view, given by someone who does not in any way profit from selling statins (or from receiving research grants from a company that does).

I'm not against any research; I'm just realizing that facts are not easily promoted in a world where money rules, where one of the most profitable products (the most, even?) is a drug that lowers cholesterol, and where even the method we use to measure cholesterol is a bit flawed.

Comment Cholesterol correlation (Score 2, Interesting) 28

This also supports the idea that cholesterol is just a correlation caused by inflammation and the body trying to fix the actual issue in the only way it knows how.

But that statement will cause pharmaceutical companies to lose billions, so a lot of money and effort is going to be funneled into convincing people (more importantly, decision makers) that it's absolutely not true.

Maybe in 20-30 years, we'll accept the truth.

Comment I had to switch from Google search to avoid AI (Score 1) 63

The useless AI-generated "this might be slightly related to what you searched for" response on every query I did finally made me switch to DuckDuckGo. I just couldn't stand the annoying, incorrect blob of AI text after each search: both the time I spent reading it or scrolling past it, and even more the amount of energy required for something so useless at massive scale.

These always-on AI "add-ons" are getting annoying.

Comment Not less confident (Score 5, Insightful) 147

I don't think the main problem is feeling less confident because of the jargon being used; I think the main issue is that it starts to feel fake. Everyone is "super happy to be here" and "excited about this opportunity", and after a while it all rings hollow.
Companies are always doing things for the "environment" or to "bring people together", when it's pretty obvious that the real reason is money or perceived control.

If there is a fake front on all communication, it becomes meaningless.
You want genuine people who mean what they say, at least some of the time.

Comment Re:The writing is on the wall (Score 1) 179

An AI suggesting which libraries to use is a major attack vector these days.

And an LLM has no idea what a library is or which one to choose; it only selects the words that look plausible. That will generate some library names that "should exist" but where the actual library is called something else for whatever reason (history, the author's pet cat, etc.).

In fact, it's so common it even has a name now: slopsquatting
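As a first line of defense, here's a hedged sketch (the package name below is made up for illustration) that at least checks whether an AI-suggested name is registered on PyPI at all. Note that existence alone proves nothing: the whole point of slopsquatting is that attackers register exactly these hallucinated names, so age, downloads, and maintainer still need vetting.

```python
# Sanity-check an AI-suggested package name against the PyPI JSON API.
import urllib.error
import urllib.request

def exists_on_pypi(package_name: str) -> bool:
    url = f"https://pypi.org/pypi/{package_name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: the name is not registered (yet)

suggested = "fastjson-parser-pro"  # hypothetical AI-suggested name
if not exists_on_pypi(suggested):
    print(f"{suggested!r} is not on PyPI -- likely hallucinated, do not install")
else:
    print(f"{suggested!r} exists -- still vet its age, downloads, and maintainer")
```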

References:
https://www.reddit.com/r/singu...
https://www.kaspersky.com/blog...
https://www.traxtech.com/blog/...

Comment Re:AI Doing What Asked (Score 4, Informative) 99

> Except that that's exactly what the latest "reasoning" models do.

No, not at all. Reasoning models just generate each word like regular LLM models do, then feed the answer into the same (or another) model to rewrite the response, one word at a time. There is no AI that thinks or reasons here; it's really only a clever way of using the magic of "the next probable word". The only magic in an LLM is "choose the next word". There is no thinking or reasoning at all, regardless of what they call the model.
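A rough sketch of the two-pass scheme I'm describing (llm() is a hypothetical stand-in for one ordinary completion call, not any real API; real "reasoning" pipelines are more elaborate, but the mechanism underneath is still next-word generation):

```python
def llm(prompt: str) -> str:
    """Stand-in for a single next-word-at-a-time completion call."""
    return f"<completion of: {prompt[:40]}...>"

def two_pass_answer(question: str) -> str:
    draft = llm(question)           # pass 1: plain generation
    revise_prompt = (
        f"Question: {question}\n"
        f"Draft answer: {draft}\n"
        "Rewrite the draft answer, fixing any mistakes:"
    )
    return llm(revise_prompt)       # pass 2: same mechanism, fed its own output

print(two_pass_answer("Why is the sky blue?"))
```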

> It "understands" about as much as a typical undergrad.

No, absolutely not. It doesn't understand anything. The "hallucination" that people refer to regarding LLMs is really just an occurrence of the wrong word being selected, which then derails the entire sentence (and even the rest of the reply). An undergraduate would not do that: they would not pick one wrong word and then completely miss the mark because that word is more associated with flowers than with electrical engineering. LLMs will do that without batting an eye, and even sound confident in their reply.

Comment Re:AI Doing What Asked (Score 1) 99

Exactly. The way LLMs work makes it look like they can think and reason (and it doesn't help that the new models are called "reasoning" models).
Basically, an LLM can only make a list of plausible words that could come next in a sentence, based on all the input. That's it. That's really the magic part of an LLM.

It's a stateless model, giving the same output with the same input.
It doesn't learn.
It doesn't remember.
It doesn't think.
It doesn't reason.
It doesn't understand.
It only generates a list of plausible next words. That's it.

But that doesn't prevent the media from publishing "LLM did something stupid" stories, or salespeople from trying to convince customers that it can do EVERYTHING.

Comment Re:Well said (Score 3) 121

It's "obvious" to people who don't understand how LLMs work.
Most people do not understand what is at the heart of an LLM and, as such, don't understand its limitations.

I'll simplify a bit for the sake of keeping this post short, but in essence, the "magic" part of an LLM is a model that, when given input as tokens (which are words or parts of words), returns probabilities for the next token.
The model itself is stateless, it will always return the same words with the same probability given the same input.
It doesn't learn.
It doesn't remember.
It doesn't think.
It doesn't reason.
It doesn't understand.
It only creates a list of possible next words, with an attached probability for each.

Of course, the "wrapper" around this magic feeds the whole previous conversation, plus the newly generated word, back in to generate the next word, and keeps doing that until an answer is complete. That is why it's so power-hungry and time-consuming. The wrapper also offers the option to adjust the temperature (don't just pick the most probable word) and the randomness seed (don't always generate the same list of words); see the sketch right below this paragraph.
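Here's a minimal sketch of that loop, with a toy stand-in for the model (my own illustration, not a real transformer). The shape is the point: the model itself is stateless, the whole context is fed back on every step, and temperature and seed are knobs in the wrapper, not in the model:

```python
import random

def model(context):
    """Fixed mapping from context to a next-token distribution.
    Same context in, same distribution out -- no state anywhere."""
    rng = random.Random(" ".join(context))  # deterministic per context
    vocab = ["the", "cat", "sat", "on", "mat", "<end>"]
    weights = [rng.random() for _ in vocab]
    total = sum(weights)
    return {tok: w / total for tok, w in zip(vocab, weights)}

def sample(dist, temperature, rng):
    # Temperature reshapes the distribution (p ** (1/T), renormalized):
    # low T is near-greedy, high T flattens it toward uniform.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in dist.items()}
    tokens = list(scaled)
    return rng.choices(tokens, weights=[scaled[t] for t in tokens])[0]

def generate(prompt, temperature=0.8, seed=42, max_tokens=10):
    rng = random.Random(seed)       # fixed seed -> reproducible sampling
    context = prompt.split()
    for _ in range(max_tokens):
        token = sample(model(context), temperature, rng)
        if token == "<end>":
            break
        context.append(token)       # the whole history goes back in each step
    return " ".join(context)

print(generate("the cat"))                           # same seed + input -> same text
print(generate("the cat", temperature=2.0, seed=7))  # different knobs -> different text
```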
Right now, with the "reasoning" models, we are going down the same path as we did with XML back in the day: "If you can't solve it using XML, then you are not using enough XML." Basically, the complete result of the LLM is fed back into itself (or into a different model) for one more pass, to see if the training data can match some outlier or unusual text.
Training these models takes as much power as a small city. The technology is almost at its peak. It will improve, of course, but not significantly. If something other than LLMs comes along, then we can revisit the topic.

Just to be clear: I'm using Copilot (GitHub) both in a professional capacity and for my hobby coding projects. I also have a paid subscription to ChatGPT, helping me look into things I want to learn more about. It's a fantastic technology, and it can really help. But it doesn't really do all the things that people think it will do.

Comment Fundamentally misunderstood tech (Score 2) 85

I'm amazed at the number of tech people who have no clue what an LLM is and how it works. I get why company CEOs and salespeople just run around with PR bullshit, but tech people? Why?
Take a simple fact like LLMs not having state. You feed in the same input and it will always generate the same output. You can add randomness to the input, of course, but the model does not change while being used. It's static. Stateless. Having "memory" is just a feature of the client sending in the previous dialog (or a summary of it) together with the latest input, as in the sketch below.
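A sketch of how that works (call_model is a hypothetical stand-in for an API call to a stateless model): all the "memory" lives in the client, which resends the whole transcript on every turn:

```python
def call_model(messages):
    """Stand-in for one call to a stateless model."""
    return f"(reply generated from {len(messages)} messages of context)"

class Chat:
    def __init__(self):
        self.history = []  # the only "memory" there is

    def send(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        reply = call_model(self.history)  # full transcript in, every single turn
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = Chat()
print(chat.send("Hello"))   # the model sees 1 message
print(chat.send("Go on"))   # the model sees 3 messages; the memory is ours
```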
