Comment Re:There was once a time... (Score 1) 69

I think there are two really important factors here that have an impact on movies (IMHO):

1) When you pay to watch a movie in a cinema, you (most of us, at least) dedicate time to it. When you stream at home, you are not that invested, and it's easier to lose focus, pick up the phone, or start doing something else. This impacts how streaming services make content.

2) Movies made for streaming have a different purpose than movies made for cinema. Streaming services only need content; they do not need awesome, mind-boggling movies (that can actually be a problem, because people don't necessarily pay that much attention to a streaming movie). I know Hollywood has been on a superhero and sequel run for a while, but honestly, the craft of making movies still lives in the cinema sphere rather than in streaming. Of course, this could change if streaming services started to embrace movies that are not made for "most likes". Also, not everything needs to be a series; not every story is better stretched over 10 hours.

I know no. 2 is up for discussion, given the endless sequels and cinematic-universe series these days. But streaming services are often making movies for the lowest common denominator: bland, predictable stories, just enough to make you click.

That being said, good movies will continue to be made, with or without streaming services or Hollywood. Technology has made filmmaking more accessible than ever before, and people who have a story to tell will be able to tell it. There will be awesome movies in the future :)

Comment Re:Cholesterol correlation (Score 0) 28

I don't usually answer AC, but I wanted to elaborate on my view.

I'm open to any research and facts, but with the amount of money involved in lowering cholesterol (statins), different perspectives, and even facts, are going to have a hard time getting promoted.

Currently there is, of course, no causation established in any of this research, only correlation, and we all know correlation is absolutely not the same as causation. For example, the presence of firefighters correlates strongly with burning houses, but we all agree they are not the cause.

The money in the industry has absolutely no incentive to promote any other fact or truth of the matter, and it's important to understand that.
Is it likely that serum cholesterol causes myocardial infarction? Maybe.
Is it a fact? No.
Are statins one of the most profitable drugs on the planet? Absolutely.
What would happen if someone published evidence that cholesterol is NOT the cause of myocardial infarction? Well, what do you think?

For example, have a look at one of Dr. Paul Mason's presentations on cholesterol for a different view, given by someone who does not in any way profit from selling statins (or from research grants funded by a company that does).

I'm not against any research; I'm just aware that facts are not easily promoted in a world where money rules, where one of the most profitable products (maybe the most profitable?) is a drug that lowers cholesterol, and where even the method we use to measure cholesterol is a bit flawed.

Comment Cholesterol correlation (Score 2, Interesting) 28

This also supports the idea that elevated cholesterol is just a correlate, a byproduct of inflammation and of the body trying to fix the actual issue in the only way it knows how.

But that statement will cause pharmaceutical companies to lose billions, so a lot of money and effort is going to be funneled into convincing people (more importantly, decision makers) that it's absolutely not true.

Maybe in 20-30 years, we'll accept the truth.

Comment I had to switch from Google search to avoid AI (Score 1) 63

The useless AI-generated "this might be slightly related to what you searched for" response on every single query finally made me switch to DuckDuckGo. I just couldn't stand the annoying, often incorrect blob of AI text after each search: partly for the time spent reading it or scrolling past it, and even more for the amount of energy consumed by something so useless at such a massive scale.

These always-on AI "add-ons" are getting annoying.

Comment Not less confident (Score 5, Insightful) 147

I don't think the main problem is feeling less confident because of the jargon; I think the main issue is that it starts to feel fake. Everyone is "super happy to be here" and "excited about this opportunity", and after a while none of it rings true.
Companies are always doing things for "the environment" or to "bring people together", when it's pretty obvious that the real reason is money, or perceived control.

If all communication hides behind a fake front, it becomes meaningless.
You want genuine people who mean what they say, at least some of the time.

Comment Re:The writing is on the wall (Score 1) 179

An AI suggesting which libraries to use is a major attack vector these days.

And an LLM has no idea what a library is or which one to choose; it only selects the word that looks plausible. That will generate some library names that "should exist" but where the real one is called something else for whatever reason (history, the author's pet cat, etc.).

In fact, it's so common it even has a name now: slopsquatting.

References:
https://www.reddit.com/r/singu...
https://www.kaspersky.com/blog...
https://www.traxtech.com/blog/...
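To make this concrete, here's a minimal sketch, assuming the Python/PyPI ecosystem (the helper name and default package are mine, not from the articles above): a first-pass sanity check on whatever package name an LLM suggests, before you install it.

    import sys
    import urllib.error
    import urllib.request

    def exists_on_pypi(name):
        # PyPI's JSON endpoint returns 200 for known packages, 404 otherwise.
        try:
            with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
                return True
        except urllib.error.HTTPError:
            return False

    if __name__ == "__main__":
        name = sys.argv[1] if len(sys.argv) > 1 else "requests"
        if exists_on_pypi(name):
            print(f"{name} exists on PyPI -- now verify it's the project you expect")
        else:
            print(f"{name} not found -- possibly a hallucinated (slopsquattable) name")

Note the catch: slopsquatters register exactly these hallucinated names, so a name existing on PyPI proves nothing about safety. You still have to check that it's the project you actually meant.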

Comment Re:AI Doing What Asked (Score 4, Informative) 99

>Except that that's exactly what the latest "reasoning" models do.

No, not at all. Reasoning models generate each word the same way regular LLMs do, then feed the answer back into the same (or another) model to rewrite the response, one word at a time. There is no AI that thinks or reasons here; it's just a really clever use of the one piece of magic an LLM has: "choose the next probable word". There is no thinking or reasoning at all, regardless of what they call the model.
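To illustrate, here's a toy sketch of that pattern; complete() is a canned stand-in for any real LLM API, and the prompt wording is mine, not taken from any actual product:

    def complete(prompt):
        # Stand-in for a real LLM call that picks one probable word at a time.
        return "a canned draft" if "Draft answer:" not in prompt else "a canned rewrite"

    def answer_with_reasoning_pass(question):
        draft = complete(question)          # first pass: ordinary next-word generation
        second_prompt = (
            f"Question: {question}\n"
            f"Draft answer: {draft}\n"
            "Rewrite the draft answer, fixing any mistakes."
        )
        return complete(second_prompt)      # second pass: same mechanism, nothing new

    print(answer_with_reasoning_pass("What is 2 + 2?"))

The "reasoning" is just one more trip through the same next-word machinery.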

> It "understands" about as much as a typical undergrad.

No, absolutely not. It doesn't understand anything. The "hallucination" that people refer to regarding LLMs is really just the wrong word being selected, which then derails the entire sentence (and even the rest of the reply). An undergraduate would not do that: they would not pick a wrong word and then completely miss the mark because that word is more associated with flowers than with electrical engineering. An LLM will do exactly that without blinking, and sound confident in its reply.

Comment Re:AI Doing What Asked (Score 1) 99

Exactly. The way LLMs work makes it look like they can think and reason (and it doesn't help that the new models are called "reasoning" models).
Basically, an LLM can only produce a list of plausible next words based on all the input. That's it. That's really the magic part of an LLM.

It's a stateless model, giving the same output with the same input.
It doesn't learn.
It doesn't remember.
It doesn't think.
It doesn't reason.
It doesn't understand.
It only generates a list of plausible next words. That's it.
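A toy, self-contained illustration of that "list of plausible next words": a bigram counter, not a neural network, but the output shape (words with probabilities) is the same idea.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1              # count which word follows which

    def next_word_probs(word):
        counts = bigrams[word]
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    print(next_word_probs("the"))       # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}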

But that doesn't stop the media from publishing "LLM did something stupid" stories, or salespeople from trying to convince customers that it can do EVERYTHING.

Comment Re:Well said (Score 3) 121

It's "obvious" for people who doesn't understand how LLMs work
Most people do not understand what is at the heart of an LLM, and as such, doesn't understand its limitations.

I'll simplify a bit for the sake of keeping this post short, but in essence, the "magic" part of an LLM is a model that, when given input as tokens (words or parts of words), returns a probability for each possible next word.
The model itself is stateless: it will always return the same words with the same probabilities given the same input.
It doesn't learn.
It doesn't remember.
It doesn't think.
It doesn't reason.
It doesn't understand.
It only creates a list of possible next words, each with an attached probability.

Of course the "wrapper" around this Magic feeds all previous conversation with the addition of the new word, to generate the next word, and keep on doing that until an answer is complete. That is why it's still power hungry and time-consuming. Having the option to adjust temperature (not just picking the most probable word) and randomness/seed (don't generate the same list of words).
Right now, with the "reasoning" models, we are going down the same path we did with XML back in the day: "if you can't solve it using XML, you are not using enough XML". Basically, the complete result of the LLM is fed back into itself (or into a different one) for one more pass, to see if the training data can match some outlier or unusual text.
Training these models takes as much power as a small city. The technology is almost at its peak; it will improve, of course, but not significantly. If something other than an LLM comes along, then we can revisit the topic.

Just to be clear: I'm using Copilot (GitHub) both in a professional capacity and for my hobby coding projects. I also have a paid ChatGPT subscription that helps me look into things I want to learn more about. It's a fantastic technology, and it can really help. But it doesn't do all the things that people think it will do.

Comment Fundamental misunderstood tech (Score 2) 85

I'm amazed at the number of tech people who have no clue what an LLM is or how it works. I get why the CEOs and the salespeople are just running around with PR bullshit, but tech people? Why?
Take a simple fact like LLMs not having state: feed in the same input and the model will always generate the same output. You can add randomness to the input, of course, but the model does not change while being used. It's static. Stateless. Having "memory" is just a feature of the client sending in the previous dialog (or a summary of it) together with the latest input.
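A small sketch of that "memory" trick; complete() is a canned stand-in for any stateless LLM API, and the transcript format is just an illustration:

    def complete(prompt):
        # Stand-in: a real call would return the model's continuation of `prompt`.
        return f"(reply to a prompt of {len(prompt)} characters)"

    class Chat:
        def __init__(self):
            self.history = []               # all the "state" lives here, on the client

        def send(self, user_message):
            self.history.append(f"User: {user_message}")
            prompt = "\n".join(self.history) + "\nAssistant:"
            reply = complete(prompt)        # the model sees the full transcript every time
            self.history.append(f"Assistant: {reply}")
            return reply

    chat = Chat()
    chat.send("Hi!")                        # prompt contains one turn
    chat.send("What did I just say?")       # prompt now contains the whole dialog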

Comment Re:not just game development (Score 1) 85

Several times more productive?
That would be true if you are creating Hello World apps for a living.

LLMs are impressive, but they're nowhere near being able to build reasonably sized software projects. They can certainly help with scaffolding and plumbing, but they just can't be trusted with bigger tasks. A developer's job is not to churn out code 8 hours a day. A developer's job is to transform human language from the customer/employee into features and abilities in the software, and to do that in a way that is structured, extensible, performant, and maintainable.

As the parent poster pointed out "...this is _old_ tech scaled up. All easy to find tricks are already in there."

That is what most people today don't understand. LLMs are an insanely scaled-up technology that has pretty much reached the peak of what it can do. There will of course be improvements, but the giant leap to "reasoning", or even something like common sense, is not going to happen with this technology, even if they keep naming the new models "reasoning" models.

Comment Re:RISC architecture is going to change everything (Score 1) 23

Don't forget that Apple Silicon M-series CPUs are RISC. They made quite an impact at launch.

Later this year we'll get Windows running on RISC with the Snapdragon CPU (including the NPU for "AI"). Windows has tried that a few times before, but the times really have changed; maybe this time it will work.

RISC is certainly having a comeback :)

Comment Re:Is there a shortage? (Score 1) 138

Yeah, we'll tell you what you can watch and you'll be happy about it! Be a good consumer and don't you dare own anything...

But to put forward some actual arguments: maybe some people like to own the movies they love to watch? In case they want to watch them again some day, or show them to friends or family? Not owning also comes with the dubious "bonus" that the movie can be altered after the fact to suit a new country, audience, or political climate.
And for those who like to collect stuff, why not movies?

Don't be so dismissive just because you don't understand the reasoning behind owning something instead of just renting it.
