Comment Re:Really? (Score 1) 289

We *do* understand LLMs well enough to build all kinds of things from that understanding.

I guess it hinges on what one means by understanding. Your use is acceptable given the qualifiers in your statement above, but it's not acceptable to attack people when they claim we don't understand LLMs. You should generally use the context that gives a reasonable meaning to what someone is saying; that's just how context works.

When people say we don't understand LLMs (including the people who invented them), they are talking about a scientific understanding/quantification of why they work. Human history is full of examples where people could engineer useful products from things we didn't understand at a scientific level. Science is a relatively recent invention, but humans have presumably been engineering things for 300,000 years.

It's not just AI chatbots. There are models tailored to coding, image generation, video generation, finding security vulnerabilities, screening resumes, taking notes in meetings; the list is endless. It's not necessary to understand every nuance at the deepest level, only to understand enough to do useful things and to mitigate risks. That principle holds true for *all* of engineering.

The textbook engineering process is to take scientific understanding and use those properties to create useful products. That does not imply the converse: being able to engineer something does not mean you have a scientific understanding of it. My interpretation is that we got pretty lucky with this stuff. We created a simple model of the brain's visual system and eventually trained it on a large dataset, and it worked surprisingly well. We've been tweaking that for the last 10+ years and eventually created foundation language models with embeddings that we can use to create interesting, if somewhat unreliable, products. We don't really understand why these models work so well, but we continue to tweak them and combine them to create new products.
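To make "embeddings we can use to create products" concrete, here is a minimal sketch of the kind of operation such products are built on. Everything here is invented for illustration: the words and 3-d vectors are toy values, not output from any real model (real embeddings have hundreds or thousands of dimensions), but nearest-neighbor lookup by cosine similarity is the basic move behind embedding-based search and recommendation.

```python
import math

# Toy "embeddings": hand-made 3-d vectors standing in for learned ones.
# The numbers are invented purely for illustration.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: 1.0 means the vectors point the same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def nearest(word):
    # Rank every other word by cosine similarity to `word`.
    others = [w for w in embeddings if w != word]
    return max(others, key=lambda w: cosine(embeddings[word], embeddings[w]))

print(nearest("cat"))  # prints "dog": its vector points almost the same way
```

The point is that a product can exploit the geometric property (similar things get nearby vectors) without any theory of why training produces vectors with that property.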

Now, a possible angle to attack what I'm saying is to focus on scientific understanding. What is scientific understanding? Can we ever achieve it for LLMs? If not, then is it really fair/interesting to talk about it in this context?

Comment Re:Really? (Score 1) 289

Everything we do as humans is built on incomplete knowledge.

True for the most part. I guess there are some formal mathematical systems we know completely, but that's probably not an important distinction.

But to apply that standard, you'd have to also say we don't understand metallurgy or gravity or electricity.

I think the distinction is whether we can formally deduce facts about the properties of these things, which is how we generally take our scientific understanding and apply it to engineering. For example, we do understand electricity well enough to build all kinds of things from its properties. In the process, we sometimes find cases where the physics is wrong, so our understanding was not perfect, but I still think it's valid to say we understood it.

When I say that humans *do* understand how LLMs work, I don't mean to say that they understand everything about them. What I mean to say is that humans know enough to *engineer* them to work according to specification.

Perhaps. I think it's a bit debatable whether they can fulfill a specification, but they can direct an LLM with curated data, constant evaluation, and reinforcement-learning tuning. I would say it's closer to training a pet than designing something to a specification.

To me, we don't understand how any sophisticated ML works because we don't have a good theory to explain why the algorithms create models that generalize to new data, which is the fundamental reason these models are useful. It's like having an engine without a theory to explain why it can move the piston and do work. This is a difficult problem that has existed for many years, and any progress that was made was blown out of the water by LLMs. This is what many researchers are talking about when they say we don't understand how LLMs work, and it also applies to most useful ML.
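As a concrete illustration of what "generalize to new data" means operationally (though not a theory of why it happens), here is a toy sketch: a linear model is fit by gradient descent on training points and then scored on held-out points it never saw. All values are invented for illustration; the open question the comment describes is why this same procedure keeps working for enormous networks, where classical theory predicted it shouldn't.

```python
# Training data follows a simple hidden rule, y = 2x + 1.
train_x = [i / 10 for i in range(20)]           # x = 0.0 .. 1.9
train_y = [2 * x + 1 for x in train_x]
test_x = [2.0, 2.5, 3.0]                        # held-out inputs the model never sees
test_y = [2 * x + 1 for x in test_x]

# Fit y = w*x + b by plain gradient descent on mean squared error.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(5000):
    grad_w = sum((w * x + b - y) * x for x, y in zip(train_x, train_y)) / len(train_x)
    grad_b = sum((w * x + b - y) for x, y in zip(train_x, train_y)) / len(train_x)
    w -= lr * grad_w
    b -= lr * grad_b

# Generalization = low error on the held-out points, not just the training points.
test_mse = sum((w * x + b - y) ** 2 for x, y in zip(test_x, test_y)) / len(test_x)
print(f"w={w:.3f} b={b:.3f} test_mse={test_mse:.2e}")
```

For a two-parameter linear model, classical statistics explains the low test error. The mystery is that billion-parameter networks, which could memorize their training sets outright, often generalize anyway, and we lack a comparably solid account of why.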

Comment Re:What is thinking? (Score 1) 289

You can't even quote it because I didn't say it. You just misinterpreted what I said. Just because I quote you to give context, doesn't mean I'm denying everything I quote. I said nothing about rocks until later. There I explicitly said rocks can't think. Instead of assuming someone is saying something crazy, you should first consider you are misinterpreting them. Of course, at this point you know this and are just completely ignoring the substance.

Comment Re:What is thinking? (Score 1) 289

Of course, I don't have time to explain all the math here, so as an example I used the fact that rocks don't think. However, some people like you idiotically tried to argue that rocks do think. You are a moron! Why are you even arguing that??? Get off the internet for a while.

Wow. I didn't say anything even remotely close to that. Stop looking in the mirror.

Comment Re:What is thinking? (Score 1) 289

Quoting a high end philosopher with an incomprehensible unrelated quote makes your argument unassailable by small minds.

Sorry, a bit of a (bad) joke on my part given your signature, but also serious. Let me be less cryptic.

Folk psychology is an idea of Paul Churchland's. I took a philosophy of mind seminar, and it's the idea that resonated with me most. Basically, it says that the mental states we ascribe to people don't necessarily reflect the actual processes; they are just a way humans understand and interact with the world and other people. When you say "thinking," you are using a folk-psychology notion that isn't consistent with other people's and probably isn't internally consistent with your other beliefs. Folk psychology also implies that introspection is not as valuable as people imagine for answering these questions.

Of course, when thinking is scientifically defined, it will be related to the folk-psychology notion (otherwise the term "thinking" wouldn't be used), but it should be rigorous enough to allow scientific progress.

But to your point, let's assume we could create a set of objects that can't think (even though we haven't defined what "think" really means). I'm not sure that's much progress. There are a lot of things in the world, and it's the tricky ones that are really informative to the definition (just look at how ML works). While I agree that some eventual scientific definition will exclude rocks and chocolate, people will have different opinions on things like snakes, ravens, and Claude Opus 4.5.

As for Popper, I read some of his work a long time ago for a philosophy of science course. While interesting and influential, it's a bit dated. From my perspective, it's pre-computer, so it misses important questions about how science is done now. Current philosophy-of-science people seem to dismiss it for other reasons.

Comment Re:Really? (Score 1) 289

Yes, we do know how LLMs work. Maybe you don't, but the engineers at OpenAI and Microsoft and Google etc., absolutely do. The evidence of this is that over the past three years, they have been able to repeatedly and steadily improve the quality of the chatbots' responses, and to correct incorrect responses of the past. One notable example was an early Gemini image generator, which, when asked to render drawings of historic figures like Lincoln, would render an image with the wrong gender or race. The designers had built this into the LLM, but didn't fully anticipate the ultimate fallout. So they fixed the image generator to be more historically accurate, before they brought it back online. This kind of correction would not be possible, if the engineers didn't understand thoroughly how it works.

Lots of things can be improved without understanding them at a fundamental level. Trial and error helps build some intuition, but it's not real understanding. History is full of examples where humans built and improved things without a good theory of how they work. In fact, it's often these inventions that help motivate the development of the science. Look at how the steam engine helped thermodynamics.

Neural networks are a very basic simulation of the neural networks of brains. But they didn't really work that well until people used them in convolutional networks that modeled vision systems. Since then, it's been a lot of experiments building intuitions, but very little strong theory that leads to understanding. Researchers are still surprised at how well LLMs perform and can't really explain it, but they can run lots of experiments and try different ideas and tweaks. Going back to your example of Google: for their most recent LLM, after they trained it, they were surprised at the strength of the model. If they really understood it, there would be no surprise, just an implementation of their already-worked-out solution.

Comment Re:Wrong Name (Score 1) 289

To be fair, the phrase was coined in 1955 to describe a field of research. When you read AI, you should interpret it as artificial intelligence research.

But you are correct; the term is now being abused. While many products have been the result of AI research, I would definitely not call any of those products an artificial intelligence. LLM algorithms are the first invention that could be built into a system with a chance of succeeding. At a minimum, they've forced us to rethink our notions of intelligence.

Comment Re:PR article (Score 1) 289

What in hell GPT-generated word salad did I just plow through?

Careful. If you don't understand something and then proceed to critique and insult it, it might be you who looks like the fool.

Comment Re:What is thinking? (Score 1) 289

It is generally agreed that chocolate bars do not think. Rocks do not think. Pocket calculators do not think. We know what thinking is not, even if we can't define it fully.

I'm sorry, but this is just folk psychology and not very useful in this context. What we need is a scientific definition before we can address this question. Until then, it's just inconsistent thought, at least from a scientific perspective.

Comment Re:Not really no (Score 1) 317

Pedophilia is not just about age or physical characteristics it's about exerting power over a helpless individual.

I'm not a fan of terms getting new definitions without good reason. Often it's for lazy reasons, but sometimes it's to attack someone. When I grew up, a pedophile was someone attracted to prepubescent children. We had statutory rape laws to cover older children who were not emotionally developed enough to properly consent. I understand the term now has an extra definition in our culture, and this is what's causing the confusion. I don't know the source of this new definition, but it allows people to escape back into their confirmation bubble and claim their side is being unjustly attacked.

Comment Re:As intended (Score 1) 155

The same thing is happening with solar; from a Wall Street perspective it's a failure because prices keep falling and solar companies aren't making profits. From a Chinese perspective it's a success, because they are blanketing their mountains with cheap solar panels and they are going to achieve energy independence which will help the whole economy.

That's one of the ironies of capitalism: everyone publicly waxes about how great it is to have a free market, but the dominant players do everything they can to ruin any free market. A free market drives profits down close to zero. This is why Buffett loves his economic moats.

Comment Re:Planned economies (Score 1) 155

They care about it because it's exactly their plan - they tank the rest of the world's auto industry, then they take over. We've seen this pattern with industry after industry. This is why dealing with them is problematic - they are not operating in good faith.

I'm not sure what that has to do with good faith. Bad faith, in a domestic sense, would be dumping to destroy those markets, then taking over and drastically increasing prices. I'm not sure that even applies internationally, with different economies and different goals/perspectives. However, assuming it does, have the Chinese followed this pattern? Have prices in the markets they dominate suddenly jumped after they got a dominant position? The standard example would be solar panels, and it's not true there. Maybe they just have a system that does a good job of lowering prices for consumers.

Comment Re:Actual critical thinking? (Score 1) 224

Well that was predictable. It really doesn't matter what I say, as this is just distraction. You quibble over things that don't matter or misconceptions that got corrected and ignore everything that proves you wrong. What did any of your comment have to do with what the majority of the left believes? As I said, you can always find people on the fringe, so what.
