Comment Re:No shit (Score 1) 99

I guess his point is that LLMs do only rote memorization, with so few proper reasoning steps that we may as well consider them incapable of understanding.

It is also very hard to distinguish between an LLM simply spitting out a learned answer and one reasoning from a more generic model to arrive at the answer. If the LLM was taught an answer to your question, it can just reproduce the learned text without any (deeper) understanding of it. It may have done only simple substitutions on the memorized data to tailor the output to your specific question. This is a big deal from my point of view. We do not know whether the model inside an LLM is simple enough compared to the model humans have (i.e., whether the Kolmogorov complexity of the LLM's model is not much larger than that of a human's).

It has been shown that LLMs can reason, at least to a very limited level; it is not only memorization of the training data. They can do at least one reasoning step (e.g., a simple substitution rule or a simple modus ponens rule ... here and there ... mostly correctly :-D ). But it is hard for users to estimate how much of some LLM response is rote memorization and how much is a reasoned response from a smaller, more generic model. We do not know whether a question much like the one we are asking was in the training data.

Comment Re:No shit (Score 1) 99

AI models don't "understand" anything.

A popular sentiment, it seems. Can you please explain what you mean by the word in scare-quotes? What is the intended point? I really can't understand what you mean, and I'm human.

Understanding comes from learning (symbolic) models of reality in our brains and an ability to reason about those models to an arbitrary degree. The reasoning allows us to validate our internal models, update them with newer facts, and derive proper consequences (i.e., predict the likely future based on them). That is the whole point of intelligence: predict the future so that we can optimize our current behavior to do better in the future (i.e., increase our chance of survival).

Additional data collection and reasoning about the future happen in steps, and each step must be performed correctly to reach the right conclusion. LLMs can properly execute a smaller number of steps than skilled humans can. LLMs reason only within their context window; they discard any data that overflows it, and they are more likely to ignore data deeper (older) in the window. The fuller the context window gets, the more likely they are to make a mistake in each particular step. The result is that LLMs tend to go awry sooner than skilled humans over time.

Comment Re:the race continues (Score 3, Informative) 26

Already, at Intel's 1.8nm, we're looking at ~16 atoms.

The process numbers have not corresponded to actual feature size for a long time now. They are more like: what feature size would the old planar process have needed to achieve the same part count per unit of area? Let's call that number the new "feature size".
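As a rough illustration (not Intel's or anyone's actual naming methodology), the convention can be sketched by assuming part count per unit area scales inversely with the square of the feature size; all numbers below are made up:

```rust
// Hypothetical sketch of "equivalent feature size" naming: if density
// scales as 1/size^2, a process packing `density_ratio` times more parts
// per unit area than a reference process gets the name the reference
// geometry would have needed to match that density.
fn equivalent_feature_size_nm(ref_size_nm: f64, density_ratio: f64) -> f64 {
    ref_size_nm / density_ratio.sqrt()
}

fn main() {
    // Made-up numbers: 4x the density of a 28 nm-class reference would be
    // marketed as a "14 nm" process under this rule, regardless of whether
    // any feature on the chip actually measures 14 nm.
    println!("{}", equivalent_feature_size_nm(28.0, 4.0)); // 14
}
```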

Comment The likely reason looks obvious (Score 2, Insightful) 16

"Kiki" is a bit louder at higher frequencies than "buba", and spiky shapes generate higher frequencies than rounded ones. If the chicks were exposed to any sound and visual info before the test, one would expect this result; they may even have learned the correlation between higher frequencies and spiky shapes during the test itself. I think there is a tiny chance the correlation may be genetically "pre-wired" in the brain.

Comment Re:Not really (Score 1) 165

They produce more tire particles since they are heavier and can accelerate more aggressively.

They also use slower-wearing tires, and wear their tires less to accelerate (hence why they can use lower-wearing tires) on account of their advanced traction and throttle control.

Well, one can use slower-wearing tires on ICEs too; that is a feature of the tires, not the cars, and tires are easy to replace. There may be something to your argument that EVs have better traction control (which is possibly harder to do with ICEs, or maybe just less common on them). But does this alone compensate for the higher weight and higher accelerations of EVs? If so, do you have some good links explaining this?

Comment Re:Not really (Score -1, Troll) 165

allowing for an equivalent-mass EV to perform better on a much slower-wearing tire

EVs are typically 30% heavier (not equivalent mass). They produce more tire particles since they are heavier and can accelerate more aggressively. They produce less brake pad/disc particles since they use regenerative braking.

Comment Re:Endless growth - until the money is gone (Score 1) 60

There is no cap on money, because money represents only one thing. Trust in value. It has no value in of itself.

There is a cap on money. It is called inflation. Well, you can decide to ignore this cap. If you ignore it for a while, it only leads to wealth redistribution toward hard-asset owners (mostly stock and real estate owners) and destruction of the current bond market, possibly transforming it to inflation-based interest rates. This damages the economy, but it can still function, albeit less efficiently. If you ignore inflation in the long term, you just force a transition to a currency issued by a different state (no monetary control over your economy), or to a barter system (much less efficient), or, in the worst case, to a revolution and the associated capital destruction.

Comment Re:I just want one thing (Score 1) 80

It is called a variable because you can initialize it to a different value any time the process execution enters the initialization code. Mutable (mut) is added so that it takes a bit more typing to use them. That is a small incentive to prefer immutable variables (as it should be, since it is good programming practice to prefer immutable variables).
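A minimal Rust sketch of the point: bindings are immutable unless you pay the extra `mut` keystrokes, and an immutable binding is still a "variable" because each pass through the initialization code can bind a new value:

```rust
fn main() {
    let x = 5;      // immutable by default; `x = 6;` would fail to compile
    let mut y = 5;  // `mut` opts in to mutation at the cost of extra typing
    y += 1;
    assert_eq!(x, 5);
    assert_eq!(y, 6);

    // Still a "variable": the binding gets a fresh value every time
    // execution re-enters the initialization code.
    for i in 0..3 {
        let z = i * 2; // immutable `z`, yet a different value each iteration
        assert_eq!(z, i * 2);
    }
}
```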

Comment Re:Good (Score 1) 83

In my experience, the parent is right. Most stuff you buy in the EU is the same stuff you can buy from China. The difference is no easy returns, much longer delivery times (if the Chinese company does not have an EU-based warehouse), and the price, though in my experience the price difference is typically smaller: the Chinese stuff is mostly around 1/2 to 1/4 of the EU price. EU users already pay VAT on all Chinese imports, so the addition of occasional duties or even a €3.50 flat fee will not change the situation. VAT is around 20%, and most of the time there are no duties; if there are any, they are low (around 5%).
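A back-of-the-envelope sketch of that arithmetic, using the rough rates from above (~20% VAT, ~5% duty when any applies) and a hypothetical €3.50 flat fee, with VAT charged on the duty-inclusive value:

```rust
// Illustrative landed-cost arithmetic for a Chinese import into the EU.
// Rates are the rough figures from the comment, not official tariff data.
fn landed_price(item_price: f64, duty_rate: f64, vat_rate: f64, flat_fee: f64) -> f64 {
    let with_duty = item_price * (1.0 + duty_rate);
    with_duty * (1.0 + vat_rate) + flat_fee
}

fn main() {
    // A 10 EUR item with 5% duty, 20% VAT and the flat fee lands at ~16.10 EUR;
    // if the EU-shop price for the same item is 2x-4x (20-40 EUR), importing still wins.
    let total = landed_price(10.0, 0.05, 0.20, 3.50);
    println!("{total:.2}");
}
```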


Working...