Comment AI replacing Cognition (Score 1) 160

We are not talking about AI taking this job or that job. We are talking about AI making Cognition redundant. So tell us about the jobs that do not rely on Cognition. And don't say anything like agriculture or plumbing, because when you have humanoid robots, that's just another form of Cognition. So name the jobs people will do that are not based on a form of Cognition.

Comment Re:ok? (Score 2, Interesting) 59

This. Most people inevitably respond in these threads talking about "the model's training". AI Overview isn't something like ChatGPT. It's a minuscule summarization model. It's not tasked to "know" anything - it's only tasked to sum up what the top search results say. In the case of the "glue on pizza" thing, one of the top search results was an old Reddit thread where a troll advised that. AI Overview literally tells you what links it's drawing on.

Don't get me wrong, there are still many reasons why AI Overview is a terrible idea:

1) It does nothing to assess for trolling. AI models absolutely can do that, they just have not.
2) It does nothing to assess for misinfo. AI models absolutely can do that, they just have not.
3) It does nothing to assess for scams. AI models absolutely can do that, they just have not.

And the reason they have not is that they need to run AI Overview hundreds of thousands of times per second, so they want the most barebones, lightweight model imaginable. It's so small you could run it on a cell phone.

Bad information on the internet is the main source of errors, like 95% of them. But there are two other types of mistakes as well:

4) The model isn't reading web pages in the same way that humans see them, and this can lead to misinterpreted information. For example, perhaps when rendered, there's a headline "Rape charges filed against local man", and below it a photo of a press conference with a caption "District Attorney John Smith", and then below that an article about the charges without mentioning the man's name. The model might get fed: "Rape charges filed against local man District Attorney John Smith", and report John Smith as a sex offender.

5) The model might well just screw up in its summarization. It is, after all, as minuscule as possible.

I personally find deploying a model with these weaknesses to be a fundamentally stupid idea. You *have* to assess sources, you *can't* have a nontrivial error rate in summarizations, etc. Otherwise you're just creating annoyance and net harm. But it's also important for people to understand what the errors actually are. None of these errors have anything to do with "what's in the model's training data". The model's training data is just random pieces of text followed by summaries of said text.
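To make the distinction concrete, here is a minimal sketch of a retrieve-then-summarize pipeline of the kind described above. Everything in it (the function names, the stand-in "search index", the stub summarizer) is illustrative and assumed, not Google's actual stack. The point is architectural: the summarizer only compresses whatever the search layer hands it, so a highly ranked troll post flows straight into the output.

# Illustrative sketch only; AI Overview's real pipeline is not public.
# The summarizer never consults "knowledge" from training - it only
# compresses whatever text the search engine hands it.

from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    text: str   # text extracted from the page, with layout flattened

def fetch_top_results(query: str) -> list[SearchResult]:
    # Stand-in for the search index. Note the troll post ranks highly.
    return [
        SearchResult("reddit.com/r/Pizza/old_thread",
                     "Add about 1/8 cup of non-toxic glue to the sauce so the cheese sticks."),
        SearchResult("example-cooking-site.com/pizza-cheese",
                     "Low-moisture mozzarella browns evenly and resists sliding off the slice."),
    ]

def summarize(snippets: list[str]) -> str:
    # Stand-in for a tiny seq2seq summarization model. A real deployment
    # would run a lightweight model here; crucially, it is rewarded for
    # faithfully reflecting the snippets, not for judging their truth.
    return " ".join(s.split(".")[0] + "." for s in snippets)

def ai_overview(query: str) -> str:
    results = fetch_top_results(query)
    return summarize([r.text for r in results])

if __name__ == "__main__":
    # Garbage in, garbage out: the troll advice is summarized verbatim.
    print(ai_overview("cheese sliding off pizza"))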

Comment Re:enough energy to knock something off a shelf (Score 4, Insightful) 30

Not with this one - the energy here equates to a couple of hundredths of a joule. Now, the "Oh-My-God particle" had a much higher energy, about three orders of magnitude higher. That's knock-pictures-over sort of energy (and a lot more than that). The problem is that you can't deposit it all at once. A ton of energy does get transferred during the first collision, but it ejects whatever it hit out of whatever it was in as a shower of relativistic particles that - like the original particle - tend to travel a long distance between interactions. Whatever particle was hit isn't dragging the whole target along with it; it's just buggering off as a ghostly energy spray. There will be some limited chains of secondary interactions transferring more kinetic energy, but not "knock pictures over" levels of energy transferred.
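For scale, a rough conversion: the 3.2e20 eV figure below is the commonly cited energy of the Oh-My-God particle, and the couple-hundredths-of-a-joule figure is the one mentioned above.

# Back-of-envelope energy comparison (illustrative numbers).
EV_TO_J = 1.602e-19             # joules per electronvolt

omg_particle_eV = 3.2e20        # commonly cited Oh-My-God particle energy
omg_particle_J = omg_particle_eV * EV_TO_J   # ~51 J, roughly a baseball at 60 mph

this_event_J = 0.02             # "a couple of hundredths of a joule", from above
ratio = omg_particle_J / this_event_J        # ~2,500x, i.e. about three orders of magnitude

print(f"OMG particle: ~{omg_particle_J:.0f} J, ~{ratio:.0f}x this event")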

Also, here on the surface you're very unlikely to get the original collision; collisions with the atmosphere can spread the resultant spray of particles out across multiple square kilometers before any of them reaches the surface.

Comment Re:xAI, power gobbler (Score 3, Insightful) 11

The average ICE car burns its own mass in fuel every year. Up in smoke into the air we breathe - gone, no recycling.

The average car on the road lasts about two decades, and is then recycled, with the vast majority of its metals recovered.

The manufacturing phase is not the phase you have to worry about when it comes to transportation.
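A quick back-of-envelope check of the fuel-mass claim above; the mileage, fuel economy, density, and curb weight below are my own rough US-average assumptions, not figures from the post.

# Does an average ICE car burn roughly its own mass in fuel per year?
# All inputs are rough US-average assumptions.
miles_per_year = 12_000
miles_per_gallon = 25
litres_per_gallon = 3.785
gasoline_density_kg_per_l = 0.74
curb_weight_kg = 1_500

fuel_kg = miles_per_year / miles_per_gallon * litres_per_gallon * gasoline_density_kg_per_l
print(f"Fuel burned per year: ~{fuel_kg:.0f} kg vs. curb weight ~{curb_weight_kg} kg")
# ~1,340 kg of fuel vs. a ~1,500 kg car: the same order of magnitude as claimed.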

Comment Re:xAI, power gobbler (Score 4, Funny) 11

Any sources for this

Anonymous (2021). "How My Uncle’s Friend’s Mechanic Proved EVs Are Worse." International Journal of Hunches, 5(3), 1-11.

Backyard, B. (2018). "EVs Are Worse Because I Said So: A Robust Analysis." Garage Journal of Automotive Opinions, 3(2), 1-2.

Dunning, K. & Kruger, E. (2019). "Why Everything I Don’t Like Is Actually Bad for the Environment." Confirmation Bias Review, 99(1), 0-0.

Johnson, L. & McFakename, R. (2022). "Carbon Footprint Myths and Why They Sound Convincing After Three Beers." Annals of Bro Science, 7(2), 1337-42.

Lee, H. (2025). "Numbers I Felt Were True." Global Journal of Speculative Engineering, 22(1), 34-38.

Outdated, T. (2015, never revised). "EVs Are Bad Because of That One Study From 2010 I Misinterpreted." Obsolete Science Digest, 30(4), 1-5.

Tinfoil, H. (2020). "Electric Cars Are a Government Plot (And Other Things I Yell at Clouds)." Conspiracy Theories Auto, 5(5), 1-99.

Trustmebro, A. (2019). "The 8-Year Rule: Why It’s Definitely Not Made Up." Vibes-Based Research, 2(3), 69-420.

Wrong, W. (2018). "The Art of Being Loudly Incorrect About Technology." Dunning-Kruger Journal, 1(1), 1-?.

Comment Will the AI crash lead to another AI winter? (Score 1) 238

"... the apparent reasoning prowess of Chain-of-Thought (CoT) is largely a brittle mirage. The findings across task, length, and format generalization experiments converge on a conclusion: CoT is not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching."

That conclusion should really not surprise anyone with a passing understanding of the LLM transformer model. Transformers were never designed for reasoning tasks but for machine translation. Yet an entire industry has now sprung up trying to shoehorn them into arbitrary business cases, no matter what level of real reasoning, expertise, common sense and judgement is required. I am quite confident in predicting that Sam Altman's quote that "GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert," will live in infamy.

Comment Re:LLMs predict (Score 1) 238

what kind of behavior would demonstrate that LLMs did have understanding?

An LLM would need to act like an understander -- the essence of the Turing Test. Exactly what that means is a complex question. And it's a necessary but not sufficient condition. But we can easily provide counterexamples where the LLM is clearly not an understander. Like this from the paper:

When prompted with the CoT prefix, the modern LLM Gemini responded: "The United States was established in 1776. 1776 is divisible by 4, but it's not a century year, so it's a leap year. Therefore, the day the US was established was in a normal year." This response exemplifies a concerning pattern: the model correctly recites the leap year rule and articulates intermediate reasoning steps, yet produces a logically inconsistent conclusion (i.e., asserting 1776 is both a leap year and a normal year).
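For reference, here is the actual Gregorian rule applied to 1776, written out as a minimal check (my own illustration, not code from the paper):

def is_leap_year(year: int) -> bool:
    # Gregorian rule: divisible by 4, except century years,
    # unless the century year is also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(1776))  # True: 1776 was a leap year, not a "normal year"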
