
Comment Re:Raise the costs even more! (Score 1) 54

AFAIK, nobody has demonstrated a viable SMR prototype of any kind. No, marine reactors do not count; they have the wrong characteristics and are far too uneconomic for this, even worse than civilian designs. The two that exist (Russian and Chinese) do NOT come with any believable cost figures. In addition, the Russian one is a military design and the Chinese one is a highly experimental pebble-bed reactor based on German patents. The Germans wrecked three of these, and two are still highly radioactive ruins that nobody knows how to dispose of. On the plus side, pebble-bed reactors cannot melt down, which is a decided advantage.

Still, anybody that has high confidence in the approach is simply an idiot.

Comment Re:Europe exported it's polluting industry (Score 2) 57

Rare earths are not rare. However, processing rare earths is very polluting. By buying these materials from China, Europe doesn't need this polluting industry within its borders. China can pollute itself for Europe's benefit.

If Europe doesn't want to deal with China, then it either needs to find another source or start processing rare earths within its own territory and deal with the pollution itself.

Comment Re:PR article (Score 2) 219

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are.

Comment Re:Summary (Score 1) 24

I've been thinking of the crypto/AI/techbro behavior as a symptom and just the next generation of the shift in business culture that came about in the 1980s.

No, that's not it at all. It's just classic hubris on the part of some people, typically (bad) engineers, or just anybody who has deep knowledge in one particular domain and has convinced themselves that they're an expert in every domain.

We see it here on slashdot all the time; think how many people here talk about how "easy" it is to be a CEO even though they literally don't know the first thing about it and do not have even a single second of experience at it. Yet somehow they're convinced they're one of the foremost experts at it. "Trust me bros, I could do that so easily! I just don't want to, that's the only reason I'm not a billionaire." ...Right...

The good thing about living in a free society is you're allowed to do that, and you're allowed to fail, as the vast majority do. What we're seeing with the AI hype isn't even remotely the first of its kind. Railroads, the gold rush, dot-com bubble, and countless others. People thinking there's a lot more money in something than there really is isn't at all new. There's no point in fighting it either, you can even benefit from it if you do so wisely. For example, by selling shovels and pickaxes.

Submission + - X Update Shows Foreign Origin for Many Political Accounts (apnews.com)

skam240 writes: Elon Musk’s X unveiled a feature Saturday that lets users see where an account is based. Online sleuths and experts quickly found that many popular accounts posting in support of the MAGA movement to thousands or hundreds of thousands of followers are based outside the United States — raising concerns about foreign influence on U.S. politics.

Researchers at NewsGuard, a firm that tracks online misinformation, identified several popular accounts — purportedly run by Americans interested in politics — that instead were based in Eastern Europe, Asia or Africa.

The accounts were leading disseminators of some misleading and polarizing claims about U.S. politics, including ones that said Democrats bribed the moderators of a 2024 presidential debate.

Comment what is a reserve? (Score 2) 66

A government, or anyone, may decide they need a reserve of something in case it later becomes inaccessible when needed. When would a government *need* BTC? Needing oil or food or water or weapons or gold is understandable; those are real things, and it is possible to run out of them and be in a position where access is limited.

If one "needs" crypto currency they may either purchase it in the market freely or just start their own, even Trump has done this on multiple occasions.

Note, it says "a reserve", not a speculative asset to gamble on its price.

Comment Re:PR article (Score 1) 219

The congenitally blind have never seen colours. Yet in practice, they're nearly as good at answering questions about colours, and reasoning about them, as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string" shortly after my first experience of hearing one, when I don't yet have a good associative memory of the sound, count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten off topic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there is certainly much to life experience that we don't write much about (if at all) online, and so someone who learned purely from the internet might have a weaker understanding of those things, by and large our life experiences and the thought traces behind them very much are online, from billions and billions of people, over decades.

Comment Re:PR article (Score 1, Insightful) 219

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build an LLM from a Markov model. If you could store one state transition probability per unit of Planck space, with a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could still only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.
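
To put rough numbers on that claim (the vocabulary size, context length, and physical constants below are my own illustrative assumptions, not figures from any paper), here's a quick back-of-the-envelope sketch in Python:

    import math

    # Rough sanity check of the "you can't build an LLM from a Markov model" claim.
    # All constants are assumptions chosen for illustration.
    VOCAB_SIZE = 50_000        # typical LLM token vocabulary (assumed)
    CONTEXT_TOKENS = 60        # roughly half the (very long) first sentence of A Tale of Two Cities

    # A raw Markov model over a 60-token context needs one transition row per
    # possible context: VOCAB_SIZE ** CONTEXT_TOKENS of them.
    markov_contexts = VOCAB_SIZE ** CONTEXT_TOKENS

    # Generous physical "storage": one value per Planck volume per Planck time,
    # over the observable universe for its entire history.
    planck_volumes = 3.5e80 / 4.2e-105      # observable-universe volume / Planck volume
    planck_times = 4.35e17 / 5.4e-44        # age of the universe / Planck time
    universe_capacity = planck_volumes * planck_times

    print(f"Markov contexts needed: ~1e{math.log10(markov_contexts):.0f}")    # ~1e282
    print(f"Universe capacity:      ~1e{math.log10(universe_capacity):.0f}")  # ~1e246
    print("Fits in the universe?", markov_contexts < universe_capacity)       # False

Even with generous assumptions, a pure lookup-table Markov model comes up short by dozens of orders of magnitude, which is the point.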

For LLMs to function, they have to "think", for some definition of thinking. You can debate the terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up a statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then only because you have to go from a high-dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there are often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well).
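
As an illustration of that final "rounding" step, here's a generic sketch of temperature sampling over a softmax (not the internals of any particular model; the scores are made up):

    import numpy as np

    def sample_token(logits, temperature=0.8, rng=np.random.default_rng()):
        """Turn per-token scores into probabilities and sample one token.
        A nonzero temperature is the deliberate noise in the 'rounding' step."""
        scaled = logits / temperature                    # <1 sharpens, >1 flattens the distribution
        scaled = scaled - scaled.max()                   # numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()    # softmax over the vocabulary
        return int(rng.choice(len(probs), p=probs))

    # Toy scores for a 5-token "vocabulary": several candidates are close together.
    logits = np.array([2.1, 1.9, 1.8, -0.5, -3.0])
    print([sample_token(logits) for _ in range(10)])     # mostly tokens 0-2, occasionally others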

As for this article, it's just silly. It's a rant based on a single cherry-picked contrarian paper from 2024, and he doesn't even represent it correctly. The paper's core premise is that intelligence is not linguistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out of that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain and hallucinates that into "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all; it's an argument Riley makes from the work of two philosophers, and it's a massive fallacy that misunderstands not only LLMs but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim made by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process; it's a lower-dimensional space than the reasoning itself (nothing controversial there with regards to modern science). Fedorenko argues that this implies the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text reads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately..." and it is to continue with a percentage, surface logic isn't enough for the model to perform well at that task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his supposed "cliff". Fedorenko notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world).
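
To make the "surface logic" contrast concrete, here's a toy bigram model (the corpus and counts are invented for illustration): it can only answer with whatever most often followed the previous word, no matter what the sentence was actually about.

    from collections import Counter, defaultdict

    # A pure "surface logic" predictor: condition on the single previous word only.
    corpus = ("the odds are approximately 5 percent . "
              "the cost is approximately 100 dollars . "
              "the distance is approximately 100 km .").split()

    bigram = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram[prev][nxt] += 1

    # Whatever the question was actually about, the bigram model's "answer" is
    # just the most frequent successor of "approximately" in its training data.
    print(bigram["approximately"].most_common(1))   # [('100', 2)]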

But again, Riley doesn't just take Fedorenko at face value; he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley, by contrast, argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.
