Become a fan of Slashdot on Facebook

 



Forgot your password?
typodupeerror

Comment Re:Raise the costs even more! (Score 1) 53

AFAIK, nobody has demonstrated a viable SMR prototype of any kind. No, marine reactors do not count, they have the wrong characteristics and are far too uneconomic for this, even worse than civilian designs. The two that exist (Russian and Chinese) do NOT come with any or any believable cost figures. In addition, the he Russian one is a military design and the Chinese one is a highly experimental pebble-bed reactor based on German patents. The Germans wrecked three of these and two are still highly radioactive ruins that nobody know how to dispose of. On the plus-side, pebble-bed reactors cannot melt down, which is a decided plus.

Still, anybody that has high confidence in the approach is simply an idiot.

Comment It's not supposed to be profitable (Score 1, Insightful) 39

It's supposed to be the answer to the question "if nobody buys the wealthy's products how are they going to stay rich?"

The goal here is to replace as many workers as possible and eliminate the dependency on consumers.

The ultra wealthy want to go back to being like kings. Basically feudalism.

They will have a very tiny number of guildsman and scribes and a handful of knights to keep them in line.

Everyone else has a lifestyle below that of a medieval peasant because you're not even needed to tend the land anymore, they will have machines for that.

It never ceases to amaze me how many people don't realize what's happening here. Even more so there are the people who realize it but just kind of put it out of their mind because the idea of the ultra wealthy dismantling capitalism is so far outside what people view as possible that they can't emotionally comprehend it even if they can understand it intellectually.

And of course there are the numb skulls who think that they are somehow going to profit from the collapse of modern civilization. It's a big club boys and you ain't in it.

Comment You can't cut off cheap Chinese goods (Score 1) 36

Europe like America gives too much money to its 1%. The only way to maintain their economies is with cheap goods made by slave labor in China. That's the only way to offset increasingly large amounts of money being moved from the bottom to the top.

If you want to fix that you have to cut off the flow of money to the top and we're not going to do that. There's a variety of terrible reasons why that is the case but it just is.

I honestly do not know a solution to prevent human civilization from collapsing. I suspect that within 10 or 20 years we are going to hand nuclear launch codes over to religious lunatics and that's going to be gave over for humanity.

I definitely do not know how we avoid regressing back into feudalism even if we don't destroy our species. People just like worshiping rulers and kings and the ones that don't don't have the tendencies towards violence that the ones that do have. If it's one thing Afghanistan taught us it's that a very small number of idiots willing to use terrible violence can install a very very small number of people as absolute rulers.

We could counter this with education and critical thinking but even among people who should be well educated all I'm hearing is how we should all go into the trades and be plumbers or whatever. Anti-intellectualism and a hatred and disdain for experts dominates discourse now. That overpowering 12-year-old urge to not be told what to do has completely overwhelmed society and I do not know how you push back against that.

Basically don't tell me what to do.

Comment Re:PR article (Score 1) 205

Sure do :) I can provide more if you want, but start there, as it's a good read. Indeed, blind people are much better at understanding the consequences of colours than they are at knowing what colours things are..

Comment AI data centers aren't going anywhere (Score 2) 53

and neither are their power demands. AI exists to automated white collar jobs. The Demand for that is huge.

When the AI bubble bursts yes, you and I are going to bail out the banks that loaned doggy AI companies hundreds of billions (either that or they'll crash the global economy, remember, you're a hostage not a consumer)

But all that infrastructure you paid for with your tax dollars will just be bought up for cheap by whoever survives and you'll lose your jobs to it.

But hey, look over there! It's a DEI trans girl playing high school sports carrying a happy holidays sign!

Comment Re:Difference in fundamental rights. (Score 1) 61

But making sure that every single person has access to sufficient food is a core job that government has to do(**)...I understand that from the US' point of view, I am an evil

The scary/evil part is when the government is in complete control of the food supply, because that's how you get Holodomor (ie, the government exports food out of the country during a famine in order to oppress enemies of the regime).

There are exceptions, but the vast majority of Americans believe people shouldn't starve (and most would like the government to do something about it). Even Libertarians think people shouldn't starve, although they don't agree on how to stop it.

Submission + - New Agent Workspace feature comes with security warning from Microsoft (scworld.com)

spatwei writes: An experimental new Windows feature that gives Microsoft Copilot access to local files comes with a warning about potential security risks.

The feature, which became available to Windows Insiders last week and is turned off by default, allows Copilot agents to work on apps and files in a dedicated space separate from the human user’s desktop. This dedicated space is called the Agent Workspace, while the agentic AI component is called Copilot Actions.

Turning on this feature creates an Agent Workspace and an agent account distinct from the user’s account, which can request access to six commonly used folders: Documents, Downloads, Desktop, Music, Pictures and Videos.

The Copilot agent can work directly with files in these folders to complete tasks such as resizing photos, renaming files or filling out forms, according to Microsoft. These tasks run in the background, isolated from the user’s main session, but can be monitored and paused by the user, allowing the user to take control as needed.

Windows documentation warns of the unique security risks associated with agentic AI, including cross-prompt injection (XPIA), where malicious instructions can be planted in documents or applications to trick the agent into performing unwanted actions like data exfiltration.

“Copilot agents’ access to files and applications greatly expands not only the scope of data that can be exfiltrated, but also the surface for an attacker to introduce an indirect prompt injection,” Shankar Krishnan, co-founder of PromptArmor, told SC Media.

Microsoft’s documentation about AI agent security emphasizes user supervision of agents’ actions, the use of least privilege principles when granting access to agent accounts and the fact that Copilot will request user approval before performing certain actions.

While Microsoft’s agentic security and privacy principles state that agents “are susceptible to attack in the same ways any other user or software components are,” Krishnan noted that the company provides “very little meaningful recommendations for customers” to address this risk when using Copilot Actions.

Comment Re:PR article (Score 1) 205

The congenitally blind have never seen colours. Yet in practice, they're practically as efficient at answering questions about and reasoning about colours as the sighted.

One may raise questions about qualia, but the older I get, the weaker the qualia argument gets. I'd argue that I have qualia about abstracts, like "justice". I have a visceral feeling when I see justice and injustice, and experience it; it's highly associative for me. Have I ever touched, heard, smelled, seen, or tasted an object called "justice"? Of course not. But the concept of justice is so connected in my mind to other things that it's very "real", very tangible. If I think about "the colour red", is what I'm experiencing just a wave of associative connection to all the red things I've seen, some of which have strong emotional attachments to them?

What's the qualia of hearing a single guitar string? Could thinking about "a guitar string" shortly after my first experience with a guitar string, when I don't have a good associative memory of it, sounding count as qualia? What about when I've heard guitars play many times and now have a solid memory of guitar sounds, and I then think about the sound of a guitar string? What if it's not just a guitar string, but a riff, or a whole song? Do I have qualia associated with *the whole song*? The first time? Or once I know it by heart?

Qualia seems like a flexible thing to me, merely a connection to associative memory. And sorry, I seem to have gotten offtopic in writing this. But to loop back: you don't have to have experienced something to have strong associations with it. Blind people don't learn of colours through seeing them. While there certainly is much to life experiences that we don't write much about (if at all) online, and so one who learned purely from the internet might have a weaker understanding of those things, by and large, our life experiences and the thought traces behind them very much are online. From billions and billions of people, over decades.

Comment Re:PR article (Score 3, Insightful) 205

Language does not exist in a vacuum. It is a result of the thought processes that create it. To create language, particularly about complex topics, you have to be able to recreate the logic, or at least *a* logic, that underlies those topics. You cannot build a LLM from a Markov model. If you could store one state transition probability per unit of Planck space, a different one at every unit of Planck time, across the entire universe, throughout the entire history of the universe, you could only represent the state transition probabilities for the first half of the first sentence of A Tale of Two Cities.

For LLMs to function, they have to "think", for some definition of thinking. You can debate over terminology, or how closely it matches our thinking, but what it's not doing is some sort of "the most recent states were X, so let's look up some statistical probability Y". Statistics doesn't even enter the system until the final softmax, and even then, only because you have to go from a high dimensional (latent) space down to a low-dimensional (linguistic) space, so you have to "round" your position to nearby tokens, and there's often many tokens nearby. It turns out that you get the best results if you add some noise into your roundings (indeed, biological neural networks are *extremely* noisy as well)

As for this article, it's just silly. It's a rant based on a single cherry picked contrarian paper from 2024, and he doesn't even represent it right. The paper's core premise is that intelligence is not lingistic - and we've known that for a long time. But LLMs don't operate on language. They operate on a latent space, and are entirely indifferent as to what modality feeds into and out from that latent space. The author takes the paper's further argument that LLMs do not operate in the same way as a human brain, and hallucinates that to "LLMs can't think". He goes from "not the same" to "literally nothing at all". Also, the end of the article isn't about science at all, it's an argument Riley makes from the work of two philosophers, and is a massive fallacy that not only misunderstands LLMs, but the brain as well (*you* are a next-everything prediction engine; to claim that being a predictive engine means you can't invent is to claim that humans cannot invent). And furthermore, that's Riley's own synthesis, not even a claim by his cited philosophers.

For anyone who cares about the (single, cherry-picked, old) Fedorenko paper, the argument is: language contains an "imprint" of reasoning, but not the full reasoning process, that it's a lower-dimensional space than the reasoning itself (nothing controversial there with regards to modern science). Fedorenko argues that this implies that the models don't build up a deeper structure of the underlying logic but only the surface logic, which is a far weaker argument. If the text leads "The odds of a national of Ghana conducting a terrorist attack in Ireland over the next 20 years are approximately...." and it is to continue with a percentage, that's not "surface logic" that the model needs to be able to perform well at the task. It's not just "what's the most likely word to come after 'approximately'". Fedorenko then extrapolates his reasoning to conclude that there will be a "cliff of novelty". But this isn't actually supported by the data; novelty metrics continue to rise, with no sign of his suppossed "cliff". Fedorenko argues notes that in many tasks, the surface logic between the model and a human will be identical and indistinguishable - but he expects that to generally fail with deeper tasks of greater complexity. He thinks that LLMs need to change architecture and combine "language models" with a "reasoning model" (ignoring that the language models *are* reasoning - heck, even under his own argument - and that LLMs have crushed the performance of formal symbolic reasoning engines, whose rigidity makes them too inflexible to deal with the real world)

But again, Riley doesn't just take Fedorenko at face value, but he runs even further with it. Fedorenko argues that you can actually get quite far just by modeling language. Riley by contrast argues - or should I say, next-word predicts with his human brain - that because LLMs are just predicting tokens, they are a "Large Language Mistake" and the bubble will burst. The latter does not follow from the former. Fedorenko's argument is actually that LLMs can substitute for humans in many things - just not everything.

Slashdot Top Deals

A mathematician is a device for turning coffee into theorems. -- P. Erdos

Working...