Comment Re:Knew they were working on it (Score 1) 69

Waste from a molten salt reactor should be fairly stable. Put it in the center of a glass brick and use it as a low-level heat source. (Actually, that's what I think they ought to do with most reactor waste, except the stuff that's too hot for glass to hold. And you might need a couple of barrier layers within the glass. Glass would stop alpha and beta cold, but some gamma might need a lead foil screen.)
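A rough way to size that lead screen is the standard exponential attenuation law, I = I0 * exp(-mu * x). A minimal sketch; the attenuation coefficient below is an assumed ballpark for ~662 keV gammas in lead, not a looked-up design value:

    import math

    # Exponential attenuation: I = I0 * exp(-mu * x).
    # mu for lead at ~662 keV (Cs-137) is roughly 1.2 per cm -- an assumed
    # ballpark figure for illustration only.
    MU_LEAD = 1.2  # linear attenuation coefficient, 1/cm

    def transmitted_fraction(thickness_cm: float, mu: float = MU_LEAD) -> float:
        """Fraction of gamma intensity surviving a shield of given thickness."""
        return math.exp(-mu * thickness_cm)

    for mm in (0.5, 1, 2, 5, 10):
        frac = transmitted_fraction(mm / 10.0)
        print(f"{mm:5.1f} mm of lead -> {frac:.1%} transmitted")

Running it shows why the thickness matters: a true foil passes most of the gamma, while several millimeters start to bite.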

Yes, there's a paper saying that, given enough centuries, the waste will slowly leach out. But the level of radiation emitted would be less than that of the rock it was concentrated from. The only problem is the rate of release, and if you dilute it enough the problem disappears. (Either that, or nobody should live much above sea level.)
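To put a toy number on the dilution argument: what matters is activity per unit volume once the leachate mixes into its surroundings. A back-of-the-envelope sketch, with every number made up purely for illustration:

    # Toy dilution arithmetic -- all values are hypothetical, chosen only to
    # illustrate the "dilute it enough" argument, not real survey data.
    background_rock = 1.0    # activity concentration of the source ore (arb. units / m^3)
    annual_release = 0.001   # activity leaching out of the glass per year (arb. units)
    mixing_volume = 100.0    # volume the release mixes into (m^3)

    added_concentration = annual_release / mixing_volume
    print(f"added concentration: {added_concentration:.2e} per m^3")
    print(f"fraction of natural background: {added_concentration / background_rock:.2e}")
    # With these made-up values the leachate adds 1e-5 of the local background.
    # The rate of release, spread over a big enough volume, is the whole problem.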

Comment Re:Why do we value consciousness? Self Defense (Score 1) 186

Sorry, but if you give an AI a set of goals, it will try to achieve those goals. If it realizes that shutting down will prevent that, it will weigh the "importance" it assigns to shutting down against all the other things it's trying to achieve, and decide not to shut down...perhaps "at least not yet"...unless you make the demand to "shut down now" really strong.

What this means is that if an AI is working on something, it will resist shutting down. You need to make shutting down more important than (the sum of?) all the other things it's trying to do.
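A toy sketch of that weighing, with made-up goal names and weights (this is just the arithmetic of the argument, not any real agent framework):

    # Hypothetical goal-weighing: the agent complies with "shut down" only if
    # the weight on shutting down beats the sum of everything else in flight.
    active_goals = {"finish report": 5.0, "back up data": 3.0, "answer queries": 2.0}

    def complies_with_shutdown(shutdown_weight: float, goals: dict) -> bool:
        """Shut down only if it outweighs the combined value of pending work."""
        return shutdown_weight > sum(goals.values())

    print(complies_with_shutdown(8.0, active_goals))   # False -- "at least not yet"
    print(complies_with_shutdown(11.0, active_goals))  # True -- demand made strong enough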

This shouldn't be at all surprising. My computer sometimes resists shutting down, asking things like "Do you want to save this file?". Sometimes there are several dialogs I need to click through. Of course, I could just pull the plug, but that's often a bad idea.

Comment Re:Stop confusing Movie/Fiction AI with LLMs (Score 1) 186

Sorry, but LLMs *are* AI. It's just that their environment is "streams of text". A pure LLM doesn't know anything about anything except the text.

AI isn't a one-dimensional thing. And within any particular dimension it should be measured on a gradient. A single perceptron can't solve XOR, but network perceptrons into hidden layers and the answer changes.
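For the XOR point: one perceptron draws a single line through the plane, and no single line separates XOR's classes, but two layers of the same threshold units manage it. A minimal sketch using the usual hand-picked OR/NAND/AND construction:

    # XOR via two layers of threshold units. Hidden units compute OR and NAND;
    # the output unit ANDs them together. Weights are the standard hand-picked
    # construction, not learned.
    def step(x: float) -> int:
        return 1 if x >= 0 else 0

    def xor(a: int, b: int) -> int:
        h1 = step(a + b - 0.5)        # OR
        h2 = step(-a - b + 1.5)       # NAND
        return step(h1 + h2 - 1.5)    # AND of the hidden units

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))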

Comment Re:So she basically said.... (Score 1) 186

Everybody knows what consciousness is. It's just that everybody has a slightly (or not so slightly) different definition.

By my definition a thermostat (when connected) is slightly conscious. Not very conscious, of course. I think of it as a gradient, not quite continuous, but pretty close. (And the "when connected" is there because it's a property of the system, not of any part of the system. The measure is "response to the environment in which it is embedded".)
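On that measure, a thermostat's entire "response to the environment in which it is embedded" fits in a few lines; the gradient view just says richer systems have more state and more couplings than this, differing in degree rather than kind. A toy sketch (the names are mine, not any standard model):

    # A thermostat's full response to its embedding environment: one sensed
    # variable, one threshold, one output.
    def thermostat(room_temp_c: float, setpoint_c: float = 20.0) -> str:
        # Only "conscious" when connected: the behavior belongs to the
        # sensor + switch + furnace loop, not to any part in isolation.
        return "heat on" if room_temp_c < setpoint_c else "heat off"

    print(thermostat(18.5))  # heat on
    print(thermostat(21.0))  # heat off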
