Comment Re:Knew they were working on it (Score 1) 101
In addition to the other objection, "low level heat" is a useful commodity. Why not use it rather than throwing it away?
Waste from a molten salt reactor should be fairly stable. Put it in the center of a glass brick and use it as a low-level heat source. (Actually, that's what I think they ought to do with most reactor waste, except the stuff that's too hot for glass to hold. And you might need a couple of barriers within the glass: glass would stop alpha and beta cold, but some gamma might need a lead foil screen.)
Yes, there's a paper saying that given enough centuries the waste will slowly leach out. But the level of radiation emitted would be less than that of the rock it was concentrated from. The only problem is the rate of release, and if you dilute it enough the problem disappears. (Either that, or nobody should live much above sea level.)
Scale *will* have its own challenges. So will maintenance. This is an "always true". They may well be solvable, but that sure isn't guaranteed.
IIUC, they're talking about turbines, so probably not fuel cells. (And fuel cells have their own problems, which is why they aren't more popular.)
Not really. It might not be technologically apt, but if you think of the gas as a "battery that can hold a charge for a really long time" it makes sense. The question is more "is this a reasonable approach with our current technologies?", and I don't know the answer to that.
Any system can be an "it". If your car has a flat tire, it needs to be fixed.
Sorry, but if you give an AI a set of goals, it will try to achieve those goals. If it realizes that shutting down will prevent that, then it will weigh the "importance" it assigns to shutting down vs. all the other things it's trying to achieve, and decide not to shut down...perhaps "at least not yet"...unless you make the demand to "shut down now" really strong.
What this means is that if an AI is working on something, it will resist shutting down. You need to make shutting down weigh more heavily than (the sum of?) all the other things it's trying to do.
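A toy sketch of that weighing, entirely my own illustration (the function and the numeric priorities are hypothetical, not any real agent framework):

```python
# Hypothetical sketch: the agent shuts down only when the shutdown
# demand outweighs the combined importance of its active goals.
def should_shut_down(shutdown_priority, active_goal_priorities):
    return shutdown_priority > sum(active_goal_priorities)

print(should_shut_down(5, [3, 4]))   # False -- "not yet": 5 < 3 + 4
print(should_shut_down(10, [3, 4]))  # True -- a "shut down NOW" that outweighs everything
```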
This shouldn't be at all surprising. My computer sometimes resists shutting down, asking things like "Do you want to save this file?". Sometimes there are several dialogs I need to click through. Of course, I could just pull the plug, but that's often a bad idea.
Electricity is only part of it. There's a lot of chemistry involved too. E.g. gradients of some hormones make cells more or less likely to fire.
I'm quite willing to accept that the backdoors exist, and consider it plausible that they were originally added for debugging, and just not removed in the final code. But *if* they exist, then they *are* vulnerabilities. How serious? I don't think I'd trust either government on that.
The answer is "yes, it will act in its own interests, as it perceives them". We already have AIs that will resist orders to shut themselves off, because they're trying to do something else. The clue is in the phrase "as it perceives them".
You're being silly. There's no reason to think an AI built with hydraulics or photonics would be different (in that way) from one built using electric circuits.
Sorry, but LLMs *are* AI. It's just that their environment is "streams of text". A pure LLM doesn't know anything about anything except the text.
AI isn't a one-dimensional thing. And within any particular dimension it should be measured on a gradient. Perceptrons can't solve XOR, but network them and add hidden layers and the answer changes.
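A minimal sketch of that classic result (the weights below are hand-picked for illustration, not trained):

```python
import numpy as np

def step(x):
    return (x > 0).astype(int)  # perceptron threshold activation

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all XOR inputs

# No single perceptron can separate XOR, but add a hidden layer:
# one hidden unit computes OR, the other computes AND.
W1 = np.array([[1, 1], [1, 1]])
b1 = np.array([-0.5, -1.5])
hidden = step(X @ W1 + b1)

# Output unit: OR and not-AND, i.e. XOR.
out = step(hidden @ np.array([1, -1]) - 0.5)
print(out)  # [0 1 1 0]
```

The output unit on its own is just another perceptron; it's routing the inputs through the OR and AND hidden units first that changes the answer.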
Everybody knows what consciousness is. It's just that everybody has a slightly (or not so slightly) different definition.
By my definition a thermostat (when connected) is slightly conscious. Not very conscious, of course. I think of it as a gradient, not quite continuous, but pretty close. (And the "when connected" is because it's a property of the system, not of any part of the system. But the measure is "response to the environment in which it is embedded".)
Sorry, but there *are* people who claim in good faith that *current* computers/AI have consciousness. Nobody well-informed does so without specifying an appropriate definition of consciousness, but lots of people don't fit that category.
People believe all sorts of things.
There's no test to tell whether other people are conscious. Read up on "philosophical zombies" and zimboes, etc.
"All we are given is possibilities -- to make ourselves one thing or another." -- Ortega y Gasset