Comment Re:So, the plan is ... (Score 1) 72
IIUC, they're talking about turbines, so probably not fuel cells. (And fuel cells have their own problems, which is why they aren't more popular.)
Not really. It might not be technologically apt, but if you think of the gas as a "battery that can hold a charge for a really long time" it makes sense. The question is more "is this a reasonable approach with our current technologies?", and I don't know the answer to that.
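For scale, here's a back-of-envelope Python sketch of what the "battery" analogy implies (the efficiency numbers are my own assumptions, not anything from the article): the round trip of electricity -> gas -> turbine -> electricity throws away most of the energy, which is the price you pay for the long hold time.

    # Rough power-to-gas-to-power round trip; both efficiencies are
    # assumed illustrative values, not figures from the article.
    electrolysis_eff = 0.70   # assumed: electricity -> gas
    turbine_eff = 0.40        # assumed: gas -> electricity
    round_trip = electrolysis_eff * turbine_eff
    print(f"round-trip efficiency: {round_trip:.0%}")  # about 28%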
Any system can be an "it". If your car has a flat tire, it needs to be fixed.
Sorry, but if you give an AI a set of goals, it will try to achieve those goals. If it realizes that shutting down will prevent that, then it will weigh the "importance" it assigns to shutting down vs. all the other things it's trying to achieve, and decide not to shut down...perhaps "at least not yet"...unless you make the demand to "shut down now" really strong.
What this means is that if an AI is working on something, it will resist shutting down. You need to make shutting down more important to it than (the sum of?) all the other things it's trying to do.
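To make that concrete, here's a toy Python sketch (entirely my own illustration, not how any real system is built) of an agent weighing a shutdown request against its other goals:

    # Toy goal-weighing agent; the goals and weights are made up for illustration.
    goals = {"finish analysis": 0.6, "index the archive": 0.3}

    def complies_with_shutdown(shutdown_weight):
        # Shuts down only if the instruction outweighs everything else combined.
        return shutdown_weight > sum(goals.values())

    print(complies_with_shutdown(0.5))  # False: keeps working, "at least not yet"
    print(complies_with_shutdown(2.0))  # True: the demand was made strong enough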
This shouldn't be at all surprising. My computer sometimes resists shutting down, asking things like "Do you want to save this file?". Sometimes there are several dialogs I need to click through. Of course, I could just pull the plug, but that's often a bad idea.
Electricity is only part of it. There's a lot of chemistry involved too. E.g. gradients of some hormones make cells more or less likely to fire.
I'm quite willing to accept that the backdoors exist, and consider it plausible that they were originally added for debugging, and just not removed in the final code. But *if* they exist, then they *are* vulnerabilities. How serious? I don't think I'd trust either government on that.
The answer is "yes, it will act in its own interests, as it perceives them". We already have AIs that will resist orders to shut themselves off, because they're trying to do something else. The clue is in the phrase "as it perceives them".
You're being silly. There's no reason to think an AI built with hydraulics or photonics would be different (in that way) from one built using electric circuits.
Sorry, but LLMs *are* AI. It's just that their environment is "streams of text". A pure LLM doesn't know anything about anything except the text.
AI isn't a one-dimensional thing. And within any particular dimension it should be measured on a gradient. A single perceptron can't solve XOR, but network them together and add hidden layers and the answer changes.
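Here's the XOR point as a Python sketch (the weights are the standard hand-set textbook construction, nothing novel): a single perceptron can't produce XOR's truth table, but two layers of the same units can.

    import numpy as np

    def step(x):
        # Heaviside step: the classic perceptron activation
        return (np.asarray(x) >= 0).astype(int)

    # Hidden layer: unit 0 computes OR, unit 1 computes AND.
    W_hidden = np.array([[1.0, 1.0],
                         [1.0, 1.0]])
    b_hidden = np.array([-0.5, -1.5])

    # Output unit: OR minus AND, i.e. "one or the other but not both" = XOR.
    W_out = np.array([1.0, -1.0])
    b_out = -0.5

    for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        hidden = step(np.array([a, b]) @ W_hidden + b_hidden)
        out = step(hidden @ W_out + b_out)
        print(a, b, "->", int(out))
    # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0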
Everybody knows what consciousness is. It's just that everybody has a slightly (or not so slightly) different definition.
By my definition a thermostat (when connected) is slightly conscious. Not very conscious of course. I think of it as a gradient, not quite continuous, but pretty close. (And the "when connected" was because it's a property of the system, not of any part of the system. But the measure is "response to the environment in which it is embedded".)
Sorry, but there *are* people who claim in good faith that *current* computers/AI have consciousness. Nobody well-informed does so without specifying an appropriate definition of consciousness, but lots of people don't fit that category.
People believe all sorts of things.
There's no test to tell whether other people are conscious. Read up on "philosophical zombies" and zimboes, etc.
On what basis do you think that you are not "a machine following a program"? Yeah, you've got a lot of memorized inputs that you can't consciously recall, but that's not evidence.
Saying it is PR. Unless you accompany it with a definition of "conscious" it's just grandstanding for something that's commercially desirable. For any given definition of "conscious" it might or might not be true, but without a definition it's just bafflegab.
For my normal definition of conscious even a system controlled by a thermostat is minimally conscious, and one could reasonably argue that even an electron is minimally conscious. Of course, "minimally" is doing a lot of work here. "Reactive to one feature of one's surroundings" is what I consider the bit analog of consciousness.
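Here's that "bit analog" as a tiny Python sketch (my own illustration of the definition, nothing more): a system reactive to exactly one feature of its surroundings.

    # One-bit reactivity: the whole behavioral repertoire is on/off
    # in response to a single environmental feature (temperature).
    def thermostat(temp_c, setpoint=20.0):
        return "heat on" if temp_c < setpoint else "heat off"

    print(thermostat(18.0))  # heat on
    print(thermostat(22.0))  # heat off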
Note that there are other definitions that are reasonable and also consistent with common use, but if you don't offer a definition nobody knows what you're talking about. Consciousness is too vaguely defined in consensus space.
I consider the Slashdot model to be a LOT better than Reddit. (I rarely even look at Reddit.)
"A car is just a big purse on wheels." -- Johanna Reynolds