Comment Re: How much of the science (Score 2) 30
No. ChatGPT parrots language without understanding it. That is not using language. "Using" something implies achieving a purpose by means of that thing, and ChatGPT has no purpose of its own.
...seems to be about four years late.
"Grand Theft Auto Forever"
"The Three Mouseketeers"
"They speak a form of english that has been manipulated by marketing people to such a degree that it's often difficult to see the reality behind those words."
In other words, Newspeak.
We can defeat the giant enemy crabs by hitting their weak points for massive damage.
I don't even know *how* to drive a stick!
"Prior art" is only a factor in patents. This ain't a patent.
The day will come that an AI will learn something that we did not deliberately teach it. When an AI is able to improve its own code, it won't be bound by the limitations of its human creator. It's only a question of when.
LK
Can a non-biological entity feel desire? Can it want to grow and become something more than what it is? I think that's a philosophical question and not a technological one.
LK
Don't agree at all and I think that's a morally dangerous approach. We're looking for a scientific definition of "desire" and "want". That's almost certainly a part of "conscious" and "self aware". Philosophy can help, but in the end, to know whether you are right or not you need the experimental results.
Experiments can be crafted in such a way as to exclude certain human beings from consciousness.
One day, it's extremely likely that a machine will say to us "I am alive. I am awake. I want..." and whether or not it's true is going to be increasingly hard to determine.
LK
Only if we define consciousness to be a state of awareness only attainable by human beings.
An LLM can't suddenly decide to do something else which isn't programmed into it.
Can we?
It's only a matter of time until an AI can learn to do something it wasn't programmed by us to do.
No, they aren't bluffing. Detection methods are too well honed; you can't fake a test nuke (nor can you hide one). They've exploded nukes, no question. You can look up the exact dates and times they've done so.
You don't get it. It won't be fixed. There's nothing to fix. It's all working exactly the way the people creating the AI want it to be working.
So you're saying that since the problem is sometimes unavoidable, we should not do something to attempt to avoid it when it is avoidable?
"Remember, extremism in the nondefense of moderation is not a virtue." -- Peter Neumann, about usenet