What would make it "an intelligence" ... compared to "a calculator"?
A calculator would be a very narrow intelligence, good at one specific thing. One I could never hope to beat at raw addition. But a general intelligence would be one that could handle any topic, problem, or sort of issue, you know, in general.
Exactly like a Buddhist monk, when ChatGPT thinks about ego, there's an entity that connects the specific, literal word "ego" to all of its connections from all of its training material, but generalized and distilled down into an abstract concept it connects things to when it's thinking about ego.
And that same entity is aware of the passage of time,
Yeah? So? So does GPT. Go ask it what time it is. Ask it to respond in 5 minutes.
and recollections of when it was last cold, or hungry or tired or in physical pain.
Sensations and memories we could give it, pretty much at our whim. This gets into the concept of strong versus weak simulation. If it rains in a weak weather simulation, it's just pretending it rained. If it rains in a strong weather simulation, it really is raining for the AI experiencing the rain. But that's all philosophical drivel and a largely useless distinction. ...This is a better angle than the rest of your arguments. Our experiences are exabytes of video, audio, taste, motion, etc. over all the years of our lives. We forget most of it and store some of it in memory. An LLM's training set is notionally similar... but removed. Its immediate past conversations are about the only 1:1 match, and it currently lacks... well, no, you can now feed GPT images and talk about them. But it lacks a lot of other types of sensors.
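To make the "immediate past conversations" bit concrete, here's a toy sketch: the model has no memory of its own, so the only "recollection" it gets is whatever earlier turns you re-send along with each new message. (call_llm below is a made-up stand-in, not any particular vendor's API.)

```python
# Toy sketch: an LLM's "memory" of a conversation is just the history we re-send it.
# call_llm() is a hypothetical stand-in for a real chat-completion API call.
def call_llm(messages):
    raise NotImplementedError("plug in an actual model provider here")

history = []  # the model's entire "memory" lives in this list that WE keep for it

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = call_llm(history)  # the model only "remembers" what gets re-sent here
    history.append({"role": "assistant", "content": reply})
    return reply
```

Forget to re-send the history and it's a blank slate again, every single time.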
Assuming it's not blind, when it opens its eyes it (and only it) gets the picture of what is in front of it.
Sure, we can put a camera in front of its servers. But why? We can put that camera anywhere. If you wore a periscope over one eye all the time or glued a live-streaming GoPro to your forehead, I don't think that would make you less of a real intelligence. Maybe more clumsy, I dunno.
The "put it in a big loop and give it some sort of goal to work towards" sort of paper?
These are LLM agents. The simple (and isolated) single question and single response we get out of an LLM does lack a lot of the exact sort of things you're talking about. It's got no permanence. A new slate every time. Just raw-dogging its training set (or big if-else algo) without any context other than what you give it. This was the classic way to spot the bot until... just recently. It wouldn't remember past topics of conversation. In 2023, they could hold back-and-forth conversations and could remember at least a few paragraphs back. There was a limit, though. They have recently extended that limit past what most people's expectations of memory would include, and even expanded it to OTHER conversations you've had with it. Frankly, it's a worrisome thing if they start pigeon-holing people into assumptions about what they want to hear, like YouTube video recommendations. But putting the thing in a big loop, giving it a task, and letting it take an iterative approach to achieving goals is known as an "agent" (rough sketch below).
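And "big loop" is meant pretty literally. A bare-bones sketch of the idea, nothing fancy (call_llm and execute are the same kind of made-up stand-ins as above, not any specific agent framework's API):

```python
# Toy agent loop: give the model a goal, let it propose actions, feed results back in.
# call_llm() and execute() are hypothetical stand-ins, not a real framework's API.
def run_agent(goal, max_steps=10):
    history = [{"role": "system",
                "content": f"Goal: {goal}. Reply with the next action, or DONE."}]
    for _ in range(max_steps):
        action = call_llm(history)   # the model decides what to try next
        if action.strip() == "DONE":
            break
        result = execute(action)     # run it: search, code, API call, whatever
        history.append({"role": "assistant", "content": action})
        history.append({"role": "user", "content": f"Result: {result}"})
    return history
```

The interesting behaviour (planning, retrying, breaking a goal into steps) falls out of that feedback loop, not out of any single response.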
OMNI
Voyager, pft, they got it to play Minecraft.
Generative agents. It looks like their goal was to make a more believable simulation of human behaviour.
That's not what a desire or dislike is. They are things that make you feel good or bad. Which in turn requires there to be an ego to feel those things.
I desire food now and then. Being hungry is bad. How is that not a desire? We can certainly impose desires and dislikes on these things. If that's all you need for an ego, then we've got quite a few egomaniacs on our hands, and you're just arguing against your own point about there not being any sense of "I", or however poetic you want to be about it.
No, that's reproducing a discussion about ethics. [It's just copying from its training set.]
Sure. Other than all the creativity and extrapolation it performs on any topic it's discussing. It's not JUST cribbing its training set, it fills in the gaps. When it fills the gaps with hilariously wrong things, we call them hallucinations. Frankly, a big problem with the current LLMs is just how readily they DO believe what they're saying. Self-doubt is not yet one of their mental skills. Because it IS different.
I'm learning about what's creating that input, and trying to understand it.
And it learned a lot from its training set, including details about that training set and all the stuff that went into that training set. Through that giant mesh of weights in its neural net, it has abstracted lessons about how all things interconnect and can apply those lessons, generally, elsewhere.
An LLM is not an "eat, fuck, survive" algorithm
(Yeah, sorry if I wasn't clear. That's us. Our instinct. It certainly gives us ego and that whole burning desire not to be in horrific amounts of pain. An algorithm, however fuzzy and self-updating, on how to keep the species propagating. It's a few millennia out of date. Super annoying.)
Also, hey, thanks for taking the time to try and push back here. It's super interesting and I want to talk about it. So many people just don't care.