Thank you, your comments give me a much better understanding.
However, let me add a few thoughts.
perceptions ... no continuous real-world sensors
To me, that sounds a lot like conceptual learning isolated from perception. YMMV
Though I do not know whether "intelligent" behaviour can emerge without the challenges posed by a body full of sensors, together with (parallel) means to cope with them, interfaced to a brain that (in my view) on a high level (call it consciously, think focus of attention) concentrates on controlling one task, namely generating "intention" or "goals".
Forgetting could be an actively decided optimization parameter, as opposed to a byproduct of capacity.
This may occur in the "real world" as well, though presumably focused in the realm of "emotions" (BTW, this raises the question of how emotions interact with more or less cognitive processes).
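To make the "forgetting as a decided parameter" idea concrete, here is a minimal toy sketch (all names and the utility heuristic are my own assumptions, not anything from the discussion): a memory store that prunes entries whose utility falls below a tunable threshold, deliberately, even when capacity is nowhere near exhausted.

```python
class DecidedForgettingStore:
    """Toy memory where forgetting is an optimization decision,
    not a side effect of running out of capacity.

    `forget_threshold` is the tunable parameter: items whose utility
    (a made-up recency-decayed access count) drops below it are
    dropped on purpose, regardless of remaining space.
    """

    def __init__(self, forget_threshold=0.6, decay=0.5):
        self.forget_threshold = forget_threshold
        self.decay = decay
        self.items = {}  # key -> utility score

    def remember(self, key):
        # Each access reinforces the item's utility.
        self.items[key] = self.items.get(key, 0.0) + 1.0

    def forget_pass(self):
        """Actively decide what to forget: decay all utilities, then prune."""
        for key in list(self.items):
            self.items[key] *= self.decay
            if self.items[key] < self.forget_threshold:
                del self.items[key]

store = DecidedForgettingStore(forget_threshold=0.6, decay=0.5)
store.remember("salient")   # reinforced twice -> utility 2.0
store.remember("salient")
store.remember("trivial")   # seen once -> utility 1.0
store.forget_pass()         # salient decays to 1.0 (kept), trivial to 0.5 (forgotten)
```

The point of the sketch is only that the threshold is set by the system (or its designer), not imposed by storage limits.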
not a constantly active information stream
Crucial, and I am fine with the whole paragraph, especially as you somehow emphasize the "tool" aspect, which gives you a lot more degrees of freedom compared to efforts to engineer some "reality".
Also, being self-destructive indicates "not intelligent"?
This is taken out of context, namely "immediate trust". My remark was triggered by an (admittedly dim) recall of a classification
that Stegmüller made (K1, K2, K3 systems) with regard to teleological systems. IIRC, one can extend the scheme to a continuum from acting immediately in response to an input to tailoring the action to the outcome of building a "complete" model/simulation of the context (warning: recursion ahead).
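I cannot reproduce Stegmüller's K1/K2/K3 scheme exactly, but the two ends of the continuum I mean can be sketched roughly (everything here, including the toy world model, is my own illustrative assumption): a reflex agent acting immediately on its input versus an agent that tailors its action to simulated outcomes.

```python
def reflex_agent(percept):
    """One end of the continuum: act immediately in response to the input."""
    return "brake" if percept == "obstacle" else "cruise"

def model_based_agent(percept, simulate, candidate_actions):
    """Other end: choose the action whose simulated outcome scores best.
    `simulate` stands in for a (possibly recursive) model of the context."""
    return max(candidate_actions, key=lambda action: simulate(percept, action))

def toy_simulate(state, action):
    """Entirely made-up world model: braking pays off near an obstacle."""
    scores = {("obstacle", "brake"): 1.0, ("obstacle", "cruise"): -1.0,
              ("clear", "brake"): -0.2, ("clear", "cruise"): 0.5}
    return scores.get((state, action), 0.0)

print(reflex_agent("obstacle"))                                       # brake
print(model_based_agent("clear", toy_simulate, ["brake", "cruise"]))  # cruise
```

The "recursion ahead" warning applies to the model end: a complete simulation of the context would have to include the simulating agent itself.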
I agree that suicide might be an "intelligent choice". Ethics and morals add yet another layer.
Besides, an artificial intelligence ...
You are probably better off if you describe your envisioned system along the lines of "cognitive augmentation". This lowers expectations while remaining complex enough, shifts the focus from "basic" to "applied" (funding? I speculate "applied" has more appeal), and makes the goal scalable (creating backdoors when confronted with too many nontrivial problems) by redefining the target group.
Intelligence requires weariness? Intelligence negates meticulousness? The pursuit of goals is not intelligent?
For an autonomous system, which a tool is not, yes to both: sleep, fuzziness.
It was not "pursuit of goals" but "follow instructions". Anyhow, with the tool focus, this is irrelevant.
Given proper sharing of context, instructions in natural language can be unambiguous
For practical purposes, yes. IMHO, theoretically, no (Gödel).
Disclaimer: I am only expressing my opinions here, which are based on what remains from working in the field in the 1980s and from loosely following the (more or less meagre) developments since then.