This has nothing to do with what I was talking about.
Then your quotation has nothing to do with the topic that was being discussed. The question was:
What can humans do that robots can't?
To which you answered "Feel a sense of accomplishment", followed by what now proves to be an irrelevant quote.
You mistakenly believe that YOUR, or Banks', sense of what is "impressive" has anything to do with whether the "doer" itself feels a sense of accomplishment. That is false by your own reasoning, outlined in this post of yours: one has to believe one's own task "difficult" to get this sense of accomplishment. In judging their own tasks difficult, humans are influenced by the thinking of their fellow humans, but that is far from a universal trait even among humans themselves, to say nothing of non-human doers.
By this line of logic, what you are actually considering impossible for a robot is making YOU feel the robot's task was "impressive". I agree a robot might never manage that, but feeling a sense of accomplishment itself, by any reasonable definition, does not seem impossible at all.
c) an accomplishment is defined by the obstacles you overcome to achieve it, so it does not need to be special. You, as a human, faced the challenge with more obstacles than a purpose-built machine.
I don't agree with this line of argument. Challenge is self-perceived. Program the robot to "deem" its own task "difficult", whatever that means, and then program it to "feel" a sense of accomplishment. You could again define "difficult" in the same subjective way you define "feel a sense of accomplishment", but such a definition is useless as a functional definition for this discussion, for the same reason I outlined in my last post.
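As a toy sketch of what "program it to deem its task difficult and feel accomplishment" could look like, here is a minimal, purely hypothetical illustration (the `Task`, `Robot`, and `perceived_difficulty` names are my own inventions, not a claim about any real AI system). The point is only that the difficulty rating is the agent's own internal value, not an outside observer's judgment:

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    # Self-assessed difficulty in [0, 1]: the robot's OWN rating,
    # independent of whether any human finds the task impressive.
    perceived_difficulty: float


class Robot:
    """Toy agent whose 'sense of accomplishment' is a function of
    its own self-perceived difficulty, not of external opinion."""

    def __init__(self) -> None:
        self.accomplishment = 0.0

    def complete(self, task: Task) -> float:
        # The harder the robot deemed its own task, the larger the
        # internal "accomplishment" signal it records on completion.
        self.accomplishment += task.perceived_difficulty
        return self.accomplishment


robot = Robot()
robot.complete(Task("open a door", perceived_difficulty=0.2))
robot.complete(Task("climb stairs", perceived_difficulty=0.9))
```

Whether such an internal signal counts as "really feeling" accomplishment is exactly the definitional question at issue; the sketch only shows that the functional version is trivially programmable.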
There are two problems with your last paragraph:
I see them as proof that you got the point, rather than as "problems". The "problem" is exactly what I was trying to say.
a) How do you, personally, know that everyone around you isn't lying to your face about what they believe? Claiming AI would be non-genuine because you can't "detect" anything more is no different. There would be debugging procedures both equivalent to, and much more powerful than, the fMRI we currently use to detect (what we think are) genuine emotions in humans.
Exactly what I am saying. So your definition of "feel a sense of accomplishment" is useless, especially, but not solely, because it is being applied in a context that includes non-humans as well as humans.
You also mistakenly believe there is any such thing as "genuine emotions" when talking about human and non-human subjects together.
b) It would be impossible to build an AI that behaved fully human without either copying a human template or understanding how it worked.
Correct, but building robots like that is necessary only to prove wrong people like you, who feel robots cannot "feel a sense of accomplishment": either by showing the definition of "feel a sense of accomplishment" to be useless, or by the robot actually feeling it, by your own definition.
If the human template is copied, then the new model has no appreciable difference; if the AI is built from scratch, we'd know for certain how experiences would affect its decision-making.
Well, "behave fully human" in this case is also satisfied by being more than human. So it can still have an appreciable "difference": the difference of being a "super-set". If the set of the robot's functions includes feeling a sense of accomplishment, your statement is proven false.