A true artificial intelligence will show evidence of maintaining a mental model of reality, testing that model against incoming data, and adjusting the model when necessary. This strongly implies that the AI models itself in some manner, such that it can "imagine" a different way of "looking" at the world and then judge whether the new model is a better way of thinking about things than the old one. The process is clearly recursive, even fractal: at the next level, the software would be "imagining" a different way of judging which of two models is better, eventually reaching the point where it decides whether, in the current context, it should act pragmatically or ethically.
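The loop described above, maintain a model, test it against incoming data, adopt a better one when found, can be sketched in a few lines. This is only a toy illustration under an assumed criterion (predictive error on observed data); the function names and the choice of mean squared error are mine, not anything from the discussion.

```python
def predictive_error(model, observations):
    """Mean squared error of a model's predictions against observed (x, y) pairs."""
    return sum((model(x) - y) ** 2 for x, y in observations) / len(observations)

def update_world_model(current, candidate, observations):
    """Keep whichever model better predicts the incoming data (hypothetical criterion)."""
    if predictive_error(candidate, observations) < predictive_error(current, observations):
        return candidate
    return current

# Toy world: reality is y = 2x; the agent entertains two "mental models" of it.
data = [(x, 2 * x) for x in range(5)]
old_model = lambda x: x        # believes y = x
new_model = lambda x: 2 * x    # believes y = 2x
best = update_world_model(old_model, new_model, data)
```

Here the "imagining" step is just supplying a candidate model; everything recursive about the process lives in how the judging criterion itself might be revised, which is the next level up.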
Indeed. "Mental" modeling — maintaining and manipulating an abstract computational representation of beliefs — is at the heart of strong AI. Such models include, for example, beliefs about the world, beliefs about other agents (including what they believe about you), and beliefs about self. This is where computer scientists, linguists, cognitive psychologists, and others all have common ground, and where interdisciplinary research can be very productive.

Learning is the ability to make systematic normative changes to mental models as a consequence of reasoning about experience — normative in the sense that such changes improve the ability to reason with and about the model in ways that maximize some value (e.g., the ability to make accurate predictions). Experience involves reasoning about both the outside "real" world and the internal reasoning process itself. This is where your comment about "the next level" is germane.

Those of us working on this topic call reasoning at multiple levels "meta-cognition" — that is, thinking about thinking. There is no theoretical reason to limit meta-cognition to any specific number of levels. Current research on meta-cognition typically considers the level (or two) "above" (abstracted from) experiential belief modeling and action planning. This is also about the right level of abstraction for ethical reasoning ("would", "could", "should", "may", and their opposites).

I've observed that most researchers assume a utilitarian ethics, which makes some sense if maximizing performance is the overall imperative. However, I count myself among those who believe that future AIs must be able to reason about moral imperatives if we expect them to behave appropriately as we live and work alongside each other. Ronald Arkin at Georgia Tech is a leader in this area and a pioneer on the topic of computational methods to help ensure ethical behavior by potentially lethal robots.
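One way to make the "level above" concrete is to separate the object level (choosing among models by some criterion) from the meta level (choosing among criteria by judging the models they select). The sketch below is a hypothetical illustration of that layering; the criteria, scoring rules, and names are all assumptions of mine, not established methods from the meta-cognition literature.

```python
def accuracy_criterion(model, data):
    """Score a model by (negated) squared prediction error."""
    return -sum((model(x) - y) ** 2 for x, y in data)

def simplicity_criterion(model, data):
    """Hypothetical alternative: prefer models tagged with fewer parameters."""
    return -getattr(model, "n_params", 0)

def choose_model(models, data, criterion):
    """Object level: pick the model the given criterion scores highest."""
    return max(models, key=lambda m: criterion(m, data))

def choose_criterion(criteria, models, held_out):
    """Meta level: judge each criterion by how well its chosen model
    predicts held-out data — one hypothetical way to 'judge the judging'."""
    def meta_score(criterion):
        chosen = choose_model(models, held_out, criterion)
        return -sum((chosen(x) - y) ** 2 for x, y in held_out)
    return max(criteria, key=meta_score)
```

Nothing stops a further level from scoring `choose_criterion` itself — which is the point about there being no theoretical limit on the number of meta-levels, only practical ones.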
Every successful person has had failures, but repeated failure is no guarantee of eventual success.