It's interesting that you think epistemology actually plays a part for the flipping computer.
I could only agree if we are speaking of a computer that is intended - by and within its design - to learn like us, as well as act like us in our mature state. I agree this may be the purest way to get AI to resemble the human condition (for lack of a better way to put it), but executing on this path is entirely a red herring.
I would say that trying to understand and emulate the learning process is orders of magnitude harder than the effort of just getting the damn thing to work at a common, layman intellectual level.
We have no real understanding of how we learn, empirically and scientifically speaking - we are only beginning to understand this now. The understanding of this process changes rapidly, and while we think we have momentum currently, more major unknowns exist. In fact, we don't know what we don't know at this point.
It's been debated for as long as man has had the ability to, however... but even throughout thousands of years of philosophical deep diving, it wasn't until the Age of Enlightenment that Kant finally got everyone on board with "epistemology first" in our understanding of our world - we must first understand how we learn about this place before we can debate the ontological status of the world around us and have any meaningful debate about its metaphysics. Theocratic or not, this rings true - and it's only added more complexity to the struggle of understanding what we know about ourselves.
And now, you want to build a robot to approach this condition... Insanity. The effort is pure insanity and full of hubris. Let's work on simple tasks, and try to get those right, first. And how about an honest look at who the fuck we are as emotional, sentient, chemically driven and wickedly imperfect machines ourselves, before we attempt to perfect it in a model.
The only real saving grace is that this effort could actually serve as a mirror for mankind, and accelerate our understanding of ourselves, if only slightly.