Should they imitate how we imagine the mind to work, as a Cartesian wonderland of logic and abstract thought that could be coded into a programming language? Or should they instead imitate a drastically simplified version of the actual, physical brain, with its web of neurons and axon tails, in the hopes that these networks will enable higher levels of calculation? It's a dispute that has shaped artificial intelligence for decades.
I suspect that to get "true" AI, both approaches will have to work together. Neural nets (NNs) will provide hunches and guesses, but the AI will have to model those hunches in an abstract or semi-realistic way, both to test their logic and to communicate its findings or suggestions to humans.
The AI will be able to "draw" or describe a cartoon-like model of suggestions or events, the way a person in a meeting might explain something about travel, events, human relationships, or timelines. This requires some kind of abstract modelling.
This is pretty much how most human minds work: hunches based on past and/or recurring patterns, generated by NN-like pattern matching at a mostly subconscious level, teamed up with abstract modelling at an "object" level to both communicate and test those hunches.
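As a toy sketch of the loop described above (all names and the toy "neural" component here are hypothetical, not an actual implementation): a statistical hunch component proposes a candidate rule from observed examples, and a symbolic component tests that rule's logic against every case before it is accepted and put into a human-readable form.

```python
# Toy sketch of the hunch-then-verify loop (hypothetical, illustrative only).
from collections import Counter

def neural_hunch(observations):
    """Stand-in for a neural net: guess the most frequent pattern seen.

    For pairs like [(2, 4), (3, 6), (5, 10)] it guesses "y = 2.0 * x".
    """
    ratios = Counter(y / x for x, y in observations if x != 0)
    k, _ = ratios.most_common(1)[0]
    return (lambda x: k * x), f"y = {k} * x"

def symbolic_test(rule, observations):
    """Stand-in for abstract modelling: check the hunch against every case."""
    return all(rule(x) == y for x, y in observations)

observations = [(2, 4), (3, 6), (5, 10)]
rule, description = neural_hunch(observations)
if symbolic_test(rule, observations):
    # The verified hunch can now be communicated in symbolic form.
    print(f"hunch holds: {description}")
```

The point of the split is that the guessing component can be sloppy and statistical, while the testing component is exact and produces a description a human can read.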