I think we can be fairly sure there's nothing non-physical going on in our brains. Nothing else in the universe works like that; our brains are constructed out of perfectly normal matter and thus appear to run on normal physics; and anything non-physical would make human brains magically special, which is crazy. None of that means we're not conscious, it just means that consciousness is something boring and repeatable once you know how to do it.
Prove otherwise and I won't refuse to believe you, but I'm not going to expect that to happen.
There are some scientific indicators, though, that there may be more to the human mind than physics as we know it, for example the constant long-term failure to create general ("strong") AI even at the level of an utter moron. It seems this is either excessively hard or impossible.
I have mostly the opposite impression. We know the brain uses neural networks, and we've only really figured neural networks out in the past few years. And in those same few years we've made massive advances across nearly every area of AI. Since neural networks are our only real avenue of attack on AGI (our only example is the human brain, and that uses them), I see the current situation as demonstrating less that AGI is hard and more that neural networks were hard. And our ability to do in software many of the things that were previously human-only definitely demonstrates that those parts of the human brain are reproducible.
AGI might still be hard, but it might also simply turn out to be a matter of combining existing neural networks in the right way. Certainly every other AI problem has had people going "oh, but that's just X" or "that's just Y" (exactly like you did). Why not this one?
Note that "we have no idea how the brain works" doesn't mean we can't reproduce it. The neural networks involved in AlphaGo, for instance, are completely inscrutable; we have no idea how they work or why they evaluate any given move they way they do. Yet they demonstrably work just fine for playing Go.
[...] but having jobs for 10-15% of the population is not going to keep the current society-models going.
Yeah. None of these attempts to predict exactly where the limits of AI are will change anything when the limits are clearly high enough that we've got a problem.