True, but sometimes the current models have been closely tuned to appear to match reality while in fact being overcomplicated.
Overcomplicated, with regard to a scientific model, means that there exists an actual alternative model which is equally predictive and simpler.
Take quantum mechanics. It really looks to me like somewhere along the line we ignored Occam's Razor and jumped to a more complicated model.
Really? Where is the more parsimonious model that handles everything QM does?
I believe this happened when we decided to take particle statistics and claim that they apply to individual particles. So instead of a particle having a position, it has a position probability field, and so on.
IIRC, there are some important predictive differences between particles-as-waveforms and particles-as-classical-objects-with-difficult-to-determine-properties, and the former, not the latter, predicts behavior in the real world better. I'm certainly aware that QM is complicated enough to make people's brains hurt thinking about it, but I'm not at all convinced that the complication is unnecessary.
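To put a concrete equation behind that (this is standard textbook material, not anything specific to this thread): the "position probability field" is the Born rule,

$$P(x) = |\psi(x)|^2,$$

and the predictive difference shows up in interference. For a particle that can reach a detector by two paths, QM predicts

$$P_{\mathrm{QM}}(x) = |\psi_1(x) + \psi_2(x)|^2 = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\big[\psi_1^*(x)\,\psi_2(x)\big],$$

while a classical object whose path is merely unknown gives $P_{\mathrm{cl}}(x) = P_1(x) + P_2(x)$, with no cross term. Double-slit experiments show the cross term, so the extra complication buys real predictive power.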
Compare that to QM, where the basic premises are not well defined, and where one really can't say that it is the simplest possible model that supports a small number of well-supported premises.
(1) Models don't support premises; they (if "premises" are relevant at all) flow from them. Models support predictions.
(2) It's not as important, scientifically speaking, that a model flow from a small set of premises as it is that it provide useful predictions. Complexity is only an issue when choosing between models that are equally predictive.
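(As an aside, one standard way statisticians make this tradeoff explicit, not something anyone in this thread invoked, is an information criterion such as AIC,

$$\mathrm{AIC} = 2k - 2\ln\hat{L},$$

where $k$ is the number of free parameters and $\hat{L}$ the maximized likelihood: complexity is penalized, but only in trade against how well the model fits the data. Same point, formal dress.)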
Now let's say I come up with a simpler model that is a closer match to experimental data than early QM was, but not as good a match as the latest, really complicated and heavily tuned QM models. It would be largely ignored by most theoretical physicists, since the current model is better.
Actually, that's not necessarily true. If it were in all cases equal to or worse than current models, it would certainly be ignored. But if it were not as good as current models overall, yet simpler and better at predicting behavior in some area, it would have a chance to be taken at least somewhat seriously as something which might be the basis of a viable alternative approach.
But, yes, if your new model is nothing but a giant step backward from where we are now in all ways except simplicity, then it's not going to fly. And why should it?
The problem, basically, is that the modern models are so complicated and so highly tuned that it is not viable to devise a substantially different model whose results are just as good as the current ones.
That's not a problem. What you are basically doing is complaining that our current models explain reality very well, so it is hard to come up with something radically different that explains reality better. But, you know, producing models that explain reality very well is the goal of science, not a problem with science.
There is no way to get more than a small team to work on such a model.
Sure there is, which is why people work on, say, superstring theories, which haven't yet shown any predictive advantages over the theories they hope to generalize and displace.