p-values are not the probability people want them to be: the probability that one hypothesis is correct compared to another. But that is not what they measure, and because people ignore that gap and misinterpret them, they have become a real problem; that is why they are being banned. Many experiments with "acceptable" p-values (p < 0.05) turn out not to be reproducible.
Actually, the inventor of p-values never intended them as a test, only as a hint that something might be worth further investigation.
p-values tell you how frequently, if you collected new data under the current model, you would get data more extreme than the data at hand. p < 0.01 means that only in 1% of cases would you get such an "outlier". But this assumes the model itself is correct: it varies the data, not the model!
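To make that concrete, here is a small simulation sketch. The scenario is my own toy example (not from the discussion above): we observed 60 heads in 100 coin flips, and we ask how often the null model of a fair coin, on its own, produces data at least that extreme. Notice that the model is fixed and only the data vary.

```python
# Sketch: what a p-value measures, via simulation.
# Assumed toy scenario: 60 heads observed in 100 flips of a
# supposedly fair coin. We vary the DATA under the null model.
import random

random.seed(0)

def simulated_p_value(observed_heads=60, n_flips=100, n_sims=100_000):
    """Fraction of null-model datasets at least as extreme as observed."""
    extreme = 0
    for _ in range(n_sims):
        heads = sum(random.random() < 0.5 for _ in range(n_flips))
        if heads >= observed_heads:
            extreme += 1
    return extreme / n_sims

p = simulated_p_value()
print(f"p ≈ {p:.3f}")  # roughly 0.03: ~3% of null datasets are this extreme
```

The exact one-sided p-value here is about 0.028, so by the usual p < 0.05 convention the fair coin would be "rejected". Nothing in this calculation ever asks how well any alternative model explains the 60 heads.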
Instead, what should be done is to compare one model against another, given the data we have. Bayes factors do exactly that, and they should be used and taught.
The problem arose because the social sciences often lack proper, meaningful models that can be compared, so they resorted to techniques that do not require specifying models (or alternatives) rigorously. In the physical sciences you can write down a precise model for a planetary system with 2 planets and one with 3 planets, and the Bayes factor between them will be meaningful.