"There are lots of cases where you could show a drugs effect on some cells cultured in a dish. You can show what a Protein will do with a computer simulation."
I haven't worked directly in pharma R&D in years, so what I'm saying below is no doubt out of date. With that caveat: traditionally, the cell line work is exactly what you would do before moving into animal models. It was not uncommon to observe very different results in animals than you observed in cell lines. It was, and still is, not uncommon for animal models to fail to capture all of the nuance of what a drug compound will do, or fail to do, in humans. This holds even when the drug being tested was not first in class.
I don't know how much can be done with computer modeling nowadays, but when I was involved in this the hype vastly overshadowed the delivery. (There was this common hype that shallow but broad and heterogeneous data could somehow be aggregated into reliably predictive models.) I've no doubt the models are much better than they were a decade ago. I'd be surprised, though, if I live to see them live up to their promise. The article also mentioned organ chips. I've read a bit about them but don't know enough to comment. But I'll allow that new technologies come along and, at some point, may obsolete all sorts of ways we did things in the past.
Anyway, my own take is that FDA isn't stupid. If they are going to consider cases, it is likely because they can envision scenarios where animal model testing of drugs is unnecessary, and my guess here is the impetus is less about rodent/dog/chimp lives saved and more about paving the way for potentially more rapid development of something perceived as low risk but life saving. Pharmas, to the extent that they think they can plausibly go this route, will be pleased at the possibility of lower costs and shorter timelines. But I expect less sanguine researchers are going to see this as mostly a false trail, where counting on this option leads to dead ends and subsequent timeline delays, and I expect there will be a lot of internal bickering over cost/benefit. Drug R&D is hard, and FDA isn't going to just open the floodgates to this sort of thing.
To this point, the article notes that currently something like 90% of drugs fail in clinical development due to safety or efficacy (there's no other way to fail). While this might bolster the claim that animal models aren't providing much value, it should also give us some pause to consider just how hard it is to predict what will happen in humans even with cell lines, computer simulations, *and* animals. And these failure rates have been around a long time. What would be more compelling in my mind is evidence that, after introducing one or more of these new methods, the failure rates in human clinical trials actually decreased. Then maybe you could argue that some of the historical steps have been adequately supplanted and could be phased out.