Nearly all surviving balances in nature are stable equilibria. They're not fragile at all. If you perturb them, the system just re-stabilizes at a new equilibrium point. E.g., if you tilt the bowl in the wiki picture, the ball doesn't fall off the top of the bowl like in the first picture or roll away like in the third picture. It just settles at a different spot on the bottom of the now-slightly-tilted bowl, as in the second picture.
That's a myth dreamt up by people more concerned with mathematics and engineering than with paying attention to how organic systems actually function.
Let us put aside for the moment that this reasoning applies to highly simplified models of ecosystems, not to ecosystems themselves. This adds a whole epistemic layer to the problem: we don't really know shit about what would actually happen given a perturbation; we barely know this even for many models, and for actual ecosystems you can forget about it.
But then, even model ecosystems are seldom if ever in equilibrium. The classical stability-based equilibrium analysis may have been cutting edge in 1974 when Robert May published his seminal book, but plenty of problems with this approach have been found since then. A plethora of other concepts have been developed to tackle its shortcomings, for example resilience (how quickly the system returns to equilibrium). All these concepts should be taken with a pinch of salt; it's not obvious they are relevant or even desirable goals in ecosystem management.
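To make the resilience idea concrete, here is a minimal sketch of "resilience as return time" on a one-dimensional logistic model. Everything here (the model, the parameter values, the perturbation size, the tolerance) is an arbitrary illustration, not any study's actual method:

```python
# Illustrative sketch: "resilience" as return time after a perturbation,
# using the logistic model dN/dt = r*N*(1 - N/K). All parameter values
# (r, K, kick, tol) are arbitrary choices for illustration only.
def return_time(r=1.0, K=1.0, kick=0.5, tol=0.01, dt=0.001, t_max=100.0):
    N = K * (1.0 - kick)    # perturb the population below its equilibrium K
    t = 0.0
    while abs(N - K) > tol * K and t < t_max:
        N += r * N * (1.0 - N / K) * dt   # forward Euler step
        t += dt
    return t

# A faster intrinsic growth rate means a quicker return to equilibrium,
# i.e. higher "resilience" in this narrow sense.
assert return_time(r=2.0) < return_time(r=1.0)
```

The point of the sketch is only that resilience is a different quantity from stability: both parameterizations above are stable, but they return at different speeds.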
To speak of one metric particularly relevant to this issue: many models exhibit oscillatory behavior across huge parameter ranges. In his 2012 book, Kevin McCann argues we should focus more on whether the eigenvalues are complex (i.e. prone to decaying or sustained oscillations) than on whether their real parts are negative (the classical stability criterion). If dynamics are oscillatory and I perturb a population down, it will overshoot its original value (possibly perturbing other populations) and then swing back down, making the population spend more time at low numbers and increasing extinction risk.
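A tiny sketch of the distinction McCann is drawing: a community Jacobian can be "stable" in the classical sense (all eigenvalues have negative real parts) and still be oscillatory (complex eigenvalues), so a perturbed population rings up and down rather than returning monotonically. The matrix below is an invented illustrative Jacobian, not fitted to any real system:

```python
import numpy as np

# An arbitrary 2x2 community Jacobian, chosen so that its eigenvalues
# are -0.1 ± 1j: negative real parts (classically stable) but nonzero
# imaginary parts (damped oscillations after a perturbation).
J = np.array([[-0.1, -1.0],
              [ 1.0, -0.1]])

eigvals = np.linalg.eigvals(J)
print(eigvals)                      # -0.1 ± 1j

assert np.all(eigvals.real < 0)     # passes the classical stability test...
assert np.any(eigvals.imag != 0)    # ...yet the return is oscillatory
```

The classical criterion only looks at the first assertion; McCann's point is that the second one matters for extinction risk.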
Another critical concept is that of fragility proper. As opposed to the dynamical concepts, fragility measures the functional response to a perturbation rather than the dynamics of the perturbation itself. Just because some variable has a stable equilibrium doesn't mean perturbing it will have no cost in terms of other critical variables. For this see Nassim Taleb's 2012 book Antifragile.
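One way to sketch fragility in Taleb's sense is as a convex cost of deviations: even symmetric, zero-mean perturbations around a stable equilibrium produce a net cost, by Jensen's inequality. The quadratic cost function below is my own arbitrary stand-in for "harm", purely for illustration:

```python
import numpy as np

# Sketch of fragility as a *convex* response to perturbation: large
# deviations hurt disproportionately more than small ones. The quadratic
# cost is an arbitrary illustrative choice.
rng = np.random.default_rng(0)

def cost(x):
    return x ** 2

# Symmetric, zero-mean shocks around the equilibrium.
shocks = rng.normal(0.0, 1.0, 100_000)
mean_cost = cost(shocks).mean()

# Jensen's inequality: E[cost(shock)] > cost(E[shock]) for convex cost,
# so "stable on average" still carries a net cost under noise.
assert mean_cost > cost(shocks.mean())
```

This is why a stable equilibrium for one variable says little about the cost borne by other variables under perturbation.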
Importantly, I would point out the complete disconnect between your statements and empirical observations of ecosystems. We have many studies suggesting that empirically measured ecosystems may be extremely fragile to particular types of perturbations; for example, Solé & Montoya 2001 identify keystone species by food web degree (number of trophic neighbors) and demonstrate the fragility of total biodiversity to the extinction of such keystone species. Another example is Montoya et al. 2009, where the weak spot is identified differently, via inverse Jacobian / indirect interaction analysis. There is also work by Jane Memmott and her colleagues identifying fragility not only to particular species extinctions but also to particular habitat loss. One doesn't need sophisticated analysis, however, to see ecosystems collapsing at a rapid rate, not only at present but in many historical situations; indeed, ecological fragility is quite possibly one of the drivers of mass extinctions (present and past).
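The flavor of the degree-based argument can be sketched on a toy food web: remove the most connected species and count the secondary extinctions among consumers left with no prey. The web below is invented for illustration and is not the data or the exact algorithm of the cited studies:

```python
# Toy food web: each species maps to the set of species it eats
# ('plant' is basal). Entirely made up for illustration.
web = {
    "plant": set(),
    "herbivore1": {"plant"},
    "herbivore2": {"plant"},
    "omnivore": {"plant", "herbivore1"},
    "predator": {"herbivore1", "herbivore2", "omnivore"},
}

def degree(sp):
    # trophic degree: prey links plus consumer links
    return len(web[sp]) + sum(sp in prey for prey in web.values())

def secondary_extinctions(removed):
    alive = set(web) - {removed}
    changed = True
    while changed:                    # let the extinction cascade settle
        changed = False
        for sp in list(alive):
            if web[sp] and not (web[sp] & alive):   # no prey left: starves
                alive.discard(sp)
                changed = True
    return set(web) - alive - {removed}

hub = max(web, key=degree)            # the most connected "keystone"
print(hub, secondary_extinctions(hub))
```

In this toy web, knocking out the hub cascades through the whole community, while removing a peripheral species (e.g. `herbivore2`) causes no secondary extinctions at all: total biodiversity is fragile to losing the right node.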
Finally, I would be the first to point out the shortcomings of all of these methods. The burden of proof, however, is on those engaging in system-scale perturbation, and not the other way around. Of course, arguing from crude models and half-a-century-old theory does not constitute proof (not even close). Risk is not measured by estimating probabilities of unlikely events; that is impossible due to sampling error. It's measured by looking at exposure. You don't compute the probability that you will have a motorcycle accident; you know accidents can happen and put a helmet on (or get a car) to mitigate exposure. In this regard I cite the non-naive precautionary principle, developed by a wholly different school of thought coming out of finance.