But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.
Back in the '80s David Parnas argued that software verification is fundamentally different from hardware verification precisely because software has an effectively infinite state space with no continuity to exploit, which is the opposite of the argument being made in the article.
That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.
With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.
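To make the contrast concrete, here is a contrived C toy (the guard and constant are mine, purely for illustration): flip a single bit of a limit and the resulting behaviour is not "nearby" at all.

    #include <stdint.h>
    #include <stdio.h>

    /* Contrived: flipping one bit of the limit in a guard produces
     * behaviour that is arbitrarily different, not slightly off. */
    static int within_limit(uint32_t v) { return v < 100u; }
    static int one_bit_off(uint32_t v)  { return v < (100u ^ (1u << 31)); }

    int main(void) {
        /* 100 with bit 31 flipped is 2147483748, so the "same" guard
         * now accepts almost everything. */
        printf("%d %d\n", within_limit(500000u), one_bit_off(500000u)); /* 0 1 */
        return 0;
    }

There is no meaningful sense in which the second function is "close" to the first, which is exactly why extrapolating from a finite test set doesn't work.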
Algorithmic testing of the kind described here will catch some bugs, but provably correct algorithms can still run into practical implementation details that result in arbitrarily large deviations from ideal behaviour. For example: the inertial reference software on the Ariane 5 was, as far as its original Ariane 4 specification was concerned, correct, and the rocket still destroyed itself, because Ariane 5's higher horizontal velocity overflowed a 64-bit-float-to-16-bit-integer conversion whose range check had been judged unnecessary under the old flight envelope.
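A compressed sketch of that failure mode, in C rather than the original Ada, with names and values that are illustrative rather than taken from the actual SRI code:

    #include <stdint.h>
    #include <stdio.h>

    /* The shape of the failure: a float-to-int16 conversion whose guard
     * had been "proven" unnecessary under the old flight envelope. */
    static int16_t pack_bias(double bh) {
        /* Ariane 4 analysis: |bh| can never exceed the int16_t range,
         * so no guard. In C this conversion is undefined behaviour when
         * out of range; in the Ada original it raised an unhandled
         * Operand Error that shut the unit down. */
        return (int16_t)bh;
    }

    int main(void) {
        printf("%d\n", pack_bias(20000.0));  /* Ariane 4 regime: fine */
        printf("%d\n", pack_bias(64000.0));  /* Ariane 5 regime: UB / garbage */
        return 0;
    }

The proof (such as it was) held; the assumptions it was proved against didn't.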
Most problems with embedded systems are of the kind, "The hardware did something unexpected and the software responded by doing something inappropriate." Anticipating the things the (incorrectly assembled, failure-prone, unexpectedly capable) hardware might do and figuring out what the appropriate response is constitutes the greater part of building robust embedded code, and this kind of verificationist approach, while useful as a starting point, won't address those issues at all.
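As a minimal sketch of what that defensive style looks like in practice (the sensor, the limits, and the fallback policy here are all hypothetical):

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical HAL call: returns false on a bus error or missing device. */
    extern bool temp_raw_read(int16_t *out_deg_c);

    #define TEMP_MIN_C      (-40)  /* plausible physical range for this part */
    #define TEMP_MAX_C      125
    #define MAX_STALE_READS 3      /* identical consecutive reads we tolerate */

    /* Returns a validated temperature, falling back to the last good value
     * whenever the hardware does something the driver has no business
     * believing: a failed bus transaction, a wedged (stuck-at) sensor, or
     * a reading outside the part's physical range. */
    int16_t temp_read_checked(void) {
        static int16_t last_good = 25;       /* safe-ish default at power-on */
        static int16_t prev_raw  = INT16_MIN;
        static int     stale     = 0;
        int16_t raw;

        if (!temp_raw_read(&raw))
            return last_good;                /* bus error: don't propagate junk */

        if (raw == prev_raw) {
            if (++stale >= MAX_STALE_READS)
                return last_good;            /* sensor is probably wedged */
        } else {
            prev_raw = raw;
            stale = 0;
        }

        if (raw < TEMP_MIN_C || raw > TEMP_MAX_C)
            return last_good;                /* physically implausible: reject */

        last_good = raw;
        return raw;
    }

Notice that almost none of this is the algorithm; nearly all of it is deciding what to do when the hardware lies, which is precisely the part a verification-of-the-algorithm approach never sees.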