Many security bugs are really failures to correctly implement a requirement of the form "No matter what the input to this program is, it must not do X."
This is a special case of Sherlock's theorem:
Once you have eliminated the disallowed, whatever remains, however unintuitive, must be the robust.
It's far easier to debug a sin of omission than a sin of commission. If a piece of code never performs a disallowed action (e.g. leaking memory, acting on unsanitized user input), then all the failures that remain are sins of omission: the program doesn't actually transfer the file requested, out of excessive restraint on some edge case the programmer never even considered.
Well, the programmer needs to get in there and consider the omission in the harsh light of day. Then the specification document needs to be updated.
And questions need to be asked about the user environment when an edge case is tripped three years into a heavy use cycle.
The only way to ship software up front with no failure modes and no functional omissions is to massively gold-plate the validation process, and even that rarely works.
I'm never happier writing code than when I'm subtracting stupid.