Yeah, but you can keep them from doing it again.
The reason people don't use these well-known techniques is very simple: it takes time and effort, and people are lazy. So until the customer tells them to, they won't bother.
Which brings me to my biggest objection to this proposed contract. There are lots of documentation requirements, and no assignment of liability. Documentation is expensive to produce, and much of this I really don't care about. (Exception: the document on how to secure the delivered software, and the security implications of config options, is an excellent idea.) For most of the documentation requirements, I don't really need to hear how you plan to do it; I just need to know that, if you screw up, you're going to be (at least partially) financially liable. And yet the contract fails to specify that. What happens when there *is* a security breach, despite all the documentation saying the software is secure? If the procedures weren't followed, then that's obviously a breach of contract; but what if there was a problem anyway?
I actually like designating a single person in charge of security. Finding someone to blame after the fact is a horrible idea. However, having someone whose job it is to pay attention early, with the authority to do something about it, is an excellent way to make sure security doesn't just fall through the cracks. By requiring their personal signoff on deliverables, you give them the power they need to be effective. (Of course, if management inside your vendor is so bad that they get forced into just rubber-stamping everything, that's a different problem. But if you wanted to micromanage every detail of how your vendor does things internally, why are you contracting to a vendor at all?)
I don't agree with the lazy comment there. The reason poor coders release insecure code is that they are lazy. For the rest of us, it is generally because we are told we MUST release X features by the go-live date. The go-live date will not slip under any circumstances, and X features are non-negotiable for it. The project manager (not the development PM, the project owner) has assigned a certain period for testing, but this testing is never SIT (system integration testing) or the like; it is usually UAT of a very non-technical nature, and the devs' time is spent on feedback from UAT. Development itself has virtually no proper regression / SIT / design time factored in. The development team are never asked how long something will realistically take; instead, some non-technical person tells them how long they have and then tells them to figure out how to make it happen.

Specs will change continuously throughout the project, so a design approach from the beginning will be all but useless at the end after numerous fundamental changes. (I'm getting this on a project I'm working on now: I had my part finished, fully tested and ready to deploy about three months ago; then came change after change after change. I'm still doing dev, and if I mention that I need time to conduct SIT / regression testing, I'm told "but I thought you fully tested it already a few months ago?")

This leaves a dev with a fast-approaching deadline, no authority to say "no, this won't give us enough time to test properly", and an emphasis on being feature complete rather than a few features down but fully tested and secure.
This of course does not even touch on the question of what happens if a third-party library or other externally sourced software has vulnerabilities. Can you in good faith sign off and guarantee that a piece of software is totally secure without knowing how third-party libraries, runtime environments, or whatever else were developed? This is not isolated to open source; try holding MS liable for a security vulnerability that was uncovered after you deployed and see how far you get. This takes us out of the realm of absolutes and into the realm of "best practices". So how good is a contract that expects the signatory to follow "best practices" with regard to security? Who determines these best practices? The contract has two possibilities: either it is too strict to be practically meaningful (people will still sign if that is the only way to get employment, but it will not make the software any more secure), or it is too vague to have any meaning.
The bridge analogy works very well, because the conditions software developers work under would be unheard of while building a bridge. Imagine a civil engineer having to sign off on a contract that went something like: "We reserve the right to change the completion date, including moving it forward, any time we see fit. A panel of marketers will determine how long the bridge will take to complete, with no consultation of any technical people, and any timeline set is non-negotiable. We can add new features that you are currently not aware of at any stage of the build process, right up to a week before the completion date. If we so choose, we can tell you mid-project to change the bridge from an arch to a suspension bridge, and the deadline shall not move. Up until the completion date, the marketing team will hold internal discussions about whether or not the bridge needs to open for ships; if they decide it does, then you must ensure that this conforms to marketing requirements without missing go-live. And you need to sign off that you will be personally responsible if there are any structural defects."