Well put. I work for a company that provides secure "proof of knowledge" support for web logins. (Proofs of knowledge include text passwords, picture passwords, CAPTCHAs, etc. - anything that requires personal knowledge or a cognitive self-test.) The security model for this SaaS is driven largely by user privacy and security concerns. The actual proof - the password, or whatever - is hashed in the browser and stored on the server as a doubly-encrypted hash. The SaaS never learns the user's identity, only an encrypted code that identifies the user to the requesting website. Connecting the user, the website's user ID, and the proof therefore requires hacking or compromising all three pieces of the puzzle.
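To make the separation concrete, here is a minimal sketch of that split in Python. It is not the actual scheme (which isn't public) - PBKDF2 stands in for the in-browser hashing, and a keyed HMAC stands in for the server's second encryption layer and for the pseudonymous user code. All names here are illustrative.

```python
import hashlib
import hmac
import os

def browser_side_proof(password: str, salt: bytes) -> bytes:
    # The raw proof never leaves the browser; only this derived hash does.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def server_side_record(client_hash: bytes, server_key: bytes) -> bytes:
    # Second layer: the server keys the stored value, so a database dump
    # alone reveals nothing usable. (HMAC is a stand-in for the service's
    # real encryption layer, which is not public.)
    return hmac.new(server_key, client_hash, hashlib.sha256).digest()

def pseudonymous_id(site_key: bytes, site_user_id: str) -> str:
    # The service sees only this opaque code, never the user's identity;
    # only the requesting website can map it back to a real account.
    return hmac.new(site_key, site_user_id.encode(), hashlib.sha256).hexdigest()

# Enrollment: three independently held pieces.
salt = os.urandom(16)         # per-user, sent to the browser
server_key = os.urandom(32)   # held only by the service
site_key = os.urandom(32)     # held only by the requesting website
stored = server_side_record(browser_side_proof("hunter2", salt), server_key)
uid = pseudonymous_id(site_key, "alice@example.com")

# Login: recompute both layers and compare in constant time.
attempt = server_side_record(browser_side_proof("hunter2", salt), server_key)
assert hmac.compare_digest(stored, attempt)
```

The point of the sketch is the dependency structure: recovering a usable credential requires the browser-side derivation parameters, the service's key, and the website's identity mapping together, so no single compromise is sufficient.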
It is even possible (though we haven't rolled this capability out to production yet) for the challenge itself to be encoded by the user in such a way that no one but the user can even know what test is to be performed. I won't say how this is done, as the patent is pending.