I'm really happy to read this paragraph. I had the same epiphany when I began planning a recipe website that allowed comments without passwords (to avoid login hassle). I also worked a similar system into the backend of an Omegle clone, essentially pairing abusive users (Ctrl+V then exit, Ctrl+V then exit) with a Cleverbot routine until they stopped spamming, sandboxing them away from the greater user base.
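To make the pairing idea concrete, here's a minimal sketch of how that matchmaking might work. The spam heuristic (flagging a user after repeated identical messages) and the names used are hypothetical stand-ins, not the actual code I ran:

```python
# Hypothetical sketch: route users who trip a spam heuristic to a chatbot
# partner instead of a real person, quietly sandboxing them.

from collections import Counter

SPAM_THRESHOLD = 3  # identical messages before we flag the user (assumed value)

class Matchmaker:
    def __init__(self):
        self.message_counts = {}  # user -> Counter of messages they've sent

    def record_message(self, user, text):
        self.message_counts.setdefault(user, Counter())[text] += 1

    def is_spammer(self, user):
        counts = self.message_counts.get(user)
        return bool(counts) and counts.most_common(1)[0][1] >= SPAM_THRESHOLD

    def pick_partner(self, user, waiting_humans):
        # Flagged users get the bot; everyone else gets a real partner.
        if self.is_spammer(user):
            return "cleverbot"
        return waiting_humans.pop() if waiting_humans else None

m = Matchmaker()
for _ in range(3):
    m.record_message("pastebot", "CHECK OUT MY SITE")
print(m.pick_partner("pastebot", ["alice"]))  # cleverbot
print(m.pick_partner("bob", ["alice"]))       # alice
```

The key point is that the flagged user never receives an error or ban notice; from their side the service behaves normally.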
From this thread, I learned this system is called "hellbanning" and that some of its downsides resemble those of honeypots: you have to store useless data, bandwidth goes up because spammers keep posting under the impression that their spam is working, and so on. I think these are fair complaints, but the jury is still out on whether these downsides outweigh the benefits of hellbanning.
Hellbanning represents an entirely new way of handling user-submitted content. The current norm shows the status of every post to the user who created it: "That comment is awaiting moderation," "This has been flagged." Essentially, by giving abusers status reports and feedback, you are grading their work and offering constructive criticism. By obscuring the extent to which their content is shared, they don't know whether their efforts are in vain, and they can't improve their failing techniques if they can't tell what is working and what isn't.
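The core visibility trick is simple to express in code. Here's a hedged sketch (all names hypothetical): a hellbanned user's posts are accepted and rendered normally for the author, but silently filtered out of everyone else's view.

```python
# Minimal hellban visibility sketch: store every post, but only show a
# hellbanned author's posts back to that author.

from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class Forum:
    posts: list = field(default_factory=list)
    hellbanned: set = field(default_factory=set)

    def submit(self, author, text):
        # Always accept the post -- no "awaiting moderation" notice,
        # no signal to the author that anything is different.
        self.posts.append(Post(author, text))

    def visible_posts(self, viewer):
        # Hellbanned authors still see their own posts; nobody else does.
        return [p for p in self.posts
                if p.author not in self.hellbanned or p.author == viewer]

forum = Forum()
forum.hellbanned.add("spammer")
forum.submit("alice", "Great recipe!")
forum.submit("spammer", "BUY CHEAP PILLS")

print([p.text for p in forum.visible_posts("alice")])    # alice's post only
print([p.text for p in forum.visible_posts("spammer")])  # both posts
```

Note the design choice: the submit path is identical for everyone, so there's no feedback channel for the abuser to probe.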
I would enjoy hearing about anyone else's experience with obscuring user content in real-world applications, or any theoretical concerns or loopholes someone just hearing about this can come up with.