Monitoring their blacklist for your IPs is not "hard"
Neither is distinguishing between "having open relays", "sending perfectly legitimate e-mail to addresses that have a new (domain) owner" and "sending spam", but they don't do it. You will always be slandered (called a "spammer") and your business disrupted by their blacklisting, even if your hosts never sent a single spam e-mail. Last time I checked, they would even blacklist you for having a vacation responder at the address they send their probes to, and on one occasion they kept blacklisting us with the following reason (i.e. the probes that prolonged the blacklisting produced these log lines):
postfix/smtp[....]: XXXX: to=, relay=XXXX:25, delay=[...] status=bounced (host XXX said: 571 Your IP is BLACKLISTED at UCEPROTECT-LEVEL 1 - See: http://www.uceprotect.net/rblcheck.php?ipr=XXX (in reply to RCPT TO command))
So basically they extended the blacklisting because we were already blacklisted; at least that was the reason given in the logs (which we were supposed to use to find a problem on our side).
In fact the problem was that a user had registered with us many years ago using an address at a domain that had changed owners in the meantime and was now used as a spam honeypot. How do we "debug" that, let alone prevent it? And why do we need to be "punished" with a blacklisting when we obviously did nothing wrong? Or should we demand that our users tell us whenever their e-mail provider sells a domain or goes belly-up?
What is usually ignored by people in this thread is the simple fact that no spam e-mail is required to get you blacklisted: they don't seem to classify e-mail at all. That needs to be understood.
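To be fair to the original quote, the mechanical part of monitoring a DNSBL really is simple: you reverse the octets of your IP, append the list's zone, and do a DNS lookup. A minimal sketch (the zone name below is UCEPROTECT's level-1 list as I remember it; treat it as an assumption and check their site for the current zone):

```python
import socket

def dnsbl_name(ip, zone="dnsbl-1.uceprotect.net"):
    # DNSBL convention: reverse the octets and append the list's zone,
    # e.g. 192.0.2.1 -> 1.2.0.192.dnsbl-1.uceprotect.net
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip, zone="dnsbl-1.uceprotect.net"):
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True   # an A record (typically 127.0.0.x) means "listed"
    except socket.gaierror:
        return False  # NXDOMAIN means "not listed"
```

Run is_listed() for each of your outgoing IPs from cron and alert on a change. Of course, knowing you are listed is the easy half; the whole complaint above is that there may be nothing on your side to fix.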
If you don't want to be blacklisted, then stop sending spam. Simple.
You're an ignorant fool. Unfortunately, too many sysadmins are just as ignorant, so they trust these badly run (possibly outright malicious) services. We have never sent a single spam e-mail in 12 years of doing business online, yet we have been blacklisted several times by UCEprotect because they recycle old domains (which our users had used to register on our site) as spam honeypots. They wasted countless hours of our time for nothing.
A commercial (or open source) forum suite has had way more eyes looking at it than your home-brewed solution.
That's both good (theoretically better code) and bad (large-scale attacks the moment an exploit is out in the wild). In practice, a decent programmer can easily write a safe, simple forum for themselves, while those who simply trust off-the-shelf solutions like phpBB get hit regularly by exploits.
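To make "safe and simple" concrete: the classic exploit class in forum software is SQL injection, and it is avoided entirely by parameterized queries. A hypothetical sketch of a home-brewed forum's storage layer (table and function names are mine, for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, body TEXT)")

def add_post(author, body):
    # Parameterized query: user input never becomes part of the SQL text,
    # so an injection attempt is stored as harmless plain data.
    conn.execute("INSERT INTO posts (author, body) VALUES (?, ?)", (author, body))

# A malicious post body is just data, not executable SQL.
add_post("alice", "hello'); DROP TABLE posts; --")
rows = conn.execute("SELECT body FROM posts").fetchall()
```

The same discipline (plus escaping output into HTML) closes most of the holes that large, feature-heavy packages keep reintroducing.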
They could start by actually deleting deleted content
They could, but why should they put themselves at a disadvantage relative to Google, every other corporation that buys such data, and the NSA, all of whom most certainly do not delete things in the way you'd like them to?
Hardware is so cheap
But maintenance isn't, especially not people with 24/7 availability to fix problems with your hardware. And don't underestimate the huge task of making a system fault-tolerant / highly available.
1,08s since a query to the server for each image is still required
That's not the case if the images came with an "Expires" header or similar: browsers will simply reuse them without any network operation. You can verify this with the built-in network/header debugging tools in any major browser.
Software production is assumed to be a line function, but it is run like a staff function. -- Paul Licker