I applaud the direction the Internet Archive is taking. They should fully implement it.
A year ago one of my domain names was stolen through the negligence of the registrar. The site was a non-profit resource that I had maintained for the past 15 years. The squatter who now owns the name put a deny-all rule in robots.txt. As a result, a website with a fair amount of useful information has disappeared entirely, both from the live web and from the archive record.
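For context, the "deny all" rule described above is typically written like this (the exact file the squatter deployed is an assumption; this is just the standard form that, under the Archive's historical policy, retroactively hid a site from the Wayback Machine):

```
# robots.txt at the domain root — blocks every compliant crawler
User-agent: *
Disallow: /
```

Because the Internet Archive historically applied the current robots.txt retroactively, a new owner publishing this file could make all previously archived snapshots of the old site inaccessible.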
I do not see sufficiently important reasons to remove information that was once publicly accessible. There are some reasons, but the public benefit of retaining access to all past public information outweighs them all.
Utter nonsense. If your "website with some quantity of useful information" was in any way important, you would have republished the content on a new domain, which search engines would have indexed quite quickly. Archive.org is not a substitute for your obvious lack of due care in taking backups of your data.
Archive.org is NOT an official archive of the web. If they stop respecting robots.txt, why should anyone else keep respecting it? They claim to be special, but they are no different from any other search engine or data harvester.
Canadian con-man Conrad Black
I feel I must correct you. You actually mean: "British con-man Conrad Black."
He hasn't been Canadian for 15 years (and good riddance too, why they let him back in is a mystery to me).
They were outed by a competitor (Nissan).
Nissan subcontracted the manufacture of some kei cars to Mitsubishi and decided to do some independent emissions testing. So Mitsubishi only had to "admit" anything because they had been caught red-handed.
MAC user's Dynamic Debugging List Evaluator? Never heard of that.