First of all, never call your product a "competitive product". You know what that means? Essentially you're saying "the others are just as shitty, so why try harder?" Another thing: the message is not what you say but what your audience hears. It's nice that you feel your customers have a seat at your table, but that feeling never reaches them. They do not feel that way. And if you care about how your customers think of you, that is what matters.
One thing is certain: Goodwill goes a long way, and it takes a long, long time to rebuild from ruins. And let's be honest here, Comcast's goodwill is in the gutter. You have a long uphill battle in front of you if you really care.
So the lack of complaints is simply due to people noticing that complaining only wastes their time without resulting in any measurable improvement?
Guess why most people don't vote anymore...
So instead of telling you to "go to hell" they inform you that they "want you to have a warm, fuzzy feeling"?
Comcast offering a better product cheaper?
What hellhole would you have to live in for this to be even possible?
Complaints are down by 25% in areas where a competitor opened shop and claims to have taken roughly 30 percent of the market from Comcast...
I AM THE EGG MAN.
...phone roots you.
It should be even easier than that.
Archive.org should archive everything, including the robots.txt contents, at each scan.
The content displayed on the archive.org website itself, however, could still honor the robots.txt as it existed at the time of the scan, purely for "display" purposes.
This way changing robots.txt to block search engines would not delete or hide any previous information.
Also the new information would still be in the archive, even if not displayed due to the current robots.txt directives.
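The display-time check described above could look something like the following sketch. It uses Python's standard `urllib.robotparser`; the snapshot contents, the user-agent string, and the `displayable` helper are all assumptions for illustration, not how archive.org actually works.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt snapshot stored alongside the crawl.
ROBOTS_SNAPSHOT = """\
User-agent: *
Disallow: /private/
"""

def displayable(url: str, robots_txt: str, agent: str = "ia_archiver") -> bool:
    """Return True if the archived page may be shown, judged against the
    robots.txt that was in effect when the page was crawled."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

print(displayable("https://example.com/public/page", ROBOTS_SNAPSHOT))   # True
print(displayable("https://example.com/private/page", ROBOTS_SNAPSHOT))  # False
```

The key point is that the check runs against the stored snapshot, so a later change to the live robots.txt cannot hide pages that were crawled while they were public.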
Although it would require more work to do properly, this would potentially allow website owners to retroactively "unhide" previously blocked content in the archive as well.
"Properly" in this case would require some way to verify the domain owner, but that could be as simple as creating another specifically named text file in the website's root path, with content provided by the archive.
That file could be as simple as the old-school verification token that so many other services, such as Google, use, or as complex as a standard that allows date ranges to be specified alongside directives.
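The verification scheme above can be sketched in a few lines. Everything here is hypothetical: the token format, and the idea that the archive would fetch a well-known file (say `archive-verify.txt`, a made-up name) from the site root and pass its body to the check.

```python
import secrets

def issue_token() -> str:
    """Archive side: generate the random token the domain owner
    must publish in the agreed-upon file at the site root."""
    return secrets.token_hex(16)

def verify_token(file_contents: str, expected: str) -> bool:
    """Archive side: confirm the fetched file contains the token.
    In practice the archive would download the file over HTTPS
    and call this on the response body."""
    return expected in (line.strip() for line in file_contents.splitlines())
```

A domain owner who can place the token at the root demonstrably controls the site, which is the same trick Google and others use for site ownership verification.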
But in any case, this would preserve copies of the website for future use, such as for when copyright protection expires.
Despite everyone having a differing opinion on just how long "limited time" should be in "securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries", no one who wants to be taken seriously can deny that this expiration must happen at some point.
Since the vast majority of authors take no steps to protect what will eventually be our property, that task clearly falls on us to secure.
Anything cut to length will be too short.