What about borderline content such as non-pornographic nudity, sexually explicit drawings of imaginary minors, and pornographic images of adults who look like teenagers? It's likely that such material will be branded as "child pornography", leading to the suppression of images that are legal in many jurisdictions, including the United States.
Once service providers start censoring content based on third-party reports of alleged child pornography, it becomes much easier to suppress other content as well. Organizations such as the RIAA and MPAA would love to be able to flag arbitrary content as infringing and have ISPs block it automatically, bypassing even the need to file DMCA takedown notices. Think of how often YouTube videos are incorrectly flagged as examples of copyright infringement, extend this to every ISP that checks against Google's database, and you can see the problem.
ISPs that participate in this system delegate the right to make judgment calls on material that isn't obviously illegal to the maintainers of a central database, whose judgment may or may not be consistent with local law. Anything in the database is presumed illegal regardless of its actual legal status, and ISPs simply follow along instead of deciding individually whether the content is likely to survive a legal challenge. Once the system becomes widespread, ISPs may even feel compelled to participate in order to avoid secondary liability for content posted by their users.
This is yet another example of a worrying trend: content alleged to be illegal or infringing is removed without due process, often with little regard for the law and relevant jurisprudence. It's no way to run a network that for many has become a primary means of communication.
Internet users deserve better than to have their content blocked according to extralegal judgments with perhaps no bearing on local law, little or no chance of appeal, and no way to establish legal precedents protecting certain kinds of content.