Now, in reality, if you diversified the sample enough to account for this, it would end up neutrally activated, and with the right NN it would all come out a wash. That is very hard to do, though, since you would need to account for a lot of community biases.
It would be far more interesting if they used unsupervised training and let the posts group themselves using a technique such as bag of words or other proven methods, more of a sentiment-analysis-style approach (a rough sketch follows). Then you could watch for posts that tend to be 'extremist' in nature, 'low in content' and not worth your time, etc. That would be far more interesting, give you some idea of the quality of a post, and not be prone to community bias, since it would be based on what was actually written rather than on what some reviewer thought of it.
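For what it's worth, here is the kind of thing I mean, just a sketch in Python with scikit-learn; the cluster count and the sample posts are made up by me, not anything Google has said it uses:

    # Sketch: cluster posts by their bag-of-words content instead of
    # relying on human "toxicity" labels. Cluster count and sample posts
    # are placeholders; a real run would use thousands of posts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.cluster import KMeans

    posts = [
        "The earth is flat and NASA is hiding it",
        "NASA hides the truth, the earth is flat",
        "Here is a detailed benchmark of the new kernel scheduler",
        "first",
        "lol",
    ]

    # Bag-of-words features: each post becomes a sparse word-count vector.
    vectorizer = CountVectorizer(stop_words="english")
    X = vectorizer.fit_transform(posts)

    # Unsupervised grouping: no reviewer labels, just the text itself.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
    labels = kmeans.fit_predict(X)

    for label, post in zip(labels, posts):
        print(label, post)

The low-content one-liners tend to fall into a cluster of their own, and near-identical rhetoric clusters together, which is exactly the sort of thing you would want to flag.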
My guess is Google knows this, but also realizes that this will play well with people, because it introduces a confirmation bias into the results. The people who use this in their own communities will suddenly see all those posts they disagree with, and therefore consider toxic, disappear. See, it works!
Instead, what they should be filtering out is all of the posts that spout the same falsehoods again and again, which unsupervised learning would also help with (another sketch below). I want to read ideas contrary to mine so I can debate and learn. What I don't want to read are the thousands of posts spouting the same false rhetoric over and over. That makes reading the comments on any newsgroup almost intolerable, no matter what its political slant.
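That repeated-rhetoric problem doesn't even need labels. A plain TF-IDF plus cosine-similarity pass will flag near-duplicate posts; again this is just a sketch, and the 0.8 threshold is a number I picked, not anything principled:

    # Sketch: flag near-duplicate posts so the same talking point doesn't
    # show up a thousand times. Threshold and sample posts are made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    posts = [
        "The election was stolen, wake up people",
        "Wake up people, the election was stolen!!!",
        "Here is an actual analysis of the turnout numbers",
    ]

    # TF-IDF vectors, then pairwise cosine similarity between posts.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
    sim = cosine_similarity(tfidf)

    # Anything above the (arbitrary) 0.8 threshold is near-duplicate
    # rhetoric and could be collapsed instead of shown thousands of times.
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if sim[i, j] > 0.8:
                print("near-duplicates:", i, j, round(sim[i, j], 2))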