We already live in a world in which what gets aired or published is at the ultimate discretion of the platform owners. It is already like that for a scientific journal, a newspaper, or a cable TV network, just to cite a few examples. So that is already the status quo. Facebook also rejects users' posts that do not fit its policies (though different rules currently apply to paid ads).
I for one wouldn't mind if the company that owns a platform did a basic screening (according to transparent rules) of what gets published on it, paid or not. For example, filtering out hateful language would not be hard to enforce, and it would go a long way toward promoting civil conversations, with people reasoning with their cortex instead of reacting out of (easily manipulated) anger.
Would you rather have the government be the "arbiter of truth", or perhaps a number of other entities appointed ultimately by the people? I don't know ... maybe other solutions would be better than having the company itself decide, but are we making the perfect the enemy of the good here?
Also remember that whatever these companies are doing is in pursuit of their own business, not of an abstract free-speech ideal, whatever they'd like you to believe. So again, as long as the rules are clear and transparent, some basic extra screening isn't necessarily a bad idea in my opinion.