What we need, imho, is to mandate that users have the option to control/tweak the material parameters of the algorithms that select what is and isn't shown to them.
Or failing that, at least clear disclosure of which algorithms are running and what they promote/demote for various groups/types of users. (Maybe including the dataset used for training and the objectives the algorithm is trying to maximize, or the like.)
And maybe a small per-post charge beyond, say, 10-20 posts per day.
That revenue could even go towards some independent body that oversees the algorithm disclosures/controls, like the self-regulating industry bodies for advertising, news, etc.
Even if all this affects corporate IP to some extent: IP can't trump disclosing what goes into the food or medicine you sell, so it shouldn't be an issue here either.
The downsides of black-box algorithms are now quite real, and the unintended, weird social/psychological consequences are only going to explode.
Throughout history, every new technology affecting the public has picked up some regulation of this sort. Yet software has somehow managed to stay completely hidden, to the extent that it can't even be examined in court or criminal cases without many hoops to jump through.
The worst that can happen is that blatantly stupid or insecure software gets outed. Plus, people get to discuss issues in the open rather than speculating blindly.
I mean, for e.g., we'll never even know something as basic as why Windows Defender flags Microsoft Office as ransomware! :) Does it do that to other safe software too, or is MS Office actually doing stupid things? Probably both.