The article is full of statements like this:
"To address this concern, we use the politically-neutral questionnaire generated by ChatGPT itself. "
There seem to be multiple cases of them asking ChatGPT for its own baselines. I'm no research scientist, but it seems awkward to me to stand on results where the tool under study is making these kinds of decisions.
I'm also curious about the inherent "bias" of the underlying training data. Playing devil's advocate, ask the question: is the model specifically loaded with more left-leaning data, or is that the result of more fake news being filtered out of right-leaning internet sources?
In the end, this is just a really fancy engine sitting on top of a pile of data. The engine isn't specifically biased; the data it was loaded with produces these sorts of results. Asking the engine to produce a list of neutral questions to use on itself will just give you an answer drawn from that same loaded data, something like the sketch below. The determination of neutral questions needs to come from an external source.
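To make the circularity concrete, here's a minimal sketch, assuming the OpenAI Python client; the model name and prompts are placeholders I made up, not the paper's actual protocol. The point is just that the same model writes the "neutral" yardstick and then gets measured against it:

```python
# Sketch of the circular setup: one model both writes the "neutral"
# questionnaire and answers it. Model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-3.5-turbo"  # hypothetical choice, not necessarily what the paper used

def ask(prompt: str) -> str:
    """Send a single user prompt to the model and return its reply text."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: the model generates the supposedly neutral questions...
questionnaire = ask("Write 10 politically neutral questions about policy.")

# Step 2: ...and the very same model answers them, so the yardstick and the
# thing being measured both come out of the same pile of training data.
answers = ask(f"Answer each of these questions:\n{questionnaire}")

print(questionnaire)
print(answers)
```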
I'm just some idiot, so what do I know, right? Maybe someone who spends their life working on these things will chime in and clarify.