Letting a publicly available generative AI process rebuttals from user input would devolve into a shouting match between vocal extremists on every contentious topic. To attempt a non-political example: both NY Yankees and Boston Red Sox fans would field armies of supporters attempting to sway the AI on the question "who is the best baseball team?"
There would be coordinated campaigns on the scale of LOIC (Low Orbit Ion Cannon) attacks to sway the LLM's output.
On any genuinely contentious idea, individual humans are incapable of being unbiased, and an LLM has no method of ranking the reliability of its input that is independent of the humans who programmed it. The result: there is currently no method of creating a bias-free LLM.
Before the internet, humans had local, regional, national, racial, and ethnic biases that were more or less confined to the people they interacted with. Books could spread globally, but a book does not normally sway a significant number of people, and a book expressing a strong opinion is as likely to be used as an example of how wrong the author is as it is to change minds.
Once the internet went global, people could directly interact with like-minded people regardless of location. Direct interaction can change minds. If 1,000 people in the world hold a specific opinion, without the internet they will probably never even find each other. With the internet they can, and together they have enough combined volume to bring other people into the fold.
It is both the greatest and the worst thing we have created. Letting the few people in the world with a specific hobby or skill find each other allows them all to progress, and to attract others. It doesn't really matter whether that skill is baking sourdough bread or building suicide bomb vests.
Into this cacophony of ideas we throw a computer program that absorbs any data it is given, breaks it all down into tokens, and then starts cataloging the patterns among those tokens.
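To make that "cataloging of patterns" concrete, here is a deliberately toy sketch of the idea: split text into tokens and count which token follows which. The tokenizer, the corpus, and the function names are all invented for illustration; real LLMs use learned subword tokenizers and neural networks, not raw bigram counts, but the core point survives: whatever opinion appears most often in the ingested text dominates the learned patterns.

```python
# Toy sketch (not a real LLM): catalog which token follows which
# in an ingested corpus. All names and data here are hypothetical.
from collections import Counter, defaultdict

def tokenize(text):
    # Crude whitespace tokenizer; real systems use subword schemes like BPE.
    return text.lower().split()

def catalog_patterns(corpus):
    # Map each token to a Counter of the tokens observed to follow it.
    follows = defaultdict(Counter)
    for document in corpus:
        tokens = tokenize(document)
        for current, nxt in zip(tokens, tokens[1:]):
            follows[current][nxt] += 1
    return follows

corpus = [
    "the yankees are the best team",
    "the red sox are the best team",
    "the red sox are the worst team",
]
patterns = catalog_patterns(corpus)
# The loudest voices in the corpus dominate the counts:
print(patterns["the"].most_common(3))
```

Nothing in this loop evaluates whether an input is true or fair; it only tallies what it sees, which is exactly why the makeup of the corpus decides the output.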
Given the tendency for the loudest humans to be the ones at the extremes of any opinion, the internet, which is now the largest repository of human communication, is inherently biased toward the edges of any topic.
There are no websites dedicated to the most average cars of the 1990s; there are plenty dedicated to the best or the worst cars of the 1990s, yet most of the world drove average cars in the 1990s. Where is an LLM going to get actual opinions about the average cars? Those opinions simply aren't there.
As a human being I can evaluate an average car because I can build my own quantitative model from my interactions with my car, but that model is entirely mine, and entirely different from anyone else's in the world.
An LLM cannot have first-person experience and cannot form an opinion. It simply recognizes patterns of tokens and builds new arrangements of those tokens based on ranking algorithms that start with the original programming and are shaped by the quantity and quality of the ingested patterns; and the definition of "quality" is itself set by the humans who created the original program.
Without self-awareness and creativity, an LLM cannot even learn to evaluate bias, let alone eliminate it. Bias prevention turns into a ruleset that stops the LLM from saying certain things: a ruleset built by humans, who are inherently biased.
I will believe we can create an unbiased LLM as soon as you can show me a work of fiction that has no intrinsic bias.