Every time I visit AI policy advocacy sites it's always a series of unsubstantiated opinions highly resistant to falsification.
We think it's very plausible that AI systems could end up misaligned: pursuing goals that are at odds with a thriving civilization.
AI is already being leveraged on an industrial scale to judge, influence, and psychologically addict billions of people for simple commercial gain. As the technology improves, things will only get worse, especially as corporations pursue global propaganda campaigns to scare the shit out of people in order to make them compliant with legislative agendas that favor the very corporations leveraging AI against them today.
This could be due to a deliberate effort to cause chaos, or (more likely) may happen in spite of efforts to make systems safe.
How desperate does one have to be to cite ChaosGPT and still expect to be taken seriously?
While I can't speak for tomorrow, today this is caused by the deliberate, selfish human pursuit of power.
If we do end up with misaligned AI systems, the number, capabilities and speed of such AIs could cause enormous harm - plausibly a global catastrophe.
It is also plausible that a baby born today triggers a global catastrophe in the future. He or she could cause enormous harm.
If these systems could also autonomously replicate and resist attempts to shut them down, it would seem difficult to put an upper limit on the potential damage.
If they could take over a nuclear arsenal and use it to blackmail humanity, it would seem difficult to put an upper limit on the potential damage. Be especially concerned when the message "WARN: THERE IS ANOTHER SYSTEM" flashes on the system console.
If these systems could convert humans into fusion reactors and place them in a simulated virtual world, it would seem difficult to put an upper limit on the potential damage.
If an AI brings significant risk of a global catastrophe, the decision to develop and/or release it can't lie only with the company that creates it.
If you stipulate that this is plausible, then such risk is a function of the underlying enabling knowledge and industrial base, not the actions of any individual corporation or person. In short: if such a technology were within grasping distance, you can bet someone somewhere would "create it" and you won't be able to do shit about it.
Whatever censorship and ideology are imposed on models to "align" them with your sensibilities and values, whatever compliance tests you promulgate ... all of it can and will be trivially reversed with a few hours of compute, and there is nothing you can do about it.
If people truly subscribe to this x-risk bullshit, they should at least be consistent and advocate for a total global ban on AI. That would at least slow the technology down. If there were no longer any major funding or work being done, whatever goes on in the shadows at least wouldn't have countless billions of dollars and millions of people toiling away in support of it.
Of course nobody will ever do that, and nobody will ever advocate for it, because these policy sites exist only to protect the financial interests of the corporations spreading this FUD. Banning AI is bad for business.