The hype, and the fear, over current AI are wildly overblown. Perhaps there are too many stories of computers magically acquiring sentience, as in the Terminator movies. I suspect we've grossly underestimated what it takes to achieve real, general intelligence.
Being able to play chess better than any person, ever, is not enough, not even close. All that chess computers have really shown us is that chess is amenable to brute-force computation. We ought to have known that all along. Some scientists genuinely hoped that the ability to play chess would translate into, or perhaps derive from, general intelligence.
Driving a car is another task that's been touted as AI. Not only is the way self-driving systems do it unintelligent, they can't even do it reliably. They don't understand that they are driving a car; they only respond to visual stimuli. They fail in ways a human driver never would.
And finally, there are the LLMs. LLMs are not in the least intelligent. They merely bandy words about; they don't understand them.
The way OpenAI is behaving is typical of these kinds of businesses. Exaggerate the capabilities of your products to the point of lying, as Tesla did with the self-driving it sells. Try to silence employees to keep those lies from being exposed. As for the "serious risks": these are not risks that AI is going to get loose. No, these are risks that potential customers will believe the hype, and get hurt when the stuff can't perform at the level of expectations the sellers sold everyone on. Again, Tesla's self-driving is a case in point: when Tesla's AI misses, sometimes people die.