Your post is a lot of nonsense. AI is probably a bubble, driven by the notion of broad productivity gains across industry that will likely never materialize; that much is true.
However, there is also the possibility that it will be an inflection point that rapidly advances material science, bio-science, and information sciences.
In our largely 'information economy' (since we already outsourced a huge portion of the production of actual goods), the consequences of being left behind in those areas could very well be the difference between a second 'American Century' and slow decline and collapse.
In short, we maybe can't afford AI, but we can't afford not to either. The most critical thing will not be regulating what people do with AI, but controlling the market contagion when it fails to produce short-term results and the bubble bursts. That much will happen. The problem is how to do that without also choking off the investment that funds it all. There are not many clear policy options for that.
Passing resolutions, not laws, that simply say the government will not bail out organizations or individuals that suffer AI-investment-related losses could maybe prevent further growth of the systemic risk that NVIDIA, OpenAI, the hyperscalers, and data-center developers, all with hands in each other's pockets, already represent. The VCs that want to bet big and possibly win big still can; Vanguard just won't be so pressured to bet as big with your 401k...