Actually, "sound" money by definition is unstable, and contributes regularly to economic collapses. Like the Great Depression ( a credit crunch, if they couldn't print money, it would've gone MUCH worse).
It was also founded on slave labor; did you miss Jim Crow (slavery by another name)? And as we found out, deflation... is rough. And let's not mention the rampant IP theft by the US.
And finally, on that sound money system... it works right up until one country says "fuck it" and goes to war. That's what led everyone to drop it: if they didn't, they were conquered by those who did. So everyone did, in World War I.
All of which are no-ops.
Unconstitutional? Have fun proving it.
Withhold funds from states? Illegal, and they are losing in court almost every time. Should they ever actually prevail and overturn the Dole decision, that opens a Pandora's box they are not looking forward to. To be clear, if they prevail, the next Democrat can then unilaterally withhold, say, all federal funding to a state until it legalizes abortion or enacts DEI in state government, among other things.
Such lawsuits will go nowhere, and in any case, the only states that matter are California and New York. Texas maybe, but ironically they might actually be on board with regulating AI, to ensure it has a far-right fascist bent; they were the biggest proponents of attacking Section 230 protections so they could assault most social media companies for banning violent conservative Reich wingers, before the Reich wing subsumed those same companies.
So yeah, basically a no-op, at least from the point of view of the companies being regulated, because the regulations will stay in place while litigation is ongoing, and the civilized folks can stall the lawsuits until the current regime is replaced with a more... reasonable one.
Because then the understanding that went into that code is nonexistent. By definition, nearly all LLM-generated code is tech debt right out of the gate: a human didn't write it, and thus it is not understood by anyone.
And since the EXACT same series of prompts will produce different code each time, I can't hand my series of prompts to anyone else to implement anything. At least when I give specifications to different developers and get different code back, I can go ask the devs how they arrived at it. Generally this reveals missing assumptions, wrong assumptions, incorrect understandings, or different understandings, which can then be reconciled and iterated on.
With LLMs this entire process happens inside that black box, so there is absolutely no way to understand how it arrived at the output. Any flawed assumptions, missing assumptions, incorrect understandings, or different understandings on the LLM's part can NOT be ascertained; you have to just GUESS what the LLM did wrong and iterate on that, hoping you arrive at the correct output. And since the LLM won't store any of this for future use unless you actively tell it to, you are setting yourself up for more work later.
LLM context is absurdly rigid compared to a human's, because it is still a program. It can't context switch like a human. Even though we know context switching is harmful to engineering productivity, we are still capable of it. LLMs are incapable of it; we have to tell them to switch, hence all the cheat sheets for LLMs floating around ("Assume the role of X. Ignore instructions A, B, C. Include files A, B, C," etc.).
I may not always remember the exact details of what I worked on years ago, but I remember the general gist of it. An LLM will never do that, or worse, will ALWAYS do that even when it isn't applicable; it has no way to tell.
"Who alone has reason to *lie himself out* of actuality? He who *suffers* from it." -- Friedrich Nietzsche