A bad guy already knows he's a bad guy, and a good guy doesn't plan anything bad, so any warning will be a false positive.
You forgot dumbshits who don't know shit, who are the primary audience for LLM-based AI.
Tools are tools; they have to be efficient at what they do.
They also have to be fit for purpose. Sometimes that's spelled out explicitly in so many words; in other cases you can simply return or reject things that "don't work".
The responsibility for the actions of the user is on the user, not on the tool.
Nobody said it was on the tool, but sometimes it is factually also on the provider of the tool, and pretending otherwise doesn't change the law. If the provider is negligent, they can share in responsibility. That's how things other than LLMs work; why not LLMs too?
Guns have safeties for safety's sake, even though they can get in your way. Equipment has lockouts. Most things come with warnings. Automobiles are getting automated guardrails like automatic braking, and eventually they won't let you, say, steer into another vehicle, because it's feasible to prevent and there's a public safety interest. There's simply zero justification for the multi-billion-dollar corporations producing and selling access to these LLMs not to institute some guardrails of their own.