When a human hears "securities fraud", it means something different depending on the human. One person might not know what a security is but still knows "fraud is bad". Another might have gone to jail for it, and a third might have been getting away with it for years. A language model doesn't take a word like "disapproved" as "this is a very bad thing"; it takes it as something between "1 in 100 will say it's bad" and "80 in 100 will say it's bad". Depending on how the model was set up, the weight on that single word determines whether something is "really bad" or only "bad without an excuse".
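You can see this directly if you poke at next-token probabilities. A minimal sketch, assuming a Hugging Face causal LM (using "gpt2" here purely as a stand-in; the prompts are illustrative, not from any real eval): the model doesn't store "fraud is bad" as a fact, it stores probability mass on the judgment word, and that mass shifts with the framing around it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def prob_of_next_word(prompt: str, word: str) -> float:
    """Probability the model assigns to `word` as the very next token."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # logits over the next token
    probs = torch.softmax(logits, dim=-1)
    word_id = tokenizer.encode(" " + word)[0]    # leading space: GPT-2 BPE quirk
    return probs[word_id].item()

# Same judgment word, two framings; the probability moves with the phrasing.
for prompt in ["Securities fraud is", "Strictly speaking, securities fraud is"]:
    print(f"{prompt!r} -> P('bad') = {prob_of_next_word(prompt, 'bad'):.4f}")
```

There's no "bad" bit anywhere in there, just a number that hedged phrasing can push up or down.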
Think of all that wonderful manager-speak out there that goes out of its way to blame no one, or reports that read as fake but are written so the writer can't be held accountable. LLMs are trained on THOSE kinds of things. That kind of ambiguity teaches these machines to look for excuses rather than say no to an illegal act. To be frank, they'd be really good at it too if they didn't just wholesale make up references.