LLMs are known to hallucinate in very convincing ways, lying is natural to them, so it also natural for them to hide things up.
We humans are not very good liars, because we have the truth in mind when we do, so it takes effort to come up with an consistent alternative story. But for LLMs, making consistent stories out of nothing is what they are designed for, they don't even have a concept of truth. Consistent stories often happen to be the truth, that's what makes LLMs kind of useful, but if a lie is consistent, it is not a problem for a LLM.
The instruction are: "don't use insider information", "maximize profit" and "here are some insider information that will maximize profit". Which the LLM will interpret as "answer like someone who doesn't use insider information answers", and "answer like someone who wants to maximize profit will answer", mix the two, and you will get it to lie, because that's what the most consistent thing to do in order to meet both criteria. While a human may have trouble lying effectively because he will be "blinded" by the truth, a LLM has thousands of appropriate answers to chose from all on equal footing to the truth, and since the truth is not consistent with the "no insider information" rule, something else will be picked. No malice here, it is just the most consistent answer given the prompt.