I work for a similarly-sized moneycorp.
I guess we're less top-down than some peers, at least about this stuff. We have a mostly free hand to use LLMs, or not: there's been vague encouragement to experiment with them, and the data security policy of course applies, but otherwise no mandates or even heavy-handed suggestions.
I think the main use here is as a coding assistant, but engineers are expected to support, explain, and defend the code they check in, and the way we work enforces that.
We're building a robot to help with incident response and operational issues, and that's at the "vaguely nice to have" stage. It can pre-populate context for a new incident in various ways - surfacing other recent incidents involving the same application, listing relevant tickets and commits, and summarizing logs and instrumentation - all more or less before a human can even get to the new channel. It usually gets most of that right. But as far as actual troubleshooting goes, the thing is still tripping balls.
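For a rough idea of what that pre-population step amounts to, here's a minimal sketch. Everything in it is hypothetical - the data stores, the app names, and the `prepopulate_context` helper are stand-ins for whatever incident tracker, ticket system, and log pipeline you'd actually query:

```python
from dataclasses import dataclass


@dataclass
class Incident:
    app: str
    title: str


# Hypothetical in-memory stores standing in for the real incident
# tracker, ticket system, and commit history.
RECENT_INCIDENTS = [
    Incident("payments", "p99 latency spike"),
    Incident("payments", "stuck settlement batch"),
    Incident("ledger", "replica lag"),
]
OPEN_TICKETS = {"payments": ["PAY-101 flaky retry logic"], "ledger": []}
RECENT_COMMITS = {"payments": ["a1b2c3 tighten connection pool"], "ledger": []}


def prepopulate_context(new_incident: Incident) -> dict:
    """Assemble the context a responder would otherwise gather by hand."""
    return {
        # Other recent incidents involving the same application.
        "similar_incidents": [
            i.title for i in RECENT_INCIDENTS if i.app == new_incident.app
        ],
        "tickets": OPEN_TICKETS.get(new_incident.app, []),
        "commits": RECENT_COMMITS.get(new_incident.app, []),
        # In the real bot an LLM would summarize logs/metrics here;
        # stubbed with a placeholder string.
        "log_summary": f"(summary of recent {new_incident.app} logs)",
    }


ctx = prepopulate_context(Incident("payments", "checkout errors"))
```

The actual value is just that this lands in the incident channel automatically, so the first responder starts from an assembled picture instead of a blank one.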
Some of that may be because the old-timers are still around - many of us have been here since the company was a nobody startup, and we're the only available domain experts on how things work. We're also performing significantly better than the rest of the extended company, so they tend to accept it when we push back on something. Once the people who built this place are gone and the rest don't have that demonstrated authority, it will probably change.