For similar reasons (I work on projects with serious security demands) I've gone down the rabbit hole of getting local LLMs working, and I'm pretty happy now, but it was quite a journey.
We now have stuff like Ollama and LM Studio that can run models locally, open models with sufficiently large context windows, and things like privateGPT as glue to feed in your own documents. Or AnythingLLM if you want an all-in-one solution (though in my tests it didn't work quite as well).
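To give a sense of how little glue code this takes nowadays: Ollama exposes a plain HTTP API on localhost, so you can query a fully local model from a few lines of stdlib Python. This is just a sketch assuming Ollama's default port (11434) and a model you've already pulled; the model name here is only an example.

```python
import json
import urllib.request

# Ollama's default local endpoint; no data leaves your machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    # stream=False asks for a single JSON reply instead of a token stream.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_local(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming replies carry the generated text in the "response" field.
        return json.loads(resp.read())["response"]

# e.g. ask_local("llama3", "Summarize this report in three bullet points: ...")
```

Point being: for anything with security requirements, the whole round trip stays on your own hardware.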
We're getting there. In a few years, we'll have local AI integrated into our desktops.
I personally wouldn't invest in any AI-as-a-Service companies anymore, at least not for generic models. Maybe for models custom-tailored to specific use cases. But for the generic "write a report for me" LLMs, there's really no need for any government to rely on cloud services anymore.