A lot to reply to, and a lot of conflicting concepts even though you are essentially correct regarding each of them.
First, encryption alone isn't related to identification/anonymity. Those are two separate things, each needing to be addressed.
In fact, even with public key crypto, signing (essentially encryption run in reverse) isn't required, but it is the only method for true identification on top of the encryption.
For communications to be secured, the only assumed requirement is that third parties be unable to read them. Identifying one or all parties (or not being able to) isn't typically addressed under the umbrella of encryption, so it isn't too surprising that point goes unaddressed.
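The "encryption in reverse" idea shows up most clearly in textbook RSA, where signing is literally the decrypt operation applied first. A toy sketch (the tiny primes here are for illustration only and are completely insecure; real signatures also hash and pad the message first):

```python
# Textbook RSA with toy parameters, to show encryption vs. signing.
# Hypothetical, insecure example primes -- real keys are 2048+ bits.
p, q = 61, 53                 # toy primes
n = p * q                     # public modulus (3233)
e = 17                        # public exponent
d = 2753                      # private exponent: e * d == 1 mod (p-1)*(q-1)

message = 42

# Confidentiality: the *public* key encrypts, the private key decrypts.
ciphertext = pow(message, e, n)
plaintext = pow(ciphertext, d, n)
assert plaintext == message

# Identification: the same math run in reverse. The *private* key
# produces the signature, and anyone with the public key can verify it.
signature = pow(message, d, n)
verified = pow(signature, e, n)
assert verified == message    # only the private-key holder could produce this
```

The point being that the two operations use the same machinery, but only the second one ties a message to a specific key holder.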
Second, while there is a race to the bottom in how fast hardware (or even software) can brute force any form of encryption, this has actually always been true (even before computers!) and is just one of those details we "gloss over" in a high-level discussion of the topic.
A method of encryption, mathematically speaking, is always characterized by a ratio of two numbers: how long it takes to encrypt/decrypt with the proper credentials, and how long it takes to brute force without them (usually a mean time, though worst-case figures can often be provided as well).
The same is true for things like locks and safes/vaults. They carry a rating of how much time would be required to brute force them, whether by blowtorching the thing open or simply trying each combination (if it lacks any protection to slow that down) - whichever method would be fastest.
In the case of encryption, let's take AES-128 as an example: the best known attack requires about 2^126 operations to brute force it (well, last I checked).
A Pentium Pro at 200 MHz required something like 16 or 18 clock cycles per byte of data, which at that speed would put the mean brute-force time at an astronomical figure - many orders of magnitude beyond even billions of years.
Clearly our desktops today are much, much faster than that, and government supercomputers even more so, so the time needed for that many operations is greatly reduced - but still nowhere near zero.
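As a back-of-the-envelope check on those numbers (a rough sketch only; the cycle counts and throughput figures are assumptions, not benchmarks):

```python
# Rough brute-force time estimates for AES-128, using the ~2^126 figure above.
OPS_REQUIRED = 2 ** 126          # approx. operations for the best known attack
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_brute_force(ops_per_second: float) -> float:
    """Years needed to perform OPS_REQUIRED operations at the given rate."""
    return OPS_REQUIRED / ops_per_second / SECONDS_PER_YEAR

# A 200 MHz Pentium Pro at ~18 cycles/byte handles a 16-byte block in ~288
# cycles, i.e. very roughly 700,000 block operations per second (assumed).
print(f"Pentium Pro era:      {years_to_brute_force(7e5):.1e} years")

# A modern core with hardware AES might manage ~10^9 block ops/sec (assumed);
# even a hypothetical 10^18 ops/sec machine barely makes a dent.
print(f"Modern core:          {years_to_brute_force(1e9):.1e} years")
print(f"10^18 ops/sec:        {years_to_brute_force(1e18):.1e} years")
```

Even granting an attacker a trillion-fold speedup over that old Pentium Pro, the answer stays in the trillions of years - which is exactly why the interesting attacks are on implementations and key handling, not the raw math.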
NIST typically approves encryption methods with at least a 20-year mean time to brute force, on the expectation that you will have upgraded your encryption method long before those 20 years are up, and that it isn't worth an attacker's while to hold on to 20-year-old data waiting until they can brute force it faster.
Clearly those assumptions are not always true, given projects like Tempora that you linked (and I assume most, if not all, superpower governments have something similar).
But that doesn't indicate a failing of the encryption; it only indicates that the initial assumptions made when choosing a type of encryption failed.
It's more comparable to buying a water-resistant watch and then either taking it deep diving in the ocean (a failure of the user choosing the tool), or perhaps being hit by an unexpected multi-day typhoon (a failure forced upon the user).
In both cases that poor watch likely isn't going to hold up, and in both cases the watch was never manufactured, nor claimed, to survive the given conditions.
Back on topic: this just indicates we are at a special point in time where a lot of our existing encryption methods won't last long enough for the uses we put them to, whether through ignorance of what the encryption was actually made to do, or through ignorance of the current state of the technology being used against it.
Lastly, when it comes to "slipping up", there are of course many ways to do so (the old saying that trying to make something idiot-proof only produces better idiots comes to mind).
An encryption method is just a mathematical formula, and many come with actual security proofs, not just guesses about how they operate.
However, the software you use is different: it is an implementation of that encryption method.
If an implementation doesn't completely match the math (be it a bug, a typo, or an intentional backdoor), that isn't necessarily an indicator that the encryption method itself has any problems.
It's "just" a bug in software ("just" in quotes because I don't intend to downplay the importance of making sure such bugs don't happen).
In practice there may not be much, or any, difference to the end user, but when it comes time to place blame and point fingers, it very much matters that the actual problem gets the blame.
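A classic hypothetical example of this implementation-versus-math gap: verifying a message authentication tag with an ordinary equality check. The MAC's math is sound either way, but a naive comparison can leak timing information, which is purely an implementation flaw (a sketch; the key and message here are made up for illustration):

```python
import hashlib
import hmac

key = b"example-key"              # hypothetical key, for illustration only
message = b"the actual message"
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify_buggy(msg: bytes, received_tag: bytes) -> bool:
    # Bug: a plain '==' comparison can stop at the first differing byte,
    # so an attacker timing many attempts may recover the tag byte by byte.
    # The MAC itself is fine; the implementation is what leaks.
    return hmac.new(key, msg, hashlib.sha256).digest() == received_tag

def verify_fixed(msg: bytes, received_tag: bytes) -> bool:
    # Constant-time comparison matches what the security proof assumes:
    # the attacker learns only pass/fail, nothing about where it failed.
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, received_tag)

assert verify_buggy(message, tag)
assert verify_fixed(message, tag)
assert not verify_fixed(message, b"\x00" * 32)
```

Both versions return the same answers; the difference is entirely in what a watching adversary can infer, which is exactly why the blame belongs on the implementation, not the encryption method.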
Also in practice, once it gets to the point of governments wielding hundreds of billions to trillions of dollars of computing power against the thousands (or perhaps a million) dollars of computing power you have, there is only so much you can do in the way of protection (of anything, let alone your communications!).
The idealized response is of course that everyone should have equal access to such resources.
Of course, the reality is that this will likely never be true (Star Trek replicator universe, you couldn't come fast enough!).
So far as I am aware, no one has designed encryption or any other method of protection for the case where computing power is so unbalanced between adversaries.
So it is hardly surprising that we the people have little to no recourse against governments. Nothing we currently have has ever had that requirement in mind, and it seems like such a hard problem that even with that requirement firmly and/or solely in mind, no one seems to have solved it.
Until that time, however, it's best to just assume we are hopelessly outgunned and act accordingly, even if that means not communicating our secrets at all (not the answer one wants to hear to "how do I communicate securely", but it's the only correct one).