Combined with the above, these two assumptions (risk of exposure, people actively looking for compromises) are sufficient to conclude that the code is not likely compromised on purpose. That is not to say there are no risks, just that they are unlikely to be intentional.
Either that, or the backdoors are much more sophisticated and designed to look like genuine errors once discovered.
It would probably look much more like an entry from the "Underhanded C Code Contest" than an explicit `if (NSA_flags == true) send_to(NSA, data);`
In theory, anyone with half a clue would notice that planting backdoors has two very strong disadvantages:
- a bug exploitable by your guys is a bug exploitable by the Russians, the Chinese, etc.; a hidden backdoor in software used by US civilians also leaves them vulnerable to the enemy
- once the backdoors start getting discovered, it will ruin the market: US products will be considered untrustworthy, and foreign customers will avoid them. Lowering the security of US software is commercial suicide (as already witnessed back in the "56-bit max" era of DES encryption), and should be avoided at all costs.
In practice, I doubt such considerations would ever stop the NSA from doing its work. It's also quite possible that they booby-trap the shit out of every single domestic commercial product while knowing full well that foreign agencies, too, are abusing the backdoors the NSA put there for themselves.