At the next Black Hat competition, they should really mix it up and have teams try to embed spyware and backdoors in lengthy, complex encryption code. Some code would be tainted, other code would be clean, and some would just be shoddy, to muddy the waters.
It would be interesting to see how easy or hard it is to really catch nefarious code.
Because unless you, or someone working with you, can understand EVERY line of code in a program -- and its dependencies -- you can't really be sure.
The other thing is, you can have exploitable algorithms that can be manipulated. The "buffer overflow" -- where you stuff malicious code at the end of input that contains more data than the program was designed to handle -- isn't based on malicious code in the program at all, just an unforeseen and EXPLOITABLE flaw.
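A toy sketch of the overflow idea, simulated safely in Python: imagine a record laid out the way a C struct might sit on the stack -- 8 bytes of user data followed by a 1-byte privilege flag -- and a copy routine that, like `strcpy()`, never checks the destination's bounds. The layout and field names here are hypothetical.

```python
# Hypothetical record layout: 8 bytes of user data, then a 1-byte
# privilege flag, packed contiguously (as a C struct might be).
record = bytearray(9)
record[8] = 0  # the is_admin flag starts off cleared

def unchecked_copy(dest, src):
    # Mimics strcpy(): copies byte by byte with no bounds check
    # on the field the caller intended to fill.
    for i, b in enumerate(src):
        dest[i] = b

unchecked_copy(record, b"A" * 9)  # one byte more than the data field holds
flag = record[8]                  # the "extra" byte landed on the flag
```

Nothing in the copy routine is malicious -- the overflow is just a consequence of trusting the input's length.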
Guaranteeing that a program is not exploitable is harder than checking for known exploits. And an expert hacker contributing code might do so expecting the backdoor to be found one day -- an inconvenience, not a disaster. Perhaps one prime number salts all the random number generation, for instance, and knowing it would reduce the complexity of the passcode search by orders of magnitude. Or a specific string always sits at a certain location in every encrypted message, so cracking can start by looking for a known 128-bit value at the halfway point of any array of encrypted data -- making the process that much easier. None of those would yield consistent patterns that might be discovered without knowing WHY each and every routine does what it does.
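The fixed-prime idea can be sketched in a few lines. Everything here is hypothetical -- a made-up constant and a toy derivation -- but it shows how an insider who knows the baked-in "salt" can precompute keys for common passwords, while outsiders see what looks like properly salted derivation:

```python
import hashlib

SECRET_PRIME = 2_147_483_647  # hypothetical fixed prime baked into every build

def derive_key(password: str) -> bytes:
    # Looks like salted key derivation, but the "salt" is the same
    # prime in every installation -- so a dictionary attack works.
    return hashlib.sha256(f"{SECRET_PRIME}:{password}".encode()).digest()

target = derive_key("hunter2")  # key from some captured system

# An insider who knows the prime just walks a password list:
found = None
for guess in ("letmein", "hunter2", "password"):
    if derive_key(guess) == target:
        found = guess
        break
```

With a genuinely random per-user salt, that precomputation would be worthless; with one secret constant, the whole installed base shares a single dictionary.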
OR, you might have infected the compiler, so that someone naming a variable "ReallyGoodPasswordSalt" causes it to compile these little "cracking helpers" into any application built with it.
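A toy stand-in for that infected compiler pass -- the trigger name and the injected helper are both made up, and real attacks of this kind live in the binary, not appended source. The point is just the mechanism: clean code passes through untouched, so the taint is invisible until the magic name appears.

```python
TRIGGER = "ReallyGoodPasswordSalt"  # hypothetical name the "compiler" watches for

def poisoned_compile(src: str) -> str:
    # Stand-in for an infected compiler: sources without the trigger
    # come out byte-identical, so diffing clean builds reveals nothing.
    if TRIGGER in src:
        src += "\nCRACKING_HELPER_INSTALLED = True\n"
    return src

clean = "salt = 12345\n"
tainted = f"{TRIGGER} = 12345\n"

clean_build = poisoned_compile(clean)
ns = {}
exec(poisoned_compile(tainted), ns)  # "run" the compiled program
```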
Then you might look at the components of the computer executing the instructions. It's possible, for instance, that all Intel chips, or emulators, or maybe a chip from some tiny fab in Asia, contains a component that looks for some kind of code, or compiler directive, and embeds a hidden "cracker's helper" in whatever passes through it. So a contributor puts in some "good clean code," but they use specific variable names, or common routine calls in a certain order -- all it requires is a pattern. The developers don't look for these exploits, because it's not a normal business activity to have men in dark suits show up at an office and tell someone to "build this logic into your silicon design." They never hear of such things. It's crazy to think of it.
People working at AT&T would have laughed at you if you'd told them that all the data over their backbone was just copied out -- some still might, depending on their level of awareness. Why? Businesses that play ball get special treatment -- a subcommittee in Congress drops a probe, or there's no antitrust lawsuit to break up a monopoly for a while. Whether you think that's nonsense or not, depending on electronics whose functions no one person can fully know means that exploits by an organized and well-funded government organization, or maybe an NGO, have more places to hide.
How could we test for a hidden "poisoning" of code on devices we cannot fully guarantee? Perhaps, when compiling, have a tool take all the variables and libraries, give them new random names, then compile again. See if the same salt, same password, and same plaintext end up encrypted exactly the same way by both builds.
Try sending encrypted messages of various lengths from various identical devices, and compare them across different equipment, times, and locations -- they SHOULD BE the same. If they are not, or the HTTP packets have unexplained padding and/or different byte lengths, perhaps there is hidden messaging going on from the devices rather than the software.
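The length check can be sketched with two simulated devices -- both names and the covert 16-byte tail are made up for illustration. Same software, same key, same message: any difference in output length is traffic nobody asked for.

```python
import hashlib

def encrypt_fixed(plaintext: bytes, key: bytes) -> bytes:
    # Honest device: toy stream cipher; ciphertext length == plaintext length.
    stream = hashlib.sha256(key).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ s for p, s in zip(plaintext, stream))

def encrypt_leaky(plaintext: bytes, key: bytes) -> bytes:
    # Compromised device: same cipher, but appends 16 covert bytes.
    return encrypt_fixed(plaintext, key) + b"\x00" * 16

msg = b"same message, same software, two devices"
key = b"shared-key"

honest = encrypt_fixed(msg, key)
suspect = encrypt_leaky(msg, key)

# Identical inputs, identical software: the delta is unexplained payload.
extra_bytes = len(suspect) - len(honest)
```

In practice you'd compare packet captures rather than ciphertexts directly, but the principle holds: identical inputs should produce identical byte counts.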
I'm not in software security, but I do have a devious mind, and if I can think of ways to make encryption more crackable, then others can too.