While I agree, and in no way trust the words of defense contractors, this is a common sentiment that's usually applied a bit too broadly. One must realize that all security is security through obscurity. Each bit of obscurity increases the effective security exponentially. Yes, it may well be that not having access to the cipher algorithms in use only provides a few bits of security, since they're likely using one of the existing cipher systems; however, those are a few bits of security that do exist so long as they're not disclosed. Now (this would be overkill), let's say they're chaining multiple ciphers together, say, Plaintext -> RC4 -> AES -> Ciphertext, and let's say they repeat that loop N times. Apart from the ciphers themselves, the number and order of ciphers and the iteration count of the loop add a few more bits of security through obscurity. Each stage of unknown cipher adds a few more bits of security in the selection of that cipher.
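To make the chaining concrete, here's a toy sketch in C++. The stage "ciphers" here are trivial byte maps I made up purely for illustration (not real RC4/AES, and not remotely secure): chain two stages, repeat the loop N times, and decryption just runs the inverse stages in reverse order.

```cpp
#include <cstdint>
#include <string>

// Toy "ciphers" (illustrative only, NOT secure): each is a reversible byte map.
static std::string xor_cipher(std::string s, uint8_t k) {
    for (char &c : s) c = static_cast<char>(static_cast<uint8_t>(c) ^ k);
    return s; // XOR is its own inverse
}
static std::string add_cipher(std::string s, uint8_t k) {
    for (char &c : s) c = static_cast<char>(static_cast<uint8_t>(c) + k);
    return s;
}
static std::string sub_cipher(std::string s, uint8_t k) {
    for (char &c : s) c = static_cast<char>(static_cast<uint8_t>(c) - k);
    return s; // inverse of add_cipher
}

// Chain two toy stages and repeat the loop n times, mirroring
// Plaintext -> stage1 -> stage2 -> ... -> Ciphertext.
std::string chain_encrypt(std::string s, uint8_t k1, uint8_t k2, int n) {
    for (int i = 0; i < n; ++i) s = add_cipher(xor_cipher(s, k1), k2);
    return s;
}

// Undo each iteration in reverse: subtract first, then XOR.
std::string chain_decrypt(std::string s, uint8_t k1, uint8_t k2, int n) {
    for (int i = 0; i < n; ++i) s = xor_cipher(sub_cipher(s, k2), k1);
    return s;
}
```

An attacker who doesn't know the stage order or N has to guess it on top of the keys — those are the extra "bits" I'm talking about.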
Indeed, this is how a cipher itself is built up from the various cryptographic primitives such as row-mixing steps, S-boxes, XORs, look-up tables, point operations along a curve, block-chaining strategies, etc. -- all reversible input-to-output mappings. I've just abstracted the process of constructing a cipher and described it using ciphers themselves, in a way that can increase security exponentially even if you do know the details of the ciphers; without knowing the details, we can consider the system that much more secure. If the machine falls into enemy hands or the details are leaked, then our "element of surprise" bonus to the security is lost, but while it is not divulged the system is demonstrably more secure. The same is true of cipher keys themselves: knowing a little bit of the key doesn't break the security, but it weakens it. So consider these secret bits part of the key. All security is security by obscurity.
Now, purely hypothetically, let's say I built such a system that uses dual key expansions to derive both the key to use for the ciphers and a seed for a good random number generator, which is then used to select which ciphers to chain together and in what order (perhaps the number of iterations is computed to present a fixed CPU load). Now, since the cryptographic primitives and implementations themselves have all been hacked on and accepted within the security community, this method of ciphering would also be considered at least as secure without really needing to vet the process. At worst it's exponentially more secure than the currently accepted ciphers widely used today.
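A hypothetical sketch of that dual-derivation idea — everything here (the stage functions, the PRNG choice, the fixed round count) is made up for illustration; a real system would use a proper KDF and real vetted ciphers. One value keys the stages, another seeds a deterministic PRNG that picks the stage schedule, and decryption replays the schedule backwards:

```cpp
#include <cstdint>
#include <random>
#include <string>
#include <vector>

// Toy reversible stages (illustrative only, not secure).
static uint8_t enc_xor(uint8_t b, uint8_t k) { return b ^ k; }
static uint8_t dec_xor(uint8_t b, uint8_t k) { return b ^ k; }
static uint8_t enc_add(uint8_t b, uint8_t k) { return static_cast<uint8_t>(b + k); }
static uint8_t dec_add(uint8_t b, uint8_t k) { return static_cast<uint8_t>(b - k); }

struct Stage { uint8_t (*enc)(uint8_t, uint8_t); uint8_t (*dec)(uint8_t, uint8_t); };

// One half of the (hypothetical) expanded key seeds a PRNG that picks which
// stages run and in what order. std::mt19937 is deterministic for a given
// seed, so both ends derive the same schedule.
std::vector<int> pick_schedule(uint32_t seed, int rounds, int n_stages) {
    std::mt19937 rng(seed);
    std::vector<int> sched(rounds);
    for (int &s : sched) s = static_cast<int>(rng() % n_stages);
    return sched;
}

std::string run(const std::string &in, uint32_t seed, uint8_t key, bool encrypt) {
    static const Stage stages[] = {{enc_xor, dec_xor}, {enc_add, dec_add}};
    std::vector<int> sched = pick_schedule(seed, 8, 2);
    std::string out = in;
    if (encrypt) {
        for (int s : sched)
            for (char &c : out)
                c = static_cast<char>(stages[s].enc(static_cast<uint8_t>(c), key));
    } else {
        // Replay the schedule in reverse, applying each stage's inverse.
        for (auto it = sched.rbegin(); it != sched.rend(); ++it)
            for (char &c : out)
                c = static_cast<char>(stages[*it].dec(static_cast<uint8_t>(c), key));
    }
    return out;
}
```

The point being: the schedule itself becomes secret material derived from the key, so leaking the mechanism costs you some bits but not the whole farm.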
Do I really need to have everyone hack on this system to assure you it's safe? No, not really. I could reveal the details to some trusted 3rd party, maybe call him Bruce Security, or BS for short. Then BS can be sworn to secrecy, and without revealing the details publicly or hammering on it himself, BS could tell you: "Yes, this system is very secure, even more secure than any crypto system currently in use in the industry." And he'd be right, even if such claims on the surface seem highly unlikely given your own assumptions.
I make a similar argument about practical vs. worst-case time (big-O notation) when selecting algorithms. For example: red-black trees were chosen for C++'s std::map implementation. It was the "academically" and "provably correct" choice not to include hash maps instead, or in addition. Practical folks said: FUCK! I need a damn hash table, because its average-case key lookup runs in constant time, and I need that speed (one of the main reasons folks choose a compiled language). So everyone went off and made their own hash-table implementations for their maps; some folks even dragged out their existing C implementations and used them. Later, since everyone was demanding the standards board pull its head from its ass and give us what we want, they finally added a hash-table implementation in C++11. However, now we have a bunch of existing codebases using non-standard hash-map implementations which will remain in place (because if it ain't broke, don't fix it), and now we have exponentially more CPU cycles than decades past, so the inclusion of hash-table based maps is actually a bit moot at this point. So the "academically provably correct" decision was actually detrimental in practical terms. Had they put hash tables in the STL long ago, we could have avoided a needless chunk of legacy maintenance headache.
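For the record, here are the two containers side by side (C++11 or later) — std::map is the red-black-tree map, std::unordered_map is the hash table we finally got; the helper names here are just mine:

```cpp
#include <map>
#include <string>
#include <unordered_map>

// std::map: red-black tree under the hood; O(log n) lookup, keys kept sorted.
int tree_lookup(const std::map<std::string, int> &m, const std::string &k) {
    return m.at(k); // walks the tree
}

// std::unordered_map: the hash table standardized in C++11;
// average-case O(1) lookup, no ordering guarantee.
int hash_lookup(const std::unordered_map<std::string, int> &m, const std::string &k) {
    return m.at(k); // hashes the key
}

// Sorted iteration is the tree's consolation prize.
std::string first_key(const std::map<std::string, int> &m) {
    return m.begin()->first;
}
```

Same interface for lookups either way, which is exactly why all those homegrown pre-2011 hash maps could have been standard ones.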
In other words, the rule of thumb is: rarely does any rule govern all thumbs. Common wisdom has typically been foolishly dumbed down. The wise avoid speaking in absolutist terms; there are almost always exceptions. While it's true that showing off the code would allow us to vet whether or not it's secure, we're likely to miss at least some bugs anyway. It would be nearly as reassuring to have it vetted by a 3rd party and listen to their B.S. saying it's secure... Time is a good test too, in practice. It would be prudent to proceed with caution; however, note that the current drones aren't very secure. You can crash them with a jammer and tune into their video feed with an old-school TV set, so any obscurity at all -- even XORing everything with a single constant byte value -- would be better than the security we've currently got.
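For the curious, that single-constant-byte XOR is a one-liner (toy code, obviously breakable in seconds -- that's the point). Since XOR is its own inverse, the same function both scrambles and unscrambles the feed:

```cpp
#include <string>

// XOR every byte with one constant key byte. Trivially breakable, but
// still more obscurity than broadcasting the video feed in the clear.
std::string xor_feed(std::string data, unsigned char key) {
    for (char &c : data)
        c = static_cast<char>(static_cast<unsigned char>(c) ^ key);
    return data;
}
```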
Note: they'd be correct in saying that even just a constant-byte XOR was "hack resistant" in "academically provably correct" terms, which is why one should typically expect BS.