After a lot of soul searching over whether or not I should actually honor this obvious attempt at trolling with a comment, I've decided I should, lest someone actually take it seriously and believe it.
Allow me to take you on an excursion into the world of security. Before you get your hopes up, it's not as glamorous or kinda-sorta-shady-sinister-blackhat as you might think. But I'll try to make it as interesting as it can be.
Audits are part of security. An audit is, in a nutshell, an attempt to find out whether there are weaknesses in the surface you're auditing. For example, you prod at a server, check its ports, make sure that everything that answers does so in a way that cannot be exploited, and so on.
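Purely as illustration, here's what the "check its ports" step boils down to, sketched with Python's standard socket module. The host and port list are placeholder assumptions, and a real audit tool would do far more than this (banner grabbing, service fingerprinting, etc.):

```python
import socket

def check_ports(host, ports, timeout=0.5):
    """Probe a list of TCP ports on `host` and report which ones answer.

    Returns a dict mapping port -> True (something accepted the
    connection) or False (closed or filtered within the timeout).
    """
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            results[port] = (s.connect_ex((host, port)) == 0)
    return results

if __name__ == "__main__":
    # Only probe hosts you are authorized to audit.
    for port, is_open in check_ports("127.0.0.1", [22, 80, 443]).items():
        print(f"port {port}: {'open' if is_open else 'closed/filtered'}")
```

Knowing that a port answers is only the first step; the actual audit work is verifying that whatever answers cannot be exploited.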
Those who have at least dabbled in security will know about the various "boxes" used to describe the "rules of engagement" of such an audit. The most commonly known, I'd guess, are "black" and "white" box tests. In a "black box" test, you get no or very little information about your target, and your task is to find out whatever you can about it. A "white box" test is the exact opposite: you get full disclosure of your target's makeup, e.g. what services are running, at what patch level, often even what purpose they serve and what department they belong to, and so on.
One might now think that the more "normal", more "useful" test is the black box test. Because, hey, if I tell you everything, what the hell would you test? But you know what? A black box test is something you'd do to test the tester's ability, not that of your target. With a black box test, what you really find out is just how much the guy you hired to do your audit actually knows about the whole shit.
If you actually want to test the target, you disclose just about any information there is. That might sound odd, but when you think about it, it makes a lot of sense: all of that information could be available to a potential attacker anyway. A disgruntled ex-employee could have it. Someone who spends enough time on social engineering and prodding could gain it somehow. Assuming you can increase your security by withholding information from a potential attacker gives you, at best, a false sense of security, because you can NEVER say with even a semblance of certainty that a potential attacker CANNOT have that information. Like I said before, all it takes is one pissed-off ex-admin, and the attacker has ALL of it.
And it's rather trivial to sell information these days...
Now, what does this have to do with the question open vs. closed source?
It means that just because YOU do not have the information does not mean your attacker doesn't. Closed source is akin to the black box in the example above; open source, the white box. When you audit closed source, you learn more about the abilities of your auditor than about the security level of the software you're auditing.