I agree. To put it simply, testing can show the presence of bugs, but never their absence. If a bug is obscure enough (as most security holes are), there is a good chance it won't be exercised even in day-to-day use. Most code beyond a few hundred lines becomes complex enough that a code review takes real effort. Couple that with the fact that it takes experience and/or training to read code and recognize security flaws, and that most programs are thousands to tens of thousands of lines long, or more. You will likely find that very few people (or in this case, none) have the time or inclination to review code for security flaws, regardless of whether the source is available.
So in this regard, open source is ultimately no better than closed source. In fact, I can make the argument that closed source might get more review, since people are actively paid to look at the code day in and day out, while in open source, people often won't look at code unless it's the new shiny thing everyone is buzzing about. I'm not saying closed-source vendors are willing to spend the time and money to re-engineer code and fix the security bugs they find; that can take considerable effort, and it impacts schedules and ultimately money, so it rarely happens unless the flaws are really, really bad. It's just that in closed source, someone might actually learn about a flaw sooner than in open source. But in the end, if a security flaw goes unfixed for 25 years, what difference does it make which paradigm it falls under? (That's rhetorical.)