The problem, as calculus has shown us, is that when you are playing with terms like "infinite" or "very large", what seems "obviously" true may not be correct.
Here are some confounding factors (some of which you mention).
* The lifespan of the software is not infinite.
* Bugs take not only money to exploit, but time as well. Per Brooks' Law, it is incorrect to assume you can reduce that time linearly by throwing more money at it.
* Not all bugs have the same level of damage potential. For example, a bug that requires end-user stupidity is somewhat less severe than a bug that requires the end user to do nothing, and a bug that requires physical access to the device is much less severe than one that can be exploited remotely.
* Not all bugs are equally easy to discover.
* There are a limited number of labs, white hat or black, capable of finding and developing high-level exploits.
All that aside, your argument is just dodgy: "It doesn't even matter whether you have a prize program or not; the product is in a permanent state of unfixable vulnerability." Consider an analogy. It costs $200 to see a doctor. If I visit the doctor and she discovers nothing, I've wasted $200. If I visit the doctor and she discovers something, so what? There is an effectively infinite number of things that could be wrong with me, so by that reasoning there is no point in ever seeing a doctor.
Showing some math, or even running a Monte Carlo simulation, would go a long way toward convincing people that you are serious about this. As it stands, you're just pulling suppositions out of your nether regions.
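
For what it's worth, here is the kind of back-of-the-envelope simulation I have in mind. It is a minimal sketch in Python, not a real model: the bug count, find rates, lifespan, and severity distribution are all invented for illustration, and the bounty-hunter/attacker race is reduced to independent per-year coin flips.

```python
import random

# Toy model: a product ships with NUM_BUGS latent bugs, each with a random
# severity. Over a finite lifespan, bounty hunters (if a program exists) and
# attackers independently try to find each bug; whoever gets there first
# decides whether it is patched harmlessly or exploited for damage.
# Every number below is an assumption chosen only to illustrate the method.

LIFESPAN_YEARS = 5
NUM_BUGS = 200               # latent bugs at release (assumed)
BOUNTY_FIND_RATE = 0.15      # per-bug, per-year chance a bounty hunter finds it
ATTACKER_FIND_RATE = 0.05    # per-bug, per-year chance an attacker finds it
TRIALS = 5_000

def simulate(with_bounty: bool) -> float:
    """Total exploit damage over one simulated product lifetime."""
    damage = 0.0
    for _ in range(NUM_BUGS):
        severity = random.expovariate(1.0)  # most bugs minor, a few severe
        for _ in range(LIFESPAN_YEARS):
            if with_bounty and random.random() < BOUNTY_FIND_RATE:
                break                        # found and patched before exploitation
            if random.random() < ATTACKER_FIND_RATE:
                damage += severity           # exploited
                break
    return damage

for label, bounty in (("no bounty program", False), ("with bounty program", True)):
    avg = sum(simulate(bounty) for _ in range(TRIALS)) / TRIALS
    print(f"{label}: average total damage ~ {avg:.1f} (arbitrary units)")
```

Even with made-up numbers, a model like this makes the trade-off explicit: you can crank the bug count toward "very large" and see whether the expected damage with a bounty program actually converges to the damage without one, which is what your argument implicitly claims.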