There is no such thing as "effectively infinite". What you are alluding to is the fact that the number of vulnerabilities is not known until they are all found, and thus you can never be sure how many more there are. Even in a situation where a product is evolving and new bugs are being introduced, at any given point in time there is a finite number of vulnerabilities. There certainly are not trillions of undiscovered vulnerabilities in Apache; the count never approaches infinity in a way that would let you say fixing a vulnerability doesn't decrease the number remaining.
As each vulnerability is discovered and patched, the effort to find the next one should increase slightly: methods that either analyze the code or make brute-force attempts to compromise the system (by brute force I mean "let's try passing `"; DELETE FROM users` in this field to see if there is SQL injection; no? how about this field?") will have to search longer before finding a vulnerability, since there are now fewer. Each fixed vulnerability shrinks the set of vulnerabilities, whether or not the others are known, and thus increases the cost to find the next one. Additionally, researchers are more likely to find the easier-to-find vulnerabilities first, while some remain more elusive. This compounds the increase in cost-to-find.
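To make the cost-to-find argument concrete, here is a toy model (my own illustration, with made-up numbers, not drawn from any real product): if a brute-force tester probes a space of N possible inputs uniformly at random and k of them trigger a vulnerability, the expected number of probes before the first hit is about N/k, so every patch (k going down by one) raises the average cost of finding the next bug.

```python
import random

def expected_probes(num_inputs, num_vulns, trials=2000, seed=1):
    """Simulate a brute-force tester that probes inputs uniformly at
    random (with replacement) until it hits one of `num_vulns`
    vulnerable inputs out of `num_inputs` total; return the mean
    number of probes over `trials` runs. The analytic expectation
    is num_inputs / num_vulns (a geometric distribution)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        probes = 0
        while True:
            probes += 1
            # A probe "hits" with probability num_vulns / num_inputs.
            if rng.randrange(num_inputs) < num_vulns:
                break
        total += probes
    return total / trials

# Each patched vulnerability raises the average cost of finding the next one:
# with 1000 candidate inputs, going from 10 bugs to 1 bug roughly multiplies
# the expected search effort by 10.
for k in (10, 5, 2, 1):
    print(f"{k} vulns remaining -> ~{expected_probes(1000, k):.0f} probes")
```

This ignores the second effect mentioned above (that the remaining bugs are also intrinsically harder to spot), which would make the cost curve climb even faster than 1/k.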
What you should be more concerned about is this: once you have found and fixed all of the easier-to-find vulnerabilities, what of the small, finite number that remain? If researchers search and do not find them within a time frame that makes the $1,000 prize worth the effort, they will not be found. But the black market or some other agency might consider such a vulnerability very valuable, and throw far more resources at finding one. That doesn't mean the prize program was useless, as it certainly reduced the attack surface.