I would say that open source bugs are easier to exploit because you have the source. Closed source bugs rely on reverse engineering and should, in theory, be harder to find. So yes, bad guys can focus on a high-value product or target whether it is closed or open source, but I think their job is a lot easier if it is open source.
To reiterate my point, I think this argument applies mostly to high-value targets. For non-security code, or code without strong monetary implications tied to it, open source should still be better than closed source from a bug perspective.
I've read the FOSS argument for years, and I guess I have leaned in favor of it from a bug perspective. But in this case, I think closed source would have won, at least up to this point in time. If OpenSSL is truly behind 60-75% of the world's web servers, then the value in hacking it is enormous. Thus if I were a criminal organization, it might be worth spending $1M for guys to read that open source code and find problems that I could then monetize for a big profit.
I don't think you are going to get $1M worth of code inspection on the white hat side for OpenSSL. Maybe going forward you will, and companies may be willing to invest in the upkeep. Not out of goodness, but because it makes good business sense. For a large organization, how many soft and hard dollars have been chewed up in the last week doing analysis, patching, client communication, and general PR for Heartbleed? Probably enough that a $10K donation of time or money to OpenSSL upkeep would be feasible.
There is also evidence that the bad guys have been exploiting this in the wild. So the usual argument of "we found the bug quicker with open source" is probably wrong here. The better-funded and more highly motivated bad guys found it first.
My guess is the bad guys have been working this bug against Yahoo for a while. Yahoo told me (and others I know) a couple of months ago that someone was attempting to log in to my account from Russia. I would now suspect Heartbleed here.
The logic for finding bugs on the black hat side is OR (find any one bug and exploit it). The logic on the white hat side is AND (prevent every bug). Unfortunately, the table is always tilted like this in the security arena. Bugs will always happen, and the good guys can't win every time, regardless of code access.
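A toy calculation shows how lopsided this OR/AND asymmetry is. The numbers here are ones I made up for illustration: 100 components, each with a 99% chance of being bug-free.

```python
# Defender must get every component right (AND); attacker needs just
# one flaw anywhere (OR). Illustrative numbers, not real-world data.
p_component_ok = 0.99
n_components = 100

p_all_ok = p_component_ok ** n_components  # defender wins only if ALL are clean
p_attacker_wins = 1 - p_all_ok             # attacker wins if ANY one is flawed

print(f"defender success: {p_all_ok:.3f}")        # ~0.366
print(f"attacker success: {p_attacker_wins:.3f}")  # ~0.634
```

Even at 99% per-component quality, the attacker is favored nearly two to one, which is the tilt of the table in numeric form.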
I started work on what I called ACCP (Advanced Car-to-Car Protocol) in 2004. From the overview:
ACCP is a protocol for communication between two moving vehicles, to assist in making the driving task more efficient, and to make driver intentions explicitly known to those around them. The capabilities of this system advance upon the limited “communications” available today (turn signals and brake lights). Computers within each participating vehicle can talk wirelessly to vehicles near (adjacent) to them.
My intent was to support things like signaling "I am looking for an address and don't see it" while driving slowly, and co-operatively negotiating a target speed to aid passing on single-lane roads. I had been wondering how long it would be before someone started doing something like this (although Michigan is more skewed toward safety).
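To make the idea concrete, here is a minimal sketch of what one such intent broadcast might look like. The message fields, intent names, and JSON encoding are all hypothetical choices of mine for illustration, not the actual ACCP design.

```python
from dataclasses import dataclass
from enum import Enum
import json

class Intent(Enum):
    """Hypothetical driver intents a car-to-car message might carry."""
    SEARCHING_FOR_ADDRESS = "searching_for_address"  # driving slowly, looking for a house number
    REQUEST_PASS = "request_pass"                    # negotiating a pass on a single-lane road
    YIELDING = "yielding"                            # agreeing to hold speed so another car can pass

@dataclass
class AccpMessage:
    """A minimal broadcast: sender id, declared intent, current speed."""
    vehicle_id: str
    intent: Intent
    speed_kph: float

    def encode(self) -> bytes:
        # Serialize to a JSON payload suitable for a wireless broadcast.
        return json.dumps({
            "vehicle_id": self.vehicle_id,
            "intent": self.intent.value,
            "speed_kph": self.speed_kph,
        }).encode("utf-8")

    @staticmethod
    def decode(payload: bytes) -> "AccpMessage":
        d = json.loads(payload.decode("utf-8"))
        return AccpMessage(d["vehicle_id"], Intent(d["intent"]), d["speed_kph"])

# A slow car hunting for an address announces why it is crawling along,
# so the vehicles behind it know what is going on.
msg = AccpMessage("car-42", Intent.SEARCHING_FOR_ADDRESS, 15.0)
assert AccpMessage.decode(msg.encode()) == msg
```

Of course, a real deployment would need authentication and abuse protection on top of anything like this, which gets at the difficulty mentioned below.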
Over the last few years I've second-guessed myself on exactly how much of this I would really want to see. The opportunities for abuse are many, and getting the implementation right would be difficult.