Comment Re:90 days is really long (Score 1) 263
90 days is only really long when you don't have a massive codebase to run testing and regression against. Let's say the fix is adding a bounds check to the input of a single function. The engineer assigned to the bug adds the bounds check, plus unit tests to make sure it behaves now. The fix is submitted to the build queue for the (let's say nightly) run that generates the next patch set and the next production build of Windows.

Now QA gets it. Since this particular bug was triggered by a bad input, they write a bunch of tests that feed in varied inputs: numbers, letters, binary data, larger than expected, smaller than expected, and so on. That gets run in the "Test this subsystem" pass. If it passes, great; if not, back to step one. Then the test runs as part of the automated "Test Windows" pass (which probably takes hours). Again: pass, great; fail, back to step one. After it clears "Test Windows", it has to go through "Test Windows with {list of major software that it would be very bad to break}". Only if all of that passes can it go into the patch queue for the next scheduled release.

I'd be surprised if an automated "Test Windows" run can be completed in less than a day or two, and probably 3-5 days for "Test Windows with Other Software Running". So the minimum time to get a tested patch is about a week, assuming the problem is super simple. Once multiple subsystems are involved, you can easily be looking at weeks to get a well-tested patch, and that's assuming it doesn't take engineering a few weeks to get a fix ready for testing in the first place.