And that's not to say that sticking with old versions is always bad; it's just that the method of deciding what's stable is literally "is it old?". Why not test things and then update, instead of arbitrarily picking a version and declaring it to be stable? Or keep track of projects that release reliable code, give them two weeks to make sure there are no horrible bugs, and then update? (What exactly is the reason for holding back Firefox and Pidgin?)
Because it is hard to do. You cannot ensure that there are no regressions, since you can only test a very small number of configurations; then somebody has to spend the resources to do the actual testing, and so on.
A stable (i.e. static) release means that at least no *new* regressions are introduced. Sometimes an exception has to be made (e.g. for security updates), but even those fixes are usually backported to the version shipped in the release, to avoid changes in behaviour as much as possible. Even then regressions slip through: look at the security updates Debian provides, where there are sometimes follow-up updates for regressions *not* noticed during testing. How many more regressions would there be if *new* upstream releases were pulled into the stable release?