Software reliability over the past few decades has shot right up.
I think this is a questionable premise.
1) Accurate, though it has been accurate for over a decade now.
2) Things have improved security-wise, but reliability, I think, is another matter: when things go off the rails, it's now less likely that an adversary can take advantage of the circumstance, but things still go off the rails.
3) Try/Catch is a potent tool (depending on the implementation it can come at a cost), but the same things that caused 'segmentation faults' with a serviceable stack trace in a core file cause uncaught exceptions with a serviceable stack trace now. It does make it easier to write code that tolerates some unexpected circumstances, but ultimately you still have to plan application state carefully or else be unable to meaningfully continue after the code has bombed (first sketch after the list). This is something that continues to elude a lot of development.
4) Actually, the pendulum has swung back again in the handheld space to 'apps'. In the browser world, you've traded 'dll hell' for browser hell. DLL hell was a sin of Microsoft's for not having a reasonable packaging infrastructure to help manage that circumstance better. In any event, now a server application crash, a client crash, *or* a communication interruption can screw up the application experience, instead of just one of those.
5) I don't think virtualized systems have improved software reliability much. Virtualization has in some ways made certain administration tasks easier and allowed better hardware consolidation, but it comes at a cost. I've seen more and more application vendors get lazy and just furnish a 'virtual appliance' rather than an application. When the bundled OS needs security updates, the update process is frequently hellish or outright forbidden. You need to update OpenSSL in their Linux image, but other than that things are good? Tough: you have to go to version N+1 of their application and deal with API breakage and the rest, just because you dared to want a security update for a relatively tiny portion of their platform.
6) I think there's some truth in it, but 32- vs. 64-bit does still rear its head in these languages, particularly since a lot of the performance-related libraries for those runtimes are written in C (second sketch after the list).
7) This seems to contradict the point above; Python fits that description pretty well.
8) This has also had a downside: people jumping to SQL when it doesn't make much sense. Projects with extraordinarily simple data to manage reach for 'put it in SQL' pretty quickly (third sketch after the list). Some of the 'NoSQL' sensibilities have brought some sanity in some cases, but in other cases they've replaced one overused tool with another equally high-maintenance beast.
9) True enough. There is a signal-to-noise issue, but it's better than nothing at all.
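On point 3, a contrived Python sketch of what I mean about planning state (made-up names, not from any real codebase): the except block keeps the process alive, but nothing was done to make the state safe to continue from, so surviving the exception isn't much better than the old segfault.

    accounts = {"alice": 100, "bob": 50}

    def transfer(src, dst, amount):
        accounts[src] -= amount                 # state already mutated...
        if amount > 100:
            raise ValueError("limit exceeded")  # ...before we bomb
        accounts[dst] += amount

    try:
        transfer("alice", "bob", 150)
    except ValueError:
        pass  # "handled", but alice was debited and bob never credited

    print(accounts)  # {'alice': -50, 'bob': 50} -- alive, but corrupted

Unless the code is written copy-then-commit (or wrapped in a real transaction), catching the exception just leaves you with a corrupted process instead of a dead one.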
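On point 6, a quick way to watch the word size leak through from Python itself (just a sketch; the exact numbers depend on the platform and build -- 4s on a 32-bit build, mostly 8s on a 64-bit LP64 one):

    import ctypes, struct

    print(ctypes.sizeof(ctypes.c_void_p))  # pointer width: 4 or 8
    print(ctypes.sizeof(ctypes.c_long))    # native long: 4, or 8 on LP64
    print(struct.calcsize("l"))            # native packing follows the platform
    print(struct.calcsize("=l"))           # standard size: always 4

Any FFI struct, binary file format, or C extension defined in terms of the native long or pointer changes shape along with the build, and the interpreter won't save you from that.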
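On point 8, the kind of thing I mean (a contrived sketch with a hypothetical settings file, not a prescription): for a handful of settings, a flat file written atomically is all the persistence some programs need, with no server, schema, or ORM to maintain.

    import json, os, tempfile

    SETTINGS_PATH = "settings.json"  # hypothetical location

    def load_settings():
        try:
            with open(SETTINGS_PATH) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def save_settings(settings):
        # write-then-rename so a crash mid-write can't leave a half-written file
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(SETTINGS_PATH) or ".")
        with os.fdopen(fd, "w") as f:
            json.dump(settings, f, indent=2)
        os.replace(tmp, SETTINGS_PATH)

    settings = load_settings()
    settings["retries"] = 3
    save_settings(settings)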
I think a big issue is that at the application layer there has been more and more pressure for rapid delivery and iteration, with a false sense of security coming from unit tests (which are good, but not *as* good as some people feel). Stable branches that take bugfixes only are rarer now, and more and more users are expected to ride the wave of interface and functional changes if they want bugs fixed at all. 'Good enough' is the mantra of a lot of application development: if a user has to restart, or delete all their configuration before restarting, oh well, they can cope.