The prime directive of anyone involved in building software for end users must be to create bug-free, secure systems that are effortless for people to use.
This needs to flow throughout an organization - whether you are the architect, designer, marketer, developer, tester, or accountant. Everyone must be on the same page about this goal, and everyone needs to understand what it entails in practice.
I've been on both the building and receiving ends when this goes wrong - and it goes wrong more often than it needs to, primarily because the organization lacks that unifying goal. From a user's perspective it sucks: you end up with a confusing mish-mash of tools with no unifying concept behind the interfaces, tools that fail to integrate data effectively and so breed redundancy. 'Painful' is a good adjective for using such systems. From the developer's point of view, you end up unable to do your best work. Finance or management doesn't provide the right resources, time, or unifying definitions for the solutions in the company's stable - everything is a one-off that you throw over the wall until the next project comes along. Responsibility and ownership are minimal at best, leading to long nights debugging production code and, too often, to finger-pointing and recriminations.
Given the current state of affairs, I think it is time to rethink how software and systems development should work for all of us.
One thing that occurs to me is that we should stop rewarding companies and projects (in the case of open source) for producing poor-quality systems and software. If you want to build crufty systems for yourself, that's one thing - don't foist them off on the public. One way to make it easy for end users to identify such systems would be a certification mechanism: an independent body that evaluates software and systems against various criteria and rates them on a scale (e.g. unrated, low quality, medium quality, high quality). The criteria should be things that matter - bug history, security bug history, availability of code for independent review, simplicity versus complexity of the reviewed code, ease of use, ease of integration with other systems and data, and so on.
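To make the idea concrete, here is a minimal sketch of how such a rating body might combine per-criterion scores into a coarse label. The criterion names, weights, and thresholds are purely illustrative assumptions on my part, not any real standard:

```python
# Hypothetical rubric: each criterion is scored 0-10 by a reviewer,
# and the weights reflect how much each one matters to the overall rating.
CRITERIA_WEIGHTS = {
    "bug_history": 0.20,
    "security_bug_history": 0.25,
    "code_available_for_review": 0.15,
    "code_simplicity": 0.15,
    "ease_of_use": 0.15,
    "ease_of_integration": 0.10,
}

def overall_rating(scores: dict[str, float]) -> str:
    """Combine per-criterion scores (0-10) into a coarse rating label."""
    # An incomplete review yields no rating rather than a misleading one.
    if set(scores) != set(CRITERIA_WEIGHTS):
        return "unrated"
    total = sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())
    if total >= 8:
        return "high quality"
    if total >= 5:
        return "medium quality"
    return "low quality"
```

The point of weighting is that the criteria are not equal: a bad security record should drag a rating down harder than, say, awkward integration. An actual certification body would need far more rigor than this, but the shape of the computation would be similar.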
Development tools, and the organizations and companies that produce tools and systems, should be rated the same way, so that potential consumers and users of their work can make more informed decisions.