Without a doubt you need to be able to measure what works and what doesn't, but the moment someone first turns to any kind of standard, or the nightmare word "metrics", you have already failed. Too many companies go through all the fads and all the silver bullets. They have scrum masters and black belts in Six Sigma (I am not making up the black belt part), but the fundamental problems are not fixed. Often the first place to start is with communications: who does communicate, and who is supposed to communicate. It is great if the sales people can bounce stuff off the programmers in the lunch room, and even better if the programmers meet the clients, but once the sales people are able to direct a project, the project will instantly start chasing rainbows.
The second place to look is the why. Software is made for two reasons: to make money or to avoid losing money. This allows you to boil down any "solution" to the money. So if the argument gets into religious territory, such as language choice, OS choice, documentation, or even commenting style, you can ask: how does this either make us money or prevent us from losing money? Someone might say such-and-such a documentation system is better; you can then say, let's look at the cost or value of us having no documentation at all vs. perfect documentation. After breaking it down you might find it is a huge cost one way or another, and your decision is made for you. This prevents programmers from continuing to try to impress their long-past CS professor and his insatiable demands for Hungarian notation. But as a pro-documentation example: if you are routinely cycling new programmers into a project, great documentation can pay for itself in spades; but first you must calculate the cost of bringing a programmer into a project sans documentation and bathed in documentation. Did that documentation save more than it cost to produce? You might argue that good documentation is low cost, but again, compare that low cost to the cost of not having it at all or having less.
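The documentation break-even reasoning above can be sketched as simple arithmetic. All of the dollar figures below are made-up assumptions for illustration; the point is the shape of the comparison, not the numbers.

```python
# Back-of-the-envelope sketch: does documentation save more than it costs?
# Every number here is a hypothetical assumption, not a recommendation.

def net_value_of_docs(doc_cost, onboard_cost_without, onboard_cost_with,
                      new_hires_per_year):
    """Positive result means the documentation pays for itself this year."""
    savings = (onboard_cost_without - onboard_cost_with) * new_hires_per_year
    return savings - doc_cost

# Assumed: docs cost $30k/year to write and maintain; ramping up a new
# programmer burns $20k of lost productivity without docs, $8k with them.
# Cycling in 4 programmers a year, docs come out ahead:
print(net_value_of_docs(30_000, 20_000, 8_000, 4))   # → 18000
# With only 1 new programmer a year, the same docs are a net loss:
print(net_value_of_docs(30_000, 20_000, 8_000, 1))   # → -18000
```

The same function answers both sides of the religious argument: the decision flips purely on how often you cycle people in, which is exactly the "compare the cost to the cost of not having it" move in the text.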
So better-engineered, higher-quality code feels like a great idea, but make sure that the value of increasing quality does not come with a disastrous business result. A simple example: suppose your company is famous for being first to market with each new feature. People might grumble about how it crashes quite a bit, but since they make $500,000 a day using each feature, having it a week earlier than the rock-solid competition is very valuable. So if you slow delivery down by 8 days and make the software perfect, you will be out of business in no time. This is all a bit extreme, but I suspect your core business is not making software but doing something else that the software supports. So it is quite possible that your company is mildly irritated by the bugs but exploits the features quickly.
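The first-to-market arithmetic above reduces to one comparison. This sketch just plugs in the example's own numbers ($500,000/day, 8 days of polish); the bug-cost figure is an assumption for illustration.

```python
# Sketch of the speed-vs-quality trade-off from the example above.

def cost_of_delay(revenue_per_day, days_delayed):
    """Revenue forfeited by shipping later."""
    return revenue_per_day * days_delayed

def quality_worth_it(bug_cost_avoided, revenue_per_day, days_delayed):
    """True only if the bugs you'd avoid cost more than the lost head start."""
    return bug_cost_avoided > cost_of_delay(revenue_per_day, days_delayed)

# 8 days of polish forfeits $4,000,000 of feature revenue:
print(cost_of_delay(500_000, 8))                      # → 4000000
# Assuming the crashes only cost about $1M, "perfect" loses the comparison:
print(quality_worth_it(1_000_000, 500_000, 8))        # → False
```

This is the "greater than or less than" decision the essay keeps coming back to: once both sides are in dollars, the answer falls out of a single comparison.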
Personally, I have found that unit testing ends up speeding up deliveries on larger projects, but on smaller projects it definitely delays delivery.
One bit of horrible experience I have picked up over the years is that some great systems were built on truly terrible code and truly terrible architectures. The disasters were also legendary, but more often than not the cost of the disasters was still justified by the speed to market of the terrible system. Some systems were then recoded after disasters and made great, but often at the cost of many release cycles, resulting in a business disaster far greater than the disasters that prompted the recode. Often the best solution was to make the code base slightly less terrible and press on at full speed. I have seen this terrible code and it is just solid WTF, but when you look at the piles of money generated, you just get angry that your own "perfect" code didn't make you rich. As a counterpoint, I have also seen a system so terrible that the disaster took out the company; but even there, a slightly less terrible system would have saved it. (The example I am thinking of had no backups, so they lost everything: POS, inventory, customer lists, and codebase, leading to bankruptcy. With only the addition of backups, a terrible system would have continued to fuel an otherwise solid company.)
To give a more physical example: in WWII the Russians made terrible tanks; the welds were crude, the whole thing a bucket of bolts run by a badly trained crew. The German tanks were marvels of Teutonic precision: well-trained crews in great works of engineering. But the Russians churned out their tanks with great thick armor and ever bigger guns. The factories were famous in that some were within earshot of battles, and the virgin crews would climb into their unpainted, brand-new tank, their training largely gained as they drove to the nearby battle. The Russians ended up kicking the Germans' asses. If you were unaware of the historical outcome and looked at the numbers, the actual tanks, and the comparative experience and training of the crews, most people would agree that the German superiority was beyond question. But a counterpoint can be found in American tanks in the Iraq war: the Iraqis' Russian-made tanks were crude and again had the advantage in numbers, but the American technical superiority so overwhelmed them that I felt a tiny bit bad for the Iraqis. My conclusion is that engineering superiority is sometimes good and sometimes bad. You have to figure out the cost of each option, and you have to look at the benefit of each option. These costs can be very hard to pin down (such as the cost of it being hard to find good programmers willing to work on a bad project) or easy (the salary of a great programmer). Then you look at these money-based numbers, and the decision should be as easy as greater than or less than.