...and a good metric is whether the feature simplifies the code you are writing or makes it more complex.
The more orderly a system is, the more smoothly and reliably it will behave, with fewer unintended consequences. The easier it will be to maintain, and the less time it will take to understand. Simplicity truly is the essence of all that is good in a computer.
What is easily overlooked is that order requires effort to maintain. At zero effort, complexity will increase, and when you look at any system (or codebase), it becomes obvious where effort is lacking just from its complexity.
Another key intuition that is easily overlooked is that this principle holds true at every level, from user interfaces to performance optimizations to language features. At any given moment, if we find what we are doing is complicated, we must ask whether it can be done more simply. Windows 8 failed at the interface level to simplify the experience. HTML and CSS failed at the language level for most of their early versions, with browser incompatibilities and quirks.
A programmer's main job beyond implementation is to simplify the implementation. And anyone not actively maintaining order is inevitably going to be part of the problem.
System-wide default error handling would actually be useful. It is completely up to the programmer to make sure nothing unexpected happens, but that is true even without the feature. And it isn't as if there is no default already... it just isn't customizable system-wide or, better yet, per scope (in most languages).
This isn't about invalidating any logical tenets. It is still up to the code to know how to handle errors. Easier ways of specifying default behavior cannot be a bad thing.
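Some languages already get partway there. As a minimal sketch of the idea: Python's real `sys.excepthook` is a customizable process-wide default for unhandled exceptions, and a small context manager (hypothetical here, not a library feature) can play the role of a scope-level default.

```python
import sys
from contextlib import contextmanager

# Process-wide default: sys.excepthook is Python's replaceable handler
# for exceptions nothing else caught (normally: print traceback, exit).
def log_unhandled(exc_type, exc, tb):
    print(f"[default handler] {exc_type.__name__}: {exc}", file=sys.stderr)

sys.excepthook = log_unhandled

# Scope-level default: any exception raised inside the `with` block is
# routed to a handler chosen by the enclosing scope, not the call site.
@contextmanager
def scoped_default(handler, exc_types=(Exception,)):
    try:
        yield
    except exc_types as e:
        handler(e)

errors = []
with scoped_default(errors.append):
    raise ValueError("boom")  # caught and routed to the scope's handler
```

The point is not that this is hard, but that most languages make you build it yourself instead of offering it as a first-class, overridable default.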
This is a perfect example of the flawed interface design philosophy many tech giants fall prey to, and it boils down to "we know what you want better than you do".
To their credit, companies like Google and Microsoft and Facebook put their best minds behind these problems and come up with technically ingenious solutions. That's part of the problem. It must be correct and it must be better, because we worked so hard on it using proven methods. But people who know what they want find these products difficult to use, difficult to control, and even vaguely insulting.
The Facebook news feed is a triumph of machine learning, as is/was Microsoft's ribbon in interface design, and Google's contextualized search... They're built on solid research, mass user polling, hard big data, and the ambitious technical goals of competent engineers. Yet they can't get it right, because they keep looking at the problem while ignoring the people, often condescendingly so.
It takes understanding for users to have clear intentions. As others have said, if the user doesn't know anything about what they are searching for, Google does a good job of educating their guesses. And to their credit, these companies are successfully serving the inept majority. But anyone who keeps using these products inevitably develops clearer intentions, because with use, we naturally get smarter. That is why the more we use these tools, the more reasons we have to hate them. The more things we find we wish to do with these tools, the less accommodating we find them.
The technical solution is rather simple. Interfaces are intention driven, and if they're not driven by the intentions of the user, they are driven by the intentions of the developers. Hence, each feature can be tested for the intentions it serves, and those that serve the user must be added and made more prominent. An existing example in Facebook is the "don't show me posts from ___" feature. Others that don't exist would be listing entries in strict chronological order, or listing entries unfiltered. They could be simple checkboxes, and the implementation would be simple (boring, almost).
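To make "boring, almost" concrete, here is a hypothetical sketch (all names invented for illustration) of how each intention above becomes a plain boolean preference the feed pipeline consults directly:

```python
from dataclasses import dataclass, field

# Each user-facing intention is one flat, inspectable preference.
@dataclass
class FeedPreferences:
    chronological: bool = False   # "list entries in strict chronological order"
    unfiltered: bool = False      # "list entries unfiltered"
    muted_authors: set = field(default_factory=set)  # "don't show posts from ___"

def build_feed(posts, prefs, rank=None):
    # Mutes always apply: this intention serves the user directly.
    feed = [p for p in posts if p["author"] not in prefs.muted_authors]
    # Algorithmic ranking runs only if the user hasn't opted out.
    if not prefs.unfiltered and rank is not None:
        feed = rank(feed)
    if prefs.chronological:
        feed = sorted(feed, key=lambda p: p["time"])
    return feed

posts = [
    {"author": "alice", "time": 2},
    {"author": "bob", "time": 1},
    {"author": "carol", "time": 3},
]
prefs = FeedPreferences(chronological=True, muted_authors={"carol"})
feed = build_feed(posts, prefs)
```

Each checkbox maps to one field; no model, no inference of intent, just the user saying what they want.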
The technical solution is far easier than what really needs to happen, and that is a change in attitude and philosophy in the people building these products. They need to be more accommodating of user behavior and less insistent on dictating it. They need to stop thinking they know better. They need to stop judging their own solutions by their technical prowess. People who know what they want need to be able to choose, and for the most part, intentions are simple. Simple intentions call for simple selectable features. If this is too boring, maybe they need to stop using users as guinea pigs, quit their insanely high-paying jobs, and go back to academia, where they could do some really interesting work.
Science is the axiomatization of falsifiable statements as either true or false through reference to real events and experiments. Experiments are not necessary to generate falsifiable theories, and there are plenty of theories that are impossible or nearly impossible to test. Testing for the Higgs boson took 50 years and billions of dollars. The experiments don't necessarily come first.
Our past is an ample source of real evidence. What happened was real, and is just as useful as any event intentionally produced in a lab. But the true value of science is in controlling and predicting future events and consequences. And for this, experimental validation is not asking for much. If we can't reproduce it in a lab, it will never make it to an iPhone. Hence, irreproducible events are worthless to the pragmatic scientist and to every engineer. In most cases they're either not what they seemed, or beyond our current scope anyway. We'll either reach a point where we understand them later, or we'll find out the scientist was lying -- the latter has turned out to be quite common.
All this theorizing and hypothesizing is simply part of the initial process. What has no consequence will be unobservable and untestable and unusable anyway. But imagination is already a consequence which at minimum has great entertainment value. The next step -- and quite an important step -- is seeing if any of it will make it out of our imagination.
A list is only as strong as its weakest link. -- Don Knuth