The question, I think, is whether that decrease in reliability is an acceptable tradeoff for the increase in safety gained by ensuring that only the owner can fire the weapon.
If I'm already down, or not at home when a break-in occurs, I want my wife or my sons to be able to fire the weapon. And by the time the safety is complex enough to recognize multiple biometric signatures, it will have a higher-than-acceptable failure rate. Even single-signature devices would likely be more failure-prone than acceptable, especially since I can clear a jam and keep going. How do I override a biometric sensor failure? By design, being able to do that would be a flaw.
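To put rough numbers on that, here is a minimal sketch in Python. Every rate in it is an invented placeholder, not measured data; the point is only that a sensor failure, unlike a jam, cannot be cleared by the user, so its false-reject rate adds almost directly to the chance the weapon won't fire when needed, and plausibly grows with each enrolled signature.

```python
# Illustrative sketch of how a biometric safety compounds failure modes.
# All rates below are made-up placeholders, not measured data.

mechanical_failure = 0.01   # chance the weapon itself fails (jam, misfire)
jam_clearable = 0.9         # fraction of mechanical failures the user can clear

# A jam can usually be cleared, so the effective mechanical failure rate is low.
effective_mechanical = mechanical_failure * (1 - jam_clearable)

base_false_reject = 0.02    # assumed single-signature false-reject rate
extra_per_user = 0.01       # assumed added rejection risk per extra signature

def failure_when_needed(n_users: int) -> float:
    """Probability the weapon won't fire for an authorized user.

    Unlike a jam, a biometric rejection can't be cleared by the user,
    so the sensor's failure rate stacks directly on the mechanical one.
    """
    sensor_failure = base_false_reject + extra_per_user * (n_users - 1)
    return 1 - (1 - effective_mechanical) * (1 - sensor_failure)

for n in (1, 2, 4):
    print(f"{n} enrolled user(s): {failure_when_needed(n):.3%} "
          "chance it won't fire when needed")
```

Even with these charitable made-up numbers, the biometric lock dominates the mechanical failure modes it sits on top of.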
Today, we already have all but two of the sensors that would be required for the applications you posit (we lack thermometers and chemical-analysis sensors). As for reducing false alarms to zero, that is of course impossible without introducing a lot of error in the other direction (Google Type I and Type II errors), and the sensitivity of the sensors is subject to the same problem. Heartbeats are very low-amplitude signals that would be lost in the noise at any distance, so detecting them would introduce a lot of false positives. And writing the apps to do all the things you want, with real-world accuracy, is not going to be trivial. On the other hand, once the sensors are there, someone will undoubtedly try it.
In other words, your pie-in-the-sky set of examples is not that far from what is already possible, modulo the problem of balancing false negatives against false positives.
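That balancing act can be made concrete with a toy simulation. This is a minimal sketch, not a model of any real sensor: the signal and noise levels are invented, but it shows why pushing false alarms (Type I errors) toward zero on a weak signal necessarily inflates misses (Type II errors), and vice versa.

```python
# Toy demonstration of the Type I / Type II tradeoff for a weak-signal
# detector, e.g. picking a faint "heartbeat" out of sensor noise.
# Signal strength and noise level are invented for illustration.

import random

random.seed(42)

NOISE_SIGMA = 1.0   # background noise (arbitrary units)
SIGNAL = 0.3        # a weak signal, well below the noise floor
TRIALS = 100_000

def detect_rate(threshold: float, signal_present: bool) -> float:
    """Fraction of trials where the noisy reading exceeds the threshold."""
    offset = SIGNAL if signal_present else 0.0
    hits = sum(
        1 for _ in range(TRIALS)
        if random.gauss(offset, NOISE_SIGMA) > threshold
    )
    return hits / TRIALS

print(f"{'threshold':>9} | {'false pos (Type I)':>18} | {'false neg (Type II)':>19}")
for threshold in (0.5, 1.0, 1.5, 2.0):
    fp = detect_rate(threshold, signal_present=False)      # alarm, no signal
    fn = 1 - detect_rate(threshold, signal_present=True)   # missed real signal
    print(f"{threshold:>9.1f} | {fp:>18.3f} | {fn:>19.3f}")
```

With a signal this far under the noise floor, no threshold makes both error rates small at once; raising it trades false alarms for misses. That is exactly the heartbeat-at-a-distance problem.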
Sorry Mitt, but corporations ain't people.
A long line of court cases disagrees with you, and for very good reasons (including, for example, the ability to enter into legally binding contracts).