You're advocating one specific quantitative performance metric: the length of the publication list. To move into a realm I'm more familiar with: people have been trying for years to come up with quantitative metrics for how good programmers are. It doesn't work. First, every such metric I'm familiar with fails to reward the right things. Second, every such metric gets gamed.
What you want a scientist to do is come up with good, innovative science. The publication list penalizes innovation, since innovations don't always pan out, and rewards scientists who only undertake things they're pretty sure will work. A scientist simply may not be able to afford to investigate something that will take time and might not produce a positive result.
In this case, we saw the system get gamed by outright cheating, but there are ways to game it that don't involve actual cheating. I haven't seen anybody suffer for taking the "least publishable unit" approach, and I remember one paper that left me awestruck at the precision with which the authors pulled a single result out of a research program and got a peer-reviewed journal to publish it. It was an interesting result, but it was precisely 1.000 LPU.
In my brief journey through academia, it seemed that the publication list was used a whole lot. There are variations, such as citation counts and journal impact factors, but these are closely related to the length of the list. This isn't healthy.
We're talking about a whole lot of highly intelligent and creative people here. Why aren't there other measures? Why are you expecting a non-scientist to come up with one?