It's a problem, though, because there's no simple metric to determine whether patent examiners are doing a good job. Using the number of patents reviewed as that metric encourages examiners to do a shoddy job of actually examining them (which is what has actually been happening). Expecting them to pass only a certain fraction of patents is slightly better, since it forces them to come up with reasons to reject some, but what fraction should that be? Two examiners doing perfect jobs may have very different acceptance rates simply because one got better patents to review than the other, especially if they have different focus areas. Does the patent office even know what fraction of submitted patents in various areas is good? A better metric would be whether accepted patents survive in the courts, but that depends on somebody actually challenging the patents and takes years after the fact. It might still help now to throw out some of the patent examiners who clearly haven't been doing their jobs in the past.
I'm not sure what the right solution is. Blind peer review and multiple review? Assign each patent to 2 or 3 different reviewers and call on the carpet the ones who most consistently differ from the others? Does that even work if half your patent examiners are shirking?
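Out of curiosity, that last question can be checked with a toy simulation. Everything here is a made-up model, not real patent-office data: assume half the examiners are honest (they reject bad applications) and half are shirkers who rubber-stamp everything, assume half of applications are bad, and give each patent to a random panel of 3. Then measure how often each examiner's decision differs from their co-reviewers':

```python
import random

random.seed(0)

NUM_EXAMINERS = 20    # hypothetical: half honest, half shirking
NUM_PATENTS = 5000
BAD_FRACTION = 0.5    # hypothetical fraction of bad applications

honest = set(range(NUM_EXAMINERS // 2))  # examiners 0-9 do real reviews

def decide(examiner, patent_is_bad):
    """Honest examiners reject bad patents; shirkers approve everything."""
    if examiner in honest:
        return not patent_is_bad  # True = approve
    return True                   # shirker rubber-stamps

disagreements = [0] * NUM_EXAMINERS
comparisons = [0] * NUM_EXAMINERS

for _ in range(NUM_PATENTS):
    is_bad = random.random() < BAD_FRACTION
    panel = random.sample(range(NUM_EXAMINERS), 3)
    votes = {e: decide(e, is_bad) for e in panel}
    for e in panel:
        others = [votes[o] for o in panel if o != e]
        disagreements[e] += sum(v != votes[e] for v in others)
        comparisons[e] += len(others)

rates = [disagreements[e] / comparisons[e] for e in range(NUM_EXAMINERS)]
honest_rate = sum(rates[e] for e in honest) / len(honest)
shirker_rate = sum(rates[e] for e in range(NUM_EXAMINERS)
                   if e not in honest) / (NUM_EXAMINERS - len(honest))
print(f"avg disagreement rate, honest examiners:  {honest_rate:.2f}")
print(f"avg disagreement rate, shirking examiners: {shirker_rate:.2f}")
```

Under these assumptions the two groups end up with nearly identical disagreement rates: on a bad patent, an honest examiner disagrees with every shirking co-reviewer and a shirker disagrees with every honest one, so at a 50/50 split the metric is symmetric and can't tell the groups apart. Drop the shirking fraction below half and the shirkers do start to stand out, which suggests disagreement-based flagging only works while most examiners are still doing their jobs.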