The first story, about "on a computer" patents getting invalidated, is a good thing. But the second story is perhaps even more important: people are taking notice that patent examiners are not doing their jobs. Too many of them work one day a week (or month), rubberstamp their quota of patents, allow anything whatsoever through the system, and falsely report that they worked full time and even overtime, because a corrupt culture lets them get away with it. Exposing this could lead to mass firings and some sort of system that ensures real accountability.
It's a problem, though, because there's no simple metric for whether patent examiners are doing a good job. Using the number of patents reviewed as that metric encourages examiners to do a shoddy job of actually examining them (i.e. what has actually been happening). Expecting them to pass only a certain fraction of patents is slightly better, since it forces them to come up with reasons to reject some, but what fraction should that be? Two examiners doing perfect jobs may have very different acceptance fractions simply because one got better patents to review than the other, especially if they have different focus areas. Does the patent office even know what fraction of submitted patents in various areas are good? A better metric would be whether accepted patents survive in the courts, but that depends on somebody actually challenging the patents and takes years after the fact. It might still help to throw out some of the patent examiners who clearly haven't been doing their jobs in the past.
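The acceptance-fraction point is easy to demonstrate with a toy simulation. The per-area quality rates below are made-up numbers purely for illustration; the point is that two examiners who both judge every application perfectly can still show wildly different approval rates just because their queues differ.

```python
import random

random.seed(1)

# Hypothetical quality rates: the fraction of submissions in each
# focus area that genuinely deserve a patent (made-up numbers).
GOOD_RATE = {"software": 0.10, "pharma": 0.45}

def perfect_examiner(area, n):
    """Approve exactly the applications that deserve it."""
    approved = sum(random.random() < GOOD_RATE[area] for _ in range(n))
    return approved / n

sw = perfect_examiner("software", 10_000)
ph = perfect_examiner("pharma", 10_000)
print(f"software examiner approval rate: {sw:.1%}")
print(f"pharma examiner approval rate:   {ph:.1%}")
# Both examiners are flawless, yet their approval fractions differ
# by a factor of four -- a fixed target fraction would punish one.
```

Any fixed quota would either let the software examiner rubberstamp junk or force the pharma examiner to reject good applications.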
I'm not sure what the right solution is. Blind peer review and multiple review? Assign each patent to 2 or 3 different reviewers and call on the carpet the ones who most consistently differ from the others? Does that even work if half your patent examiners are shirking?
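The multiple-review idea can be sketched mechanically: flag reviewers who disagree with the per-patent majority unusually often. This is a minimal sketch with hypothetical data and a made-up threshold, and it illustrates the caveat above too; the majority verdict is only a useful baseline if honest reviewers outnumber shirkers on each patent.

```python
from collections import defaultdict

def flag_outlier_reviewers(reviews, threshold=0.25):
    """Return reviewers whose disagreement rate with the per-patent
    majority exceeds `threshold`.

    reviews: dict mapping patent_id -> {reviewer: verdict}, where the
    verdict is True (approve) or False (reject), 2-3 reviewers each.
    Only meaningful if most reviewers on each patent are honest.
    """
    disagreements = defaultdict(int)
    counts = defaultdict(int)
    for verdicts in reviews.values():
        votes = list(verdicts.values())
        majority = sum(votes) * 2 > len(votes)  # ties count as reject
        for reviewer, verdict in verdicts.items():
            counts[reviewer] += 1
            if verdict != majority:
                disagreements[reviewer] += 1
    return {r for r in counts
            if disagreements[r] / counts[r] > threshold}

# Toy data: reviewer "c" rubberstamps everything.
reviews = {
    1: {"a": False, "b": False, "c": True},
    2: {"a": True,  "b": True,  "c": True},
    3: {"a": False, "b": False, "c": True},
    4: {"a": False, "b": False, "c": True},
}
print(flag_outlier_reviewers(reviews))  # -> {'c'}
```

Of course, if half the examiners rubberstamp, they form the majority on many patents and this check flags the diligent ones instead, which is exactly the failure mode the last question points at.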