Don't get me started on the "absence of evidence" line. It's a BS saying used by people who want to support a particular claim without having any way to show others they're not just making stuff up. This isn't even absence of evidence, though. It's poorly stated evidence, or possibly even misleadingly stated evidence.
When a company that is "an exceptionally competent developer of high performance, low-level graphics software" fails to communicate test results in a way that readers of such a developer's blog can meaningfully compare, it calls into question either their competence or the content of their communication. A competent group that notices "hey, that's really cool, we got a higher average FPS" and also notices "that's weird, there are stretches of the benchmark where the actual FPS drops to 15-20" might choose to publicize the first result and bury the second in order to drum up support. Reporting the average FPS alone does nothing to assure us that the performance is better for an end user; other considerations include micro-stutter and periods of low playability.
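To make that concrete, here's a toy sketch (with made-up frame times, not their data) showing how a run full of ugly stutters can post an even *higher* average FPS than a perfectly smooth one:

```python
# Hypothetical illustration: two runs, the stuttery one "wins" on average FPS.

def avg_fps(frame_times_ms):
    """Average FPS over a run, computed from per-frame render times in ms."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

def worst_frame_ms(frame_times_ms):
    """Slowest single frame in the run (a crude stutter indicator)."""
    return max(frame_times_ms)

# Run A: perfectly steady 60 FPS (about 16.67 ms per frame).
steady = [1000.0 / 60.0] * 600

# Run B: mostly very fast 10 ms frames, punctuated by 50 ms stutters
# (a 50 ms frame is 20 FPS territory while it lasts).
stuttery = ([10.0] * 9 + [50.0]) * 60

print(avg_fps(steady), worst_frame_ms(steady))      # steady run
print(avg_fps(stuttery), worst_frame_ms(stuttery))  # stuttery run has higher avg FPS
```

Run B averages out above 70 FPS despite regular drops into the 20 FPS range, which is exactly why a single average is not enough.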
Additionally, such "obvious test practices" really do need to be spelled out, and conformance results reported, for a reader to even infer a meaningful comparison. With the system they described, the results mean something completely different if they were rendering to a single 1920x1080 display than if they were driving one, two, or three 2560x1600 displays. I (and others in the field, or even just interested in it) would like that information so the numbers are meaningful, and not merely an "ePeen number".
Perhaps it's due to [H]ard|OCP that I've come to expect better benchmark reporting, but what is shown here disappoints. A test case can be designed to measure any number of factors, so simply stating that it's for "testing" suggests unfamiliarity with how graphics testing is actually done. Those factors include, but are not limited to: conformance, pixel throughput, vertex throughput, fill rate, average FPS, memory utilization, GPU utilization, CPU utilization, inter-frame jitter, and average dropped FPS. Not all of these need to be reported every time, but some of them are linked and should be reported together.
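Several of those frame-delivery metrics fall straight out of the raw frame-time trace, so there's no excuse not to report them together. A rough sketch (metric definitions are my own illustrative choices, e.g. using the standard deviation of frame times as "inter-frame jitter" and a "1% low" FPS over the slowest frames):

```python
import statistics

def report(frame_times_ms):
    """Summarize a frame-time trace (per-frame render durations in ms)
    with a few linked metrics that belong together in a write-up."""
    n = len(frame_times_ms)
    avg_fps = 1000.0 * n / sum(frame_times_ms)
    # Inter-frame jitter: population std deviation of frame times.
    jitter_ms = statistics.pstdev(frame_times_ms)
    # "1% low" FPS: average FPS over the slowest 1% of frames.
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    one_pct_low_fps = 1000.0 * len(worst) / sum(worst)
    return {"avg_fps": avg_fps, "jitter_ms": jitter_ms, "1%_low_fps": one_pct_low_fps}

# Fabricated trace: 990 fast frames plus 10 slow ones.
trace = [12.0] * 990 + [55.0] * 10
print(report(trace))
```

An ~80 FPS average with a sub-20 FPS "1% low" tells a very different story than the average alone, which is the whole point.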
So yeah, one factor is marginally faster, and the exercise was useful for finding a previously missed bit of overhead. Cool, great work, etc. But don't report a single number that may or may not be misleading (and will obviously be touted by some as proof that platform or API set x is better than y) without giving the results meaningful context. As you said, Wraithlyn, they're experts. How about they demonstrate that in a way that's meaningful to the other experts who might gain some insight from their blog?