I used to run AMD's consumer benchmark group during the K6, K7, and K8 days. I'm not sure what you mean by "unbiased reports", but I can tell you that the process the company went through to create and execute benchmarks was remarkably fair. In the time I was there, the company ran benchmarks on any application that met three key requirements:
1) repeatable results
2) relevant software
3) practical to benchmark
So this meant that canned benchmarks such as Winstone were a great option for looking at MS Office productivity software. We spent a lot of time trying to figure out how PC Magazine weighted the various MS Office applications within the benchmark, and I hit upon a way to do this: by varying core frequency across benchmark runs, we could build a multi-dimensional array of scores vs. frequencies and determine that Word was x%, Excel was x+5%, and so on. We came up with a likely weighting scheme, although I don't recall what became of that work.
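For anyone curious, here is a minimal sketch of how that inference can work, assuming the composite score is a weighted sum of per-application scores and that each application responds differently to core frequency. Every name and number below is made up for illustration; this is the shape of the idea, not AMD's actual data or tooling.

```python
import numpy as np

# Core frequencies (MHz) at which the benchmark was rerun (assumed values).
freqs = np.array([500, 600, 700, 800, 900])

# Per-application scores measured at each frequency, one column per app
# (Word, Excel, PowerPoint). Fabricated numbers chosen so the apps scale
# differently with frequency, which is what makes the weights recoverable.
app_scores = np.array([
    [10.0, 12.0,  8.0],
    [11.5, 14.8,  8.7],
    [13.0, 17.2,  9.6],
    [14.5, 19.4, 10.3],
    [16.0, 22.0, 11.2],
])

# Composite scores reported by the benchmark at the same frequencies,
# built here from a hidden "true" weighting just for the demo.
composite = app_scores @ np.array([0.5, 0.3, 0.2])

# A least-squares fit over the score matrix recovers the weighting scheme.
weights, *_ = np.linalg.lstsq(app_scores, composite, rcond=None)
print(dict(zip(["Word", "Excel", "PowerPoint"], weights.round(3))))
```

With real runs the composite would be noisy rather than exact, but least squares degrades gracefully; the main requirement is enough frequency points that the per-app response curves aren't all parallel.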
In the consumer space, the other big hitter is obviously games. During my tenure, AMD used many or most of the same gaming benchmarks that were in vogue with FiringSquad, Tom's Hardware, AnandTech, Sharky Extreme, etc. There was nothing nefarious about the work we did, nothing biased. We looked at these applications with equal weighting and determined that for a given frequency of a relevant, competing Intel CPU, there was an AMD offering that, on balance, performed equally well or better at a lower frequency. That processor was then given a model name such as 1800+, meant to convey that it compared favorably to an Intel 1.8GHz CPU.
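The arithmetic behind a rating like that is simple enough to sketch. Assuming equally weighted game scores and linear interpolation between measured Intel clock speeds (all figures below are invented for illustration, not our actual results):

```python
import numpy as np

# Equally weighted average of per-game scores for one AMD part (fps).
amd_mean = np.mean([142.0, 98.0, 61.0, 120.0])

# Intel clock speeds (MHz) and the equally weighted mean score at each.
intel_freqs = np.array([1500, 1600, 1700, 1800, 1900])
intel_means = np.array([96.0, 100.0, 104.0, 105.5, 109.0])

# Find the Intel frequency whose mean score the AMD part matches.
# np.interp needs the score axis to be increasing, which it is here.
equivalent_mhz = np.interp(amd_mean, intel_means, intel_freqs)
print(f"Model rating: {round(equivalent_mhz, -2):.0f}+")
```

In this toy case the AMD part's average lands between the 1.7GHz and 1.8GHz Intel scores, so rounding to the nearest hundred yields an 1800+ rating.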
In the days that my group did this work, AMD made a point of publicizing this process and went so far as to have it vetted under the direct supervision of a third-party auditing firm, one of the big-four industry auditors. It was painstaking work to demonstrate that software load order and procedure were identical for the AMD and Intel parts. When a benchmark completed, we showed the score to the auditor. Sometimes a stray hard-disk latency event would throw a score off for either product. We would work with the auditor to show that the run was an outlier against the otherwise repeatable values and toss it in favor of another run.
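To give a flavor of what "show it's an outlier" can mean in practice, here is a rough sketch of one standard screen, a modified z-score on the median absolute deviation. The threshold and the scores are illustrative; I'm not claiming this was the actual audit criterion.

```python
import statistics

def flag_outliers(scores, threshold=3.5):
    """Split runs into (kept, tossed) using a modified z-score on the MAD."""
    med = statistics.median(scores)
    mad = statistics.median(abs(s - med) for s in scores)
    if mad == 0:  # all runs identical: nothing to toss
        return list(scores), []
    kept, tossed = [], []
    for s in scores:
        z = 0.6745 * (s - med) / mad  # Iglewicz-Hoaglin modified z-score
        (tossed if abs(z) > threshold else kept).append(s)
    return kept, tossed

# Five nearly identical runs plus one hit by a stray disk-latency event.
print(flag_outliers([101.2, 101.4, 101.1, 101.3, 94.6, 101.2]))
```

The median-based test is handy here precisely because the good runs cluster tightly, so a single disk-latency hiccup stands out without dragging the baseline with it the way a mean-based test would.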
Others in this Slashdot post have complained about heat dissipation. My team was solely concerned with instructions per second; performance per watt was not a concern for us. I do vaguely recall that it may have been a factor for the server team. Based on reading the occasional tech article here and there, my guess is that AMD has since made some important progress on power management.