Maybe he's right, maybe he's wrong, but he certainly doesn't fall under the category of "reliable and unbiased"
He's commenting on how it's easy to de-duplicate and filter down emails based on some very simple parameters (e.g. "Is this in the existing database?" and "Is this to/from Clinton?") and how that would cull down the number of remaining emails to a reasonable level which a small team could easily sort through.
Having said that, his explanation just muddies the waters further. If they could parse the emails this quickly, then why did it take months to do the initial assessment?
It doesn't muddy the waters at all. The first process was to go through all the emails, go back to the original sources to find out if the information was ever classified and when, and then hand it off to various other agencies so they could determine whether they needed to retroactively classify and redact the emails before releasing them under the numerous FOIA requests. The initial part of the investigation also involved determining whether the server's security had ever been compromised, which meant a fairly deep forensic analysis and going through reams of logs. That's much more time-intensive than simply asking, "have we seen this before, is it to/from Clinton, and is it of a personal nature?"