A double-blind study is designed to prevent both the placebo effect and experimenter bias.
There's yet another reason to run a double-blind study that has little to do with the results.
If you're a physician administering a placebo to a patient in a drug trial and they die or worsen significantly, it's important that you never know which arm they were in. This is especially true if it turns out that the drug is very effective. And when the experimenter and the subject can communicate nonverbally and instinctively, the combined effects of researcher and subject bias are another extremely important reason to do double-blind tests in some circumstances.
None of these things apply to subjecting trees to objectively measurable stuff and reporting your methods and results in a reasonably objective way. A certain interest in a question and an expectation about the outcome lead one to do experiments in the first place, but that doesn't mean that every test must be carefully tuned to eliminate *any possibility* of researcher bias.
In situations where people are doing controlled, *easily reproducible* experiments and present results based largely on objective measures (in this case, things like biomass, leaf size, fraction of diseased leaf, etc), it's a waste of resources to do blind tests.
Researcher DISHONESTY can still be a problem, but that will ALWAYS be a problem, blind tests or no. People fake data, sabotage results... whatever. That's why independent confirmations are important wherever possible. Working in large groups with blind tests helps reduce this possibility when many independent confirmations are impossible (since it's hard to hold together an actual conspiracy).
But for a single scientist working with reasonably objective methods, it's not worth it. To do a blind test **by yourself** you would have to set up a really convoluted system to hide the methodology from YOURSELF. This is unnecessary if you're doing science right. Some people are prone to confirmation bias. They are bad scientists. They're barely scientists.
Haggerty's paper suggests she's a scientist.
I don't like this result. I don't like it at all. But unless the data were falsified, there's no problem with the method. There are other things that need to be controlled for. It might be something chemically relevant about the aluminum screen. I've found some papers that suggest that the color of a screen can repel certain insects that can pass through the holes anyway... so there could be a difference between fiberglass and aluminum in keeping insects out.
But my concern over the public relations potency of these results and the hypotheses I have about "other factors" are not science. Testing my hypotheses as objectively as I could and presenting the results would be science.