You're asking a couple of distinct, and reasonable, questions. About "blind testing" -- I don't know the details for this particular result, but particle physicists put quite a bit of effort into making sure they aren't fooling themselves. One of the best ways of doing this is so-called "blind analysis": you define your entire data-analysis strategy based solely on simulated data. There are pretty good simulations available of both the expected backgrounds and of the process you are actually trying to find (the signal), so you fix all of the methods you are going to use with these simulations before you ever look at the real data. This ensures that you don't bias yourself into "finding something" in the data that isn't really there. (I don't know whether a strict blinding procedure was used for this analysis, but likely something similar was done.)
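To make the idea concrete, here is a toy sketch of the blinding logic. Everything here is made up for illustration (the function names, the "observable" values, and the simple s/&#8730;b figure of merit); a real CMS analysis is vastly more sophisticated, but the key point is the same: the selection is frozen using simulation alone, and the real data is only counted afterward.

```python
import math

def significance(n_sig, n_bkg):
    # Simple s/sqrt(b) figure of merit; the +1 avoids division by zero.
    return n_sig / math.sqrt(n_bkg + 1)

def optimize_cut(sim_signal, sim_background, thresholds):
    """Pick the selection threshold with the best expected significance,
    using SIMULATED events only -- the real data is never consulted."""
    return max(thresholds,
               key=lambda t: significance(
                   sum(x > t for x in sim_signal),
                   sum(x > t for x in sim_background)))

def unblind(data, cut):
    """Only after the cut is frozen do we count events in real data."""
    return sum(x > cut for x in data)

# Made-up 'observable' values for simulated signal, simulated background,
# and the (initially blinded) data:
sim_signal = [5.2, 5.5, 5.8, 6.0, 6.1]
sim_background = [1.0, 2.0, 3.0, 4.0, 5.1, 5.3]
data = [0.9, 2.1, 5.4, 5.9, 6.2]

cut = optimize_cut(sim_signal, sim_background, thresholds=[3.0, 4.0, 5.0])
observed = unblind(data, cut)
```

Because `optimize_cut` only ever sees the simulated samples, there is no way for a statistical fluctuation in the data to sneak into the choice of selection, which is exactly the bias blinding is designed to prevent.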
The formal peer review system comes into play now that the result has been submitted to a journal. The paper will be distributed to anonymous referees, who will judge the merits of the physics and decide whether it warrants publication. But I should note that the peer review process in modern particle physics actually starts long before a result is made public. Although there are only 3 or 4 main analysts, the paper is signed by the entire 3000-person CMS Collaboration (of which I am a member), so we have a very stringent internal review process to ensure that the result is sound before we release it with 3000 names taking responsibility. That doesn't mean particle physics collaborations never make mistakes, but it does mean that results are scrutinized by a number of more or less unbiased eyes before they are made public.