I do not understand the latter part of your question, but the first part is easy:
The mathematical procedure employed in the tests is exactly the same for all experiments that we do. It even had to be agreed upon between the two large LHC experiments (ATLAS and CMS), in discussions that took many months to pin down all the details.
On the other hand, the data that is used is different (i.e. we get different views from different selections made on the data, some selecting b quarks, others W bosons, others photons, etc.), the teams of people are different, the software platforms are different, the hardware in the experiments is different, the organizational structures are different, the collisions are different, etc., etc.
Since so much is different and independent, the fact that all the results point in one direction (namely, that the probability that the observation in the data is due to what we already know is smaller than about 1 in 3 million) gives us a lot of confidence in the claim.
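For context, the "1 in 3 million" figure corresponds roughly to the conventional 5-sigma discovery threshold in particle physics: the one-sided tail probability of a Gaussian distribution five standard deviations out. A minimal sketch of that arithmetic (my own illustration only; the experiments' actual statistical procedure uses far more elaborate likelihood-based methods):

```python
import math

def one_sided_p_value(n_sigma: float) -> float:
    """One-sided Gaussian tail probability of an n-sigma excess.

    Uses the identity P(Z > n) = erfc(n / sqrt(2)) / 2
    for a standard normal variable Z.
    """
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

# The 5-sigma threshold gives a p-value of roughly 2.9e-7,
# i.e. about 1 chance in 3.5 million that a background
# fluctuation alone would produce such an excess.
p = one_sided_p_value(5.0)
print(f"p-value at 5 sigma: {p:.3e} (about 1 in {1 / p:,.0f})")
```

Note that the p-value says how unlikely the data is if there were no new particle; it is not directly the probability that the discovery claim is wrong.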