I don't think ESP is real either, but the journal editors had first-class reasons to reject the replication-failure paper.
The sample size of each replication was 50. They tried 3 times, for a total of 150. It is very hard to prove a null hypothesis--this is not the same as failing to support a research hypothesis. Roughly, the quality of support for a research hypothesis is measured in terms of Type I error, which is assessed by p levels (e.g., p < .05). The quality of support for a null hypothesis (and not everyone agrees that this is possible in principle) is measured in terms of Type II error, or the power of a statistical test. The power of a test depends on the sample size, the expected effect size, and which statistic (e.g., r, t) is in use.
A replication test of the original ESP paper must have substantial power because the true effect size is, well, presumably zero. The fair design would be one powered to detect even a tiny effect, and that requires far more than N=50. Running the same underpowered study three times doesn't help very much, and even N=150 wouldn't be decisive.
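To make the power point concrete, here is a minimal sketch using the standard normal approximation to the power of a two-sided one-sample test. The choice of d = 0.2 (a "small" effect in Cohen's terms, roughly the size of effect at issue in such studies) and the alpha = .05 criterion are my assumptions, not figures from the papers discussed:

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d, n, z_crit=1.959964):
    """Approximate power of a two-sided one-sample z/t test
    for standardized effect size d at sample size n, alpha = .05.
    (Normal approximation; a hypothetical helper for illustration.)"""
    nc = d * sqrt(n)  # noncentrality under the alternative
    return (1.0 - normal_cdf(z_crit - nc)) + normal_cdf(-z_crit - nc)

# Power to detect a small effect (d = 0.2) at the sample sizes in question:
for n in (50, 150, 1000):
    print(n, round(approx_power(0.2, n), 3))
```

With d = 0.2, N=50 yields power of only about 0.29, and even N=150 stays under the conventional 0.80 threshold, which is the sense in which neither the individual replications nor the pooled total are decisive.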
The journal in question is one of the most prominent in psychology. Whether they publish replications or not (and they do--replications aren't done for their own sake; they're implicit in follow-up studies), they certainly shouldn't publish bad ones.