To be totally fair to the grandparent poster, the designs of both Chernobyl and Three Mile Island were vulnerable to multiple points of mechanical failure (Chernobyl didn't even have a containment building!), and even these dated designs would have held up but for the human error involved. Remember, if the staff at Chernobyl had actually followed their procedures and hadn't been conducting a test with improper staffing, the accident never would have happened. And in the case of TMI, if the indicator lamp in the control room had indicated valve position, rather than the presence of power across the actuator solenoid, the operators would have known the valve was stuck open and been aware that they were facing a loss of coolant.
Furthermore, in terms of overall manufacturing experience, humanity does not have the level of expertise with nuclear reactors that we have with, say, cars or airplanes or computers. To have only two major failures out of the first 1000 units built is pretty impressive for any device.
Then again, how do you measure "reliability" here? Does one failure doom a device to the "failure" column forever, even if it operated flawlessly for years beforehand? And what constitutes a "failure", anyway? Any escape of radiation to the atmosphere? Escape of radiation above a certain level? Or something less serious than a radiation release? Measured in "dangerous" radiation releases per operating hour, the GP is probably right that accidents are a seven-sigma phenomenon.
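To see how much the choice of denominator matters, here's a quick sketch. It takes the "two major failures out of the first 1000 units" figure from above and converts it to a sigma level two ways: per unit built, and per operating hour. The exposure base (`reactor_hours`) is entirely made up for illustration, so the second number is only indicative:

```python
# Rough sketch: how the choice of denominator changes the
# "sigma" rating implied by two major failures.
from statistics import NormalDist

inv = NormalDist().inv_cdf  # standard normal inverse CDF

# Per-unit basis: 2 major failures among the first 1000 units built.
p_unit = 2 / 1000
z_unit = inv(1 - p_unit)  # roughly three sigma

# Per-operating-hour basis: assume (hypothetically) 1000 reactors
# averaging 20 years of operation each.
reactor_hours = 1000 * 20 * 8766  # 8766 h ~ one year, incl. leap days
p_hour = 2 / reactor_hours
z_hour = inv(1 - p_hour)  # well past five sigma

print(f"per unit built: {z_unit:.1f} sigma")
print(f"per operating hour: {z_hour:.1f} sigma")
```

Same two accidents, wildly different reliability figures, which is exactly why the definition question matters.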
Of course, this is complicated stuff. If it were easy, we wouldn't be having this discussion, and you do have a pretty good point that real-world results are what matter here. The consequences of failure are severe, and "only" three-sigma reliability isn't good enough. But we've learned very important lessons from both major accidents, and current designs take those lessons into account. Future designs will, too.