Why do things at a granularity of 100,000 lines? Why not have the quantum computer do the 'repeat x times and pick the most common answer' step at *each* instruction? It introduces exactly the same slowdown factor, and vastly reduces the chance of error propagation.

Conventional computers already have a certain amount of error that creeps in. Suppose that during a single tick of the clock there is some probability P of an error occurring. Find the 'repetition factor' n at which the quantum computer guarantees no more than that same error probability P per step, and you've got a quantum computer that gives you exact answers (over arbitrarily long programs) with the same chance of error as a conventional computer having run the same number of instructions. It becomes an issue of speed, rather than of errors.

Having to repeat each fundamental step some large number of times slows down the computer. (To get a 1/10^40 chance of error, repeat each calculation ~400 times, given that any individual calculation has a 21% chance of failure.) However, this overhead will shrink as the underlying hardware is improved, which it invariably will be (given more years of development time, I'm sure error rates will be reduced from 21% to far below 1%).
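To sanity-check those numbers, here's a minimal sketch (my own illustration, not anything specific to quantum hardware): model each run as an independent trial that fails with probability p, take the majority vote over n runs, and compute the exact probability that the majority is wrong. Exact rational arithmetic is used because the probabilities involved are far too small for floating point.

```python
from fractions import Fraction
from math import comb, log10

def majority_error(n, p):
    """Exact probability that a majority vote over n independent runs
    gives the wrong answer, when each run fails independently with
    probability p (n must be odd, so there is always a strict majority)."""
    half = (n + 1) // 2
    return sum(Fraction(comb(n, k)) * p ** k * (1 - p) ** (n - k)
               for k in range(half, n + 1))

def log10_of(x):
    """log10 of an exact Fraction that is too small to convert to a float."""
    return log10(x.numerator) - log10(x.denominator)

p = Fraction(21, 100)  # the 21% per-run failure rate from the text
for n in (3, 101, 301, 449):
    print(n, round(log10_of(majority_error(n, p)), 1))
```

On the independence assumption, the majority vote's error crosses below 1/10^40 somewhere in the low-to-mid hundreds of repetitions, so '~400' is the right order of magnitude.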

Many real-world problems are already built around this notion, whereby we only know the answer to be correct with a high degree of confidence (see 'Monte Carlo algorithms'). The underlying process only needs to be right more often than it's wrong (it could be 50% + epsilon versus 50% - epsilon), and we can keep repeating the calculation until we reach any accuracy we wish.