Why do things at a level of granularity of 100,000 lines? Why not get the quantum computer to do the 'repeat x times and pick the most common answer' trick at each instruction? It'll introduce exactly the same slowdown factor, and vastly reduce the chance of error propagation.
Conventional computers already have a certain amount of error that creeps in. Suppose that during a single tick of the clock there is some chance P of an error occurring. Find the 'repetition factor' n such that the quantum computer guarantees no more than that same amount of error P, and you've got a quantum computer that gives you exact answers (over arbitrarily long programs) with the same chance of error as a conventional computer running the same number of instructions. It becomes an issue of speed rather than of errors.
Having to repeat each fundamental step some large number of times slows the computer down. (To get a 1/10^40 chance of error, you have to repeat the calculation ~400 times when any individual calculation has a 21% chance of failure.) However, this cost will shrink as the underlying hardware is improved, which it invariably will be (given a few more years of development time, I'm sure error rates will drop from 21% to far below 1%).
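You can check the "~400 repetitions" figure directly. Here's a minimal sketch, assuming independent runs with the 21% per-run failure rate quoted above (the function names are my own, just for illustration):

```python
from math import comb

def majority_error(runs, p_fail):
    """Exact probability that the majority of `runs` (odd) independent
    runs is wrong, for a per-run failure probability p_fail."""
    p_ok = 1 - p_fail
    n = runs // 2  # majority wrong means at most n runs succeeded
    return sum(comb(runs, k) * p_ok**k * p_fail**(runs - k)
               for k in range(n + 1))

def repetitions_needed(target, p_fail=0.21):
    """Smallest odd repetition count whose majority-vote error
    falls below `target`."""
    runs = 1
    while majority_error(runs, p_fail) > target:
        runs += 2
    return runs

# On the order of the ~400 repeats quoted above for a 1/10^40 error
print(repetitions_needed(1e-40))
```

Note that the error must be summed from the tail side (probability of a wrong majority) rather than computed as 1 minus the success probability, since a float can't represent a difference of 10^-40 from 1.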
Many real-world problems are already built around this notion, whereby we only know the answer to be correct with a high degree of confidence (see 'Monte Carlo algorithms'). The underlying process only needs to be correct more often than it's wrong (it could be 50% + epsilon versus 50% - epsilon), and we can just keep repeating the calculation until we reach any arbitrary accuracy we wish.
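The 50% + epsilon point is easy to demonstrate by simulation. A minimal sketch (the oracle and its 55% success rate are made up for illustration; majority voting over repeated runs is the technique being shown):

```python
import random

def noisy_oracle(p_correct, rng):
    """One run of an unreliable yes/no computation: returns the right
    answer (True) with probability p_correct, assumed 0.5 + epsilon."""
    return rng.random() < p_correct

def amplified(p_correct, runs, rng):
    """Repeat an odd number of runs and take the majority vote."""
    wins = sum(noisy_oracle(p_correct, rng) for _ in range(runs))
    return wins > runs // 2

rng = random.Random(42)
# A process that is right only 55% of the time, amplified by voting
# over 5001 repetitions, is right essentially every time.
trials = [amplified(0.55, 5001, rng) for _ in range(100)]
print(sum(trials), "/ 100 majority votes gave the right answer")
```

The amplification is exponential in the number of repetitions, which is why even a tiny epsilon is enough in principle.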
No, it's actually a perfectly reasonable idea. Consider running the device (n+m) times. The probability of it being right n times and wrong m times is given by:
P(n,m) = (n+m)!/(n! m!) * 0.79^n * 0.21^m
Now consider the probability of it being right (the majority has the right answer) out of 2n+1 trials. This is given by:
S(n) = sum( P(n+1+i,n-i), i=0..n )
This can be simplified to a closed form using Legendre and gamma functions, but that's kind of messy and it's far easier to just plug in values and do the summation. As it turns out, doing the experiment 15 times and taking the majority (plugging n = 7 into S(n)) will give you the correct answer 99.4% of the time. Doing things 35 times gets you to about four nines of accuracy... completely reasonable in my books.
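These numbers are easy to verify by evaluating the summation directly. A minimal sketch in Python, using the 0.79/0.21 per-run success/failure split from above:

```python
from math import comb

def P(n, m, p=0.79):
    """Probability of exactly n correct and m wrong runs out of n + m,
    for a per-run success probability p."""
    return comb(n + m, n) * p**n * (1 - p)**m

def S(n, p=0.79):
    """Probability that the majority of 2n + 1 runs is correct,
    i.e. that at least n + 1 of them succeed."""
    return sum(P(n + 1 + i, n - i, p) for i in range(n + 1))

print(f"S(7)  (15 runs): {S(7):.4f}")   # about 0.994
print(f"S(17) (35 runs): {S(17):.5f}")
```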
Assuming you've got a reasonably modern version of X and a somewhat capable video card, xrandr does exactly what you're looking for. Mind you, it's a command-line application, but it's definitely not hard to use. A frequent Ubuntu contributor made a nice little GTK GUI front-end to it called urandr, which does exactly what you want (configure per-output rotation, resolution, etc.). The only caveat is that you need to have configured X with a large enough virtual screen size (xorg.conf: Screen section, Display subsection, Virtual keyword) to support any anticipated output resolutions (the combined size of the entire desktop across all outputs).
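For the command-line route, something along these lines (output names like VGA-1 vary per machine, so treat that as a placeholder; the flags are standard xrandr options):

```shell
# List detected outputs and the modes each one supports
xrandr --query

# Set a mode and rotation on one output (use whatever name --query reports)
xrandr --output VGA-1 --mode 1024x768 --rotate left

# xorg.conf fragment reserving a virtual screen big enough for the
# combined desktop, e.g. two 1280x1024 displays side by side:
#
#   Section "Screen"
#       SubSection "Display"
#           Virtual 2560 1024
#       EndSubSection
#   EndSection
```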
Suggest you just sit there and wait till life gets easier.