The objection in question ignores Bostrom's basic argument. Bostrom's primary argument for our being in a simulation boils down to the observation that an advanced civilization would very likely have the ability to run very accurate simulations. Moreover, one of the things such a civilization would obviously be interested in is its own ancestors; if so, then over the very long period such civilizations exist, one would expect many more "copies" of people on ancient Earth than originals, unless one expects civilization to die out well before reaching that technology level. And if the laws of physics were simulated badly enough for us to notice, it wouldn't be an effective ancestor simulation, so the objection here doesn't make sense.
There are a lot of issues with Bostrom's argument; for example, one might question whether simulations at that level of detail can ever be run on a large scale. But the argument being made here doesn't grapple with the fundamental issues.
The Indiana exception isn't about DST but about the fact that some parts of the state are on Central rather than Eastern time. As somebody who lives about 15 minutes from that border, it's pretty aggravating and causes way too many problems for us, but we still forget and assume everyone is on the same time we are.
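For anyone scripting around this, the split is encoded in the IANA tz database; a quick Python sketch using the standard library's zoneinfo (zone names are the standard IANA identifiers, and I picked Tell City just as one Central-time Indiana example):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A winter date, so DST is not in effect in either zone.
when = datetime(2024, 1, 15, 12, 0)

# Most of Indiana, including Indianapolis, is on Eastern time...
eastern = when.replace(tzinfo=ZoneInfo("America/Indiana/Indianapolis"))
# ...but some counties, e.g. around Tell City, are on Central time.
central = when.replace(tzinfo=ZoneInfo("America/Indiana/Tell_City"))

print(eastern.utcoffset())  # UTC-5 (EST)
print(central.utcoffset())  # UTC-6 (CST)
```

Same state, one hour apart, which is exactly the "everyone is on the same time we are" trap.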
Because if I understand quantum theory correctly, it both works, and doesn't. There is no measurement for a half binary state in a binary world of absolute on and off.
I'm not sure what you mean by "it" here, but pretty much every interpretation of this is wrong. In fact, measurements of quantum superpositions do return specific classical states, with probabilities determined by the amplitudes of the superposition.
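To be concrete about what measurement does, here's a toy simulation of Born-rule measurement of an equal superposition (NumPy assumed; the point is that each individual measurement yields a definite classical bit, and only the frequencies reflect the amplitudes):

```python
import numpy as np

# A qubit in the superposition (|0> + |1>)/sqrt(2): amplitudes, not "half on".
amplitudes = np.array([1, 1]) / np.sqrt(2)
probs = np.abs(amplitudes) ** 2  # Born rule: outcome probabilities

rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)

# Every single measurement returns a definite classical state, 0 or 1...
print(sorted(set(outcomes)))
# ...and the frequencies match |amplitude|^2, roughly 50/50 here.
print(outcomes.mean())
```

There is no "half binary state" in the outcomes themselves; the superposition shows up only in the statistics.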
I think pursuing analogue supercomputers might be a better place to start.
We have specific theorems about what analog classical computers can do. See for example http://www.sciencedirect.com/science/article/pii/0196885888900048 and https://arxiv.org/abs/quant-ph/0502072. In general, analog computers cannot do error correction, and when used for optimization they can easily get stuck in local minima.
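The local-minima point is easy to see in a toy gradient-descent sketch (my own example function, not from either paper): a system that only relaxes downhill, the way many analog optimizers do, ends up in whichever basin it starts in.

```python
def grad_descent(f_prime, x0, lr=0.01, steps=2000):
    """Plain gradient descent: a rough stand-in for a physical system
    relaxing downhill, as an analog optimizer does."""
    x = x0
    for _ in range(steps):
        x -= lr * f_prime(x)
    return x

# f(x) = x^4 - 3x^2 + x has a local minimum near x = 1.13
# and the global minimum near x = -1.30.
fp = lambda x: 4 * x**3 - 6 * x + 1

x_from_right = grad_descent(fp, x0=2.0)   # lands in the local minimum
x_from_left = grad_descent(fp, x0=-2.0)   # lands in the global minimum
print(x_from_right, x_from_left)
```

Starting on the right, the descent never discovers the deeper minimum on the left, even though it exists.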
A more reasonable argument would be "We need more money to continue milking this quantum cow that never produces anything."
Quantum computing is still in its infancy and is best thought of as still in the basic research category. But even given that, there's been massive improvement in the last few years, both in terms of physical implementations (how many entangled qubits one can process) and in terms of understanding the broader theory. One major aspect where both the experimental and theoretical ends have seen major improvement is quantum error correction https://en.wikipedia.org/wiki/Quantum_error_correction.
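The basic intuition behind error correction can be seen already in the classical 3-bit repetition code; real quantum codes must also handle phase errors and must correct without directly reading the data qubits, but the majority-vote suppression of the error rate works the same way. A toy simulation (the function and parameters here are my own illustration):

```python
import random

def logical_error_rate(p, trials=100_000, seed=1):
    """Classical 3-bit repetition code: encode a bit as three copies,
    flip each copy independently with probability p, decode by
    majority vote. A logical error occurs when 2 or 3 copies flip."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:  # majority corrupted -> decoding fails
            errors += 1
    return errors / trials

p = 0.1
rate = logical_error_rate(p)
print(rate)  # close to 3p^2 - 2p^3 = 0.028, well below the raw p = 0.1
```

The encoded error rate scales as p² rather than p, which is the payoff that makes the (much harder) quantum versions worth building.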
One of the major issues is the need for actual empirical evidence that quantum computers can do things that classical computers cannot within reasonable time constraints. Right now, the general consensus is that if we understand the laws of physics correctly this should be the case, but there are some very prominent holdouts who are convinced that quantum computing will not scale. Gil Kalai is the most prominent https://gilkalai.wordpress.com/2014/03/18/why-quantum-computers-cannot-work-the-movie/. It is likely that we'll have answered this question before any 50-qubit quantum computer exists. The most likely route is boson sampling systems https://en.wikipedia.org/wiki/Boson_sampling, which in their simplest form give information about the behavior of photons scattered in a simple way. Scott Aaronson and Alex Arkhipov showed that if a classical computer could efficiently duplicate boson sampling, then some widely believed conjectures in classical computational complexity would have to be false. (In particular, the polynomial hierarchy would have to collapse, and we're generally confident that isn't the case.) Boson sampling is much easier to implement than a universal quantum computer, although no one has any practical use for boson sampling at present.
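The hardness Aaronson and Arkhipov exploit comes from the fact that boson sampling output probabilities are proportional to |Perm(A)|² for submatrices A of the interferometer's unitary, and computing matrix permanents is #P-hard. A minimal sketch of Ryser's formula, the classic exact algorithm, which still takes exponential time:

```python
from itertools import combinations

def permanent(M):
    """Permanent via Ryser's inclusion-exclusion formula, roughly
    O(2^n * n^2) as written here - exponential in the matrix size,
    which reflects why exact classical simulation is believed hard."""
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known, and Aaronson and Arkhipov's result says an efficient classical boson sampler would have complexity-theoretic consequences we don't expect.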
All of that said, the "a few years" in the article is critical: it isn't plausible that a 50-qubit universal system will be sold in 5 years, but 10 or 20 years is plausible. It also isn't completely clear how practically useful a 50-qubit system would be. At a few hundred qubits one is clearly in the realm of direct practical applications, but 50 is in a fuzzy range.
So they'll let the shit-storm come, file for bankruptcy, and then sell the technology/patents/trademarks in the liquidation sale to a new company that will repeat the mess.
You have options already if that's what you need.
The only perfect science is hindsight.