The objection in question ignores Bostrom's basic argument. Bostrom's primary argument for being in a simulation boils down to the observation that an advanced civilization would very likely have the ability to run highly accurate simulations. Moreover, one of the things such a civilization would obviously be interested in is its own ancestors; if that's the case, then over the very long period such civilizations will exist, one should expect many more "copies" of people on ancient Earth than originals, unless one expects civilization to die out well before reaching that technology level. If the laws of physics are simulated badly enough that we can notice, then it isn't an effective ancestor simulation, so the objection here doesn't make sense.
There are a lot of issues with Bostrom's argument; for example, one might question whether simulations of that level of detail will ever be able to be made on a large scale. But the argument being made here doesn't grapple with the fundamental issues.
Because if I understand quantum theory correctly, it both works, and doesn't. There is no measurement for a half binary state in a binary world of absolute on and off.
I'm not sure what you mean by "it" here, but pretty much every interpretation of this is wrong. In fact, measurements of quantum superpositions do return specific classical states, with probabilities determined by the amplitudes of the superposition.
I think pursuing analogue supercomputers might be a better place to start.
We have specific theorems about what analog classical computers can do. See for example http://www.sciencedirect.com/science/article/pii/0196885888900048 and https://arxiv.org/abs/quant-ph/0502072. In general, analog computers cannot do error correction, and when used for optimization they easily get stuck in local minima.
A more reasonable argument would be "We need more money to continue milking this quantum cow that never produces anything."
Quantum computing is still in its infancy and is best thought of as still in the basic research category. But even given that, there's been massive improvement in the last few years, both in terms of physical implementations (how many entangled qubits one can process) and in terms of understanding the broader theory. One major aspect where both the experimental and theoretical ends have seen major improvement is quantum error correction https://en.wikipedia.org/wiki/Quantum_error_correction.
One of the major issues is the need for actual empirical evidence that quantum computers can do things that classical computers cannot within reasonable time constraints. Right now the general consensus is that, if we understand the laws of physics correctly, this should be the case, but there are some very prominent holdouts who are convinced that quantum computing will not scale. Gil Kalai is the most prominent https://gilkalai.wordpress.com/2014/03/18/why-quantum-computers-cannot-work-the-movie/. It is likely that we'll have answered this question before any 50-qubit quantum computer exists. The most likely answer will come from boson sampling systems https://en.wikipedia.org/wiki/Boson_sampling, which in their simplest form give information about the behavior of photons scattered in a simple way. Scott Aaronson and Alex Arkhipov showed that if a classical computer could efficiently duplicate boson sampling with only a small increase in time, then some existing conjectures in classical computational complexity would have to be false. (In particular, the polynomial hierarchy would have to collapse, and we're generally confident that isn't the case.) Boson sampling is much easier to implement than a universal quantum computer, although no one has any practical use for boson sampling at present.
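To give a sense of why boson sampling is believed to be hard to simulate classically: in the Aaronson-Arkhipov setup, the output probabilities are given by permanents of submatrices of the interferometer's unitary, and the best known exact classical algorithms for the permanent, such as Ryser's formula, take exponential time. A minimal sketch (integer matrices for simplicity; the quantum case uses complex entries, but the algorithm is the same):

```python
from itertools import combinations

def permanent(m):
    # Ryser's formula: perm(A) = sum over nonempty column subsets S of
    # (-1)^(n-|S|) * prod_i (sum_{j in S} a_ij).  Runs in O(2^n * n^2),
    # still exponential -- this is the bottleneck for classical simulation.
    n = len(m)
    total = 0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1
            for row in m:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** (n - k) * prod
    return total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

Unlike the determinant, no known classical algorithm computes the permanent in polynomial time, which is the core of the hardness argument.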
All of that said, the "a few years" in the article is critical- it isn't plausible that a 50-qubit universal system will be sold in 5 years, but 10 or 20 years is plausible. It also isn't completely clear how practically useful a 50-qubit system would be. At a few hundred qubits one is clearly in the realm of having direct practical applications, but 50 is in a fuzzy range.
Here's what it means: One major aspect of modern cryptography is the "hash function"- a hash function is a function with the property that, in general, two inputs with very small differences give radically different outputs. Ideally, a hash function also makes it hard to find "collisions", which are two inputs that have the same output. Hash schemes are used for a variety of purposes, including determining whether a file is what it claims to be (by checking that the file has the correct hash value).
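As a quick illustration of the "small difference in, radically different output" property, here's a sketch using Python's hashlib with SHA-256 (SHA-256 rather than SHA-1, since the latter is the one being broken): changing a single byte of the input scrambles the whole digest.

```python
import hashlib

h1 = hashlib.sha256(b"the quick brown fox").hexdigest()
h2 = hashlib.sha256(b"the quick brown fix").hexdigest()  # one byte changed

# Count how many of the 64 hex digits differ between the two digests.
# For a good hash this is large (on average ~60 of 64), not just a few.
differing = sum(a != b for a, b in zip(h1, h2))
print(h1)
print(h2)
print(differing, "of", len(h1), "hex digits differ")
```

This is also exactly the file-integrity use case: recompute the hash of the downloaded file and compare it against the published value.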
Every few years, an existing hash system gets broken and needs to be replaced. MD5 is an example of this; it was very popular and then got replaced.
One of the major currently used hash schemes is SHA-1. However, a few days ago, a group from Google described an attack that allowed them to easily find collisions in SHA-1 (easy here is comparative- the amount of computational resources needed was still pretty high). The group released evidence that they could do so but didn't describe how in detail. They gave an example of two files with a SHA-1 collision, and they also described some of the theory behind their attack. What TFS is talking about is that, based on this, others have since managed to duplicate the attack, and some have made even more efficient variants of it; so effectively this attack is now in the wild.
If you are a large organization, you can afford more.
Yes, but the point is the way it scales: if you are tiny, you can reasonably assume that there will be almost no occasions when you need to do multiple hashes in a small amount of time. If you are larger, then you end up with a lot of extra RAM that you won't use regularly but will need during peak log-in times. I agree that you can probably afford more, but getting corporations to do so is difficult; at the end of the day, everyone cares about their bottom line.
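To put rough numbers on the scaling (the per-hash memory figure and log-in counts here are hypothetical, just to show the shape of the cost): the RAM bill is driven by the *peak* number of hashes running at the same instant, which is where a large site pays disproportionately.

```python
def peak_ram_gib(concurrent_logins, mem_per_hash_mib=256):
    # RAM needed if this many memory-hard hashes run simultaneously,
    # each pinned to mem_per_hash_mib MiB (hypothetical parameter)
    return concurrent_logins * mem_per_hash_mib / 1024

print(peak_ram_gib(2))    # tiny site, 2 concurrent log-ins:   0.5 GiB
print(peak_ram_gib(500))  # large site at peak, 500 at once: 125.0 GiB
```

The small server's worst case fits in spare RAM; the large one has to provision hundreds of GiB that sit idle outside peak hours.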
RSA is old, broken crypto which should be migrated away from.
This suggests that you have some very opinionated and somewhat unique views.
I hate to resort to appeal to authority, but the actual analysis required to prove it is way more effort than I have time for this morning. Take a look at keylength.com, it has a host of authoritative references.
I'm familiar with many of the references there, so if there are specific ones you'd like to point to (given the large number there) it might be helpful. But I will note that what they say there agrees to a large extent with what I wrote earlier, in that they explicitly say that they are trying to provide key sizes for a desired level of protection.
It's a valid counterexample because RSA key generation, and, to a much lesser extent, RSA private key operations, are computationally expensive enough to stress low-end devices (an issue I often have to deal with... I'm responsible for some of the core crypto subsystems in Android). But it's a weak counterexample because RSA is not modern crypto. It's ancient, outmoded, we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls, and it's ridiculously expensive computationally. And even still, the common standard of 2048-bit keys is secure for quite some time to come. As that stackoverflow article you linked mentions, the tendency has been to choose much larger-than-required keys (not barely large enough) rather than tracking Moore's law.
As discussed in the same stackexchange link, the key choice is due to infrastructural reasons (and in fact I specifically mentioned that in the part of my above comment you apparently decided not to quote). What actually happens is that we use keys that are larger than required and then use them for a *long time* before jumping to larger key sizes when we really need to. Again, the failure to perfectly track Moore's law (or even improvements in algorithms) is infrastructural, and similar issues apply to many other cryptosystems.
Frankly, I'm concerned that you claim to be someone who has done serious crypto work when you say that "we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls", because this indicates some serious misconceptions. First, it isn't a suspicion that factoring is not NP-hard; we're very certain of this. If factoring were NP-hard, then a whole host of current conjectures only slightly stronger than P != NP would have to be false. Since factoring is in NP intersect co-NP, if factoring were NP-hard we'd have NP = co-NP, and the polynomial hierarchy would collapse. Moreover, since factoring is in BQP by Shor's algorithm, we'd also have NP contained in BQP, which we're pretty confident doesn't happen.
But there's a more serious failure here, which is that pretty much no major cryptographic system today relies on an NP-hard problem, and reliance on one is not by itself a guarantee of security. For example, Merkle–Hellman knapsack was based on a problem known to be NP-hard, and it was broken. Similarly, NTRU has a closely related NP-hard problem but isn't actually known to be equivalent to it.
There's also another serious failure here: being reliant on an NP-hard problem isn't nearly as important as being reliant on a problem that is hard *for a random instance*. It isn't at all hard to make an NP-complete problem where the vast majority of instances are trivial. In fact, most standard NP-complete problems are easy for random instances under most reasonable distributions. 3-SAT is a good example: while there are distributions which seem to give many hard instances with high probability, naive or simple distributions don't do that.
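To illustrate the point about random instances, here's a sketch (with made-up parameters): 3-SAT instances sampled from a naive distribution at low clause density are almost always satisfied by pure guessing, so worst-case NP-completeness tells you nothing about them.

```python
import random

def random_3sat(n_vars, n_clauses, rng):
    # naive distribution: 3 distinct variables per clause, each negated 50/50;
    # positive literal v means "variable v is true", -v means "v is false"
    return [[v * rng.choice([1, -1])
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfies(assignment, clauses):
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def solve_by_guessing(clauses, n_vars, rng, tries=5000):
    # no cleverness at all: just try uniformly random assignments
    for _ in range(tries):
        assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        if satisfies(assignment, clauses):
            return assignment
    return None

rng = random.Random(0)
clauses = random_3sat(n_vars=20, n_clauses=30, rng=rng)  # low density
solution = solve_by_guessing(clauses, 20, rng)
```

At this density a uniformly random assignment satisfies all 30 clauses with probability (7/8)^30, roughly 2%, so a few thousand guesses essentially always succeed; hard random instances only show up near the satisfiability threshold (around 4.27 clauses per variable), and even that requires choosing the distribution carefully.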
I do agree that RSA is not ideal in some respects, especially computational efficiency. But the idea that RSA is "broken" is simply not accurate. And criticizing it as old misses that age is one of its major selling points; the older an encryption system is, the more eyes have looked at it. In contrast, far fewer people have looked at elliptic curve cryptographic systems. Moreover, the one unambiguous way in which RSA is actually broken (in the sense of being vulnerable to quantum attacks) applies just as well to ECC.
I suspect that some of our disagreement may stem from the fact that many of the terms we have been using have not been well-quantified, so the degree of actual disagreement may be smaller than we are estimating.
But this is exactly why good password hashing algorithms are moving to RAM consumption as the primary barrier. It's pretty trivial for a server with many GiB of RAM to allocate 256 MiB to hashing a password, for a few milliseconds, but it gets very costly, very fast, for the attacker. And if you can't afford 256 MiB, how about 64?
Using memory-hard hashes works better if one is a small server, since one will rarely have a lot of people sending in their passwords at the same time, so the RAM you need isn't that large. If you are a large organization, this doesn't work as well, because you then need room to do many such calculations essentially simultaneously.
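For a concrete sketch of a memory-hard hash, Python's standard library exposes scrypt. Its memory use is roughly 128 * r * n bytes, so n=2**14, r=8 costs about 16 MiB per call; the parameters below are illustrative, not a tuning recommendation.

```python
import hashlib
import os

def hash_password(password: bytes, salt: bytes) -> bytes:
    # scrypt with n=2**14, r=8, p=1: ~16 MiB of RAM per call, by design --
    # cheap for one interactive log-in, expensive for a brute-force attacker
    return hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)          # per-user random salt, stored with the hash
digest = hash_password(b"correct horse battery staple", salt)
print(digest.hex())
```

Note that the server pays this memory cost once per log-in attempt, which is exactly why the peak-concurrency question above matters for sizing.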
Nope. The leverage factor in the password hashing case is linear, since the entropy of passwords is constant (on average). The leverage factor for cryptographic keys is exponential. The reason we don't use much longer keys for public key encryption, etc., is because there's no point in doing so, not because we can't afford it. The key sizes we use are already invulnerable to any practical attack in the near future. For data that must be secret for a long time, we do use larger key sizes, as a hedge against the unknown.
I agree that there's a linear vs. exponential difference there (although for many of these it is more like linear vs. subexponential, due to algorithms like the number field sieve), but the rest of your comment is essentially wrong. We keep keys just long enough that we consider it highly unlikely they will be vulnerable, but not much longer than that. That's why, for example, we've been steadily increasing the size of keys used in RSA, DH and other systems. Note, by the way, that part of the concern is that many of these algorithms require a fair bit of computation not just on the server side but on the client side, which may be a small device like a tablet or phone. In fact, it would be a lot safer if we increased key sizes more than we do, but there are infrastructural problems with that. See e.g. the discussion at http://crypto.stackexchange.com/questions/19655/what-is-the-history-of-recommended-rsa-key-sizes The only place the linear vs. exponential (or almost exponential) difference comes into play is in how much we need to increase the underlying key size, or how much longer we need to make the next hash, to keep it secure. Keys only need to grow a little, whereas hashes need to grow a lot more. But in both cases we're still not making them any longer than we can plausibly get away with for most applications.
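To make the "subexponential" point concrete, here's a back-of-the-envelope sketch of the heuristic general number field sieve running time L_n[1/3, (64/9)^(1/3)], expressed as bits of work for an RSA modulus of a given size. This drops the o(1) term, so treat the outputs as rough estimates only.

```python
import math

def gnfs_work_bits(modulus_bits):
    # log2 of exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))
    # for an RSA modulus n of the given bit length; o(1) term ignored
    ln_n = modulus_bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for bits in (1024, 2048, 3072):
    print(bits, round(gnfs_work_bits(bits)))
```

The pattern this exhibits: doubling the modulus from 1024 to 2048 bits buys only about 30 extra bits of work, which is exactly why RSA key sizes have to grow much faster than symmetric key sizes to maintain the same security margin.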
Blessed be those who initiate lively discussions with the hopelessly mute, for they shall be known as Dentists.