If you are a large organization, you can afford more.
Yes, but the point is the way it scales. If you are tiny, you can reasonably assume that almost no occasions will arise when you need to compute multiple hashes in a short amount of time. If you are larger, you end up with a lot of extra RAM that you won't use regularly but will need during peak log-in times. I agree that you can probably afford more, but getting corporations to do so is difficult; at the end of the day, everyone cares about their bottom line.
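To make the scaling concern concrete: for a memory-hard hash like scrypt, the working set is roughly 128 * r * N bytes *per in-flight hash*, so peak RAM is driven by concurrent log-ins, not total users. A minimal sketch using Python's `hashlib.scrypt` (the N/r/p values here are illustrative, not a recommendation):

```python
import hashlib

# Illustrative parameters; scrypt's memory use is roughly 128 * r * N bytes
# per hash that is being computed at the same time.
N, R, P = 2**14, 8, 1
per_hash_bytes = 128 * R * N  # 16 MiB per concurrent login attempt

def hash_password(password: bytes, salt: bytes) -> bytes:
    # maxmem is raised above the ~16 MiB working set so OpenSSL doesn't reject it.
    return hashlib.scrypt(password, salt=salt, n=N, r=R, p=P,
                          maxmem=64 * 1024 * 1024, dklen=32)

# Peak RAM scales with *concurrent* hashes: a tiny site rarely has more than
# one in flight, a large one must provision for the worst case.
for concurrent_logins in (1, 10, 100):
    print(concurrent_logins, "concurrent:",
          per_hash_bytes * concurrent_logins // 2**20, "MiB")
```

With these parameters, 100 simultaneous log-in attempts already need on the order of 1.6 GiB just for hashing, which is exactly the "RAM you only need at peak" problem above.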
RSA is old, broken crypto which should be migrated away from.
This suggests that you have some very opinionated and somewhat unique views.
I hate to resort to appeal to authority, but the actual analysis required to prove it is way more effort than I have time for this morning. Take a look at keylength.com, it has a host of authoritative references.
I'm familiar with many of the references there, so if there are specific ones you'd like to point to (given the large number there) it might be helpful. But I will note that what they say there agrees to a large extent with what I wrote earlier, in that they explicitly say that they are trying to provide key sizes for a desired level of protection.
It's a valid counterexample because RSA key generation, and, to a much lesser extent, RSA private key operations, are computationally expensive enough to stress low-end devices (an issue I often have to deal with... I'm responsible for some of the core crypto subsystems in Android). But it's a weak counterexample because RSA is not modern crypto. It's ancient, outmoded, we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls, and it's ridiculously expensive computationally. And even still, the common standard of 2048-bit keys is secure for quite some time to come. As that stackoverflow article you linked mentions, the tendency has been to choose much larger-than-required keys (not barely large enough) rather than tracking Moore's law.
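The keygen-cost point is easy to feel even in a toy implementation: generating an RSA key means trial-sampling candidates until two pass a primality test, and each Miller-Rabin round is itself a full modular exponentiation. A sketch in pure Python (toy key size, no padding, illustrative only, not production crypto):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin; each round costs a modular exponentiation."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    # Expected number of candidates grows with the bit length (prime density
    # ~ 1/ln(2^bits)), and each candidate costs many modular exponentiations:
    # this loop is why RSA keygen stresses low-end devices.
    while True:
        c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(c):
            return c

def rsa_keygen(bits=512):  # toy size; real deployments use 2048+
    e = 65537
    while True:
        p, q = random_prime(bits // 2), random_prime(bits // 2)
        phi = (p - 1) * (q - 1)
        if p != q and phi % e != 0:
            return p * q, e, pow(e, -1, phi)  # n, public exp, private exp

n, e, d = rsa_keygen()
assert pow(pow(42, e, n), d, n) == 42  # textbook encrypt/decrypt round trip
```

Doubling the key size roughly quadruples the cost of each modular exponentiation and halves the density of primes, which is why moving from 2048- to 4096-bit keys is so much more painful than it sounds.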
As discussed in the same stackexchange link, the key choice is due to infrastructural reasons (and in fact I specifically mentioned that in the part of my above comment you apparently decided not to quote). What actually happens is that we use keys that are larger than required and then use them for a *long time* before jumping to larger key sizes when we really need to. Again, the failure to perfectly track Moore's law (or even improvements in algorithms) is infrastructural, and similar issues will apply to many other cryptosystems.
Frankly, I'm concerned that you claim to be someone who has done serious crypto work when you say that "we have some reasons to suspect that factoring may not be NP hard, using it correctly is fraught with pitfalls", because this indicates some serious misconceptions. First, it isn't a mere suspicion that factoring may not be NP-hard; we're very certain of this. If factoring were NP-hard then a whole host of current conjectures that are only slightly stronger than P != NP would have to be false. Since factoring is in NP intersect co-NP, if factoring were NP-hard we'd have NP = co-NP, and the polynomial hierarchy would collapse. Moreover, since factoring is in BQP by Shor's algorithm, we'd also have NP contained in BQP, which we're pretty confident doesn't happen.
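Spelled out, the two implication chains are (writing FACTORING for the decision version of factoring):

```latex
\mathrm{FACTORING} \in \mathsf{NP} \cap \mathsf{coNP}
  \;\wedge\; \mathrm{FACTORING}\text{ is }\mathsf{NP}\text{-hard}
  \;\Longrightarrow\; \mathsf{NP} = \mathsf{coNP}
  \;\Longrightarrow\; \mathsf{PH}\text{ collapses}
```

```latex
\mathrm{FACTORING} \in \mathsf{BQP}
  \;\wedge\; \mathrm{FACTORING}\text{ is }\mathsf{NP}\text{-hard}
  \;\Longrightarrow\; \mathsf{NP} \subseteq \mathsf{BQP}
```

Both conclusions contradict conjectures that are only marginally stronger than P != NP, which is why complexity theorists consider it essentially settled that factoring is not NP-hard.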
But there's a more serious failure here, which is that pretty much no major cryptographic system today relies on an NP-hard problem, and reliance on one is not by itself a guarantee of security. For example, the Merkle–Hellman knapsack cryptosystem was based on a problem known to be NP-hard, and it was broken. Similarly, NTRU has a closely related NP-hard problem, but it isn't actually known to be equivalent to it.
There's also another serious failure here; being reliant on an NP-hard problem isn't nearly as important as being reliant on a problem that is hard *for a random instance*. It isn't at all hard to make an NP-complete problem where the vast majority of instances are trivial. In fact, most standard NP-complete problems are easy for random instances under most reasonable distributions. 3-SAT is a good example of this; while there are distributions which seem to give many hard instances with high probability, naive or simple distributions don't do that.
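A small sketch of that last point (toy sizes and a brute-force check, purely for illustration): at low clause density, well below the ~4.27 clauses-per-variable threshold, a naively sampled random 3-SAT instance is almost surely satisfiable and trivially so.

```python
import random
from itertools import product

def satisfies(assignment, clauses):
    # assignment maps variable index -> bool; literal v means "var |v| is (v > 0)"
    return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)

def brute_force_sat(clauses, n_vars):
    """Try every assignment; fine at toy sizes, just to classify instances."""
    for bits in product([False, True], repeat=n_vars):
        a = dict(enumerate(bits, start=1))
        if satisfies(a, clauses):
            return a
    return None

def random_3sat(n_vars, n_clauses, rng):
    """Naive uniform distribution: 3 distinct variables, random signs."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

rng = random.Random(0)
# Density m/n = 1.0, far below the hard regime near the threshold:
inst = random_3sat(12, 12, rng)
print("satisfiable:", brute_force_sat(inst, 12) is not None)
```

Distributions that concentrate near the satisfiability threshold (or are crafted from planted hard structure) behave very differently, which is exactly the gap between "NP-complete in the worst case" and "hard on the instances a cryptosystem would actually sample".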
I do agree that RSA is not ideal in some respects, especially where computational efficiency is concerned. But the idea that RSA is "broken" is simply not accurate. And criticizing it as old misses that age is one of its major selling points: the older an encryption system is, the more eyes have looked at it. In contrast, far fewer people have looked at elliptic curve cryptosystems. Moreover, the one unambiguous way in which RSA is actually broken (in the sense of being vulnerable to quantum attacks) applies just as well to ECC.
I suspect that some of our disagreement may stem from the fact that many of the terms we have been using have not been well-quantified, so the degree of actual disagreement may be smaller than we are estimating.