The really sad thing here is that it is likely that all of the original Apollo astronauts will be dead before anyone else goes to any non-Earth body.

Followup: according to this thread https://news.ycombinator.com/item?id=11014175 the number in question fails to be a pseudoprime even for small bases, which means that even the simplest checks were not done. That thread also names the individual responsible for supplying the "prime"; I'm not sure why he's not being grilled pretty heavily right now.

This cannot happen accidentally. We have, for example, versions of the Miller-Rabin test https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test which test primality quickly and deterministically if you believe the generalized Riemann Hypothesis, and other versions which unconditionally give such a high probability that one is more likely to have had a cosmic ray wreck one's computing results than for the test to be erroneous. You can use, for example, this Javascript implementation http://www.javascripter.net/math/primes/millerrabinprimalitytest.htm. There's no obvious way one would come up with a composite number unless one was deliberately trying. Hopefully there's enough of a record to note when this fake prime was put in.
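The randomized version of the test fits in a few lines. A minimal sketch in Python (the function name and the choice of 40 rounds are mine, for illustration; Python's built-in three-argument pow does the modular exponentiation):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin: a composite survives one round with probability < 1/4,
    so 40 random bases push the error probability below 4**-40."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Even a single round with a random base catches almost all composites, which is why a number failing "even for small bases" is so damning.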

That sounds like you estimate a very high chance that Clinton will either not be the Democratic nominee or that, if she is the nominee, she will lose the general election. Can you estimate your probabilities for those events?

The Fermi Paradox is really pseudoscience. There are many big problems with it.

I'm not sure what you mean by this and your following statements. Your primary point seems to be that intelligent life might be less likely than we think. But that's not a criticism of the Fermi paradox but a possible resolution of it. Your point about signal detection would certainly be correct if that were our only way of detecting civilizations. In particular, we see no ring worlds, or Dyson spheres or other large scale constructions despite systematic searches http://home.fnal.gov/~carrigan/infrared_astronomy/Fermilab_search.htm. We don't in general see any signs of large scale energy use. We've looked at around 100,000 galaxies and found zero full-scale galactic civilizations. See http://phys.org/news/2015-04-advanced-civilizations-earth-obvious-galaxies.html. The universe looks natural. And yes, there are many possible explanations for this, but we need to ask how likely they are. If there is a Great Filter https://en.wikipedia.org/wiki/Great_Filter then it doesn't go away simply because we've found a semiplausible alternate explanation.

However, science is generally logical enough to recognize that such uses of technology are counterproductive.

How long did it take to recognize that lead in gasoline was a bad idea? More seriously for existential risks like the sort under discussion, it doesn't take science collectively as a whole to do something stupid, just a handful of people might engineer a bad virus. Or a pollution problem could arise that is a collective action problem like climate change. The Fermi Paradox is a real problem https://en.wikipedia.org/wiki/Fermi_paradox and one of the easiest explanations for it is that civilizations wipe themselves out with technology.

Sure, I'm a mathematician and wouldn't accept it. This is just a plausibility argument. In fact, from a standard mathematical perspective this is a trivially decidable issue in Peano Arithmetic.

Yes, certainly current knowledge levels are very far from actually proving hard limits on this sort of thing. It is possible that very advanced circuit lower bound techniques could prove something like this in a reasonable model, but we are nowhere near that. And even then, you'll always have doubt about how accurate the model is.

Under the presumption that you are the same AC as earlier, I will note that you haven't responded to anything I said, but have made your position more hardline. You are arguing now not just that we might one day find this out but that it is "certainly... not unknowable." I'd comment more but I doubt it would be productive, and I just broke my hand so typing is tough.

Yes, you can partially factor 2^n+1 if you write n = 2^a(2k+1), since 2^(2^a)+1 then divides 2^n+1. But that only gets you two factors, one about the size of 2^(2^a) and the other being the rest. That helps only a tiny bit, and is part of why I chose an example where n has a very large power of 2 as a factor.
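The algebraic identity behind this is easy to check for small cases. A quick sanity-check script (my own illustration, not from the thread):

```python
# If n = 2**a * (2k+1), set x = 2**(2**a), so 2**n + 1 = x**(2k+1) + 1.
# Since x + 1 divides x**m + 1 for any odd m, 2**(2**a) + 1 divides 2**n + 1.
for a in range(1, 5):
    for odd in (3, 5, 7, 9):
        n = 2**a * odd
        assert (2**n + 1) % (2**(2**a) + 1) == 0
print("divisibility verified for all small cases")
```

For example, with a = 2 and 2k+1 = 5 this recovers the familiar factor 17 of 2^20+1. When the power of 2 in n is enormous, the algebraic factor 2^(2^a)+1 is itself enormous, so this shortcut buys essentially nothing.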

A small continuous population doesn't give them the right to a self-governing state in that territory any more than it does small minorities in other states, land claims from well over 1000 years ago notwithstanding.

I'm essentially in agreement here. We cannot in general let ancient land claims determine the legitimacy of nations and national borders. That sort of thing leads to complete chaos and many deaths.

Of course the Palestinian Arabs disagreed. A different ethnic group declares they're going to start colonizing your territory to create their own state, they then proceed to do so while you're under foreign occupation by states who generally side with the other ethnic group. The foreign occupiers then propose to give the other group a state with most of the land even though you still have a bigger population. Is this a proposal you're going to agree to?

This is an oversimplification, although I think it does do a pretty good job of summarizing the Palestinian Arab perspective on things. One of the difficulties here is that every group has a narrative that emphasizes the aspects and details that make them sound like the aggrieved good guys. Note that while there was no native government to control immigration, the Jews going to Mandate Palestine were buying and settling land completely legally. It is also worth noting that the Partition Plan, like all good compromises, left essentially no group happy. And while the partition plan did give the majority of the land of "Palestine" to the Jewish state, much of that land had been legitimately bought by Jews, and parts of the partitioned land would likely then have joined up as parts of Jordan and Egypt, so a simple percentage accounting doesn't quite summarize things. As always, the history is complicated. Unfortunately, both pro-Israel and pro-Palestinian groups often focus on the details that make their own story sound good rather than recognizing that everyone has a legitimate claim, and moreover, that once anyone has been somewhere for multiple generations, getting a satisfactory result that doesn't end with everyone killing each other may be more important than trying to decide who is cosmically right, as satisfying as that may be.

\begin{nitpick}

Factoring primes is not known to be "hard"; that is, there is no such proof.

Actually, factoring primes is really easy: for any prime p, it factors as just p. What you mean is factoring a generic composite into primes. \end{nitpick}

Even if there is no efficient algorithm for factoring, it may be that we can use inventions like quantum computers to solve it.

See my reply to the other person who brought up quantum computers. Quantum computers, if implemented, can factor large numbers very efficiently. Moreover, we can't even prove at this point that factoring is classically hard (as you correctly noted). This is why I used a tower of exponentials in my example.

Not really. It is possible that there are physical discoveries that we're not expecting that will allow us to do extreme computations, but they aren't that likely to do that much.

Let's use your example of quantum computers. We have strong theorems about what a quantum computer can do compared to a classical computer. In particular, BQP, the class of problems that a quantum computer can solve in polynomial time https://en.wikipedia.org/wiki/BQP, is contained in PSPACE https://en.wikipedia.org/wiki/PSPACE, the class of problems that a classical computer can solve in polynomial space (where polynomial in both cases means polynomial in the length of the input). This means that a quantum computer *cannot* massively extend what one can do much beyond speeding up some calculations, and other theorems show that this is a general pattern. Holevo's theorem and a few other similar theorems say, more or less, that you cannot use n qubits to reliably carry more than n classical bits https://en.wikipedia.org/wiki/Holevo's_theorem. And in fact, it is strongly conjectured that BQP is *much smaller* than PSPACE.

Now, you might say that you just meant quantum computing as an example. But people have actually thought about what possible computing analogs would make sense that would be even more powerful than quantum computers. For example, Scott Aaronson has looked at models involving access to a hidden variable http://www.scottaaronson.com/papers/qchvpra.pdf, and it turns out that while they are naturally more powerful than quantum computers, there are again pretty strong limits on what they can do.

Moreover, we have pretty good ideas at this point of upper bounds on what can physically be computed and stored in a region. One example of this is the holographic principle https://en.wikipedia.org/wiki/Holographic_principle, which puts pretty severe limits on how much information can be stored or represented. And even if the holographic principle is *wrong* (not implausible), and even if it isn't just wrong in the obvious way (where the amount of information grows in direct proportion to the volume) but instead grows like, say, the 20th power of the volume with a constant out front that in the relevant units is a hundred times as large as that in the holographic bound, one would *still* have nowhere near enough bits to plausibly do this sort of thing.
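For a sense of scale, a back-of-the-envelope version of the holographic bound for the observable universe takes a few lines. The input values below are rough round numbers I'm assuming for illustration, not precise cosmological data:

```python
import math

# Rough estimate of the holographic bound on the observable universe:
# entropy <= horizon area / (4 * Planck length^2), converted to bits.
PLANCK_LENGTH = 1.616e-35   # meters (approximate)
HORIZON_RADIUS = 4.4e26     # comoving radius of observable universe, meters (approximate)

area = 4 * math.pi * HORIZON_RADIUS**2
bits = area / (4 * PLANCK_LENGTH**2 * math.log(2))

print(f"holographic bound: about 10^{math.log10(bits):.0f} bits")
```

The answer comes out around 10^123 or so bits. Even multiplying that by a huge fudge factor, it is nowhere near the tower of exponentials in the question.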

Frankly, when I give the sort of problem I mentioned earlier, instead of using a small stack of exponentials I normally use the Ackermann function https://en.wikipedia.org/wiki/Ackermann_function and say something like A(100)+1, which is insanely bigger than the number I used. So even if you don't buy the arguments above, just use a number like that, which is easy to specify mathematically and mindbogglingly larger.
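The definition is only a few lines; it is evaluating it that explodes. A sketch of the standard two-argument version (A(100) above presumably means the one-argument diagonal A(n) = A(n, n)):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion gets deep fast

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Grows faster than any primitive recursive function;
    A(4, 2) already has 19,729 decimal digits."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print([ackermann(m, 3) for m in range(4)])  # [4, 5, 9, 61]
```

Even with memoization, anything past roughly A(4, 1) is out of reach, which is exactly the point of using it in the example.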

We already know there are questions we can't answer. In fact, it isn't that hard to write down questions where barring extreme surprises, we can't answer them even given that they are essentially just simple computations. For example, does 2^(10^(10^500)) +1 have an even or odd number of distinct prime factors? That took two seconds to write down, but unless there's something very weird about numbers close to powers of 2 then we literally lack the computational power in the observable universe to answer that question. So we already have pretty hard physical limits on what we can know. This is just a question of whether there are also hard physical limits to questions that some people happen to care a lot about.
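The small-exponent analogue of the question is completely routine to compute; only the size of the exponent makes it physically unanswerable. A sketch (the helper name is mine):

```python
def distinct_prime_factors(n):
    """Trial division: fine for small n, hopeless for 2^(10^(10^500)) + 1."""
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

for k in (10, 16):
    fs = distinct_prime_factors(2**k + 1)
    parity = "even" if len(fs) % 2 == 0 else "odd"
    print(f"2^{k}+1 has {len(fs)} distinct prime factors ({parity})")
```

For instance, 2^10+1 = 1025 = 5^2 x 41 has an even number of distinct prime factors, while 2^16+1 = 65537 is prime and so has an odd number. For the number in the question, even writing down the input exceeds the storage of the observable universe.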

Would you actually not produce things if people would eventually, in some sane amount of time, get access to do what they want with your works? More to the point, how do you think things would be if Shakespeare were still copyrighted, or Jane Austen, or Shelley's Frankenstein? The point of copyright is to grant a period long enough to encourage people to make their own works. If you really aren't comfortable with people maybe playing with your ideas 70 years after you've died, then that suggests you've absorbed too many of the ideas behind very long copyright terms, and I daresay you are the exception rather than the rule even so.

"What a wonder is USENET; such wholesale production of conjecture from such a trifling investment in fact." -- Carl S. Gutekunst