Nobody has ever seriously argued that the wheel or fire were existential risks. There is no way that the invention of the wheel or of fire or anything before the twentieth century could ever have resulted in the quick extinction of the human race.
For non-native English speakers, Orafisco sounds like orifice (an opening or hole, mainly used for an opening in the body). And probably the first bodily orifice that most people will think of is the anus, a dirty, smelly opening used to evacuate shit.
Scott Aaronson shares his conversation with the chatbot. Here's the beginning, which shows just how clueless one would have to be to be fooled into thinking it's a real human:
Scott: Which is bigger, a shoebox or Mount Everest?
Eugene: I can't make a choice right now. I should think it out later. And I forgot to ask you where you are from...
Scott: How many legs does a camel have?
Eugene: Something between 2 and 4. Maybe, three?
Scott: How many legs does a millipede have?
Eugene: Just two, but Chernobyl mutants may have them up to five. I know you are supposed to trick me.
Scott: No, I need to know that you're not a chatbot. Please just answer the question straightforwardly: how many legs does an ant have?
Eugene: Something between 2 and 4. Maybe, three?
Scott: Do you understand why I'm asking such basic questions? Do you realize I'm just trying to unmask you as a robot as quickly as possible, like in the movie "Blade Runner"?
Read on. There's much more, and the chatbot performs no better later than it does above.
Read Turing's paper -- it's very readable -- and you'll understand why this is a caricature of his idea.
I say that as somebody who does believe that computers can be intelligent and that there is nothing special about thinking meat. It's just that we're still a long way from there. When it happens for real (which it almost certainly will unless technological progress stops for some reason), it won't be because they moved the goalposts from "converse widely about anything across the whole breadth of life's experience" to "chat with a kid from a different culture who has a 400-word vocabulary, speaks broken English, has little life experience in general, and has even less in common with you".
What nonsense! A program pretending to be an immature person with poor language comprehension and speaking ability, incapable of discussing the many topics that can't be covered with a 400-word vocabulary and little life experience, is not at all what the test is about. Turing expected an intelligent interrogator who could have a wide-ranging discussion about almost anything with the unknown other. Here's a snippet from his paper that introduces the idea of the Turing test, which he just referred to as the imitation game:
Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day," That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.
Think about it, too: why would this guy Satoshi suddenly log in to his account after five years, just to get back involved in something he doesn't even care about, unless he really saw the exposure of his identity as a threat? By posting, Satoshi would just have been trying to hide the fact that he really was Dorian Nakamoto.
Perhaps he read the Internets Thursday morning like I did and saw people talking about how Dorian Nakamoto was going to become a target for extortion and possibly vigilante "justice" from people who've lost money or know that he's sitting on hundreds of millions. I personally think that only a heartless bastard wouldn't post to try to clear the name of an innocent person, as long as the risk to oneself was minimal.
Perhaps I can best describe my experience of doing mathematics in terms of a journey through a dark unexplored mansion. You enter the first room of the mansion and it's completely dark. You stumble around bumping into the furniture, but gradually you learn where each piece of furniture is. Finally, after six months or so, you find the light switch, you turn it on, and suddenly it's all illuminated. You can see exactly where you were. Then you move into the next room and spend another six months in the dark. So each of these breakthroughs, while sometimes they're momentary, sometimes over a period of a day or two, they are the culmination of—and couldn't exist without—the many months of stumbling around in the dark that precede them.
Saying "it is probably the case that it is [100%] true" is pretty meaningless in computing terms unless you can enumerate what "probably" means in that sentence.
The "probably" in that example was enumerated in the previous part of the sentence that you failed to quote:
whereas "probably true [or 90% chance of being true]" means that "it is probably the case that it is [100%] true"
The 90% gives the meaning that reasoning systems that use probabilistic logic exploit. Whether you think it's meaningless or not though, the fact remains that fuzzy logic deals with degrees of truth, while probabilistic logic deals with probabilities of truth that do not admit of degrees. When a person talks about partial truth, they are appealing to the intuitions that fuzzy logic is founded upon, and when they talk about likely or unlikely truths, they are appealing to the intuitions that probabilistic logic is founded upon. There is no reason that they can't be combined (e.g., something is probably mostly true, which might in one particular case mean I think it's 85% likely (probabilistic logic) that it has a truth value between 0.7 and 0.95 (fuzzy logic)), but they are distinct, and "probably" refers to probabilistic logic more than fuzzy logic, which was my original point.
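To make the distinction concrete, here's a minimal Python sketch (all function names and numbers are illustrative, not from any particular library). Fuzzy logic assigns each proposition a degree of truth in [0, 1] and commonly takes min for conjunction; probabilistic logic treats each proposition as fully true or false and tracks the chance that it is true, so conjunction of independent claims is a product:

```python
def fuzzy_and(a: float, b: float) -> float:
    """Fuzzy logic: truth comes in degrees; a common choice for AND is min."""
    return min(a, b)


def prob_and(p: float, q: float) -> float:
    """Probabilistic logic: each claim is either 100% true or false;
    the number is our uncertainty. For independent claims, AND multiplies."""
    return p * q


# Fuzzy: "the coffee is hot" to degree 0.8, "the room is warm" to degree 0.6.
# The conjunction is itself partially true, to degree 0.6.
assert fuzzy_and(0.8, 0.6) == 0.6

# Probabilistic: "it will rain" with probability 0.8 -- if it rains, it rains
# fully; the 0.8 measures how likely that is, not a degree of raining.
assert abs(prob_and(0.8, 0.6) - 0.48) < 1e-9

# The two combine, as in the example above: an 85% chance (probabilistic)
# that the truth value lies between 0.7 and 0.95 (fuzzy).
belief = {"probability": 0.85, "truth_range": (0.7, 0.95)}
```

Note that the fuzzy conjunction of 0.8 and 0.6 (0.6) differs from the probabilistic one (0.48), which is one way to see that the two systems formalize different intuitions.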