I'm not all that sure of the "hard to trace" part.
A point, but now we are into Bayesian reasoning rather than the normal philosophy of science.
P.S.: The assertion is even more limited than you suggest. Many sciences are observational rather than experimental in nature, and the observations available to be made may not either validate or refute some particular theory. (E.g.: Was there ever a three-toed dinosaur? Perhaps there is no evidence that there was, but that's not proof, as the record is quite incomplete. That would be a theory that could be plausibly confirmed, but not falsified. But it's also not one that would be strongly believed in the absence of evidence.)
What test is used for the presence of a soul? Please note that self reporting is not a valid test, as it's easy to program such an assertion in a loop.
Also, on what basis do you assert that my dog doesn't have a soul?
You misunderstand. It's impossible to verify a scientific theory, though one should be able to replicate the results. But the possibility of falsification is what makes a theory scientific.
WRT verification, all you can say is "It fits the available evidence, and of that evidence xxxx was not known at the time the theory was constructed." You can NEVER prove it true.
Perhaps you need to refer to the original article. And I suspect that you have misunderstood the antecedent used by the GP.
Also to those who actually had to read the stupid and obscene book. I admit to enjoying the part about Lot and his daughters, but I didn't understand it until much later. (Also "Lot's wife's name was Ester. Because she was an organic salt."...That's from when I had to reread it in high school.)
I will admit that there are selected verses that support a decent morality. But most of them are the morality of a street gang. Even in the "New Testament" in the gospels you find considerable immorality, though I'll admit it's much closer to moral. But what did that poor fig tree ever do to Jesus, and why should he expect figs in April? Or, possibly, March. And while it's reasonable to say that business shouldn't be conducted in the temple, perhaps you haven't considered the function of money changers. Their purpose was to allow you to make an offering, even if your coins were issued in another country, or by another government. (And probably to do the equivalent of breaking a $20.) So was it moral to mob them? Not hardly.
Because the marketing team does visuals?
The characterization of humans as animals is, indeed, arbitrary, just as the heliocentric solar system is arbitrary. Ptolemaic models CAN handle the same information.
I would be interested in the definition that you use for animal that includes all other animals, and does not include humans.
I'll admit that I've no more than looked at GTK's objectification of C. I shuddered, and went elsewhere, but I can't really say that I analyzed it.
I would never say that higher level languages are faster. Most of the features I consider most valuable slow down execution at run time. (Array bounds checking, garbage collection, etc.) But the penalty doesn't need to be large, and when the concept you are working with is, say, a hash table, it works a lot better if it's built into the language rather than added on as a library. (And yes, the language implementation will probably be in C, or something like it [which is mainly assembler].)
OTOH, it's also true that higher level languages often make bad choices as to how to represent their abstract features. C++ templates come to mind. Or Java generics. Or Ada string literals. (Note that the Ada string literal problem is likely to be BECAUSE the language has only an optional [and rarely implemented] garbage collector. So strings are by default of a fixed length. And can only work with other strings of the same length. This is fixed with bounded strings [and with unbounded], but that's not the default.)
But do note that these arguments are only for general purposes. For specific purposes different languages are superior. There are even places where assembler is superior to C (timing loops, e.g.) but those tend to be CPU dependent.
That's a real problem when different areas have different laws. It means that you are responsible for knowing all possible laws that might affect you in every country of the world, and that's actually impossible. Because you don't know what a law means until a court decides what it means... and the next court may decide something different.
IIRC, in Germany anyone can bring suit to enforce a copyright, not just the owner. In fact, I seem to recall that they can even do it when the owner of the copyright declines to enforce it. And that they can claim a share of the winnings for enforcing it. And that there are some companies of lawyers that do almost nothing else.
It was a few years ago, so the details are hazy, but I read about it on Slashdot, and I seem to recall that they were enforcing one of SuSE's patents against the will of the company.
There ARE competent people doing C. Then there are the others. For some applications it's the right choice for a language to write in. But often it's "If all you have is a hammer, everything looks like a nail". C is often used in inappropriate contexts. It's an excellent portable assembler, and it's usually appropriate where assembly code would otherwise be appropriate. But it's a poor choice for a complex program, and it encourages bad habits. There are other valid criticisms, but almost all of them are directed at the poor fit between the design of C and the thought processes of humans. It's an excellent language to compile a "higher" language into. ("Higher" here means that it deals with concepts that are more difficult to map into assembler.)
I agree with your points about code written in C by programmers, however...
You can't do much better than C by going to assembler, so C is a good target language. But much code written in many languages could be improved by an automated translation into C with concurrent optimization. (The optimization needs to be done before it hits C, because you lose too much information in the process of translation.) The automatic translation avoids the difficulties of bounds checking, etc. If done properly it would (optionally) implement bounds checking wherever it couldn't prove that it could be omitted, etc.
Also, native C garbage collectors are inherently inefficient, because the C language doesn't reliably separate pointers from integers (etc.). But the translator, knowing the original language, could do this much more efficiently in the process of compiling from the original language to C.
So there's no reason not to standardize on C at the base level. And, of course, C compilers can optimize the C code.
FWIW, I dislike programming in C for multiple reasons. One of them is how it handles Unicode. Another is the difficulty of implementing "class instance variables". (Class variables are easy, though. You just have one "class" per file, and static variables are the equivalent of class variables.) I also prefer to have a good garbage collector. I dislike using pointers to reference structures. (In C++ I prefer to pass references as parameters rather than pointers to structs.) Etc. Of the languages that I'm familiar with, D is my preferred language, but it's missing a lot of library support, so I often use Python. Vala would be an excellent choice, if it could ever get its documentation even to a beta level. (Do note that valac, the Vala compiler, has an option to allow you to generate C code.)
Not clear. The current computers aren't being used at nearly their optimum.
E.g., imagine an application that would take a program written for a virtual machine, compile it to native code, and then optimize that code. If that seems unreasonable, you should note that LISP was originally an interpreter, and it was originally believed that it couldn't be compiled. Now almost all LISP implementations are compilers. (I have no idea how good they are at optimizing.)
Please note: For the purpose of this post I'm presuming that CPU/GPU development is frozen at its current level, and that only software continues to develop. That is obviously false. But my estimate is that computers are used in a way that averages less than 50% of optimal. (This is, again obviously, a Wild Ass Guess. But I don't think you can do any better right now. And my secondary guess is that it's more likely to be too high than too low.)
Not really. This same kind of article pops up whenever a change in technologies starts to become appropriate. I.e., what we have installed now can't make the new stuff, and it's too expensive to build a new factory.
It's not that there isn't some validity to its points, either. It's just that it's a short term perspective. The curve is bumpy, has been bumpy, and will continue to be bumpy. Sometimes it changes faster than Moore's law predicts, sometimes slower. The average is about right (though if I recall correctly, it's been adjusted in the past to speed it up).
It's also true that EVENTUALLY Moore's law (and its associates) must end. When is an unanswerable question, though one can be fairly sure it will be before atomic level gates. (The noise level would be too high. You need to bring it down by dealing with multiple atoms. N.B.: This noise level has been a problem since at least the days of vacuum tubes. I'm not sure it was a major problem with gear driven calculators.)