Imagine a virus naturally evolving a way to attack and destroy all eukaryotic cells, or even just one causing a human-specific disease. Especially with air travel these days, epidemics can spread very quickly and quarantining is difficult.
There's a very high probability that such a virus would cease to exist in short order. Evolution favors viruses that preserve their hosts at least until they've had ample opportunity to spread. Even artificially developed viruses, not subject to the pressures of natural selection at the time of their creation, must still replicate and spread to cause epidemics, and so will be shaped by selective pressure in the environment.
However, consider the range of natural viruses and the incredible diversity of symptoms that they can cause. What worries me is not that some nutjob will create a virus which merely kills people - that sort of thing is swift, obvious, susceptible to existing protocols for controlling infectious diseases, and probably self-limiting - but that some nutjob will create a virus that alters people in subtle ways, body or mind (Vernor Vinge explores the theme of a mind-control virus in one of his sci-fi novels, Rainbows End (sic)). When a virus infects your cells, it can write whatever code its creator wants into them. However difficult doing any high-level coding with this may initially be, "libraries" will be developed and such things will eventually be as easy as programming a computer is today. In fact, this would be awesome if not for the threat it represents (and if not for the fact that people are going to do some really immature if not outright harmful things with that ability - think a real-life version of the Spore creature library). It would increase biodiversity tremendously from the outset, though common "library" sequences would likely be more or less homogeneous.
In any case, I do not think a designer virus would spell the end of all humanity, although it could cause widespread devastation. For any single pathogen there is a segment of the population which, because of some mutation or other, is simply not susceptible to it. It would be extremely challenging for a virus writer to take the level of diversity among all humanity into account. We evolve too. What's more, designer viruses would also enable us to begin building our own defenses against such things, if the researchers can keep up with the bio kiddiez.
As for advanced AI presenting a threat, I'm not as concerned about that one: I don't think an advanced AI would want to kill us any more than we want to kill off the chimps. If anything it would want to study our behavior - if it's that advanced we're no threat to it, and if it's not we still have a chance of stopping it.
None of this is in disagreement with your argument that establishing distant colonies would be beneficial for the robustness of humanity and of life, BTW. That's still the best long term solution.
Accuracy isn't all that telling a figure. I'd have expected sensitivity and specificity: i.e., what proportion of lies are actually detected vs. what proportion of true statements are falsely flagged as lies (the false-positive rate, which is one minus specificity). Actually, I was kind of hoping for an ROC curve in the paper; that's pretty much the standard way to characterize a classifier in this field.
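To make the distinction concrete, here's a minimal sketch in Python of how sensitivity and specificity fall out of the confusion matrix (the labels and predictions are invented, not from the paper):

```python
def confusion_counts(y_true, y_pred):
    """Count TP/FP/TN/FN, treating 1 = lie, 0 = true statement."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

# Hypothetical ground truth and detector output
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

tp, fp, tn, fn = confusion_counts(y_true, y_pred)
sensitivity = tp / (tp + fn)  # fraction of lies actually caught
specificity = tn / (tn + fp)  # fraction of true statements correctly cleared
accuracy = (tp + tn) / (tp + fp + tn + fn)
print(sensitivity, specificity, accuracy)
```

Note that a single accuracy number (0.8 here) hides the fact that the detector misses a quarter of the lies and flags one honest statement in six.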
False positives in a system like this can be pretty dangerous. "Innocent until proven guilty" means they should be trying to reduce the false positive rate even if it compromises the ability to identify actual lies. But to do that they need to separate out the different types of mistakes the system can make.
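One concrete way to do that: if the detector produces a continuous suspicion score rather than a hard yes/no, you can raise the decision threshold until the false-positive rate is acceptably low, at the cost of missing more actual lies. A sketch with invented scores (sweeping the threshold and plotting TPR against FPR is exactly how you'd build the ROC curve mentioned above):

```python
def rates_at_threshold(scores, labels, threshold):
    """FPR and TPR when score >= threshold is classified as 'lie' (label 1)."""
    preds = [1 if s >= threshold else 0 for s in scores]
    fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)
    tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)
    return fp / labels.count(0), tp / labels.count(1)

# Invented detector scores (higher = more lie-like) and ground truth
scores = [0.9, 0.8, 0.7, 0.55, 0.6, 0.4, 0.3, 0.2, 0.65, 0.1]
labels = [1,   1,   1,   1,    0,   0,   0,   0,   0,    0]

for thr in (0.5, 0.6, 0.7):
    fpr, tpr = rates_at_threshold(scores, labels, thr)
    print(f"threshold {thr}: FPR={fpr:.2f}, TPR={tpr:.2f}")
```

Raising the threshold from 0.5 to 0.7 drives the false-positive rate to zero here, but sensitivity drops from 1.0 to 0.75 - that's the trade-off a "presumption of innocence" policy would deliberately accept.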
"at one point, even the arm of a human volunteer."
I don't know about Germany, but in the USA such a study would never get past the IRB (institutional review board) at most research universities and labs.
"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight