The 80% figure, which is the AUC (area under the curve), is a threshold-free summary of performance. To make the classifier usable in the Real World, you'd have to tune its threshold: either crank it up until it produces nearly zero false positives (and thus detects very few trolls), or make it flag posts non-fatally, perhaps by gating them behind nearly impossible CAPTCHAs, which immediately defeats its anti-troll utility (not to mention angering all of the falsely flagged users!).
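To see why an 80% AUC doesn't translate into a deployable troll detector, here is a minimal sketch with a synthetic classifier at roughly that AUC (the score distributions and troll base rate are illustrative assumptions, not the paper's actual model). Because trolls are rare, pushing the false-positive rate toward zero also pushes troll recall toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic classifier scores (assumed for illustration): trolls score
# higher on average, and trolls are a small minority of users.
normal = rng.normal(0.0, 1.0, 100_000)  # scores for ordinary users
troll = rng.normal(1.2, 1.0, 1_000)     # scores for trolls

def rates(threshold):
    """False-positive rate and troll recall at a given score cutoff."""
    fpr = (normal > threshold).mean()
    tpr = (troll > threshold).mean()
    return fpr, tpr

# Empirical AUC: the probability that a random troll outscores a random
# ordinary user, estimated on a subsample. Comes out near 0.80 here.
auc = (troll[:, None] > normal[None, :1000]).mean()
print(f"AUC ~ {auc:.2f}")

# Sweep the threshold: low cutoffs catch most trolls but flag many
# innocents; high cutoffs flag almost nobody, trolls included.
for t in (0.5, 1.5, 2.5, 3.5):
    fpr, tpr = rates(t)
    print(f"threshold={t}: FPR={fpr:.4f}, troll recall={tpr:.2f}")
```

Even at the strictest cutoff, the few false positives are drawn from a pool of 100,000 ordinary users, so the falsely flagged can still outnumber the handful of trolls actually caught.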
The article, like the paper itself, ends on this note:
Regarding the possibility of developing automated methods for identifying and even banning trolls, the researchers are circumspect, since 1 in 5 users were misclassified by their analysis system, which otherwise claims to spot a persistent comment pest within as few as ten posts. “While we present effective mechanisms for identifying and potentially weeding antisocial users out of a community, taking extreme action against small infractions can exacerbate antisocial behavior (e.g., unfairness can cause users to write worse).”