The conclusion I'm taking away from this is that the article (and perhaps the study) is complete crap. The stats in the reporting fall apart at the slightest touch. For instance...
1) They're lumping everything from "the phone might've felt a little slow that one time" to "this phone literally summoned the Four Horsemen to usher in the end of the world" into a single "failure" bucket. No weighting, no granularity, and no consideration for the fact that we wouldn't even refer to most of those as "failures" or even the fault of the manufacturer.
2) Their math doesn't add up because they use the term "failure rate" to arbitrarily refer to multiple different concepts, most of which aren't even rates. The most obvious example comes from looking at the Android charts, in which they indicate that Android devices have an overall failure rate of 35%, with the worst manufacturer (Samsung) having a failure rate of 26%. But that makes no sense. If the worst manufacturer has a failure rate of 26%, then the highest the overall failure rate could possibly be (if that manufacturer sold 100% of devices) would be 26%. What they appear to be doing (but don't disclose) is using the term "failure rate" to refer to the share of failures that correspond to each manufacturer.
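The argument in point 2 is just the weighted-average bound: the overall failure rate is a weighted average of the per-manufacturer rates, so it can never exceed the worst manufacturer's rate. A quick sanity check (the market shares below are made up purely for illustration; only the 26% figure comes from the article):

```python
# Overall rate = sum of (market share * failure rate) across manufacturers.
# Shares are hypothetical; the point is the bound, not the numbers.
manufacturers = {
    # name: (market_share, failure_rate)
    "worst": (0.50, 0.26),  # the worst rate the article reports
    "B": (0.30, 0.15),
    "C": (0.20, 0.10),
}

overall = sum(share * rate for share, rate in manufacturers.values())
worst = max(rate for _, rate in manufacturers.values())

print(f"overall = {overall:.3f}")  # 0.195 with these made-up shares
# The weighted average can never exceed the largest rate, so an
# overall 35% with a worst-manufacturer 26% is arithmetically impossible.
assert overall <= worst
```

No matter how you redistribute the market shares (as long as they sum to 1), `overall` stays at or below 0.26, which is why the reported 35% can't be a real failure rate.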
3) For similar reasons, you can't even compare their own numbers against each other. As the fine print in the image indicates, the "failure rate" for each model actually represents that model's share of the failures for their platform. Basically, there's a pie representing all iOS failures, and another representing all Android failures. The iPhone 6 gets 29% of the first pie, and the Le 1S gets 10% of the second pie, but who's to say which slice is actually bigger, since they never tell us how big each pie is? Plus, they cleverly hide the fact that the quantity of slices in each of those pies is likely orders of magnitude different by only telling us about the top 5 models from each.
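To make the pie problem concrete: a bigger percentage slice of a smaller pie can be the same size as (or smaller than) a smaller slice of a bigger pie. The platform totals below are entirely invented, since the report never discloses them; only the 29% and 10% shares come from the article:

```python
# Each model's "failure rate" is a share of its own platform's pie.
# Without the pie sizes, the shares aren't comparable. Totals invented.
ios_total_failures = 1_000_000      # hypothetical size of the iOS pie
android_total_failures = 2_900_000  # hypothetical size of the Android pie

iphone6_share = 0.29  # 29% of the iOS pie (from the article)
le1s_share = 0.10     # 10% of the Android pie (from the article)

iphone6_failures = iphone6_share * ios_total_failures
le1s_failures = le1s_share * android_total_failures

# With these (invented) totals, the "29%" and "10%" slices are the
# exact same number of failures:
assert iphone6_failures == le1s_failures
```

Pick different totals and the comparison flips either way, which is exactly why the chart tells us nothing about which model actually fails more.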
This feels like a case of someone massaging the statistics until they got something that suited their needs, given the odd bucketing and the double-use of terminology. Blancco Technology Group, which authored the study, apparently counts at least one Android manufacturer among its clients, but given how unfavorably that manufacturer was represented, I doubt it's the one behind these trashy statistics. I don't know whether the massaging is Blancco's doing (the report is behind a "give us your info and agree to receive our marketing" wall) or Softpedia's, but either way, there's no useful information in the article.
Were the stats flipped to favor the other side, I'd have the same critiques, since it's trash reporting either way, and Slashdot should be doing a better job of weeding out articles that have no factual basis with which to prop up their clickbait headlines.