I have several concerns with that article.
"Security is not a property of a technical system," she noted in her talk at the Hack in the Box conference in Amsterdam. "Security is the set of activities that reduce the likelihood of a set of adversaries successfully frustrating the goals of a set of users."
No. "Security" does not exist. You can be MORE secure than X or you can be LESS secure than X but you cannot achieve "security".
For me, being MORE secure means that fewer people can successfully attack you (or that the attack requires more of them to work together).
Saitta realized that a lot of what we know in the security world can't be effectively used if someone in the real world is targeted by a determined adversary.
No. That is getting back to the MORE secure or LESS secure. If the attacker has to drop armed forces onto your office building then you are MORE secure than if they exploited a 0-day on your web site.
We shouldn't work on assumptions or go by intuition - we should set aside our egos and consult with the end users - learn about their goals and adversaries.
I'd say that 99.9+% of them have no idea who their adversaries are. Other than "that asshole Bob" or "the Chinese".
In the case of high-risk users, usable security is a must.
Is there ever a case where unusable security is a must?
As she vividly put it: if you're on a rooftop, trying to get a connection and successfully send out an encrypted message because your life or freedom - or that of others - depends on it, and you know that there are snipers waiting to take a shot at you - there is simply zero room for using a tool as complex as PGP.
Choose the right tool for the job AND LEARN HOW TO USE IT PRIOR TO THE EMERGENCY.
And if her example is, literally, snipers on the rooftops, then whoever did the computer security did a fucking great job. This is an example of a win, not a failure.