I read the featured article, and I see ways that publishers could misuse some of the recommendations as excuses for profit-grabbing practices that plenty of Slashdot users would detest.
For example, some organizations will claim a real business need to store intellectual property or other sensitive material on the client. The first consideration is to confirm that sensitive material really does need to be stored on the client.
Video game publishers might take this as an excuse to shift to OnLive-style remote video gaming, where the game runs entirely on the server, and the client just sends keypresses and mouse movements and receives video and audio.
I'm not sure how the binary code and assets of a proprietary program could be watermarked, given that watermarking makes each copy unique and would therefore seem to require separately digitally signing each copy.
Authentication via a cookie stored on a browser client may be sufficient for some resources; stronger forms of authentication (e.g., a two-factor method) should be used for more sensitive functions, such as resetting a password.
For small web sites that don't store financial or health information, I don't see how this can be made affordable. Two-factor authentication typically incurs the cost of shipping a hardware token to each user. Even if you as a developer can assume that the end user already has a mobile phone and pays for service, there's still a cost for you to send text messages and a cost for your users to receive them, especially in the United States market, where not all plans include unlimited incoming texts.
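For what it's worth, a software-only second factor avoids the per-message cost, though it still assumes every user owns a device that can run an authenticator app. A minimal sketch of a TOTP check (RFC 6238), compatible with common authenticator apps; the function names are my own:

    import base64, hmac, struct, time

    def totp(secret_b32, at=None, digits=6, step=30):
        # One RFC 6238 time-based one-time password (HMAC-SHA1).
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int((time.time() if at is None else at) // step)
        mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
        offset = mac[-1] & 0x0F                                 # RFC 4226 dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify_totp(secret_b32, submitted, step=30, window=1):
        # Accept one time step of clock skew on either side of "now".
        now = time.time()
        return any(hmac.compare_digest(totp(secret_b32, now + i * step), submitted)
                   for i in range(-window, window + 1))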
a system that has an authentication mechanism, but allows a user to access the service by navigating directly to an “obscure” URL (such as a URL that is not directly linked to in a user interface, or that is simply otherwise “unknown” because a developer has not widely published it) within the service without also requiring an authentication credential, is vulnerable to authentication bypass.
How is disclosure of such a URL any different from disclosure of a password? The obscure URL is effectively a credential, and one could get the benefit of credential rotation by changing the URL periodically.
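Treated that way, an "obscure" URL is just a bearer credential, and it should at least be minted like one. A sketch; the base URL is a placeholder:

    import secrets

    def new_capability_url(base="https://example.com/d/"):   # placeholder base URL
        # Generate the secret path segment with a CSPRNG, and rotate it on a
        # schedule, exactly as you would rotate a password.
        return base + secrets.token_urlsafe(32)               # ~256 bits of entropy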
For example, memory access permissions can be used to mark memory that contains only data as non-executable and to mark memory where code is stored as executable, but immutable, at runtime.
This is W^X (write XOR execute). But to what extent is it advisable to take this principle as far as iOS takes it, where an application can never flip a page from writable to executable? That policy blocks applications from implementing any sort of JIT compilation, which can limit the runtime performance of a domain-specific language.
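For the curious, the write-to-execute flip that iOS forbids is a single mprotect() call. A sketch of the classic JIT pattern, assuming Linux on x86-64; on a strict W^X platform the mprotect() call would simply fail:

    import ctypes, ctypes.util, mmap

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    # Write machine code into a writable page: mov eax, 42 ; ret
    page = mmap.mmap(-1, mmap.PAGESIZE, prot=mmap.PROT_READ | mmap.PROT_WRITE)
    page.write(b"\xb8\x2a\x00\x00\x00\xc3")

    # The W -> X transition that a strict W^X policy disallows.
    addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
    if libc.mprotect(ctypes.c_void_p(addr), mmap.PAGESIZE,
                     mmap.PROT_READ | mmap.PROT_EXEC) != 0:
        raise OSError(ctypes.get_errno(), "mprotect failed")

    jit_func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)
    print(jit_func())   # 42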
Key management mistakes are common, and include hard-coding keys into software (often observed in embedded devices and application software)
What's the practical alternative to hard-coding a key without needing to separately digitally sign each copy of a program?
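One answer I've seen: ship every copy byte-identical with no embedded key, and generate a per-install key on first run, so a single code signature covers every copy while no two installs share a secret. A sketch; the config path is a placeholder:

    import secrets
    from pathlib import Path

    KEY_PATH = Path.home() / ".config" / "example-app" / "device.key"   # placeholder path

    def load_or_create_key():
        # The secret is created on the user's machine at first run, so it never
        # appears in the distributed binary and is unique to this install.
        if KEY_PATH.exists():
            return KEY_PATH.read_bytes()
        KEY_PATH.parent.mkdir(parents=True, exist_ok=True)
        key = secrets.token_bytes(32)      # 256-bit random key
        KEY_PATH.touch(mode=0o600)         # owner-only before any bytes land
        KEY_PATH.write_bytes(key)
        return key

That covers device identity and local encryption, but it doesn't help the DRM case, where the secret has to be kept from the machine's own owner.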
Default configurations that are “open” (that is, default configurations that allow access to the system or data while the system is being configured or on the first run) assume that the first user is sophisticated enough to understand that other protections must be in place while the system is configured. Assumptions about the sophistication or security knowledge of users are bound to be incorrect some percentage of the time.
If the owner of a machine isn't sophisticated enough to administer it, who is? The owner of a computing platform might use this as an excuse to implement a walled garden.
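A secure default doesn't have to mean taking administration away from the owner, though. A sketch of a closed-by-default first run; the marker file and addresses are placeholders:

    import os

    SETUP_FLAG = "/etc/example-app/admin_password_set"   # placeholder marker file

    def bind_address():
        # Until an administrator password has been set, listen only on loopback,
        # so the half-configured service is unreachable from the network.
        if not os.path.exists(SETUP_FLAG):
            return "127.0.0.1"
        return "0.0.0.0"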
On the other hand, it might be preferable not to give the user a choice at all; for example, if a default secure choice does not have any material disadvantage over any other; if the choice is in a domain that the user is unlikely to be able to reason about;
A "material disadvantage" from the point of view of a platform's publisher may differ from that from the point of view of the platform's users. Another potential walled garden excuse.
Designers must also consider the implications of user fatigue (for example, the implications of having a user click “OK” every time an application needs a specific permission) and try to design a system that avoids user fatigue while also providing the desired level of security and privacy to the user.
Google tried this with Android by listing all of an application's permissions up front, at installation time. The result was that some end users were left with no acceptable applications at all, because every application in a given category requested permissions they found unacceptable.
A more complex example of these inherent tensions would be the need to make security simple enough for typical users while also giving sophisticated or administrative users the control that they require.
Either that, or an application or platform publisher might just punt on serving sophisticated users.
Validate the provenance and integrity of the external component by means of cryptographically trusted hashes and signatures, code signing artifacts, and verification of the downloaded source.
This too could be misinterpreted as a walled garden excuse when a platform owner treats third-party applications as "external components" in this sense.
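For a genuinely external dependency, though, the check itself is cheap. A sketch of verifying a download against a published SHA-256 digest; the file name and digest are placeholders:

    import hashlib, hmac

    def verify_sha256(path, expected_hex):
        # Stream the file so a large download need not fit in memory.
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                digest.update(chunk)
        return hmac.compare_digest(digest.hexdigest(), expected_hex.lower())

    # Compare against the digest published alongside the component:
    # verify_sha256("component-1.2.3.tar.gz", "9f86d081884c7d65...")   # placeholders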