Comment Re:Ken Thompson's Hack (Score 1) 115

No, actually, you can't. It's computationally infeasible to find deliberately hidden malware in a body of code, whether source or object. So no amount of analysis and/or testing can ever reliably tell you whether in fact your existing system is corrupt. You can only accomplish that by starting with a formal set of requirements that you can then successively refine into code that is (a) minimal, and (b) demonstrably maps directly to your formal specifications and their requirements. Any excess beyond what is minimally required to accomplish your task is a potential reservoir of latent malware. Note that such malware may not be present in the source at all (ref. Ken Thompson's attack); or if it is, it may make use of various global state variables on the system as a trigger, or key, to unlock its functionality.
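
To make the trigger/key point concrete, here's a toy illustration (entirely invented - not real malware or anyone's actual technique) of a payload keyed to ambient system state. Inspecting the shipped bytes reveals only noise; the behavior exists only where the triggering state is present, which is exactly why inspection alone can't establish the absence of hidden function.

```python
# Toy sketch: a payload hidden behind a key derived from global system state.
# All names here are invented for illustration.
import hashlib

def trigger_key(hostname, username):
    """Derive a key from ambient state the attacker expects on the target."""
    return hashlib.sha256(f"{hostname}:{username}".encode()).digest()

def xor_bytes(data, key):
    """Trivial stream cipher, enough for the demonstration."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret_action = b"do-something-hostile"
# The attacker ships only ciphertext, keyed to one specific machine state:
shipped = xor_bytes(secret_action, trigger_key("victim-host", "root"))

# On an analyst's machine the bytes decode to meaningless noise...
assert xor_bytes(shipped, trigger_key("analyst-box", "alice")) != secret_action
# ...and only the intended global state unlocks the behavior.
assert xor_bytes(shipped, trigger_key("victim-host", "root")) == secret_action
print("payload inert except under the triggering state")
```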

Comment Segmentation to represent security objects? (Score 1) 115

Given the long-established practice in high assurance computer systems design of using segments to represent base-level security objects (so as to maximize alignment of hardware-enforced security policies with the promised protections of those objects), will your new OS design rely on segments to represent security objects? If not, what hardware abstraction will you use instead?

Botnet

Feds To Remotely Uninstall Bot From Some PCs 211

CWmike writes "Federal authorities will remotely uninstall the Coreflood botnet Trojan from some infected Windows PCs over the next four weeks. Coreflood will be removed from infected computers only when the owners have been identified by the DOJ and they have submitted an authorization form to the FBI. The DOJ's plan to uninstall Coreflood is the latest step in a coordinated campaign to cripple the botnet, which controls more than 2 million compromised computers. The remote wipe move will require consent, and the action does come with warnings from the court that provided the injunction against the botnet, however. 'While the 'uninstall' command has been tested by the FBI and appears to work, it is nevertheless possible that the execution of the 'uninstall' command may produce unanticipated consequences, including damage to the infected computers,' the authorization form reads. FBI Special Agent Briana Neumiller said, 'The process does not affect any user files on an infected computer, nor does it ... access any data on the infected computer.' The DOJ and FBI did not say how many machines it has identified as candidates for its uninstall strategy, but told the judge that FBI field offices would be notifying affected people, companies and organizations."

Comment Re:Welcome to Multi-Level Security (Score 1) 237

One, at least, has been developed, and certified multiple times, using x86 commodity PC hardware, with a proprietary security controller (to provide isolated identity secret key and crypto support, like would be provided via a TPM chip, today).

What would it do if it ran in a virtual machine with thunks provided so that the virtual machine had direct access to the proprietary security controller? In other words, how easy is it to perform a MITM attack on the OS and security controller?

With a Class A1 security kernel, or a Class A1 VMM, each virtual machine access to hardware would necessarily be mediated by the TCB - thus the reference monitor - whether that's done via partitioning hardware to the VMs, or virtualizing some hardware (like network interfaces, console, etc) via trusted services provided by the TCB. The design would necessarily preclude your MITM attacks.
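
A minimal sketch of that mediation idea (device names and the policy table are invented for illustration): every VM access to hardware funnels through one TCB choke point that consults the policy, so no unmediated path to the device exists for a MITM to exploit.

```python
# Sketch of the reference-monitor pattern: one mediation function, no bypass.
# The policy table and device names are made up for the demo.
POLICY = {
    ("vm_low", "nic0"): True,      # low VM gets the network interface
    ("vm_high", "nic0"): False,    # high VM must not reach the low network
    ("vm_high", "console"): True,  # high VM gets a virtualized console
}

class AccessDenied(Exception):
    pass

def mediate(vm, device, operation):
    """The TCB's single choke point for virtual-machine hardware access."""
    if not POLICY.get((vm, device), False):
        raise AccessDenied(f"{vm} may not reach {device}")
    return operation()

print(mediate("vm_low", "nic0", lambda: "packet sent"))
try:
    mediate("vm_high", "nic0", lambda: "packet sent")
except AccessDenied as e:
    print("blocked:", e)
```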

I wouldn't blame monolithic kernels per se. In my secure computing utopia, hardware privilege separation wouldn't even be necessary. Formal verification at the instruction level provides everything (and more) that hardware protection could, while ECC verification prevents the exploitation of bit flips in pointers that was used against the JVM. Formal verification could be attempted on an existing monolithic kernel, and might even succeed, but the work would be in defining a secure operating model and then tracking down all the bugs that violate the model when the proof generator fails, and writing proofs by hand for the hairy parts.

No, formal verification of the monolithic kernels is not possible. At the system level, we want the formal verification to take the form "demonstrate that there is no insecure state into which the system may enter". Specifically, the verification needs to demonstrate that there are no hidden interfaces, not even cryptographically protected ones that use long keys of system-wide state to decrypt or enable them, as those are the sorts of attack that a determined adversary might be expected to use.
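
The flavor of that system-level claim can be sketched as an exhaustive reachability check over a toy state machine. Real verification operates on a formal model of the entire TCB, not a hand-enumerated graph like this one, but the property being demonstrated is the same: no path from the initial state reaches a forbidden state.

```python
# "No insecure state is reachable," demonstrated on a toy transition system.
# The states and transitions are invented for the demo.
def reachable(initial, transitions):
    """Return every state reachable from `initial` via `transitions`."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        frontier.extend(transitions.get(state, []))
    return seen

# Transitions the design permits for a toy lock:
transitions = {
    "locked": ["unlocking"],
    "unlocking": ["unlocked", "locked"],
    "unlocked": ["locked"],
}
insecure = {"unlocked_without_auth"}  # states the policy forbids

assert reachable("locked", transitions).isdisjoint(insecure)
print("no insecure state reachable")
```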

A rational approach is to use hardware mechanisms, like segmentation, and MMU-enforced privilege level separation mechanisms (e.g., rings) to reduce the footprint subject to subversion attacks. Yes, the hardware mechanisms need also to be verified and protected against modification (whether by changing resistance or capacitance values, voltage levels, or loading hostile microcode). But that's all part of the environmental and trusted distribution aspects of the system security policy, overall.

I'm a believer in layered, modular composition in this regard - it gives you at least a prayer of analyzing and understanding the nature of a system. Formally verify the hardware and the software allowed unmediated access to it. Allocate (TCB Subsets and partitions) responsibility for other aspects of your security policy to high layers and other processors. Rely on the formally verified TCB to enforce separation and controlled sharing of information according to the policies it implements.

Let applications be applications. Put your foot down when they try to be trusted, particularly with regard to memory management (paging, for example). In this way, enable the vast majority of applications to work the way they were intended to, without knowledge of or worries about mandatory policies. If they're good applications, teach them to use search paths to aggregate their data sources, so they can "read down" without having to be taught anything about MLS - just add the appropriate paths to their search and let the TCB control whether and which can see what.
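
That search-path idea can be sketched as follows (the labels, level ordering, and directories are all invented; in a real system the label check is the TCB's job, not the application's):

```python
# Hedged sketch: an application aggregates data via a search path, and a
# label check (standing in for the TCB) decides which entries it may see.
import os
import tempfile

LEVELS = {"public": 0, "internal": 1, "secret": 2}  # invented label lattice

def visible_files(search_path, subject_level, label_of):
    """Return files the subject may 'read down' to, in search-path order."""
    out = []
    for directory in search_path:
        for name in sorted(os.listdir(directory)):
            path = os.path.join(directory, name)
            if LEVELS[label_of(path)] <= LEVELS[subject_level]:
                out.append(path)
    return out

# Demo with two throwaway directories standing in for high and low data.
low, high = tempfile.mkdtemp(), tempfile.mkdtemp()
open(os.path.join(low, "readme.txt"), "w").close()
open(os.path.join(high, "plans.txt"), "w").close()
labels = {os.path.join(low, "readme.txt"): "public",
          os.path.join(high, "plans.txt"): "secret"}

files = visible_files([high, low], "internal", labels.get)
print([os.path.basename(f) for f in files])  # the secret file is filtered out
```

The application just walks its search path; which entries actually show up is decided entirely by the label comparison, so it never needs to know MLS exists.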

So - don't create a bias towards trusting apps - use trusted systems that protect apps from each other, and protect your data from even hostile apps.

Comment Re:Welcome to Multi-Level Security (Score 1) 237

As you point out - on a weak system, anyone with read access to data can probably write it out somewhere, too. Not so with an MLS system supporting data integrity and/or secrecy protections, combined with adequate label integrity mechanisms for exported data (a form of strong DRM, if you like). Of course, protecting against undocumented features (Easter eggs) requires another level of assurance, and the ability to verify that there are no hidden trap doors available for (or planted by) determined adversaries.

Good luck implementing a system like that on anything but dedicated proprietary hardware.

One, at least, has been developed, and certified multiple times, using x86 commodity PC hardware, with a proprietary security controller (to provide isolated identity secret key and crypto support, like would be provided via a TPM chip, today).

Virtualization is enough to blow mandatory access controls out of the water without even talking about Van Eck phreaking, physical separation between networks of different privilege levels, etc.

Virtualization DOES require special attention, which is what is so disturbing about the half-baked virtualization support provided by current architectures - see Karger's papers on requirements for MLS VMMs and the lessons learned from the DEC Class A1 VMM effort.

In general, I would also disagree with your claim that a proper mandatory access control system allows users of different clearances to use the same device. Covert channels are too difficult to prevent without severely limiting the usefulness of the system. Basically if two users share the same CPU or hardware they can almost always find a way to use some resource conflict as a covert channel, whether it's just the average response time for a certain operation to complete or something like the key recovery attacks that use cache latency to actually steal keys from a non-cooperating process.

Covert channels are hard, but they're more manageable than the hopeless pursuit of zero-day 'sploits that characterizes modern commercial practice. That evaluated system I mentioned above included in its configuration options for the elimination of any storage channels, and controls to manage bandwidth of timing channels.
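
The resource-conflict channel described in the parent can be modeled abstractly (the delays here are simulated numbers, not real measurements - a real attack would measure actual latency on shared hardware): the high side encodes bits by loading a shared resource, and the low side recovers them from observed response times.

```python
# Toy model of a timing covert channel between two processes sharing hardware.
# Delay values and the threshold are invented for the demo.
def high_send(bits, busy_delay=5.0, idle_delay=1.0):
    """High side: each bit becomes a 'response time' on the shared resource."""
    return [busy_delay if b else idle_delay for b in bits]

def low_receive(observed, threshold=3.0):
    """Low side: classify each observed latency back into a bit."""
    return [1 if t > threshold else 0 for t in observed]

message = [1, 0, 1, 1, 0, 0, 1]
leaked = low_receive(high_send(message))
assert leaked == message
print("bits leaked through timing alone:", leaked)
```

Bandwidth-limiting controls of the kind mentioned above work by adding noise or quantizing the observable latencies, shrinking how many bits per second a channel like this can carry.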

The disconnect is this - customers have arguably MLS applications (in the commercial sense) that they're running without benefit of even the most rudimentary controls on information flow. The security industry is wrapped up in a hopeless test-and-patch cycle that is demonstrably flawed. The consumer has to conclude, in this market, that computer security is an oxymoron at its heart, and unattainable by any means.

That's too bad. You CAN control information flows according to a sound, reasonable, usable security policy (integrity, secrecy, whichever). You CANNOT do it using systems that were never designed nor intended to do that. You CAN combine trusted systems with untrusted systems such that trusted systems prevent unintended information flows, while untrusted systems provide expansive functionality on the information you allow them to access. You CAN allow "read-down" by untrusted systems to lower security data IF the sharing of that data between sensitivity levels is provided by a trusted system (complete with controls on channels). No, you shouldn't let high systems issue their own requests to low networks (i.e., browse the Internet) without trusted mediation - either by doing the browsing from a trusted (high assurance) client with trusted cut-n-paste (a thin client would do), or via some trusted server intermediary to cache and clear low data for high system consumers.

But in today's world, everything, and I do mean everything, from routers to hubs and switches to firewalls, workstations and servers, runs on multi-million-line monolithic kernel systems, whether Linux, Solaris, BSD or Windows. None of those is evaluable above EAL4 (which is "low assurance" by definition).

Comment Re:Welcome to Multi-Level Security (Score 1) 237

That having been said, no matter how 'secure' your OS, right down to privilege separation by process (can't have a Top Secret level document open at the same time as a Secret level notepad

Actually, with a sufficiently secure OS, you can give TS subjects within your process ability to read, but not write, information at lower levels without copying it up to TS, without any trusted code. But as you note below, that's not entirely sufficient.

) you can still write down notes, take pictures of the screen, and so on. Hell, memorize the salient points and take them home.

Any document, once read, is in the wild. The best you can hope to do is a) make it expensive to copy, be it in time, effort or money, and b) make any given copy of the document identifiable enough to find a scapegoat.

Which is why a security policy must incorporate both technical and non-technical measures. If you really can't trust the people you let read your sensitive information, you're sunk. THAT is my point.

Most IT environments' security fails because they don't really have a crisp idea of what they want to accomplish - for instance, they want to give untrusted partners access to really sensitive information and then try to control what the untrusted partners do with their newfound knowledge.

That's the domain of contract law. You wrap up the untrusted partner in contracts that spell out consequences of violating what little trust you place in them.

You audit to verify they adhere to the contract terms, which spell out the security policy you require them to enforce.

You use MAC and MLS policies to keep them from getting to stuff you really don't trust them, under any circumstances, to see - your really private stuff, your other partners' stuff, and stuff they just have no business seeing from you.

If only "defense in depth" had been defined in terms of MAC, DAC/Audit, Personnel Policy and Contract Law, instead of in terms of multiple broken unaudited DAC systems leaning against one another like a pile of dominoes.

Perhaps in another lifetime.

Comment Re:Welcome to Multi-Level Security (Score 1) 237

Sorry, no - mandatory access controls mean that users of the system, including administrators, cannot override the security policy - which is usually expressed in terms of sensitivity labels on protected objects (data, devices, etc.) compared to clearance labels according to a dominance relationship.
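
That dominance relationship is easy to state precisely. Here is a minimal Bell-LaPadula-style sketch (the level names and category sets are invented for the demo): a label dominates another when its level is at least as high and its category set contains the other's.

```python
# Minimal sketch of label dominance and the two classic MAC rules.
# Level names, ordering, and categories are illustrative only.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """Label a dominates label b iff a's level >= b's level and
    a's category set is a superset of b's."""
    (a_level, a_cats), (b_level, b_cats) = a, b
    return LEVELS[a_level] >= LEVELS[b_level] and a_cats >= b_cats

def may_read(subject, obj):
    """Simple security property: read down (or at level) only."""
    return dominates(subject, obj)

def may_write(subject, obj):
    """*-property: write up (or at level) only, so reads can't leak down."""
    return dominates(obj, subject)

user = ("SECRET", {"CRYPTO"})
memo = ("CONFIDENTIAL", {"CRYPTO"})
print(may_read(user, memo))   # True: the subject's label dominates the memo's
print(may_write(user, memo))  # False: writing down would leak information
```

Note that no field of these labels is under the user's control - that is what makes the policy mandatory rather than discretionary.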

NT and its successor OSes have always been considered "single level", lacking any concept of labels or supporting multiple clearances of users. Consider that they have been consistently evaluated under the Controlled Access Protection Profile (CAPP) of the Common Criteria, which maps to the TCSEC Orange Book C2 system requirements.

Whether NT was "built from the ground-up to enforce" mandatory access controls is moot - it's never been sold that way. And it can't be used that way, if you really care about "this class of users can have access to this, but THIS OTHER class of user can't". Which is what makes it so dangerous to use Windows on a DMZ supporting Internet users while giving the DMZ host access to internal systems.

In the case of the original post - DRM is a poor man's attempt to use cryptography to get around the inherent weakness and insecurity of the operating system they've chosen to use - a fool's game.

As you point out - on a weak system, anyone with read access to data can probably write it out somewhere, too. Not so with an MLS system supporting data integrity and/or secrecy protections, combined with adequate label integrity mechanisms for exported data (a form of strong DRM, if you like). Of course, protecting against undocumented features (Easter eggs) requires another level of assurance, and the ability to verify that there are no hidden trap doors available for (or planted by) determined adversaries.

But that's grist for another post. This is already more than most are likely to read.

Comment Welcome to Multi-Level Security (Score 2, Insightful) 237

First, though, if you don't have a document handling and marking policy for PAPER documents, you're unlikely to succeed implementing one for electronic documents. In other words, if you don't presently mark printed documents with restrictive handling requirements ('secret', 'confidential', 'proprietary', 'atty-client privileged'), it won't do you any good to try to control their electronic versions.

Second, Windows has never been designed to try to enforce more than discretionary controls. What does that mean? It means that EVERYONE who touches the machine or its data is presumed to be cleared to see whatever is on the machine. They may not have the need to know what's there (that's what DAC does), but they're cleared to see it - so they're TRUSTED to handle it correctly.
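
The discretionary model described here amounts to an owner-maintained access list: it enforces need-to-know among users who are all presumed cleared (a toy sketch, object and user names invented):

```python
# Sketch of discretionary access control: the owner grants need-to-know,
# but everyone on the machine is presumed cleared for its contents.
acl = {"budget.xls": {"alice", "bob"}}  # owner-granted access list

def dac_allows(user, obj):
    """Need-to-know check only; there is no clearance comparison at all."""
    return user in acl.get(obj, set())

print(dac_allows("alice", "budget.xls"))    # True: owner granted access
print(dac_allows("mallory", "budget.xls"))  # False: not on the list
```

Notice what's missing compared with the MAC case: nothing stops alice, once she can read the file, from copying it into an object with a wider access list - which is exactly the trust assumption described above.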

If that doesn't describe your environment, you should reconsider whether a single-level system, like Windows, is suitable for storing, printing and using your documents in your environment.
