
Comment Re:Two words (Score 1) 355

First of all, who cares how much money he made and how much I make.
I don't even know who that guy is.

Secondly: I'm not arguing about him or his ideas, but about my parent poster, who wants Germans and Swiss and other water-rich countries to switch back to nuclear power to desalinate water when they have excess power.

Hint: Switzerland has no access to salt water!

OK, then power a free public transportation system with the excess electricity. The point is, you can always find uses for extra electric power; I know what the vast majority of the world would use it for, even if you wouldn't use it for the same thing.

Submission + - Canonical Patches Two Kernel Vulnerabilities in Ubuntu 14.04

jones_supa writes: Canonical has announced that a new kernel update is now live in the default software repositories of the Ubuntu 14.04 operating system. According to the security notice, two Linux kernel vulnerabilities have been fixed. The first security flaw was discovered in the SCTP (Stream Control Transmission Protocol) implementation, which performed an incorrect sequence of protocol-initialization steps. The second vulnerability was discovered by Dmitry Vyukov in the Linux kernel's keyring handler, which tried to garbage collect incompletely instantiated keys. Both vulnerabilities allow a local attacker to crash the system by causing a denial of service (DoS). To fix the issues mentioned above, Canonical urges all users of Ubuntu 14.04 to update their kernel packages on all platforms.

Comment Re:Levels of Security (Score 1) 118

I'm not going to write an entire paper here on Slashdot.

You already kind of did lol. This is good stuff though. I have some follow-up questions if you don't mind:

1) How are you aware of (and able to control) lower-level things like the page size, or which functions go into which groups of pages?

In a general, hand-wavy fashion, things like page size are an attribute of the compilation environment, and do not vary.

In practice, there are some older MIPS systems and the original NeXTStep which would "gang" 4K pages into 8K pages, and of course there's the Intel variety of superpages, depending on operating mode and contents of CR4, and the PSE bit being set, with or without the PAE bit being set, to give you either 4M or 2M pages. There are also some other architectures that allow even weirder variants.

As a general rule, most of these things other than the default are accessed via two interfaces: Either a section attribute in an executable section descriptor -- meaning it's handled by the kernel -- or via a special user mode interface for allocating large pages. The user mode interface may or may not be hidden in the malloc internals, in order to prevent direct use by a program. In addition, there are potentially device specific controls (in UNIX systems, these would be ioctl's) to map large pages into a user space process; as an example, the frame buffer memory in a Wayland or X Server, and so on.
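
As a concrete illustration of that user-mode interface, here is a minimal sketch of requesting a large page on Linux via mmap(2) with MAP_HUGETLB; it assumes a kernel with huge pages configured (e.g. via /proc/sys/vm/nr_hugepages) and simply falls back to ordinary pages if the request fails.

    /* Minimal sketch: ask for one 2M large page from user space; fall back
     * to normal 4K pages if no huge pages are configured. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #ifndef MAP_HUGETLB
    #define MAP_HUGETLB 0x40000   /* not present in some older headers */
    #endif

    int main(void)
    {
        size_t len = 2 * 1024 * 1024;            /* one 2M page on x86-64 */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (p == MAP_FAILED) {
            /* No large pages available; fall back to ordinary pages. */
            p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
        }
        memset(p, 0, len);                       /* touch the mapping */
        munmap(p, len);
        return 0;
    }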

Practically speaking, one of the most useful things you can do with large pages in a Linux, BSD, or UNIX running on an Intel system is to put the kernel itself into large pages; location won't matter, without a kernel code injection exploit. It's useful because Intel processors maintain separate TLBs (Translation Look-aside Buffers) for large and small pages, and this means that user space processes, and kernel interrupts, traps, or traps from user space to kernel space (e.g. system calls) won't be ejecting each other's pages from the look-aside. Depending on how frequently you end up running in the kernel vs. user space, this can result in an up to about 36% performance improvement.

One of the problems with this is that there's a known bug in Intel processors where INVLPG won't invalidate the page mappings in both TLBs, so there was an early bug that tended to hit Linux systems -- but not FreeBSD systems -- where the INVLPG instruction kicked a page out of one TLB but not both, if it was mapped in both. This was mostly an issue when you tried to convert from running in real mode to running on the PMMU, and then from there, from 4K pages to large pages. The workaround is to INVLPG twice, or to reload CR3, which flushes all the TLBs (making it the "big hammer").

Anyway, that was a digression. In the scenario I discussed using statistical protection, you'd use the compiler and linker to make sections per function or function group, the linker would put linkage vtables in each of these groups when creating the executable, and then the exec function in the kernel would interpret these as allocation units, put the sections in as few contiguous pages as possible, and randomly locate them some place in the process address space -- which on an Intel/PPC architecture would locate them in a 64-bit virtual space, out of a 52- or 53-bit physically addressable space.

When the loader resolved the linkages for shared libraries through the fault/fixup mechanism, it'd do it by library:section, rather than by library alone, using the per-section vtables.
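
As a rough illustration of the section-per-function-group idea (not the exact scheme above), GCC and Clang already let you place functions into named executable sections, which a linker script or a modified exec loader could then treat as separate, independently randomized allocation units. The section names below are invented for the example; a similar effect can be had with -ffunction-sections plus a custom linker script.

    /* Sketch: group functions into named sections so the link/load phase can
     * place each group in its own run of pages.  Section names are invented. */
    #include <stdio.h>

    __attribute__((section(".text.group_checksum")))
    int checksum(const unsigned char *buf, int len)
    {
        int sum = 0;
        for (int i = 0; i < len; i++)
            sum = (sum * 31) ^ buf[i];
        return sum;
    }

    __attribute__((section(".text.group_report")))
    void report(int value)
    {
        printf("checksum: %d\n", value);
    }

    int main(void)
    {
        unsigned char data[] = "example";
        report(checksum(data, (int)(sizeof data - 1)));
        return 0;
    }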

2) Why is it called "container-in-a-mailbox?"

Fair question.

Historically speaking, there are several ways to pass things around between components. One of these is via register reference to the address of the thing. Another is via stack reference to the address of the thing. Another is via descriptor (in VMS, this is the function descriptor; in Mach, this is a Mach Message that is defined in an IDL compiled with a tool called "mig"; in ORBs like omniORB or CORBA, or even COM/DCOM in Windows, it's via an IDL stub that's either compiled via a compiler like mig, or is generated automatically during the link stage, etc.). And then there is message passing, as is done on the data bus of things like the Northern Telecom DV1, DV2, DV3, etc., which are used in the implementation of phone switches.

When you pass messages, it's generally impossible to pass the entire message; and while the message passing could be a partial address space mapping hand-off, as in Mach, another method is to use what's called a "mailbox".

In its simplest implementation, a mailbox is a file in which one program copies a large chunk of data, and then informs some other program that there's data waiting in the mailbox.

In one of the early implementations of a TPS (transaction processing system) implemented by AT&T Bell Labs (named "Tuxedo", if you care), mailboxes were placed in System V shared memory segments, and the notification between programs occurred via System V message queues. In something like sendmail or qmail, they're implemented as separate message and index files in what's called a "mail queue directory".
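
To make the Tuxedo-style arrangement concrete, here is a bare-bones sketch of that pattern: a System V shared memory segment as the mailbox and a System V message queue as the doorbell. The key, permissions, and struct layout are arbitrary for the example, and error handling is abbreviated.

    /* Sketch: post data into a shared-memory "mailbox" and notify the
     * consumer via a message queue.  Key 0x5454 and sizes are made up. */
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <sys/shm.h>

    struct note { long mtype; int shmid; size_t len; };

    int post_to_mailbox(const char *data, size_t len)
    {
        /* 1. Create the mailbox and copy the payload into it. */
        int shmid = shmget(IPC_PRIVATE, len, IPC_CREAT | 0600);
        if (shmid < 0)
            return -1;
        void *box = shmat(shmid, NULL, 0);
        if (box == (void *)-1)
            return -1;
        memcpy(box, data, len);
        shmdt(box);

        /* 2. Tell the other program there is data waiting in the mailbox. */
        int qid = msgget(0x5454, IPC_CREAT | 0600);
        if (qid < 0)
            return -1;
        struct note n = { .mtype = 1, .shmid = shmid, .len = len };
        return msgsnd(qid, &n, sizeof n - sizeof n.mtype, 0);
    }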

In practice, if you are using something like statistical memory protection on a system with a very large address space, and a large amount of physical RAM (or less, if you are willing to page, but you will almost certainly page, due to front-to-back linear processing of mailbox contents), you could just place the messages in anonymous memory mappings, and then "forget" them once their address has been handed off to the next component interface.

So on most modern systems, mailboxes are ideal.

The reason it's container-in-mailbox is that there is a separate logical phase for container receipt vs. validation vs. use of container contents. So you have to put the container in the mailbox so the validation phase can safely operate on it without trusting that the component handing it the data has not been compromised (and likewise with the handoff between validation and utilization).

It's a security domain separation, rather than a protection domain separation or an address space domain separation (although address space, in the case of statistical protections -- or hypothetical future hardware or use of ring 1/2, if you can live with a granularity of "3" -- generally amounts to one of those as an implementation detail).

3) you wrote, "Most modern (predominantly research) security architectures" who is doing this research, and where can I find it?

Wow. Pretty much everyone in OS software who cares?

IBM and Microsoft are players, OpenBSD is, for some types of things. Apple is; Linux people (though I think it was a DARPA project run by IBM?) were the first to implement ASLR; I think Apple was the first to ASLR absolutely everything? And to do page level executable signature verification in the paging path? Though I think they mostly did it for DRM reasons, rather than to be helpful to users. I think compiler stack probes came from the LLVM folks?

The hardware guys have pretty much been warts on the tail of progress; they're not very fast to implement anything for which there isn't already a proven market, because of development costs. TrustZone on ARM is the one thing I can think of that went big-time, and that was mostly to allow application software and baseband software to run on the same processor in a cell phone, and that in turn was mostly so that the SDRs (Software Defined Radios) could get certification by agencies like the FCC in the U.S. and whatever passes for the FCC in various other countries.

As part of this, you define an interface contract: you are permitted to call down to the interfaces below yourself, and you are permitted to call across, within the same layer to auxiliary functions, but under no circumstances are you permitted to call upward.

That would ruin (or improve) a lot of modern OO techniques.

Well yes, and no.

From a security perspective, if you were using an object interface vtable on a linkage for a C++ object, it would certainly prevent you from doing things like hiding the function pointers from other code that's allowed to call into the object. So the mechanics of OO language design are inherently inimical to security, from that standpoint. On the other hand, you could be handing off the call as a descriptor with an object to other code that knows how to do the dispatch, and performs information hiding to keep you from adding 4 or 8 or whatever to the known address of the function to get to the next function, which might be a friend function or a private function located sequentially in the object.
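
Here is a toy sketch of that descriptor-dispatch idea in C: the caller holds only an opaque handle and a method ID, and a trusted dispatcher owns the function pointers, so the caller can't derive one method's address from another's. Every name in it is invented for illustration.

    /* Toy sketch: callers see only an opaque handle plus a method ID; the
     * function-pointer table is private to the dispatcher's side. */
    #include <stdio.h>

    typedef int object_handle;             /* opaque to the caller */
    enum method_id { M_OPEN, M_CLOSE, M_COUNT };

    static int do_open(void)  { puts("open");  return 0; }
    static int do_close(void) { puts("close"); return 0; }

    static int (*const method_table[M_COUNT])(void) = {
        [M_OPEN]  = do_open,
        [M_CLOSE] = do_close,
    };

    int dispatch(object_handle h, enum method_id m)
    {
        (void)h;                           /* would select the object instance */
        if ((unsigned)m >= M_COUNT)
            return -1;                     /* reject out-of-range method IDs */
        return method_table[m]();
    }

    int main(void)
    {
        object_handle h = 42;
        dispatch(h, M_OPEN);
        dispatch(h, M_CLOSE);
        return 0;
    }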

This wouldn't entirely preclude layering violations, but it would certainly make them more difficult. That would improve security, but whether it improved the techniques? It depends on whether your techniques were already predicated on interface violations, whether you were accessing data members directly rather than through an accessor/mutator function, whether or not you were using a global static object instance with a reference or entry counter for critical sectioning, and so on.

Sidebar: as a general rule: critical sections should almost never be used, except when dealing directly with hardware interrupts, and not even then, if your hardware is correctly designed to block/queue interrupts until a previous interrupt is explicitly acknowledged. Protect access to data objects, not to the code that accesses or modifies data objects. Code should be intrinsically reentrant. There are some really cheap mechanisms you can use here to avoid pipeline stalls, like atomic increment when adding a reference, and only taking a lock on code when decrementing for the 1->0 check, and so on. You need a data pipeline barrier in that case, but you don't incur both a data and a code pipeline stall, etc. (if you want to see an example of how this is done, look at the kern_cred.c and kern_credential.c code in Mac OS X -- it's on
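
A rough sketch of that retain/release pattern using C11 atomics and pthreads (this mirrors the idea described above, not the actual kern_cred.c code): taking a reference is a lock-free atomic increment, and only the release path that might hit the 1->0 transition takes the lock guarding teardown.

    /* Sketch: cheap atomic retain; lock only around the possible 1->0
     * release, where teardown (e.g. unlinking from a lookup table) happens. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdlib.h>

    struct obj {
        atomic_int      refs;
        pthread_mutex_t teardown_lock;
        /* ... object data ... */
    };

    void obj_retain(struct obj *o)
    {
        /* No lock, no critical section -- just an atomic add. */
        atomic_fetch_add_explicit(&o->refs, 1, memory_order_relaxed);
    }

    void obj_release(struct obj *o)
    {
        /* acq_rel so prior writes are visible to whoever tears down. */
        if (atomic_fetch_sub_explicit(&o->refs, 1, memory_order_acq_rel) == 1) {
            pthread_mutex_lock(&o->teardown_lock);
            /* ... unlink the object from any shared lookup structure ... */
            pthread_mutex_unlock(&o->teardown_lock);
            pthread_mutex_destroy(&o->teardown_lock);
            free(o);
        }
    }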

In general, I think anything that results in code being reentrant just makes sense, particularly OO code, where the object itself acts as the state container and multiple threads may be operating on separate objects through the same code simultaneously.

Other examples are turnstiles in Solaris, and prohibiting lock recursion as a "kernel panic/segfault offense" to prevent layer revisiting from even being legal at all, etc..

The reason I like DJB's work is because he seems to carefully think about what problems may arise every time he writes a line of code. He may not always succeed, but if you don't have that way of thinking, you will automatically fail at "identifying architectural layers for your libraries in order to abstract complexity of each layer from the layer below it," and will have bugs no matter what rules you follow.

The problem I really have with his work is that it's largely academically oriented, rather than practical. It's like Peter Druschel's work at Rice University on LRP (Lazy Receiver Processing), which is quite brilliant, but impossible to reduce to practice.

At a previous employer, we actually reduced LRP (without the "rescon" additions, which are patented and IMO not useful) to practice. Getting this to actually work usefully as a solution to receiver livelock involved going well beyond the work he and his students had done, and required things like modifying the firmware in Tigon II and Tigon III cards to not interrupt until their last interrupt was acknowledged; for incoming connections, you had to modify the way the routing and socket handling in the accept(2) system call operated, or the kernel would be handed a packet when there wasn't actually an mbuf hanging there ready to receive the incoming connect request, and so on.

It was a lot of "resolving implementation details which are inconvenient for me is left as an exercise for the student".

DJB's work has a lot of that flavor to it.

In particular, if you look at DJBDNS, it has no support for secondaries, it has no support for interior vs. exterior DNS resolution (I wrote the RFC draft for that in the IETF DNS Working Group, which is a mailing list named "namedroppers"), and it has no support for zone transfers.

These were all considered "insecure", and he expected that all DNS servers would be authoritative primaries, and that "zone transfers" would use an out of band communication mechanism (I believe at the time, he was suggesting "rsync" on the zone data files for this?). This was my first experience with his philosophy of partitioning function by program, and functional decomposition as a solution to complexity.

I didn't see this change with qmail -- although he admittedly did cover a larger proportion of the problem space, he still failed to map all of it -- and the places where he "compromised his principles by doing so" demonstrated later weaknesses in the philosophy, like the exploit we've been using as an example in this discussion.

I really don't like "proof is left to the student" type stuff, any more than I liked it when I saw in Feynman's lectures that he was in fact using Clifford Algebras to do quantum physics and didn't bother to share this fact with the rest of us, or when I found out that Newton had invented calculus, and was able to pop Sir Edmund Halley's bubble on his big announcement by answering his rhetorical question "and do you know what shape the orbit describes?" with "it's an ellipse, of course".

Anything where parts of the problem space that are supposedly being mapped by someone's solution aren't reduced to practice tends to be very annoying. But perhaps that's just me... 8^)

Comment Re:Devs continue to develop for these gimped thing (Score 1, Insightful) 135

Oh please, that strawman was debunked ages ago; it's no different than the *.A.A claiming that piracy cost them more than the GDP of the planet! All one has to do is look at the incredible mountains of cash Valve is generating without having put out a new AAA title in years to see that PC gaming is a HUGE money maker and that users do not mind DRM if it's not intrusive, always-online horseshit like UbiSuck.

Now let's cut through the bullshit, shall we? The REAL reason you see so many developing for consoles is that console users are easy to fuck and fuck hard, end of story. They can keep the prices jacked for FAR longer on consoles because they do not have to deal with a free market; with consoles they have a captive monopoly, so they are the only game in town. With PC you have Steam and GOG and the Humble Bundles and Origin and Uplay and Green Man Gaming and Amazon; with PCs the users have a huge market to shop from and endless titles going back 30+ years, so they don't have to put up with the "take it and like it" bullshit. Check a new title 3 weeks after release: where is it cheaper? PC. 3 months? PC. 6 months, a year, 2 years? PC, PC, and PC, simply because if you don't compete? You ain't getting that money.

So don't give us the party line bullshit. It's because of money, alright: it's because they can royally fuck console users, and those users have no choice but to bend over and take it.

Comment Re:Summary is so broken (Score -1, Troll) 135

Are the console OSes really so primitive that they cannot handle these things without dedicated cores? Because I have an 8-core, and despite the OS having much more to do, I have zero trouble doing any of the above while playing games and running plenty of background tasks. Considering the hardware of both consoles is really nothing more than an HTPC, it seems kinda ridiculous that they cannot perform basic tasks that PCs have been able to do quite easily for ages with hardware that powerful.

Maybe somebody that actually works in console dev can chime in with the inside scoop: how primitive are the OSes on these things?

Comment Re:Levels of Security (Score 1) 118

btw, I'm pretty sure you have an interesting point here when you said this:

Functional decomposition is a really poor way of abstracting complexity, when it's being used in isolation, and does not include mandatory boundary layer order and direction of operations over said boundary.

but I'm not entirely sure what you meant. Could you clarify? What other option is there besides functional decomposition?

DJB's philosophy is to minimize individual attack surfaces by reducing code complexity. This has three components, of which DJB himself is an explicit proponent of two. I'm not sure whether this is because he doesn't realize that the third is a consequence of his implementation paradigm, or whether he simply thinks it's too obvious to talk about. These are the components:

(1) Reduce complexity by separating the problem domains into individual processes. This separates necessary privilege escalations from other code, and separates out cross-functionality, address-space-based attacks on the code (see the sketch after this list).

(2) Reduce complexity into functional time domains, involving serialization of operations which could (potentially) otherwise take place in parallel. This is also done through use of individual processes, but is based on the trigger initiating the processes being separate, and therefore not under the control of an attacker. This increases the difficulty of an attack by requiring serial attacks on each component between the intermediate targets and the final target of an exploit (as in the previously referenced "shellshock" attack). For a shellshock attack, this particular precaution was meaningless, since the data was passed straight through to another component without prevalidation or any intervening action. In other words: the particular attack zips through this security precaution.

(3) This may or may not have been intentional, but he reduces the network and system call footprint for each of the components in such a way that it reduces the remotely accessible attack surface (you can only attack things you can talk to) to something which can be firewalled, and reduces the system call footprint of individual components to something that could have local application sandboxing applied, to prevent particular system calls being used by individual program components, or even sequences of system calls being used outside a particular order, or in excess of a particular number of times. This was probably not a design goal, given that neither deep packet inspection/stateful firewalls nor sandboxing were utilized in most systems at the time qmail was originally written. (A sketch of how (1) and (3) might look on a modern Linux system follows below.)
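
Here is a compressed sketch of components (1) and (3) on a modern Linux system, purely as an illustration: the risky work runs in a separate, unprivileged child process, and its system call footprint is then shrunk with seccomp. SECCOMP_MODE_STRICT only permits read(2)/write(2) on already-open descriptors, _exit(2), and sigreturn(2); a real sandbox would use a seccomp-BPF filter tuned to the component, and the user/group IDs here are illustrative.

    /* Sketch: privilege separation plus a minimal syscall sandbox. */
    #include <linux/seccomp.h>
    #include <sys/prctl.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void parse_untrusted_input(int fd) { (void)fd; /* component work */ }

    int run_component(int input_fd)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;

        if (pid == 0) {                         /* child: isolated component */
            if (setgid(65534) || setuid(65534)) /* drop to "nobody" */
                _exit(1);
            if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0))
                _exit(1);
            parse_untrusted_input(input_fd);    /* read/write/_exit only now */
            _exit(0);
        }

        int status;                             /* parent: await the component */
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }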

That's cool and all, but it's taking a hammer to a problem which is actually a result of programmer discipline and machine architecture, and, frankly, some of those architecture issues have been addressed at the operating systems and compiler level for years, and others are better addressed through other mechanisms. It also failed miserably in intentional strategy #2, above.

The first mechanism is boundary layer violations. The most infamous email program in existence is Microsoft Outlook, and it's for good reason. Outlook engaged in interface layering violations. These are responsible for nearly all the initially exploited Outlook vulnerabilities.

What avoiding boundary layer violations means is that, if you are designing correctly, you identify architectural layers for your libraries in order to abstract the complexity of each layer from the layer below it. As part of this, you define an interface contract: you are permitted to call down to the interfaces below yourself, and you are permitted to call across, within the same layer, to auxiliary functions, but under no circumstances are you permitted to call upward. A good example of a boundary layer violation in libc is the use of a function pointer for the compare function in the qsort library routine, which results in an upcall from the libc layer to upper-level code. In general, this should be avoided -- and if you have multiple protection domains, such as ring 1 and ring 2, which are generally unused by most operating systems, it should be prohibited in hardware. A "poor man's" version of hardware prohibition is achievable through a rather more radical utilization of large address spaces than is used in ASLR: statistical page protection. If you can't find the page, and if functions in a library are not laid out in adjacent pages in the process address space when the library is loaded, you can't use a computed location based on a known call site to find an attack vector.
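
To make the qsort example concrete: the comparator below is upper-layer application code that libc calls back into through a function pointer, which is exactly the kind of upcall across the library boundary being described.

    /* libc (the lower layer) upcalls into cmp_int() (the upper layer). */
    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b)   /* application code */
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void)
    {
        int v[] = { 3, 1, 2 };
        qsort(v, 3, sizeof v[0], cmp_int);  /* the upcall happens inside qsort */
        for (int i = 0; i < 3; i++)
            printf("%d ", v[i]);
        putchar('\n');
        return 0;
    }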

Another boundary layering violation which Outlook has failed on -- and which qmail periodically fails upon, and which constitutes a number of usable qmail exploits -- is container boundary verification.

In the second vector for Outlook exploits, after the layering violations are dismissed, we have container boundary verification. While qmail is not subject specifically to MIME-based container boundary verification issues, it has its own problems with containers. In Outlook, these took the form of intentionally malformed data content being passed as part of a message. The easiest of these is the fact that, in order to render a message more quickly, and specifically to support the rendering of HTML messages (which Microsoft still thinks are "Nifty!(tm)"), Outlook started decoding the container contents before verifying the validity of the container. Specifically, it would start rendering GIFs before they were verified to be valid GIFs, and it would start rendering other content before that content was verified to be valid. This is where we get the "malformed attachment" exploits in Outlook.

The correct thing to do is to download the content, verify that each container matches its purported size, and then verify that the containers inside the containers -- images, audio, video, etc. -- are themselves valid, before handing them off to the rendering component. Outlook failed to do this, and treated the header as a dispatch item, handing off the data stream to the rendering component, which allowed a header on a container to cause much more of the byte stream than the container boundary to be consumed, executing payload in subsequent containers. Qmail fails in a similar way by handing unverified container content off to a renderer... and this is precisely how the second component failed them in the shellshock scenario.
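
As a schematic of that "verify the container before dispatching on its header" rule, here is a small C check with an invented header format; real MIME parts or GIF containers would of course need their own format-specific validation on top of the length check.

    /* Schematic: refuse to hand anything to a renderer until the container's
     * declared length is known to fit inside the bytes actually received.
     * The header layout is hypothetical. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct container_hdr {
        uint32_t type;
        uint32_t declared_len;      /* length of the payload that follows */
    };

    /* Returns a pointer to the validated payload, or NULL if the container
     * lies about its size -- in which case nothing reaches the renderer. */
    const uint8_t *validate_container(const uint8_t *buf, size_t buf_len,
                                      struct container_hdr *out)
    {
        if (buf_len < sizeof *out)
            return NULL;                         /* truncated header */
        memcpy(out, buf, sizeof *out);
        if (out->declared_len > buf_len - sizeof *out)
            return NULL;                         /* claims bytes that don't exist */
        return buf + sizeof *out;                /* safe to verify, then render */
    }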

Most modern (predominantly research) security architectures have moved to a container-in-a-mailbox mechanism. You put the contents of the container into a mailbox, and then you run a verifier -- separate from the renderer -- on the mailbox contents, thus preventing an assignment of meaning, and therefore a communication of intelligence (attack data) to a target; only after that, do you hand the mailbox off to the content renderer.

Note that this application of containers in mailboxes has a couple of significant advantages: (A) it's really amenable to things like statistical memory protection, since if you run off the end of a page, you fault, instead of getting meaningful payload data, and (B) for hardening purposes, you can place the container contents so they end exactly at the page boundary, and index the start of the mailbox into the first page, rather than at the start of the page (you can do this because you are aware of the content length as a result of looking at the initial container and having vetted it before mailboxing the data). This means that a scan forward into the container past the boundary results in an immediate fault, even though your hardware perhaps only supports 4K page boundaries, rather than byte-level mapping. And finally, (C), you can map the mailbox contents as read-only, non-executable, non-writeable, before you hand them off even to the verifier, thus preventing self-referential execution as part of an attack.
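
A sketch of points (B) and (C), using plain POSIX calls: the content is copied so that it ends at the last byte of an anonymous mapping, and the whole mailbox is then dropped to read-only before anything is handed to the verifier or renderer. Function and variable names are made up for the example.

    /* Sketch: place content so it ends at the mapping's end, then make the
     * mailbox read-only (and it is already non-executable). */
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    const void *mailbox_place(const void *content, size_t len)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t map_len = ((len + page - 1) / page) * page;  /* round up */

        void *box = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (box == MAP_FAILED)
            return NULL;

        /* Index the start into the first page so the content ends at the
         * very last byte of the mapping; a scan past it faults immediately. */
        char *start = (char *)box + (map_len - len);
        memcpy(start, content, len);

        if (mprotect(box, map_len, PROT_READ) != 0) {   /* read-only from here */
            munmap(box, map_len);
            return NULL;
        }
        return start;
    }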

To deal with the issue of attack surface at interface layers, which qmail handles by decomposition into processes, you can instead rely on the link-loader. In most modern operating systems, the link-load phase is handled in the kernel exec/fork/spawn functions, which also manage ASLR. An alternative is to make the addresses of code in inferior layers known only at the call site (and one should never access data in an inferior layer directly, only through an accessor/mutator function). Further, you decompose the functions' locations into groups of pages which are non-contiguous. Thus the fprintf() function in libc might (should) be in a physically discontiguous location from gethostent() or any other libc function. Address space decomposition is therefore a better approach than functional decomposition based on role and program boundaries: it's much more granular.

There are at least five additional techniques that you might be expected to use, each with diminishing security utility, which you could apply to do a better job than qmail does, but you get the basic idea, and I'm not going to write an entire paper here on Slashdot.

Comment Re:I really wonder how other employers/employees.. (Score 1) 124

In the cases I have seen, "contractors" have all been W-2s. I should move to your part of the country; I hate being a W-2.

The easiest way to accomplish this is to start your own contracting agency, and then employ yourself, and any friends who are in the same boat, as 1099 workers. The bonus is that this will let you deduct most of your expenses as either "operating expense" or "capital outlay" on the part of the agency; you can run an expense account for most of the day-to-day expenses, including a car if you want; you can incorporate a retirement fund operating company for the contracting agency to reallocate income into for the principals in the contracting agency; and you still get your 1099 job on top of it.

BTW: This is how most massage studios, day spas, nail salons, hair salons, and so on operate. Everyone who does the actual work is a 1099, with the exception of the owner, and maybe an hourly receptionist, if the business is big enough to merit one for bookings.
