It's a real struggle between the typical programmer's desire to loosely specify what they want in quick-and-dirty code, and the operating system's desire to enforce this moving "security" target we keep talking about. And all of it is sandwiched around the limitations of current hardware. Researchers wanting to play in the hardware space have it tough as well; the hardware/software co-design needed (hacking FPGA breadboards together, getting kernels booting, figuring out why the VHDL compiler is segfaulting on valid code, user-space gcc hacks to emit new instructions, etc.) is pretty demanding.
There's been a huge amount of work on the rigorous software theory side in the last few decades, which eliminates the need for hardware to dynamically enforce things that can be statically proven away. Peter Sewell and his team recently formally specified the socket API and machine-proved it. Projects like Microsoft's Singularity, or Cornell's Typed Assembly Language, are looking at making practical, high-performance type-safe operating systems. I'd expect that, in the next few years at least, we'll see one of these approaches come to fruition before fine-grained hardware enforcement does. And as a long-term goal, isn't it more desirable to have higher-quality software than more paranoid hardware?
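To make the "statically proven away" idea concrete, here's a minimal sketch in Rust (my choice of illustration; none of the projects above use Rust) of the kind of memory-safety property a type checker can enforce at compile time, so that no hardware trap or runtime check is ever needed for it:

```rust
// Sketch: static enforcement of memory safety. The compiler's ownership
// rules reject use-after-free before the program runs, so hardware-level
// mechanisms (page faults, capability traps) never have to catch it.
fn main() {
    let buf = vec![1u8, 2, 3];

    // Borrow `buf` to sum its bytes; the borrow is tracked statically.
    let total: u32 = buf.iter().map(|&b| b as u32).sum();
    println!("sum = {}", total); // prints "sum = 6"

    drop(buf); // ownership moves into drop(); the buffer is freed here

    // The line below would be a silent use-after-free in C; here the
    // compiler rejects it outright:
    // println!("{:?}", buf); // error[E0382]: borrow of moved value: `buf`
}
```

The point isn't Rust specifically, but that the dynamic check disappears entirely: the proof obligation is discharged once, at compile time, instead of on every access at run time.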