Containers are an interesting beast. Solaris has had Zones (aka containers) since 2005. In Solaris, these Zones are more akin to virtual machines, except much more efficient: all zones share a single kernel, but each has its own virtual network interfaces and storage, and can be managed independently. Now, in 2014, Docker brings the same simplicity of Solaris Zones to Linux.
Sure, Linux has had cgroups since 2008 (work on them started at Google in 2006), along with the namespaces they pair with, but Docker finally brought Linux up to speed with a simple-to-use way to create isolated containers. Only, the implementation brings with it the same flawed approach as Solaris Zones. Do we really need a full OS image running in a container? I don't think so. Docker images are based on a Linux distro (Ubuntu, CentOS, etc.). So we look at this and say, "cool, virtualization without a hypervisor intercepting every disk write and every packet on the wire." But is that really the best we can do?
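Those kernel primitives aren't specific to Docker; any Linux process can see which namespaces and cgroups it belongs to. A quick sketch (no container runtime required, just a reasonably modern kernel):

```python
# Inspect the isolation primitives the kernel already tracks for us.
import os

# Each entry in /proc/self/ns is a namespace this process belongs to
# (pid, net, mnt, uts, ipc, ...). A container simply gets fresh copies.
namespaces = sorted(os.listdir("/proc/self/ns"))
print("namespaces:", namespaces)

# /proc/self/cgroup lists the control groups that meter and limit this
# process's CPU, memory, and I/O usage.
with open("/proc/self/cgroup") as f:
    print("cgroups:", f.read().strip())
```

Docker's contribution was packaging these primitives behind a usable workflow, not inventing them.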
I think what Rocket really represents is a way to do containers right. Containers should run a single process. We shouldn't look at containers as a more efficient VM; we should see them as a way to increase security and reduce overhead. Do you really want to have to run apt-get or yum inside every container? No. Containers should provide process isolation and application management capabilities. They shouldn't include the OS and the kitchen sink of userland utilities.
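You can approximate that single-process ideal even with Docker today: a statically linked binary dropped into an otherwise empty image ships no distro, no shell, and no package manager. A sketch (the binary name `myapp` is a placeholder for something you build yourself):

```dockerfile
# Start from an empty image: no distro userland at all.
FROM scratch
# Copy in a statically linked binary built outside the container.
COPY myapp /myapp
# The container runs exactly one process.
ENTRYPOINT ["/myapp"]
```

There's nothing to apt-get here, and nothing to patch except the one binary you put in.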
This is where Docker has failed. Instead of simplifying administration and deployment, it has introduced its own idiosyncratic approach to system management. The reason we need a Docker competitor (replacement?) is that Docker has failed to live up to its hype.