Disclaimer: I'm not super experienced in this stuff. I am open to correction if I have any of these points wrong.
Do we really need a full OS image running in a container?
I think we probably do.
One of the key selling points of Docker is that the container is load-and-go. Do you have some wacky old software that has a hard dependency on particular versions of some libraries? You can build a container with just the right libraries and get your software to work... and, after you do that work, the container is just another container. It may have been a pain for you to get it working, but then anyone can run it on any Docker host as easily as any other container. This seems kind of powerful to me.
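Just as a sketch of what I mean (the library name, version, and app binary here are all made up; the point is only that the Dockerfile pins whatever the old software needs):

    # Dockerfile for the legacy app. "libfoo2" and its version are
    # placeholders for whatever old library the software depends on.
    FROM debian:wheezy
    RUN apt-get update && \
        apt-get install -y libfoo2=2.0.1-3 && \
        rm -rf /var/lib/apt/lists/*
    COPY legacy-app /usr/local/bin/legacy-app
    CMD ["/usr/local/bin/legacy-app"]

Once that image builds, anyone can docker run it on any Docker host and the app sees exactly those library versions, no matter what the host itself has installed.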
Do you need to see how your software runs on CentOS and Debian? You can set up a container for each, and run the tests on a single host system.
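Something like this is what I have in mind (the build dependencies and the "make test" command are just stand-ins for whatever your project actually uses):

    # centos/Dockerfile
    FROM centos:7
    RUN yum install -y make gcc
    COPY . /src
    WORKDIR /src
    RUN make test

    # debian/Dockerfile
    FROM debian:7
    RUN apt-get update && apt-get install -y make gcc
    COPY . /src
    WORKDIR /src
    RUN make test

Build both images on the same host and the test suite runs against each distro's toolchain and libraries; if the tests fail, the image build fails.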
And if you want maximum security, it's kind of neat that each Docker container can use just its own private file system, and containers can't affect each other's running state.
So, if you are content with running an up-to-date system, always running the latest versions of everything, and upgrading everything together, you could build a security-isolation system that is lighter weight than Docker, but you would be trading away some of Docker's simplicity and flexibility. You might think that's a good trade, but I don't think you can reasonably claim it's better in every way.
Containers should run a single process. We shouldn't look at containers as a more efficient VM.
As I understand it, it is considered best practice in Docker to run a single process per container. Some people do use Docker as a sort of lightweight VM, but not everyone likes that approach.
Are you arguing that Docker is flawed because it doesn't enforce one process per container? Because I'm not seeing it. I would rather have the flexibility; if I want to use Docker as a lightweight VM, the option is there, and I don't see that as a bad thing.
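For what it's worth, the single-process style mostly just means the image's CMD runs one foreground process and nothing else. A minimal sketch, using nginx as the example:

    FROM debian:wheezy
    RUN apt-get update && \
        apt-get install -y nginx && \
        rm -rf /var/lib/apt/lists/*
    # Run nginx in the foreground so it is the only process in the container.
    CMD ["nginx", "-g", "daemon off;"]

Nothing stops you from starting several processes from that CMD instead; Docker leaves the choice to you.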
Do you really want to have to run apt-get or yum inside every container?
Please correct me if I'm wrong, but my understanding is that you don't have to run a package manager inside every container. You would have a "base system" image, and you would update that image from time to time; then you build your specific containers as layers on top of the base image.
I believe a container could simply be a script that starts up a service, and config files that configure the service, with the actual packages for the service in the "base system" image. I'm not sure if that is standard practice or what.
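So a service image might be little more than this (the base image name, config file, and start script are all hypothetical; the packages themselves live in the base image):

    # "mycompany/base" stands in for a base image you maintain and update.
    FROM mycompany/base
    # This layer only adds configuration and a start script.
    # (Assumes the script is executable and has a #! line.)
    COPY myservice.conf /etc/myservice/myservice.conf
    COPY start-myservice.sh /usr/local/bin/start-myservice.sh
    CMD ["/usr/local/bin/start-myservice.sh"]

When the base image gets its security updates, you rebuild the thin service images on top of it instead of running apt-get or yum inside each one.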
I'm hoping that with Docker I could make micro-servers, like a Docker container with just a web server in it, not even a Bash shell. If someone cracks my server, I want him in a desert, with no tools to help him escalate his privileges. I'm not sure how feasible that is now, but I think Docker is at least headed in that direction.
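The closest thing I know of today is building an image from scratch, with nothing in it but a statically linked server binary (the binary name is a placeholder, and this only works if the server has no shell or dynamic-library dependencies):

    # Start from an empty image: no shell, no package manager, no coreutils.
    FROM scratch
    COPY mywebserver /mywebserver
    EXPOSE 80
    CMD ["/mywebserver"]

An attacker who compromises that server process finds no /bin/sh, no curl, and no package manager: nothing to pivot with except the binary itself.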
I'm not opposed to this new Rocket thing, but I'm still not clear on its actual advantages over Docker.