Some of your questions make more sense than others. In the context of deploying, Docker looks a lot like a VM deployment, except instead of having to build up a VM using Chef, with all the luggage that usually entails, the Docker image tends to be running only the service. Imagine a Linux kernel with only one process running. So there aren't a lot of edges to harden. Usually the service image gets launched in something that looks like a DMZ behind a firewall with a load balancer in front of it. In our case it's a little Wild West.
But part of the joy/ease of Docker is that you build on some particular image. It would be easy to imagine an organization that was more... organized than ours specifying a few base image flavors that developers would have to build off of; then you could harden those images all you wanted. I don't know that I'd be a big fan of that, however. Red Hat, for example, supplies a lot of images (https://access.redhat.com/search/#/container-images).
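To make that "build off a sanctioned base image" idea concrete, here's a minimal sketch. The base image name (mycorp/hardened-base) is made up for illustration, and this needs a Docker daemon and an image by that name to actually run:

```shell
# Write a tiny Dockerfile that builds on a hypothetical hardened base image.
# "mycorp/hardened-base" is an invented name; in practice the security team
# would publish and maintain these, the way Red Hat publishes theirs.
cat > Dockerfile <<'EOF'
FROM mycorp/hardened-base:1.0
COPY ./my-service /usr/local/bin/my-service
USER nobody
CMD ["/usr/local/bin/my-service"]
EOF

# Build the service image on top of the hardened base.
docker build -t my-service .
```

The appeal (and the chafe) is that developers only decide what goes on top of the FROM line; whoever owns the base image owns everything underneath it.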
Concrete (semi-bogus) example: you want to run Redis (maybe on Windows, where it's not trivial). There is a Redis Docker image: https://hub.docker.com/_/redis...
"docker run -p 6379:6379 redis"
That will fire up a container running Redis and map the port in the container to the port on the Docker host (usually localhost, these days). You can now connect to port 6379 and talk to Redis. The container itself is hard to describe: it's a full (stripped-down) Linux system running only that one process. You can fire up a shell in that container using another Docker command, and it has a bunch of the things you'd expect: some shell (maybe bash or something lighter), most of the filesystem layout you're used to. It's like a VM, if you built a VM to run only Redis.
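The "fire up a shell on that container" part looks something like this. The container name (my-redis) is arbitrary, and the commands assume the stock redis image from Docker Hub plus a running Docker daemon:

```shell
# Start redis detached in the background, with a name so we can refer to it
docker run -d --name my-redis -p 6379:6379 redis

# Open an interactive shell inside the running container
docker exec -it my-redis bash

# The redis image ships redis-cli, so from inside that shell you can
# talk to the server directly; it replies PONG if redis is up
redis-cli -h 127.0.0.1 -p 6379 ping
```

Poking around in that shell is the fastest way to convince yourself it's "a full (stripped) linux system": ls, cat, and ps are all there, but almost nothing else is running.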
I don't know if any of this helps. It really is a weird concept; I had a hard time wrapping my head around it at first. But I do love what I can do with it.