A container is a package containing your application that also designates which version of the OS userland, libraries, file system, utilities, etc. it gets to see. To the app it looks like it's running on its own little machine, just like in a VM. But it's actually running (along with everything else) under the native Linux kernel, which uses several compartmentalization mechanisms (namespaces, cgroups, and so on) to give the app its own limited, tuned view of process IDs, file systems, network interfaces, etc.
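Those compartmentalization mechanisms are mostly kernel namespaces, and you can actually see which ones a process lives in without any container runtime at all. A minimal, Linux-only sketch (just reads `/proc`, no privileges needed):

```python
import os

# Each entry under /proc/<pid>/ns/ is a symlink identifying one namespace
# the process belongs to; two processes in the same namespace see the
# same inode number in that link. A container runtime puts your app in
# fresh namespaces, so these links differ from the host's.
for ns in ("mnt", "pid", "uts", "net", "ipc"):
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))
```

Comparing this output for a process inside a container versus one outside it is the clearest way to see that "the container" is just a different set of these namespace memberships under one shared kernel.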
This is where I get lost with containers vs. virtualization. How does a container choose what version of the OS it gets to use if it runs under a given OS? The library aspects I think I get, assuming you're able to install multiple copies of the libraries or apps in question in the OS.
Or is that the part I don't get -- it's more like an app build process, where you essentially compile the app and install its binaries and linked libraries, including system libraries, into the container? I guess this makes sense, but then I don't get how you obtain OS portability for containers without essentially throwing every bit of the OS the app might need into the container. Or do they not have OS portability, and is the container more or less locked to the OS it was built under?
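For what it's worth, the build-process model is roughly right, with one key omission: an image bundles the app plus whatever userland it needs (libraries, maybe a shell and package manager), but never a kernel. That's why it's portable across distros yet still tied to Linux. A hypothetical sketch, assuming Docker; `myapp` and the package name are made up:

```dockerfile
# The FROM line pulls in a userland -- files only. There is no kernel
# inside the image; at run time the host's kernel is always used.
FROM ubuntu:22.04

# Install the shared libraries the app links against (hypothetical package).
RUN apt-get update && apt-get install -y --no-install-recommends libsqlite3-0

# Copy in the pre-built application binary (hypothetical path).
COPY ./myapp /usr/local/bin/myapp

CMD ["/usr/local/bin/myapp"]
```

The giveaway is that `uname -r` inside such a container prints the host's kernel version, whatever distro the image claims to be.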
At this point I'm curious whether containers aren't just basically a way of obtaining what amounts to a statically linked binary, with FS jails and networking baked into the container host.
It all pretty much started here.