You have the same issues with Docker containers, you know. You still have to set up DB connections, load data, and so on. Config files and static data can be put anywhere in both situations, so Docker really doesn't solve much in the case you supplied. All you're doing is sticking it into a Docker image rather than a directory structure.
Not entirely. In many apps, there are internal objects and external objects. When the app is installed as a native OS app, nothing enforces that distinction, so you end up with a fairly untidy collection of stuff all over the place.
When I package for Docker, I leave most of the app's characteristics internal, and the docker run specs clearly document the external characteristics. At most that's usually config, data, and log volumes and one or two ports.
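For example, a typical run spec might look something like this (the image name and host paths here are hypothetical; the point is that every external touchpoint is spelled out in one place):

    # All external characteristics are visible in the run spec:
    # config, data, and log volumes, plus one published port.
    docker run -d --name myapp \
        -v /srv/myapp/config:/etc/myapp:ro \
        -v /srv/myapp/data:/var/lib/myapp \
        -v /srv/myapp/logs:/var/log/myapp \
        -p 8080:8080 \
        myapp:latest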
The difference between a container and just trying to grab everything in a tarball is that A) invariably something important doesn't make it into the tarball, and B) the target system may have conflicts with what's in the tarball. For internal resources, all of that is tidily collected within the Docker image and hidden by container virtualization. For external resources I can, if necessary, simply remap locations, since I'm not in the habit of making containers use shared external resources. Plus, thanks to container linking, I can often inject certain characteristics from container to container and not even have to worry about their external aspects.
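A sketch of both points, with hypothetical container and image names: remapping is just a matter of changing the host side of a -v mapping, and linking lets the app reach the database by alias without the database publishing anything on the host:

    # Start the database; note that no port is published on the host.
    # (the password is a placeholder)
    docker run -d --name mydb -e POSTGRES_PASSWORD=secret postgres

    # Link the app to it: inside the app container the database is
    # reachable as "db" via /etc/hosts and injected env variables.
    # The host side of the -v mapping can be remapped per machine
    # without touching the image or the container path.
    docker run -d --name webapp \
        -v /mnt/bigdisk/myapp/data:/var/lib/myapp \
        --link mydb:db \
        -p 8080:8080 \
        myapp:latest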
It's true, there's no such thing as a Silver Bullet. And any sufficiently advanced technology can be turned into a screaming nightmare in the hands of incompetents. Done judiciously, however, I find Docker makes for a tidier system, and one on which business continuity is a lot easier to assure.