Tags: Docker
We formerly deployed server applications to Linux servers using manual processes. An advanced team might use shell scripts to automate deployment, and over time tools like Chef and Ansible grew to handle ever-more-complex deployment scenarios. A few years ago, Docker came onto the scene with a whole new approach: building a container housing a complete operating system image that runs your application. Having built the container, it's easy to ship it to a server or run it on your laptop. The compelling gain is having the exact same environment on your laptop as is deployed to your servers. Using the EXACT same environment streamlines your work by removing a ton of potentially destabilizing variables.
The preferred method is to build a Docker container image on your laptop, or on a build server, and upload the image to a Docker Registry. The image can then be downloaded from the Registry onto any number of systems.
What if you don't want to, or cannot, use a Registry? You could deploy the source code to the server and build the container image there, but that's an unwise move; it's better to ship the already-built container image to the server. Turns out that is easy to do.
The typical workflow is to build the image, as described above, on your laptop or a build server. The image is then copied to a Docker Registry such as Docker Hub, or a self-hosted Registry. The outline is something like:
$ docker build -t group-name/image-name .
$ docker push group-name/image-name
The latter command pushes the container image to a registry. By default it pushes to Docker Hub, but it can also push to a self-hosted registry.
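Pushing to a self-hosted registry works the same way, except the image name carries the registry's hostname as a prefix. A minimal sketch, where registry.example.com stands in for your registry's hostname:
$ docker tag group-name/image-name registry.example.com/group-name/image-name
$ docker push registry.example.com/group-name/image-name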
But there are many reasons to not use Docker Hub, and it might be too burdensome to self-host a Docker Registry.
The Docker CLI tool has a docker image save command that lets you export a container image to a tarball. That tarball can be copied elsewhere, such as to your deployment server, and then imported into the Docker environment on that server.
The first step is:
$ docker build -t group-name/image-name .
$ docker image save -o image-name.tar group-name/image-name
The second command exports the container image to a tarball. You're free to compress the tarball if you prefer, of course.
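For example, with gzip; these are ordinary shell commands, nothing Docker-specific:
$ gzip image-name.tar          # produces image-name.tar.gz for a smaller upload
$ gunzip image-name.tar.gz     # run this on the server to restore the tarball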
Then you upload the tarball to the server, and execute:
$ docker image import image-name.tar image-name
This imports the container image so it can be used on the server. Once imported you can run the container image as normal.
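As a sanity check, you can list the image on the server to confirm it arrived; a quick sketch:
$ docker image ls image-name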
Um... a CORRECTION on the incorrect advice above
I tried replicating the above advice a day later and got this problem upon running the container:
$ docker run -d --name image-name --expose 8080 -p 8080:8080 --restart always -t group-name/image-name
docker: Error response from daemon: No command specified.
Some DuckDuckGo searching turned up a GitHub issue thread saying that docker export and docker import lose "metadata", including the command specified in the Dockerfile. The Dockerfile I used in this case does use an ENTRYPOINT, and therefore the image had a command. But since docker import strips that metadata, the imported image indeed did not have a command.
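You can verify this yourself with docker inspect; a quick sketch, inspecting the image as imported above:
$ docker image inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' image-name
On an image brought in with docker import, both values come back empty, while an image brought in with docker load (below) keeps them.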
The solution is to use docker load on the target machine, like so:
$ docker load < image-name.tar
decd896c7365: Loading layer [==================================================>] 41.37MB/41.37MB
Loaded image: group-name/image-name:latest
And that results in:
$ docker run -d --name image-name --expose 8080 -p 8080:8080 --restart always -t group-name/image-name
929c279e95fe260b7542ae0ddc66326b1d802c1e37651c1894d5954cdcb03447
$ docker ps -a
CONTAINER ID   IMAGE                   COMMAND                  CREATED         STATUS         PORTS                    NAMES
929c279e95fe   group-name/image-name   "sh -c 'java $JAVA..."   4 seconds ago   Up 3 seconds   0.0.0.0:8080->8080/tcp   image-name
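One last note: since docker image save writes to stdout by default and docker load reads from stdin, you can skip the intermediate tarball entirely and pipe the image straight over SSH. A sketch, with user@server standing in for your deployment server:
$ docker image save group-name/image-name | ssh user@server docker load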