Date: June 1, 2020
In a Docker Swarm, docker commands can be run on any swarm manager node to affect any node in the swarm. That's cool, because it doesn't matter which swarm node your terminal session is connected to when making changes to the swarm. But what if you could remotely control a single Docker system, or a Docker Swarm, from the comfort of your laptop?

With the docker context command you can not only remotely control Docker hosts, you can easily switch between hosts. If you have one swarm you may have other swarms as well, depending on the work you're doing. Ergo, you may need to remotely control any of several swarms.
In this post we will examine what it takes to set up remote control of a Docker system.
The Docker documentation makes this sound scary, between exposing the Docker API on a public port and setting up your own TLS certificates. In the Docker daemon documentation we learn that remote control involves configuring dockerd to export the Docker API on a TCP socket (it is normally exported on a Unix domain socket), and that this is insecure unless you work out how to deploy TLS certificates and some other stuff that makes one's eyes glaze over.
But, thanks to a couple of not-well-documented features built into Docker, it is actually trivial to remotely control Docker. Starting with Docker 18.09, one can access a Docker system using an SSH URL.
Docker supports three ways to remotely control a Docker instance through an SSH URL: 1) set the DOCKER_HOST environment variable to the URL, 2) pass the URL to the -H option, 3) use docker context to record the URL. The docker context approach is the cleanest, but all three modes work very similarly.
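As a sketch, assuming a hypothetical remote host reachable as ubuntu@192.168.64.14 (substitute your own user and address; the context name myhost is arbitrary), the three modes look like this:

```shell
# 1) Environment variable: applies to every following docker command in this shell
export DOCKER_HOST=ssh://ubuntu@192.168.64.14
docker ps

# 2) The -H option: applies to a single command
docker -H ssh://ubuntu@192.168.64.14 ps

# 3) A named context: record the URL once, then select it by name
docker context create myhost --docker host=ssh://ubuntu@192.168.64.14
docker context use myhost
docker ps
```

We'll walk through the first and third modes in detail below.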
Setup - Docker instances in a virtual machine emulator pretending to be a remote server
It would take too long to show you how to set up Docker on an AWS EC2 instance. It's much simpler to use a local virtual machine in Multipass. Multipass is an excellent virtual machine manager that lets you quickly set up lightweight Ubuntu instances. There are other choices for launching a Linux instance on your laptop, but having used several of them I'm very impressed by Multipass.
In an earlier post I showed how to set up a Docker Swarm on your laptop using Multipass: Creating a Docker Swarm using Multipass and Ubuntu 20.04 on your laptop
The setup instructions here are adapted from that post. First go to https://multipass.run to download the Multipass installer. It runs on Windows, Linux, and macOS.
$ multipass launch --name swarm1 focal
Later we'll be creating another swarm node, but let's start with just this one for right now. This command creates an Ubuntu 20.04 instance on your laptop, naming it swarm1.
Once the instance is running, set up Docker on
swarm1 using the following commands:
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get -y install apt-transport-https \
    ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo usermod -aG docker ubuntu
sudo systemctl enable docker
The first section enables support for downloading packages from APT repositories that are accessed via HTTPS. The middle section sets up the Docker package repository and installs Docker. Then the last few lines put the ubuntu user into the docker group, and enable the Docker service as a persistent background service.
That setup procedure is adapted from the official Docker instructions for installation on Linux. It gets you the latest version of Docker-CE.
To run these commands start with:
multipass shell swarm1
Then you run the above commands, ending with testing the Docker installation by typing:
docker run hello-world
This gets you a running Docker instance that is functionally identical to a remote Docker instance that could be on another continent.
Enabling passwordless SSH login to the Multipass instance
For remote Docker control to work we must have password-less SSH access to the machine that is running Docker. Out of the box Multipass doesn't give your own SSH key access to its instances, but the standard method for enabling password-less SSH works fine with Multipass.
The summary is:
- Get your SSH public key, which is the contents of your public key file.
- Add your SSH key to ~/.ssh/authorized_keys on the Multipass instance.
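As a minimal sketch of those two steps, assuming your public key lives at ~/.ssh/id_rsa.pub (a hypothetical location; substitute whichever key file you actually use) and that multipass exec forwards standard input:

```shell
# Append the laptop's public key to the ubuntu user's
# ~/.ssh/authorized_keys inside the swarm1 instance
multipass exec swarm1 -- bash -c \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys' \
    < ~/.ssh/id_rsa.pub
```
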
Once you do those steps you can do this:
$ multipass list
Name                    State             IPv4             Image
swarm1                  Running           192.168.64.14    Ubuntu 20.04 LTS
$ ssh ubuntu@192.168.64.14
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
...
It obviously doesn't matter whether the Ubuntu/Linux instance is inside Multipass on your laptop or on a server thousands of miles away. The steps are the same either way: you simply set up password-less SSH access to a Linux host running Docker.
Enabling remote access using DOCKER_HOST
One way to run Docker commands on the remote host is:
$ ssh ubuntu@192.168.64.14 docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
This will always work, but you have to remember to add the SSH command to the command line.
To enable remote access using the DOCKER_HOST environment variable, start with:
$ export DOCKER_HOST=ssh://ubuntu@192.168.64.14
Substitute the IP address or domain name of your choice here, of course. All of the Docker commands recognize this environment variable, and know to connect to the Docker service on the named host using an SSH connection.
$ docker run -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
afb6ec6fdc1c: Pull complete
b90c53a0b692: Pull complete
11fa52a0fdc0: Pull complete
Digest: sha256:6fff55753e3b34e36e24e37039ee9eae1fe38a6420d8ae16ef37c92d1eb26699
Status: Downloaded newer image for nginx:latest
237844d841c77fd2a161fcf2f3367d94a956e73969afcd0ff40f468029b739a1
$ docker ps
CONTAINER ID    IMAGE    COMMAND                  CREATED          STATUS         PORTS    NAMES
237844d841c7    nginx    "nginx -g 'daemon of…"   12 seconds ago   Up 9 seconds   80/tcp   sharp_tu
This looks just like running Docker commands against the local host, but it's on a remote host. To prove that it's a remote host do the following:
$ unset DOCKER_HOST
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
This removes the environment variable, meaning that Docker commands will now access the Docker instance on the local host. No containers are running on the laptop. Then let's re-enable the remote access:
$ export DOCKER_HOST=ssh://ubuntu@192.168.64.14
$ docker ps
CONTAINER ID    IMAGE    COMMAND                  CREATED         STATUS         PORTS    NAMES
237844d841c7    nginx    "nginx -g 'daemon of…"   5 minutes ago   Up 5 minutes   80/tcp   sharp_tu
And once again we're accessing the remote host.
Notice that this approach does not require exposing the Docker API port on the Internet or other complex setup. It simply requires a password-less SSH connection to the server.
Enabling remote access using docker context
It's cool that we can so easily have remote access to a Docker instance. It's very powerful, and as we'll see later we can even access a whole Docker Swarm this way. But first we need to learn about docker context.

The docker context command lets you record configurations for several remote Docker instances, then switch between them simply by running a docker context command. You could accomplish the same thing by setting different DOCKER_HOST values for different remote hosts, but with docker context the access URL strings are remembered for you, making it far more convenient:
$ docker context create swarm1 --docker host=ssh://ubuntu@192.168.64.14
This sets up a new context.
$ docker context ls
NAME        DESCRIPTION                               DOCKER ENDPOINT                        KUBERNETES ENDPOINT   ORCHESTRATOR
default *   Current DOCKER_HOST based configuration   ssh://ubuntu@192.168.64.14                                   swarm
ec2                                                   ssh://firstname.lastname@example.org
swarm1                                                ssh://ubuntu@192.168.64.14
Warning: DOCKER_HOST environment variable overrides the active context. To use a context, either set the global --context flag, or unset DOCKER_HOST environment variable.
It's possible to have several contexts available and to switch between them as needed.
Notice that it warns us about DOCKER_HOST. That variable was set earlier and hadn't been unset. If this variable is set, Docker obeys the variable rather than the currently selected Docker context.
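As the warning hints, there is also a global --context flag that runs a single command against a named context without switching to it. A quick sketch, assuming a context named swarm1 already exists:

```shell
# One-off command against the swarm1 context,
# leaving the active context unchanged
docker --context swarm1 ps
```
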
$ unset DOCKER_HOST
$ docker context ls
NAME        DESCRIPTION                               DOCKER ENDPOINT                        KUBERNETES ENDPOINT   ORCHESTRATOR
default     Current DOCKER_HOST based configuration   unix:///var/run/docker.sock                                  swarm
ec2 *                                                 ssh://firstname.lastname@example.org
swarm1                                                ssh://ubuntu@192.168.64.14
Notice how the default context has now switched to a unix:// URL, whereas before it showed the ssh:// URL that had been set in DOCKER_HOST.
$ docker context use swarm1
swarm1
Current context is now "swarm1"
$ docker ps
CONTAINER ID    IMAGE    COMMAND                  CREATED       STATUS       PORTS    NAMES
237844d841c7    nginx    "nginx -g 'daemon of…"   2 hours ago   Up 2 hours   80/tcp   sharp_tu
The docker context use command lets you switch between contexts. Here we've switched to the swarm1 context, and we see the Nginx container we started earlier. Previously the selected context was my ec2 context, where I'm currently working on a Docker deployment to a swarm of EC2 instances.
$ docker context use default
default
Current context is now "default"
$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
The default context is, as implied by the listing above, the Docker system installed on the local machine. Remember that no containers have been deployed locally.
Add a second node to create a Docker Swarm
To demonstrate using this technique with a Docker Swarm, let's add a second Multipass node on the laptop. As I write this, I am actively debugging an application deployment to a Docker Swarm running across some EC2 instances on AWS. The technique I'm using there is exactly the same as here.
To prepare a Docker Swarm, first run this command on the existing Docker instance:
$ ssh ubuntu@192.168.64.14 docker swarm init
Swarm initialized: current node (imisueklbrwkyqiw44a8l6u1j) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1l161hnrjbmzg1r8a46e34dt21sl5n4357qrib29csi0jgi823-cos0wh6k5rsq7bi4bskgo5y9r 192.168.64.14:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
The docker swarm init command, as its name implies, initializes Swarm Mode on the Docker instance. Since it is the first node of its swarm, the node becomes the Swarm Leader and is automatically a manager node. We don't need to worry about what all that means, because we're not going to do much with Swarm Mode in this post.
The docker swarm join command is executed on another Docker host, and it causes that host to enable Swarm Mode and join the cluster indicated by the other arguments.
You don't have to remember this command, because it's easy to find out what it is:
$ ssh ubuntu@192.168.64.14 docker swarm join-token
"docker swarm join-token" requires exactly 1 argument.
See 'docker swarm join-token --help'.

Usage:  docker swarm join-token [OPTIONS] (worker|manager)

Manage join tokens
$ ssh ubuntu@192.168.64.14 docker swarm join-token manager
To add a manager to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1l161hnrjbmzg1r8a46e34dt21sl5n4357qrib29csi0jgi823-3g80csolwaioya580hjanwfsf 192.168.64.14:2377
The docker swarm join-token manager command prints the join command that causes a node to join as a manager node. To have a node join as a worker node, use worker instead.
I just noticed I've been running these commands via ssh. Technically that's not necessary, since the current context is set to swarm1.
At this moment we haven't set up the swarm2 node. To do so, simply redo the above setup commands, substituting swarm2 for swarm1.
At the end of setting up the
swarm2 node, run this:
$ ssh firstname.lastname@example.org docker swarm join --token SWMTKN-1-1l161hnrjbmzg1r8a46e34dt21sl5n4357qrib29csi0jgi823-3g80csolwaioya580hjanwfsf 192.168.64.14:2377
This node joined a swarm as a manager.
As it says, the node has joined the swarm as a manager.
Verifying remote access to the Swarm cluster using DOCKER_HOST and docker context
Supposedly we now have a swarm with two nodes. Let's verify.
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
imisueklbrwkyqiw44a8l6u1j *   swarm1     Ready    Active         Leader           19.03.10
x61gjx018d8p2dcmez9fu0a6t     swarm2     Ready    Active         Reachable        19.03.11
Yes, it does. And notice that we didn't ssh into one of the servers this time. It worked by using the current Docker context, which is set to swarm1.
We can now add
swarm2 as another context:
$ docker context create swarm2 --docker host=ssh://email@example.com
swarm2
Successfully created context "swarm2"
$ docker context use swarm2
swarm2
Current context is now "swarm2"
Running docker node ls, we get this:
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
imisueklbrwkyqiw44a8l6u1j     swarm1     Ready    Active         Leader           19.03.10
x61gjx018d8p2dcmez9fu0a6t *   swarm2     Ready    Active         Reachable        19.03.11
Notice that the output is all the same, but the * is now on swarm2 rather than swarm1. If we switch back (docker context use swarm1), the * switches to the swarm1 line. Hence the * indicates which node in the swarm is executing the command.
Now, let's put a workload on the swarm.
$ docker service create --name httpd --replicas 2 -p 90:80 httpd
oa08q5awelrssaxmpp7wbqtqq
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
This creates two instances of Apache HTTPD, with port 90 on the swarm mapped to port 80 in the containers.
$ docker service ls
ID             NAME    MODE         REPLICAS   IMAGE          PORTS
oa08q5awelrs   httpd   replicated   2/2        httpd:latest   *:90->80/tcp
$ docker service ps httpd
ID             NAME      IMAGE          NODE     DESIRED STATE   CURRENT STATE           ERROR   PORTS
jy5i2jk6gft4   httpd.1   httpd:latest   swarm2   Running         Running 3 minutes ago
uvrt3a0z1iho   httpd.2   httpd:latest   swarm1   Running         Running 2 minutes ago
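As a quick check, assuming swarm mode's routing mesh publishes port 90 on every node as usual, we can fetch the Apache default page through the swarm1 address shown by multipass list:

```shell
# Either node's address should answer on the published port
curl http://192.168.64.14:90/
# should return Apache's default "It works!" page
```
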
So, yeah, we have a Docker Swarm running across Docker on two Multipass instances, we are accessing that swarm using the Docker context feature, and we have shown that with docker context use we can switch back and forth between the two.
Swarm mode is interesting because it doesn't matter which manager node is used to execute a command. The Swarm mode code transmits the command across the swarm to the appropriate nodes.