Tags: Docker, Docker Swarm
To remotely manage a Docker instance, we can SSH into that host and run Docker commands there. But a not-well-documented Docker feature lets us remotely access and manage Docker instances from the comfort of our laptop, via SSH. It's fast, easy, and very powerful.
We can easily host a Docker instance on a virtual private server (VPS) to deploy our application. To manage containers on that Docker instance, we might SSH into the server and run Docker commands on the host machine. But that's a little clumsy, since we'd be constantly copying files back and forth from our laptop. And if we had several VPSs configured as a Docker Swarm, we'd have several Docker instances to maintain, SSH'ing into each as needed.
It's possible to instead remotely control Docker instances, running Docker commands on our laptop that manage container instances on a remote system. The Docker documentation suggests this requires exposing the Docker API on a public port, setting up TLS certificates, and so forth, at some unknown risk of attackers breaking into our Docker instance. The process is just convoluted enough to look like a big security risk, and it's understandable that most shy away from using that feature.
But a related feature, the Docker Context, lets us safely implement remote control over a simple SSH connection. The method is not well documented, but it is trivial to implement. This tutorial covers remotely controlling a Docker host, from our laptop, using a Docker Context over SSH.
Starting in Docker 18.09 it became possible to create a Docker Context with an SSH URL. Using this, the docker command on your laptop can interact with the Docker API of a remote Docker instance, over SSH, without opening a public Docker TCP port. With this we can run any Docker command on the remote host from the comfort of our laptop.
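Since SSH URL support arrived in Docker 18.09, it's worth verifying the client version before proceeding. A quick check (the version shown here is just illustrative):

$ docker version --format '{{.Client.Version}}'
19.03.8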
Docker Contexts let us deal with remote Docker services. The SSH Context lets us interact with a remote Docker Engine or a remote Docker Swarm. Recently the Context feature has been extended to support deploying services to AWS ECS or Azure ACI. Conceptually, each Context deals with a particular type of remote Docker infrastructure.
With the docker context command you can easily switch between hosts. If you have one Docker host to manage, you probably have others. You can define as many Docker Contexts as are needed, and use the docker context command to switch between them. The docker command also takes a --context flag to direct a particular command at a specific context.
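For example, once a context named swarm1 exists (we create it later in this tutorial), a one-off command can be aimed at that host without switching contexts:

$ docker --context swarm1 ps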
Setup - Docker instances in a virtual machine emulator pretending to be a remote server
It would take too long to show you how to set up Docker on a VPS rented from a hosting provider, such as an AWS EC2 instance. Instead, it's much simpler to use a local virtual machine in Multipass. Multipass is an excellent virtual machine emulator that lets you quickly set up light-weight Ubuntu instances. It lets us quickly build what is effectively a virtual server, on our laptop, so we can experiment with Docker SSH Contexts without incurring any cost from a hosting provider.
If you don't want to install Multipass, it's easy to use a VPS provider instead. Digital Ocean is an easy-to-use service on which you can follow almost exactly the same instructions. Through this signup link (sponsored) you can get a discount, letting you run through this tutorial for free.
In an earlier post I showed how to set up a Docker Swarm on your laptop using Multipass: Creating a Docker Swarm using Multipass and Ubuntu 20.04 on your laptop
The setup instructions here are adapted from that post. One first goes to https://multipass.run to download the Multipass installer. It runs on Windows, Linux, and macOS.
$ multipass launch --name swarm1 focal
This is how we launch a virtual Ubuntu instance using Multipass. We've given it the name swarm1, on the off-chance we might want to install multiple instances. The command requires just a minute or so to provision the new machine - compare that to the hour or so it takes to install Ubuntu on a virtual machine emulator like VirtualBox. The name focal refers to Ubuntu 20.04; running multipass find lists the available machine images.
The first step is installing Docker on the machine. Run multipass shell swarm1 to log in to the instance, then run the following commands:
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get -y install apt-transport-https \
ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo usermod -aG docker ubuntu
sudo systemctl enable docker
That setup procedure is adapted from the official Docker instructions for installation on Linux. It gets you the latest version of Docker-CE.
The first section enables support for downloading packages from APT repositories accessed over HTTPS. The middle section sets up Docker's package repository and installs the Docker packages. Before running the groupadd and usermod commands, let's talk about why they're required.
Running Docker commands requires the docker command to be sufficiently privileged to access the Docker API socket. Obviously the root user can access it. But in Multipass, the multipass shell command logs us in as the ubuntu user. Therefore that user must be added to the docker group, which is what the groupadd and usermod commands accomplish.
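One caveat: group membership is only evaluated at login, so after running usermod you must exit the Multipass shell and log in again. You can then verify that docker works without sudo:

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES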
If you provision an AWS EC2 Ubuntu instance, the same policy applies: by default you log in as ubuntu, and need to run the groupadd and usermod commands shown above. On Digital Ocean, by contrast, you log in as root, so those commands are not required.
Of course Docker is not limited to Ubuntu instances. We're showing Ubuntu because it is what Multipass supports. Installing Docker on a different Linux distribution requires running other commands, which you can find in the official Docker documentation.
To verify that Docker is correctly installed, run:
$ docker run hello-world
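If the installation is healthy, the container prints a greeting that begins like this:

Hello from Docker!
This message shows that your installation appears to be working correctly.
...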
Enabling passwordless SSH login to the Multipass instance
For remote Docker control to work we must have password-less SSH access to the machine running Docker. Out of the box Multipass doesn't set up password-less SSH for us, but the standard method for doing so works fine with Multipass.
See: How to enable passwordless SSH login on Ubuntu inside Multipass
The summary is:
- Get your public SSH key from the contents of your ~/.ssh/id_rsa.pub file.
- Add that key to ~/.ssh/authorized_keys on the Multipass instance.
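As a sketch of the second step, assuming multipass exec passes standard input through to the instance, the key can be appended in one command:

$ cat ~/.ssh/id_rsa.pub | \
    multipass exec swarm1 -- bash -c 'cat >> ~/.ssh/authorized_keys'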
Once you do those steps you can do this:
$ multipass list
Name State IPv4 Image
swarm1 Running 192.168.64.14 Ubuntu 20.04 LTS
$ ssh ubuntu@192.168.64.14
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
...
It obviously doesn't matter whether the Ubuntu/Linux instance is inside Multipass on your laptop or on a server thousands of miles away. The steps are the same: you're simply setting up password-less SSH access to a Linux host running Docker.
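One optional convenience: an entry in ~/.ssh/config on your laptop gives the host a short alias, usable in the SSH URLs that follow (the alias and key path here are examples):

Host swarm1
    HostName 192.168.64.14
    User ubuntu
    IdentityFile ~/.ssh/id_rsa

With that in place, ssh swarm1 logs straight in, and ssh://swarm1 can be used anywhere an SSH URL is accepted below.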
Enabling remote access using DOCKER_HOST
One way to run Docker commands on the remote host is:
$ ssh ubuntu@192.168.64.14 docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This always works, but you have to remember to prefix every Docker command with the ssh invocation.
What we're looking to do instead is attach the docker command on our laptop to a Docker instance on another machine. (We're simply pretending the Multipass instance is a remote machine.) There are two ways to directly tell docker to connect to a remote Docker instance:
$ docker --host=URL ...
#.... OR by using an environment variable
$ export DOCKER_HOST=URL
$ docker ...
The official Docker documentation shows using it like so:
$ docker -H tcp://N.N.N.N:2375 ps
#.... OR by using an environment variable
$ export DOCKER_HOST="tcp://N.N.N.N:2375"
$ docker ps
Substitute for N.N.N.N the IP address of your Docker host, such as 192.168.64.14. However, this requires that you publicly expose the Docker port. How do you prevent miscreants from accessing that port? To do this safely it's recommended to deploy TLS certificates, but that adds extra complication, and there is a much easier way to achieve the same goal.
The URL can instead be ssh://ubuntu@192.168.64.14. Because it's a password-less SSH connection, access is authenticated by SSH keys, and the traffic is automatically encrypted.
Specifically, run this:
$ export DOCKER_HOST=ssh://ubuntu@192.168.64.14
$ docker run -d nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
afb6ec6fdc1c: Pull complete
b90c53a0b692: Pull complete
11fa52a0fdc0: Pull complete
Digest: sha256:6fff55753e3b34e36e24e37039ee9eae1fe38a6420d8ae16ef37c92d1eb26699
Status: Downloaded newer image for nginx:latest
237844d841c77fd2a161fcf2f3367d94a956e73969afcd0ff40f468029b739a1
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
237844d841c7 nginx "nginx -g 'daemon of…" 12 seconds ago Up 9 seconds 80/tcp sharp_tu
This looks just like running Docker commands against the local host, but it's on a remote host. To prove that it's a remote host do the following:
$ unset DOCKER_HOST
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
This removes the environment variable, meaning the docker command now accesses the Docker instance on the local host, where no containers are running. Then let's re-enable remote access:
$ export DOCKER_HOST=ssh://ubuntu@192.168.64.14
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
237844d841c7 nginx "nginx -g 'daemon of…" 5 minutes ago Up 5 minutes 80/tcp sharp_tu
And once again we're accessing the remote host.
Notice that this approach does not require exposing the Docker API port on the Internet, nor any other complex setup. It simply requires a password-less SSH connection to the server.
Enabling remote access using docker context
It's cool that we can so easily have remote access to a Docker instance. It's very powerful, and as we'll see later we can even access a whole Docker Swarm this way. But there is an even simpler way to use this feature than maintaining environment variables.
First, make sure you don't have a DOCKER_HOST variable set:
$ unset DOCKER_HOST
As we saw earlier, the docker command will then revert to controlling Docker on your laptop.
The docker context command lets you record configurations for several different remote Docker instances, then switch between them by running a single docker context command. You could accomplish the same thing by setting different DOCKER_HOST values for different remote hosts, but with docker context the access URLs are remembered for you, which is far more convenient:
$ docker context create swarm1 --docker host=ssh://ubuntu@192.168.64.14
This sets up a new context.
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
ec2 ssh://ubuntu@34.219.69.219
swarm1 ssh://ubuntu@192.168.64.14
It's possible to have several contexts available and to switch between them as needed. The default context always exists, and refers to the Docker installed on your laptop. This listing shows one Docker instance deployed on an AWS EC2 instance, and the swarm1 instance we set up earlier.
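Contexts remain editable after creation. If, say, an instance's IP address changes, the endpoint can be updated in place, and stale contexts can be removed (these are standard docker context subcommands; the new address is hypothetical):

$ docker context update swarm1 --docker host=ssh://ubuntu@192.168.64.99
$ docker context rm ec2

Switching to the swarm1 context then looks like this: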
$ docker context use swarm1
swarm1
Current context is now "swarm1"
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
237844d841c7 nginx "nginx -g 'daemon of…" 2 hours ago Up 2 hours 80/tcp sharp_tu
The docker context use command lets you switch between contexts. Here we've switched to the swarm1 context to see the Nginx container we started earlier. Previously the selected context was my ec2 context, where I'm currently working on a Docker deployment to a swarm of EC2 instances.
$ docker context use default
default
Current context is now "default"
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The default context is, as implied by the listing above, the Docker system installed on the local machine. Remember that no Docker containers have been deployed locally.
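To double-check what any context points at, docker context inspect prints the stored configuration; a format string like the following should extract just the endpoint (output shown for the default context):

$ docker context inspect default --format '{{.Endpoints.docker.Host}}'
unix:///var/run/docker.sock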
With this setup you can now run any docker or docker-compose command against the remote host. Either use the --context option to specify which host a command targets, or run docker context use to switch between hosts.
For example, you can edit a Compose file on your laptop and use docker-compose up to deploy updates to the remote host just as easily as to the local Docker on your laptop.
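As a sketch, suppose a minimal Compose file like this is created on the laptop (the file contents are illustrative):

$ cat > docker-compose.yml <<'EOF'
version: '3'
services:
  web:
    image: nginx
    ports:
      - '8080:80'
EOF
$ docker context use swarm1
$ docker-compose up -d

Assuming a docker-compose version recent enough to honor contexts, the service comes up on the remote host; with older versions, export DOCKER_HOST=ssh://ubuntu@192.168.64.14 accomplishes the same thing.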
Adding a second node to create a Docker Swarm
To demonstrate using this technique with a Docker Swarm, let's add a second Multipass node on the laptop.
To prepare a Docker Swarm, first run this command on the existing Docker instance:
$ ssh ubuntu@192.168.64.14 docker swarm init
Swarm initialized: current node (imisueklbrwkyqiw44a8l6u1j) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1l161hnrjbmzg1r8a46e34dt21sl5n4357qrib29csi0jgi823-cos0wh6k5rsq7bi4bskgo5y9r 192.168.64.14:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
The docker swarm init command, as its name implies, initializes Swarm Mode on the Docker instance. Since this is the first node of its swarm, the node becomes the Swarm leader and is automatically a manager node. We don't need to worry about what all that means, because we're not going to do much with Swarm Mode in this post.
The docker swarm join command is executed on another Docker host, and it causes that host to enable Swarm Mode and join the cluster indicated by the token and address.
You don't have to remember this command, because it's easy to find out what it is:
$ ssh ubuntu@192.168.64.14 docker swarm join-token manager
...
This prints out the command to join the swarm as a manager node. To join as a worker node, use worker instead of manager.
To create a new Docker host, follow the same instructions as earlier: create a new Multipass instance, substituting the name swarm2 for swarm1, install Docker, and set up password-less SSH. Then run the swarm join command on each new node.
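Condensed, the second node's setup looks roughly like this (the Docker installation and SSH key steps are elided, and the token is the one printed by swarm init or swarm join-token):

$ multipass launch --name swarm2 focal
$ multipass shell swarm2
# ... inside swarm2, install Docker using the same commands as before ...
$ ssh ubuntu@192.168.64.15 \
    docker swarm join --token SWMTKN-1-... 192.168.64.14:2377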
Verifying remote access to Swarm cluster using docker context
Supposedly we now have a swarm with two nodes. Let's verify:
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
imisueklbrwkyqiw44a8l6u1j * swarm1 Ready Active Leader 19.03.10
x61gjx018d8p2dcmez9fu0a6t swarm2 Ready Active Reachable 19.03.11
Yes, we do. And notice that we didn't ssh into one of the servers to find out; this worked through the current Docker Context, which is set to swarm1.
We can now add swarm2 as another context:
$ docker context create swarm2 --docker host=ssh://ubuntu@192.168.64.15
swarm2
Successfully created context "swarm2"
$ docker context use swarm2
swarm2
Current context is now "swarm2"
Rerunning docker node ls we get this:
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
imisueklbrwkyqiw44a8l6u1j swarm1 Ready Active Leader 19.03.10
x61gjx018d8p2dcmez9fu0a6t * swarm2 Ready Active Reachable 19.03.11
Notice that the output is all the same, but the * is now on swarm2 rather than swarm1. If we switch back to swarm1 (docker context use swarm1), the * switches to the swarm1 line. Hence the * indicates which node in the swarm the command is running on.
Now, let's put a workload on the swarm.
$ docker service create --name httpd --replicas 2 -p 90:80 httpd
oa08q5awelrssaxmpp7wbqtqq
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged
This launches two instances of Apache HTTPD, with port 90 on the hosts mapped to port 80 in the containers.
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
oa08q5awelrs httpd replicated 2/2 httpd:latest *:90->80/tcp
$ docker service ps httpd
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
jy5i2jk6gft4 httpd.1 httpd:latest swarm2 Running Running 3 minutes ago
uvrt3a0z1iho httpd.2 httpd:latest swarm1 Running Running 2 minutes ago
The Swarm split the two httpd containers between the hosts in the Swarm. Swarm is a powerful Docker container orchestration system that is worth learning about.
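To verify the service responds, request the published port on either node. Swarm's routing mesh publishes port 90 on every node, so both requests reach one of the containers, which return Apache's default page:

$ curl http://192.168.64.14:90/
<html><body><h1>It works!</h1></body></html>
$ curl http://192.168.64.15:90/
<html><body><h1>It works!</h1></body></html>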
Summary
In this post we've learned about a powerful tool that lets us easily control a remote Docker instance using a simple SSH connection.