Installing a self-hosted Docker Registry to aid Docker image development

; Date: March 15, 2020

Tags: Docker »»»» Docker Development »»»» Self Hosting

When developing Docker images it's useful to store them in a Docker registry. While using Docker Hub is free, it's bad form to fill up that shared resource with images built for personal use. It's better to only publish the Docker images that are truly useful to everyone, and that have documentation. That leaves us with the problem of a location to host our own Docker images. Do we pay for a private repository somewhere? Or, as a self-hoster, do we host a local Docker Registry to store our personal Docker images? In this post let's explore the latter idea.

The Docker Registry is an open source service for storing and distributing Docker images. It is easy to set up, easy to use, and provides the basic service of storing Docker images.

Documentation: (docs.docker.com) https://docs.docker.com/registry/

The Docker project has two Registry implementations. The open source Registry is what we'll install and is available under the Apache 2.0 license. It has a minimal feature set (there is no GUI, for instance) but gets the job done. The other is a paid, closed source product, the Docker Trusted Registry, which has a GUI and other niceties.

We use a Registry by either pushing images to it or pulling images from it. Most of us mostly pull images, usually from Docker Hub. But anyone developing Docker images is probably also pushing images to a registry.

Docker Hub is a public Docker image registry service run by the Docker team. Anyone can use Docker Hub for free, either to retrieve images or to publish them. That makes it a very convenient place to look for Docker images. But you'll also find a long list of images that were obviously built for personal use, for experiments, or as examples in books. Hosting your own Docker registry gives you a place to store such images instead.

Setting up a Docker registry is very simple, and gives you a chance to hone your Docker skillset.

Kicking the tires

A software developer might launch a Registry on their laptop for purely personal use. Let's see how to do so:

$ docker run -d -p 5000:5000 --name registry registry:latest

That's the minimum required to launch a Registry instance.

The next step is to store an image in the registry. Suppose you have a project that builds a Docker image named my-image. To push it to the registry, do this:

$ docker image tag my-image localhost:5000/my-image
$ docker push localhost:5000/my-image

We're all accustomed to the naming pattern organization-name/image-name:version for Docker images on Docker Hub, but this is a little different.

Looking at the (docs.docker.com) official documentation of the docker pull command we see this:

By default, docker pull pulls images from Docker Hub. It is also possible to manually specify the path of a registry to pull from. For example, if you have set up a local registry, you can specify its path to pull from it. A registry path is similar to a URL, but does not contain a protocol specifier (https://).

It goes on to show an image specifier like: myregistry.local:5000/testing/test-image.

What this means is that my-image and testing/test-image are specifiers for an image Repository on the given Registry server. A Docker Registry instance is therefore a container for several Repositories.
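To make the anatomy of such a specifier concrete, here is a small shell sketch (no Docker required) that splits a fully qualified image name into its parts. The specifier value is just an example:

```shell
# Split a registry-qualified image specifier into host, repository, and tag.
# Illustrative only: assumes both the host:port and the tag are present.
spec="myregistry.local:5000/testing/test-image:1.0"

host="${spec%%/*}"      # everything before the first slash
rest="${spec#*/}"       # repository plus tag
repo="${rest%:*}"       # strip the tag from the end
tag="${rest##*:}"       # everything after the last colon

echo "host=$host repo=$repo tag=$tag"
# prints: host=myregistry.local:5000 repo=testing/test-image tag=1.0
```

The registry host is whatever precedes the first slash; everything after it, up to the tag, names the Repository within that Registry.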

In fact it's illustrative to look carefully at this:

$ docker pull mysql/mysql-server:8.0
8.0: Pulling from mysql/mysql-server
Digest: sha256:342d3eefe147620bafd0d276491e11c8ed29e4bb712612cb955815b3aa910a19
Status: Image is up to date for mysql/mysql-server:8.0
docker.io/mysql/mysql-server:8.0

The image specifier mysql/mysql-server:8.0 is, look carefully at the last line, actually this: docker.io/mysql/mysql-server:8.0. In other words, the actual specifier for an image stored on Docker Hub starts with docker.io.
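That defaulting behavior can be sketched in shell: if the first path component of an image name contains no dot or colon, it cannot be a registry host, so docker.io is implied. This is an illustration of the rule, not code from the docker CLI:

```shell
# Prefix the default registry when no registry host is given.
# Sketch of the docker CLI's behavior. Note: single-name official images
# (e.g. "ubuntu") additionally get a "library/" prefix, not handled here.
name="mysql/mysql-server:8.0"
first="${name%%/*}"
case "$first" in
  *.*|*:*|localhost) full="$name" ;;            # looks like a registry host
  *)                 full="docker.io/$name" ;;  # default registry implied
esac
echo "$full"
# prints: docker.io/mysql/mysql-server:8.0
```

By this rule, localhost:5000/my-image is left alone (the colon marks a registry host), while mysql/mysql-server:8.0 gets the docker.io prefix.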

Now that we better understand Docker image specifiers, let's return to the image we just pushed to the local Docker Registry.

When we want to use that image, we run

$ docker pull localhost:5000/my-image

Just like pulling an image from Docker Hub, this retrieves the image to the local image storage area in Docker. From there we'd use docker run to launch a container based on the image.

The last thing is this:

$ docker container stop registry && docker container rm -v registry
registry
registry
$ docker pull localhost:5000/my-image
Using default tag: latest
Error response from daemon: Get http://localhost:5000/v2/: dial tcp [::1]:5000: connect: connection refused

We've stopped and then deleted the Registry container. Therefore no service is at localhost:5000 to respond to the request. Even worse, the entire storage area for images pushed to this repository vaporized because we did not store the images in persistent storage.

That gives us the basics of using a local Docker registry. Running the registry this way is obviously not what we'd do for a production deployment; it's only good enough for ad-hoc use on our laptop. For real use it needs persistent storage, user authentication, and a regular domain name.

Robust deployment of a self-hosted Docker Registry

A self-hoster should strive to deploy their self-hosted services reliably. Let's take a look at how to do this for the Docker Registry, and spell out the required attributes a little better:

  • Persistent storage of images: Administrators must be able to stop and recreate the Registry container at will without losing images stored in the Registry
  • Hosted with a regular domain name: We must have easy access to the Registry from anywhere
  • Authentication for users, especially for the Push operation: We must be able to limit who can store images in this Registry
  • Reliable hardware on which to run the service

In my case I have an Intel NUC sitting on my desk running a local software development infrastructure. There's a Git repository (Gogs), Jenkins server, and more, so a Docker registry would fit right in. The NUC is already visible on the Internet so that e.g. git.example.com is visible from anywhere. Therefore hub.example.com would be an excellent domain name for this server.

In my environment I have a directory, /home/docker, containing a set of sub-directories, one for each service I have deployed. Following that pattern, create a directory: /home/docker/registry.

In that directory we'll store the configuration files, any required data directories, and a docker-compose.yml describing the deployment. Create that file with the following contents:

version: '3'

services:
    registry:
        restart: always
        image: registry:2
        ports:
          - "5000:5000"
        # environment:
        volumes:
          - ./data:/var/lib/registry
        networks:
          - registry

networks:
    registry:

This is mostly a transliteration of the earlier example, but with two refinements.

First we've mounted a local directory onto /var/lib/registry. Clearly since Docker containers are ephemeral, it is required to store the images outside of the container. With the earlier example, deleting the Registry container means deleting any images that had been pushed into that Registry. By mounting the storage directory we ensure the images will not be lost if we delete and recreate the container.

The second is to attach this container to a Docker bridge network named registry. This will allow other local containers to be attached to the registry network, to use its services.
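To illustrate, a hypothetical companion service could be added under the same services: key and reach the Registry by its service name over that network. The client image and command here are just examples, not part of the deployment:

```yaml
# Hypothetical extra service in the same docker-compose.yml.
# On the shared "registry" network the Registry is reachable as
# http://registry:5000; the /v2/_catalog endpoint lists its repositories.
    client:
        image: curlimages/curl:latest
        command: ["curl", "-s", "http://registry:5000/v2/_catalog"]
        networks:
          - registry
```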

This is more robust than just using docker run. It records the deployment parameters, and the file can be checked into a source repository.

To launch the Registry:

$ docker-compose up -d

This is easy, and will keep the Registry container running thanks to the restart option. But we're missing a few things and cannot expose this container to the Internet.

In this case the image specifiers become something like nuc.local:5000/my-image, which is a slight improvement. The registry server restarts itself any time the machine reboots, and, the biggest improvement, the image repository is persisted to a directory. The biggest problem is that it does not use HTTPS and the users are not authenticated. Another problem is that the registry is not visible on the public Internet, but since users are not authenticated it would be sheer folly to open it to the public.

Configuring user authentication in the Docker Registry

Let's solve the biggest problem first. Since there's no user authentication, anyone can store any Docker image in our registry at any time. That's obviously not a best practice; it is, instead, a worst practice.

It appears that out of the box the only authentication option is an htpasswd password file. For example we can run this:

$ docker run --rm --entrypoint htpasswd registry:latest -Bbn testuser testpassword
testuser:$2y$05$/bEr9Ccrzl1aEmM67Ah.W.jJUJ9SocIY3xNpDSC.Ady2yWwKdh81y

This runs the htpasswd command provided by the Registry container. The output is a password entry in htpasswd format. All we have to do is store this entry in a file, then configure the Registry server to use that file.

The Registry server has an extensive configuration file, documented at (docs.docker.com) https://docs.docker.com/registry/configuration/. The documentation also says we can override configuration file values with environment variables. Let's do that instead.

The config snippet for an htpasswd file is this:

auth:
  htpasswd:
    realm: basic-realm
    path: /path/to/htpasswd

This corresponds to two environment variables, REGISTRY_AUTH_HTPASSWD_REALM and REGISTRY_AUTH_HTPASSWD_PATH.

Let's re-run this command and save the output:

$ docker run --rm --entrypoint htpasswd registry:latest \
        -Bbn testuser testpassword >htpass.txt

This saves the htpasswd text into a file. Then we can change the docker-compose.yml to reference that file:

version: '3'

services:
    registry:
        container_name: registry
        restart: always
        image: registry:latest
        ports:
          - 5000:5000
        environment:
          REGISTRY_AUTH_HTPASSWD_REALM: basic-realm
          REGISTRY_AUTH_HTPASSWD_PATH: /var/lib/htpass.txt
        volumes:
          - ./data:/var/lib/registry
          - ./htpass.txt:/var/lib/htpass.txt
        networks:
          - registry

networks:
    registry:

We've added the environment variables to set up htpasswd authentication, and mounted the password file into the container.

Then we can try to log in to the Registry like so:

$ docker --context default login --username testuser --password testpassword http://nuc.local:5000
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://nuc.local:5000/v2/: Service Unavailable

The WARNING is real, even if we can ignore it for this test: a password given on the command line is saved in the shell history, from which it can leak. It's better to pipe the password in via the --password-stdin option:

$ echo "testpassword" | docker login --username testuser --password-stdin nuc.local:5000

The primary problem here is that it wants to use HTTPS to access the Registry, and we haven't configured HTTPS.

Unfortunately this is where we get stuck. The Registry implementation advertises an ability to register itself directly with the Let's Encrypt service, but it uses the ACME v1 protocol, which has been deprecated.

The configuration changes to this:

version: '3'

services:
    registry:
        container_name: registry
        restart: always
        image: registry:latest
        ports:
          - 5000:5000
        environment:
          REGISTRY_AUTH_HTPASSWD_REALM: basic-realm
          REGISTRY_AUTH_HTPASSWD_PATH: /var/lib/htpass.txt
          REGISTRY_HTTP_TLS_LETSENCRYPT_CACHEFILE: /var/lib/lets-encrypt/cache
          REGISTRY_HTTP_TLS_LETSENCRYPT_EMAIL: EMAIL-ADDRESS@EXAMPLE.com
        volumes:
          - ./data:/var/lib/registry
          - ./lets-encrypt:/var/lib/lets-encrypt
          - ./htpass.txt:/var/lib/htpass.txt
        networks:
          - registry

networks:
    registry:

We are required to create a cache file for Let's Encrypt. To support that we mount a new directory into the container, and create the cache file like so:

$ mkdir lets-encrypt 
$ touch lets-encrypt/cache

But, trying to bring up the Registry results in this error:

$ docker-compose up 
Recreating registry ... done
Attaching to registry
...
id=74c127a6-bfd7-494a-8329-a7a6b21aef43 service=registry version=v2.7.1 
registry    | 2020/09/28 00:57:29 [INFO] acme: Registering account for EMAIL-ADDRESS@EXAMPLE.com
registry    | time="2020-09-28T00:57:29.921807552Z" level=fatal msg="register: acme: Error 403 - urn:acme:error:unauthorized - Account creation on ACMEv1 is disabled. Please upgrade your ACME client to a version that supports ACMEv2 / RFC 8555. See https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430 for details." 

Going by the documentation it would be possible to instead specify these variables for configuration:

  • REGISTRY_HTTP_TLS_CERTIFICATE Path name for an X.509 public certificate
  • REGISTRY_HTTP_TLS_KEY Path name for an X.509 private key

It's possible to generate those certificates with Let's Encrypt, but I don't see how to do that directly. I have an infrastructure for registering domains with Let's Encrypt, but don't see how to apply it in this case.
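For what it's worth, here is a sketch of how externally managed certificates might be wired in, assuming a tool such as Certbot has already produced them on the host. The paths follow Certbot's conventional layout, and hub.example.com is the example domain from earlier; treat all of it as an assumption, not a tested recipe:

```yaml
# Sketch: replace the LETSENCRYPT settings with certificates obtained
# outside the container (e.g. via Certbot) and mounted read-only.
# All paths here are assumptions based on Certbot's default layout.
        environment:
          REGISTRY_AUTH_HTPASSWD_REALM: basic-realm
          REGISTRY_AUTH_HTPASSWD_PATH: /var/lib/htpass.txt
          REGISTRY_HTTP_TLS_CERTIFICATE: /certs/fullchain.pem
          REGISTRY_HTTP_TLS_KEY: /certs/privkey.pem
        volumes:
          - ./data:/var/lib/registry
          - ./htpass.txt:/var/lib/htpass.txt
          - /etc/letsencrypt/live/hub.example.com:/certs:ro
```

With this approach certificate renewal happens outside the container, and the Registry would simply need a restart when the files change.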

What are we left with?

Obviously Docker Inc doesn't have much incentive for helping folks run a Docker Registry on their own hardware. Docker Inc gets revenue from hosting registries for the public.

As it stands we're blocked from safely hosting this service. First, the only authentication offered is htpasswd-based Basic authentication, which is not secure on its own. Second, the way to secure that mode is HTTPS, and we're blocked from using HTTPS.

About the Author(s)

David Herron : David Herron is a writer and software engineer focusing on the wise use of technology. He is especially interested in clean energy technologies like solar power, wind power, and electric cars. David worked for nearly 30 years in Silicon Valley on software ranging from electronic mail systems, to video streaming, to the Java programming language, and has published several books on Node.js programming and electric vehicles.