Self-hosted Docker infrastructure in home or office using low-cost computers like Intel NUC

; Date: Fri Mar 26 2021

Tags: Docker »»»» Docker MAMP »»»» Self Hosting

Using Docker and a simple small computer, you can build a powerful computing "cloud" in your home, on your desktop, at low cost, giving you control over your data. If you need more power, adding another computer or three to the mix quickly adds capability. For almost any popular 3rd party service, like Github, Dropbox, or Trello, there is an open source package that might even be better. Those cloud services are temptingly easy to sign up for, but there are tradeoffs. What happens when a 3rd party service suddenly shuts down? That has happened on multiple occasions, sometimes taking customer data with it. Or what about miscreants breaking in and stealing data? With an open source operating system and open source packages like Gitea, Nextcloud, and Kanboard, you gain control over your destiny.

The computer pictured here, an Intel NUC, is small and consumes very little electricity, yet packs a lot of computing power. This particular machine has 16GB of memory, over 2 terabytes of internal disk, and another 15 terabytes or so of external drives attached. Because it's built from parts normally used in laptops, the NUC draws extremely little power. The 5th generation Core i5 is no slouch, and I have it running Gitea, Jenkins, NextCloud, Plex, Bookstack, and more. In other words, it's my own private development server, one that's more private and lower cost than if I'd used any of the popular 3rd party services.

Computers like this are small enough to tuck into an out-of-the-way corner. My Intel NUC sits on the desk, buried under piles of loose papers and other stuff. It stays running for weeks at a time and requires very little maintenance.

Self-hosting Docker services on a computer in your home starts with setting up a suitable computer. You then install the services you want available. Then to access these services from outside your home, you must point suitable domain names to your home DSL/Cable connection, and configure your router to allow inbound traffic to reach the Docker host.

With this setup, I can access any service hosted on this computer from anywhere on the Internet. For example, while living in Bucharest a couple of years ago, I used this computer (it's located in California) as if it were in the other room.

Here's an article about this machine:

Intel NUC's perfectly supplant Apple's Mac Mini as a lightweight desktop computer

Many years ago Apple introduced the Mac Mini, a lightweight desktop computer for which you bring your own keyboard and display. Unlike other Macs, it was a simple box containing a logic board, memory, disk, DVD drive, USB, FireWire, and DisplayPort interfaces. Speaking for myself, the Mac Mini is attractive because its power requirements are minuscule (30 Watts or so), the box is small enough not to dominate desk space, and you can choose your preferred keyboard, mouse, and display. Unfortunately the current Mac Mini is suboptimal, since it suffers from Apple's policy of non-repairable computers, and the design hasn't been updated in years.

The Mac Mini concept, a small box with low energy requirements that fits into any desktop situation, is attractive, even if the current implementation is not. Other computer companies have not been sitting still, and several have produced similar designs. In particular, the Intel NUCs are almost directly equivalent, and do not carry the hindrance of being tied to Windows.

Comparing costs between self-hosted Docker at home, versus hosting on a rented virtual private server (VPS)

Where is the best place to self-host a few services using Docker? The typical answer is to rent a virtual private server (VPS) from a web hosting provider. There are hundreds of such companies around the world, renting out servers in cloud hosting facilities. But a different answer is a do-it-yourself (DIY) Docker hosting solution at home, or at the office.

Consider my Intel NUC as the low end of the DIY at-home hosting spectrum. That Intel NUC cost about $300 to set up, and I've had it running for at least 4 years. The electricity cost is small enough to ignore. I do spend occasional time on its configuration and upkeep, such as running apt-get upgrade every so often, or the occasional operating system upgrade. Beyond that it has provided reliable service the whole time, with almost zero downtime.

A comparable VPS with Digital Ocean would cost:

| Memory | Disk  | Data Transfer | Cost   | 48 Months |
| ------ | ----- | ------------- | ------ | --------- |
| 2GB    | 50GB  | 2TB           | $10/mo | $480      |
| 2GB    | 60GB  | 3TB           | $15/mo | $720      |
| 4GB    | 80GB  | 4TB           | $20/mo | $960      |
| 8GB    | 160GB | 5TB           | $40/mo | $1920     |
| 16GB   | 320GB | 6TB           | $80/mo | $3840     |

As you can see, renting a virtual private server can quickly rack up a big bill. For a full-fledged business earning enough revenue, that's easy to see as a cost of doing business. But for smallish personal needs these costs would be excessive.

Comparing against renting services from cloud providers

Another typical choice is to instead rent services, rather than rent servers. For example, you might pay for a business account with Github, Gitlab, Travis CI, Dropbox, or the like.

| Service                  | Monthly cost           | 48 months     |
| ------------------------ | ---------------------- | ------------- |
| Github personal Pro      | $4/month               | $192          |
| Github organization team | minimum $4/user/month  | minimum $192  |
| Gitlab premium           | minimum $19/user/month | minimum $912  |
| Travis CI                | minimum $69/month      | minimum $3312 |
| Dropbox pro solo (3TB)   | $16.85/month           | $808.80       |
| Dropbox team (5TB)       | $12.50/user/month      | $600/user     |
| Box.Net team             | $15/user/month         | $720/user     |
| Trello Business          | $10/user/month         | $480/user     |

Compare that against a self-hosted Gitea instance, self-hosted Jenkins, self-hosted Kanboard, and self-hosted NextCloud on your own hardware. Plus, with your own hardware there is a broader selection of services available for self-hosting. Try finding a commercially hosted equivalent of Plex for serving your own video files.

Gaining control over your data, enhancing personal privacy, by self-hosting your own Docker services

I have many open source repositories on both Github and Gitlab, plus a Github Organization. But I keep other repositories private. That includes a handful of repositories created to collaborate with clients who had me sign non-disclosure agreements and promise to keep the data extra private. It was safer to host those clients' repositories on my personal server than to use Github.

Another self-hosted application I run is Bookstack, which is a platform for writing semi-WYSIWYG text documents. The documents are structured similarly to a "book", hence the name. It has some of the capabilities of Google Docs (which I also use) but the important thing is it's hosted on my own hardware.

I know that there's no risk of a hosting provider snooping into anything hosted on my hardware. Do you have that assurance for any 3rd party service? For example, Dropbox claims to provide encrypted storage, but what does that mean in practice? Do you really believe that Dropbox has no ability to inspect the files you upload?

Of course there is a risk that a miscreant could break into my self-hosted server. In fact, I noticed that several random users had registered what look like spammer accounts on my Gitea server. After studying the situation, I found I had neglected to edit the Gitea configuration file to disable registration of new accounts. That's now fixed. The lesson is to not stop at "I've installed the service," but to configure each service for your security needs.
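In Gitea's case, for example, the fix is a single setting in its app.ini configuration file (the section and key below are standard Gitea configuration):

```ini
; app.ini -- Gitea configuration file
[service]
; Prevent anyone from self-registering an account;
; new accounts must be created by an administrator.
DISABLE_REGISTRATION = true
```

After editing the file, restart the Gitea container for the change to take effect.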

Setting up a single NUC/SFF/USFF computer for local Docker infrastructure

As we said, setting up a local Docker infrastructure for home or office starts with selecting a suitable computer. Depending on your needs, this can be a Small Form Factor (SFF/USFF) computer, or even a mid-tower or full-tower desktop PC. Since it's best to use Linux to host Docker, look for a computer without a Windows license to minimize your cost. Choose ample memory, at least 16 GB, and enough data storage for your needs.

The advantage of SFF/USFF computers is ultra-low electricity consumption, small size, and less noise. The tradeoff is less upgradability. That's where tower PCs shine, because you can easily upgrade parts on the logic board, add cards to the PCI slots, and add several internal drives.

Once you've bought and configured the hardware, it's time to set up the operating system. Linux is recommended for two reasons, only one of which is minimizing cost. A computer with a bundled Windows license costs more. To see that cost, go to the Dell website and look at their Linux offerings; on the Precision 3240 Compact Workstation, for example, the operating system choices are priced differently.

Notice that Ubuntu is $0 cost relative to the other choices. The Red Hat choices cost more because they require a support contract with Red Hat. If you need a support contract for Ubuntu or another distro, the Linux vendor will usually have options available.

The more important reason to use Linux is that Docker is native to Linux. You might be comfortable with Windows or macOS because you use one or both on your desktop computers. While Docker runs on both, it is not native to either, so there is extra overhead to running Docker on Windows or macOS. That overhead doesn't exist on Linux, again because Docker is native to Linux.

It is important to attach this computer to your Internet router with a physical ethernet cable. A physical ethernet connection is far more reliable than WiFi. In fact, you should probably turn off WiFi on this computer, giving you one less source of stray EMF signals in your home. Do we understand with certainty that electromagnetic signals have no negative health consequences?

Once you've installed the operating system, you next set up Docker. We went over Docker setup in this article:

Getting started with Docker: Installation, first steps

Let's start this journey into Docker by learning how to install it on popular systems, namely macOS, Windows and Linux. Installation is very simple thanks to the hard work of the Docker team. You youngsters don't know how easy you have it now that Docker for Mac and Docker for Windows exist. Long gone are the days when we had to install VirtualBox along with a specialized virtual machine to use Docker.

Pay attention to the Linux section. Docker does run on other operating systems, if you prefer, but that requires another layer implementing a Linux virtual machine, inside which runs the Linux version of Docker. By running Docker natively on Linux you skip that virtual machine layer.
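On Ubuntu, for instance, installation can be as simple as the following sketch, using the docker.io package from Ubuntu's own repositories (Docker's upstream docker-ce packages are an alternative):

```
$ sudo apt update
$ sudo apt install docker.io
$ sudo usermod -aG docker $USER    # let your user run docker without sudo
$ docker run hello-world           # verify the installation
```

You'll need to log out and back in for the group membership change to take effect.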

Configuring a Docker server to be easily used on home network

You've got the chosen computer configured and it's running Docker, and you have run a few sample Docker commands on that computer. What's next is to ensure you can easily access the local Docker host from other computers on your home network.

Of course the Docker host has an IP address, but using IP addresses is inconvenient. There's a feature made popular by macOS in which local computers are automatically available as hostname.local. Apple variously called this Rendezvous or Bonjour, and it enables many useful things, like automatically sharing printers and other devices around a local network.

The end goal is for your Docker host to be accessible as hostname.local (note - .local domain name) from hosts on the local network.

$ ping nuc2.local
PING nuc2.local (192.168.1.103): 56 data bytes
64 bytes from 192.168.1.103: icmp_seq=0 ttl=64 time=0.432 ms
64 bytes from 192.168.1.103: icmp_seq=1 ttl=64 time=0.445 ms
^C

Fortunately this feature is based on open Internet protocols, namely multicast DNS (mDNS) and DNS Service Discovery (DNS-SD). We shouldn't have to get too deep into what that means, because for the most part these protocols are transparent in use. Any macOS user, and possibly any Windows user, has used them for years without being aware. There are open source implementations that run on Ubuntu and other Linux distros.

In my case the NUC can be accessed as nuc2.local from both my macOS and Windows laptops. Unfortunately I don't remember whether this required any special configuration on Ubuntu.

In case this does require special configuration, it appears one must install the following packages:

$ sudo apt install avahi-utils libnss-mdns

Avahi is a Linux service facilitating service discovery on a local network via mDNS/DNS-SD. That may sound like mumbo-jumbo, but these are the protocols macOS (and possibly Windows) machines use for zero-configuration access to local hosts. Simply installing these two packages should be enough to make the Docker host visible to other computers as hostname.local. For more information see https://github.com/lathiat/avahi, but expect to be confused because the documentation is terse.

The second package, libnss-mdns, configures how the host itself looks up names: it plugs into GLIBC's name resolution (NSS) on the Linux host. Its documentation shows a configuration file, /etc/nsswitch.conf, to adjust on the Linux host.
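On a typical Ubuntu system, installing libnss-mdns leaves a hosts line in /etc/nsswitch.conf that looks something like this:

```
# /etc/nsswitch.conf (hosts line only)
# mdns4_minimal resolves *.local names via mDNS before falling back to DNS
hosts: files mdns4_minimal [NOTFOUND=return] dns
```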

Success for this section is to run a service in Docker on the Docker host, then access it as http://hostname.local:### from another computer.
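A quick way to verify, assuming the host is named nuc2 as in the earlier example, is to launch a throwaway web server container and fetch a page from another machine on the network:

```
$ docker run -d --name test-web -p 8080:80 nginx
$ curl -I http://nuc2.local:8080/     # run this from another computer
$ docker rm -f test-web               # clean up afterward
```

The curl command should report a 200 OK response from the nginx default page.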

Configuring WiFi/DSL router for public access to in-home server

By default a home router prevents inbound traffic into the home network. In general, it's not desirable to host a public website from a home Internet connection. The home Internet connection doesn't have the necessary reliability level, nor the bandwidth, to support anything but infrequently used web services.

Internet Service Providers therefore configure home Internet connections for consuming Internet services. But with some careful configuration we can expose some web services on the public Internet from our home-based computer.

First, ask whether you want to expose a service to the public Internet. What if an intruder invaded your home network, for example? Is it truly necessary to access Docker services self-hosted at home from outside your home network?

Accessing Docker services hosted on your home network is fairly simple. You start by assigning one or more domain names to the home network, probably working with a Dynamic DNS provider to do so. You then reconfigure the Cable/DSL router to direct at least HTTP (port 80) and HTTPS (port 443) inbound traffic to your home Docker host.

This is exactly what I've done for my home network, and as mentioned earlier that let me use services on my Intel NUC from over 10,000 miles away. Your need might not extend beyond the coffee shop down the street. In either case you're facing the same problem, which starts with the fact that a home internet connection by default does not support hosting services at home. But, we can fix that and do it anyway.

The end goal of this section is that you can access services on your home network using a normal domain name like service.myhomenet.xyz. For the purpose of discussion, we'll assume these services:

  • Gitea at git.myhomenet.xyz
  • Jenkins at build.myhomenet.xyz
  • and NextCloud at cloud.myhomenet.xyz

You must have already registered your domain, myhomenet.xyz, with a domain name registrar, of course.

The simplest route is if your internet service provider (ISP) offers fixed IP addresses. If so, make sure your home Internet connection has a fixed IP address. However, most ISPs instead use dynamic IP address assignment, meaning your public IP address will change from time to time.

To get around that we'll turn to a Dynamic DNS provider. These services help with assigning a domain name to your home network, accommodating changes to the IP address. One such service is DuckDNS, a free Dynamic DNS provider.

Remotely access your home network using free dynamic DNS service from DuckDNS

Do you have files stashed on a computer at home while you're halfway around the world? In theory the Internet is equally usable by all of us, but in practice it's a little different. We're told we can only store our data on servers owned by other people, and cannot do so on our own server. In actual practice it is fairly easy to access computers on your home network, and the first step is associating a domain name with your home network.

The bottom line is that you'll configure your Docker host with a Cron Job that fetches a specific URL every five minutes or so. The DuckDNS service uses that URL fetch to automatically update your domain name record. The process is very simple.
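A minimal sketch of that cron job, assuming your DuckDNS subdomain is my-name and YOUR-TOKEN is the token from your DuckDNS account page:

```
# crontab -e entry: update DuckDNS every five minutes
*/5 * * * * curl -s "https://www.duckdns.org/update?domains=my-name&token=YOUR-TOKEN&ip=" >/dev/null 2>&1
```

Leaving the ip= parameter empty lets DuckDNS record the address the request came from, which is exactly what we want here.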

This gives you a DuckDNS-provided domain name like my-name.duckdns.org. To associate your preferred domain names, add CNAME records pointing at it, like so:

git.myhomenet.xyz CNAME my-name.duckdns.org
*.git.myhomenet.xyz CNAME my-name.duckdns.org
build.myhomenet.xyz CNAME my-name.duckdns.org
*.build.myhomenet.xyz CNAME my-name.duckdns.org
cloud.myhomenet.xyz CNAME my-name.duckdns.org
*.cloud.myhomenet.xyz CNAME my-name.duckdns.org

CNAME records are a kind of alias in the Domain Name System. The first record says that anyone requesting git.myhomenet.xyz is given the DNS records for my-name.duckdns.org. The wildcard record in each pair, such as *.git.myhomenet.xyz, handles any subdomains you might use.

If, instead, you had a fixed IP address the DNS records would look like this:

git.myhomenet.xyz A 123.231.123.231
*.git.myhomenet.xyz A 123.231.123.231
build.myhomenet.xyz A 123.231.123.231
*.build.myhomenet.xyz A 123.231.123.231
cloud.myhomenet.xyz A 123.231.123.231
*.cloud.myhomenet.xyz A 123.231.123.231

In the domain name system (DNS), an A record gives the IP address of a DNS name, while a CNAME record is an alias connecting one domain name to another. The A record only supports IPv4 addresses; if you have an IPv6 address you would use an AAAA record instead.

That configures some domain names to point at your home network. To test, use your mobile device (cell phone): turn off its WiFi, then type the domain name into the web browser. If nothing else, you should no longer see a Host Not Found error.
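You can also check from any computer using the dig tool. With the CNAME setup above, the answer should chain through DuckDNS to your current home IP address (the names and address below are the placeholder values used in this article):

```
$ dig +short git.myhomenet.xyz
my-name.duckdns.org.
123.231.123.231
```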

This still isn't enough, because the routers supplied by Internet Service Providers do not allow incoming traffic, only outgoing traffic. Remember, the home Internet connection is designed for consuming Internet services, not providing them. We are looking to provide services for ourselves at a personal scale, and therefore need to do one more thing to work around the ISP-provided router.

Namely, most ISP-provided routers allow us to forward ports through the router to an internal IP address. To support a web service we must forward both HTTP (port 80) and HTTPS (port 443) to the Docker host. To support SSH access, we must forward port 22.

This means, at minimum, configuring the ISP-provided router to forward HTTP, HTTPS, and SSH traffic to the Docker host on our home or office network. You may have other services to expose that require forwarding other ports. For security, forward only the ports absolutely required by your needs. Any service you expose to the Internet can be found by a miscreant, and is therefore a possible hole through which they can intrude into your home or office network.

Since each router is different we cannot tell you exactly what to do. DuckDNS recommends this website: http://portforward.com/

Hosting multiple Docker applications on a single computer

We've configured the domain names git.myhomenet.xyz, build.myhomenet.xyz, and cloud.myhomenet.xyz to point to our home Internet connection. And, we've configured the ISP-provided router to forward all HTTP and HTTPS traffic to the Docker host. But that's not enough, since there's nothing configured to direct inbound traffic to the corresponding container.

What's needed is what's called a Reverse Proxy to handle traffic for multiple domain names. All that means is configuring a server like NGINX to recognize those domains and, depending on the domain used for each request, route the traffic to the correct backend service.

A highly recommended reverse proxy application is NGINX Proxy Manager. It is easy to deploy in Docker, makes it easy to configure multiple backend services, and even simplifies getting HTTPS/SSL support.
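A minimal docker-compose file for NGINX Proxy Manager looks roughly like the following, adapted from the project's quick-start (the image name and port numbers are its documented defaults):

```yaml
version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'     # public HTTP
      - '443:443'   # public HTTPS
      - '81:81'     # admin web UI -- do not forward this port at the router
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

Once it's running, you log into the admin UI on port 81 and define a "proxy host" entry mapping each domain name to its backend container.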

To handle the three domains we mentioned earlier, the server would run containers for the proxy manager, Gitea, Jenkins, and Nextcloud, plus additional containers for any required database servers.

| Inbound domain       | Backend service |
| -------------------- | --------------- |
| git.myhomenet.xyz    | Gitea           |
| build.myhomenet.xyz  | Jenkins         |
| cloud.myhomenet.xyz  | Nextcloud       |
| NOT PUBLIC           | MySQL?          |
| admin.myhomenet.xyz  | PHPMyAdmin?     |

Configuring multiple computers for local Docker Swarm or Kubernetes infrastructure

Maybe you won't be happy with a single computer. If so, Docker makes it easy to deploy services across multiple servers.

Deploying Docker services across multiple computers requires a Container Orchestration service. Setting one up means configuring multiple computers, each running Linux with Docker installed, and then initializing the orchestration service across them.

The simplest-to-set-up container orchestrator is Docker Swarm. On one of your computers you simply run docker swarm init, and it prints a command to execute on each of the other Docker hosts.
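The whole procedure is essentially the following (output abbreviated; the join token shown is a placeholder):

```
$ docker swarm init
Swarm initialized: current node (abc123...) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-... 192.168.1.103:2377
```

Run the printed docker swarm join command on each additional host, and they become workers in the swarm.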

For more details see:

Creating a Docker Swarm using Multipass and Ubuntu 20.04 on your laptop

Docker is a cool system for deploying applications as reusable containers, and Docker Swarm is a Docker orchestrator that lets us scale the number of containers across multiple machines. Multipass is a very lightweight virtual machine manager application running on Windows, Linux, and macOS, that lets us easily set up multiple Ubuntu instances on our laptop with low performance impact. Therefore Multipass can serve as a means to easily experiment with Docker Swarm on your laptop, learning how it works, setting up networks, etc.

Summary

This tutorial was purposely kept at a high level, because getting into the weeds of every last step would take 10x the space. Instead, we have provided a high-level overview of the why and the how.

About the Author(s)

David Herron : David Herron is a writer and software engineer focusing on the wise use of technology. He is especially interested in clean energy technologies like solar power, wind power, and electric cars. David worked for nearly 30 years in Silicon Valley on software ranging from electronic mail systems, to video streaming, to the Java programming language, and has published several books on Node.js programming and electric vehicles.