Scheduling background tasks using cron in a Docker container

Date: May 27, 2018

Tags: Docker, Docker MAMP

Sometimes you want a Docker container to execute background tasks, and therefore want cron to be installed and running. Having cron running in the background is part of normal Unix/Linux/etc system admin practices. Even though the crontab format is kind of hokey, we all learn it and set up automated background tasks to keep the world functioning. Let's see how to set this up in a Docker container.

My first occasion to run a cron service in Docker came while writing the 4th Edition of Node.js Web Development. The book covers the full gamut of Node.js application development from soup to nuts, that is, from initial concept to delivery on real cloud hosting servers, and even consideration of security setup. The last of those included setting up HTTPS service for a Node.js application, since the web is moving strongly toward requiring HTTPS on every website.

I wanted to demonstrate setting up Let's Encrypt in a Docker container. Let's Encrypt is a free service providing the SSL certificates required to run an HTTPS server. Since I didn't find any tutorials on running Let's Encrypt in a Docker container, I had to invent something. The tooling for Let's Encrypt involves running a command-line program every day, and that program checks for any SSL certificates which need to be renewed. An occasionally executed background task simply screams cron.

Well, that's true for someone like myself who learned Unix administration in the 1980's. The youngsters taking over the world may, sooner or later, have a better idea and replace cron with something else.

There are roughly three scenarios to consider:

  1. A single-purpose container containing just a cron daemon
  2. A container with two services, the cron daemon and another
  3. A container with several services, a process manager, and having cron as one of the services

Docker container running JUST a cron daemon

In Node.js Web Development, I showed this Dockerfile:

FROM debian:jessie

# Install cron, certbot, bash, plus any other dependencies
# (1) install cron
RUN apt-get update && apt-get install -y cron bash wget
RUN mkdir -p /webroots/ && mkdir -p /scripts
WORKDIR /scripts
RUN wget https://dl.eff.org/certbot-auto
RUN chmod a+x ./certbot-auto
# Run certbot-auto so that it installs itself
RUN /scripts/certbot-auto -n certificates
# /webroots/DOMAIN.TLD/.well-known/... files go here
VOLUME /webroots
VOLUME /etc/letsencrypt

# (2) Set up a crontab entry
# This installs a crontab entry which runs "certbot renew"
# at 03:22 AM on days 2 and 7 of the cron week (Tuesday and Sunday)
# cron(8) says the Debian cron daemon reads the files in /etc/cron.d,
# merging them with the data from /etc/crontab, to use as the system-wide cron jobs
RUN echo "22 03 * * 2,7 root /scripts/certbot-auto renew" >/etc/cron.d/certbot

# (3) Start cron in the foreground
CMD [ "cron", "-f" ]

There are some Let's Encrypt specifics in this, so let's focus on a few key points.

  1. Make sure to install the cron daemon, using the method for your preferred OS. In this case we're running Debian Jessie, and therefore use apt-get install cron
  2. Set up the required crontab entries. The example shown here, creating a file in /etc/cron.d, is correct for the cron implementation in Debian Jessie. From my research it seems different OS's differ in how to do this. This method can be performed when the container is built, since it's just a file plopped into the file system.
  3. Start cron in the foreground using the -f flag.
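The crontab entry from step 2 can be sketched as a small shell fragment. The CRON_DIR parameter is my own addition so the sketch can run outside a container; in the Dockerfile it is simply /etc/cron.d:

```shell
#!/bin/sh
# Sketch: write a system crontab entry in /etc/cron.d format.
# Unlike a user crontab, each line names the user to run as (here: root).
CRON_DIR="${CRON_DIR:-./cron.d}"   # in the container this is /etc/cron.d
mkdir -p "$CRON_DIR"
echo "22 03 * * 2,7 root /scripts/certbot-auto renew" > "$CRON_DIR/certbot"
# Debian's cron ignores files in /etc/cron.d that are group/world-writable
chmod 0644 "$CRON_DIR/certbot"
```

Note that Debian's cron also requires /etc/cron.d file names to consist solely of letters, digits, underscores, and hyphens; a file named certbot.sh would be silently skipped.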

The last point is important both here and in the next section. Docker considers the container to be successfully running as long as the process started by the CMD instruction still exists. As soon as that process exits, Docker considers the container to have died, and then, depending on the restart policy, either leaves it stopped or restarts it.

By running cron -f the cron daemon stays in the foreground, does not spin itself into the background, and therefore Docker knows the daemon is still running, and thinks the container is still alive.

In this case we only need one crontab entry. Your application may require multiple crontab entries. It's simple enough to replicate this RUN command enough times to create all the entries you require.
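A sketch of what that might look like, assuming a second, purely hypothetical backup job (the backup script is an illustration, not part of the original setup):

```dockerfile
# Hypothetical: each job can live in its own file under /etc/cron.d,
# or several lines can share one file. The backup job is illustrative only.
RUN echo "22 03 * * 2,7 root /scripts/certbot-auto renew" >/etc/cron.d/certbot \
 && echo "15 01 * * * root /scripts/backup.sh" >/etc/cron.d/backup
```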

Docker container with both cron and another service

A couple of months ago I wanted to use Let's Encrypt to provision SSL certificates for services I run at home and expose to the world. For details, see Easily use Let's Encrypt to HTTPS-protect your own server, for free.

In that case, I wanted to run both nginx and cron in the same container. It's important to decide between a standalone cron container, versus bundling cron into a container with one or more services.

In Node.js Web Development, I justified the standalone cron container by the need to provision SSL certificates in one place. The book describes building a server application where one may want to deploy tens or hundreds of instances for load balancing or other purposes. Setting up Let's Encrypt SSL management in every such container doesn't make sense in the slightest, because SSL provisioning needs to be centralized.

For my home server, I wanted to use nginx to proxy several servers and to use nginx to implement the SSL. There would be one nginx service, so therefore it would be great to have the Let's Encrypt cron jobs running inside that same container.

Bottom line - for some applications you need a standalone cron service, and in other applications the cron service needs to be integrated with another service. You might even have cases where cron services exist in multiple containers.

Here's the implementation:

FROM nginx:stable

# (1) Install cron, certbot, bash, plus any other dependencies
RUN apt-get update \
   && apt-get install -y cron bash wget
RUN mkdir -p /webroots/ /scripts

# /webroots/DOMAIN.TLD/.well-known/... files go here
VOLUME /webroots
VOLUME /etc/letsencrypt

# /webroots/DOMAIN.TLD will be mounted 
# into each proxy as http://DOMAIN.TLD/.well-known
# /scripts will contain certbot and other scripts

COPY register /scripts/
RUN chmod +x /scripts/register

WORKDIR /scripts
RUN wget https://dl.eff.org/certbot-auto
RUN chmod a+x ./certbot-auto
# Run certbot-auto so that it installs itself
RUN /scripts/certbot-auto -n certificates

# (2) This installs a Crontab entry which 
# runs "certbot renew" on several days a week at 03:22 AM
RUN echo "22 03 * * 2,4,6,7 root /scripts/certbot-auto renew" >/etc/cron.d/certbot

# (3) Run both nginx and cron together
CMD [ "sh", "-c", "nginx && cron -f" ]

The three steps are pretty much the same as before. There is a difference with the CMD instruction, since we need to start two services.

I experimented a lot with this until developing this particular invocation.

In the official nginx Docker container, the CMD instruction is a little different:

CMD ["nginx", "-g", "daemon off;"]

The -g option sets global configuration settings. Setting daemon off means that nginx will not spin itself into the background, and will instead stay in the foreground.

In order to start two (or more) commands in one command-line, we need the first few to spin themselves into the background. The only command which should stay in the foreground must be the final command in the sequence. Hence we need nginx to spin into the background, and for cron to stay in the foreground.

Or, do we need cron in the background and nginx to stay in the foreground? If you believe that to be the case, the two should be reversed. I wasn't able to get that combination to work correctly, however. I had a lot of difficulty getting a CMD instruction to launch two processes in any syntax other than what's shown here.
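One alternative, sketched here with a hypothetical wrapper script named start.sh (my own name, not part of the original setup), is to move the startup logic into a small entrypoint script, where exec makes nginx the foreground process Docker watches:

```shell
#!/bin/sh
# start.sh (hypothetical) - cron spins itself into the background,
# then exec replaces this shell with nginx running in the foreground
cron
exec nginx -g 'daemon off;'
```

The Dockerfile would then COPY the script into the image and end with CMD [ "/scripts/start.sh" ]. I haven't verified this variant behaves better than the sh -c form shown above; it's simply a common pattern for multi-process startup.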

Docker container with multiple services, including cron

A little further down the slippery slope are Docker containers consisting of several service processes in one container. Two that I'm familiar with are Gitlab and Gogs, both of which are GitHub alternatives. I'm using Gogs in the system I mentioned in the previous section.

The official Gogs Dockerfile uses Alpine Linux and a tool called s6-svscan to start up multiple services.

The s6-svscan program scans a directory of service definitions, starting an s6-supervise process for each service it finds. S6 is a suite of programs for process supervision.
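As a rough sketch (the directory layout and names here are illustrative, not taken from the Gogs image), each service gets a directory containing an executable run script, and s6-svscan is pointed at the parent directory:

```dockerfile
# Illustrative layout - one directory per service, each with a "run" script:
#   /app/services/cron/run    -> contains: exec cron -f
#   /app/services/nginx/run   -> contains: exec nginx -g 'daemon off;'
#
# The Dockerfile then hands the service tree to s6-svscan:
CMD [ "s6-svscan", "/app/services" ]
```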

This is only one example of a process supervision system. One could plausibly run a traditional init as process #1 in a Docker container, just as on a regular Linux/Unix system. Or you could use any of the other process supervisors.

I don't have an example to show in this case, other than to refer you to the Gogs Dockerfile.

About the Author(s)

David Herron : David Herron is a writer and software engineer focusing on the wise use of technology. He is especially interested in clean energy technologies like solar power, wind power, and electric cars. David worked for nearly 30 years in Silicon Valley on software ranging from electronic mail systems, to video streaming, to the Java programming language, and has published several books on Node.js programming and electric vehicles.