Cronjob in Docker container uses outdated Django settings despite environment variables set in Docker Compose

I'm facing a persistent issue with a Django app using Django REST Framework. The application has several models, including one called Project with a created_at timestamp. There's a custom management command (run via manage.py) that archives a project if a user hasn't taken action on it within 72 hours of its creation. In production this command is executed by a cronjob, and the entire setup runs inside a Docker container.
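For context, here is a minimal sketch of what such a command might look like; the app path, model fields, and archive mechanism below are assumptions for illustration, not the actual code.

# projects/management/commands/archive_expired_new_project.py (hypothetical path)
from datetime import timedelta

from django.core.management.base import BaseCommand
from django.utils import timezone

from projects.models import Project  # assumed app/model location


class Command(BaseCommand):
    help = "Archive projects with no user action within 72 hours of creation"

    def handle(self, *args, **options):
        cutoff = timezone.now() - timedelta(hours=72)
        # Assumed: a status field distinguishes new projects from archived ones
        expired = Project.objects.filter(created_at__lt=cutoff, status="new")
        count = expired.update(status="archived")
        self.stdout.write(f"Archived {count} project(s)")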

Initial Setup

In the past, I separated the back-end environment into two Docker containers:

One container ran the Django application and served the API. The other container was dedicated solely to running the cronjob that executed these management commands.

Reason for Combining Containers

That setup had the same issue with outdated settings, and managing two containers separately added significant overhead, since I had to clear both containers' caches and rebuild them frequently. To simplify things, I combined the API and the cronjob into a single container and used Supervisor to manage both processes. Supervisor itself works correctly: the cronjob runs on schedule (the logs confirm this); the problem is limited to the environment variables the cronjob sees.

The Problem

The cronjob uses outdated settings: specifically, an old DB_HOST that points to a test database instead of the host defined in the environment variables set by Docker Compose. The Django application itself (when accessed normally through the API) connects to the production database exactly as specified by those same variables.

Oddly, when I manually run the archive_expired_new_project command with

docker exec -it backend python3 manage.py archive_expired_new_project

it works fine and uses the correct DB_HOST. It's only when the cronjob executes the command that it falls back to an old, unreachable database configuration. To restore functionality I have to repeatedly clean Docker, remove cached data, and rebuild the image, which is impractical for ongoing maintenance.

Question

Why would the cronjob run with outdated settings, even though both it and the Django application reside in the same Docker container and should have access to the same environment variables?

Steps I’ve Tried

Docker Cleanup: Rebuilt the Docker image, cleared cached layers, and made sure the Docker Compose environment variables are up to date. This temporarily resolves the issue, but it keeps coming back.

Config Check: Verified that DB_HOST and other critical settings are correctly loaded from the environment in settings.py (a further check is sketched after this list).

Cronjob Command: Examined the cronjob entry to ensure it uses the correct manage.py path and doesn’t somehow reference an outdated config or settings file.
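One additional diagnostic worth running here (a sketch, not part of the original setup) is to compare the environment the container's main process sees with the environment cron actually hands to its jobs, since cron daemons typically give jobs a minimal environment of their own rather than the environment of the process that launched them:

# Environment as seen by the container's main process:
docker exec backend env | grep DB_

# Temporary crontab entry (same format as the crontab shown below) that logs
# what the cron jobs themselves receive:
* * * * * /usr/local/bin/python3 -c "import os; print(sorted(os.environ.items()))" >> /app/logs/cron_env.log 2>&1

If the two outputs differ, the problem lies in what the cronjob's processes receive at run time rather than in Docker's image cache.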

Relevant Setup

Dockerfile

FROM python:3.10.12-slim
WORKDIR /app

COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /app/

RUN apt-get update && \
    apt-get install -y cron && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

COPY crontab /etc/cron.d/cron_jobs
RUN chmod 0644 /etc/cron.d/cron_jobs
RUN crontab /etc/cron.d/cron_jobs

RUN pip install supervisor

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf

EXPOSE 8000

CMD ["supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]

crontab

*/5 * * * * /usr/local/bin/python3 /app/manage.py archive_expired_new_project >> /app/logs/cron.log 2>&1
*/5 * * * * /usr/local/bin/python3 /app/manage.py archive_expired_to_contact_project >> /app/logs/cron.log 2>&1

supervisord.conf

[supervisord]
nodaemon=true

[program:gunicorn]
command=gunicorn --bind 0.0.0.0:8000 backend.wsgi:application
autostart=true
autorestart=true
stderr_logfile=/dev/stderr
stdout_logfile=/dev/stdout

[program:cron]
command=cron -f
autostart=true
autorestart=true
stderr_logfile=/dev/stderr
stdout_logfile=/dev/stdout

docker-compose.yml

name: my-app

services:
  backend:
    image: docker-user/repo-name:1.0.2
    container_name: backend
    expose:
      - "8000"
    networks:
      - app-network
    volumes:
      - ./logs:/app/logs
      - ./media:/app/media
    environment:
      SECRET_KEY: "*****"
      ...
      DB_HOST: "DATABASE_URL"
      DB_DATABASE: "db_name"
      DB_PORT: "5432"
      DB_USER: "db_user"
      DB_PASSWORD: "*****"
      ...

  frontend:
     ...

  nginx:
     ...

settings.py

...

DATABASES = {
  'default': {
    'ENGINE': 'django.db.backends.postgresql',
    'NAME': getenv('DB_DATABASE'),
    'USER': getenv('DB_USER'),
    'PASSWORD': getenv('DB_PASSWORD'),
    'HOST': getenv('DB_HOST'),
    'PORT': getenv('DB_PORT', 5432),
    'OPTIONS': {
      'sslmode': 'require',
    },
  }
}

Workaround

Since I couldn't find any reason why Docker would keep serving this stale configuration, I used the following workaround, in case other users run into the same issue with this type of project.

I stopped using the management commands and the dedicated app cron container.

I moved the logic of those management commands into REST views that only a specific user with a specific auth token can trigger.
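A rough sketch of what such a view can look like; the view name, the service-account check, and the model details are assumptions, not the actual implementation.

# views.py (sketch)
from datetime import timedelta

from django.utils import timezone
from rest_framework.authentication import TokenAuthentication
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView

from projects.models import Project  # same assumed model as in the sketch above


class ArchiveExpiredNewProjectsView(APIView):
    # TokenAuthentication assumes 'rest_framework.authtoken' is in INSTALLED_APPS
    authentication_classes = [TokenAuthentication]
    permission_classes = [IsAuthenticated]

    def post(self, request):
        # Assumed convention: only a dedicated service account may trigger this
        if request.user.username != "cron-service":
            return Response(status=403)
        cutoff = timezone.now() - timedelta(hours=72)
        count = (Project.objects
                 .filter(created_at__lt=cutoff, status="new")
                 .update(status="archived"))
        return Response({"archived": count})

A nice side effect of this design is that the caller only needs the service account's token, not any database credentials.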

I created a new, simple container whose cron job performs curl requests against those endpoints.
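Its crontab looks roughly like this; the endpoint paths and the token are placeholders, and the hostname backend:8000 assumes the container sits on the same app-network as the backend service in the Compose file above.

# crontab for the curl-only cron container (sketch)
*/5 * * * * curl -sf -X POST -H "Authorization: Token <service-account-token>" http://backend:8000/api/archive-expired-new-projects/ >> /var/log/cron.log 2>&1
*/5 * * * * curl -sf -X POST -H "Authorization: Token <service-account-token>" http://backend:8000/api/archive-expired-to-contact-projects/ >> /var/log/cron.log 2>&1

The image itself only needs cron and curl installed, so there is no Django code or database configuration to keep in sync.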

Everything works now.
