How do you make gunicorn forward SIGINT to uvicorn when running inside Docker?
I have a script running inside Docker (on WSL2), started with CMD, that is behaving strangely with respect to SIGINT signals. This is the script:
#!/usr/bin/env bash
python manage.py init_db
exec gunicorn foobar.asgi:application \
--worker-class uvicorn.workers.UvicornWorker \
--bind 0.0.0.0:8000 \
--graceful-timeout 5 \
--log-level debug \
-w 4
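For context, the container runs in the foreground with the same docker run command shown in workaround 2 below, just without the -it flags:

# Assumed baseline command: foreground run, no TTY and no stdin attached
docker run -p 8000:8000 -v $(pwd):/app foobar:latest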
The problem is that when I press Ctrl+C, gunicorn ends up having to forcefully kill the running uvicorn workers. I see the following errors after 5 seconds:
^C[2024-10-29 21:15:35 +0000] [1] [INFO] Handling signal: int
[2024-10-29 21:15:40 +0000] [1] [ERROR] Worker (pid:8) was sent SIGKILL! Perhaps out of memory?
[2024-10-29 21:15:40 +0000] [1] [ERROR] Worker (pid:9) was sent SIGKILL! Perhaps out of memory?
[2024-10-29 21:15:40 +0000] [1] [ERROR] Worker (pid:10) was sent SIGKILL! Perhaps out of memory?
[2024-10-29 21:15:40 +0000] [1] [ERROR] Worker (pid:7) was sent SIGKILL! Perhaps out of memory?
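To sanity-check which process actually receives the signal, the container's PID 1 can be inspected while it is running; the container name foobar below is just a placeholder, and ps only works if procps is installed in the image:

# Because of exec, PID 1 inside the container should be gunicorn itself
docker exec foobar cat /proc/1/cmdline | tr '\0' ' ' && echo
# Full process tree, if procps is available
docker exec foobar ps -eo pid,ppid,cmd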
I have found three workarounds of sorts, which may help clarify what is going on.
1. Bash into the container and start the script from inside. Now Ctrl+C seems to work better, because the uvicorn workers quit on time, but gunicorn still prints some errors:
^C[2024-10-29 21:21:56 +0000] [1] [INFO] Handling signal: int
... worker shutdown cleanup output omitted
[2024-10-29 21:21:56 +0000] [7] [ERROR] Worker (pid:15) was sent SIGINT!
[2024-10-29 21:21:56 +0000] [7] [ERROR] Worker (pid:14) was sent SIGINT!
[2024-10-29 21:21:56 +0000] [7] [ERROR] Worker (pid:13) was sent SIGINT!
[2024-10-29 21:21:56 +0000] [7] [ERROR] Worker (pid:10) was sent SIGINT!
[2024-10-29 21:21:56 +0000] [7] [ERROR] Worker (pid:11) was sent SIGINT!
2. Add -it to the docker run command:
docker run -it -p 8000:8000 -v $(pwd):/app foobar:latest
This results in the same behavior as workaround 1.
3. Replace exec with manual signal forwarding:
#!/usr/bin/env bash
python manage.py init_db
# Forward SIGINT signal
trap 'kill -INT $PID' INT
# Start Gunicorn
gunicorn foobar.asgi:application \
--worker-class uvicorn.workers.UvicornWorker \
--bind 0.0.0.0:8000 \
--graceful-timeout 5 \
--log-level debug \
-w 4 & \
PID=$!
# Wait for Gunicorn process
wait $PID
Now, when I press Ctrl+C, I only see the following, but the workers no longer seem to run any of their shutdown code (a variant of this script that waits a second time is sketched below, after the log output):
^C[2024-10-29 21:26:37 +0000] [7] [INFO] Handling signal: int
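Regarding that last workaround: my understanding is that wait returns as soon as the trapped INT arrives, so the script (and with it PID 1) may be exiting before gunicorn has finished shutting down its workers. A variant that waits a second time would look like this; it is only a sketch, and I have not confirmed that this is actually the cause:

#!/usr/bin/env bash
python manage.py init_db

# Forward SIGINT to gunicorn instead of letting bash (PID 1) keep it
trap 'kill -INT "$PID"' INT

# Start Gunicorn in the background and remember its PID
gunicorn foobar.asgi:application \
--worker-class uvicorn.workers.UvicornWorker \
--bind 0.0.0.0:8000 \
--graceful-timeout 5 \
--log-level debug \
-w 4 &
PID=$!

# The first wait returns as soon as the INT trap fires; waiting again keeps
# the script alive until gunicorn itself has exited, so the workers get a
# chance to run their shutdown code.
wait "$PID"
wait "$PID"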
What's going on here, and what's a good way to make sure that the uvicorn workers shut down properly on Ctrl+C?
EDIT: Here's my Dockerfile, since a commenter asked about it. Note that I get the same Ctrl+C behavior if I run gunicorn directly from the Dockerfile, instead of running the start-django bash script:
CMD ["gunicorn", "foobar.asgi:application", \
"--worker-class", "uvicorn.workers.UvicornWorker", \
"--bind", "0.0.0.0:8000", \
"--graceful-timeout", "5", \
"--log-level", "debug", \
"-w", "4"]
Dockerfile:
# >>> Build stage <<<
FROM python:3.11-slim AS build
WORKDIR /app
# Install C toolchain and C build-time dependencies.
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
# gcc, make, etc.
build-essential \
# psycopg2 client libs and header files for building psycopg2
libpq-dev && \
rm -rf /var/lib/apt/lists/*
# Create virtual environment and add it to PATH.
RUN python -m venv /venv
ENV PATH="/venv/bin:$PATH"
# Copy requirements and install.
COPY requirements.txt .
RUN pip install --upgrade pip && \
pip install --no-cache-dir --no-warn-script-location -r requirements.txt
# >>> Run stage <<<
FROM python:3.11-slim
WORKDIR /app
# Install C runtime dependencies.
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get install --no-install-recommends --assume-yes \
# psycopg2 runtime lib
libpq5 && \
rm -rf /var/lib/apt/lists/*
# Create and switch to appuser.
RUN groupadd -r appuser && \
useradd --no-log-init -r -g appuser appuser && \
chown -R appuser:appuser /app
USER appuser
# Copy virtual environment from the build stage and add it to PATH.
COPY --chown=appuser:appuser --from=build /venv /venv
ENV PATH=/venv/bin:$PATH
# Copy the rest of the application code.
COPY --chown=appuser:appuser . .
# Set Python environment variables.
# Prevent Python from writing .pyc files
ENV PYTHONDONTWRITEBYTECODE=1
# Ensure output is sent to stdout/stderr immediately
ENV PYTHONUNBUFFERED=1
# Start server.
CMD ["/app/deployment/start-django"]
EDIT2: I'm seeing the exact same behavior on a completely new Django project with only these three dependencies in my requirements.txt, using the same Dockerfile and bash script as above:
django
gunicorn
uvicorn[standard]