How can I secure my Dockerized Django + PostgreSQL app in production on a VPS using Nginx?
I’m using Django and PostgreSQL for my web project, which I’ve containerized using Docker. I run it on a VPS and serve it with Nginx (running outside Docker).
I'm concerned about the security of my PostgreSQL database and want to make sure it's properly locked down. Specifically, I want to understand best practices to protect the database when:
- Docker containers are running in detached mode
- The PostgreSQL service is inside Docker
- Nginx is acting as the web entry point from the host
My questions:
- How can I ensure my PostgreSQL container is not exposed to the public internet?
- What are Docker and PostgreSQL-specific security improvements I can apply?
- Are there any changes I should make in the following Dockerfile or docker-compose-prod.yml?
My current Dockerfile
```
FROM python:3.x.x

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=x
ENV PYTHONUNBUFFERED=x

WORKDIR /src

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# Install Poetry
RUN curl -sSL https://install.python-poetry.org | python3 - && \
    export PATH="/root/.local/bin:$PATH"

# Add this line to make sure Poetry is in the PATH for subsequent commands
ENV PATH="/root/.local/bin:$PATH"

COPY pyproject.toml poetry.lock* ./

# Install dependencies
RUN poetry config virtualenvs.create false \
    && poetry install --no-interaction --no-ansi

# Copy the rest of the code
COPY . .
```
My current docker-compose-prod.yml file
```yaml
services:
  web:
    restart: always
    build:
      context: .
    command: bash -c "demo command"
    ports:
      - "8002:8002"
    volumes:
      - .:/src
      - other-demo-volumes....
    depends_on:
      - db
    environment:
      - demo environment
  db:
    restart: always
    image: postgres:16.2
    volumes:
      - app-pg-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB_HOST=${POSTGRES_HOST}
      - POSTGRES_PORT=${DB_PORT}

volumes:
  app-pg-data:
  other-demo-volumes....
```
How can I ensure my PostgreSQL container is not exposed to the public internet?
If the database container doesn't have Compose `ports:`, it's not accessible from outside the Docker network. Your current file already omits `ports:` on the `db` service, so PostgreSQL is reachable only by other containers on the same Compose network, such as `web`. (For completeness, `network_mode: host` would also make it network-accessible, but you should typically avoid this option.)
What are Docker and PostgreSQL-specific security improvements I can apply?
Make sure your container does not have access to the host's filesystem; delete the `volumes:` entries in the Compose file that (read-write) mount host content into the container. Correspondingly, you should usually make sure your image's content is owned by root and not world-writable, and then run the container as some non-root user. This prevents accidentally overwriting the application's code or static assets.
```
FROM python:3.x.x

# Create the non-root user.
#
# This is independent of the application content, so you save a minor bit of time
# during rebuilds doing it first. The syntax is different for Alpine-based images
# (but Python should almost always use a Debian/Ubuntu base). The user ID does not
# need to match anything else in particular and you shouldn't need to manually
# specify it.
#
# Do NOT chown files to this user.
RUN adduser --system appuser

# Do all the other setup as before, still as root.
ENV ...
COPY ...
RUN pip install ...

# Switch to the non-root user only at the end, when you're otherwise specifying
# what the default runtime behavior is.
USER appuser
CMD ["demo", "command"]
```
Are there any changes I should make in the following Dockerfile or docker-compose-prod.yml?
Consider using a multi-stage build to avoid keeping the very heavyweight C toolchain from the `build-essential` package in the final image.
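A rough sketch of what that could look like, keeping the Poetry setup from your Dockerfile: the build stage installs everything into a virtual environment, and only that environment is copied into the runtime stage. (Exporting to `requirements.txt` via `poetry export` is one common way to hand dependencies across; in recent Poetry versions this requires the `poetry-plugin-export` plugin.)

```
# Build stage: has the C toolchain and Poetry, but is discarded afterwards.
FROM python:3.x.x AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
    curl build-essential \
    && rm -rf /var/lib/apt/lists/*
RUN curl -sSL https://install.python-poetry.org | python3 -
ENV PATH="/root/.local/bin:$PATH"
WORKDIR /src
COPY pyproject.toml poetry.lock* ./
# Install the dependencies into a self-contained virtual environment
# that we can copy wholesale into the next stage.
RUN python3 -m venv /venv \
    && poetry export -f requirements.txt -o requirements.txt \
    && /venv/bin/pip install -r requirements.txt

# Runtime stage: no compiler, no Poetry, just the installed packages.
FROM python:3.x.x
COPY --from=build /venv /venv
ENV PATH="/venv/bin:$PATH"
WORKDIR /src
COPY . .
RUN adduser --system appuser
USER appuser
CMD ["demo", "command"]
```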
Prefer specifying the default `CMD` in the Dockerfile over a `command:` override in the Compose file. Prefer `sh -c` to `bash -c`, and avoid bash-specific syntax; for simple commands that don't chain multiple commands, substitute environment variables, or use other shell features, skip the shell entirely.
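For example (the `gunicorn` invocation and `myapp.wsgi` module are hypothetical stand-ins for your actual server command; the port matches the `8002` published in your Compose file):

```
# Only wrap in a shell when you need shell features such as && chaining:
# CMD ["sh", "-c", "./manage.py migrate && gunicorn --bind 0.0.0.0:8002 myapp.wsgi"]

# For a single simple command, use exec (JSON-array) form with no shell at all:
CMD ["gunicorn", "--bind", "0.0.0.0:8002", "myapp.wsgi"]
```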
Do not mount content into the container that hides the image's content. The mount targeting `/src` hides almost everything in the image, and it means you're running whatever happens to be on the target machine rather than what you tested in your CI environment.
If you can, avoid mounting anything else into your application container either. This usually means making sure all data is stored in the database or some sort of network-accessible storage. If you have volumes to "share files" between the application container and a reverse proxy, be aware that this is not an especially reliable approach, and in particular that you will not see updates to your static content if you update the image.

Stateful containers are also harder to run in environments like Kubernetes, where filesystem storage needs to be network-accessible and where you'll often have multiple copies of your container running.