Django doesn't release database connections for re-use, even long after the request has finished
My django app loads several pages (images) at once, e.g. to show a document, and queries the database mainly to check permissions. Django keeps the connections alive but doesn't re-use them. At some point postgresql's maximum number of connections is reached, and because of this behavior it is never 'unreached' again. I've added a `CONN_MAX_AGE` of 15 seconds and an intermediate pgbouncer, but the behavior is the same: the connections stay open, even after 15 seconds.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "HOST": "pgbouncer",
            "PORT": 6432,
            "NAME": "...",
            "USER": "...",
            "PASSWORD": "...",
            "CONN_MAX_AGE": 15,
            "CONN_HEALTH_CHECKS": True,
        }
    }
The settings point to a pgbouncer proxy in this case, but I also have this issue without pgbouncer. Here's what I see: I've loaded a couple of pages on the website, and they fill up `pg_stat_activity`.
    SELECT
        split_part(query, 'WHERE', 2) AS sql,
        COUNT(*) AS count
    FROM pg_stat_activity
    WHERE datname = '...'
    GROUP BY query;
| sql | count |
|---|---|
| ... "page"."page_id" = 12 ... | 1 |
| ... "page"."page_id" = 65 ... | 1 |
| ... "page"."page_id" = 23 ... | 1 |
| ... "page"."page_id" = 78 ... | 1 |
| ... "page"."page_id" = 32 ... | 1 |
| ... many more rows ... | ... |
These entries stay, with exactly the same `page_id`, for hours, even though the development console of the website shows no traffic, and also when the webpage is closed in the browser. Nor is there traffic in the nginx logs. These connections are only released by killing django; otherwise they stay open indefinitely and aren't re-used, as the sql stays the same.
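To check whether these sessions are genuinely idle (rather than stuck in an open transaction), the standard `state` and `state_change` columns of `pg_stat_activity` can be inspected:

```sql
-- how long each session of this database has been in its current state
SELECT pid, state, now() - state_change AS in_state_for, query
FROM pg_stat_activity
WHERE datname = '...'
ORDER BY in_state_for DESC;
```

Sessions shown as `idle` for hours are connections held open without being re-used; `idle in transaction` would instead point at unfinished transactions.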
I also tried `"CONN_MAX_AGE": None` and `"CONN_MAX_AGE": 0`; the connections stay open.
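For reference on what `CONN_MAX_AGE` is documented to mean: `0` closes the connection at the end of every request, `None` keeps it open forever, and a number keeps it for at most that many seconds. Django applies this check via `close_old_connections()`, which it hooks to the `request_started` and `request_finished` signals. A toy model of that bookkeeping (a sketch for illustration only, not Django's actual code; `ConnMaxAgeModel` is a made-up name):

```python
import time

class ConnMaxAgeModel:
    """Toy model of CONN_MAX_AGE bookkeeping (illustrative only)."""

    def __init__(self, max_age):
        # As documented: None means "never expire", otherwise the
        # connection expires max_age seconds from now.
        self.close_at = None if max_age is None else time.monotonic() + max_age

    def is_obsolete(self):
        # With max_age=0 this is already true by the time the request
        # ends, so the connection should be closed immediately.
        return self.close_at is not None and time.monotonic() >= self.close_at
```

Given these semantics, `"CONN_MAX_AGE": 0` should already close every connection when the request finishes, which is what makes the observed behavior so odd.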
FYI, the pgbouncer config:
    [databases]
    ...= dbname=printedregistries_... host=... port=5432 user=... password=...

    [pgbouncer]
    auth_type=scram-sha-256
    auth_file=/etc/pgbouncer/users.txt
    pool_mode=session
    listen_addr=*
    listen_port=6432
    verbose=3

    # Connection limits
    max_client_conn=100
    default_pool_size=20
    min_pool_size=5
    reserve_pool_size=5
    max_db_connections=50
    max_user_connections=50

    # Timeouts to prevent connection accumulation
    server_idle_timeout=300
    server_lifetime=3600
    client_idle_timeout=300
    client_login_timeout=60

    # This is the SSL of the postgres server, not of what's served by pgbouncer.
    server_tls_sslmode=require
When flooded, there are two outcomes: either `query_wait_timeout` with the settings above, or `connection failed: connection to server at "...", port 6432 failed: ERROR: no more connections allowed (max_client_conn)` when only `max_client_conn` is set. That is, pgbouncer tries to compensate for the lack of connections by waiting, but of course that doesn't matter if they never come free.
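One observation about the config above: with `pool_mode=session`, pgbouncer pins a server connection to each client connection until that client disconnects, so connections Django holds open each occupy a backend and the pool can't hand them to anyone else. A possible alternative, per the pgbouncer and Django documentation (transaction pooling also requires `"CONN_MAX_AGE": 0` and `"DISABLE_SERVER_SIDE_CURSORS": True` on the Django side):

```ini
[pgbouncer]
# Transaction pooling: the server connection returns to the pool after
# each transaction, instead of staying pinned to one client for the session.
pool_mode = transaction
```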
How can I make django use database connections in a sane way? That is, how can I stop connections from staying open and 'locked' or 'occupied' even though the request has long finished?
This I tried:
- A client-side loading pool, loading at most 5 pages at once. Didn't help; the limit is eventually still reached.
- An intermediate pgbouncer. Didn't help; pgbouncer also doesn't re-use connections that are kept 'used' by django.
- Django's `CONN_MAX_AGE`, which seems to do nothing.
After logging every function call in `DatabaseWrapper` I still have no idea why django keeps my connections blocked, contrary to its docs. This issue drove me nuts, so I asked kilocode to fix it for me. The middleware below closes the database connections after each request; the issue is gone and I'm using pgbouncer to pool my connections. Seems to work well.
    import logging

    from django.db import connections
    from django.utils.deprecation import MiddlewareMixin

    logger = logging.getLogger('database-connection-cleanup')


    class DatabaseConnectionMiddleware(MiddlewareMixin):
        """
        Middleware to force database connection cleanup after each request.
        This addresses Django runserver's connection management issues.
        """

        def process_response(self, request, response):
            """Force close database connections after each request."""
            try:
                # Close all database connections
                for conn in connections.all():
                    if conn.connection is not None:
                        logger.debug(f"Closing connection for database: {conn.alias}")
                        conn.close()
                # Clear connection pool cache
                connections._databases = {}
            except Exception as e:
                logger.warning(f"Error closing database connections: {e}")
            return response

        def process_exception(self, request, exception):
            """Ensure connections are closed even if an exception occurs."""
            try:
                for conn in connections.all():
                    if conn.connection is not None:
                        logger.debug(f"Closing connection after exception for database: {conn.alias}")
                        conn.close()
            except Exception as e:
                logger.error(f"Error closing database connections after exception: {e}")
            return None
    MIDDLEWARE = [
        # ...
        'pr.middleware.DatabaseConnectionMiddleware',  # Force connection cleanup
    ]