ECS EC2 service task running but Target Group shows unhealthy and curl to localhost:8000/health fails
Problem Summary:
I’ve deployed a Django backend in ECS using EC2 launch type (not Fargate), behind an Application Load Balancer (ALB). The service runs a containerized Gunicorn server on port 8000, and the health check endpoint is /health/. While ECS shows one task running and healthy, the Target Group shows the task as unhealthy and curl to localhost:8000 fails from the EC2 instance.
Setup Details
Django app URL path for the health check:
from django.http import JsonResponse

def health_check(request):
    return JsonResponse({"status": "ok"}, status=200)

# urls.py
path("health/", views.health_check, name="health_check"),
Dockerfile:
FROM python:3.12
ENV PYTHONUNBUFFERED=1
WORKDIR /app
# Install pipenv
RUN pip install --upgrade pip
RUN pip install pipenv
# Install application dependencies
COPY Pipfile Pipfile.lock /app/
# We use the --system flag so packages are installed into the system python
# and not into a virtualenv. Docker containers don't need virtual environments.
RUN pipenv install --system --dev
# Copy the application files into the image
COPY . /app/
# Expose port 8000 on the container
EXPOSE 8000
CMD ["gunicorn", "Shop_Sphere.wsgi:application", "--bind", "0.0.0.0:8000"]
ECS task definition:
{
"taskDefinitionArn": "arn:aws:ecs:ap-south-1:562404438272:task-definition/Shop-Sphere-Task-Definition:7",
"containerDefinitions": [
{
"name": "Shop-Sphere-Container",
"image": "adsaa/ee:latest",
"cpu": 0,
"portMappings": [
{
"name": "django-port",
"containerPort": 8000,
"hostPort": 8000,
"protocol": "tcp",
"appProtocol": "http"
}
],
"essential": true,
"environment": [
{
"name": "RAZOR_KEY_ID",
"value": "keyid"
},
{
"name": "SECRET_KEY",
"value": "secretkey"
},
{
"name": "AWS_S3_CUSTOM_DOMAIN",
"value": "domain.cloudfront.net"
},
{
"name": "DEBUG",
"value": "False"
},
{
"name": "AWS_QUERYSTRING_EXPIRE",
"value": "15"
},
{
"name": "DJANGO_SETTINGS_MODULE",
"value": "Shop_Sphere.settings.prod"
},
{
"name": "DATABASE_URL",
"value": "postgres://sdadsdsda:dadsddadadaddadd@dadsdddadsdadda.ap-south-1.rds.amazonaws.com:5432/dasda"
},
{
"name": "CORS_EXPOSE_HEADERS",
"value": "X-CSRFToken"
},
{
"name": "ALLOWED_HOSTS",
"value": "*"
},
{
"name": "AWS_S3_REGION_NAME",
"value": "ap-south-1"
},
{
"name": "EMAIL_HOST_PASSWORD",
"value": "pass"
},
{
"name": "DEFAULT_FROM_EMAIL",
"value": "@gmail.com"
},
{
"name": "EMAIL_HOST_USER",
"value": "@gmail.com"
},
{
"name": "RAZOR_KEY_SECRET",
"value": "keysecert"
},
{
"name": "AWS_CLOUDFRONT_KEY_ID",
"value": "keyid"
},
{
"name": "AWS_STORAGE_BUCKET_NAME",
"value": "ddsadsdads"
},
{
"name": "REDIS_URL",
"value": "redis://clustesdadsddkjajdnsda.aps1.cache.amazonaws.com:6379"
}
],
"environmentFiles": [],
"mountPoints": [],
"volumesFrom": [],
"ulimits": [],
"healthCheck": {
"command": [
"CMD-SHELL",
"curl -f http://localhost:8000/health || exit 1"
],
"interval": 30,
"timeout": 5,
"retries": 3
},
"systemControls": []
}
],
"family": "Shop-Sphere-Task-Definition",
"taskRoleArn": "arn:aws:iam::562404438272:role/ECS-EC2-Role-For-S3-Access",
"executionRoleArn": "arn:aws:iam::562404438272:role/ecsTaskExecutionRole",
"networkMode": "awsvpc",
"revision": 7,
"volumes": [],
"status": "ACTIVE",
"requiresAttributes": [
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.24"
},
{
"name": "com.amazonaws.ecs.capability.task-iam-role"
},
{
"name": "ecs.capability.container-health-check"
},
{
"name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
},
{
"name": "ecs.capability.task-eni"
}
],
"placementConstraints": [],
"compatibilities": [
"EC2"
],
"requiresCompatibilities": [
"EC2"
],
"cpu": "1024",
"memory": "1024",
"runtimePlatform": {
"cpuArchitecture": "X86_64",
"operatingSystemFamily": "LINUX"
},
"registeredAt": "2025-07-06T08:06:23.502Z",
"registeredBy": "arn:aws:iam::562404438272:root",
"enableFaultInjection": false,
"tags": []
}
Load Balancer SG
Inbound: TCP 80 from 0.0.0.0/0
EC2 Instances SG
Inbound:
TCP 8000 from ALB Security Group
SSH 22 from 0.0.0.0/0
Both the load balancer and the EC2 instance belong to the same SG.
Django Settings:
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
ALLOWED_HOSTS = ["*"]
What I've Tried
✅ Verified Gunicorn runs: gunicorn Shop_Sphere.wsgi:application --bind 0.0.0.0:8000
✅ ps aux | grep gunicorn confirms Gunicorn is running as PIDs 1 and 7.
✅ python3 -c "import socket; s = socket.socket(); print(s.connect_ex(('localhost', 8000)))" returns 0 (port is open)
From the EC2 instance, curl http://localhost:8000/health returns:
curl: (7) Failed to connect to localhost port 8000: Connection refused
✅ From inside the container: curl -v http://localhost:8000/health shows:
[ec2-user@ip-10-0-1-14 ~]$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
408286a7c603 forkmemaybe/aws-shop-sphere:latest "gunicorn Shop_Spher…" 2 hours ago Up 2 hours (healthy) ecs-Shop-Sphere-Task-Definition-7-Shop-Sphere-Container-90b2d0fed8c7c7e91a00
f4bc560732d3 amazon/amazon-ecs-pause:0.1.0 "/pause" 2 hours ago Up 2 hours ecs-Shop-Sphere-Task-Definition-7-internalecspause-aaabecd7888ad49b2f00
50480628fcce amazon/amazon-ecs-agent:latest "/agent" 11 hours ago Up 11 hours (healthy) ecs-agent
[ec2-user@ip-20-4-9-18 ~]$ sudo docker exec -it 4082 /bin/bash
root@ip-10-0-2-77:/app# curl -v http://localhost:8000/health
* Trying 127.0.0.1:8000...
* Connected to localhost (127.0.0.1) port 8000 (#0)
> GET /health HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.88.1
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< Server: gunicorn
< Date: Sun, 06 Jul 2025 10:47:46 GMT
< Connection: close
< Transfer-Encoding: chunked
< Content-Type: text/html; charset=utf-8
< Location: https://localhost:8000/health
< X-Content-Type-Options: nosniff
< Referrer-Policy: same-origin
< Cross-Origin-Opener-Policy: same-origin
< Vary: origin
<
* Closing connection 0
So I’ve set the health check path to /health/, and exposed port 8000 in both the Dockerfile and the ECS task definition.
CloudFormation Events
While deploying ECS service:
Deployment failed: tasks failed to start.
ECS Deployment Circuit Breaker was triggered.
You are using the awsvpc network mode, which means the ECS container gets its own elastic network interface (ENI) in the VPC. So it is completely expected that curl http://localhost:8000/health fails from the EC2 instance; it only works from inside the container.
The problem is most likely the HTTP response returned by the /health endpoint:
HTTP/1.1 301 Moved Permanently
By default, an AWS load balancer target group only considers a 200 OK response to be successful. If it receives any other status code from the health check endpoint, it marks the target as unhealthy.
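One pragmatic workaround is to widen the target group's success-code matcher so the 301 passes the health check. A sketch using the AWS CLI (the target group ARN below is a placeholder, not yours):

```shell
# Accept 200 and 301 as healthy responses, and probe the trailing-slash path
# so Django's APPEND_SLASH redirect is not triggered either.
aws elbv2 modify-target-group \
  --target-group-arn arn:aws:elasticloadbalancing:ap-south-1:123456789012:targetgroup/example/0123456789abcdef \
  --health-check-path /health/ \
  --matcher HttpCode=200,301
```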
You should look into why your /health endpoint is returning a 301: with SECURE_SSL_REDIRECT = True, Django redirects the load balancer's plain-HTTP health check to HTTPS, because the check request carries no X-Forwarded-Proto: https header. However, the easiest way to fix your current issue is to update the target group's health check settings to also accept a 301 response code.