Running a Celery worker on an ECS task using SQS as a broker

I am building a web application that needs some long-running tasks to be executed on AWS ECS, using Celery as a distributed task queue. The problem I am facing is that my Celery worker running on ECS is not receiving tasks from SQS, even though it appears to be connected to it.

Following are the logs from the ECS task:

/usr/local/lib/python3.8/site-packages/celery/platforms.py:797: RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
  warnings.warn(RuntimeWarning(ROOT_DISCOURAGED.format(
 
 -------------- celery@ip-xxx-xxx-xxx-xxx.us-east-2.compute.internal v5.0.1 (singularity)
--- ***** ----- 
-- ******* ---- Linux-4.14.252-195.483.amzn2.x86_64-x86_64-with-glibc2.2.5 2021-12-14 06:39:58
- *** --- * --- 
- ** ---------- [config]
- ** ---------- .> app:         emptive_portal:0x7fbfda752310
- ** ---------- .> transport:   sqs://XXXXXXXXXXXXXXXX:**@localhost//
- ** ---------- .> results:     disabled://
- *** --- * --- .> concurrency: 2 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** ----- 
 -------------- [queues]
                .> emptive-celery2.fifo exchange=sync(direct) key=emptive-celery.fifo
                
[tasks]
 
  . import_export_celery.tasks.run_export_job
  . import_export_celery.tasks.run_import_job
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
2021-12-14 06:39:58 [INFO] Connected to sqs://XXXXXXXXXXXXXXXXX:**@localhost//
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
[Errno 2] No such file or directory: 'seq_tokens/emptive-staging_web_sequence_token.txt'
2021-12-14 06:39:58 [INFO] celery@ip-xxx-xxx-xxx-xxx.us-east-2.compute.internal ready.

To be noted: I have run the same container that I deployed to ECS locally, on the same machine as the Django webserver that is sending the tasks. That Celery worker has no problem receiving tasks.

I have also tried giving ecsTaskExecutionRole full permissions on SQS, but that doesn't seem to affect anything. Any help would be appreciated.
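
A quick sanity check in this situation is to exec into the running task and query SQS directly with boto3. Below is a minimal sketch, not from the original post; the queue name and region are taken from the worker logs above, so substitute your own:

    import boto3

    # Minimal SQS reachability check; run inside the ECS task.
    # Queue name and region match the worker logs above -- adjust as needed.
    sqs = boto3.client('sqs', region_name='us-east-2')

    # Raises an access-denied error if the task's IAM role lacks SQS
    # permissions, or a connection error if networking is the problem.
    queue_url = sqs.get_queue_url(QueueName='emptive-celery2.fifo')['QueueUrl']
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=['ApproximateNumberOfMessages'],
    )
    print(queue_url, attrs['Attributes'])

Also note that on ECS the application code inside the container gets its AWS permissions from the task role, not the task execution role: ecsTaskExecutionRole is what the ECS agent uses to pull images and ship logs, so SQS permissions generally belong on the task role instead.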

So I finally fixed this. The issue was really stupid on my part. :) I just had to replace BROKER_TRANSPORT_OPTIONS with CELERY_BROKER_TRANSPORT_OPTIONS in the Celery config.

New config:

    AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
    AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
    # SQS CONFIG
    CELERY_BROKER_URL = "sqs://%s:%s@" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    CELERY_ACCEPT_CONTENT = ['application/json']
    CELERY_RESULT_SERIALIZER = 'json'
    CELERY_TASK_SERIALIZER = 'json'
    CELERY_BROKER_TRANSPORT_OPTIONS = {
        'region': 'us-east-2',
        'polling_interval': 20,
    }
    CELERY_RESULT_BACKEND = None
    CELERY_ENABLE_REMOTE_CONTROL = False
    CELERY_SEND_EVENTS = False
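
For context on why the rename matters: with the standard Django integration, the Celery app loads its configuration from Django settings using a CELERY_ namespace, and any setting without that prefix is silently ignored. So the old BROKER_TRANSPORT_OPTIONS (including its 'region') likely never reached the worker, which would explain a worker that connects but never sees the queue's messages. A minimal sketch of that integration follows; the module and app names are illustrative, not from the original project:

    # celery.py -- sketch of the standard Django integration, assuming
    # illustrative module/app names ('myproject' is a placeholder).
    import os
    from celery import Celery

    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

    app = Celery('myproject')

    # With namespace='CELERY', only settings prefixed with CELERY_ are read,
    # which is why BROKER_TRANSPORT_OPTIONS (unprefixed) was ignored while
    # CELERY_BROKER_TRANSPORT_OPTIONS works.
    app.config_from_object('django.conf:settings', namespace='CELERY')
    app.autodiscover_tasks()

One separate caveat with SQS broker URLs: if the secret key contains characters such as '/' or '+', it must be URL-encoded before being interpolated into the URL (kombu ships kombu.utils.url.safequote for exactly this).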