You call task.delay(), grab the AsyncResult, check result.state, and see PENDING. You wait. You check again. Still PENDING. No error. No traceback. Just an eternal, unhelpful PENDING.
This is one of the most disorienting bugs in Flask + Celery development because it can mean at least four completely different things, and Celery won’t tell you which one it is.
PENDING almost always has one of four root causes: (1) no result backend configured, so Celery has nowhere to store task state; (2) the worker isn't running or is connected to a different broker or queue than your app; (3) the task module isn't imported by the worker, so the task never gets registered; or (4) the task crashes immediately because it needs a Flask application context that isn't there. Work through the diagnostic steps below to pinpoint yours.
Why PENDING Is Ambiguous
PENDING is Celery’s default state for any task ID it doesn’t recognize. This is the key insight that makes debugging so confusing: Celery doesn’t distinguish between “the task is waiting in the queue” and “this task ID doesn’t exist anywhere.”
If you ask for the state of a completely made-up task ID like AsyncResult("fake-id-xyz"), you’ll get PENDING. If your worker crashed before touching the task, you’ll get PENDING. If there’s no result backend, you’ll get PENDING even after the task finishes successfully. The state itself tells you almost nothing.
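You can check this yourself from a Python shell. A minimal demonstration, assuming your Celery instance is importable as myapp.celery, as in the examples later in this post:

from celery.result import AsyncResult
from myapp import celery  # wherever your Celery instance lives

# This ID was never dispatched, yet Celery happily reports PENDING
bogus = AsyncResult("fake-id-xyz", app=celery)
print(bogus.state)  # PENDING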
That’s why you need to diagnose which type of PENDING you’re dealing with before you can fix it.
Diagnosing the Root Cause
Add this small debug function to your Flask app before trying any fixes:
def inspect_celery(celery):
    """Dump Celery connection and worker state for debugging."""
    inspector = celery.control.inspect(timeout=3.0)
    active_workers = inspector.active()
    registered_tasks = inspector.registered()
    reserved_tasks = inspector.reserved()
    print("=== Celery Diagnostic ===")
    print(f"Active workers: {list(active_workers.keys()) if active_workers else 'NONE - no workers reachable'}")
    print(f"Registered tasks: {registered_tasks}")
    print(f"Reserved (queued) tasks: {reserved_tasks}")
    print(f"Broker URL: {celery.conf.broker_url}")
    print(f"Result backend: {celery.conf.result_backend}")
Call this after starting your Flask app. If Active workers shows NONE, your problem is in worker connectivity. If workers appear but your task isn’t in Registered tasks, your problem is task registration. If everything looks right but state is still PENDING, your problem is the result backend.
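The quickest way to run it is from flask shell (or a short throwaway script) in the same environment your Flask app uses. This sketch assumes your Celery instance is importable as myapp.celery and that inspect_celery is pasted in or imported from wherever you saved it:

# In `flask shell`:
from myapp import celery

inspect_celery(celery)
# === Celery Diagnostic ===
# Active workers: ['celery@my-laptop']   <- example output; yours will differ
# ...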
Scenario 1: No Result Backend Configured
This is the single most common cause. Without a result backend, Celery executes tasks just fine — but it has nowhere to write the result or update the task state. Every AsyncResult.state call hits a dead end and returns PENDING.
Symptom: The task actually runs (you might see print statements or side effects), but .state always returns PENDING and .result returns None.
# The problematic setup: broker only, no backend
celery = Celery('myapp')
celery.conf.update(
    broker_url='redis://localhost:6379/0',
    # result_backend is missing entirely
)

@celery.task
def send_email(user_id):
    # This runs fine in the worker, but no one will ever know
    deliver_email(user_id)  # stand-in for whatever actually sends the mail
    return "sent"

# Fire the task and check state:
result = send_email.delay(42)
print(result.state)   # PENDING, even after the task finished
print(result.result)  # None
The fix: Add a result backend. Redis is the most common choice when you’re already using it as a broker:
celery = Celery('myapp')
celery.conf.update(
broker_url='redis://localhost:6379/0',
result_backend='redis://localhost:6379/1', # use a different DB index
result_expires=3600, # clean up results after 1 hour
)
If you prefer to keep state in your database, SQLAlchemy works too:
celery.conf.update(
broker_url='redis://localhost:6379/0',
result_backend='db+postgresql://user:pass@localhost/mydb',
)
After adding the backend, restart both your Flask app and your worker so they pick up the new setting. Tasks that were already queued before this change won't retroactively get their state updated; only new tasks will.
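A quick way to confirm the backend is actually wired up is to dispatch a task and block on its result. .get() only works when a result backend is configured, so if this returns instead of hanging, you're set (assuming the worker is running and the send_email task above is registered):

result = send_email.delay(42)
print(result.get(timeout=10))  # "sent" -- raises if the task failed or timed out
print(result.state)            # SUCCESS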
Scenario 2: Worker Not Running or Wrong Queue
If your worker isn’t running, or it’s running but connected to a different Redis database or broker than your Flask app, tasks pile up in the queue unprocessed and sit in PENDING indefinitely.
Symptom: celery.control.inspect().active() returns None or an empty dict. Tasks accumulate in Redis but never execute.
# Check if any workers are running:
celery -A myapp.celery inspect active
# You'll see this if nothing is running:
# Error: No nodes replied within time constraint.
This is also a common gotcha when using environment-specific Redis URLs. Your Flask app might point to redis://localhost:6379/0 in development, but the worker was started with a different CELERY_BROKER_URL env var pointing to a different database or host entirely.
The fix: Make sure your worker is started pointing at the same broker as your Flask app:
# Start the worker explicitly specifying your app module:
celery -A myapp.celery worker --loglevel=info
# Or with concurrency settings for production:
celery -A myapp.celery worker --loglevel=info --concurrency=4 -Q default,high_priority
For environment parity, always configure the broker URL from an environment variable in both your Flask config and your Celery startup:
import os
celery = Celery('myapp')
celery.conf.update(
broker_url=os.environ.get('CELERY_BROKER_URL', 'redis://localhost:6379/0'),
result_backend=os.environ.get('CELERY_RESULT_BACKEND', 'redis://localhost:6379/1'),
)
When you start the worker, export CELERY_BROKER_URL to match what your Flask app is using. A mismatch here is easy to introduce when switching between local dev and Docker environments.
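To make a mismatch fail loudly instead of silently, you can log the effective URLs at startup in both processes. A small sketch you could call from create_app and from your worker entry point; the function name is just an example:

import logging
import os

logger = logging.getLogger(__name__)

def log_celery_config(celery):
    """Log the broker/backend actually in use; both processes should print the same values."""
    logger.info("Celery broker: %s", celery.conf.broker_url)
    logger.info("Celery result backend: %s", celery.conf.result_backend)
    if 'CELERY_BROKER_URL' not in os.environ:
        logger.warning("CELERY_BROKER_URL is not set; using the hardcoded default")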
Scenario 3: Task Not Registered by the Worker
Celery workers only know about tasks in modules they've explicitly imported. If your task is defined in myapp/tasks/email.py but the worker never imports that module, it can't execute the task. Your Flask app happily queues the message, the worker picks it up, logs a "Received unregistered task" error, and discards it, so the state never moves past PENDING.
Symptom: celery.control.inspect().registered() doesn’t include your task name.
# Check registered tasks from a shell:
from myapp import celery
inspector = celery.control.inspect(timeout=3.0)
registered = inspector.registered()
print(registered)
# {'worker1@hostname': ['celery.backend_cleanup', 'celery.chord_unlock']}
# Your task is not in here — that's the problem
The fix is to either explicitly import task modules or use Celery’s autodiscover_tasks. Here’s the correct pattern with a Flask application factory:
# myapp/__init__.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from celery import Celery

db = SQLAlchemy()
celery = Celery()

def create_app(config=None):
    app = Flask(__name__)
    app.config.from_object(config or 'myapp.config.DevelopmentConfig')

    # Initialize extensions
    db.init_app(app)

    # Configure Celery
    celery.conf.update(
        broker_url=app.config['CELERY_BROKER_URL'],
        result_backend=app.config['CELERY_RESULT_BACKEND'],
    )

    # CRITICAL: import the task modules so the worker registers them.
    # related_name=None imports these module paths directly;
    # force=True imports them immediately instead of waiting for finalization.
    celery.autodiscover_tasks(
        ['myapp.tasks.email', 'myapp.tasks.reports'],
        related_name=None,
        force=True,
    )

    return app
# myapp/tasks/email.py
from myapp import celery
@celery.task(name='myapp.tasks.email.send_email')
def send_email(user_id, subject, body):
# explicitly named tasks are easier to inspect
...
After updating, restart your worker. You should see your task names appear in inspector.registered().
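The check looks like this once registration works (worker name and task list will differ in your setup; the reports task is just an illustrative name):

inspector = celery.control.inspect(timeout=3.0)
print(inspector.registered())
# {'worker1@hostname': ['celery.backend_cleanup',
#                       'celery.chord_unlock',
#                       'myapp.tasks.email.send_email',
#                       'myapp.tasks.reports.generate_report']}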
Scenario 4: Flask App Context Missing in Task
This one bites developers who move from small scripts to proper Flask apps. When Celery executes a task in a worker process, there is no active Flask application context. Any code that touches db, current_app, g, or anything else that depends on the app context raises RuntimeError: Working outside of application context, so the task dies on its first statement. Combined with any of the problems above (a missing result backend in particular), the state you see from Flask never moves past PENDING.
Symptom: The task appears in worker logs as received but doesn’t complete, and you see RuntimeError: Working outside of application context in the worker output.
# This task crashes on the first line in a worker:
@celery.task
def process_report(report_id):
report = Report.query.get(report_id) # db needs app context — boom
report.status = 'processing'
db.session.commit()
The standard fix is to push an app context inside the task:
# Option 1: Push context manually in every task (verbose but explicit)
@celery.task
def process_report(report_id):
from myapp import create_app
app = create_app()
with app.app_context():
report = Report.query.get(report_id)
report.status = 'processing'
db.session.commit()
That works but gets repetitive. A cleaner approach is a custom task base class that wraps every task call in an app context automatically:
# myapp/celery_utils.py
from celery import Task
class FlaskTask(Task):
"""A Celery Task subclass that pushes a Flask app context before running."""
def __call__(self, *args, **kwargs):
from myapp import create_app
app = create_app()
with app.app_context():
return super().__call__(*args, **kwargs)
# Register it as the default base class:
celery = Celery('myapp', task_cls=FlaskTask)
Now every task in your app gets an app context for free:
@celery.task
def process_report(report_id):
# app context is already active, db works fine
report = Report.query.get(report_id)
report.status = 'processing'
db.session.commit()
return report.id
This pattern is also much easier to test — you can push a test app context once in your fixtures rather than mocking every database call. If you’re struggling with related Flask context errors, check out our deep dive on Flask’s request context errors for more on how Flask manages its context stack.
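A minimal pytest sketch of that idea, assuming the application factory from earlier and a hypothetical TestingConfig; the fixture pushes one context for the whole test:

# conftest.py
import pytest
from myapp import create_app
from myapp.tasks.reports import process_report  # illustrative import path

@pytest.fixture
def app_ctx():
    app = create_app('myapp.config.TestingConfig')  # hypothetical test config
    with app.app_context():
        yield app

def test_process_report(app_ctx):
    # .run() calls the task body directly, inside the context the fixture pushed
    process_report.run(report_id=1)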
Prevention Tips
Once you’ve fixed the immediate problem, a few practices will stop these issues from coming back.
Always log task failures and retries. Binding the task (bind=True) lets it retry itself on transient errors, and the on_failure, on_success, and on_retry hooks give you visibility that the default setup doesn't. A retry pattern first, with a hook-based sketch after it:
@celery.task(
bind=True,
max_retries=3,
default_retry_delay=60,
)
def resilient_task(self, payload):
try:
result = do_work(payload)
return result
except TemporaryError as exc:
raise self.retry(exc=exc)
except Exception as exc:
# Log to your error tracker here
raise
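For the hook-based visibility mentioned above, a custom base class is a reasonable home. This is a sketch, with plain logging standing in for your error tracker and an illustrative task at the end:

import logging
from celery import Task

logger = logging.getLogger(__name__)

class LoggingTask(Task):
    """Base class that logs the state transitions we care about."""

    def on_success(self, retval, task_id, args, kwargs):
        logger.info("Task %s[%s] succeeded", self.name, task_id)

    def on_retry(self, exc, task_id, args, kwargs, einfo):
        logger.warning("Task %s[%s] retrying: %r", self.name, task_id, exc)

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        logger.error("Task %s[%s] failed: %r", self.name, task_id, exc)

# Opt in per task with base=, or pass task_cls=LoggingTask to Celery()
@celery.task(base=LoggingTask)
def send_report_email(report_id):  # illustrative task name
    ...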
Set explicit timeouts. A task with no timeout can hang forever, blocking a worker slot and contributing to backpressure:
@celery.task(
soft_time_limit=30, # raises SoftTimeLimitExceeded after 30s
time_limit=60, # kills the task hard at 60s
)
def time_sensitive_task(data):
from celery.exceptions import SoftTimeLimitExceeded
try:
return process(data)
except SoftTimeLimitExceeded:
# clean up and return a partial result
return None
Separate queues for separate priorities. A single default queue means a flood of low-priority tasks can starve high-priority ones:
# Route tasks to separate queues
celery.conf.task_routes = {
'myapp.tasks.email.*': {'queue': 'email'},
'myapp.tasks.reports.*': {'queue': 'reports'},
}
Start dedicated workers for each queue, and you can scale them independently.
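For a one-off call that needs different treatment, apply_async can override the route at dispatch time (the task name here is illustrative):

# Send this particular call to the high_priority queue, regardless of task_routes
send_password_reset.apply_async(args=[user_id], queue='high_priority')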
Monitor task age in production. If a task has been PENDING for more than a few seconds and your worker should be processing it instantly, something is wrong. Attach a dispatch timestamp to your payloads (or watch queue depth) and alert when messages sit too long, or use Flower (the Celery monitoring UI) to see queued task counts in real time.
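For a quick look at queue depth without Flower, you can ask Redis directly; with the Redis broker, each queue is a list named after the queue (the default is celery). A sketch, assuming the redis-py package and the broker URL from earlier:

import redis

r = redis.Redis.from_url('redis://localhost:6379/0')
for queue in ('celery', 'email', 'reports'):
    print(queue, r.llen(queue))  # number of messages waiting in each queue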
Still Not Working?
A few less common causes worth checking if the above didn’t fix it:
Serialization errors at dispatch time. If the arguments you pass to .delay() can't be serialized by Celery's default JSON serializer (custom Python objects, SQLAlchemy model instances, open file handles, and so on), the call may fail before the task ever hits the queue. Pass primitive types, such as IDs, strings, and numbers, not objects.
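In practice that means passing a primary key and re-fetching the object inside the task:

# Bad: a SQLAlchemy model instance can't go through the JSON serializer
send_email.delay(user)

# Good: pass the ID and let the task load the user itself
send_email.delay(user.id)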
Broker connection is intermittent. Sporadic Redis connection errors can cause tasks to be dropped before they're acknowledged. Add broker_transport_options to tighten socket timeouts, and enable connection retries at startup:
celery.conf.update(
broker_transport_options={
'visibility_timeout': 3600,
'socket_timeout': 10,
'socket_connect_timeout': 10,
},
broker_connection_retry_on_startup=True,
)
Task ID collision. If you're generating your own task IDs and reuse one, the two dispatches end up sharing a single result entry, so the state and result you read may belong to the other execution. Let Celery generate IDs automatically unless you have a specific reason to override.
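If you do supply your own IDs (apply_async accepts a task_id argument), make sure each dispatch gets a unique value; reusing one looks like this and is rarely what you want:

# Both dispatches run, but they write to the same result/state entry
send_email.apply_async(args=[42], task_id='send-email-42')
send_email.apply_async(args=[42], task_id='send-email-42')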
Worker prefetch count too high. If a worker prefetches too many tasks at once, tasks can sit in a worker’s local queue rather than the broker. Other workers that could process them don’t see them. Set worker_prefetch_multiplier=1 for long-running tasks:
celery.conf.worker_prefetch_multiplier = 1
Summary Checklist
Work through this list top-to-bottom when you see tasks stuck in PENDING:
- [ ] celery.conf.result_backend is set (not None or missing)
- [ ] The worker is running and reachable: celery inspect active returns worker names
- [ ] The broker URL matches between Flask app config and the running worker
- [ ] Your task module is imported by the worker: celery inspect registered includes your task
- [ ] Tasks that use Flask extensions are wrapped in an app context
- [ ] Task arguments are JSON-serializable primitives, not Python objects
- [ ] task_time_limit is set to prevent hung tasks from blocking workers
Celery’s PENDING state is frustratingly vague, but every instance of it has a concrete cause. Once you know which category yours falls into, the fix is usually a one-liner. Use Debugly’s trace formatter to quickly parse any Python tracebacks that appear in your Celery worker logs — pasting a raw traceback from a crashed task will show you exactly where execution stopped and what triggered the failure. You can also check our guide on Flask SQLAlchemy session errors if your tasks are failing specifically because of database transaction problems.