Celery and Redis in FastAPI: Why You Need Them and How to Set Them Up

A plain-English guide to why FastAPI needs Celery and Redis for background tasks, what they actually do, and a step-by-step setup including auto-reload for development.

Setup steps in this post are based on the excellent tutorials at FastAPI Tutorial — First Steps with Celery and Auto-reloading Celery when files are modified. Highly recommended if you want to go deeper.


Imagine your API needs to send an email, process an uploaded PDF, call a slow third-party API, or send push notifications to thousands of devices.

You could do it synchronously, right inside the request handler.

The user sends a request. Your endpoint does the work. Responds when done.

This works until it does not. Slow tasks block the server thread. Response times spike. Users see timeouts. Your API feels broken, even though the code is technically correct.

This is the exact problem Celery and Redis solve.

The Core Problem with Synchronous Backends

FastAPI is async-capable, but async alone does not fully solve CPU-bound or I/O-heavy background workloads.

Consider a push notification endpoint. You need to call Firebase or SNS for each device token. For ten tokens, fine. For ten thousand tokens, your request suddenly takes minutes.

No amount of async def will fix that. You need to hand the work off entirely, respond immediately to the user, and let something else process in the background.

That is the task queue pattern, and Celery is one of the best tools for it in Python.

What Redis Is Doing Here

Redis is not the task runner. It is the message broker and result store.

When your FastAPI app creates a task, it does not run the task. It puts a message into Redis saying “hey, there is work to do.” Celery workers are watching Redis, pick that message up, and execute the task.

So the flow is:

FastAPI app → puts task message → Redis (broker)

Redis (broker) → Celery worker picks the message up

Celery worker → executes task, writes result → Redis (result backend)

The app responds to the user almost instantly. The real work happens separately.

Why Not Just Use FastAPI’s Built-in BackgroundTasks?

FastAPI does have a BackgroundTasks utility and it is useful for simple fire-and-forget tasks within the same process.

But it has real limits:

  • Tasks run in the same process as your web server.
  • One slow task can impact other requests in the same worker.
  • You cannot distribute work across multiple machines.
  • No retry logic, no monitoring, no task queuing.

Celery gives you all of that. Use FastAPI BackgroundTasks for lightweight, low-risk work. Use Celery when tasks are slow, critical, or need to scale across servers.

Setting It Up Step by Step

1. Install the Dependencies

Create a requirements.txt:

uvicorn==0.21.1
fastapi==0.95.0
redis==4.5.4
celery==5.2.7
python-dotenv==1.0.0

Install with:

pip install -r requirements.txt

Windows users: Celery dropped official Windows support in version 4, so the worker may misbehave out of the box. The most common workaround is to run the worker with the solo pool: celery -A main.celery worker --loglevel=info --pool=solo.

2. Set Up Project Structure

Keep things clean from the start:

📁 ./
├─ config.py
├─ main.py
├─ .env
├─ .gitignore
├─ requirements.txt
└─ 📁 env/

3. Configure Environment Variables

Put your broker and backend URLs in .env:

CELERY_BROKER_URL=redis://127.0.0.1:6379/0
CELERY_RESULT_BACKEND=redis://127.0.0.1:6379/0

And add .env to .gitignore so credentials never end up in version control.

This pattern also means each developer can point to their own local Redis, and staging/production can use a real server IP without changing any code.

4. Load Config Cleanly

Create config.py:

import os
from dotenv import load_dotenv

load_dotenv()

class Config:
    CELERY_BROKER_URL: str = os.environ.get("CELERY_BROKER_URL", "redis://127.0.0.1:6379/0")
    CELERY_RESULT_BACKEND: str = os.environ.get("CELERY_RESULT_BACKEND", "redis://127.0.0.1:6379/0")

settings = Config()

5. Start Redis

The easiest way is Docker:

docker run -p 6379:6379 --name redis_service -d redis

Check it is running:

docker ps

You should see something like:

CONTAINER ID   IMAGE   STATUS         PORTS                   NAMES
80c3394ec097   redis   Up 8 seconds   0.0.0.0:6379->6379/tcp  redis_service

And send a quick ping to confirm:

docker exec -t redis_service redis-cli ping

You should get PONG back. If you do, Redis is ready.

6. Write the FastAPI App with a Celery Task

Now main.py:

import time
from fastapi import FastAPI
from celery import Celery

from config import settings

app = FastAPI()

celery = Celery(
    __name__,
    broker=settings.CELERY_BROKER_URL,
    backend=settings.CELERY_RESULT_BACKEND
)

@celery.task
def send_push_notification(device_token: str):
    time.sleep(10)  # simulates a slow external network call
    with open("notification.log", mode="a") as notification_log:
        response = f"Successfully sent push notification to: {device_token}\n"
        notification_log.write(response)

@app.get("/push/{device_token}")
async def notify(device_token: str):
    send_push_notification.delay(device_token)
    return {"message": "Notification sent"}

Notice .delay() on the task call. That is what hands it off to Celery asynchronously, instead of running it in the request thread.

7. Start the Celery Worker

In a separate terminal:

celery -A main.celery worker --loglevel=info

You should see output like:

-------------- celery@yourmachine v5.2.7
.> transport:   redis://127.0.0.1:6379/0
.> results:     redis://127.0.0.1:6379/0
.> concurrency: 12 (prefork)

[tasks]
  . main.send_push_notification

Three things worth noting here:

  • The broker and result backend are confirmed.
  • The default queue is named celery.
  • Celery has picked up and registered your task.

8. Start the API Server

In another terminal:

uvicorn main:app --reload

Now hit GET /push/{device_token} a few times. You will see:

  • The API responds in milliseconds every time.
  • Celery processes the tasks in parallel in the background.
  • Logs appear in notification.log after the simulated 10-second delay.

That is the entire point demonstrated in one working example.

The Development Pain: Celery Does Not Auto-Reload

Once you start iterating on your task logic, you will hit an annoying problem fast.

FastAPI auto-reloads via uvicorn when you save a file. Celery does not. You have to manually kill and restart the worker every time you change a task.

This kills development flow.

Fix: Auto-Reload with Watchdog

Add watchdog to your requirements.txt:

watchdog==3.0.0

Install it:

pip install -r requirements.txt

Then start your Celery worker like this instead:

watchmedo auto-restart --directory=./ --patterns="*.py" --recursive -- celery -A main.celery worker --loglevel=info

What this does: watchmedo monitors every .py file in the current directory recursively. The moment any file changes, it kills the Celery process and restarts it automatically.

Your development loop now feels the same as working with uvicorn.

What You Have Now

After following these steps, your architecture looks like this:

  • FastAPI handles HTTP requests and dispatches work.
  • Redis acts as the message bus between your app and workers.
  • Celery workers execute long-running tasks in a separate process.
  • Watchdog keeps Celery auto-reloading during development.

This is a solid foundation for production-grade async task processing. From here, you can add task monitoring with Flower, retry logic with Celery’s built-in retry(), multiple queues for task priority, and distributed workers across servers.

When to Actually Use This Pattern

Not every slow function needs Celery. Use it when:

  • tasks take more than a few hundred milliseconds,
  • you need retry on failure,
  • task failures should not crash the request,
  • work needs to be distributed across multiple machines,
  • you want to queue and process work at a controlled rate.

If the task is fast and fire-and-forget with no retry needs, FastAPI BackgroundTasks is often enough.

If the task is slow, critical, or needs to scale, reach for Celery.
