You’re building a FastAPI app, everything seems fine during development, but suddenly your API starts crawling. Requests that should take milliseconds are taking seconds. Sometimes the entire server just hangs. You check your database, your network—everything looks fine. Then you realize: you’ve been mixing synchronous and asynchronous code incorrectly, and you’re blocking the event loop.
This is one of the trickiest issues FastAPI developers face. Unlike traditional frameworks where everything runs synchronously, FastAPI leverages Python’s async capabilities for incredible performance. But with great power comes great responsibility—one blocking call in an async function can bring your entire server to its knees.
TLDR: Quick Fix for Async/Sync Blocking
Most Common Cause: Running blocking I/O operations (database calls, file reads, API requests) in async functions without proper handling.
❌ Before (blocks event loop):
```python
import time
from fastapi import FastAPI

app = FastAPI()

@app.get("/slow")
async def slow_endpoint():
    # This blocks the entire event loop for 5 seconds!
    time.sleep(5)
    return {"message": "Done"}

# While one request sleeps, ALL other requests wait
```
✅ After (properly async):
```python
import asyncio
from fastapi import FastAPI

app = FastAPI()

@app.get("/fast")
async def fast_endpoint():
    # This yields control to other requests
    await asyncio.sleep(5)
    return {"message": "Done"}

# Other requests can be processed during the sleep
```
Quick Prevention Tips:
- Use `await` with async operations (never use blocking calls in async functions)
- Use `def` (not `async def`) for CPU-bound or blocking I/O operations
- Wrap blocking code with `run_in_executor()` if you must call it from async
- Use async database drivers (asyncpg, motor, databases)
Symptom Description
When you’re blocking the event loop, you’ll see these symptoms:
Performance Issues:
- API responds slowly even with low traffic
- Response times increase linearly with concurrent requests
- Server can’t handle more than a few requests at once
- One slow endpoint makes ALL endpoints slow
Error Messages:
```
# You might see warnings like:
RuntimeWarning: coroutine 'get_data' was never awaited

# Or errors like:
RuntimeError: Task got bad yield: <some_object>

# Or the dreaded:
RuntimeError: no running event loop
```
Mysterious Behavior:
- Endpoints work fine when tested individually
- Load testing reveals terrible performance
- Adding more workers doesn’t help
- CPU usage is low but requests are slow
If any of this sounds familiar, you’re likely blocking the event loop somewhere.
Diagnostic Steps
Before we fix anything, let’s identify where the problem is.
Step 1: Check Your Function Definitions
Look at how you’ve defined your route handlers:
```python
# Async function - runs on the event loop, expects to use await
@app.get("/data")
async def get_data():
    pass

# Sync function - runs in a thread pool automatically
@app.get("/sync-data")
def get_sync_data():
    pass
```
FastAPI handles these differently. Async functions run on the event loop, sync functions run in a thread pool. If you’re using async def but calling blocking code inside, that’s your problem.
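You can see the difference with nothing but the standard library. In this sketch (plain asyncio, no FastAPI involved), three tasks that call `time.sleep()` run one after another because each one blocks the loop, while three tasks that `await asyncio.sleep()` overlap:

```python
import asyncio
import time

async def blocking_task():
    time.sleep(0.2)  # blocks the whole event loop

async def yielding_task():
    await asyncio.sleep(0.2)  # yields so other tasks can run

async def run_three(task_factory):
    start = time.perf_counter()
    await asyncio.gather(task_factory(), task_factory(), task_factory())
    return time.perf_counter() - start

blocked = asyncio.run(run_three(blocking_task))   # roughly 0.6s: serialized
yielded = asyncio.run(run_three(yielding_task))   # roughly 0.2s: concurrent
print(f"blocking: {blocked:.2f}s, yielding: {yielded:.2f}s")
```

The same serialization happens to every request that hits an `async def` route containing blocking code.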
Step 2: Add Timing Middleware
See which endpoints are slow:
```python
import time
from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def add_timing_header(request: Request, call_next):
    start_time = time.time()
    response = await call_next(request)
    process_time = time.time() - start_time
    response.headers["X-Process-Time"] = str(process_time)
    print(f"{request.url.path}: {process_time:.2f}s")
    return response
```
Any endpoint taking longer than expected is a candidate for investigation.
Step 3: Use Python’s Async Debug Mode
Run your app with async debugging enabled:
```bash
# This will warn you about common async mistakes
PYTHONASYNCIODEBUG=1 uvicorn main:app --reload
```
This catches issues like forgetting await or running blocking operations.
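Debug mode can also be enabled programmatically with `asyncio.run(..., debug=True)`. In this minimal stdlib sketch (no FastAPI involved), the loop logs a warning for any task step that runs longer than `loop.slow_callback_duration` (100 ms by default), which is exactly how a hidden `time.sleep()` gets caught:

```python
import asyncio
import logging
import time

logging.basicConfig(level=logging.WARNING)

async def sneaky():
    time.sleep(0.2)  # a blocking call hiding inside a coroutine

async def main():
    await sneaky()

# debug=True is the programmatic equivalent of PYTHONASYNCIODEBUG=1;
# this run logs something like "Executing <Task ...> took 0.200 seconds"
asyncio.run(main(), debug=True)
```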
Step 4: Profile with Load Testing
Use a tool like wrk or locust to generate concurrent requests:
```bash
# Install wrk (on Ubuntu: apt-get install wrk)

# Test with 10 threads and 10 concurrent connections for 30 seconds
wrk -t10 -c10 -d30s http://localhost:8000/api/endpoint

# Interpreting the results:
# Good: Requests/sec in the hundreds or thousands
# Bad: Requests/sec in the single digits with high latency
```
Cause #1: Blocking Database Calls
The most common mistake—using synchronous database libraries in async routes.
The Problem:
```python
from fastapi import FastAPI
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Synchronous database setup
engine = create_engine("postgresql://user:pass@localhost/db")
SessionLocal = sessionmaker(bind=engine)

app = FastAPI()

@app.get("/users")
async def get_users():
    # This is an async function...
    db = SessionLocal()
    # But this database call is BLOCKING!
    # While it runs, NO other requests can be processed
    users = db.query(User).all()
    db.close()
    return users
```
When you call db.query(User).all(), it’s a synchronous operation. Your async function waits, but it’s not using await, so it blocks the entire event loop. All other incoming requests have to wait.
The Fix (Option 1): Use Async Database Driver
```python
from fastapi import FastAPI
from sqlalchemy import select
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

# Async database setup
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/db"
)
AsyncSessionLocal = sessionmaker(
    engine, class_=AsyncSession, expire_on_commit=False
)

app = FastAPI()

@app.get("/users")
async def get_users():
    async with AsyncSessionLocal() as db:
        # Now using await - yields control during I/O
        result = await db.execute(select(User))
        users = result.scalars().all()
        return users
```
The Fix (Option 2): Make Route Handler Sync
```python
from fastapi import FastAPI
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine("postgresql://user:pass@localhost/db")
SessionLocal = sessionmaker(bind=engine)

app = FastAPI()

# Changed to regular 'def' - FastAPI runs this in a thread pool
@app.get("/users")
def get_users():
    db = SessionLocal()
    users = db.query(User).all()
    db.close()
    return users
```
When you use def instead of async def, FastAPI automatically runs your function in a thread pool. The blocking database call won’t block the event loop anymore.
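The stdlib equivalent of what FastAPI does here is `asyncio.to_thread()` (Python 3.9+). This sketch shows the event loop staying responsive, firing heartbeats, while a blocking function (a stand-in for the sync database query) runs in a worker thread:

```python
import asyncio
import time

def blocking_io():
    time.sleep(0.3)  # stands in for a synchronous DB query
    return "rows"

async def heartbeat(ticks: list):
    # Proves the loop keeps running while the thread works
    for _ in range(3):
        await asyncio.sleep(0.05)
        ticks.append(time.perf_counter())

async def main():
    ticks = []
    result, _ = await asyncio.gather(
        asyncio.to_thread(blocking_io),  # same idea as FastAPI's thread pool
        heartbeat(ticks),
    )
    return result, len(ticks)

result, beats = asyncio.run(main())
print(result, beats)  # all heartbeats fired during the blocking call
```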
The Fix (Option 3): Run in Executor
If you MUST keep the async function and use sync code:
```python
import asyncio
from fastapi import FastAPI

app = FastAPI()

def blocking_db_query():
    """Synchronous database operation"""
    db = SessionLocal()
    users = db.query(User).all()
    db.close()
    return users

@app.get("/users")
async def get_users():
    # Run the blocking function in a thread pool
    # (get_running_loop is the right call inside a coroutine)
    loop = asyncio.get_running_loop()
    users = await loop.run_in_executor(None, blocking_db_query)
    return users
```
Cause #2: Blocking File Operations
Reading or writing files with standard library functions blocks the event loop.
The Problem:
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/logs")
async def get_logs():
    # This blocks while reading the file!
    # The event loop is stuck for the entire read
    with open("/var/log/app.log", "r") as f:
        logs = f.read()
    return {"logs": logs}
```
The Fix (Option 1): Use aiofiles
```python
import aiofiles
from fastapi import FastAPI

app = FastAPI()

@app.get("/logs")
async def get_logs():
    # Async file reading - doesn't block
    async with aiofiles.open("/var/log/app.log", "r") as f:
        logs = await f.read()
    return {"logs": logs}
```
Install with: pip install aiofiles
The Fix (Option 2): Use Regular Function
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/logs")
def get_logs():
    # Sync function - FastAPI handles threading
    with open("/var/log/app.log", "r") as f:
        logs = f.read()
    return {"logs": logs}
```
The Fix (Option 3): Run in Executor
```python
import asyncio
from fastapi import FastAPI

app = FastAPI()

def read_log_file():
    with open("/var/log/app.log", "r") as f:
        return f.read()

@app.get("/logs")
async def get_logs():
    loop = asyncio.get_running_loop()
    logs = await loop.run_in_executor(None, read_log_file)
    return {"logs": logs}
```
Cause #3: Blocking HTTP Requests
Using synchronous HTTP clients like the `requests` library in async functions.
The Problem:
```python
import requests
from fastapi import FastAPI

app = FastAPI()

@app.get("/weather")
async def get_weather(city: str):
    # This blocks the event loop during the HTTP request!
    # While waiting for the API response, nothing else can run
    response = requests.get(
        f"https://api.weather.com/v1/forecast?city={city}"
    )
    return response.json()
```
The Fix (Option 1): Use httpx Async Client
```python
import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/weather")
async def get_weather(city: str):
    # Async HTTP request - properly yields during I/O
    async with httpx.AsyncClient() as client:
        response = await client.get(
            f"https://api.weather.com/v1/forecast?city={city}"
        )
    return response.json()
```
Install with: pip install httpx
The Fix (Option 2): Reuse Client for Better Performance
```python
import httpx
from fastapi import FastAPI

app = FastAPI()

# Created once at startup and reused for all requests,
# then closed cleanly at shutdown
http_client = None

@app.on_event("startup")
async def startup():
    global http_client
    http_client = httpx.AsyncClient()

@app.on_event("shutdown")
async def shutdown():
    await http_client.aclose()

@app.get("/weather")
async def get_weather(city: str):
    response = await http_client.get(
        f"https://api.weather.com/v1/forecast?city={city}"
    )
    return response.json()
```
The Fix (Option 3): Use aiohttp
```python
import aiohttp
from fastapi import FastAPI

app = FastAPI()

@app.get("/weather")
async def get_weather(city: str):
    async with aiohttp.ClientSession() as session:
        async with session.get(
            f"https://api.weather.com/v1/forecast?city={city}"
        ) as response:
            data = await response.json()
    return data
```
Install with: pip install aiohttp
Cause #4: CPU-Bound Operations
Long-running computations block the event loop.
The Problem:
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/compute")
async def compute_result(n: int):
    # CPU-intensive calculation blocks everything
    # The event loop is stuck for the entire computation
    result = sum(i * i for i in range(n))
    return {"result": result}
```
The Fix (Option 1): Use Regular Function
```python
from fastapi import FastAPI

app = FastAPI()

# FastAPI runs this in a thread pool
@app.get("/compute")
def compute_result(n: int):
    result = sum(i * i for i in range(n))
    return {"result": result}
```
The Fix (Option 2): Use ProcessPoolExecutor for Heavy Work
```python
import asyncio
from concurrent.futures import ProcessPoolExecutor
from fastapi import FastAPI

app = FastAPI()

executor = ProcessPoolExecutor(max_workers=4)

def heavy_computation(n: int):
    """This runs in a separate process"""
    return sum(i * i for i in range(n))

@app.get("/compute")
async def compute_result(n: int):
    loop = asyncio.get_running_loop()
    result = await loop.run_in_executor(
        executor, heavy_computation, n
    )
    return {"result": result}
```
The Fix (Option 3): Use Background Tasks for Non-Urgent Work
```python
from fastapi import FastAPI, BackgroundTasks

app = FastAPI()

def process_data(data: dict):
    """Long-running background task"""
    # Expensive processing here
    pass

@app.post("/submit")
async def submit_data(data: dict, background_tasks: BackgroundTasks):
    # Add task to run after the response is sent
    background_tasks.add_task(process_data, data)
    return {"message": "Processing started"}
```
Cause #5: Forgetting ‘await’ Keyword
The classic mistake—calling an async function without await.
The Problem:
```python
import asyncio
from fastapi import FastAPI

app = FastAPI()

async def fetch_user_data(user_id: int):
    await asyncio.sleep(1)  # Simulating I/O
    return {"id": user_id, "name": "John"}

@app.get("/user/{user_id}")
async def get_user(user_id: int):
    # Missing 'await' - this doesn't actually run the coroutine!
    # 'user' is a coroutine object, not the actual data
    user = fetch_user_data(user_id)
    return user
```
You’ll get this error or a coroutine object instead of data:
```
RuntimeWarning: coroutine 'fetch_user_data' was never awaited
```
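This warning is only emitted when the forgotten coroutine object is garbage-collected, so it's easy to miss in server logs. One way to surface it deterministically in a plain stdlib script (assuming CPython's reference counting) is to record warnings around the call:

```python
import asyncio
import gc
import warnings

async def fetch_user_data(user_id: int):
    return {"id": user_id}

async def buggy():
    fetch_user_data(1)  # missing 'await' - coroutine created and dropped

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    asyncio.run(buggy())
    gc.collect()  # force collection of the abandoned coroutine

print([str(w.message) for w in caught])
```

pytest surfaces these warnings in its summary, and setting `filterwarnings = error::RuntimeWarning` in your pytest config turns them into hard test failures.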
The Fix:
```python
import asyncio
from fastapi import FastAPI

app = FastAPI()

async def fetch_user_data(user_id: int):
    await asyncio.sleep(1)
    return {"id": user_id, "name": "John"}

@app.get("/user/{user_id}")
async def get_user(user_id: int):
    # Added 'await' - now it actually runs
    user = await fetch_user_data(user_id)
    return user
```
Cause #6: Mixing Async and Sync Database Sessions
Accidentally using both sync and async database code in the same app.
The Problem:
```python
from fastapi import FastAPI
from sqlalchemy import create_engine
from sqlalchemy.ext.asyncio import create_async_engine

# Two different engines - confusion ahead!
sync_engine = create_engine("postgresql://...")
async_engine = create_async_engine("postgresql+asyncpg://...")

app = FastAPI()

@app.get("/users")
async def get_users():
    # Wait, which session should I use?
    # Using the wrong one causes weird errors
    pass
```
The Fix:
Pick ONE approach for your entire app:
```python
from fastapi import Depends, FastAPI
from sqlalchemy import select
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession
from sqlalchemy.orm import sessionmaker

# Use ONLY async database access
async_engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/db",
    echo=True
)
AsyncSessionLocal = sessionmaker(
    async_engine,
    class_=AsyncSession,
    expire_on_commit=False
)

app = FastAPI()

async def get_db():
    async with AsyncSessionLocal() as session:
        yield session

@app.get("/users")
async def get_users(db: AsyncSession = Depends(get_db)):
    result = await db.execute(select(User))
    users = result.scalars().all()
    return users
```
When to Use ‘async def’ vs ‘def’
Here’s a simple decision tree:
Use async def when:
- You’re using
awaitinside the function - You’re calling async database libraries (asyncpg, motor)
- You’re using async HTTP clients (httpx, aiohttp)
- You’re doing async file I/O (aiofiles)
- You need concurrent I/O operations
Use regular def when:
- You’re using synchronous libraries (requests, sqlite3, standard file I/O)
- You’re doing CPU-intensive work
- You’re not sure (FastAPI handles it well either way)
- You’re calling blocking operations and can’t easily make them async
Examples:
```python
import asyncio

import httpx
from fastapi import FastAPI

app = FastAPI()

# Use 'def' - no async operations
@app.get("/simple")
def simple_endpoint():
    return {"message": "Hello"}

# Use 'def' - blocking database
@app.get("/sync-db")
def get_data():
    db = SessionLocal()  # Sync SQLAlchemy
    data = db.query(Model).all()
    return data

# Use 'async def' - async operations
@app.get("/async-db")
async def get_data_async():
    async with AsyncSessionLocal() as db:
        result = await db.execute(select(Model))
        return result.scalars().all()

# Use 'def' - CPU bound
@app.get("/compute")
def compute_heavy():
    return sum(range(1000000))

# Use 'async def' - multiple concurrent I/O operations
@app.get("/combined")
async def get_combined_data():
    async with httpx.AsyncClient() as client:
        api1, api2 = await asyncio.gather(
            client.get("http://api1.com/data"),
            client.get("http://api2.com/data")
        )
    return {"api1": api1.json(), "api2": api2.json()}
```
Still Not Working? Advanced Issues
Edge Case 1: Event Loop Closed Errors
```
RuntimeError: Event loop is closed
```
This happens when you try to run async code after the event loop has shut down. Common in testing:
```python
import pytest
from httpx import ASGITransport, AsyncClient

from main import app

# Make sure pytest-asyncio is installed: pip install pytest-asyncio
@pytest.mark.asyncio
async def test_endpoint():
    # Recent httpx versions use ASGITransport instead of the
    # deprecated AsyncClient(app=...) shortcut
    transport = ASGITransport(app=app)
    async with AsyncClient(transport=transport, base_url="http://test") as client:
        response = await client.get("/users")
        assert response.status_code == 200
```
Edge Case 2: Async Context Managers
Using regular context managers with async code:
```python
# This won't work properly in an async context
@app.get("/data")
async def get_data():
    with database.connect() as conn:  # Blocking!
        data = conn.fetch_all()
    return data

# Use async context managers instead
@app.get("/async-data")
async def get_async_data():
    async with database.connect() as conn:  # Non-blocking
        data = await conn.fetch_all()
    return data
```
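If you need to wrap a resource yourself, the stdlib makes async context managers easy via `contextlib.asynccontextmanager`. A minimal sketch with a stand-in connection object (the dict is hypothetical, just to show acquire/release):

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def connect():
    conn = {"open": True}  # stand-in for acquiring a real async connection
    try:
        yield conn
    finally:
        conn["open"] = False  # release runs even if the body raises

async def main():
    async with connect() as conn:
        assert conn["open"]
        return "fetched"

print(asyncio.run(main()))
```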
Edge Case 3: Thread-Local Storage Issues
Some libraries use thread-local storage, which doesn’t work well with async:
```python
# Flask's 'g' object and similar thread-local constructs
# don't work properly in async FastAPI.
# Instead, use contextvars:
import uuid
from contextvars import ContextVar

from fastapi import FastAPI, Request

app = FastAPI()

request_id_var = ContextVar('request_id', default=None)

@app.middleware("http")
async def add_request_id(request: Request, call_next):
    request_id = str(uuid.uuid4())
    request_id_var.set(request_id)
    response = await call_next(request)
    return response
```
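contextvars behave like per-task locals: each request's task sees only its own value, even while tasks interleave. A quick stdlib demonstration:

```python
import asyncio
from contextvars import ContextVar

request_id: ContextVar[str] = ContextVar("request_id", default="unset")

async def handler(rid: str) -> str:
    request_id.set(rid)
    await asyncio.sleep(0.01)  # other tasks run here and set their own ids
    return request_id.get()    # still sees its own value

async def main():
    return await asyncio.gather(handler("a"), handler("b"), handler("c"))

print(asyncio.run(main()))
```

Each task spawned by `gather` gets a copy of the current context, so one task's `set()` never leaks into another.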
Performance Testing and Benchmarking
Here’s how to verify your fixes actually improved performance.
Before and After Comparison
```python
# Test script: test_performance.py
import asyncio
import time

import httpx

async def test_concurrent_requests(url: str, num_requests: int):
    async with httpx.AsyncClient() as client:
        start = time.time()
        tasks = [client.get(url) for _ in range(num_requests)]
        responses = await asyncio.gather(*tasks)
        end = time.time()
    success = sum(1 for r in responses if r.status_code == 200)
    print(f"Requests: {num_requests}")
    print(f"Success: {success}")
    print(f"Time: {end - start:.2f}s")
    print(f"Requests/sec: {num_requests / (end - start):.2f}")

# Test the blocking version
asyncio.run(test_concurrent_requests(
    "http://localhost:8000/blocking", 100
))

# Test the async version
asyncio.run(test_concurrent_requests(
    "http://localhost:8000/async", 100
))
```
Good async code should handle 100 concurrent requests much faster than blocking code.
Use Timing Middleware to Spot Slow Requests
```python
import time

from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def add_performance_headers(request: Request, call_next):
    start = time.time()
    response = await call_next(request)
    duration = time.time() - start
    response.headers["X-Response-Time"] = f"{duration:.4f}"
    if duration > 1.0:
        print(f"SLOW REQUEST: {request.url.path} took {duration:.2f}s")
    return response
```
Best Practices Summary
Do:
- Use async database drivers (asyncpg, motor) with `async def`
- Use async HTTP clients (httpx, aiohttp) with `async def`
- Use `def` for blocking operations or CPU-bound work
- Always use `await` when calling async functions
- Run blocking code in an executor if needed: `await loop.run_in_executor(None, blocking_func)`
Don’t:
- Mix the `requests` library with `async def` routes
- Use `time.sleep()` in async functions (use `await asyncio.sleep()`)
- Use blocking file I/O with `async def` (use `aiofiles`)
- Forget `await` when calling async functions
- Use `async def` if you're not using any `await` inside
Testing Checklist:
- [ ] Load test with concurrent requests (use `wrk`, `locust`, or `httpx`)
- [ ] Monitor response times under load
- [ ] Check that adding concurrent users doesn’t linearly increase response time
- [ ] Verify CPU usage is appropriate (low for I/O bound, high for CPU bound)
- [ ] Run with `PYTHONASYNCIODEBUG=1` to catch issues
Related Posts
For more FastAPI troubleshooting:
- How to Fix 404 Not Found in FastAPI: Complete Guide
- FastAPI 422 Unprocessable Entity: How to Fix Validation Errors
Debugging Complex Async Errors? Use Debugly’s trace formatter to quickly parse and analyze Python async tracebacks with clear, formatted output.