While building an app with FastAPI can be reasonably straightforward, deploying and operating it might be more challenging.
The whole user experience can be ruined by unexpected errors, slow responses, or even worse — downtime.
AppSignal is a great choice for efficiently tracking your FastAPI app’s performance.
It allows you to easily monitor average/95th percentile/90th percentile response times, error rates, throughput, and much more.
Useful charts are available out of the box. Let’s see it in action!
What Can You Do with Performance Monitoring?
With performance monitoring, we can track app response times, throughput, error rates, CPU consumption, memory usage, etc.
Changes in these metrics can indicate something is not quite right, and we should investigate.
For example, if response times are monotonically increasing on a specific endpoint, we can investigate what’s causing it. It might be inefficient code, a slow database query, a slow external API call, or something else.
In such cases, you can intervene before a gateway timeout occurs, for example, and your users start complaining.
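As a quick aside on the metrics themselves, here is a minimal sketch of what average vs. 90th/95th percentile response times mean, using hypothetical timings and a simple nearest-rank method (AppSignal computes these for you; this is just for intuition):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the value below which roughly p% of samples fall."""
    ordered = sorted(samples)
    k = math.ceil(p / 100 * len(ordered))  # 1-based rank
    return ordered[k - 1]

# hypothetical response times (seconds) for one endpoint
times = [0.12, 0.15, 0.11, 0.30, 0.14, 0.16, 0.13, 0.90, 0.12, 0.18]

average = sum(times) / len(times)  # ~0.23
p90 = percentile(times, 90)       # 0.30
p95 = percentile(times, 95)       # 0.90
```

A single slow request barely moves the average but shows up clearly in the 95th percentile, which is why looking at all three together is useful.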
Setting Up Our FastAPI Python Project and Configuring AppSignal
Let’s use an app I’ve already prepared.
First, clone the repository from GitHub:
$ git clone git@github.com:jangia/fastapi_performance_with_appsignal.git
$ cd fastapi_performance_with_appsignal
Second, create a virtual environment and install the dependencies:
$ python3.12 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
Note: For things to work, you need the following packages installed: opentelemetry-instrumentation-fastapi and appsignal.
Third, set the AppSignal environment variables:
$ export APPSIGNAL_PUSH_API_KEY=<your_appsignal_push_api_key>
$ export APPSIGNAL_REVISION=main
Note: You can read more about configuring AppSignal for FastAPI in Track Errors in FastAPI for Python with AppSignal.
Use environment variables to configure AppSignal in the __appsignal__.py file:
import os

from appsignal import Appsignal

appsignal = Appsignal(
    active=True,
    name="fastapi_performance_with_appsignal",
    push_api_key=os.getenv("APPSIGNAL_PUSH_API_KEY"),
    revision=os.getenv("APPSIGNAL_REVISION"),
    enable_host_metrics=True,
)
The FastAPI app looks like this:
import json
import random
import time

import requests
from appsignal import set_category, set_sql_body, set_body
from fastapi import FastAPI, Depends
from opentelemetry import trace
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from sqlalchemy.orm import Session

from __appsignal__ import appsignal
from models import SessionLocal, Task

appsignal.start()

tracer = trace.get_tracer(__name__)

app = FastAPI(
    title="FastAPI with AppSignal",
)


def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@app.get("/hello-world")
def hello_world():
    time.sleep(random.random())
    return {"message": "Hello World"}


@app.get("/error")
def error():
    raise Exception("Something went wrong. Oops!")


@app.get("/slow-external-api")
def slow_external_api():
    api_url = "http://docs.appsignal.com/"
    with tracer.start_as_current_span("Call External API"):
        set_category("external_api.http")
        set_body(json.dumps({"url": api_url}))
        requests.get(api_url)
    return {"message": "External API successfully called!"}


@app.get("/slow-query")
def slow_query(db: Session = Depends(get_db)):
    with tracer.start_as_current_span("List tasks"):
        query = db.query(Task)
        tasks = query.all()
        set_category("tasks.sql")
        set_sql_body(str(query))
    return {
        "tasks": [
            {"id": task.id, "title": task.title, "status": task.status}
            for task in tasks
        ],
    }


FastAPIInstrumentor().instrument_app(app)
We’ll use a few endpoints to demonstrate how to monitor performance.
Monitor Response Times and Throughput in AppSignal
With our dependencies installed and environment variables set, we can start the app:
(venv)$ uvicorn main:app --reload
Once the app is up and running, we can use the call_api.py script to send requests to the app endpoints in parallel:
(venv)$ python call_api.py
This script will call every endpoint we have 20 times in parallel using asyncio.
import asyncio

from aiohttp import ClientSession


async def call_api(url: str):
    async with ClientSession() as session:
        async with session.get(url) as response:
            text = await response.text()
            print(text)


async def main():
    base_url = "http://localhost:8000"
    endpoints = ["slow-query", "slow-external-api", "hello-world", "error"]
    async with asyncio.TaskGroup() as group:
        for endpoint in endpoints:
            url = f"{base_url}/{endpoint}"
            for i in range(20):
                group.create_task(call_api(url))


asyncio.run(main())
Once the script completes, go to the AppSignal dashboard and select your application. Once on the application dashboard, choose the Performance tab -> Graphs. You’ll see two charts — Response Times and Throughput:
The Response Times chart shows the average response time, 95th percentile response time, and 90th percentile response time across all endpoints inside your application.
The Throughput chart shows the number of processed requests per minute across all endpoints inside your application.
Looking at these charts is a great way to get a quick overview of how your application is performing, but it’s not enough. Fortunately, AppSignal provides a way to drill into the details — you can see the same charts per endpoint.
Digging Deeper
To do that, click on Actions inside Performance:
You’ll see a list of all endpoints inside your app that were called within the specified time range.
For each endpoint, you can see the average response time, 95th percentile response time, and 90th percentile response time.
You can go even deeper by clicking on the endpoint name. That will take you to the endpoint details page.
You’ll see the same charts as before. This time, they are for the selected endpoint only.
You’ll see another thing on the endpoint details page — errors and the error rate for the selected endpoint:
Looking at these charts, you can quickly see whether:
- Response times are increasing/decreasing/stable
- Throughput is increasing/decreasing/stable
- Error rate is increasing/decreasing/stable
If any of these metrics show a trend in the wrong direction, you can investigate and fix them before they become a problem for your users.
Setting Alerts
While this is all great and useful, you must go to the AppSignal dashboard to see how your app is performing.
This way, you might still not react quickly enough when something goes wrong.
To overcome this, you can set an alert that triggers whenever a request takes longer than the specified threshold.
Go to the details of the /slow-query endpoint and click on View Incident.
After that, set alerts on the right side, with 5 seconds as the threshold, and select Every occurrence as Alerting:
Run the script that calls the endpoints one more time:
(venv)$ python call_api.py
You should receive an email from AppSignal warning you about the request(s) that took too long to process. This means you don’t need to constantly check the AppSignal dashboard to see how your app is performing. You’ll be notified straight away if something strange happens.
Monitor Database Queries
Monitoring response times and throughput is very useful, but it only tells us a little about what’s causing a problem.
As mentioned, multiple things can cause slow response times.
One is slow database queries: we’ll also want to monitor them.
Let’s take a look at the /slow-query endpoint:
# ... other code

tracer = trace.get_tracer(__name__)

# ... other code


@app.get("/slow-query")
def slow_query(db: Session = Depends(get_db)):
    with tracer.start_as_current_span("List tasks"):
        query = db.query(Task)
        tasks = query.all()
        set_category("tasks.sql")
        set_sql_body(str(query))
    return {
        "tasks": [
            {"id": task.id, "title": task.title, "status": task.status}
            for task in tasks
        ],
    }
Here, we’re using SQLAlchemy to query the database for all tasks. To track queries inside FastAPI, we need to use custom instrumentation.
We do that by executing the query inside the with tracer.start_as_current_span("List tasks"): block. This creates a “List tasks” span, which you’ll see inside the AppSignal dashboard.
For AppSignal to recognize this measurement as a database query, we set a category using set_category("tasks.sql").
The category name must end with one of the following suffixes to be recognized as a database query:
- *.active_record
- *.ecto
- *.elasticsearch
- *.knex
- *.mongodb
- *.mysql
- *.postgres
- *.psycopg2
- *.redis
- *.sequel
- *.sql
We also add the query string to the span body with set_sql_body(str(query)). This way, we can see the executed query inside the AppSignal dashboard.
Note: str(query) returns the query with placeholders. If you want to see the query with values, you can use str(query.statement.compile(compile_kwargs={"literal_binds": True})). But be careful not to send any sensitive data this way.
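As a quick illustration of the difference, here is a minimal, self-contained SQLAlchemy sketch (the Task model below is a stand-in, not the one from the repository):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Task(Base):  # hypothetical stand-in for the app's Task model
    __tablename__ = "tasks"
    id = Column(Integer, primary_key=True)
    status = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    query = session.query(Task).filter(Task.status == "open")
    # placeholders only -- safe to send to AppSignal
    print(str(query))
    # literal values inlined -- check for sensitive data first
    print(str(query.statement.compile(compile_kwargs={"literal_binds": True})))
```

The first form prints the query with a bound-parameter placeholder; the second inlines the actual value ('open') into the SQL string.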
Note: A span is a single step in the execution flow.
Since you’ve already executed the script that calls the endpoints, you should see the /slow-query endpoint’s query inside the AppSignal dashboard.
Go to Slow queries under Performance:
You can click on the query name tasks.sql to see the query details. There you’ll find:
- The query itself (as we sent it with set_sql_body)
- Response times chart
- Throughput chart
Looking at the charts, you can see the query’s performance trend. If the response times are monotonically increasing, you can investigate why and fix the issue.
Hint: some possible causes for slow queries are:
- Queries are not using indexes (e.g., there’s a missing index or filtering is only set on non-indexed columns).
- Queries are loading all the data from a database (e.g., there’s missing pagination or all the data for rows is being loaded despite this being unnecessary).
- N + 1 queries (e.g., querying for all users and then querying each user’s tasks to satisfy a single request).
- Inefficient query planning (e.g., max over an empty result set can take an abnormally long time in PostgreSQL).
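To make the N+1 case concrete, here is a small, self-contained SQLAlchemy sketch (the User/Task models are hypothetical, not the repository’s); joinedload collapses the per-user task queries into a single JOIN:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class User(Base):  # hypothetical model for illustration
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    tasks = relationship("Task", back_populates="user")

class Task(Base):  # hypothetical model for illustration
    __tablename__ = "tasks"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    user_id = Column(Integer, ForeignKey("users.id"))
    user = relationship("User", back_populates="tasks")

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="alice", tasks=[Task(title="t1"), Task(title="t2")]))
    session.commit()

    # N+1: one query for users, then one extra query per user for .tasks
    users = session.query(User).all()
    n_plus_one = [(u.name, sorted(t.title for t in u.tasks)) for u in users]

    # Eager loading: a single JOINed query fetches users and tasks together
    users = session.query(User).options(joinedload(User.tasks)).all()
    eager = [(u.name, sorted(t.title for t in u.tasks)) for u in users]
```

Both versions return the same data; the difference only shows up in the number of queries AppSignal records per request.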
Monitor Slow External API Calls
External API calls can also cause slow response times. Since external APIs are outside your control, it might seem you just have to hope for the best.
In reality, external APIs often cause problems due to being unresponsive or even down entirely.
That’s why you should monitor those calls as well.
As with database queries, AppSignal has got you covered.
Let’s take a look at the /slow-external-api endpoint:
@app.get("/slow-external-api")
def slow_external_api():
    api_url = "http://docs.appsignal.com/"
    with tracer.start_as_current_span("Call External API"):
        set_category("external_api.http")
        set_body(json.dumps({"url": api_url}))
        requests.get(api_url)
    return {"message": "External API successfully called!"}
Like with database queries, we need to use custom instrumentation to track external API calls.
We do that by executing the call inside the with tracer.start_as_current_span("Call External API"): block. We set the span name to “Call External API”; you’ll see it inside the AppSignal dashboard.
Inside the span, we execute the external API call. For AppSignal to recognize this measurement as an external API call, we set the correct category using set_category("external_api.http").
The category name must end with one of the following suffixes to be recognized as an external API call:
- *.faraday
- *.grpc
- *.http
- *.http_rb
- *.net_http
- *.excon
- *.request
- *.requests
- *.service
- *.finch
- *.tesla
- *.fetch
We also add the called URL to the span body with set_body(json.dumps({"url": api_url})). This lets us see which URL was called inside the AppSignal dashboard.
You can add more details to the span’s body by extending the dictionary you’re sending to set_body (for example, a request or response body).
Note: Ensure you don’t send sensitive data inside the span’s body.
Since you’ve already executed the script that calls the endpoints, you should see the /slow-external-api endpoint’s call inside the AppSignal dashboard.
Go to Slow queries under Performance:
You can click on the API call name external_api.http to see the details. There you’ll find:
- The URL that was called (as we sent it with set_body)
- Response times chart
- Throughput chart
Looking at the charts, you can see how the API call performs over time. You can decide whether to increase the timeout for the API call or change API usage (e.g., use a smaller page size or utilize a different API).
Monitor Host Metrics in AppSignal for FastAPI
Last but not least, you can monitor host metrics. AppSignal can track CPU, memory, disk, and network usage. You can enable that by setting enable_host_metrics=True inside the __appsignal__.py file:
import os

from appsignal import Appsignal

appsignal = Appsignal(
    active=True,
    name="fastapi_performance_with_appsignal",
    push_api_key=os.getenv("APPSIGNAL_PUSH_API_KEY"),
    revision=os.getenv("APPSIGNAL_REVISION"),
    enable_host_metrics=True,  # THIS
)
So stop uvicorn, and let’s run our API with docker-compose:
$ docker-compose up -d --build
Once again, run the script that calls the endpoints:
(venv)$ python call_api.py
Note: AppSignal’s host metrics are not supported on macOS, which is why we’re running the app in Docker. See AppSignal’s official docs for more info.
Once the script completes, go to the AppSignal dashboard under Performance -> Host metrics — you’ll see the list of hosts that are running your app:
Click on the hostname to see the details. There, you’ll find these charts:
- Load Average
- CPU Usage
- Memory Usage
- Swap Usage
- Disk I/O Read
- Disk I/O Write
- Disk Usage
- Network Traffic Received
- Network Traffic Transmitted
These metrics can help you determine whether your hosts have enough resources to handle the load.
You can also see whether the load is roughly equally spread across the hosts.
If you see anything that concerns you, you can investigate further and take action before it becomes a problem for your users.
And that’s it!
Wrapping Up
In this post, we’ve seen how to monitor the performance of FastAPI applications using AppSignal.
Monitoring performance means that we can intervene before things go south and our users start complaining.
To be fully in control of our app, we should combine performance monitoring with error tracking.
Happy coding!
P.S. If you’d like to read Python posts as soon as they get off the press, subscribe to our Python Wizardry newsletter and never miss a single post!