I am writing code to receive a JSON payload in FastAPI.
Here is my code:
```python
from fastapi import FastAPI, status, Request
from fastapi.responses import ORJSONResponse
import uvicorn
import asyncio
import orjson

app = FastAPI()

@app.post("/", status_code=status.HTTP_200_OK)
async def get_data(request: Request):
    param = await request.json()
    return param
```
However, what I want is for `request.json()` to use `orjson` instead of Python's default `json` library. Any idea how to address this problem? Please help me, thanks.
**Reading the `request` data using `orjson`**
When calling `await request.json()`, FastAPI (actually Starlette) first reads the body (using the `.body()` method of the `Request` object) and then calls `json.loads()` (using Python's standard `json` library) to return a `dict`/`list` object to you inside the endpoint. It does not use `json.dumps()`, as you mentioned in the comments section beneath your question; that method is used to serialize a Python object into JSON instead. It is worth noting that, as can be seen in the relevant implementation, Starlette does not run `json.loads()` (which is a blocking operation) in a separate thread, using `await run_in_threadpool()` for instance, as described in this answer. This means that if a request body is large enough to take a long time to deserialize, the event loop (essentially, the entire server) would be blocked until that operation completes.
It should be noted that `orjson.loads()` in the first example below, as well as `orjson.dumps()` in the second example later on, are blocking operations as well. Thus, instead of calling them directly within an `async def` endpoint, one could have those operations run in a separate thread/process, as described in the linked answer above (see that answer for more details on `def` vs `async def` in FastAPI).
Alternatively, you could define the endpoint with normal `def` and have the raw request body retrieved within an `async` dependency, as demonstrated in Update 2 of this answer, while keeping the blocking `orjson.loads()` operation inside the `def` endpoint. In this way, the endpoint will run in a separate thread from the external threadpool, which will then be `await`ed, and thus the blocking operation inside it won't block the event loop (again, please have a look at the linked answers above for more details).
It should be noted that, in theory, there is no limit to how big a JSON request/response can be, but in practice there definitely is. As mentioned earlier, if you are deserializing/serializing rather large data within an `async def` endpoint, the event loop will get blocked, unless you have these operations run in a separate thread/process, as explained in a linked answer earlier. You might also run out of memory (on either, or both, the server and client sides), if the data can't fit into the device's available RAM. Additionally, client timeouts could be another issue, if a response is not returned to clients (i.e., web browsers, Python HTTP clients, etc.) within a predefined time. Thus, for rather large request/response bodies, one should instead think about saving the result in a file and having the user upload/download the file; see related answers here, here, as well as here and here. That way, you might not need to worry that much about the event loop getting blocked by a blocking operation, such as `orjson.dumps()` or `json.dumps()`, executed directly within an `async def` endpoint, as long as it concerns quick actions, i.e., small amounts of JSON data (however, you should always perform benchmark tests, see this answer, and choose the approach that suits you best, based on the requirements of your project, as well as your server's resources).

Note that, if your API is accessible to the public, you should always consider applying some limit on the request body size, in order to prevent malicious attempts to overload your server. This can be done either through a reverse proxy server, such as nginx, or through your application, as demonstrated in this answer, by consuming the request body in chunks using `request.stream()` (instead of `await request.body()`, etc.) and calculating the body length as the chunks arrive.
```python
from fastapi import FastAPI, Request
import orjson

app = FastAPI()

@app.post('/')
async def submit(request: Request):
    body = await request.body()
    data = orjson.loads(body)  # could be run in a separate thread/process
    return 'success'
```
**Returning `response` data using `orjson`**
When returning data such as a `dict`, `list`, etc., FastAPI will automatically convert that returned value into JSON, using Python's standard `json.dumps()`, after inspecting every item inside and making sure it is JSON serializable, using the JSON Compatible Encoder (i.e., `jsonable_encoder()`); see this answer for more details, as well as FastAPI/Starlette's `JSONResponse` implementation, where you can see that the content you are returning is serialized using `json.dumps()`, which, it is worth noting, is a blocking operation as well, and Starlette does not execute it in a separate thread.
Hence, if you would like to use the `orjson` library instead, thus using a faster JSON encoder and avoiding the `jsonable_encoder()`, provided you are certain that the content you are serializing is JSON serializable, you would need to send a custom `Response` directly, as described in this answer and as shown below, which would result in responding much faster. If the content is not JSON serializable as-is, `orjson` offers a `default` parameter (which may be a function, lambda, or callable class instance) that lets the caller specify how to serialize arbitrary types, e.g., `orjson.dumps(data, default=str)`. Note that, as mentioned earlier, `orjson.loads()` and `orjson.dumps()` are both blocking operations, and since an `async def` endpoint is used in the example below, it might be a good idea, depending on the size of data you are expecting to deserialize/serialize, to have these operations run in a separate thread/process, or to use a `def` endpoint in the same way described in the previous section. Otherwise, if you are dealing with small amounts of JSON data, it might not be worth spawning a new thread or process for such operations (as noted earlier, always perform tests to find the best-suited approach, based on your project's requirements and your device's resources).
```python
from fastapi import FastAPI, Request, Response
import orjson

app = FastAPI()

@app.post('/')
async def submit(request: Request):
    body = await request.body()
    data = orjson.loads(body)  # could be run in a separate thread/process
    # `orjson.dumps()` could be run in a separate thread/process as well
    return Response(orjson.dumps(data), media_type='application/json')
```
**Returning `response` data using FastAPI's `ORJSONResponse`**
Alternatively, you could use the `ORJSONResponse` provided by FastAPI (still make sure you have the `orjson` library installed, and that the content you are returning is JSON serializable). Have a look at the further documentation here and here on how to customize and/or set `ORJSONResponse` as the default response class (the implementation of `ORJSONResponse` can be found here).
Note that when using the `response_class` parameter in the endpoint's decorator to set the `Response` class, one does not necessarily need to use that response class when returning the data from the endpoint as well. This is because FastAPI, behind the scenes, will encode/serialize the data based on the `response_class` you set, as well as use it to define the media type of the response. Hence, one could either set `response_class=ORJSONResponse` or return `ORJSONResponse(...)`. Having said that, it wouldn't hurt to use both, as shown in the example below and in some examples provided in the official FastAPI documentation, not only for clarity, but also for proper Swagger UI documentation. The `response_class` informs Swagger UI/OpenAPI docs of the expected media type of a successful response; one can confirm that by looking at the expected "Responses" and their media type under an endpoint in `/docs` (see FastAPI's documentation for declaring additional responses in OpenAPI).
It should also be noted, as described earlier, that `ORJSONResponse` uses the same `orjson.dumps()` (and related) operations behind the scenes, which are blocking. Hence, if you used an `async def` endpoint and expected to return rather large and complex JSON data that would take some time to serialize, the server would become unresponsive until serialization completed; in that case, you should rather follow the approach in the previous example, instead of using `ORJSONResponse`, and have the blocking operation run in a separate thread/process. Otherwise, you could define the endpoint with normal `def` and use an `async` dependency, in the same way described in the first section of this answer.
```python
from fastapi import FastAPI, Request
from fastapi.responses import ORJSONResponse
import orjson

app = FastAPI()

@app.post('/', response_class=ORJSONResponse)
async def submit(request: Request):
    body = await request.body()
    # Deserialize first; passing the raw body as a `str` would make
    # `ORJSONResponse` double-encode it into a quoted JSON string
    return ORJSONResponse(orjson.loads(body))
```
In the case of the example above, one wouldn't notice any difference in the Swagger UI docs whether or not `response_class=ORJSONResponse` is used, as `application/json` is the default media type for FastAPI endpoints regardless. However, in a following example of this answer, where an `HTMLResponse` is returned from a specific endpoint, if one didn't use `response_class=HTMLResponse` in the decorator, Swagger UI/OpenAPI docs would incorrectly indicate that an `application/json` response is expected from that endpoint (in the case of a successful response).
**`ORJSONResponse` as the `default_response_class`**
As explained in the documentation, one can define the default response class for their FastAPI application, as shown in the example below. In that way, every response from FastAPI will be encoded using `ORJSONResponse`; thus, there would be no need for you to either set `response_class=ORJSONResponse` or use `return ORJSONResponse(...)`.
```python
from fastapi import FastAPI
from fastapi.responses import ORJSONResponse

app = FastAPI(default_response_class=ORJSONResponse)

@app.get("/items")
async def read_items():
    return [{"item_id": "Foo"}]
```
You would still be able to override the default response class in an endpoint, if you had to, as shown in the examples earlier, by setting the `response_class` parameter in the endpoint's decorator, by returning that `Response` class directly, or by using both methods, as explained earlier, so that Swagger UI/OpenAPI documentation is informed of the expected response type of that endpoint. For instance:
```python
from fastapi import FastAPI
from fastapi.responses import ORJSONResponse, HTMLResponse

app = FastAPI(default_response_class=ORJSONResponse)

@app.get("/items")
async def read_items():
    return [{"item_id": "Foo"}]

@app.get("/html", response_class=HTMLResponse)
async def get_html():
    html_content = """
    <html>
        <body>
            <h1>HTML!</h1>
        </body>
    </html>
    """
    return HTMLResponse(content=html_content, status_code=200)
```
Please make sure to have a look here, here, as well as here and here, to learn about the various approaches to sending JSON data to a FastAPI backend, and how to define an endpoint to expect and validate JSON data, instead of relying on `await request.json()` (which is useful when the app requires passing arbitrary JSON data, but does not perform any validation on the data).