
Locust Performance different from time() function


I wrote a FastAPI service and tried to load test it using different tools. I found that the latency reported by Locust is vastly different from what the Python time() function measures:

  • Locust shows min=17ms, max=2469ms, 99%=2000ms
  • time() function shows min=3ms, max=1739ms

Can someone please shed some light on why that is? Which one is more accurate?


Below are my programs:

FastAPI function:

import time

from fastapi import FastAPI

# logger, latency_logger, adapter and the PredictRequest model are defined elsewhere in the project

app = FastAPI()

@app.post('/predict/')
def predict(request: PredictRequest):
    logger.info('Invocation triggered')
    start_time = time.time()
    # times only the model call itself, on the server
    response = adapter.predict(request.dict())
    latency_time = (time.time() - start_time) * 1000
    latency_logger.info(f'Predict call latency: {latency_time} ms')
    return response

Locust parameters: -u 500 -t 10 -r 500
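For reference, -u is the peak number of simulated users, -r the spawn rate in users per second, and -t the total run time. The full command was presumably something like the line below; the locustfile name, the --headless flag, and reading -t 10 as 10 seconds are assumptions rather than details from the original post (the host is set in the Locust file shown next):

locust -f locustfile.py --headless -u 500 -r 500 -t 10s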

Locust File:

from locust import HttpUser, task, between

class User(HttpUser):
    wait_time = between(1, 2.5)  # each simulated user waits 1-2.5 s between tasks
    host = "http://0.0.0.0:80"

    @task
    def generate_predict(self):
        # Locust records the response time of this call as seen from the client
        self.client.post("/predict/",
                         json={"cid": [],
                               "user_id": 5768586,
                               "store_ids": [2725, 2757],
                               "device_type": "ios"},
                         name='predict')

Locust Output: (screenshot of the Locust statistics table, not reproduced here)


Solution

  • Locust and time are measuring two different things. time measures how long it takes to run only your adapter.predict function, server side. Locust measures how long it takes a client to get a response from your server route, which includes not only your adapter.predict call but also everything before and after it: request parsing and validation, response serialization, the network round trip, and any time the request spends queued while 500 concurrent users are hitting the endpoint. "Which is more accurate" depends on what you are trying to measure. If you just want to know how long adapter.predict takes, time is more accurate. If you want to know how long it takes a client to get the results of your /predict route, Locust is more accurate.
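A useful third number sits between those two: the total time the server spends handling a request, which you can capture with an HTTP middleware. The sketch below is a minimal, self-contained illustration of that idea, not code from the original project; the /predict/ route body is just a stand-in for the real adapter.predict call. If the middleware timings stay close to the adapter.predict timings while Locust's numbers are much higher, the gap is coming from the network and from requests queuing up under the 500-user load rather than from FastAPI itself.

import time

from fastapi import FastAPI, Request

app = FastAPI()

@app.middleware("http")
async def log_request_time(request: Request, call_next):
    # Starts once the server has received the request and stops when the
    # response is ready to be sent back, so it covers routing, request
    # parsing/validation and the endpoint call, but not the network trip
    # back to the client or any client-side queuing.
    start = time.perf_counter()
    response = await call_next(request)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{request.method} {request.url.path} handled in {elapsed_ms:.1f} ms")
    return response

@app.post('/predict/')
def predict(payload: dict):
    # stand-in for the real adapter.predict call
    time.sleep(0.01)
    return {"ok": True}

In the real service this timing would presumably go to latency_logger rather than print, alongside the existing adapter.predict measurement.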