
WSO2 API Manager: limit on parallel requests


I wonder if there is any limit on the number of requests made to the API Manager per second.

Does this limit depend on the processing power of the machine, or is there some restriction in the technology itself?


Solution

  • WSO2 API Manager has a throttling tier policy that lets you limit the number of successful hits to an API during a given period of time, for example:

    1. Bronze: 1 request per minute
    2. Silver: 5 requests per minute
    3. Gold: 20 requests per minute
    4. Unlimited: Unlimited access.

    You can also throttle requests based on IP address; see the WSO2 documentation on IP-based throttling for how to configure it.
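    The effect of a tier is easiest to see from the client side. Below is a minimal sketch, assuming a hypothetical gateway URL and access token (replace both), for a subscription on the Silver tier (5 requests per minute): it fires more calls than the tier allows and reports when the gateway starts rejecting them. The status code returned for throttled-out requests depends on the APIM version (commonly 503 or 429), so the script simply prints whatever it gets back.

    ```python
    # Sketch only: GATEWAY_URL and ACCESS_TOKEN are placeholders, not real endpoints.
    # A self-signed gateway certificate may also need to be trusted before this runs.
    import urllib.request
    import urllib.error

    GATEWAY_URL = "https://gateway.example.com:8243/myapi/1.0/resource"  # placeholder
    ACCESS_TOKEN = "REPLACE_WITH_OAUTH2_ACCESS_TOKEN"                    # placeholder

    def call_api():
        """Invoke the API once and return the HTTP status code."""
        req = urllib.request.Request(
            GATEWAY_URL,
            headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        )
        try:
            with urllib.request.urlopen(req) as resp:
                return resp.status
        except urllib.error.HTTPError as err:
            return err.code

    # Silver allows 5 hits per minute, so some of these 10 calls should be throttled.
    for i in range(10):
        status = call_api()
        outcome = "accepted" if status == 200 else "rejected (likely throttled)"
        print(f"request {i + 1}: HTTP {status} -> {outcome}")
    ```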

    According to our lab benchmark tests, a Gateway node can handle roughly 3000 transactions per second (TPS) when run at 150 concurrency in a 2-gateway-node cluster, with a response time of 30. The detailed setup and performance figures are below.

    Basic Setup Details

    - WSO2 API Manager: Gateway - 2 - active/active
    - WSO2 API Manager: Key Manager - 2 - active/active
    - WSO2 API Manager: Publisher - 1 - active/passive
    - WSO2 API Manager: Store - 1 - active/passive

    Cache Settings

    Gateway Cache enabled

    Hardware Settings

    - Physical: 3 GHz dual-core Xeon/Opteron (or later), 4 GB RAM (minimum: 2 GB for the JVM and 2 GB for the OS), 10 GB free disk space (minimum); size the disk based on the expected storage requirements (consider file uploads and backup policies). For example, 3 Carbon instances running on one machine require 4 CPU cores, 8 GB RAM, and 30 GB of free space.
    - Virtual machine: 2 compute units minimum (each unit having a 1.0-1.2 GHz Opteron/Xeon processor), 4 GB RAM, 10 GB free disk space; one compute unit for the OS and one for the JVM. For example, 3 Carbon instances require a VM with 4 compute units, 8 GB RAM, and 30 GB of free space.
    - EC2: a c3.large instance to run one Carbon instance (e.g. 3 Carbon instances need an EC2 extra-large instance). Note: based on the I/O performance of the c3.large instance, it is recommended to run multiple instances on a larger instance (c3.xlarge or c3.2xlarge).

    According to these results, a single node can handle up to about 3000 TPS. The actual TPS varies with the concurrency level and the load at the time. When scaling out, we therefore assume each node can handle up to 3000 TPS, so the overall TPS grows with the number of gateway nodes.
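    As a rough illustration of that scaling assumption, here is a small back-of-the-envelope sketch. The 3000 TPS per gateway node comes from the benchmark figures above; the 70% headroom factor is purely an assumption added here to account for the variation with concurrency and load, not a WSO2 figure.

    ```python
    # Back-of-the-envelope gateway sizing based on the benchmark figures above.
    import math

    TPS_PER_GATEWAY_NODE = 3000   # observed in the benchmark above
    HEADROOM = 0.7                # assumption: plan for ~70% of benchmark throughput

    def gateway_nodes_needed(target_tps: float) -> int:
        """Estimate gateway nodes required for a target aggregate TPS."""
        usable_tps_per_node = TPS_PER_GATEWAY_NODE * HEADROOM
        return max(1, math.ceil(target_tps / usable_tps_per_node))

    for target in (1000, 5000, 12000):
        print(f"{target} TPS -> {gateway_nodes_needed(target)} gateway node(s)")
    ```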