Tags: ruby-on-rails, wisper

How can I catch the response to a request and then execute something?


I have an application with an endpoint that another service calls to collect pending requests from my application. That service then executes the requests and sends the responses back to another endpoint of mine.

Now suppose I have an endpoint (api/get_cost) where we give the customer cost info. We get the cost info only from that other service. So before responding to the user, I have to create a request for the cost info, get a response to it, and only then respond to the user. What is the best way to do this from an architectural point of view?


Solution

  • There are two things I think you can do.

    Let's simplify it. You have two components in the given use case.

    Customer(user) U1 --(wants to get cost info using api/get_cost)--> B1(Your backend Server) ---> C1 (3rd party API server)

    Approach 1:

    In this scenario you can create a service in your lib folder: a client responsible for making the API calls, and a service object as the interface through which you expose methods for calling C1 from B1.

    So you will use the service's methods to make API calls to C1.
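    A minimal sketch of such a service object (the class name, C1 URL, query parameter, and response shape are all placeholder assumptions, not a real API):

```ruby
require "net/http"
require "json"
require "uri"

# lib/cost_service.rb -- thin interface wrapping the B1 -> C1 call.
class CostService
  BASE_URL = "https://c1.example.com/api/cost" # hypothetical C1 endpoint

  # Returns a Hash with the cost info, or raises on HTTP failure.
  def self.fetch_cost(item_id)
    uri = URI("#{BASE_URL}?item_id=#{item_id}")
    response = Net::HTTP.get_response(uri)
    raise "C1 error: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

    JSON.parse(response.body)
  end
end
```

    Your controller for api/get_cost would then call `CostService.fetch_cost` and render the result, keeping all C1 details out of the controller.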

    A happy case would be: U1 requests cost info from B1 -> B1 asks C1 -> C1 responds with data -> B1 responds with data to U1

    Things which can go wrong:

    U1 -> B1 -> C1 (but C1's server is down) -> (times out) -> control returns to B1 -> B1 returns an error to U1.

    U1 -> B1 -> C1 (C1 takes a long time to respond) -> responds with data to B1 -> B1 gives the data back to U1.

    So, to make it more reliable:

    • Add a timeout to the B1 -> C1 call so that if the data is not received within a set time, you show the user an error.

    • You can add caching between B1 and C1. For example: if a user wants data for item 1, you ask C1 for it, cache it, and set it to expire after 15 minutes (the exact time is up to you). The next time a user asks for the same data within those 15 minutes, it is served from the cache instead of C1. Note: this also depends on the kind of data. If the data is dynamic and changes every 2 minutes, there is no point in caching it.
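    Both mitigations can be sketched together. Here the C1 URL, timeout values, and cache key are made-up assumptions; `cache` is anything with a `Rails.cache`-style `fetch`:

```ruby
require "net/http"
require "json"
require "uri"

# Timeout: give up on C1 quickly so U1 gets a fast error instead of hanging.
def fetch_cost_from_c1(item_id)
  uri = URI("https://c1.example.com/api/cost?item_id=#{item_id}")
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.open_timeout = 2 # seconds to establish the connection
  http.read_timeout = 5 # seconds to wait for C1's response
  response = http.get(uri.request_uri)
  raise "C1 error: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

  JSON.parse(response.body)
end

# Caching: reuse a recent answer for the same item instead of asking C1 again.
def cost_info(item_id, cache:)
  cache.fetch("cost/#{item_id}", expires_in: 15 * 60) do
    fetch_cost_from_c1(item_id)
  end
end
```

    In a Rails app you would pass `Rails.cache` as the cache; raising `Net::OpenTimeout`/`Net::ReadTimeout` out of the service lets the controller render the error response.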

    Another way, but it requires a bit more effort:

    Approach 2:

    Use short polling. It can be used when there is a strict requirement to get the data, no matter whether it takes 2 seconds or 4.

    Trigger a background job for the B1 -> C1 communication and respond to U1 immediately with a uuid/request id, passing the same id into the background job. Maintain the status of the job in Redis against that id, like this: {request_id: {uuid: 123, status: (started | failed | successful), data: {}}}. In the background job, on getting the response from C1, update the Redis entry for that request id with the data.

    Also create an API to check the status for the request uuid the client received, and poll that API at regular intervals until the status is either failed or successful. You can also put a limit on the time, so the client stops polling after, say, 2 minutes if the status stays "started".
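    A rough sketch of the three pieces (all method names, keys, and the store's interface are made up; `store` stands in for a Redis client with `set`/`get`, and `enqueue` for something like `CostJob.perform_async`):

```ruby
require "json"
require "securerandom"

# 1) Controller action: record "started", enqueue the job, return the id.
def start_cost_request(item_id, store:, enqueue:)
  request_id = SecureRandom.uuid
  store.set("cost_request/#{request_id}",
            { "status" => "started", "data" => {} }.to_json)
  enqueue.call(request_id, item_id) # e.g. CostJob.perform_async(...)
  { "request_id" => request_id }
end

# 2) Inside the job: after hearing back from C1, record the outcome.
def finish_cost_request(request_id, data, store:)
  store.set("cost_request/#{request_id}",
            { "status" => "successful", "data" => data }.to_json)
end

# 3) Status endpoint: U1 polls this until "failed" or "successful".
def cost_request_status(request_id, store:)
  raw = store.get("cost_request/#{request_id}")
  raw ? JSON.parse(raw) : { "status" => "unknown" }
end
```

    The job would set status "failed" on a C1 error or timeout; adding a TTL to the Redis key keeps abandoned requests from piling up.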

    I know this is a little more than expected :3. Also, in software architecture there is nothing good or bad; there are only trade-offs.