Tags: nginx, lua, nginx-location, mirror, mirroring

Segregating Original Request and Mirrored Request in nginx


Problem


I have two environments (envA, envB). envA needs to mirror its requests to envB, as well as make two other calls to envB containing info from the response in envA. envA is not interested in envB's responses; it's essentially a fire-and-forget situation. The objective is to make sure that the operation and performance of envA is in no way affected by the calls made to envB. We have chosen to use nginx as our proxy and to have it do the mirroring, and we've also written a Lua script to handle the logic that I described above.

The problem is that even though the response from envA's services comes back quickly, nginx holds up returning that response to the caller until it has finished the three other calls to envB. I want to get rid of that blocking somehow.

Our team doesn't have anyone experienced with Lua or nginx, so I'm sure that what we have isn't the best/right way to do it. What we've been doing so far is tweaking the connection and read timeouts to keep any blocking to a minimum, but this is just not getting us to where we want to be.

After doing some research I found https://github.com/openresty/lua-nginx-module#ngxtimerat which, as I understand it, would be the equivalent of creating a ScheduledThreadPoolExecutor in Java and just enqueuing a job onto it, segregating it from the flow of the original request and thus removing the blocking. However, I don't know enough about how the scope changes to be sure I'm not screwing something up data/variable-wise, and I'm also not sure which libraries to use to make the calls to envB, since we've been using ngx.location.capture so far, which, according to the documentation linked above, is not an option when using ngx.timer.at. So I would appreciate any insight on how to properly use ngx.timer.at, or alternative approaches to accomplishing this goal.
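
For reference, the rough shape I have in mind is something like the following (a hypothetical sketch based on my reading of the docs, not working code):

content_by_lua_block {
    -- ... call envA and return its response to the caller as before ...

    -- then hand the envB work off to a timer so the caller isn't kept waiting
    ngx.timer.at(0, function(premature)
        if premature then return end
        -- the three calls to envB would go here, but ngx.location.capture
        -- is not available in this context, and I'm not sure how to safely
        -- get at the request data (uri, body, envA's response) from in here
    end)
}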

This is the configuration we're using. I've obfuscated it a great deal, but the bones of what we have are there, and the main part is the content_by_lua_block section:

http {
    upstream envA {
        server {{getenv "ENVA_URL"}};
    }
    upstream envB {
        server {{getenv "ENVB_URL"}};
    }

    server {
        underscores_in_headers on;
        aio threads=one;
        listen       443 ssl;

        ssl_certificate     {{getenv "CERT"}};
        ssl_certificate_key {{getenv "KEY"}};

        location /{{getenv "ENDPOINT"}}/ {
            content_by_lua_block {
                ngx.req.set_header("x-original-uri", ngx.var.uri)
                ngx.req.set_header("x-request-method", ngx.var.echo_request_method)
                local resp = ""
                ngx.req.read_body()

                if (ngx.var.echo_request_method == 'POST') then
                    local request = ngx.req.get_body_data()
                    -- main call to envA; its response goes back to the caller
                    resp = ngx.location.capture("/envA" .. ngx.var.request_uri, { method = ngx.HTTP_POST })
                    -- first two fire-and-forget mirror calls to envB
                    ngx.location.capture("/mirror/envB" .. ngx.var.uri, { method = ngx.HTTP_POST })
                    ngx.location.capture("/mirror/envB/req2" .. "/envB/req2", { method = ngx.HTTP_POST })
                    -- return envA's response to the caller
                    ngx.status = resp.status
                    ngx.header["Content-Type"] = 'application/json'
                    ngx.header["x-original-method"] = ngx.var.echo_request_method
                    ngx.header["x-original-uri"] = ngx.var.uri
                    ngx.print(resp.body)
                    -- third envB call, which needs envA's response body
                    ngx.location.capture("/mirror/envB/req3" .. "/envB/req3", { method = ngx.HTTP_POST, body = resp.body })
                end
            }
         }

        location /envA {
            rewrite /envA(.*) $1  break;
            proxy_pass https://envA;
            proxy_ssl_certificate     {{getenv "CERT"}};
            proxy_ssl_certificate_key {{getenv "KEY"}};
        }

        ###############################
        # ENV B URLS
        ###############################
        location /envB/req1 {
            rewrite /envB/req1(.*) $1  break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
        location /envB/req2 {
            rewrite (.*) /envB/req2  break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
        location /envB/req3 {
            rewrite (.*) /envB/req3 break;
            proxy_pass https://envB;
            proxy_connect_timeout 30;
        }
     }
}

In terms of the problems we're seeing: response times when hitting envA increase by seconds when going through this proxy versus when we're not using it.


Solution

  • Pretty much five minutes after sending off the first answer I remembered that there's a proper way of doing this kind of cleanup activity.

    The function ngx.timer.at allows you to schedule a function to run after a certain amount of time, including 0 for right after the current handler finishes. You can use that to schedule your cleanup duties and other actions to run after the response has been returned to the client and the request has ended cleanly.

    Here's an example:

    content_by_lua_block {
        ngx.say 'Hello World!'
        -- schedule work to run right after this handler finishes;
        -- the callback's first argument is the "premature" flag, and any
        -- further arguments are the extra values passed to ngx.timer.at
        ngx.timer.at(0, function(premature, time)
            -- busy-wait for `time` seconds to simulate slow background work
            local start = os.time()
            while os.difftime(os.time(), start) < time do
            end
            os.execute('DISPLAY=:0 zenity --info --width 300 --height 100 --title "Openresty" --text "Done processing stuff :)"')
        end, 3)
    }
    

    Note that I use zenity to show a popup window with the message since I didn't have anything set up to check if it really gets called.


    EDIT: I should probably mention that to send HTTP requests from the scheduled callback you need to use the cosocket API, which doesn't speak HTTP out of the box, but a quick Google search brings up this library, which seems to do exactly that.
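
    For the use case in the question, a sketch of the scheduled part might look something like the following, placed at the end of the content_by_lua_block after envA's response has been sent with ngx.print. This is untested and assumes an HTTP client library along the lines of lua-resty-http, with https://envB.example standing in for the real envB URLs; TLS options and error handling would need adjusting to your setup. The key point is that data from the request (the URI, envA's response body) is passed to the callback as extra arguments, because the request's context is gone by the time the timer fires:

    local ok, err = ngx.timer.at(0, function(premature, uri, body)
        if premature then
            return
        end
        local http = require "resty.http"
        -- the three fire-and-forget calls to envB; envB.example is a placeholder host
        local calls = {
            { url = "https://envB.example" .. uri, params = { method = "POST" } },
            { url = "https://envB.example/req2",   params = { method = "POST" } },
            { url = "https://envB.example/req3",   params = { method = "POST", body = body } },
        }
        for _, call in ipairs(calls) do
            local res, cerr = http.new():request_uri(call.url, call.params)
            if not res then
                -- nothing downstream sees these failures, so log them here
                ngx.log(ngx.ERR, "envB call to ", call.url, " failed: ", cerr)
            end
        end
    end, ngx.var.uri, resp.body)
    if not ok then
        ngx.log(ngx.ERR, "failed to schedule envB calls: ", err)
    end

    Since the timer callback runs in its own lightweight coroutine after the handler finishes, the client only ever waits for the envA call itself, which is the behaviour asked for.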