Tags: nginx, stream, buffer, live, mjpeg

Streaming MJPEG in Nginx with low client-side bandwidth


I am streaming MJPEG using Nginx. This works fine so long as the bandwidth on the client side is enough. When the bandwidth is not enough, the stream seems to fall about 2 minutes behind, then jumps to the current frame and starts falling behind again.

Is there any way to control the buffer so that it never stores more than 2 frames? That way, if the client side cannot keep up, it will never fall more than a second or two behind.

EDIT: Basically the server (currently Python Tornado behind an Nginx reverse proxy) is sending a stream at 5 Mbit/s and the client has 1 Mbit/s of bandwidth (for the sake of argument). The server (Nginx or Python) needs to be able to detect this and drop frames. The question is how?


Solution

  • This depends a lot on how you're actually delivering the M-JPEG frames, and whether you must rely on built-in browser support or can write your own JavaScript.

    Background

    Keep in mind that when streaming an M-JPEG from the server, it essentially just sends a series of JPEG files as the response to a single web request. That is, a normal web request looks like

    Client             Server
       | --- Request ---> |
       |                  |
       | <-- JPEG File -- |
    

    While a request for an M-JPEG looks more like

    Client             Server
       | --- Request ---> |
       |                  |
       | <- JPEG part 1 - |
       | <- JPEG part 2 - |
       | <- JPEG part 3 - |
    

    So the problem isn't with client buffering, but rather that once an M-JPEG stream has started, the server sends every frame, even if each frame takes longer to download than its specified display time.
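
    To make that concrete, an M-JPEG stream is normally delivered as a single multipart/x-mixed-replace response. The exact boundary string, part headers, and sizes below are placeholders, but the overall shape is roughly:

    HTTP/1.1 200 OK
    Content-Type: multipart/x-mixed-replace; boundary=frame

    --frame
    Content-Type: image/jpeg
    Content-Length: 48212

    <binary JPEG data for frame 1>
    --frame
    Content-Type: image/jpeg
    Content-Length: 47955

    <binary JPEG data for frame 2>
    ...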

    Pure JS Solution

    If you can write JavaScript in your application, consider making the request/response portion of the application explicit. That is, for each desired frame, have your JavaScript send an explicit request to the server for that frame (as a single JPEG). If the JavaScript starts falling behind, you have two options:

    1. Drop frames. Running at 50% required bandwidth? Request every other frame.
    2. Request smaller files. Running at 25% bandwidth? Request a version of the file from the server at 50% width and height.

    Long ago, making these extra requests from JavaScript would have introduced extra overhead, since each request required a new TCP connection. If you're using Keep-Alive, or better yet SPDY or HTTP/2 on your server via Nginx, then there's almost no overhead to making these requests from JavaScript. Finally, using JavaScript lets you keep a few frames explicitly buffered and control the buffer timeout.

    For a very basic example (using jQuery's image load event for ease of example; the code assumes an <img id="stream"> element for display):

    var timeout = 250; // Target inter-frame time in ms (4 frames per second), adjust as necessary
    var image = document.getElementById("stream"); // A reference to the <img> tag for display (assumed id)
    var accumulatedError = 0; // How late we are, in ms

    var doFrame = function(frameId) {
        var loaded = false, timedOut = false, startTime = (new Date()).getTime();
        // Bind a one-shot handler so listeners don't pile up on the <img>
        $(image).one("load", function(e) {
            var tardiness = (new Date()).getTime() - startTime - timeout;
            accumulatedError += tardiness; // Add or subtract tardiness
            accumulatedError = Math.max(accumulatedError, 0); // but never negative
            if (!timedOut) {
                loaded = true; // Arrived early; the display timer will advance us
            } else {
                doFrame(frameId + 1); // Arrived late; advance immediately
            }
        });
        var timeCallback = function() {
            if (loaded) {
                doFrame(frameId + 1); // Just do the next frame, we're on time
            } else {
                timedOut = true; // Still downloading; the load handler will advance us
            }
        };
        while (accumulatedError > timeout) {
            // If we've accumulated more than 1 frame of error, skip a frame
            frameId += 1;
            accumulatedError -= timeout;
        }
        // Load the image
        image.src = "http://example.com/images/frame-" + frameId + ".jpg";
        // Start the display timer
        setTimeout(timeCallback, timeout);
    };

    doFrame(1); // Start the process
    

    To make this code really seamless, you'd probably want to use two image tags and swap them when loading completes, so that there are no visible loading artifacts (i.e. double buffering), as sketched below.
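
    A minimal sketch of that double-buffering idea, assuming two stacked <img> elements (the ids front and back are made up for this example):

    var front = document.getElementById("front"); // Currently visible <img> (assumed id)
    var back = document.getElementById("back");   // Hidden <img> used for loading (assumed id)

    function showFrame(url) {
        $(back).one("load", function() {
            // The new frame is fully decoded; swap which image is visible
            back.style.visibility = "visible";
            front.style.visibility = "hidden";
            var tmp = front; front = back; back = tmp; // Swap roles for the next frame
        });
        back.src = url; // Load the next frame into the hidden image
    }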

    WebSocket Solution

    If you cannot write JavaScript in your application, or you need a high frame rate, then you'll need to modify the server to detect the rate at which it's sending frames. Assuming a frame rate of 4 fps, for example, if it takes more than 250 ms to write a frame out, drop the next frame and add 250 ms to your frame offset buffer. Unfortunately, this only accounts for the rate at which frames are sent. While the rate at which the server sends and the rate at which the client receives are similar in the long run, in the short run they can differ considerably due to TCP buffering, etc.

    However, if you can restrict yourself to fairly recent versions of most browsers (see support here), then WebSockets should provide a good mechanism for sending frames on the server-to-client channel and sending performance information back on the client-to-server channel. In addition, Nginx is capable of proxying WebSockets.
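
    For reference, proxying the WebSocket through Nginx only requires passing the Upgrade handshake headers along. A minimal location block might look like the following (the /ws path and backend port are assumptions):

    location /ws {
        proxy_pass http://127.0.0.1:8888;        # the Tornado (or other) backend, port assumed
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                # keep the long-lived stream connection open
    }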

    On the client side, establish a WebSocket connection. Start sending JPEG frames from the server slightly faster than the desired presentation rate (e.g. for 30 frames per second, sending a frame every 20-25 ms is probably a good place to start if you have some buffer on the server; without a buffer, send at the maximum available frame rate). After each frame is fully received on the client, send a message back to the server with the frame ID and how much time elapsed on the client between frames.
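
    A rough client-side sketch of that loop (the ws:// endpoint, the <img> id, and the JSON feedback format are all assumptions, not a fixed protocol):

    var ws = new WebSocket("ws://example.com/ws"); // Assumed endpoint
    ws.binaryType = "blob";                        // Each message carries one JPEG frame
    var image = document.getElementById("stream"); // <img> used for display (assumed id)
    var lastFrameTime = null;
    var frameId = 0;

    ws.onmessage = function(event) {
        var now = (new Date()).getTime();
        // Report how long it has been since the previous frame arrived
        if (lastFrameTime !== null) {
            ws.send(JSON.stringify({ frameId: frameId, interFrameMs: now - lastFrameTime }));
        }
        lastFrameTime = now;
        frameId += 1;

        // Display the frame: turn the blob into an object URL the <img> can load
        var url = URL.createObjectURL(event.data);
        image.onload = function() { URL.revokeObjectURL(url); }; // Free the blob once displayed
        image.src = url;
    };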

    Using the inter-frame times reported by the client, start accumulating an accumulatedError variable on the server using the same method as in the previous example (subtract the desired inter-frame time from the actual inter-frame time). When accumulatedError reaches one frame (or maybe even close to one frame), skip sending a frame and reset accumulatedError.
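
    The server-side bookkeeping for that is small. Here it is as a framework-neutral sketch in JavaScript (shouldSkipFrame and the report format are made-up names; the same logic maps directly onto a Tornado handler):

    var desiredInterval = 250; // Target inter-frame time in ms (4 fps)
    var accumulatedError = 0;  // How far behind the client is, in ms

    // Call this with each timing report from the client; returns true when
    // the next frame should be skipped instead of sent.
    function shouldSkipFrame(reportedInterFrameMs) {
        accumulatedError += reportedInterFrameMs - desiredInterval;
        accumulatedError = Math.max(accumulatedError, 0); // Never negative
        if (accumulatedError >= desiredInterval) {
            accumulatedError = 0; // Skipping a frame lets the client catch up
            return true;
        }
        return false;
    }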

    Note however that this solution may cause some jank in the video playback, because you only skip a frame when absolutely necessary, which means frames won't be skipped at a regular cadence. The ideal solution is to treat the frame send timer as a PID control variable and use the actual frame receive times as the feedback for a PID loop. In the long run, a PID loop will probably provide the most stable video presentation, but the accumulatedError method should still provide a satisfactory (and relatively simple) solution.
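
    For completeness, a minimal sketch of that PID idea, again in JavaScript; the gains and the clamping range are illustrative and would need tuning against real measurements:

    var target = 250;                  // Desired inter-frame time in ms
    var sendInterval = 250;            // Interval the server currently waits between sends
    var Kp = 0.4, Ki = 0.05, Kd = 0.1; // Illustrative gains, not tuned values
    var integral = 0, previousError = 0;

    // Call once per client timing report; nudges the send interval so the
    // client's observed inter-frame time settles at the target.
    function updateSendInterval(reportedInterFrameMs) {
        var error = reportedInterFrameMs - target; // Positive when the client is falling behind
        integral += error;
        var derivative = error - previousError;
        previousError = error;
        var adjustment = Kp * error + Ki * integral + Kd * derivative;
        // Larger interval (send slower) when the client lags, smaller when it has headroom
        sendInterval = Math.min(Math.max(sendInterval + adjustment, 50), 4 * target);
        return sendInterval;
    }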