Tags: ffmpeg, scaling, bilinear-interpolation

Does FFMPEG apply a blur filter after scaling?


So I am implementing my own version of bilinear scaling and comparing the results with FFMPEG and ImageMagick. These tools produce a scaled version of my image, but the result does not seem to come from the interpolation operations alone; it looks as if a blur is applied afterwards to smooth out the jagginess left by the scaling. Here is what I mean.

This is the original image (6x6 yuv422p):

https://snag.gy/Z5pa8f.jpg

As you can see, there are only black and white columns. After my scaling operation with a bilinear filter I get gray columns between the black and white ones, which is expected. This is my result:

My image (12x12 yuv422p):

https://snag.gy/deMJy1.jpg
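
Those intermediate gray columns are exactly what the bilinear weights predict. For example, taking 0 for a black column and 255 for a white one (full-range 8-bit values, purely for illustration), a new column exactly halfway between the two comes out as:

    0.5 * 0 + 0.5 * 255 = 127.5 ≈ 128   (a mid-gray)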

Now the problem is the FFMPEG result. As I will show next, FFMPEG creates an image with only one black and one white column; the rest are shades of grey, which does not make sense for bilinear filtering.

FFMPEG image (12x12 yuv422p):

https://snag.gy/prz54g.jpg

Can someone please enlighten me about what FFMPEG does in these conditions? Here is the relevant part of my scaling code:

    // Iterate through each line
    for(int lin = 0; lin < dstHeight; lin++){
        // Get line in original image
        int linOrig = lin / scaleHeightRatio;
        float linOrigRemainder = fmod(lin, scaleHeightRatio);
        float linDist = linOrigRemainder / scaleHeightRatio;

        // For border pixels
        int linIndexA = linOrig;
        int linIndexB = linOrig + 1;
        if(linIndexB >= srcHeight)
            linIndexB = linIndexA;
        linIndexA *= srcWidth;
        linIndexB *= srcWidth;

        // Iterate through each column
        for(int col = 0; col < dstWidth; col++){
            // Get column in original image
            int colOrig = col / scaleWidthRatio;
            float colOrigRemainder = fmod(col, scaleWidthRatio);
            float colDist = colOrigRemainder / scaleWidthRatio;

            // If same position as an original pixel
            if(linOrigRemainder == 0 && colOrigRemainder == 0){
                // Original pixel to the result
                dstSlice[0][lin * dstWidth + col] = srcSlice[0][linOrig * srcWidth + colOrig];
                dstSlice[1][lin * dstWidth + col] = srcSlice[1][linOrig * srcWidth + colOrig];
                dstSlice[2][lin * dstWidth + col] = srcSlice[2][linOrig * srcWidth + colOrig];

                // Continue processing following pixels
                continue;
            }

            // For border pixels
            int colIndexA = colOrig;
            int colIndexB = colOrig + 1;
            if(colIndexB >= srcWidth)
                colIndexB = colIndexA;

            // Perform interpolation: bilinear blend of the four surrounding
            // source pixels (sketch; chroma planes are treated exactly like
            // luma here, as in the rest of this snippet)
            for(int plane = 0; plane < 3; plane++){
                float topLeft     = srcSlice[plane][linIndexA + colIndexA];
                float topRight    = srcSlice[plane][linIndexA + colIndexB];
                float bottomLeft  = srcSlice[plane][linIndexB + colIndexA];
                float bottomRight = srcSlice[plane][linIndexB + colIndexB];

                float top    = topLeft + (topRight - topLeft) * colDist;
                float bottom = bottomLeft + (bottomRight - bottomLeft) * colDist;
                dstSlice[plane][lin * dstWidth + col] = roundf(top + (bottom - top) * linDist);
            }
        }
    }

Solution

  • So, I found the problem. With performance in mind, I was copying every pixel of the original picture that aligned exactly with a pixel of the scaled one, and only interpolating the remaining, unaligned pixels. That is not how bilinear interpolation is supposed to work, and it is exactly what creates the jaggies.

    FFMPEG and the other image processing tools effectively add padding around the original pixels, so in my case I should be scaling an 8x8 picture to a 12x12 one, which means NO pixel of the scaled picture lands exactly on a pixel of the original. Because of that misalignment, every pixel of the scaled picture is an interpolation, i.e. a weighted average of the surrounding original pixels, and that is why the FFMPEG result LOOKS blurred (it is fundamentally an averaging). A minimal sketch of that mapping is shown below.
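
    For reference, here is a minimal sketch of that center-aligned mapping for a single 8-bit plane. The function name and parameters are illustrative, not FFMPEG's actual API, and the real swscale code is considerably more involved; the point is only that every destination sample falls between source samples, so every output value is a weighted average:

        #include <algorithm>
        #include <cmath>
        #include <cstdint>

        // Center-aligned bilinear scaling of one plane: destination pixel x
        // maps to srcX = (x + 0.5) * srcW / dstW - 0.5, so for a 6 -> 12
        // upscale no destination sample lands exactly on a source sample.
        void scalePlaneBilinear(const uint8_t* src, int srcW, int srcH,
                                uint8_t* dst, int dstW, int dstH){
            for(int y = 0; y < dstH; y++){
                float srcY = (y + 0.5f) * srcH / dstH - 0.5f;
                if(srcY < 0) srcY = 0;                      // replicate the first row
                int y0 = (int)srcY;
                int y1 = std::min(srcH - 1, y0 + 1);        // replicate the last row
                float wy = srcY - y0;

                for(int x = 0; x < dstW; x++){
                    float srcX = (x + 0.5f) * srcW / dstW - 0.5f;
                    if(srcX < 0) srcX = 0;                  // replicate the first column
                    int x0 = (int)srcX;
                    int x1 = std::min(srcW - 1, x0 + 1);    // replicate the last column
                    float wx = srcX - x0;

                    // Weighted average of the four surrounding source pixels
                    float top    = src[y0 * srcW + x0] * (1 - wx) + src[y0 * srcW + x1] * wx;
                    float bottom = src[y1 * srcW + x0] * (1 - wx) + src[y1 * srcW + x1] * wx;
                    dst[y * dstW + x] = (uint8_t)std::lround(top * (1 - wy) + bottom * wy);
                }
            }
        }

    On the 6x6 test image above, this mapping leaves only the first and last output columns as pure source values (through edge replication); every other column is a blend of a black and a white source column, which lines up with the FFMPEG result described in the question.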