
How to improve smoothing in CanvasRenderingContext2D in Firefox?


I want to display a scaled-down image in a canvas. When doing so, jagged edges appear at the bottom of the spaceship; it seems that antialiasing is disabled.

Here is a zoom of the image produced in Firefox:
[image: Firefox rendering, zoomed]
The image is very sharp, but we see jagged edges (especially at the bottom of the spaceship, the windshield, and the nose wing).

And in Chrome:
[image: Chrome rendering, zoomed]
The image stays sharp (the portholes and all the lines remain crisp) and there are no jagged edges. Only the clouds are blurred a little.

And in Chrome with smoothing disabled:
[image: Chrome rendering with smoothing disabled]

I tried setting the imageSmoothingEnabled property to true, but it has no effect in Firefox. My example:

<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html;charset=UTF-8">
</head>
<body>
    <!-- <canvas id="canvas1" width="1280" height="720" style="width: 640px; height: 360px;"></canvas> -->
    <canvas id="canvas1" width="640" height="360" style="width: 640px; height: 360px;"></canvas>
    <script>
        const canvas = document.getElementById("canvas1")
        const ctx = canvas.getContext("2d")

        console.log("canvas size", canvas.width, canvas.height);

        const img = new Image()

        img.onload = () => {
            const smooth = true;
            ctx.mozImageSmoothingEnabled = smooth;
            ctx.webkitImageSmoothingEnabled = smooth;
            ctx.msImageSmoothingEnabled = smooth;
            ctx.imageSmoothingEnabled = smooth;
            // ctx.filter = 'blur(1px)';
            ctx.drawImage(img, 0, 0, 3840, 2160, 0, 0, canvas.width, canvas.height);
        }

        img.src = "https://upload.wikimedia.org/wikipedia/commons/f/f8/BFR_at_stage_separation_2-2018.jpg";
    </script>
</body>
</html>

How can I apply antialiasing?

Edit: Antialiasing is applied when viewing the site in Chrome, but not in Firefox.

Edit 2: Comparing the images more closely, it actually seems that Firefox applies some image enhancement but does not disable it when imageSmoothingEnabled is set to false.

Edit 3: Replaced mentions of antialiasing with smoothing, because it seems there is more to this than just AA.

Workarounds so far (I am eager to hear your proposals!):

  • render the canvas with more pixels, then shrink it via CSS -> shifts the quality/performance trade-off manually
  • use an offline tool to resize the image -> not interactive
  • apply a 1px blur to the image -> no more jagged edges, but obviously a blurry image

Screenshot with the blur technique:
[image: rendering with 1px blur applied]
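The first workaround boils down to sizing the drawing buffer larger than the CSS size (the 2x factor below is an assumption for illustration; it matches the commented-out 1280x720 canvas in the example above):

```javascript
// Sketch of the first workaround: give the canvas a larger backing store
// (an assumed 2x supersample factor) and let CSS scale it down, so the
// browser's CSS scaler does the smoothing instead of drawImage.
const cssWidth = 640;            // displayed size in CSS pixels
const cssHeight = 360;
const superSample = 2;           // render at 2x resolution (more memory/fill cost)

const bufferWidth = cssWidth * superSample;
const bufferHeight = cssHeight * superSample;

// In the page this would be applied as:
//   canvas.width = bufferWidth;  canvas.height = bufferHeight;
//   canvas.style.width = cssWidth + "px";  canvas.style.height = cssHeight + "px";
//   ctx.drawImage(img, 0, 0, bufferWidth, bufferHeight);
console.log(bufferWidth, bufferHeight); // 1280 720
```

Raising the supersample factor moves the quality/performance cursor further toward quality.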


Solution

  • High-quality down-sample.

    This answer presents a down-sampler that gives consistent results across browsers and allows a wide range of reductions, both uniform and non-uniform.

    Pros

    It has a significant advantage in terms of quality, as it can use 64-bit floating-point JS numbers rather than the 32-bit floats used by the GPU. It also does the reduction in sRGB rather than the lower-quality RGB used by the 2D API.

    Cons

    Its drawback is of course performance, which could make it impractical when down-sampling large images. However, it can be run in parallel via web workers and thus not block the main UI.

    It is only for down-sampling to 50% or below. It would take only a few minor modifications to scale to any size, but the example opts for speed over versatility.

    The quality gain for 99% of people viewing the result will barely be noticeable.

    Area samples

    The method samples the source pixels under the new destination pixel, calculating the color based on the overlapping pixel areas.

    The following illustration will help in understanding how it works.

    [image: source/destination pixel overlap illustration]

    • The left side shows the smaller high-res source pixels (blue) overlapped by the new low-res destination pixel (red).
    • The right side illustrates which parts of the source pixels contribute to the destination pixel's color. The % values are the percentage of each source pixel that the destination pixel overlaps.
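    The per-axis overlap weight can be sketched in plain JS (the `overlap` helper below is an illustrative name, not part of the answer's code):

```javascript
// For a destination pixel covering [start, start + step) in source-pixel
// coordinates, each integer source pixel s contributes the length of the
// overlap between [s, s + 1) and [start, start + step).
function overlap(s, start, step) {
    const lo = Math.max(s, start);
    const hi = Math.min(s + 1, start + step);
    return Math.max(0, hi - lo);
}

// Destination pixel spanning source columns 1.4 .. 3.1 (step = 1.7):
console.log(overlap(1, 1.4, 1.7)); // ~0.6 : partial overlap on the left
console.log(overlap(2, 1.4, 1.7)); // 1    : fully covered
console.log(overlap(3, 1.4, 1.7)); // ~0.1 : partial overlap on the right
```

    The 2D weight of a source pixel is then (xOverlap * yOverlap) / (xStep * yStep), which is what the percentages in the illustration represent.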

    Overview of the process

    First we create three values to hold the new R, G, B color, initialized to zero (black).

    We then perform the following for each source pixel under the destination pixel:

    • Calculate the overlap area between the destination and source pixel.
    • Divide the source pixel's overlap by the destination pixel's area to get the fractional contribution the source pixel makes to the destination pixel's color.
    • Convert the source pixel's RGB to sRGB, normalize, multiply by the fractional contribution calculated in the previous step, then add the result to the stored R, G, B values.

    When all source pixels under the new pixel have been processed, the new color's R, G, B values are converted back to RGB and added to the image data.

    When done, the pixel data is put onto a canvas, which is returned ready for use.
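    The per-channel blending described above can be sketched as follows (using the same 2.2 power approximation as the code below; `mixChannel` is a hypothetical helper, not from the answer's code):

```javascript
const GAMMA = 2.2; // approximation of the sRGB transfer curve

// Blend one channel of several source pixels, each with a fractional
// contribution (the weights must sum to 1): expand, accumulate, compress.
function mixChannel(values, weights) {
    let acc = 0;
    for (let i = 0; i < values.length; i++) {
        acc += (values[i] ** GAMMA) * weights[i];
    }
    return Math.round(acc ** (1 / GAMMA));
}

// A 50/50 mix of mid grey and white comes out brighter than the naive
// byte average of 192, because the mixing happens in the expanded space.
console.log(mixChannel([128, 255], [0.5, 0.5]));
```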

    Example

    The example down-scales the image by approximately 1/4.

    When done, the example displays the scaled image along with the image scaled via the 2D API.

    You can click on the top image to swap between the two methods and compare the results.

    /* Image source By SharonPapierdreams - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=97564904 */
    
    
    // reduceImage(img, w, h) 
    // img is image to down sample. w, h is down sampled image size.
    // returns down sampled image as a canvas. 
    function reduceImage(img, w, h) {
        var x, y = 0, sx, sy, ssx, ssy, r, g, b, a;
        const RGB2sRGB = 2.2;  // this is an approximation of sRGB
        const sRGB2RGB = 1 / RGB2sRGB;
        const sRGBMax = 255 ** RGB2sRGB;
    
        const srcW = img.naturalWidth;
        const srcH = img.naturalHeight;
        const srcCan = Object.assign(document.createElement("canvas"), {width: srcW, height: srcH});
        const sCtx = srcCan.getContext("2d");
        const destCan = Object.assign(document.createElement("canvas"), {width: w, height: h});
        const dCtx = destCan.getContext("2d");
        sCtx.drawImage(img, 0 , 0);
        const srcData = sCtx.getImageData(0,0,srcW,srcH).data;
        const destData = dCtx.getImageData(0,0,w,h);
    
        // Warning if yStep or xStep span less than 2 pixels then there may be
        // banding artifacts in the image
        const xStep = srcW / w, yStep = srcH / h;
        if (xStep < 2 || yStep < 2) {console.warn("Downsample too low. Should be at least 50%");}
        const area = xStep * yStep;
        const sD = srcData, dD = destData.data;
    
        
        while (y < h) {
            sy = y * yStep;
            x = 0;
            while (x < w) {
                sx = x * xStep;
                const ssyB = sy + yStep;
                const ssxR = sx + xStep;
                r = g = b = a = 0;
                ssy = sy | 0;
                while (ssy < ssyB) {
                    const yy1 = ssy + 1;
                    const yArea = yy1 > ssyB ? ssyB - ssy : ssy < sy ? 1 - (sy - ssy) : 1;
                    ssx = sx | 0;
                    while (ssx < ssxR) {
                        const xx1 = ssx + 1;
                        const xArea = xx1 > ssxR ? ssxR - ssx : ssx < sx ? 1 - (sx - ssx) : 1;
                        const srcContribution = (yArea * xArea) / area;
                        const idx = (ssy * srcW + ssx) * 4;
                        r += ((sD[idx  ] ** RGB2sRGB) / sRGBMax) * srcContribution;
                        g += ((sD[idx+1] ** RGB2sRGB) / sRGBMax) * srcContribution;
                        b += ((sD[idx+2] ** RGB2sRGB) / sRGBMax) * srcContribution;
                        a +=  (sD[idx+3] / 255) * srcContribution;
                        ssx += 1;
                    }
                    ssy += 1;
                }
                const idx = (y * w + x) * 4;
                dD[idx]   = (r * sRGBMax) ** sRGB2RGB;
                dD[idx+1] = (g * sRGBMax) ** sRGB2RGB;
                dD[idx+2] = (b * sRGBMax) ** sRGB2RGB;
                dD[idx+3] = a * 255;
                x += 1;
            }
            y += 1;
        }
    
        dCtx.putImageData(destData,0,0);
        return destCan;
    }
    
    
    const scaleBy = 1/3.964; 
    const img = new Image;
    img.crossOrigin = "Anonymous";
    img.src = "https://upload.wikimedia.org/wikipedia/commons/7/71/800_Houston_St_Manhattan_KS_3.jpg";
    img.addEventListener("load", () => {
        const downScaled = reduceImage(img, img.naturalWidth * scaleBy | 0, img.naturalHeight * scaleBy | 0);
        const downScaleByAPI = Object.assign(document.createElement("canvas"), {width: downScaled.width, height: downScaled.height});
        const ctx = downScaleByAPI.getContext("2d");
        ctx.drawImage(img, 0, 0, ctx.canvas.width, ctx.canvas.height);
        const downScaleByAPI_B = Object.assign(document.createElement("canvas"), {width: downScaled.width, height: downScaled.height});
        const ctx1 = downScaleByAPI_B.getContext("2d");
        ctx1.drawImage(img, 0, 0, ctx.canvas.width, ctx.canvas.height);    
        img1.appendChild(downScaled);
        img2.appendChild(downScaleByAPI_B);
        info2.textContent = "Original image " + img.naturalWidth + " by " + img.naturalHeight + "px Downsampled to " + ctx.canvas.width + " by " + ctx.canvas.height+ "px"
        var a = 0;
        img1.addEventListener("click", () => {
            if (a) {
                info.textContent = "High quality JS downsampler";
                img1.removeChild(downScaleByAPI);
                img1.appendChild(downScaled);   
            } else {            
                info.textContent = "Standard 2D API downsampler"; 
                img1.removeChild(downScaled);
                img1.appendChild(downScaleByAPI);            
            }
            a = (a + 1) % 2;
        })
    }, {once: true})
    /* CSS */
    body { font-family: arial }

    <!-- HTML -->
    <br>Click first image to switch between JS rendered and 2D API rendered versions<br><br>
    <span id="info2"></span><br><br>
    <div id="img1"> <span id="info">High quality JS downsampler </span><br></div>
    <div id="img2"> Down sampled using 2D API<br></div>

    Image source <cite><a href="https://commons.wikimedia.org/w/index.php?curid=97564904">By SharonPapierdreams - Own work, CC BY-SA 4.0,</a></cite>

    More on RGB vs sRGB

    sRGB is the color space that all digital media devices use to display content. Humans perceive brightness logarithmically, meaning that the dynamic range of a display device is 1 to ~200,000, which would require 18 bits per channel.

    Display buffers overcome this by storing the channel values as sRGB, with brightness in the range 0-255. When the display hardware converts a value to photons, it first expands it by raising it to the power of 2.2, so as to provide the high dynamic range needed.

    The problem is that processing the display buffer (via the 2D API) ignores this and does not expand the sRGB values; they are treated as plain RGB, resulting in incorrect mixing of colors.
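    A quick way to see the effect, using the same 2.2 power approximation as above, is a 50/50 mix of black and white:

```javascript
const GAMMA = 2.2; // approximate sRGB expansion exponent

// Mixing the stored byte values directly (what the 2D API effectively does):
const naive = Math.round((0 + 255) / 2);   // 128 - noticeably too dark

// Expanding to light-linear intensity first, mixing, then re-encoding:
const correct = Math.round((((0 ** GAMMA) + (255 ** GAMMA)) / 2) ** (1 / GAMMA));

console.log(naive, correct); // 128 186
```

    The darker naive result is exactly the brightness loss visible in the comparison image below.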

    The image shows the difference between sRGB and RGB rendering (RGB as used by the 2D API).

    Note the dark pixels in the center and right images. That is the result of RGB rendering. The left image is rendered using sRGB and does not lose brightness.

    [image: sRGB vs RGB rendering comparison]