c#, image-processing, compression, color-depth

Reducing color depth in an image is not reducing the file size?


I use this code to reduce the color depth of an image:

public void ApplyDecreaseColourDepth(int offset)
{
    int A, R, G, B;

    Color pixelColor;

    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            pixelColor = bitmapImage.GetPixel(x, y);

            // Keep the alpha channel as it is.
            A = pixelColor.A;

            // Snap each component to one below the nearest multiple
            // of offset, clamping at 0.
            R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);

            if (R < 0)
            {
                R = 0;
            }

            G = ((pixelColor.G + (offset / 2)) - ((pixelColor.G + (offset / 2)) % offset) - 1);

            if (G < 0)
            {
                G = 0;
            }

            B = ((pixelColor.B + (offset / 2)) - ((pixelColor.B + (offset / 2)) % offset) - 1);

            if (B < 0)
            {
                B = 0;
            }

            bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
        }
    }
}

My first question is: the offset that I pass to the function is not the depth itself, is that right?

The second is that when I save the image after reducing its color depth, I get a file of the same size as the original image. Isn't it logical that I should get a smaller file, or am I wrong?

This is the code that I use to save the modified image:

private Bitmap bitmapImage;

public void SaveImage(string path)
{
    bitmapImage.Save(path);
} 

Solution

  • Let's start by cleaning up the code a bit. The following pattern:

    R = ((pixelColor.R + (offset / 2)) - ((pixelColor.R + (offset / 2)) % offset) - 1);
    if (R < 0)
    {
        R = 0;
    }
    

    is equivalent to this (for non-negative x and positive n, the integer expression x - x % n is the same as x / n * n):

    R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
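
    To see that the two forms agree, here is a minimal throwaway check (a sketch assuming C# top-level statements, not part of the original code) that compares them for every byte value and a few power-of-two step sizes:

    using System;
    using System.Diagnostics;

    for (int offset = 2; offset <= 64; offset *= 2)
    {
        for (int v = 0; v <= 255; v++)
        {
            // The original clamped step function...
            int original = (v + (offset / 2)) - ((v + (offset / 2)) % offset) - 1;
            if (original < 0)
            {
                original = 0;
            }

            // ...and the simplified form; the assertion never fires.
            int simplified = Math.Max(0, (v + offset / 2) / offset * offset - 1);

            Debug.Assert(original == simplified);
        }
    }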
    

    You can thus simplify your function to this:

    public void ApplyDecreaseColourDepth(int offset)
    {
        for (int y = 0; y < bitmapImage.Height; y++)
        {
            for (int x = 0; x < bitmapImage.Width; x++)
            {
                Color pixelColor = bitmapImage.GetPixel(x, y);

                int A = pixelColor.A;
    
                int R = Math.Max(0, (pixelColor.R + offset / 2) / offset * offset - 1);
                int G = Math.Max(0, (pixelColor.G + offset / 2) / offset * offset - 1);
                int B = Math.Max(0, (pixelColor.B + offset / 2) / offset * offset - 1);
    
                bitmapImage.SetPixel(x, y, Color.FromArgb(A, R, G, B));
            }
        }
    }
    

    To answer your questions:

    1. Correct; the offset is the size of the steps in the step function. The depth per color component is the original depth minus log2(offset). For example, if the original image has a depth of eight bits per component (bpc) and the offset is 16, then the depth of each component is 8 - log2(16) = 8 - 4 = 4 bpc. Note, however, that this only indicates how much entropy each output component can hold, not how many bits per component will actually be used to store the result.
    2. The size of the output file depends on the stored color depth and the compression used. Simply reducing the number of distinct values each component can have won't automatically result in fewer bits being used per component, so an uncompressed image won't shrink unless you explicitly choose an encoding that uses fewer bits per component (see the sketch below). If you are saving a compressed format such as PNG, you might see an improvement with the transformed image, or you might not; it depends on the content of the image. Images with a lot of flat untextured areas, such as line art drawings, will see negligible improvement, whereas photos will probably benefit noticeably from the transform (albeit at the expense of perceptual quality).
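
    As an illustration of that second point, here is a sketch of saving with an explicit encoder. It assumes the same bitmapImage field as in the question; the GIF line is just one example of a palette-based (indexed) format, and GDI+ applies its own quantizer when writing it:

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;

    public void SaveImage(string path)
    {
        // PNG still records the bitmap's full 32 bits per pixel; its
        // lossless compressor may or may not shrink the quantized
        // image, depending on the content.
        bitmapImage.Save(path, ImageFormat.Png);

        // To actually spend fewer bits per pixel, re-encode into an
        // indexed format; GIF, for instance, forces an 8-bit palette.
        bitmapImage.Save(Path.ChangeExtension(path, ".gif"), ImageFormat.Gif);
    }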