
What's the difference between "Color Resolution" and "Size of Global Color Table" in the GIF89a spec?


I can't figure out the difference between the "Color Resolution" field and the "Size of Global Color Table" field in the GIF89a spec https://www.w3.org/Graphics/GIF/spec-gif89a.txt (pages 9-10):

Color Resolution - Number of bits per primary color [...], minus 1. This value represents the size of the entire palette [...]

Size of Global Color Table - [...] To determine that actual size of the color table, raise 2 to [the value of the field + 1].

...

[The Global Color Table] is a sequence of bytes representing red-green-blue color triplets [...] and contains a number of bytes equal to 3 x 2^(Size of Global Color Table+1).

Let's say we want to create an image with 3 colors: red, green, and blue. In this case we need at least ceil(log2(3)) = 2 bits per color index (00 = red, 01 = green, 10 = blue). Then, according to the spec, the Color Resolution (CR) field must be set to ceil(log2(3)) - 1 = 2 - 1 = 1:

packed fields = X001XXXX
                 ---
                 CR field

More generally, if N is the number of colors, then CR = ceil(log2(N)) - 1.
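
To make that reading concrete, here is a minimal Python sketch of the question's formula; color_resolution is a hypothetical helper name, and it assumes N >= 2 colors:

    import math

    def color_resolution(n_colors):
        """CR = ceil(log2(N)) - 1, per the question's reading of the spec."""
        return math.ceil(math.log2(n_colors)) - 1

    # Packed byte layout: GCT flag (bit 7), CR (bits 6-4),
    # sort flag (bit 3), Size of GCT (bits 2-0).
    cr = color_resolution(3)        # 3 colors -> CR = 1
    packed = (cr & 0b111) << 4      # place CR in bits 6-4: X001XXXX
    print(f"CR = {cr}, packed = {packed:08b}")   # CR = 1, packed = 00010000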

Regarding the size of the Global Color Table (GCT): each color requires 3 RGB bytes, so the number of entries = the number of bytes / 3 = 2^(Size of GCT + 1). Since we want to store 3 colors, I would round up to the smallest power of 2 that is at least 3, which is 2^ceil(log2(3)) = 2^2 = 4. Then 2^(Size of GCT + 1) = 2^ceil(log2(3)), so Size of GCT + 1 = ceil(log2(3)) and Size of GCT = ceil(log2(3)) - 1 = 2 - 1 = 1:

packed fields = XXXXX001
                     ---
                     Size of GCT field

Again, if N is the number of colors, then Size of GCT = ceil(log2(N)) - 1.
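
The same arithmetic as a sketch; size_of_gct is again a hypothetical name, and the padding to the next power of two follows the reasoning above:

    import math

    def size_of_gct(n_colors):
        """Smallest field value such that 2^(field + 1) >= N (assumes N >= 2)."""
        return math.ceil(math.log2(n_colors)) - 1

    field = size_of_gct(3)              # 3 colors -> field = 1
    entries = 2 ** (field + 1)          # 4 palette entries (3 used, 1 padding)
    table_bytes = 3 * entries           # 3 x 2^(field + 1) = 12 bytes of RGB triplets
    print(field, entries, table_bytes)  # 1 4 12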

As you can see, CR = Size of GCT = ceil(log2(N)) - 1. It seems like the CR field and the Size of GCT field always hold the same value. I suppose I'm wrong, because if I were right one of these two fields would be useless, but so far I haven't found any clarification in the spec.

I'm not alone in being confused: http://giflib.sourceforge.net/whatsinagif/bits_and_bytes.html. That article applies the definition of the Size of GCT field to the CR field, and the Size of GCT field itself is simply not defined at all:

[...] the color resolution [...] allow you to compute [the] size [of the GCT]. If the value of this field is N, the number of entries in the global color table will be 2 ^ (N+1) [...]. Thus, the 001 in the sample image represents 2 bits/pixel; 111 would represent 8 bits/pixel.

Does anybody know of a situation where these two fields differ?


Solution

  • From the spec:

    iv) Color Resolution - Number of bits per primary color available
                to the original image, minus 1. This value represents the size of
                the entire palette from which the colors in the graphic were
                selected, not the number of colors actually used in the graphic.
                For example, if the value in this field is 3, then the palette of
                the original image had 4 bits per primary color available to create
                the image.  This value should be set to indicate the richness of
                the original palette, even if not every color from the whole
                palette is available on the source machine.
    

    So to be clear, Color Resolution is supposed to be the number of bits per primary color (minus 1) available to the original image the GIF was made from, while Size of Global Color Table describes the palette actually stored in the file.
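
    To answer the closing question directly, here is a hedged sketch of a case where the two fields would differ under this reading. The scenario (a 24-bit truecolor source image quantized down to a 4-color palette) is my own illustration, not an example from the spec:

        import math

        # Hypothetical scenario: the GIF was made from a 24-bit truecolor image
        # (8 bits per primary color), then quantized to a 4-entry palette.
        bits_per_primary_in_source = 8
        stored_palette_size = 4

        cr = bits_per_primary_in_source - 1                       # 7
        gct_size = math.ceil(math.log2(stored_palette_size)) - 1  # 1

        packed = 0b10000000 | (cr << 4) | gct_size  # GCT flag set, sort flag clear
        print(f"CR = {cr}, Size of GCT = {gct_size}, packed = {packed:08b}")
        # CR = 7, Size of GCT = 1, packed = 11110001

    Under this interpretation CR is informational metadata about the source image; in practice most decoders ignore it and rely only on the Size of GCT field to determine how many bytes of color table to read.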