I just began working with PyPNG. But in the metadata is one item I don't understand: planes. So two examples of two files:
File 1
'bitdepth': 8, 'interlace': 0, 'planes': 1, 'greyscale': False, 'alpha': False, 'size': (818, 1000)
File 2
'bitdepth': 8, 'interlace': 0, 'planes': 4, 'greyscale': False, 'alpha': True, 'size': (818, 1000)
I skipped the palette information to shorten the snippets and obviously both files only differ in number of planes and the alpha channel.
So far I have figured out that in file 2 each row of the pixel array contains exactly four items per pixel, defining red, green, blue and alpha. So each row array has a length of 3272 items.
But in file 1 each row array has a length of only 818 items.
Can anybody explain whether there is a relation between the number of planes and the array length, and how to extract the colours for a given pixel from file 1?
The "planes" are, roughly, "channels". The number of planes corresponds to the dimension of each pixel value.
If you have 1 plane, then each pixel is represented by a single scalar value. This could be a byte if bitdepth=8, a bit if bitdepth=1, or a word (16 bits) if bitdepth=16, etc. That value can represent either a greyscale value (monochrome images) or a palette index (indexed images).
If you have more than one plane, then each pixel is represented by a tuple (array) of scalar values.
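Concretely, PyPNG hands you each row as a flat sequence of width * planes values, which is exactly why your two row lengths differ. A minimal sketch, assuming the dimensions from the question (the `pixel` helper is mine, not part of PyPNG):

```python
# Each row PyPNG returns is a flat sequence of width * planes values.
width = 818

planes_palette = 1   # file 1: one palette index per pixel
planes_rgba = 4      # file 2: R, G, B, A per pixel

print(width * planes_palette)  # 818 values per row
print(width * planes_rgba)     # 3272 values per row

# To pull pixel x out of a flat row, slice out its `planes` values:
def pixel(row, x, planes):
    return tuple(row[x * planes:(x + 1) * planes])

# e.g. pixel 1 of a 3-pixel RGBA row:
row = [10, 20, 30, 255, 40, 50, 60, 255, 70, 80, 90, 255]
print(pixel(row, 1, 4))  # (40, 50, 60, 255)
```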
The possibilities (in PNG) are:

planes  channels    meaning
1       [G]         greyscale (monochrome)
1       [I]         palette index (indexed colour)
2       [G A]       greyscale with transparency
3       [R G B]     full colour
4       [R G B A]   full colour with transparency
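The pattern in that table can be written as a small rule. This is a sketch of my own (not PyPNG code) that derives the plane count from the colour-type flags you see in the metadata:

```python
def n_planes(greyscale, alpha, palette):
    """Derive the number of planes from PNG colour-type flags (a sketch)."""
    if palette:
        # Indexed images always have exactly 1 plane: the palette index.
        # Any transparency lives in the palette entries, not in an extra plane.
        return 1
    return (1 if greyscale else 3) + (1 if alpha else 0)

print(n_planes(greyscale=False, alpha=False, palette=True))   # file 1 -> 1
print(n_planes(greyscale=False, alpha=True, palette=False))   # file 2 -> 4
```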
In your case, the first image has 1 plane and a palette: each pixel occupies one byte. The second is RGBA: each pixel occupies 4 bytes.
To extract the pixel values in the first case, you interpret each value (0-255) as an index into the palette. The palette can hold up to 256 colours (possibly fewer), stored as RGB or RGBA tuples.
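The lookup itself is just indexing. A sketch with a hypothetical 4-entry palette and a hypothetical 4-pixel row; with PyPNG you would get these from Reader.palette() and Reader.read() respectively (PyPNG can also do this expansion for you, e.g. via Reader.asRGB8()):

```python
# Hypothetical palette and row, for illustration only.
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]  # index -> (R, G, B)
row = [0, 1, 3, 2]  # one palette index per pixel (planes == 1)

# Expand each index to its palette colour:
rgb_row = [palette[index] for index in row]
print(rgb_row)  # [(0, 0, 0), (255, 0, 0), (0, 0, 255), (0, 255, 0)]
```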