Wikipedia's Steganography article gives an example image containing hidden image data.
Wikipedia notes:
Image of a tree with a steganographically hidden image. The hidden image is revealed by removing all but the two least significant bits of each color component and a subsequent normalization. The hidden image is shown (here).
QUESTION: I'm confused about "subsequent normalisation"; assuming working Python 2.x code based on PIL module, how does normalisation factor into the retrieval?
The subsequent normalization is linear interpolation of each color component.
Say the red color component of pixel (1, 1) is 234.
The binary representation of 234 is
In [1]: bin(234)
Out[1]: '0b11101010'
We can remove everything but the two least significant bits with some bitwise operation:
In [2]: bin(234 & 0b11)
Out[2]: '0b10'
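The same mask works on any component value; here is a small sketch applying it to a few sample values (the sample values are arbitrary, chosen just for illustration):

```python
# Mask off all but the two least significant bits of a few sample values.
# 0b11 (decimal 3) keeps bits 0 and 1 and zeroes every higher bit.
samples = [234, 77, 3, 255]
low_bits = [v & 0b11 for v in samples]
print(low_bits)  # every result falls in the range 0-3
```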
An 8-bit image offers 256 possible shades per component, but our masked value spans only 2 bits, i.e. 4 possible shades.
The normalization step linearly interpolates to stretch the 2-bit value across the 8-bit range:
In [3]: (234 & 0b11) * (256/4)
Out[3]: 128
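To see the stretch for every possible input, here is a sketch mapping all four 2-bit values with the same scale factor:

```python
# Stretch each possible 2-bit value (0-3) back into the 8-bit range
# using the same scale factor as above (256 // 4 == 64).
scale = 256 // 4
stretched = [v * scale for v in range(4)]
print(stretched)  # [0, 64, 128, 192]
```

Note that scaling by 64 tops out at 192; multiplying by 255 // 3 == 85 instead would map 3 to exactly 255, but either stretch spreads the four shades far enough apart to make the hidden image visible.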
Doing this on each color component of each pixel makes the hidden image (the cat) appear.
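Putting both steps together, here is a minimal pure-Python sketch of the retrieval; the `pixels` list is a hypothetical stand-in for what PIL's `list(img.getdata())` would return for an RGB image:

```python
def reveal(pixels, scale=256 // 4):
    """Keep the two least significant bits of each color component
    and stretch them back into the 8-bit range."""
    return [tuple((c & 0b11) * scale for c in px) for px in pixels]

# Hypothetical 2-pixel "image"; with PIL you would use list(img.getdata()).
pixels = [(234, 77, 3), (255, 0, 128)]
print(reveal(pixels))  # [(128, 64, 192), (192, 0, 0)]
```

With PIL itself, `Image.eval(img, lambda c: (c & 0b11) * 64)` applies the same per-component transform to a whole image in one call.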