I am using vImage_Buffer for image processing, such as grayscale conversion. When I convert an image to grayscale, I need to know the pixel format of the source image so that I can apply a different coefficient to each color channel.
In other words, in this example from Apple, I need to multiply the red channel by 0.2126, the green by 0.7152, and the blue by 0.0722 to get the gray value of each pixel.
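For reference, this is roughly the call I mean (a minimal sketch using vImageMatrixMultiply_ARGB8888ToPlanar8; the coefficient order assumes the channels are A,R,G,B in memory, which is exactly the assumption I can't verify):

```c
#include <Accelerate/Accelerate.h>

// Sketch: convert an 8-bit, 4-channel buffer to an 8-bit planar gray buffer.
// src is assumed to be A,R,G,B in memory; dst is a Planar8 buffer of the
// same width and height.
vImage_Error toGray(const vImage_Buffer *src, const vImage_Buffer *dst)
{
    const int32_t divisor = 0x1000;
    const int16_t coefficients[4] = {
        0,                               // alpha (ignored)
        (int16_t)(0.2126f * divisor),    // red
        (int16_t)(0.7152f * divisor),    // green
        (int16_t)(0.0722f * divisor)     // blue
    };
    const int16_t preBias[4] = { 0, 0, 0, 0 };

    return vImageMatrixMultiply_ARGB8888ToPlanar8(src, dst,
                                                  coefficients, divisor,
                                                  preBias,
                                                  0,    // post-bias
                                                  kvImageNoFlags);
}
```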
The problem is that I don't know how to get the pixel format (ARGB? RGBA? BGRA? ...) from an existing vImage_Buffer. Even the documentation says that vImage_Buffer does not describe the pixel format itself.
Any idea?
A vImage_Buffer only describes a rectangular array of pixel data. The type of the data (unorm8, float, etc.) is inferred from the name of the function that operates on it. This should all be fairly clear.
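For reference, the struct itself carries no format information at all; this is (modulo comments) its declaration from the Accelerate headers:

```c
typedef struct vImage_Buffer {
    void             *data;      // pointer to the top-left pixel
    vImagePixelCount  height;    // number of rows
    vImagePixelCount  width;     // number of pixels per row
    size_t            rowBytes;  // bytes from one row to the next
} vImage_Buffer;
```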
From vImage's perspective, the channel order is whatever you say it is. For most vImage functions, the channel order doesn't matter, since all of the channels are treated the same: they may be named _ARGB8888, but really they are _XXXX8888. For other vImage functions (e.g. PremultiplyData), only one channel is treated differently. In that case, it only matters that the alpha channel appears in either the first or the last position, as described by the function name; the ordering of the other channels doesn't matter because they are treated the same. For the particular function you are talking about, it is your job to know the ordering of the red, green, and blue channels and to adjust the ordering of the coefficients matrix accordingly.
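For example (a sketch, not taken from the API documentation): with vImageMatrixMultiply_ARGB8888ToPlanar8, matrix[i] multiplies whatever channel happens to be i-th in memory, so you permute the Rec. 709 luma coefficients yourself to match your actual layout:

```c
#include <Accelerate/Accelerate.h>

enum { kDivisor = 0x1000 };
#define COEFF(x) ((int16_t)((x) * kDivisor))

/* If the buffer really is A,R,G,B in memory: */
static const int16_t kCoefficientsARGB[4] =
    { 0, COEFF(0.2126f), COEFF(0.7152f), COEFF(0.0722f) };

/* If the buffer is actually B,G,R,A in memory, permute the same values: */
static const int16_t kCoefficientsBGRA[4] =
    { COEFF(0.0722f), COEFF(0.7152f), COEFF(0.2126f), 0 };
```

Either array is passed as the matrix argument (with kDivisor as the divisor); the function never looks at channel names, only at positions.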
The channel ordering in your data is probably set by whatever produced your image data in the first place. Often that is CoreGraphics / ImageIO. In that case, it's a bit complicated: the color channel order matches the order of the colors in the CGImageRef's colorspace, and the alpha channel (if present) comes either first or last based on the CGImage's bitmap info, part of which is the CGImageAlphaInfo.
As a final complication, the entire thing may be subject to a 16- or 32-bit endianness transform. If the size of a channel is smaller than the endianness transform quantum (16 or 32 bits), then the order of the channels relative to one another has been swapped around per the endianness transform. By far the most common case of this is BGRA unorm8 data, which is encoded as ARGB 8-bit data with a 32-bit little-endian transform tacked on to it. However, it is possible to get 16-bit-per-channel grayscale-alpha data with the GA order transposed due to a 32-bit endian transform, which simultaneously swaps G and A relative to one another and converts the 16-bit samples to 16-bit little-endian. (That probably never happens in nature, since you could more clearly classify it as alpha-first with a 16-bit little-endian transform. The encoding is legal, though.)
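If it helps, here is a rough sketch (my own, not exhaustive) of how you might classify the common 8-bit, 4-channel CGImage layouts from the bitmap info; real code would also check the colorspace model and bits per component:

```c
#include <stdbool.h>
#include <CoreGraphics/CoreGraphics.h>

typedef enum { kLayoutARGB, kLayoutRGBA, kLayoutBGRA, kLayoutABGR, kLayoutUnknown } Layout;

static Layout ClassifyLayout(CGImageRef image)
{
    CGBitmapInfo info = CGImageGetBitmapInfo(image);
    CGImageAlphaInfo alpha = (CGImageAlphaInfo)(info & kCGBitmapAlphaInfoMask);
    CGBitmapInfo byteOrder = info & kCGBitmapByteOrderMask;

    bool alphaFirst = (alpha == kCGImageAlphaFirst ||
                       alpha == kCGImageAlphaPremultipliedFirst ||
                       alpha == kCGImageAlphaNoneSkipFirst);
    bool alphaLast  = (alpha == kCGImageAlphaLast ||
                       alpha == kCGImageAlphaPremultipliedLast ||
                       alpha == kCGImageAlphaNoneSkipLast);
    bool little32   = (byteOrder == kCGBitmapByteOrder32Little);

    if (alphaFirst)
        return little32 ? kLayoutBGRA : kLayoutARGB;  /* ARGB byte-swapped -> BGRA */
    if (alphaLast)
        return little32 ? kLayoutABGR : kLayoutRGBA;  /* RGBA byte-swapped -> ABGR */
    return kLayoutUnknown;  /* no alpha, or something more exotic */
}
```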
There are a few examples in the vImage_Utilities.h C header (not sure about the Swift version) that show common CG encodings.
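If you would rather not decode those flags by hand, one option (a sketch, assuming the vImage_Utilities.h API is available on your target) is to let vImage import the CGImage into a layout you choose up front, so the channel order is known by construction:

```c
#include <Accelerate/Accelerate.h>

/* Sketch: import a CGImage into a vImage_Buffer whose layout *you* pick
 * (here 8-bit ARGB), so there is nothing left to guess about channel order. */
vImage_Error ImportAsARGB8888(CGImageRef image, vImage_Buffer *outBuffer)
{
    vImage_CGImageFormat format = {
        .bitsPerComponent = 8,
        .bitsPerPixel     = 32,
        .colorSpace       = NULL,   /* NULL lets vImage use a default (sRGB) */
        .bitmapInfo       = kCGImageAlphaFirst | kCGBitmapByteOrderDefault,
        .version          = 0,
        .decode           = NULL,
        .renderingIntent  = kCGRenderingIntentDefault,
    };
    /* vImage allocates outBuffer->data and converts the image into it. */
    return vImageBuffer_InitWithCGImage(outBuffer, &format, NULL, image,
                                        kvImageNoFlags);
}
```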