
Converting an image with n-bit depth (where 8 < n < 16) to a 16-bit image with Intel IPP


I need to call the following Intel IPP functions on input image data.

  • ippiColorToGray
  • ippiLUTPalette
  • ippiScale (only for 16-bit images)
  • ippiCopy
  • ippiSet
  • ippiAlphaComp

Until now I was using the 8-bit and 16-bit versions of these functions, but now we also accept 12-bit images as input. For ippiLUTPalette I see that we can pass the bit size we are dealing with, but the other APIs don't offer that.

One approach I was thinking of was to convert images whose bit depth falls between 8 and 16 bits to 16-bit images and continue working on the result. I believe ippiScale performs such conversions, but I couldn't find a flavor of it that works on bit depths other than 8, 16 and 32.

Is there a way to perform this conversion?

Or is it possible to call the aforementioned APIs on images with bit depths other than 8 and 16 bits?


Solution

  • Data types are based on the processor architecture; usually they are a fraction or a multiple of the word length.

    Hence with modern CPUs, and therefore in modern programming languages, there is no 12-bit data type. You have 64-, 32-, 16- and 8-bit units of addressable memory.

    But no one stops you from putting a smaller number of bits into a register.

    So if you want to store 12-bit values, you usually put them in the lower 12 bits of a 16-bit type.

    That's why image processing algorithms usually support 8-, 16-, ... bit data. You can use any 16-bit algorithm on 12-bit intensity information exactly as you would on 16-bit data, as sketched below.
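
    A minimal sketch of that idea, assuming the conventional ippiMalloc_16u_C1, ippiSet_16u_C1R and ippiCopy_16u_C1R signatures; process12BitAs16u is just an illustrative wrapper, not an IPP function:

```cpp
// 12-bit pixels kept in the low 12 bits of Ipp16u samples and processed
// with the ordinary _16u_ flavors. Error handling is omitted for brevity.
#include <ipp.h>

void process12BitAs16u(int width, int height)   // hypothetical helper
{
    IppiSize roi = { width, height };
    int srcStep = 0, dstStep = 0;

    // Allocate 16-bit single-channel images; the 12-bit data simply lives
    // in the lower 12 bits of each Ipp16u sample (values 0..4095).
    Ipp16u* pSrc = ippiMalloc_16u_C1(width, height, &srcStep);
    Ipp16u* pDst = ippiMalloc_16u_C1(width, height, &dstStep);

    // Fill the source with a constant 12-bit value (mid-grey 2048).
    ippiSet_16u_C1R(2048, pSrc, srcStep, roi);

    // Any 16u operation works unchanged on the 12-bit payload.
    ippiCopy_16u_C1R(pSrc, srcStep, pDst, dstStep, roi);

    ippiFree(pSrc);
    ippiFree(pDst);
}
```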

    In some cases you may want to scale the 12-bit information to 16-bit, but in most cases that is not necessary.

    Scaling 12-bit to 16-bit is simple math: value_16 / (2^16 - 1) = value_12 / (2^12 - 1), i.e. value_16 = value_12 * (2^16 - 1) / (2^12 - 1). Of course you may also refer your 12-bit value to the maximum value in the image instead of 2^12 - 1; then you always use the full 16-bit range.
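
    A minimal sketch of the fixed-range rescale, written as a plain loop; scale12To16 is a hypothetical helper, and IPP's arithmetic primitives could do the same, but the exact flavor to use depends on your IPP version:

```cpp
#include <cstdint>
#include <cstddef>

// Map 12-bit values (0..4095) onto the full 16-bit range (0..65535),
// rounding to the nearest integer.
void scale12To16(const std::uint16_t* src, std::uint16_t* dst, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        // value16 = value12 * (2^16 - 1) / (2^12 - 1)
        std::uint32_t v = src[i] & 0x0FFFu;   // keep only the 12-bit payload
        dst[i] = static_cast<std::uint16_t>((v * 65535u + 2047u) / 4095u);
    }
}
```

    For the image-maximum variant, replace 4095 with the largest value actually present in the buffer.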