Tags: python, ctypes, labview

Which type of ctypes pointer should I pass to NI-IMAQ's imgBayerColorDecode?


I'm using ctypes to access the image acquisition API from National Instruments (NI-IMAQ). It has a function called imgBayerColorDecode(), which I'm using on a Bayer-encoded image returned by imgSnap(). I would like to compare the decoded output (an RGB image) to some numpy ndarrays that I will create from the raw data that imgSnap returns.

However, there are 2 problems.

The first is simple: getting the imgbuffer returned by imgSnap into a numpy array. There is a catch, though: if your machine is 64-bit and you have more than 3 GB of RAM, you cannot create the array with numpy and pass it as a pointer to imgSnap. That's why you have to implement the workaround described on NI's forums (NI ref - first 2 posts): disable an error message (the imaq.niimaquDisable32bitPhysMemLimitEnforcement call in the code attached below) and make sure it is the IMAQ library that allocates the memory for the image (imaq.imgCreateBuffer). After that, this recipe on SO should be able to convert the buffer into a numpy array again. But I'm unsure whether I made the correct changes to the datatypes: the camera has 1020x1368 pixels, and each pixel intensity is recorded with 10 bits of precision. It returns the image over Camera Link, and I'm assuming it does this with 2 bytes per pixel, for ease of data transport. Does this mean I have to adapt the recipe given in the other SO question from:

buffer = numpy.core.multiarray.int_asbuffer(ctypes.addressof(y.contents), 8*array_length)
a = numpy.frombuffer(buffer, float)

to this:

bufsize = 1020*1368*2
buffer = numpy.core.multiarray.int_asbuffer(ctypes.addressof(y.contents), bufsize)
a = numpy.frombuffer(buffer, numpy.int16)
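
For what it's worth, here is a sketch of an alternative way to view that memory, going through np.ctypeslib instead of int_asbuffer; the dummy buffer only stands in for the one IMAQ allocates, and the unsigned 16-bit, little-endian layout is my assumption:

import ctypes as C
import numpy as np

height, width = 1020, 1368                 # ROI as reported by imgGetAttribute
n_pixels = height * width

# stand-in for the buffer that imgCreateBuffer/imgSnap would actually fill
dummy = (C.c_uint16 * n_pixels)()
imgbuffer = C.cast(dummy, C.POINTER(C.c_uint16))

# view the ctypes memory as a numpy array without copying
frame = np.ctypeslib.as_array(imgbuffer, shape=(height, width))   # dtype uint16, shape (1020, 1368)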

The second problem is that imgBayerColorDecode() does not give me the output I'm expecting. Below are two images: the first is the output of imgSnap, saved with imgSessionSaveBufferEx(); the second is the same snapshot after it has gone through the demosaicing of imgBayerColorDecode().

  • raw data: i42.tinypic.com/znpr38.jpg
  • bayer decoded: i39.tinypic.com/n12nmq.jpg

As you can see, the Bayer-decoded image is still grayscale, and moreover it does not resemble the original image (small remark: the images were scaled for upload with ImageMagick). The original image was taken with a red color filter in front of some mask. From it (and two other color filters), I know that the Bayer color filter looks like this in the top-left corner:

BGBG
GRGR

I believe I'm doing something wrong in passing the correct type of pointer to imgBayerColorDecode; my code is appended below.

#!/usr/bin/env python
from __future__ import division

import ctypes as C
import ctypes.util as Cutil
import time


# useful references:
# location of the niimaq.h: C:\Program Files (x86)\National Instruments\NI-IMAQ\Include
# location of the camera files: C:\Users\Public\Documents\National Instruments\NI-IMAQ\Data
# check it C:\Users\Public\Documents\National Instruments\NI-IMAQ\Examples\MSVC\Color\BayerDecode

class IMAQError(Exception):
    """A class for errors produced during the calling of National Intrument's IMAQ functions.
    It will also produce the textual error message that corresponds to a specific code."""

    def __init__(self, code):
        self.code = code
        text = C.create_string_buffer(256)  # writable buffer for imgShowError to fill in
        imaq.imgShowError(code, text)
        self.message = "{}: {}".format(self.code, text.value)
        # Call the base class constructor with the parameters it needs
        Exception.__init__(self, self.message)


def imaq_error_handler(code):
    """Print the textual error message that is associated with the error code."""

    if code < 0:
        raise IMAQError(code)
        free_associated_resources = 1
        imaq.imgSessionStopAcquisition(sid)
        imaq.imgClose(sid, free_associated_resources)
        imaq.imgClose(iid, free_associated_resources)
    else:
        return code

if __name__ == '__main__':
    imaqlib_path = Cutil.find_library('imaq')
    imaq = C.windll.LoadLibrary(imaqlib_path)


    imaq_function_list = [  # this is not an exhaustive list, merely the ones used in this program
        imaq.imgGetAttribute,
        imaq.imgInterfaceOpen,
        imaq.imgSessionOpen,
        imaq.niimaquDisable32bitPhysMemLimitEnforcement,  # because we're running on a 64-bit machine with over 3GB of RAM
        imaq.imgCreateBufList,
        imaq.imgCreateBuffer,
        imaq.imgSetBufferElement,
        imaq.imgSnap,
        imaq.imgSessionSaveBufferEx,
        imaq.imgSessionStopAcquisition,
        imaq.imgClose,
        imaq.imgCalculateBayerColorLUT,
        imaq.imgBayerColorDecode ]

    # for all imaq functions we're going to call, we should specify that if they
    # produce an error (a number), we want to see the error message (textually)
    for func in imaq_function_list:
        func.restype = imaq_error_handler




    INTERFACE_ID = C.c_uint32
    SESSION_ID = C.c_uint32
    BUFLIST_ID = C.c_uint32
    iid = INTERFACE_ID(0)
    sid = SESSION_ID(0)
    bid = BUFLIST_ID(0)
    array_16bit = 2**16 * C.c_uint32
    redLUT, greenLUT, blueLUT  = [ array_16bit() for _ in range(3) ]
    red_gain, blue_gain, green_gain = [ C.c_double(val) for val in (1., 1., 1.) ]

    # OPEN A COMMUNICATION CHANNEL WITH THE CAMERA
    # our camera has been given its proper name in Measurement & Automation Explorer (MAX)
    lcp_cam = 'JAI CV-M7+CL'
    imaq.imgInterfaceOpen(lcp_cam, C.byref(iid))
    imaq.imgSessionOpen(iid, C.byref(sid))

    # START C MACROS DEFINITIONS
    # define some C preprocessor macros (these are all defined in the niimaq.h file)
    _IMG_BASE = 0x3FF60000

    IMG_BUFF_ADDRESS = _IMG_BASE + 0x007E  # void *
    IMG_BUFF_COMMAND = _IMG_BASE + 0x007F  # uInt32
    IMG_BUFF_SIZE = _IMG_BASE + 0x0082  #uInt32
    IMG_CMD_STOP = 0x08  # single shot acquisition

    IMG_ATTR_ROI_WIDTH = _IMG_BASE + 0x01A6
    IMG_ATTR_ROI_HEIGHT = _IMG_BASE + 0x01A7
    IMG_ATTR_BYTESPERPIXEL = _IMG_BASE + 0x0067  
    IMG_ATTR_COLOR = _IMG_BASE + 0x0003  # true = supports color
    IMG_ATTR_PIXDEPTH = _IMG_BASE + 0x0002  # pix depth in bits
    IMG_ATTR_BITSPERPIXEL = _IMG_BASE + 0x0066 # aka the bit depth

    IMG_BAYER_PATTERN_GBGB_RGRG = 0
    IMG_BAYER_PATTERN_GRGR_BGBG = 1
    IMG_BAYER_PATTERN_BGBG_GRGR = 2
    IMG_BAYER_PATTERN_RGRG_GBGB = 3
    # END C MACROS DEFINITIONS

    width, height = C.c_uint32(), C.c_uint32()
    has_color, pixdepth, bitsperpixel, bytes_per_pixel = [ C.c_uint8() for _ in range(4) ]

    # poll the camera (or is it the camera file (icd)?) for these attributes and store them in the variables
    for var, macro in [ (width, IMG_ATTR_ROI_WIDTH), 
                        (height, IMG_ATTR_ROI_HEIGHT),
                        (bytes_per_pixel, IMG_ATTR_BYTESPERPIXEL),
                        (pixdepth, IMG_ATTR_PIXDEPTH),
                        (has_color, IMG_ATTR_COLOR),
                        (bitsperpixel, IMG_ATTR_BITSPERPIXEL) ]:
        imaq.imgGetAttribute(sid, macro, C.byref(var))  


    print("Image ROI size: {} x {}".format(width.value, height.value))
    print("Pixel depth: {}\nBits per pixel: {} -> {} bytes per pixel".format(
        pixdepth.value, 
        bitsperpixel.value, 
        bytes_per_pixel.value))

    bufsize = width.value*height.value*bytes_per_pixel.value
    imaq.niimaquDisable32bitPhysMemLimitEnforcement(sid)

    # create the buffer (in a list)
    imaq.imgCreateBufList(1, C.byref(bid))  # Creates a buffer list with one buffer

    # CONFIGURE THE PROPERTIES OF THE BUFFER
    imgbuffer = C.POINTER(C.c_uint16)()  # create a null pointer
    RGBbuffer = C.POINTER(C.c_uint32)()  # placeholder for the Bayer decoded imgbuffer (i.e. demosaiced imgbuffer)
    imaq.imgCreateBuffer(sid, 0, bufsize, C.byref(imgbuffer))  # allocate memory (the buffer) on the host machine (param2==0)
    imaq.imgCreateBuffer(sid, 0, width.value*height.value * 4, C.byref(RGBbuffer))

    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_ADDRESS, C.cast(imgbuffer, C.POINTER(C.c_uint32)))  # my guess is that the cast to an uint32 is necessary to prevent 64-bit callable memory addresses
    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_SIZE, bufsize)
    imaq.imgSetBufferElement(bid, 0, IMG_BUFF_COMMAND, IMG_CMD_STOP)

    # CALCULATE THE LOOKUP TABLES TO CONVERT THE BAYER ENCODED IMAGE TO RGB (=DEMOSAICING)
    imaq.imgCalculateBayerColorLUT(red_gain, green_gain, blue_gain, redLUT, greenLUT, blueLUT, bitsperpixel)


    # CAPTURE THE RAW DATA 

    imgbuffer_vpp = C.cast(C.byref(imgbuffer), C.POINTER(C.c_void_p))
    imaq.imgSnap(sid, imgbuffer_vpp)
    #imaq.imgSnap(sid, imgbuffer)  # <- doesn't work (img produced is entirely black). The above 2 lines are required
    imaq.imgSessionSaveBufferEx(sid, imgbuffer,"bayer_mosaic.png")
    print('1 taken')


    imaq.imgBayerColorDecode(RGBbuffer, imgbuffer, height, width, width, width, redLUT, greenLUT, blueLUT, IMG_BAYER_PATTERN_BGBG_GRGR, bitsperpixel, 0) 
    imaq.imgSessionSaveBufferEx(sid, RGBbuffer, "snapshot_decoded.png")

    free_associated_resources = 1
    imaq.imgSessionStopAcquisition(sid)
    imaq.imgClose(sid, free_associated_resources )
    imaq.imgClose(iid, free_associated_resources )
    print "Finished"

Follow-up: after a discussion with an NI representative, I am getting convinced that the second issue is due to imgBayerColorDecode being limited to 8-bit input images prior to its 2012 release (we are working with the 2010 release). However, I would like to confirm this: if I cast the 10-bit image to an 8-bit image, keeping only the 8 most significant bits, and pass this cast version to imgBayerColorDecode, I expect to see an RGB image.

To do so, I cast the imgbuffer to a numpy array and shift the 10-bit data right by 2 bits:

np_buffer = np.core.multiarray.int_asbuffer(
    ctypes.addressof(imgbuffer.contents), bufsize)
flat_data = np.frombuffer(np_buffer, np.uint16)

# from 10 bit to 8 bit: shift out the 2 least significant bits, then keep
# the low byte of each uint16, which now holds the 8 MSBs (assumes little-endian)
Z = (flat_data >> 2).view(dtype='uint8')[::2]
Z2 = Z.copy()  # just in case

Now I pass the ndarray Z2 to imgBayerColorDecode:

bitsperpixel = 8
imaq.imgBayerColorDecode(RGBbuffer, Z2.ctypes.data_as(
    ctypes.POINTER(ctypes.c_uint8)), height, width, 
    width, width, redLUT, greenLUT, blueLUT, 
    IMG_BAYER_PATTERN_BGBG_GRGR, bitsperpixel, 0)
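
The LUT setup from the original listing was altered accordingly; a sketch of what it looks like for the 8-bit case, reusing the names from above (hard-coding the bit depth to 8 in the imgCalculateBayerColorLUT call is my reading of that change):

array_8bit = 2**8 * C.c_uint32           # 256-entry LUTs instead of 2**16
redLUT, greenLUT, blueLUT = [ array_8bit() for _ in range(3) ]
imaq.imgCalculateBayerColorLUT(red_gain, green_gain, blue_gain,
                               redLUT, greenLUT, blueLUT, 8)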

As sketched above, the original code (shown further up) has been altered slightly so that redLUT, greenLUT and blueLUT are now only 256-element arrays. Finally I call imaq.imgSessionSaveBufferEx(sid, RGBbuffer, save_path). But the result is still grayscale and the image shape is not preserved, so I am still doing something terribly wrong. Any ideas?


Solution

  • After a bit of playing around, it turns out that the RGBbuffer mentioned above does hold the correct data, but imgSessionSaveBufferEx is doing something odd at that point.

    When I pass the data from RGBbuffer back to numpy, reshape this 1D array into the dimensions of the image and then split it into color channels by masking and bit-shifting (e.g. red_channel = (np_RGB & 0x00FF0000) >> 16; see the sketch after this list), I can then save it as a nice color image in PNG format with PIL or pypng.

    I haven't found out why imgSessionSaveBufferEx behaves oddly here, but the approach above works (even though it is really inefficient speed-wise).
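
    A minimal sketch of that masking/bit-shift step, assuming the decoded buffer is packed as 0x00RRGGBB (the hard-coded image size and filename are placeholders, and the dummy buffer stands in for the one imgBayerColorDecode fills) and using PIL (Pillow) for the save:

    import ctypes as C
    import numpy as np
    from PIL import Image

    height, width = 1020, 1368
    n_pixels = height * width

    # stand-in for the RGBbuffer (POINTER(c_uint32)) filled by imgBayerColorDecode
    dummy = (C.c_uint32 * n_pixels)()
    RGBbuffer = C.cast(dummy, C.POINTER(C.c_uint32))

    # view the packed 32-bit pixels as a 2D numpy array without copying
    np_RGB = np.ctypeslib.as_array(RGBbuffer, shape=(height, width))

    # split into 8-bit channels; adjust the masks/shifts if the layout differs
    red   = ((np_RGB & 0x00FF0000) >> 16).astype(np.uint8)
    green = ((np_RGB & 0x0000FF00) >> 8).astype(np.uint8)
    blue  =  (np_RGB & 0x000000FF).astype(np.uint8)

    Image.fromarray(np.dstack((red, green, blue))).save('snapshot_decoded_numpy.png')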