Tags: c++, opencv, kinect, kinect-sdk

Displaying Kinect streams using OpenCV (C++)


I'm trying to get every frame of the stream produced by the RGB camera of the Kinect (using SDK version 1.8) into an OpenCV (2.4.10) Mat_<Vec3b>. This is my current algorithm, which is not at all fast:

Mat_<Vec3b> mat = Mat::zeros(480, 640, CV_8UC3);

NUI_IMAGE_FRAME imageFrame;
NUI_LOCKED_RECT lockedRect;

if (sensor->NuiImageStreamGetNextFrame(colorStream, 0, &imageFrame) < 0) { return; }

INuiFrameTexture* texture = imageFrame.pFrameTexture;
texture->LockRect(0, &lockedRect, NULL, 0);


if (lockedRect.Pitch != 0)
{
    BYTE* upperLeftCorner = (BYTE*)lockedRect.pBits;
    BYTE* pointerToTheByteBeingRead = upperLeftCorner;

    for (int i = 0; i < 480; i++)
    {
        for (int j = 0; j < 640; j++)
        {
            // Kinect color frames are BGRA in memory, so the bytes
            // arrive in blue, green, red order.
            unsigned char b = *pointerToTheByteBeingRead;
            pointerToTheByteBeingRead += 1;
            unsigned char g = *pointerToTheByteBeingRead;
            pointerToTheByteBeingRead += 1;
            unsigned char r = *pointerToTheByteBeingRead;
            pointerToTheByteBeingRead += 2; // +2 to skip the alpha channel

            mat.at<Vec3b>(Point(j, i))[0] = b;
            mat.at<Vec3b>(Point(j, i))[1] = g;
            mat.at<Vec3b>(Point(j, i))[2] = r;
        }
    }

}

texture->UnlockRect(0);
sensor->NuiImageStreamReleaseFrame(colorStream, &imageFrame);

I checked the OpenCV documentation and I understand I'm supposed to use pointer access to increase efficiency. Are Mat_<Vec3b>s stored in memory the same way as Mats, or do I need some other pointer arithmetic?

Also, I understand updating every single pixel every time is not the most efficient way of achieving the display of the stream through a Mat. What other things could I do?


Solution

  • Finally figured out how to use pointer arithmetic: fetch a row pointer once per row with mat.ptr<Vec3b>(i) instead of calling at<Vec3b> for every pixel:

    Mat_<Vec3b> mat = Mat::zeros(480, 640, CV_8UC3);
    
    NUI_IMAGE_FRAME imageFrame;
    NUI_LOCKED_RECT lockedRect;
    
    if (sensor->NuiImageStreamGetNextFrame(colorStream, 0, &imageFrame) < 0) { return; }
    
    INuiFrameTexture* texture = imageFrame.pFrameTexture;
    texture->LockRect(0, &lockedRect, NULL, 0);
    
    
    if (lockedRect.Pitch != 0)
    {
        BYTE* upperLeftCorner = (BYTE*)lockedRect.pBits;
        BYTE* pointerToTheByteBeingRead = upperLeftCorner;
    
        for (int i = 0; i < 480; i++)
        {
            Vec3b *pointerToRow = mat.ptr<Vec3b>(i);
    
            for (int j = 0; j < 640; j++)
            {
                // Kinect color frames are BGRA in memory: blue first.
                unsigned char b = *pointerToTheByteBeingRead;
                pointerToTheByteBeingRead += 1;
                unsigned char g = *pointerToTheByteBeingRead;
                pointerToTheByteBeingRead += 1;
                unsigned char r = *pointerToTheByteBeingRead;
                pointerToTheByteBeingRead += 2; // +2 to skip the alpha channel

                pointerToRow[j] = Vec3b(b, g, r);
            }
        }
    
    }
    
    texture->UnlockRect(0);
    sensor->NuiImageStreamReleaseFrame(colorStream, &imageFrame);