
Fastest possible OpenCV to OpenGL context


I've been searching the net for a few days looking for the fastest possible way to take an OpenCV webcam capture and display it in an OpenGL context. So far this seems to work OK, until I need to zoom.

void Camera::DrawIplImage1(IplImage *image, int x, int y, GLfloat xZoom, GLfloat yZoom)
{
    GLenum format;
    switch (image->nChannels) {
        case 1:
            format = GL_LUMINANCE;
            break;
        case 2:
            format = GL_LUMINANCE_ALPHA;
            break;
        case 3:
            format = GL_BGR;
            break;
        default:
            return;
    }

    yZoom = -yZoom;  // flip vertically: IplImage rows are top-down, glDrawPixels draws bottom-up
    glRasterPos2i(x, y);
    glPixelZoom(xZoom, yZoom);  // Slow when not (1.0f, 1.0f)
    glDrawPixels(image->width, image->height, format, GL_UNSIGNED_BYTE, image->imageData);
}

I've heard that the FBO approach might be even faster. Any ideas on the fastest possible way to get an OpenCV webcam capture into an OpenGL context? I will test everything suggested and post results.


Solution

  • Are you sure your OpenGL implementation needs power-of-two textures? Even very poor PC implementations (yes, Intel) can manage arbitrary sizes now.
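If you do hit an old implementation that requires power-of-two textures, rounding the capture size up is cheap; a minimal sketch (the function name is mine):

```cpp
#include <cstdint>

// Round a texture dimension up to the next power of two.
// (Only needed on old GL implementations without
// ARB_texture_non_power_of_two support.)
uint32_t nextPowerOfTwo(uint32_t v)
{
    if (v == 0) return 1;
    --v;
    v |= v >> 1;
    v |= v >> 2;
    v |= v >> 4;
    v |= v >> 8;
    v |= v >> 16;
    return v + 1;
}
```

A 640x480 capture would then live in a 1024x512 texture, with texture coordinates scaled accordingly when drawing the quad.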

    Then the quickest is probably to use an OpenGL pixel buffer object. Sorry, the code is from Qt, so the function names are slightly different, but the sequence is the same.

    First, allocate the OpenGL texture:

    glEnable(GL_TEXTURE_2D);
    glGenTextures(1, &texture);

    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    // NULL data pointer: allocate storage only, pixels are uploaded later from the buffer
    glTexImage2D(GL_TEXTURE_2D, 0, glFormat, width, height, 0, glFormatExt, glType, NULL);

    glDisable(GL_TEXTURE_2D);
    
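As for concrete values of glFormat, glFormatExt, and glType: for an 8-bit BGR webcam frame a common choice is an RGBA internal format with a BGR external format, letting the driver swizzle. A sketch of that selection logic follows; the helper and struct names are mine, and the enum values are copied from the GL headers so the snippet compiles standalone (in real code just include the GL headers):

```cpp
#include <cstdint>

// GL enum values, copied from <GL/gl.h> / <GL/glext.h> so this
// sketch compiles without the GL headers.
const uint32_t GL_RGBA8_         = 0x8058;  // internal format (glFormat)
const uint32_t GL_LUMINANCE_     = 0x1909;
const uint32_t GL_BGR_           = 0x80E0;  // external format (glFormatExt)
const uint32_t GL_BGRA_          = 0x80E1;
const uint32_t GL_UNSIGNED_BYTE_ = 0x1401;  // glType

struct TexFormat {
    uint32_t internalFormat;  // glFormat in the snippet above
    uint32_t externalFormat;  // glFormatExt
    uint32_t type;            // glType
};

// Pick texture formats from the OpenCV channel count (8-bit data assumed).
// Returns false for layouts this sketch doesn't cover.
bool pickFormat(int cvChannels, TexFormat *out)
{
    out->type = GL_UNSIGNED_BYTE_;
    switch (cvChannels) {
        case 1: out->internalFormat = GL_LUMINANCE_; out->externalFormat = GL_LUMINANCE_; return true;
        case 3: out->internalFormat = GL_RGBA8_;     out->externalFormat = GL_BGR_;       return true;
        case 4: out->internalFormat = GL_RGBA8_;     out->externalFormat = GL_BGRA_;      return true;
        default: return false;
    }
}
```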

    Now map the pixel buffer to get a pointer to its memory:

    glbuffer.bind();
    unsigned char *dest = (unsigned char*)glbuffer.map(QGLBuffer::ReadWrite);
    // creates an openCV image but the pixel data is stored in an opengl buffer
    cv::Mat opencvImage(rows,cols,CV_TYPE,dest);  
    .... do stuff ....
    glbuffer.unmap(); // pointer is no longer valid - so neither is openCV image
    
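One thing to watch when sizing and filling the buffer from OpenCV: GL's default unpack row alignment is 4 bytes, so tightly packed BGR rows whose byte width isn't a multiple of 4 will be read with the wrong stride unless you call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) or pad each row. A sketch of the padded-pitch arithmetic (function name is mine):

```cpp
#include <cstddef>

// Bytes per row that OpenGL will assume when reading pixel data,
// given GL_UNPACK_ALIGNMENT (default 4): the tight row size,
// rounded up to a multiple of the alignment.
size_t glRowPitch(size_t width, size_t bytesPerPixel, size_t alignment)
{
    size_t tight = width * bytesPerPixel;
    return (tight + alignment - 1) / alignment * alignment;
}
```

For example, a 639-pixel-wide BGR row is 1917 bytes tight, but GL assumes 1920 under the default alignment; a 640-wide row (1920 bytes) happens to need no padding.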

    Then draw it. This should be essentially instant, because the data was already transferred to the GPU via the mapped buffer above:

    glBindTexture(GL_TEXTURE_2D, texture);
    // last argument 0 = byte offset into the bound pixel buffer, not a client memory pointer
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, glFormatExt, glType, 0);
    glbuffer.release();
    

    By using different values for glFormat and glFormatExt, you can have the graphics card automatically convert between OpenCV's BGR layout and a typical RGBA display format for you in hardware.