Tags: android, c, camera, yuv, google-project-tango

Color Image in the Google Tango Leibniz API


I am trying to capture the image data in the onFrameAvailable method from a Google Tango. I am using the Leibniz release. The header file says the buffer contains HAL_PIXEL_FORMAT_YV12 pixel data, the release notes say it contains YUV420SP, but the documentation says the pixels are in RGBA8888 format. I am a little confused. Additionally, I don't really get image data, just a lot of magenta and green. Right now I am trying to convert from YUV to RGB, similar to this one. I guess there is something wrong with the stride, too. Here is the code of the onFrameAvailable method:

int size = (int)(buffer->width * buffer->height);
for (int i = 0; i < buffer->height; ++i)
{
    for (int j = 0; j < buffer->width; ++j)
    {
        float y = buffer->data[i * buffer->stride + j];
        float v = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size];
        float u = buffer->data[(i / 2) * (buffer->stride / 2) + (j / 2) + size + (size / 4)];

        const float Umax = 0.436f;
        const float Vmax = 0.615f;

        y = y / 255.0f;
        u = (u / 255.0f - 0.5f);
        v = (v / 255.0f - 0.5f);

        TangoData::GetInstance().color_buffer[3 * (i * width + j)]     = y;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 1] = u;
        TangoData::GetInstance().color_buffer[3 * (i * width + j) + 2] = v;
    }
}

I am doing the yuv to rgb conversion in the fragment shader.

Has anyone ever obtained an RGB image with the Google Tango Leibniz release? Or has anyone had similar problems when converting from YUV to RGB?


Solution

  • YUV420SP (aka NV21) is correct for the time being. An explanation is here. In this format you have a width x height array where each element is a Y byte, followed by a width/2 x height/2 array where each element is a V byte and a U byte. Your code is implementing YV12, which has separate arrays for V and U instead of interleaving them in one array.
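
    For reference, here is a minimal sketch (untested) of indexing the buffer as NV21, mirroring the loop from the question. It assumes the Y plane occupies stride * height bytes and that the interleaved VU plane uses the same row stride as the Y plane:

    #include <stdint.h>
    #include <tango_client_api.h>  // TangoImageBuffer

    void ReadNV21(const TangoImageBuffer* buffer)
    {
        const uint32_t y_size = buffer->stride * buffer->height;
        for (uint32_t i = 0; i < buffer->height; ++i)
        {
            for (uint32_t j = 0; j < buffer->width; ++j)
            {
                uint8_t y = buffer->data[i * buffer->stride + j];
                // V and U are interleaved as V0 U0 V1 U1 ... per 2x2 pixel block.
                uint32_t vu = y_size + (i / 2) * buffer->stride + (j & ~1u);
                uint8_t v = buffer->data[vu];
                uint8_t u = buffer->data[vu + 1];
                // ... normalize and store y, u, v as in the question ...
            }
        }
    }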

    You mention that you are doing YUV to RGB conversion in a fragment shader. If all you want to do with the camera images is draw then you can use TangoService_connectTextureId() and TangoService_updateTexture() instead of TangoService_connectOnFrameAvailable(). This approach delivers the camera image to you already in an OpenGL texture that gives your fragment shader RGB values without bothering with the pixel format details. You will need to bind to GL_TEXTURE_EXTERNAL_OES (instead of GL_TEXTURE_2D), and your fragment shader would look something like this:

    #extension GL_OES_EGL_image_external : require
    
    precision mediump float;
    
    varying vec4 v_t;
    uniform samplerExternalOES colorTexture;
    
    void main() {
       gl_FragColor = texture2D(colorTexture, v_t.xy);
    }
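
    On the C/C++ side, the texture that shader samples is an external OES texture. A rough sketch of creating one follows (the Tango calls are only named in comments, since their exact signatures live in tango_client_api.h for your release):

    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    GLuint CreateColorCameraTexture()
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
        // External textures have no mipmaps, so use non-mipmap filtering.
        glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Hand this id to TangoService_connectTextureId() for TANGO_CAMERA_COLOR,
        // then call TangoService_updateTexture() before each draw to pull in the
        // latest camera frame.
        return tex;
    }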
    

    If you really do want to pass YUV data to a fragment shader for some reason, you can do so without preprocessing it into floats. In fact, you don't need to unpack it at all - for NV21 just define a 1-byte texture for Y and a 2-byte texture for VU, and load the data as-is. Your fragment shader will use the same texture coordinates for both.
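
    As a rough illustration (untested; the function and texture names are just placeholders), the upload could look like the sketch below. It assumes the row stride equals the width, and that y_tex and vu_tex were created with non-mipmap filtering such as GL_LINEAR:

    #include <GLES2/gl2.h>
    #include <stdint.h>

    void UploadNV21(GLuint y_tex, GLuint vu_tex,
                    const uint8_t* data, int width, int height)
    {
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

        // Y plane: one byte per pixel at full resolution.
        glBindTexture(GL_TEXTURE_2D, y_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, data);

        // VU plane: interleaved V/U pairs at half resolution in both directions.
        // GL_LUMINANCE_ALPHA exposes V as .r and U as .a in the shader.
        glBindTexture(GL_TEXTURE_2D, vu_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, width / 2, height / 2,
                     0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
                     data + width * height);
    }

    In the fragment shader, sample both textures with the same coordinates, subtract 0.5 from the two chroma values, and apply a standard YUV-to-RGB matrix.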