My app handles incoming and outgoing WebRTC streams.
I want to record the incoming stream by converting each VideoFrame to a Bitmap. My app has one main service that handles the traffic, and I am using a VideoSink to process incoming and outgoing VideoFrames.
The most interesting part: when I record outgoing VideoFrames, the VideoFrame-to-Bitmap conversion works. When I record incoming VideoFrames, the conversion crashes the OpenGL engine.
My guess is that it has to do with the initialization of the OpenGL engine and the fact that the incoming VideoFrame belongs to a different thread.
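One quick way to test this guess is to log, from the thread that delivers the frame, whether any EGL context is current there. A minimal check using EGL14 (TAG is assumed to be defined in the service):

    import android.opengl.EGL14;
    import android.util.Log;

    // Minimal check: does the thread delivering the frame have a current EGL context?
    boolean hasContext = !EGL14.eglGetCurrentContext().equals(EGL14.EGL_NO_CONTEXT);
    Log.d(TAG, "EGL context current on " + Thread.currentThread().getName() + ": " + hasContext);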
This is the error I get. First:
java.lang.IllegalStateException: Framebuffer not complete, status: 0
And for subsequent frames it looks like the OpenGL program has crashed; these logs repeat for each incoming frame:
java.lang.RuntimeException: glCreateShader() failed. GLES20 error: 0
java.lang.NullPointerException: Attempt to invoke virtual method 'void org.webrtc.GlShader.useProgram()' on a null object reference
This is the code I run on every incoming frame; the crash happens on this line:
bitmapTextureFramebuffer.setSize(scaledWidth, scaledHeight);
Complete code from the main service:
private VideoFrameDrawer frameDrawer;
private GlRectDrawer drawer;
boolean mirrorHorizontally = true;
boolean mirrorVertically = true;
private final Matrix drawMatrix = new Matrix();
private GlTextureFrameBuffer bitmapTextureFramebuffer;
private boolean isInitGPU = false;

public Bitmap GetBitmapFromVideoFrame(VideoFrame frame) {
    try {
        // Lazily create the GL resources on first use.
        if (!isInitGPU) {
            bitmapTextureFramebuffer = new GlTextureFrameBuffer(GLES20.GL_RGBA);
            isInitGPU = true;
            frameDrawer = new VideoFrameDrawer();
            drawer = new GlRectDrawer();
        }
        drawMatrix.reset();
        drawMatrix.preTranslate(0.5f, 0.5f);
        drawMatrix.preScale(mirrorHorizontally ? -1f : 1f, mirrorVertically ? -1f : 1f);
        drawMatrix.preScale(1f, -1f); // We want the output to be upside down for Bitmap.
        drawMatrix.preTranslate(-0.5f, -0.5f);

        final int scaledWidth = (int) (1 * frame.getRotatedWidth());
        final int scaledHeight = (int) (1 * frame.getRotatedHeight());

        bitmapTextureFramebuffer.setSize(scaledWidth, scaledHeight);

        // Render the frame into the off-screen framebuffer.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, bitmapTextureFramebuffer.getFrameBufferId());
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, bitmapTextureFramebuffer.getTextureId(), 0);

        GLES20.glClearColor(0 /* red */, 0 /* green */, 0 /* blue */, 0 /* alpha */);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        frameDrawer.drawFrame(frame, drawer, drawMatrix, 0 /* viewportX */,
                0 /* viewportY */, scaledWidth, scaledHeight);

        // Read the rendered pixels back and copy them into a Bitmap.
        final ByteBuffer bitmapBuffer = ByteBuffer.allocateDirect(scaledWidth * scaledHeight * 4);
        GLES20.glViewport(0, 0, scaledWidth, scaledHeight);
        GLES20.glReadPixels(0, 0, scaledWidth, scaledHeight,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bitmapBuffer);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        GlUtil.checkNoGLES2Error("EglRenderer.notifyCallbacks");

        final Bitmap bitmap = Bitmap.createBitmap(scaledWidth, scaledHeight, Bitmap.Config.ARGB_8888);
        bitmap.copyPixelsFromBuffer(bitmapBuffer);
        return bitmap;
    } catch (Exception e) {
        Log.e(TAG, e.toString());
        return null;
    }
}
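For context, this method is driven from the listener call shown below in the VideoSink. A hypothetical sketch of the service-side hand-off (only the names come from this post; the body is my own):

    // Hypothetical hand-off on the service side: convert each forwarded frame
    // and pass the Bitmap to whatever does the actual recording.
    @Override
    public void OnStreamBitmapListener(VideoFrame frame) {
        Bitmap bitmap = GetBitmapFromVideoFrame(frame);
        if (bitmap != null) {
            // e.g. queue the bitmap for the recorder
        }
    }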
The code of the VideoSink:
private class ProxyVideoSink implements VideoSink {
    private VideoSink target;

    @Override
    public synchronized void onFrame(VideoFrame frame) {
        // Send the frame to the service to handle.
        if (onStreamWebRTCListener != null)
            onStreamWebRTCListener.OnStreamBitmapListener(frame);
        if (target != null)
            target.onFrame(frame);
    }
}
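For completeness, this is roughly how the proxy sink gets attached; remoteVideoTrack and remoteRenderer are assumed names, since the original wiring is not shown in this post:

    // Hypothetical wiring sketch: put the proxy between the remote track and
    // the real renderer. VideoTrack.addSink() is the standard org.webrtc call.
    ProxyVideoSink remoteSink = new ProxyVideoSink();
    remoteSink.target = remoteRenderer; // or via a setter, not shown above
    remoteVideoTrack.addSink(remoteSink);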
[Screenshot: values of "bitmapTextureFramebuffer" when the first crash happens]
Could it be related to the allowed dimensions?
The incoming VideoFrame runs on the "IncomingVideoSt - 23064" thread and has 640x480 dimensions -> crash.
The outgoing VideoFrame runs on the "main" thread and has 360x540 dimensions -> works.
I have tried running on the main thread using "runOnUiThread", but it did not help.
Any other way to quickly convert a VideoFrame to a Bitmap is also welcome (OpenGL or RenderScript).
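One GL-free alternative I know of is to convert the frame's buffer to I420 and decode it through YuvImage. A minimal sketch, assuming the org.webrtc YuvHelper API and ignoring frame rotation (the method name is my own; it is slower than the GL path but works on any thread):

    // Needs: android.graphics.{YuvImage, ImageFormat, Rect, BitmapFactory},
    //        org.webrtc.YuvHelper, java.io.ByteArrayOutputStream
    // CPU-only sketch: I420 -> NV21 -> JPEG -> Bitmap. No EGL context required.
    public Bitmap bitmapFromVideoFrameCpu(VideoFrame frame) {
        VideoFrame.I420Buffer i420 = frame.getBuffer().toI420();
        int width = i420.getWidth();
        int height = i420.getHeight();
        ByteBuffer nv21 = ByteBuffer.allocateDirect(width * height * 3 / 2);
        // Swapping the U and V planes turns the NV12 output into NV21,
        // which is the format YuvImage expects.
        YuvHelper.I420ToNV12(i420.getDataY(), i420.getStrideY(),
                i420.getDataV(), i420.getStrideV(),
                i420.getDataU(), i420.getStrideU(),
                nv21, width, height);
        i420.release();
        byte[] bytes = new byte[nv21.capacity()];
        nv21.get(bytes);
        YuvImage yuvImage = new YuvImage(bytes, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream jpeg = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 90, jpeg);
        byte[] data = jpeg.toByteArray();
        return BitmapFactory.decodeByteArray(data, 0, data.length);
    }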
I will answer it myself, in case someone has the same problem. The problem is that when receiving frames from the remote WebRTC peer, no OpenGL context is current on the receiving thread. You should initialize one with a fake (pbuffer) surface (in my case 64x64 is enough), like this:
if (!isInitGPU) {
    isInitGPU = true;
    initializeEGL();
    bitmapTextureFramebuffer = new GlTextureFrameBuffer(GLES20.GL_RGBA);
    frameDrawer = new VideoFrameDrawer();
    drawer = new GlRectDrawer();
}
void initializeEGL() {
    // Only create a context if the current thread does not already have one.
    if (((EGL10) EGLContext.getEGL()).eglGetCurrentContext().equals(EGL10.EGL_NO_CONTEXT)) {
        // No current context.
        try {
            EGLDisplay dpy = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
            int[] vers = new int[2];
            EGL14.eglInitialize(dpy, vers, 0, vers, 1);

            int[] configAttr = {
                    EGL14.EGL_COLOR_BUFFER_TYPE, EGL14.EGL_RGB_BUFFER,
                    EGL14.EGL_LEVEL, 0,
                    EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                    EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
                    EGL14.EGL_NONE
            };
            EGLConfig[] configs = new EGLConfig[1];
            int[] numConfig = new int[1];
            EGL14.eglChooseConfig(dpy, configAttr, 0, configs, 0, 1, numConfig, 0);
            if (numConfig[0] == 0) {
                // Trouble! No config found; bail out instead of using a null config.
                Log.e(TAG, "initializeEGL: no EGL config found");
                return;
            }
            EGLConfig config = configs[0];

            // A tiny off-screen pbuffer surface is enough; we only render to FBOs.
            int[] surfAttr = {
                    EGL14.EGL_WIDTH, 64,
                    EGL14.EGL_HEIGHT, 64,
                    EGL14.EGL_NONE
            };
            EGLSurface surf = EGL14.eglCreatePbufferSurface(dpy, config, surfAttr, 0);

            int[] ctxAttrib = {
                    EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
                    EGL14.EGL_NONE
            };
            android.opengl.EGLContext ctx =
                    EGL14.eglCreateContext(dpy, config, EGL14.EGL_NO_CONTEXT, ctxAttrib, 0);
            EGL14.eglMakeCurrent(dpy, surf, surf, ctx);
        } catch (Exception ex) {
            Log.e(TAG, "Init Collapse::" + ex.toString());
        }
    }
}
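The same off-screen context can also be created with WebRTC's own EglBase helper, which saves the raw EGL14 boilerplate and gives you a matching release call. A sketch using the org.webrtc API (the field and method names here are my own):

    // Sketch: let org.webrtc.EglBase create the off-screen pbuffer context.
    private EglBase eglBase;

    void initializeEGLWithEglBase() {
        eglBase = EglBase.create();           // fresh EGL context, no shared context
        eglBase.createDummyPbufferSurface();  // tiny off-screen surface
        eglBase.makeCurrent();                // bind the context to this thread
    }

    void releaseEGL() {
        if (eglBase != null) {
            eglBase.release();                // destroys the surface and context
            eglBase = null;
        }
    }

Remember to call this on the same thread that later runs GetBitmapFromVideoFrame, since EGL contexts are bound per thread.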