java · opengl · lwjgl · glfw

How to render an LWJGL game scene to a ByteBuffer?


I am currently working on a project that involves rendering LWJGL game scenes to a video stream instead of a window. I believe I can achieve this by rendering a game scene to an intermediate format, such as a ByteBuffer. I am trying to extend the LWJGL VoxelGame demo as a proof of concept.

I have found a similar SO question and a forum post, but I was not able to make either work. I am a beginner with OpenGL and LWJGL and I am struggling to find comprehensible documentation on this.

At the start of the render loop (runUpdateAndRenderLoop) the function glBindFramebuffer is called. To my understanding, it binds the FBO to the current context so that any subsequent rendering is directed to it.
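For reference, my understanding of the kind of FBO setup such a render loop relies on is roughly the following. This is a simplified sketch, not the VoxelGame demo's exact code; the helper name createOffscreenFbo and the width/height parameters are placeholders:

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL14.GL_DEPTH_COMPONENT24;
import static org.lwjgl.opengl.GL30.*;

// Simplified placeholder: create an off-screen FBO with a color texture
// and a depth renderbuffer.
private int createOffscreenFbo(int width, int height) {
    int colorTex = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    int depthRbo = glGenRenderbuffers();
    glBindRenderbuffer(GL_RENDERBUFFER, depthRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    int fbo = glGenFramebuffers();
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRbo);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        throw new IllegalStateException("FBO is not complete");
    }
    // While this FBO is bound (GL_FRAMEBUFFER or GL_DRAW_FRAMEBUFFER), draw calls
    // render into colorTex instead of the window's default framebuffer.
    return fbo;
}

In the code below, glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0) then switches drawing back to the window so the FBO contents can be blitted there.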

I have tried using glGetTexImage and glReadPixels to populate a ByteBuffer, but it didn't work. I have also tried calling them after glBlitFramebuffer, since I want the full rendered image in the ByteBuffer.

How can I render the current game scene to a ByteBuffer? Is there a better way to render game scenes to a video stream than going through an intermediate ByteBuffer?

private void runUpdateAndRenderLoop() {
    // ...
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
    ByteBuffer buffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
    // Attempt 1: read a texture back (GL_TEXTURE_BUFFER is not a valid target for glGetTexImage)
    glGetTexImage(GL_TEXTURE_BUFFER, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // Attempt 2: read the framebuffer pixels
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    glfwSwapBuffers(window);
}

Solution

  • The answer by @Blindy is 100% correct and you should accept it as an answer.

    However, if you want a direct, working code snippet, insert the following after

    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);

    but before

    glfwSwapBuffers(window);

    The code:

    ByteBuffer bb = org.lwjgl.system.MemoryUtil.memAlloc(width * height * 4);
    // Read from the default framebuffer, which the blit above just filled
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, bb);
    // Test with stb_image_write to save the frame as a JPEG:
    // org.lwjgl.stb.STBImageWrite.stbi_flip_vertically_on_write(true);
    // org.lwjgl.stb.STBImageWrite.stbi_write_jpg("frame.jpg", width, height, 4, bb, 50);
    org.lwjgl.system.MemoryUtil.memFree(bb);
    

    After this, the ByteBuffer bb holds the pixel data of the current frame. Note that glReadPixels returns rows bottom-up, which is why the stb_image_write test flips the image vertically before saving.

    However, as @Blindy also notes, this causes a severe GPU stall: you force the driver to wait until the current frame has finished rendering and then perform a synchronous GPU->CPU transfer into your ByteBuffer. Nvidia's driver will also warn about this when you enable debug message output:

    Pixel-path performance warning: Pixel transfer is synchronized with 3D rendering.

    Depending on your actual use case, other approaches might be more useful, such as encoding the video directly from GPU memory (e.g. with NVENC), or at least overlapping the readback with rendering, as sketched below.
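
    If the frames do need to reach the CPU (for example to feed a software encoder), one common way to soften the stall is asynchronous readback through pixel buffer objects (PBOs). The sketch below is an illustration of that idea and not part of the original answer; the helper names (createReadbackPbos, readBackFrameAsync) and the width/height parameters are assumptions about the surrounding demo code. It double-buffers two PBOs so that glReadPixels only starts a copy into the bound PBO, and the data mapped each frame is the frame read back one frame earlier:

    import static org.lwjgl.opengl.GL11.*;
    import static org.lwjgl.opengl.GL15.*;
    import static org.lwjgl.opengl.GL21.GL_PIXEL_PACK_BUFFER;
    import static org.lwjgl.opengl.GL30.*;

    import java.nio.ByteBuffer;

    private final int[] pbos = new int[2];
    private long frameIndex;

    // One-time setup: two PBOs, each large enough for one RGBA frame.
    private void createReadbackPbos(int width, int height) {
        for (int i = 0; i < 2; i++) {
            pbos[i] = glGenBuffers();
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[i]);
            glBufferData(GL_PIXEL_PACK_BUFFER, (long) width * height * 4, GL_STREAM_READ);
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    }

    // Call once per frame, after glBlitFramebuffer and before glfwSwapBuffers.
    private void readBackFrameAsync(int width, int height) {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

        // Start an asynchronous copy of the current frame into one PBO;
        // glReadPixels returns immediately because the target is a buffer object.
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[(int) (frameIndex % 2)]);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0L);

        // Map the PBO filled during the previous frame; its copy has usually
        // finished by now, so mapping it does not stall the pipeline.
        if (frameIndex > 0) {
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[(int) ((frameIndex + 1) % 2)]);
            ByteBuffer bb = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
            if (bb != null) {
                // hand bb (RGBA, rows bottom-up) to the encoder here
                glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
            }
        }
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        frameIndex++;
    }

    The price is one frame of latency between rendering and encoding, which is usually acceptable for streaming.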