Tags: c++, opengl, oculus, virtual-reality, point-sprites

Oculus Rift + Point Sprites + Point size attenuation


I am coding a small project with Oculus Rift support, and I use point sprites to render my particles. I calculate the size of the point sprites in pixels, based on their distance from the "camera", in the vertex shader. When drawing on the default screen (not on the Rift) the size works perfectly, but when I switch to the Rift I notice these phenomena:

The particles in the left eye's view are small and shrink very rapidly. The particles in the right eye's view are huge and do not change in size.

Screenshots:
Rift disabled: https://i.sstatic.net/03l3o.jpg
Rift enabled: https://i.sstatic.net/4tswC.jpg

Here is the vertex shader:

#version 120

attribute vec3 attr_pos;
attribute vec4 attr_col;
attribute float attr_size;

uniform mat4 st_view_matrix;
uniform mat4 st_proj_matrix;
uniform vec2 st_screen_size;

varying vec4 color;

void main()
{
    vec4 local_pos = vec4(attr_pos, 1.0);
    vec4 eye_pos = st_view_matrix * local_pos;
    vec4 proj_vector = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);
    float proj_size = st_screen_size.x * proj_vector.x / proj_vector.w;

    gl_PointSize = proj_size;
    gl_Position = st_proj_matrix * eye_pos;

    color = attr_col;
}

The st_screen_size uniform is the size of the viewport. Since I am using a single framebuffer when rendering on the Rift (one half for each eye), the value of st_screen_size should be (framebuffer_width / 2.0, framebuffer_height).

Here is my draw call:

    /*Drawing starts with a call to ovrHmd_BeginFrame.*/
    ovrHmd_BeginFrame(game::engine::ovr_data.hmd, 0);

    /*Start drawing onto our texture render target.*/
    game::engine::ovr_rtarg.bind();
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    //Update the particles.
    game::engine::nuc_manager->update(dt, get_msec());

    /*for each eye... */
    for(unsigned int i = 0 ; i < 2 ; i++){
        ovrEyeType eye = game::engine::ovr_data.hmd->EyeRenderOrder[i];
        /* -- Viewport Transformation --
         * Setup the viewport to draw in the left half of the framebuffer when we're
         * rendering the left eye's view (0, 0, width / 2.0, height), and in the right half
         * of the frambuffer for the right eye's view (width / 2.0, 0, width / 2.0, height)
         */
        int fb_width = game::engine::ovr_rtarg.get_fb_width();
        int fb_height = game::engine::ovr_rtarg.get_fb_height();

        glViewport(eye == ovrEye_Left ? 0 : fb_width / 2, 0, fb_width / 2, fb_height);

        //Send the viewport size to the shader.
        set_unistate("st_screen_size", Vector2(fb_width / 2.0, fb_height));

        /* -- Projection Transformation --
         * We'll just use the projection matrix supplied by the Oculus SDK for this eye.
         * Note that libovr matrices are the transpose of what OpenGL expects, so we have to
         * send the transposed ovr projection matrix to the shader.*/
        proj = ovrMatrix4f_Projection(game::engine::ovr_data.hmd->DefaultEyeFov[eye], 0.01, 40000.0, true);

      Matrix4x4 proj_mat;
      memcpy(proj_mat[0], proj.M, 16 * sizeof(float));

      //Send the Projection matrix to the shader.
      set_projection_matrix(proj_mat);

        /* -- View/camera transformation --
         * We need to construct a view matrix by combining all the information provided by
         * the Oculus SDK about the position and orientation of the user's head in the world.
         */
         pose[eye] = ovrHmd_GetHmdPosePerEye(game::engine::ovr_data.hmd, eye);

         camera->reset_identity();

         camera->translate(Vector3(game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.x,
          game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.y,
          game::engine::ovr_data.eye_rdesc[eye].HmdToEyeViewOffset.z));

         /*Construct a quaternion from the data of the oculus SDK and rotate the view matrix*/
         Quaternion q = Quaternion(pose[eye].Orientation.w, pose[eye].Orientation.x,
                                   pose[eye].Orientation.y, pose[eye].Orientation.z);
         camera->rotate(q.inverse().normalized());


         /*Translate the view matrix with the positional tracking*/
         camera->translate(Vector3(-pose[eye].Position.x, -pose[eye].Position.y, -pose[eye].Position.z));

       camera->rotate(Vector3(0, 1, 0), DEG_TO_RAD(theta));

       //Send the View matrix to the shader.
       set_view_matrix(*camera);



         game::engine::active_stage->render(STAGE_RENDER_SKY | STAGE_RENDER_SCENES | STAGE_RENDER_GUNS |
          STAGE_RENDER_ENEMIES | STAGE_RENDER_PROJECTILES, get_msec());
         game::engine::nuc_manager->render(RENDER_PSYS, get_msec());
       game::engine::active_stage->render(STAGE_RENDER_COCKPIT, get_msec());
    }

    /* After drawing both eyes into the texture render target, revert to drawing directly to the display,
     * and call ovrHmd_EndFrame to let the Oculus SDK draw both images onto the HMD screen, properly
     * compensated for lens distortion and chromatic aberration.
     */
    game::engine::ovr_rtarg.unbind();

    ovrHmd_EndFrame(game::engine::ovr_data.hmd, pose, &game::engine::ovr_data.fb_ovr_tex[0].Texture);

This problem has troubled me for many days now, and I feel like I have reached a dead end. I could just use billboarded quads, but I don't want to give up that easily :) Plus, point sprites are faster. Does the math behind point size attenuation based on distance change when rendering on the Rift? Am I not taking something into account? Math is not (yet, at least) my strongest point. :) Any insight will be greatly appreciated!

PS: If any additional information is required about the code I posted, I will gladly provide it.


Solution

    vec4 local_pos = vec4(attr_pos, 1.0);
    vec4 eye_pos = st_view_matrix * local_pos;
    vec4 proj_vector = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);
    float proj_size = st_screen_size.x * proj_vector.x / proj_vector.w;

    gl_PointSize = proj_size;

Basically, you are first transforming your point to view space to figure out its Z coordinate in view space (distance from the viewer), then constructing a vector aligned with the X axis with the desired particle size, and projecting it to see how many pixels it covers after projection and the viewport transform (sort of).

This is perfectly reasonable, assuming your projection matrix is symmetrical. That assumption is wrong when dealing with the Rift. I've drawn a diagram to illustrate the problem:

https://i.sstatic.net/aLKkx.jpg

As you can see, when the frustum is asymmetrical, which is certainly the case with the Rift, using the distance of the projected point from the center of the screen gives you wildly different values for each eye, and certainly different from the "correct" projected size you're looking for.
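To make that concrete, here is a rough sketch of what an off-center (glFrustum-style) projection does to the x coordinate; the exact matrix the Oculus SDK builds may differ in its conventions, but the offset term behaves the same way:

    x_clip = (2n / (r - l)) * x_eye + ((r + l) / (r - l)) * z_eye
    x_ndc  = x_clip / (-z_eye)
           = -(2n / (r - l)) * (x_eye / z_eye) - (r + l) / (r - l)

For a symmetric frustum the constant term (r + l) / (r - l) is zero, but for the Rift it is non-zero and has opposite signs for the two eyes, so a size estimate based on a single projected x coordinate picks up a different constant bias per eye. That bias is why one eye shows huge, nearly constant sprites while the other shows small ones that shrink quickly.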

What you must do instead is project two points, say (0, 0, z, 1) and (attr_size, 0, z, 1), using the same method, and compute their difference in screen space (after projection, perspective divide, and the viewport transform).
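For example, here is a minimal sketch of that two-point approach as a drop-in replacement for the vertex shader above. It reuses the attribute and uniform names from the question; the 0.5 factor comes from the viewport mapping the NDC range [-1, 1] to st_screen_size.x pixels, so you may need to adjust the overall scale to match what you had before:

#version 120

attribute vec3 attr_pos;
attribute vec4 attr_col;
attribute float attr_size;

uniform mat4 st_view_matrix;
uniform mat4 st_proj_matrix;
uniform vec2 st_screen_size;

varying vec4 color;

void main()
{
    vec4 eye_pos = st_view_matrix * vec4(attr_pos, 1.0);

    /* Project two points that are attr_size apart along X in eye space. */
    vec4 p0 = st_proj_matrix * vec4(0.0, 0.0, eye_pos.z, eye_pos.w);
    vec4 p1 = st_proj_matrix * vec4(attr_size, 0.0, eye_pos.z, eye_pos.w);

    /* Perspective divide to NDC, then map the NDC difference to pixels.
     * The asymmetric per-eye offset is identical for both points, so it
     * cancels in the subtraction. */
    float ndc_dx = p1.x / p1.w - p0.x / p0.w;
    gl_PointSize = abs(ndc_dx) * 0.5 * st_screen_size.x;

    gl_Position = st_proj_matrix * eye_pos;
    color = attr_col;
}

Equivalently, since only the X scale of the projection matrix ends up mattering for the width, st_proj_matrix[0][0] * attr_size / -eye_pos.z should give you the same NDC width directly.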