Tags: opengl, math, glm-math

Combining Quaternions Accurately from Keyboard/Mouse and Other Sources


I would like to combine mouse and keyboard inputs with the Oculus Rift to create a smooth experience for the user. The goals are:

  • Positional movement 100% controlled by the keyboard relative to the direction the person is facing.
  • Orientation controlled 100% by HMD devices like the Oculus Rift.
  • Mouse orbit capability that adds to the orientation from the Oculus Rift. For example, if I am already looking left, I can still move the mouse to turn even further leftward (see the sketch just below).
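
In quaternion terms, that third goal is just composing an extra mouse-driven yaw with whatever orientation the HMD reports. A sketch of the idea (the names here are placeholders, not real code from my project):

// Illustrative only: an extra mouse yaw about world up, applied on top
// of the orientation reported by the HMD
glm::quat MouseYawQuat = glm::angleAxis(MouseYaw, glm::vec3(0.0f, 1.0f, 0.0f));
glm::quat Combined = glm::normalize(MouseYawQuat * HmdOrientationQuat);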

Now, I have code that works 100% when someone doesn't have an Oculus Rift; I just don't know how to combine the orientation and other elements of the Oculus Rift with my already-working code to get the full experience.

Anyway, here is my working code for controlling the keyboard and mouse without the Oculus Rift:

Note that all of this code assumes the camera uses a perspective projection:

/*

Variables

*/

glm::vec3 DirectionOfWhereCameraIsFacing;
glm::vec3 CenterOfWhatIsBeingLookedAt;
glm::vec3 PositionOfEyesOfPerson;
glm::vec3 CameraAxis;
glm::vec3 DirectionOfUpForPerson;
glm::quat CameraQuatPitch;
float     Pitch;
float     Yaw;
float     Roll;
float     MouseDampingRate;
float     PhysicalMovementDampingRate;
glm::quat CameraQuatYaw;
glm::quat CameraQuatRoll;
glm::quat CameraQuatBothPitchAndYaw;
glm::vec3 CameraPositionDelta;

/*

Inside display update function.

*/

// Recompute the forward direction and the camera's right axis
DirectionOfWhereCameraIsFacing = glm::normalize(CenterOfWhatIsBeingLookedAt - PositionOfEyesOfPerson);
CameraAxis = glm::cross(DirectionOfWhereCameraIsFacing, DirectionOfUpForPerson);

// Build this frame's incremental rotations
CameraQuatPitch = glm::angleAxis(Pitch, CameraAxis);
CameraQuatYaw = glm::angleAxis(Yaw, DirectionOfUpForPerson);
CameraQuatRoll = glm::angleAxis(Roll, CameraAxis);

// For quaternions, glm::cross is quaternion multiplication,
// i.e. the composition of the two rotations
CameraQuatBothPitchAndYaw = glm::cross(CameraQuatPitch, CameraQuatYaw);
CameraQuatBothPitchAndYaw = glm::normalize(CameraQuatBothPitchAndYaw);

// Rotate the facing direction and advance the eye position
DirectionOfWhereCameraIsFacing = glm::rotate(CameraQuatBothPitchAndYaw, DirectionOfWhereCameraIsFacing);
PositionOfEyesOfPerson += CameraPositionDelta;
CenterOfWhatIsBeingLookedAt = PositionOfEyesOfPerson + DirectionOfWhereCameraIsFacing * 1.0f; // 1.0f = arbitrary look-at distance

// Damp the inputs so motion eases out over successive frames
Yaw *= MouseDampingRate;
Pitch *= MouseDampingRate;
CameraPositionDelta = CameraPositionDelta * PhysicalMovementDampingRate;

View = glm::lookAt(PositionOfEyesOfPerson, CenterOfWhatIsBeingLookedAt, DirectionOfUpForPerson);
ProjectionViewMatrix = Projection * View;

The Oculus Rift provides orientation data via its SDK, which can be accessed like so:

/*

Variables

*/

ovrMatrix4f OculusRiftProjection;
glm::mat4   Projection;
OVR::Quatf  OculusRiftOrientation;
glm::quat   CurrentOrientation;

/*

Partial code for retrieving projection and orientation data from the Oculus SDK

*/

OculusRiftProjection = ovrMatrix4f_Projection(MainEyeRenderDesc[l_Eye].Desc.Fov, 10.0f, 6000.0f, true);

// The Oculus SDK stores matrices row-major while GLM stores them
// column-major, so copy element by element and then transpose
for (int o = 0; o < 4; o++) {
    for (int i = 0; i < 4; i++) {
        Projection[o][i] = OculusRiftProjection.M[o][i];
    }
}

Projection = glm::transpose(Projection);

// Conjugate (invert) the predicted head pose so it can be applied to the view
OculusRiftOrientation = PredictedPose.Orientation.Conj();

// Copy the OVR quaternion into a GLM quaternion component by component
CurrentOrientation.w = OculusRiftOrientation.w;
CurrentOrientation.x = OculusRiftOrientation.x;
CurrentOrientation.y = OculusRiftOrientation.y;
CurrentOrientation.z = OculusRiftOrientation.z;

CurrentOrientation = glm::normalize(CurrentOrientation);

After that last line, the GLM-based quaternion CurrentOrientation holds the correct orientation data; plugged straight into an existing MVP matrix structure and sent to OpenGL, it lets you move your head around in the environment without issue.
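
For concreteness, here is a minimal sketch of that "plugging in" step, reusing the Projection and View names from the snippets above; glm::mat4_cast is GLM's standard quaternion-to-matrix conversion:

/*

Illustrative only: folding the headset orientation into the MVP chain

*/

glm::mat4 HeadRotation = glm::mat4_cast(CurrentOrientation);

// The orientation was already conjugated (inverted) above, so it can be
// pre-multiplied directly onto the view matrix
View = HeadRotation * View;
ProjectionViewMatrix = Projection * View;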

Now, my problem is how to combine the two parts together successfully.

When I have tried this in the past, the rotation gets stuck in place: when you turn your head left, you keep rotating left instead of rotating only by the amount you actually turned. On top of that, I can no longer accurately determine the direction the person is facing, so my position controls stop working.

So at that point, since I can no longer determine what "forward" is, my position controls essentially become useless...

How can I successfully achieve my goals?


Solution

  • I've done some work on this by maintaining a 'camera' matrix which represents the position and orientation of the player, and then, during rendering, composing that with the most recent orientation data collected from the headset.

    I have a single interaction class which is designed to pull input from a variety of sources, including keyboard and joystick (as well as a SpaceMouse or a Razer Hydra).

    You'll probably find it easier to maintain the state as a single combined matrix like I do, rather than trying to compose a lookAt matrix every frame (see the sketch just below).
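
    As a minimal sketch of that idea (the player matrix and helper names here are placeholders, not lifted from my codebase), post-multiplying each transform onto one matrix keeps everything in the player's local frame, so keyboard movement is automatically relative to the direction the player is facing:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    glm::mat4 player(1.0f); // identity: at the origin, facing down -Z

    // Mouse yaw: rotate about the player's local up axis (radians in recent GLM)
    void applyMouseYaw(float yaw) {
      player = player * glm::rotate(glm::mat4(1.0f), yaw, glm::vec3(0, 1, 0));
    }

    // Keyboard movement: translate along the player's local axes,
    // e.g. localDelta = glm::vec3(0, 0, -speed) to move "forward"
    void applyKeyboardMove(const glm::vec3 & localDelta) {
      player = player * glm::translate(glm::mat4(1.0f), localDelta);
    }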

    If you look at my Rift.cpp base class for developing my examples, you'll see that I capture keyboard input and accumulate it in the CameraControl instance, so that during the later applyInteraction call we can apply the movement indicated by the keyboard along with the other inputs:

    void RiftApp::onKey(int key, int scancode, int action, int mods) {
      ...
      // Allow the camera controller to intercept the input
      if (CameraControl::instance().onKey(player, key, scancode, action, mods)) {
        return;
      }
      ... 
    }
    

    In my per-frame update code I query any other enabled devices and apply all of their inputs to the matrix. Then I set the modelview matrix to the inverse of the player matrix:

    void RiftApp::update() {
      ...
      CameraControl::instance().applyInteraction(player);
      gl::Stacks::modelview().top() = glm::inverse(player);
      ...
    }
    

    Finally, in my rendering code I have the following, which applies the headset orientation:

    void RiftApp::draw() {
      gl::MatrixStack & mv = gl::Stacks::modelview();
      gl::MatrixStack & pr = gl::Stacks::projection();
      for_each_eye([&](ovrEyeType eye) {
        gl::Stacks::with_push(pr, mv, [&]{
          ovrPosef renderPose = ovrHmd_BeginEyeRender(hmd, eye);
          // Set up the per-eye modelview matrix
          {
            // Apply the head pose
            glm::mat4 m = Rift::fromOvr(renderPose);
            mv.preMultiply(glm::inverse(m));
            // Apply the per-eye offset (erd is the per-eye render
            // description provided by the SDK)
            glm::vec3 eyeOffset = Rift::fromOvr(erd.ViewAdjust);
            mv.preMultiply(glm::translate(glm::mat4(), eyeOffset));
          }
    
          // Render the scene to an offscreen buffer
          frameBuffers[eye].activate();
          renderScene();
          frameBuffers[eye].deactivate();
    
          ovrHmd_EndEyeRender(hmd, eye, renderPose, &eyeTextures[eye].Texture);
        });
        GL_CHECK_ERROR;
      });
      ...
    }
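
    One consequence of this structure is worth spelling out: the modelview ends up as inverse(headPose) * inverse(player), which equals inverse(player * headPose), so the camera's effective pose in the world is player * headPose. Mouse orbit applied to the player matrix therefore composes cleanly with the headset orientation, and the combined "forward" needed for positional movement can be read straight out of the product. A sketch, reusing the hypothetical player matrix from above:

    // The camera's world pose is the player transform with the head pose
    // applied in the player's local frame
    glm::mat4 combined = player * Rift::fromOvr(renderPose);

    // The facing direction is the rotated -Z axis (third column of the
    // rotation part); use it to drive movement relative to where the
    // person is actually looking
    glm::vec3 forward = -glm::vec3(combined[2]);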