I'm programming a 3D game where the user controls a first-person camera, and movement is constrained to the inside surface of a sphere. I've managed to constrain the movement, but I'm having trouble figuring out how to manage the camera orientation using quaternions. Ideally the camera's up vector should point along the normal of the sphere, towards its center, and the user should be able to free-look around, as if he were always at the bottom of the sphere no matter where he moves.
Presumably you have two vectors describing the camera's orientation. One will be your $V'_{up}$, describing which way is up relative to the camera, and the other will be your $V'_{norm}$, the direction the camera is aimed. You will also have a position $p'$, where your camera is located at some time. You define a canonical orientation and position given by, say:

$$V_{up} = \langle 0, 1, 0 \rangle, \quad V_{norm} = \langle 0, 0, 1 \rangle, \quad p = \langle 0, -1, 0 \rangle$$
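For concreteness, here is how that canonical frame might look in code. This is just a sketch assuming GLM (my choice, not something from your question); any vector/quaternion library, or your own types, works the same way:

```cpp
#include <glm/glm.hpp>

// Canonical (unrotated) camera frame, with the camera sitting at the
// bottom of a unit sphere centered at the origin.
const glm::vec3 kCanonicalUp  (0.0f,  1.0f, 0.0f);  // V_up: points toward the sphere's center
const glm::vec3 kCanonicalNorm(0.0f,  0.0f, 1.0f);  // V_norm: the view direction
const glm::vec3 kCanonicalPos (0.0f, -1.0f, 0.0f);  // p: on the inside surface of the sphere
```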
Given a quaternion rotation $q$, you then apply your rotation to those vectors to get:

$$V'_{up} = q V_{up} q^{-1}, \quad V'_{norm} = q V_{norm} q^{-1}, \quad p' = q p q^{-1}$$
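In code, the $q v q^{-1}$ sandwich product is usually exposed as a quaternion-times-vector operator. A minimal sketch, again assuming GLM (the struct and helper name `deriveCamera` are mine):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

struct CameraFrame {
    glm::vec3 up;    // V'_up
    glm::vec3 norm;  // V'_norm
    glm::vec3 pos;   // p'
};

// Rotate the canonical frame by the accumulated orientation q.
// glm::quat * glm::vec3 performs the q v q^-1 sandwich product.
CameraFrame deriveCamera(const glm::quat& q) {
    return CameraFrame{
        q * glm::vec3(0.0f,  1.0f, 0.0f),   // V'_up   = q V_up   q^-1
        q * glm::vec3(0.0f,  0.0f, 1.0f),   // V'_norm = q V_norm q^-1
        q * glm::vec3(0.0f, -1.0f, 0.0f),   // p'      = q p      q^-1
    };
}
```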
In your particular situation, you define $q$ to incrementally accumulate the various rotations that result in the final rotation you apply to the camera. The effect will be what you're describing: you move the camera inside a statically oriented and positioned sphere, rather than moving the sphere around a statically oriented and positioned camera.
Each increment is computed as a rotation by some angle $\theta$ about the vector $V = V'_{up} \times V'_{norm}$.
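Putting it together, a per-frame update might look like the sketch below. It again assumes GLM, and the names (`accumulateLook`, `gOrientation`) are mine. The key point is that each small rotation about $V = V'_{up} \times V'_{norm}$ is pre-multiplied onto $q$, since that axis is expressed in world space:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Accumulated camera orientation, starting from the identity (the canonical frame).
glm::quat gOrientation(1.0f, 0.0f, 0.0f, 0.0f);

// Fold one incremental look rotation of angle theta (radians) into the orientation.
void accumulateLook(float theta) {
    // Current frame derived from the accumulated rotation.
    glm::vec3 up   = gOrientation * glm::vec3(0.0f, 1.0f, 0.0f);  // V'_up
    glm::vec3 norm = gOrientation * glm::vec3(0.0f, 0.0f, 1.0f);  // V'_norm

    // Axis of the increment: V = V'_up x V'_norm.
    glm::vec3 axis = glm::normalize(glm::cross(up, norm));

    // Build the incremental rotation and pre-multiply it, because the axis
    // is a world-space vector: q_new = q_inc * q_old.
    glm::quat increment = glm::angleAxis(theta, axis);
    gOrientation = glm::normalize(increment * gOrientation);
}
```

Renormalizing after each accumulation keeps floating-point drift from turning $q$ into something that is no longer a pure rotation.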