I have to convert poses (coordinates + a quaternion for rotation) between two different APIs I'm using. More specifically, I get coordinates of objects relative to the camera's local position.
My detection library (for detecting those objects) has the camera's coordinate system oriented with Z in the direction the camera is looking, X to the right of the camera, and Y down from the camera (seen from the perspective of the camera itself). I will use ASCII art here to show what I mean:
Symbols:
+------+
| | = camera from the back
+------+
+--+
| +-+
| | = camera from the right side (imagine the front part as the lens)
| +-+
+--+
Detection Coordinate System from the back of the camera
+--------> x
|
| +------+
| | |
V y +------+
Detection Coordinate System from the right side of the camera
+--------> z
| +--+
| | +-+
| | |
V y | +-+
+--+
The library where I use the object poses, however, has X in the same direction, but Y and Z are both inverted. So Z points opposite to the camera's looking direction and Y points straight up. More ASCII sketches:
Usage Coordinate System from the back of the camera
^ y +------+
| | |
| +------+
|
+--------> x
Usage Coordinate System from the right side of the camera
+--+
| +-+ ^ y
| | |
| +-+ |
+--+ |
z <--------+
So now I get object poses (including rotation) in the detection coordinate system but want to use them in the usage coordinate system. I know I can transform the coordinates by just inverting the values for y and z, but how do I convert the quaternions for the rotation? I tried a few combinations but none seem to work.
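For reference, this is roughly what I already do for the positions (Python, with placeholder names instead of my real API objects); what I'm missing is the equivalent step for the rotation quaternion:

    # detection -> usage: X stays the same, Y and Z flip sign
    def convert_position(p):
        x, y, z = p            # object position from the detection library
        return (x, -y, -z)     # same position for the usage library

    # How do I do the corresponding conversion for the (w, x, y, z) quaternion?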
In this case your change of basis is just a signed permutation of the axes: no axes are swapped, Y and Z are simply negated, which is the same as rotating 180° about X. Because that transform keeps the handedness of the coordinate system, you can convert a rotation by applying the same sign flips to the imaginary (vector) part of the quaternion.
i.e. if your quaternion is (w, x, y, z), your converted quaternion is (w, x, -y, -z), the exact same pattern as the (x, -y, -z) you already apply to the coordinates.
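A minimal sketch of the whole conversion, assuming Python, a (w, x, y, z) quaternion ordering, and plain tuples for poses (adapt the names to whatever your two libraries actually use); the quat_mul part is only a sanity check that the sign flip matches conjugating by the 180° X-rotation quaternion:

    # Convert a pose from the detection frame (X right, Y down, Z forward)
    # to the usage frame (X right, Y up, Z backward).
    # Assumes quaternions are given as (w, x, y, z) tuples.
    def detection_to_usage(position, quaternion):
        px, py, pz = position
        w, x, y, z = quaternion
        return (px, -py, -pz), (w, x, -y, -z)

    # Hamilton product of two (w, x, y, z) quaternions, used only for the check.
    def quat_mul(a, b):
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    # The basis change is a 180 degree rotation about X, i.e. r = (0, 1, 0, 0),
    # so r * q * r^-1 must give the same result as the simple sign flip.
    q = (0.5, 0.5, 0.5, 0.5)                        # some example rotation
    r, r_inv = (0.0, 1.0, 0.0, 0.0), (0.0, -1.0, 0.0, 0.0)
    conjugated = quat_mul(quat_mul(r, q), r_inv)
    _, flipped = detection_to_usage((1.0, 2.0, 3.0), q)
    print(all(abs(a - b) < 1e-9 for a, b in zip(conjugated, flipped)))   # True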