
How to apply a filter on CMRotationMatrix using a CADisplayLink


How can I apply a filter (maybe a Kalman filter?) to a CMRotationMatrix? I need to reduce the noise in the CMRotationMatrix (via transformFromCMRotationMatrix) so that the resulting matrix values change smoothly.

The matrix values are then converted to X/Y coordinates; in my case I'm simulating 3D on a 2D screen like this:

// Casting matrix to x, y

vec4f_t v;
multiplyMatrixAndVector(v, projectionCameraTransform, boxMatrix);

float x = (v[0] / v[3] + 1.0f) * 0.5f;
float y = (v[1] / v[3] + 1.0f) * 0.5f;

CGPointMake(x * self.bounds.size.width, self.bounds.size.height - (y * self.bounds.size.height));

code:

// define variable

mat4f_t cameraTransform;

// start the display link loop

- (void)startDisplayLink
{
    displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(onDisplayLink:)];
    [displayLink setFrameInterval:1];
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}

// stop the display link loop

- (void)stopDisplayLink
{
    [displayLink invalidate];
    displayLink = nil;      
}

// event of display link

- (void)onDisplayLink:(id)sender
{
    CMDeviceMotion *d = motionManager.deviceMotion;

    if (d != nil) {
        CMRotationMatrix r = d.attitude.rotationMatrix;

        transformFromCMRotationMatrix(cameraTransform, &r);
        [self setNeedsDisplay];
    }
}

// function triggered before [self setNeedsDisplay];

// Note: this writes a full 4x4 matrix (16 floats), so the output parameter
// must be mat4f_t, not vec4f_t.
void transformFromCMRotationMatrix(mat4f_t mout, const CMRotationMatrix *m)
{
    mout[0] = (float)m->m11;
    mout[1] = (float)m->m21;
    mout[2] = (float)m->m31;
    mout[3] = 0.0f;

    mout[4] = (float)m->m12;
    mout[5] = (float)m->m22;
    mout[6] = (float)m->m32;
    mout[7] = 0.0f;

    mout[8] = (float)m->m13;
    mout[9] = (float)m->m23;
    mout[10] = (float)m->m33;
    mout[11] = 0.0f;

    mout[12] = 0.0f;
    mout[13] = 0.0f;
    mout[14] = 0.0f;
    mout[15] = 1.0f;
}

// Matrix-vector and matrix-matrix multiplication routines

void multiplyMatrixAndVector(vec4f_t vout, const mat4f_t m, const vec4f_t v)
{
    vout[0] = m[0]*v[0] + m[4]*v[1] + m[8]*v[2] + m[12]*v[3];
    vout[1] = m[1]*v[0] + m[5]*v[1] + m[9]*v[2] + m[13]*v[3];
    vout[2] = m[2]*v[0] + m[6]*v[1] + m[10]*v[2] + m[14]*v[3];
    vout[3] = m[3]*v[0] + m[7]*v[1] + m[11]*v[2] + m[15]*v[3];
}

Solution

  • In general I would distinguish between improving the signal-to-noise ratio and smoothing the signal.

    Signal Improvement

    If you really want to do better than Apple's Core Motion, which already has a sensor fusion algorithm implemented, be prepared for a long-term project with an uncertain outcome. In that case you would take the raw accelerometer and gyro signals and build your own sensor fusion algorithm, but you would have to deal with a lot of problems like drift, hardware dependency on different iPhone versions, hardware differences between sensors within the same generation, ... So my advice: try everything to avoid it.

    Smoothing

    This just means interpolating two or more signals and building a kind of average. I don't know of any suitable approach for rotation matrices directly (maybe there is one), but you can use quaternions instead (more resources: OpenGL Tutorial Using Quaternions to represent rotation or the Quaternion FAQ).

    The resulting quaternion of such an interpolation can be multiplied with your vector to get the projection similarly to the matrix way (you may look at Finding normal vector to iOS device for more information).
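Applying the quaternion to a vector does not require converting back to a matrix. A small sketch, using a Quat type invented here: it evaluates v' = q v q* in the expanded cross-product form, which avoids building full quaternion products.

```c
#include <math.h>

typedef struct { double w, x, y, z; } Quat;

// Rotate a 3-vector by a unit quaternion: v' = q * v * conj(q),
// expanded as v' = v + w*t + u x t with u = (x, y, z) and t = 2*(u x v).
void quatRotateVector(Quat q, const double v[3], double out[3])
{
    double ux = q.x, uy = q.y, uz = q.z;
    // t = 2 * (u x v)
    double tx = 2.0 * (uy * v[2] - uz * v[1]);
    double ty = 2.0 * (uz * v[0] - ux * v[2]);
    double tz = 2.0 * (ux * v[1] - uy * v[0]);
    // v' = v + w*t + u x t
    out[0] = v[0] + q.w * tx + (uy * tz - uz * ty);
    out[1] = v[1] + q.w * ty + (uz * tx - ux * tz);
    out[2] = v[2] + q.w * tz + (ux * ty - uy * tx);
}
```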

    Interpolation between two unit quaternions representing rotations can be accomplished with slerp. In practice you will use what is described as geometric slerp on Wikipedia. If you have two points in time t1 and t2, the corresponding quaternions q1 and q2, and the angular distance omega between them, the formula is:

    q'(q1, q2, t) = sin((1 - t) * omega) / sin(omega) * q1 + sin(t * omega) / sin(omega) * q2

    t should be 0.5 because you want the average of both rotations. Omega can be calculated from the dot product:

    cos(omega) = q1 . q2 = w1*w2 + x1*x2 + y1*y2 + z1*z2

    If this approach using two quaternions still doesn't match your needs, you can repeat it, e.g. slerp(slerp(q1, q2), slerp(q3, q4)). Some notes:

    • From a performance point of view, it's not cheap to perform three sin calls and one arccos call in your run loop f times per second (f being your update frequency). Thus you should avoid using too many points
    • In your case all signals are close to each other, especially when using high sensor frequencies. You have to take care with angles that are very small and make 1/sin(omega) explode. In this case use the approximation sin(x) ≈ x
    • As with other filters such as a low-pass filter, the more points in time you use, the more time delay you get. So at frequency f you will get about 0.5/f sec of delay when using two points and 1.5/f sec for the double slerp
    • If something appears weird, check that your resulting quaternions are unit quaternions, i.e. ||q|| = 1
    • If you run into performance issues, you might have a look at Hacking Quaternions

    The C++ project pbrt on GitHub contains a quaternion class to get some inspiration from.