
How to implement a MATLAB lowpass filter in C


I have designed a board that samples audio input using a 16-bit ADC at 48 kHz. It stores the data as signed 16-bit integers. I have also implemented a 16-bit DAC on the board, and I am able to pass audio through the board successfully.

I would like to design a low-pass filter in MATLAB and implement it on this board. I understand how to create basic filters in MATLAB, but I can't quite grasp how to bridge the gap between designing the filter in MATLAB and implementing it in C on my board. I would like to be able to pass a signal into the board and observe the filtered signal at the output in 'real time'.

How can this be achieved?


Solution

  • ok, you said you get your coefficients from a [B,A] = butter(...)-style call (make sure you get them in the Z domain, i.e. as a digital filter); those A, B coefficients correspond to a transfer function you already know:

    H(z) = B(z)/A(z) = (b(1)+b(2) z^-1+⋯+b(n+1) z^-n)/(a(1)+a(2) z^-1+⋯+a(n+1) z^-n)
    

    right?

    you just need to remember that the output y = H(z)*x or in other words

    y = B(z)/A(z) * x and finally A(z)*y = B(z)*x
    

    and what is x(t) * z^-1? yep, x(t-1): the previous sample

    that means you'll end up with an equation similar to:

    y(t)*a(1)+y(t-1)*a(2)+⋯+y(t-n)a(n+1) = x(t)*b(1)+x(t-1)*b(2)+⋯+x(t-n)b(n+1)
    

    and what we need is the current value y(t), computed from the known current input x(t), the past inputs x(t-1), etc., and the known, stored past outputs y(t-1), etc...

    y(t) = 1/a(1) * (x(t)*b(1)+x(t-1)*b(2)+⋯+x(t-n)b(n+1) - y(t-1)*a(2)-⋯-y(t-n)a(n+1))
    

    that means you need two history arrays, one for x and one for y, and you apply that equation once per sample with the B and A arrays you got from MATLAB...

    sadly, this assumes you ALREADY took the sampling time into account in butter() (Wn must be normalized to the Nyquist frequency, fs/2), and that you take your samples at exactly that sampling rate (and ideally compute each output at the exact sample time too)
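The difference equation above can be sketched in C as a direct-form I filter. The b[] and a[] values below are example second-order coefficients (approximately what butter(2, 0.2) returns); they are placeholders, so paste in your own from MATLAB:

```c
#include <stddef.h>

/* Direct-form I implementation of the difference equation above.
   ORDER is the filter order; b[] and a[] each hold ORDER+1
   coefficients in the order MATLAB prints them from
   [B, A] = butter(ORDER, Wn). Values below are placeholders. */
#define ORDER 2

static const double b[ORDER + 1] = { 0.0675, 0.1349, 0.0675 };
static const double a[ORDER + 1] = { 1.0000, -1.1430, 0.4128 };

static double x_hist[ORDER + 1]; /* x(t), x(t-1), ..., x(t-ORDER) */
static double y_hist[ORDER + 1]; /* y(t), y(t-1), ..., y(t-ORDER) */

/* Process one input sample, return one output sample. */
double filter_step(double x_new)
{
    /* shift both histories one step into the past */
    for (size_t i = ORDER; i > 0; --i) {
        x_hist[i] = x_hist[i - 1];
        y_hist[i] = y_hist[i - 1];
    }
    x_hist[0] = x_new;

    /* y(t) = 1/a(1) * (sum_k b(k+1)*x(t-k) - sum_k a(k+1)*y(t-k)) */
    double acc = 0.0;
    for (size_t k = 0; k <= ORDER; ++k)
        acc += b[k] * x_hist[k];
    for (size_t k = 1; k <= ORDER; ++k)
        acc -= a[k] * y_hist[k];

    y_hist[0] = acc / a[0];
    return y_hist[0];
}
```

A sanity check you can run on a PC before flashing the board: a Butterworth low-pass has unity DC gain, so feeding a constant input should converge to that same constant at the output.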
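On the board side, one way to keep the timing exact is to run one filter update per ADC-ready interrupt. A minimal per-sample sketch for signed 16-bit samples is below; note that `lowpass_stub()` is a trivial one-pole placeholder (not the MATLAB-designed filter) so the snippet stands alone, and the saturation limits come from the int16 range. Also, since your board runs at fs = 48 kHz, the normalized cutoff for butter() is Wn = fc/(fs/2) = fc/24000:

```c
#include <stdint.h>

/* Stand-in filter so this sketch compiles on its own: a one-pole
   smoother with alpha = 0.1. Swap in your MATLAB-derived filter. */
static double lowpass_stub(double x)
{
    static double y = 0.0;
    y += 0.1 * (x - y);
    return y;
}

/* Filter one signed 16-bit sample (the board's native format) and
   saturate on the way back so an overflowing result clips instead
   of wrapping around. Call once per sample, e.g. from the ADC
   ready interrupt, so the 48 kHz sample timing stays exact. */
int16_t process_sample(int16_t in)
{
    double y = lowpass_stub((double)in);
    if (y >  32767.0) y =  32767.0;
    if (y < -32768.0) y = -32768.0;
    return (int16_t)y;
}
```

The double-precision math is fine for a first test; on a small MCU without an FPU you would typically convert the coefficients to fixed point, but the structure stays the same.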