I am working on the MATLAB Simulink block Mean (Variable Frequency). The block is described at http://www.mathworks.com/help/physmod/sps/powersys/ref/meanvariablefrequency.html
The first step of this algorithm is to integrate the input signal. However, when the input signal is constant, the integrator accumulates until it overflows. Does anyone know how to solve this problem in this block?
I have also attached the diagram of this block below. Later, I will convert it to a discrete-time model and implement the algorithm on my DSP. If you have any suggestions, I would be glad to hear them.
The function you are implementing is
y(t) = Integrate_{x=0->t} u(x) dx - Integrate_{y=0->t-T} u(y) dy        (1)

where T is the transport delay. Substituting z = y + T in the second integral and using the linearity of the integral, this can be reordered to

y(t) = Integrate_{x=0->t} u(x) dx - Integrate_{z=T->t} u(z - T) dz
     = Integrate_{x=0->t} [ u(x) - u(x - T) ] dx + C                    (2)

where

C = Integrate_{z=0->T} u(z - T) dz = Integrate_{w=-T->0} u(w) dw

is a finite constant that depends on the initial conditions; it is zero if your signal u is zero before t = 0 (for example, if the transport delay starts out with zero output).
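To make the difference between (1) and (2) concrete, here is a small numerical sketch (not the Simulink block itself): it feeds u(t) = DC + sin(w*t) through both forms using a plain rectangular-rule running sum. The sample period, window length, and signal parameters are arbitrary assumptions of mine; the point is only that both forms give the same output while the integrator state of form (1) grows without bound and that of form (2) stays bounded.

    /* Sketch only: compares form (1) and form (2) numerically for
     * u(t) = DC + sin(w*t). All parameters are illustrative. */
    #include <stdio.h>
    #include <math.h>

    #define N     10000      /* number of samples          */
    #define DT    1.0e-4     /* sample period [s]          */
    #define DELAY 200        /* window length T in samples */

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        const double DC = 5.0, w = 2.0 * PI * 50.0;   /* 50 Hz tone on a DC offset */
        static double u[N];

        for (int n = 0; n < N; n++)
            u[n] = DC + sin(w * n * DT);

        double sum_u = 0.0, sum_ud = 0.0, sum_diff = 0.0;
        double peak1 = 0.0, peak2 = 0.0;

        for (int n = 0; n < N; n++) {
            double ud = (n >= DELAY) ? u[n - DELAY] : 0.0;  /* delayed input, zero initial state */

            /* form (1): two growing integrals, subtracted afterwards */
            sum_u  += u[n] * DT;
            sum_ud += ud   * DT;

            /* form (2): subtract first, then integrate -> bounded state */
            sum_diff += (u[n] - ud) * DT;

            /* (sum_u - sum_ud) and sum_diff agree up to rounding */
            if (fabs(sum_u)    > peak1) peak1 = fabs(sum_u);
            if (fabs(sum_diff) > peak2) peak2 = fabs(sum_diff);
        }

        printf("largest integrator state, form (1): %g\n", peak1);  /* grows with time */
        printf("largest integrator state, form (2): %g\n", peak2);  /* stays near DC*T */
        return 0;
    }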
If we look at an input signal with a DC offset such as

u(t) = DC + sin(w*t)

then implementation (1) will first integrate and then subtract, which will saturate or lose precision as you have noted. But (2) will first subtract, which removes the DC component:

u(x) - u(x - T) = DC - DC + sin(w*x) - sin(w*x - w*T)
                = sin(w*x) - sin(w*x - w*T)
and then integrate, without risking saturation. I therefore recommend reordering the implementation so that the delayed signal is subtracted before the integrator, as in (2).
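Since you plan to move to a discrete-time model for your DSP anyway, here is a minimal discrete-time sketch of form (2) in C, under some assumptions of mine: a fixed window of DELAY samples, a sample period DT, and rectangular-rule integration. The real block additionally adapts the window to a variable frequency, which is not shown here. The delayed sample comes from a circular buffer, the difference is integrated, and the result is divided by the window length T to obtain the mean.

    /* Minimal discrete-time sketch of form (2): delay the input with a
     * circular buffer, subtract the delayed sample, integrate the
     * difference, and divide by the window length T to get the mean.
     * DELAY, DT and the fixed window are assumptions, not the block's
     * internals. */
    #include <stdio.h>
    #include <math.h>

    #define DELAY 200        /* window length T in samples (assumed fixed) */
    #define DT    1.0e-4     /* sample period [s]                          */

    static double delay_line[DELAY];  /* holds the last DELAY input samples */
    static int    head = 0;
    static double acc  = 0.0;         /* bounded integrator state           */

    /* push one input sample, return the mean over the last DELAY samples */
    double running_mean(double u)
    {
        double u_delayed = delay_line[head];   /* u[n - DELAY], 0 while filling */
        delay_line[head] = u;
        head = (head + 1) % DELAY;

        acc += (u - u_delayed) * DT;           /* integrate the difference only */
        return acc / (DELAY * DT);             /* divide by window length T     */
    }

    int main(void)
    {
        const double PI = 3.14159265358979323846;
        double mean = 0.0;

        for (int n = 0; n < 5000; n++)
            mean = running_mean(5.0 + sin(2.0 * PI * 50.0 * n * DT));

        printf("mean after 5000 samples: %f\n", mean);  /* approaches the DC value 5 */
        return 0;
    }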
Alternatively, you could replace the ideal integrator 1/s with a low-pass filter that has finite gain at DC, e.g. 1/(1+s), although this (as well as the anti-windup controller suggested by @thewaywewalk) will distort your signal compared to the ideal behaviour.
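For completeness, a rough sketch of that alternative: the low-pass 1/(1+s) discretized with backward Euler (the time constant of 1 s and the sample period are assumptions on my part). With a constant input the state settles at the DC value instead of growing without bound, at the price of the distortion mentioned above.

    /* Rough sketch of the leaky-integrator alternative: the ideal integrator
     * 1/s is replaced by the low-pass 1/(1+s), discretized with backward
     * Euler. Time constant (1 s) and sample period DT are assumptions. */
    #include <stdio.h>

    #define DT 1.0e-4   /* sample period [s] */

    /* one step of y' + y = u  ->  y[n] = (y[n-1] + DT*u[n]) / (1 + DT) */
    double leaky_step(double *state, double u)
    {
        *state = (*state + DT * u) / (1.0 + DT);
        return *state;
    }

    int main(void)
    {
        double y = 0.0;

        /* constant input: the state settles at the DC value instead of overflowing */
        for (int n = 0; n < 100000; n++)
            leaky_step(&y, 5.0);

        printf("state after 10 s of constant input: %f\n", y);  /* about 5, bounded */
        return 0;
    }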
PS: Thanks to stackoverflow for not supporting proper math-notation... :-/