Tags: python, algorithm, filter, astronomy

Algorithm to smooth out noise in a running system while converging on an initially unknown constant


I'm trying to smooth out noise in a slightly unusual situation. There's probably a common algorithm to solve my problem, but I don't know what it is.

I'm currently building a robotic telescope mount. To track the movement of the sky, the mount takes a photo of the sky once per second and tracks changes in the X, Y, and rotation of the stars it can see.

If I just use the raw measurements to track rotation, the output is choppy and noisy, like this:

Guiding with raw rotation measurements: [image]

If I use a lowpass filter, the mount overshoots and never completely settles down. A lower Beta value helps with this, but then the corrections are too slow and error accumulates.

Guiding with lowpass filter: [image]

(In both graphs, purple is the difference between sky and mount rotation, red is the corrective rotations made by the mount.)

A moving average had the same problems as the lowpass filter.

More information about the problem:

  • For a given area of the sky, the rotation of the stars will be constant. However, we don't know where we are and the measurement of sky rotation is very noisy due to atmospheric jitter, so the algorithm has to work its way towards this initially unknown constant value while guiding.
  • The mount can move as far as necessary in one second, and has its own control system. So I don't think this is a PID loop control system problem.
  • It's OK to guide badly (or not at all) for the first 30 seconds or so.

I wrote a small Python program to simulate the problem - might as well include it here, I suppose. This one is currently using a lowpass filter.

#!/usr/bin/env python3
import random
import matplotlib.pyplot as plt

ROTATION_CONSTANT = 0.1
TIME_WINDOW       = 300

skyRotation     = 0
mountRotation   = 0
error           = 0
errorList       = []
rotationList    = []
measurementList = []

smoothData = 0
LPF_Beta = 0.08

for step in range(TIME_WINDOW):
    skyRotation += ROTATION_CONSTANT
    # Atmospheric jitter: the difference of two uniform samples gives zero-mean noise in [-1, 1]
    randomNoise = random.random() - random.random()
    rotationMeasurement = skyRotation - mountRotation + randomNoise

    # Lowpass filter (exponential smoothing) of the raw measurement
    smoothData = smoothData - (LPF_Beta * (smoothData - rotationMeasurement))
    mountRotation += smoothData

    rotationList.append(smoothData)
    errorList.append(skyRotation - mountRotation)
    measurementList.append(rotationMeasurement)

# Black reference line marks the true rotation constant
plt.plot([0, TIME_WINDOW], [ROTATION_CONSTANT, ROTATION_CONSTANT], color='black', linestyle='-', linewidth=2)
plt.plot(errorList, color="purple")
plt.plot(rotationList, color="red")
plt.plot(measurementList, color="blue", alpha=0.2)
plt.axis([0, TIME_WINDOW, -1.5, 1.5])
plt.xlabel("Time (seconds)")
plt.ylabel("Rotation (degrees)")
plt.show()

If anyone knows how to make this converge smoothly (or could recommend relevant learning resources), I would be most grateful. I'd be happy to read up on the topic but so far haven't figured out what to look for!


Solution

I would first try to do this the easy way: make your control output the result of a PID controller, then tune the PID as described at e.g. https://robotics.stackexchange.com/questions/167/what-are-good-strategies-for-tuning-pid-loops or via your favourite web search.
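For a sense of scale, the core of a discrete PID is only a few lines. Below is a minimal sketch dropped into the question's own simulation loop; the gains KP, KI and KD are placeholder values I made up for illustration, not tuned ones.

import random

ROTATION_CONSTANT = 0.1
TIME_WINDOW       = 300
KP, KI, KD = 0.5, 0.1, 0.0     # placeholder gains - these still need tuning

skyRotation   = 0.0
mountRotation = 0.0
integral      = 0.0
previousError = 0.0
errorList     = []

for step in range(TIME_WINDOW):
    skyRotation += ROTATION_CONSTANT
    # Noisy measurement of the pointing error, as in the question's simulator
    error = skyRotation - mountRotation + (random.random() - random.random())

    integral  += error                   # accumulates the constant drift
    derivative = error - previousError   # dt is one second per frame
    previousError = error

    correction = KP * error + KI * integral + KD * derivative
    mountRotation += correction
    errorList.append(skyRotation - mountRotation)

print("final error:", errorList[-1], "final correction:", correction)

With only the proportional term the mount would lag behind the constant drift by a fixed offset (roughly ROTATION_CONSTANT / KP); the integral term is what removes that steady-state error.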

Most other approaches require you to have an accurate model of the situation, including how the hardware under control responds to your control inputs, so your next step might be experiments to identify that response, e.g. by measuring its reaction to simple test inputs such as an impulse or a step. Once you have a simulator you can, at the very least, tune parameters for proposed approaches more quickly and safely on the simulator than on the real hardware.
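As a concrete example of that kind of experiment, the sketch below applies a single step command and records how the measured rotation settles afterwards; command_mount_rotation() and measure_rotation() are hypothetical placeholders for whatever your mount driver and star-measurement code actually expose.

import time

def command_mount_rotation(degrees):
    """Hypothetical placeholder: send a rotation command to the real mount."""
    pass

def measure_rotation():
    """Hypothetical placeholder: take a frame and return the measured rotation."""
    return 0.0

STEP_SIZE     = 0.5   # degrees; an arbitrary test input
SETTLE_FRAMES = 30    # number of one-second frames to record after the step

baseline = []
for _ in range(10):               # first, record the noise floor
    time.sleep(1)
    baseline.append(measure_rotation())

command_mount_rotation(STEP_SIZE)

# The lag, overshoot and settling time in this trace are the behaviour
# your simulator should reproduce
response = []
for _ in range(SETTLE_FRAMES):
    time.sleep(1)                 # one camera frame per second
    response.append(measure_rotation())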

If your simulator is accurate, and if you are seeing more problems in the first 30 seconds than afterwards, I suggest using a Kalman filter to estimate the current error, and then sending the control that (according to the model you have constructed) will minimise the mean squared error between the time the control is acted upon and the time of the next observation. Using a Kalman filter will at least take account of the increased uncertainty in the estimate when the system starts up.
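As a concrete illustration, here is a minimal sketch of that idea dropped into the question's simulator: a two-state Kalman filter tracking the pointing error and the unknown rotation rate, with a deliberately simple control that zeroes the predicted error at the next frame. The process noise Q, the initial covariance and the control law are assumptions chosen for illustration rather than tuned values; the measurement variance is just the variance of the simulated noise.

import random
import numpy as np
import matplotlib.pyplot as plt

ROTATION_CONSTANT = 0.1
TIME_WINDOW       = 300

skyRotation   = 0.0
mountRotation = 0.0

# State: [pointing error just before this frame's correction, sky rotation rate]
x = np.array([0.0, 0.0])
P = np.diag([1.0, 1.0])        # start out very uncertain about both
F = np.array([[1.0, 1.0],      # the error grows by the rate each second...
              [0.0, 1.0]])     # ...and the rate itself stays constant
B = np.array([-1.0, 0.0])      # last frame's correction directly reduced the error
H = np.array([1.0, 0.0])       # we only observe the (noisy) pointing error
Q = np.diag([1e-4, 1e-6])      # small process noise, an assumed value
R_MEAS = 1.0 / 6.0             # variance of (uniform - uniform) measurement noise

u = 0.0                        # correction sent to the mount on the previous frame
errorList, correctionList = [], []

for step in range(TIME_WINDOW):
    skyRotation += ROTATION_CONSTANT
    z = skyRotation - mountRotation + (random.random() - random.random())

    # Predict: propagate error and rate forward, accounting for the last correction
    x = F @ x + B * u
    P = F @ P @ F.T + Q

    # Update with the new measurement
    y = z - H @ x                  # innovation
    S = H @ P @ H + R_MEAS         # innovation variance
    K = P @ H / S                  # Kalman gain
    x = x + K * y
    P = P - np.outer(K, H @ P)

    # Correct by the estimated error plus one second's worth of estimated rate,
    # so the predicted error at the next frame is zero
    u = x[0] + x[1]
    mountRotation += u

    errorList.append(skyRotation - mountRotation)
    correctionList.append(u)

plt.plot(errorList, color="purple")
plt.plot(correctionList, color="red")
plt.axis([0, TIME_WINDOW, -1.5, 1.5])
plt.xlabel("Time (seconds)")
plt.ylabel("Rotation (degrees)")
plt.show()

Because the rotation rate is part of the state, the filter converges on the unknown constant on its own, and the large initial covariance means the first handful of corrections are aggressive and noisy before settling, which fits the "OK to guide badly for the first 30 seconds" constraint.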

Warning: the above use of the Kalman filter is myopic, and will fail dramatically in some situations where there is something corresponding to momentum: it will over-correct and end up swinging wildly from one extreme to another. A better use of the Kalman filter results would be to compute a sequence of control inputs that minimises the predicted error at the end of that sequence (e.g. with dynamic programming), and then revisit the problem after the first control input has been executed. In the simple example where I found over-correction, you can get stable behaviour by calculating the single control action that minimises the error if sustained for two time periods, but revisiting the problem and recalculating the control action at the end of one time period. YMMV.

If that doesn't work, perhaps it is time to take your accurate simulation, linearize it to get differential equations, and apply classical control theory. If it won't linearize reasonably over the full range, you could try breaking that range down, perhaps using different strategies for large and small errors.

What little experience I have with control loops suggests that it is extremely important to minimise the delay and jitter in the loop between the sensor sensing and the control actuating. If there is any unnecessary source of jitter or delay between input and control, forget the control theory until you have that fixed.