Tags: filter, signals, kalman-filter, orbit

Noise filtering for data points separated by significantly varying time steps


In the context of orbit determination, I gather angle measurements from three optical telescopes, resulting in data that exhibits different time intervals between measurements. For example:

  • Telescope 1 provides approximately 10 measurements spaced 2 seconds apart.
  • Telescope 2 waits 20 minutes, then takes a similar series of measurements.
  • Telescope 3 does the same after a further 20-minute wait.

This leads to data points separated by significantly different time intervals, such as:

0s, 2s, 4s, 6s, ... , 20s, 20min20s, 20min22s, 20min24s, ... 20min40s, 40min40s, ... and so on
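For concreteness, a time grid like the one above can be generated as follows (the exact counts are assumptions for illustration: 11 samples per pass, 2 s apart, with each new pass starting 20 minutes after the previous pass's last sample):

```python
# Build the irregular measurement times described above.
times = []
start = 0.0
for _pass in range(3):  # three telescope passes
    times.extend(start + 2.0 * k for k in range(11))  # 0, 2, ..., 20 s within a pass
    start = times[-1] + 20 * 60  # next pass begins 20 min after the last sample

print(times[:3], times[10:13])  # [0.0, 2.0, 4.0] [20.0, 1220.0, 1222.0]
```

The step from 20 s to 1220 s (20 min 20 s) is the kind of gap that makes a fixed-step filter inapplicable.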

I would like to filter this data to recover the hidden states, as a Kalman filter does with regularly spaced data. However, the varying time steps are a challenge, and I'm unsure how to proceed with filtering in this context.

Given that I have an estimate of the standard error in my predictions, I've considered the following approach:

  1. Initially, predict an orbit with raw data without applying any filtering.
  2. Create an artificially noisy orbit to interpolate data points between the measurements.
  3. Apply a standard Kalman filter to the data, which now includes the interpolated values.
  4. Refine my orbit estimate using this filtered data.

If anyone has insights or recommendations for handling such data, I would greatly appreciate your input. Thank you.


Solution

  • Based on what you have said, a batch filter is probably your best bet to start with. For most batch algorithms, you'll need an initial guess of the orbit. So I agree with your step (1); using Gauss's method for angles-only orbit determination using the noisy measurements would give you a decent initial estimate.

    After that, the batch estimation algorithm can be approached as a nonlinear least squares problem. Batch estimators are usually set up to refine the estimate of the initial state x_0, which will often just be the initial position and velocity states. The cost function to minimize will be

    J = sum_{i=1}^N (y_i - h_i(x_0))^2

    In the above I'm assuming that you have N measurements. The true measurements are y_i, and the predicted measurement at time t_i as a function of the initial state is h_i(x_0). To evaluate this h() function, you'll need to propagate the initial state x_0 to time t_i and figure out what the measurement would be.
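The structure of the cost function can be sketched in Python. The dynamics here are a toy 2D constant-velocity model standing in for a real orbit propagator, and the measurement is the bearing from the origin standing in for a telescope's angle measurement; all names and models below are illustrative assumptions, not the actual orbit-determination code:

```python
import numpy as np

def propagate(x0, t):
    """Toy stand-in for the orbit propagator: 2D constant-velocity motion.
    State is [px, py, vx, vy]; a real implementation would numerically
    integrate two-body (or higher-fidelity) dynamics out to time t."""
    px, py, vx, vy = x0
    return np.array([px + vx * t, py + vy * t, vx, vy])

def h(x0, t):
    """Predicted measurement at time t: propagate the initial state,
    then compute the angle the sensor would see (here, a bearing)."""
    px, py, _, _ = propagate(x0, t)
    return np.arctan2(py, px)

def cost(x0, times, y):
    """J = sum_{i=1}^N (y_i - h_i(x0))^2 -- the batch least-squares cost."""
    residuals = np.array([y_i - h(x0, t_i) for t_i, y_i in zip(times, y)])
    return float(residuals @ residuals)
```

Note that the irregular spacing of the t_i causes no difficulty here: each measurement is simply propagated to from x_0 independently.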

    There are some tricky things that you need to do with process noise in the Kalman Filter when you have large gaps between measurements like you do, but with a batch filter it doesn't make much of a difference. Your h() function will just be based on which ground station is being used at that particular time.
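If you do go the Kalman filter route, the main adjustment for large gaps is rebuilding the state transition matrix and the process-noise covariance from the actual dt at every step, so that uncertainty grows appropriately over a 20-minute gap. A minimal sketch for a 1D constant-velocity model with a continuous white-noise acceleration of spectral density q (both the model and q are illustrative assumptions):

```python
import numpy as np

def predict(x, P, dt, q):
    """Kalman predict step with dt-dependent F and Q.

    x: state [position, velocity]; P: 2x2 covariance;
    dt: time since the last measurement (may be seconds or tens of minutes);
    q: white-noise acceleration spectral density (a tuning knob).
    """
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])
    # Discretized Q for continuous white-noise acceleration; note it grows
    # like dt**3 in the position term, so long gaps inflate P substantially.
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q
```

Calling this once per measurement with whatever dt actually elapsed handles the irregular spacing directly, with no interpolation of artificial measurements needed.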

    There are a few ways that you can solve this. The method that I am the most familiar with is using the Gauss-Newton algorithm. It works well, but for that you will need the derivative of the measurement function with respect to the initial state, which requires integrating the state transition matrix (I can elaborate if that would be helpful, but it requires some math). You might have some luck using a derivative-free method (e.g. one of the derivative-free methods in scipy.optimize.minimize), but I've never tried that for an orbit determination problem.
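As a sketch of the derivative-free option: scipy.optimize.minimize with a method such as Powell only needs the cost function itself, not its Jacobian. The setup below is a deliberately simplified 1D stand-in (noisy position measurements and a linear model in place of angle measurements and an orbit propagator), so treat it as a shape of the approach rather than a working orbit-determination solver:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Irregular measurement times, mimicking the telescope-pass pattern.
times = np.array([0.0, 2.0, 4.0, 1220.0, 1222.0])
x_true = np.array([1.0, 0.5])  # "initial state": [position, velocity]
y = x_true[0] + x_true[1] * times + rng.normal(0.0, 0.01, times.size)

def cost(x0):
    """Batch least-squares cost: in a real problem the prediction would
    come from propagating x0 through the orbit dynamics to each time."""
    pred = x0[0] + x0[1] * times
    return np.sum((y - pred) ** 2)

res = minimize(cost, x0=np.zeros(2), method="Powell")
print(res.x)  # should land close to x_true
```

The trade-off is cost: each evaluation of the objective requires a full propagation over all measurement times, and derivative-free methods typically need many more evaluations than Gauss-Newton.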

    A big assumption with the batch filter is that the effect of noise during the time period being considered is negligible. If your simulation adds a lot of process noise during the simulation, then you may need to use a Kalman Filter instead.

    I kept this post fairly high level since I'm not sure about your technical background, but I can elaborate more on the details if you have any questions. A great reference is the book "Statistical Orbit Determination" by Tapley, Schutz, and Born.