I have a rotating platform and a single sensor measuring its position. I tried to program a simple Kalman filter to smooth the measurements. My measurement runs are around 10,000-15,000 data points long, and each simulation covers 3 minutes.
I expected the code to be much faster, since Kalman filters are used in real-time applications. Other calculations I did with the same measurements don't take nearly as long and are almost instant. Or is this normal because of the matrix operations?
I mostly used this example as a template: Kalman Filter Implementation for Constant Velocity Model (CV) in Python. Below is my code; maybe there are possibilities to write it differently to make it faster?
import numpy as np
import matplotlib.pyplot as plt

# df and name are defined earlier: df is a pandas DataFrame holding the
# 'Time Delta' and 'HMC Az Relative' columns for the measurement run.

x = np.matrix([[0.0, 0.0]]).T  # initial state vector: position and velocity
P = np.diag([1.0, 1.0])        #!!! initial uncertainty
H = np.matrix([[1.0, 0.0]])    # measurement matrix H
StD = 20                       # standard deviation of the sensor
R = StD**2                     # measurement noise covariance
sv = 2                         # process noise, basically through possible acceleration
I = np.eye(2)                  # identity matrix

for n in range(len(df.loc[[name], ['Time Delta']])):
    # Time Update (Prediction)
    # ========================
    # Update A and Q with the correct timestep
    dt = float(df.loc[[name], ['Time Delta']].values[n])
    A = np.matrix([[1.0, dt],
                   [0.0, 1.0]])
    G = np.matrix([dt**2/2, dt]).T  #!!! np.matrix([[dt**2/2], [dt]]) # process noise
    Q = G*G.T*sv**2                 # process noise covariance Q

    # Project the state ahead
    x = A*x  # not used: + B*u
    # Project the error covariance ahead
    P = A*P*A.T + Q

    # Measurement Update (Correction)
    # ===============================
    # Compute the Kalman gain
    S = H*P*H.T + R
    K = (P*H.T) * S**-1  #!!! use np.linalg.pinv(S) instead of S**-1 if S is a matrix

    # Update the estimate via z
    Z = df.loc[[name], ['HMC Az Relative']].values[n]
    y = Z - (H*x)
    x = x + (K*y)
    # Update the error covariance
    P = (I - (K*H))*P
I found the answer.
Calling the pandas label-based lookup inside the for loop for dt and Z made the code really slow. I now extract the dt and z arrays into two variables before the loop, and the code runs pretty much instantly. PythonSpeedPerformanceTips helped me realize where my problem was. Also, Stack Exchange Code Review would have been a better place to ask this kind of question, for anyone who reads this and has a similar problem.
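The difference can be demonstrated with a small self-contained timing sketch. The DataFrame here is synthetic stand-in data shaped like the real one (a run name in the index plus the two columns); the point is only the per-iteration `.loc` lookup versus a one-time array extraction:

```python
import time

import numpy as np
import pandas as pd

# Synthetic stand-in for the real data
name = "run1"
n_points = 3000
df = pd.DataFrame(
    {"Time Delta": np.full(n_points, 0.01),
     "HMC Az Relative": np.random.randn(n_points)},
    index=pd.MultiIndex.from_product([[name], range(n_points)]),
)

# Slow: a label-based lookup on every iteration
t0 = time.perf_counter()
slow = [float(df.loc[[name], ['Time Delta']].values[n, 0]) for n in range(n_points)]
t_slow = time.perf_counter() - t0

# Fast: extract the NumPy array once, then index it
t0 = time.perf_counter()
arr = df.loc[[name], ['Time Delta']].values.ravel()
fast = [float(arr[n]) for n in range(n_points)]
t_fast = time.perf_counter() - t0

print(f"per-loop .loc: {t_slow:.3f}s, precomputed array: {t_fast:.3f}s")
```

Both loops produce identical values; only the access pattern differs.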
import numpy as np
import matplotlib.pyplot as plt

x = np.matrix([[0.0, 0.0]]).T  # initial state vector: position and velocity
P = np.diag([1.0, 1.0])        #!!! initial uncertainty
H = np.matrix([[1.0, 0.0]])    # measurement matrix H
StD = 20                       # standard deviation of the sensor
R = StD**2                     # measurement noise covariance
sv = 2                         # process noise, basically through possible acceleration
I = np.eye(2)                  # identity matrix

# Pull the arrays out of the DataFrame once, instead of on every iteration
timesteps = df.loc[[name], ['Time Delta']].values
measurements = df.loc[[name], ['HMC Az Relative']].values

for n in range(len(timesteps)):
    # Time Update (Prediction)
    # ========================
    # Update A and Q with the correct timestep
    dt = float(timesteps[n])  # cast to a scalar; each row of .values is a 1-element array
    A = np.matrix([[1.0, dt],
                   [0.0, 1.0]])
    G = np.matrix([dt**2/2, dt]).T  #!!! np.matrix([[dt**2/2], [dt]]) # process noise
    Q = G*G.T*sv**2                 # process noise covariance Q

    # Project the state ahead
    x = A*x  # not used: + B*u
    # Project the error covariance ahead
    P = A*P*A.T + Q

    # Measurement Update (Correction)
    # ===============================
    # Compute the Kalman gain
    S = H*P*H.T + R
    K = (P*H.T) * S**-1  #!!! use np.linalg.pinv(S) instead of S**-1 if S is a matrix

    # Update the estimate via z
    Z = measurements[n]
    y = Z - (H*x)
    x = x + (K*y)
    # Update the error covariance
    P = (I - (K*H))*P