Here is the situation: I have ~20,000 MAT files, and each file contains a simulation result, i.e. a time vector and multiple signal vectors. The time vector contains variable-step timestamps, and for each timestamp there are corresponding signal values.
What I'm trying to do is reduce the amount of data by converting from variable steps to fixed steps of 0.001 seconds, because the target system will also use a sample rate of 1000 samples/s. Another reason is the next step: I'm going to apply machine learning techniques, and less data means faster results.
I already have some working code, but it seems to be very inefficient, and I would like some help to improve its efficiency.
So here is what I've got:
time_vs; % Variable step timestamp vector
s1; s2; s3; % Signals corresponding to time_vs
% For each mat file I do (loading and so on omitted):
new_time = min(time_vs):0.001:max(time_vs); % Create the new fixed step time vector.
% Find indexes for nearest timestamps.
indexes = [];
for t = new_time
    time_diff = abs(time_vs - t);
    add_index = find(time_diff == min(time_diff));
    indexes = [indexes, add_index(1)];
end
% Reduce the signals.
s1 = s1(indexes);
% and so on ...
My approach is to search the variable-step time vector for the timestamps closest to the fixed-step timestamps (min(time_diff)). The indexes of these timestamps are then collected and used to reduce the signals (s1 = s1(indexes)).
Hopefully there is a more efficient solution.
Thanks in advance.
Regards, Ed
Since your loop has quadratic runtime complexity (for every fixed-step timestamp it scans the whole variable-step time vector), I would instead consider the built-in function interp1 for linear interpolation:
s1 = interp1(time_vs, s1, new_time);
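If you want to keep the behavior of your original approach, i.e. picking the value of the nearest existing sample rather than interpolating between samples, interp1 also accepts a method argument. Here is a minimal sketch, assuming time_vs is strictly increasing and that the signals can be stacked as columns of one matrix (names like s1_fixed and signals_fixed are just illustrative):

new_time = min(time_vs):0.001:max(time_vs);     % Fixed-step time vector, 1000 samples/s.

% Nearest-neighbour resampling, matching your index-based reduction
% (up to tie-breaking when a new timestamp lies exactly halfway between two samples).
s1_fixed = interp1(time_vs, s1, new_time, 'nearest');

% Resample several signals in one call: interp1 interpolates each column
% of the value matrix independently.
signals       = [s1(:), s2(:), s3(:)];
signals_fixed = interp1(time_vs, signals, new_time(:), 'nearest');

Either way the explicit loop disappears, while the default 'linear' method instead returns values interpolated between the two surrounding samples, which is often what you want when resampling to a fixed rate anyway.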