I am trying to create a sliding window over a time series. So far I have working code that takes a given series, sets a window length in seconds, and builds a rolling sample. My issue is that it takes very long to run and looks like an inefficient approach.
# ========== create dataset =========================== #
import numpy as np
import pandas as pd
from datetime import datetime
timestamp_list = ["2022-02-07 11:38:08.625",
"2022-02-07 11:38:09.676",
"2022-02-07 11:38:10.084",
"2022-02-07 11:38:10.10000",
"2022-02-07 11:38:11.2320"]
bid_price_list = [1.14338,
1.14341,
1.14340,
1.1434334,
1.1534334]
df = pd.DataFrame(list(zip(timestamp_list, bid_price_list)), columns=['timestamp', 'value'])
# convert the timestamp strings into datetime objects
df.timestamp = [datetime.strptime(time_i, "%Y-%m-%d %H:%M:%S.%f") for time_i in df.timestamp]
df.head(3)
                 timestamp    value
0  2022-02-07 11:38:08.625  1.14338
1  2022-02-07 11:38:09.676  1.14341
2  2022-02-07 11:38:10.084  1.14340
# ========== create rolling time-series function ====== #
# get the floor of time (second value)
df["timestamp_to_sec"] = df["timestamp"].dt.floor('s')
# set rolling window length in seconds
window_dt = pd.Timedelta(seconds=2)
# containers for rolling sample statistics
n_list = []
mean_list = []
std_list = []
# add dt (window) seconds to the original time which was floored to the second
df["timestamp_to_sec_dt"] = df["timestamp_to_sec"] + window_dt
# get unique end times
time_unique_endlist = np.unique(df.timestamp_to_sec_dt)
# remove end times that are greater than the last actual time, i.e. max(df["timestamp_to_sec"])
time_unique_endlist = time_unique_endlist[time_unique_endlist <= max(df["timestamp_to_sec"])]
# loop running the sliding window (time_i is the end time of each window)
for time_i in time_unique_endlist:
    # start time of each rolling window
    start_time = time_i - window_dt
    # sample for each time period of the sliding window
    rolling_sample = df[(df.timestamp >= start_time) & (df.timestamp <= time_i)]
    # calculate the sample statistics
    n_list.append(len(rolling_sample))                # store observation count
    mean_list.append(rolling_sample["value"].mean())  # store rolling sample mean
    std_list.append(rolling_sample["value"].std())    # store rolling sample standard deviation
    # plot histogram for each sample of the rolling sample
    #plt.hist(rolling_sample.value, bins=10)
# tested: n_list brings back the correct window counts
>>> n_list
[2, 3]
Is there a more efficient way of doing this, a way I could improve my implementation, or an open-source package that lets me run a rolling window like this? I know pandas has .rolling(), but with an integer window it rolls over a fixed number of rows. I want something I can use on unevenly-spaced data, where the time defines a fixed-length rolling window.
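For context, here is a minimal sketch of the offset-based form of .rolling() (the name per_row_stats is just for illustration). As I understand it, each window ends at a row's own timestamp rather than at a floored second, so it is not quite the output I am after:

# offset-based .rolling(): each window ends at a row's own timestamp
# (requires the timestamp column to be a sorted datetime column, as above)
rolled = df.rolling("2s", on="timestamp")["value"]
per_row_stats = pd.DataFrame({
    "sample_size": rolled.count(),
    "sample_mean": rolled.mean(),
    "sample_std": rolled.std(),
})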
This seems to give the best performance; hope it helps anyone else.
# set rolling window length in seconds
window_dt = pd.Timedelta(seconds=2)
# add dt (window) seconds to the floored timestamp
df["timestamp_to_sec_dt"] = df["timestamp_to_sec"] + window_dt
# get unique end times
time_unique_endlist = np.unique(df.timestamp_to_sec_dt)
# remove end values that are greater than the last actual value, i.e. max(df["timestamp_to_sec"])
time_unique_endlist = time_unique_endlist[time_unique_endlist <= max(df["timestamp_to_sec"])]
# containers for rolling sample statistics
mydic = {}
# loop running the rolling window; counter indexes each window
for counter, time_i in enumerate(time_unique_endlist):
    start_time = time_i - window_dt
    # sample for each time period of the sliding window
    rolling_sample = df[(df.timestamp >= start_time) & (df.timestamp <= time_i)]
    # calculate the sample statistics
    mydic[counter] = {
        "sample_size": len(rolling_sample),
        "sample_mean": rolling_sample["value"].mean(),
        "sample_std": rolling_sample["value"].std()
    }
# results in a DataFrame
results = pd.DataFrame.from_dict(mydic).T
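If the per-window boolean masks over the whole frame are still too slow on larger data, one possible refinement (a sketch only, not benchmarked; results_fast and the helper names are just for illustration) is to exploit the fact that the frame is sorted by timestamp and locate each window's row range with searchsorted, so only the rows inside the window are touched:

# ========== searchsorted-based sketch ================= #
# assumes df is sorted by timestamp (true for the example data)
ts = df["timestamp"]
values = df["value"].to_numpy()
rows = []
for time_i in time_unique_endlist:
    start_time = time_i - window_dt
    # position of the first row with timestamp >= start_time
    lo = ts.searchsorted(start_time, side="left")
    # position one past the last row with timestamp <= time_i
    hi = ts.searchsorted(time_i, side="right")
    window_values = values[lo:hi]
    rows.append({
        "sample_size": len(window_values),
        "sample_mean": window_values.mean() if len(window_values) else np.nan,
        "sample_std": window_values.std(ddof=1) if len(window_values) > 1 else np.nan,
    })
results_fast = pd.DataFrame(rows)

This keeps the same window boundaries as the loop above (both endpoints inclusive) while avoiding a full scan of the DataFrame for every window.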