I have data of 50 samples per time series. I want to build a time series classifier.
Each sample has three inputs: a vector of shape 1x768, a vector of shape 1x25, and a vector of shape 1x496.
Each input comes from a different modality, so each needs to go through some input-specific layers before concatenating all of them.
The data is stored in a dataframe:
df = time_series_id timestamp input1 input2 input3 time_series_label
0 0 [x0..x768] [x0..x25] [x0..x496] A
0 1 [x0..x768] [x0..x25] [x0..x496] A
..
0 50 [x0..x768] [x0..x25] [x0..x496] A
1 0 [x0..x768] [x0..x25] [x0..x496] B
1 50 [x0..x768] [x0..x25] [x0..x496] B
I am new to DL and I want to build a network that classifies each 50-timestamp-long time series into one of 2 classes, but I couldn't find any tutorial that shows how to feed multimodal data into Conv1d or LSTM layers.
How can I build such a network, preferably with Keras, and train it on my dataframe in order to classify time series? (So that when I give it a new time series of 50 timestamps, I get an A/B prediction for the entire time series.)
Please note that the label is the same for all rows with the same id, so each time I need to feed the RNN only samples that share the same id.
I have created a nice example for you:
import numpy as np
import pandas as pd

# Define a mini-dataset similar to your example
df = pd.DataFrame({'A': [np.zeros(768)] * 100, 'B': [np.ones(25)] * 100})
# 100 rows, 2 columns (each value in column A is an array of size 768, each value in column B is an array of size 25)
Preprocess the data into rolling windows of 50 timestamps:
# Create windows of data: collect the row indexes belonging to each 50-timestamp window
list_of_indexes = []
df.index.to_series().rolling(50).apply((lambda x: list_of_indexes.append(x.tolist()) or 0), raw=False)
d_A = df.A.apply(list)
d_B = df.B.apply(list)
# Build the windows; the rolling indexes come back as floats, so cast them to int
a = [[d_A[int(ix)] for ix in x] for x in list_of_indexes]
b = [[d_B[int(ix)] for ix in x] for x in list_of_indexes]
a = np.array(a)
b = np.array(b)
print(f'a shape: {a.shape}')
print(f'b shape: {b.shape}')
Data after preprocessing:
a shape: (51, 50, 768)
b shape: (51, 50, 25)
Explanation:
a: 51 samples, where each sample contains 50 timestamps and each timestamp contains 768 values. (b is the same, with 25 values.)
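The fit step below also needs one label per window (y). For this toy example, here is a minimal placeholder with random binary labels; replace them with your real per-window labels:
# Placeholder binary labels, one per rolling window -- replace with your real labels
y = np.random.randint(0, 2, size=(a.shape[0],))
print(f'y shape: {y.shape}')  # (51,)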
Create a model with two inputs, input a and input b. You can process each of them separately and then concatenate them.
# Imports for the model (assuming TensorFlow 2.x Keras)
from tensorflow.keras.layers import Input, LSTM, Bidirectional, Dense, concatenate
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

# define two sets of inputs
input_A = Input(shape=(50, 768))
input_B = Input(shape=(50, 25))
LSTM_A = Bidirectional(LSTM(32))(input_A)
LSTM_B = Bidirectional(LSTM(32))(input_B)
combined = concatenate([LSTM_A, LSTM_B])
dense1 = Dense(32, activation='relu')(combined)
output = Dense(1, activation='sigmoid')(dense1)
model = Model(inputs=[input_A, input_B], outputs=output)
model.summary()
Fit the model:
adam = Adam(learning_rate=0.00001)  # 'lr' is deprecated in recent Keras versions
model.compile(loss='binary_crossentropy', optimizer=adam)
history = model.fit([a, b], y, batch_size=2, epochs=2)
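Since the question also mentions Conv1d, the same two-input pattern works with convolutional encoders instead of LSTMs. A minimal sketch (the filter counts, kernel size, and pooling choice are arbitrary placeholders, not tuned values):
from tensorflow.keras.layers import Conv1D, GlobalMaxPooling1D

# Per-modality Conv1D encoders, then global pooling over the 50 timestamps
input_A = Input(shape=(50, 768))
input_B = Input(shape=(50, 25))
conv_A = Conv1D(64, kernel_size=3, activation='relu')(input_A)
conv_B = Conv1D(64, kernel_size=3, activation='relu')(input_B)
pool_A = GlobalMaxPooling1D()(conv_A)
pool_B = GlobalMaxPooling1D()(conv_B)
combined = concatenate([pool_A, pool_B])
dense1 = Dense(32, activation='relu')(combined)
output = Dense(1, activation='sigmoid')(dense1)
conv_model = Model(inputs=[input_A, input_B], outputs=output)
conv_model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.00001))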
Of course, you can also concatenate before the LSTM:
# define two sets of inputs
input_A = Input(shape=(50, 768))
input_B = Input(shape=(50, 25))
combined = concatenate([input_A, input_B])
LSTM_layer = Bidirectional(LSTM(32))(combined)
dense1 = Dense(32, activation='relu')(LSTM_layer)
output = Dense(1, activation='sigmoid')(dense1)
model = Model(inputs=[input_A, input_B], outputs=output)
model.summary()
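Since your data actually has three modalities (768, 25, and 496 values per timestamp), the same pattern extends to three inputs, optionally with an input-specific projection layer per modality before the recurrent part. A sketch (the Dense projection sizes are arbitrary placeholders):
from tensorflow.keras.layers import TimeDistributed

input_A = Input(shape=(50, 768))
input_B = Input(shape=(50, 25))
input_C = Input(shape=(50, 496))
# Input-specific layers applied to every timestamp of each modality
proj_A = TimeDistributed(Dense(64, activation='relu'))(input_A)
proj_B = TimeDistributed(Dense(16, activation='relu'))(input_B)
proj_C = TimeDistributed(Dense(64, activation='relu'))(input_C)
combined = concatenate([proj_A, proj_B, proj_C])   # shape: (50, 64 + 16 + 64)
lstm_out = Bidirectional(LSTM(32))(combined)
dense1 = Dense(32, activation='relu')(lstm_out)
output = Dense(1, activation='sigmoid')(dense1)
model3 = Model(inputs=[input_A, input_B, input_C], outputs=output)
model3.summary()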
EDIT:
The df is structured as shown in the question (columns: time_series_id, timestamp, input1, input2, input3, time_series_label).
Preprocess code:
# Lists to accumulate one sequence per time_series_id
x_data_inp1 = []
x_data_inp2 = []
y_data = []

def split_into_inputs(group):
    x_data_inp1.append(group.input1.tolist())
    x_data_inp2.append(group.input2.tolist())
    # assuming every row of a given time_series_id has the same label (that's what I understood from the question details)
    y_data.append(group.time_series_label.unique()[0])

df.groupby('time_series_id').apply(split_into_inputs)
# Convert the lists into arrays with a float dtype to match the network
# (np.float was removed in recent NumPy versions, so use plain float)
x_data_inp1 = np.array(x_data_inp1, dtype=float)
x_data_inp2 = np.array(x_data_inp2, dtype=float)
# Convert labels from chars into digits
from sklearn.preprocessing import LabelEncoder
# creating instance of labelencoder
labelencoder = LabelEncoder()
# Assigning numerical values. Convert 'A','B' into 0, 1
y_data = labelencoder.fit_transform(y_data)
x_data_inp1.shape, x_data_inp2.shape, y_data.shape
Output:
((2, 50, 768), (2, 50, 25), (2,))
After preprocessing our 100 samples, there are 2 sequences of 50 samples each, according to the "time_series_id" column, and there are 2 labels: label A as 0 for the first sequence and label B as 1 for the second sequence. Question: does each sequence of 50 samples have a different "time_series_id"?
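To double-check the assumption that every "time_series_id" maps to exactly one label and contains exactly 50 rows, a quick sanity check on the dataframe:
# Each time_series_id should have exactly one distinct label
print(df.groupby('time_series_id')['time_series_label'].nunique())
# Each time_series_id should contain 50 rows (timestamps)
print(df.groupby('time_series_id').size())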
Defining the model:
# define two sets of inputs
input_A = Input(shape=(50, 768))
input_B = Input(shape=(50, 25))
LSTM_A = Bidirectional(LSTM(32))(input_A)
LSTM_B = Bidirectional(LSTM(32))(input_B)
combined = concatenate([LSTM_A, LSTM_B])
dense1 = Dense(32, activation='relu')(combined)
output = Dense(1, activation='sigmoid')(dense1)
model = Model(inputs=[input_A, input_B], outputs=output)
model.summary()
Fitting the model:
adam = Adam(learning_rate=0.00001)
model.compile(loss='binary_crossentropy', optimizer=adam)
history = model.fit([x_data_inp1, x_data_inp2], y_data, batch_size=2, epochs=2)
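Once trained, you can get an A/B prediction for a new 50-timestamp series like this (a sketch: the two input arrays below are random placeholders, built the same way as the training data):
# Hypothetical new series: one sequence of 50 timestamps per modality
new_inp1 = np.random.rand(1, 50, 768)
new_inp2 = np.random.rand(1, 50, 25)
prob = model.predict([new_inp1, new_inp2])[0][0]                        # sigmoid output in [0, 1]
predicted_label = labelencoder.inverse_transform([int(prob > 0.5)])[0]  # back to 'A' / 'B'
print(predicted_label)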