Tags: machine-learning, neural-network, keras, lstm, keras-layer

Issue in LSTM Input Dimensions in Keras


I am trying to implement a multi-input LSTM model in Keras. The code is as follows:

data_1 -> shape (1150,50) 
data_2 -> shape (1150,50)
y_train -> shape (1150,50)

import keras
from keras.layers import Input, LSTM, Dense, Concatenate

input_1 = Input(shape=data_1.shape)
LSTM_1 = LSTM(100)(input_1)

input_2 = Input(shape=data_2.shape)
LSTM_2 = LSTM(100)(input_2)

concat = Concatenate(axis=-1)
x = concat([LSTM_1, LSTM_2])
dense_layer = Dense(1, activation='sigmoid')(x)
model = keras.models.Model(inputs=[input_1, input_2], outputs=[dense_layer])

model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['acc'])

model.fit([data_1, data_2], y_train, epochs=10)

When I run this code, I get a ValueError:

ValueError: Error when checking model input: expected input_1 to have 3 dimensions, but got array with shape (1150, 50)

Does anyone have a solution to this problem?


Solution

  • Use data_1 = np.expand_dims(data_1, axis=2) before you define the model (this assumes import numpy as np). LSTM expects inputs with dimensions (batch_size, timesteps, features), so in your case I'm guessing you have 1 feature, 50 time steps and 1150 samples; you need to add a dimension at the end of your array.

    This needs to be done before you define the model; otherwise, when you set input_1 = Input(shape=data_1.shape) you are telling Keras that your input has 1150 timesteps and 50 features, so it will expect inputs of shape (None, 1150, 50) (the None is the batch dimension and means any batch size will be accepted).

    The same holds for input_2. Note also that Input(shape=...) takes the shape of a single sample, without the batch dimension, so after expanding you want Input(shape=data_1.shape[1:]), i.e. (50, 1); see the sketch below.
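
    Putting it together, here is a minimal runnable sketch of the corrected model. The random arrays are stand-ins for your real data, and I am assuming a binary target of shape (1150, 1) rather than (1150, 50), since Dense(1, activation='sigmoid') produces a single output per sample:

      import numpy as np
      import keras
      from keras.layers import Input, LSTM, Dense, Concatenate

      # Stand-ins for the real arrays (random data, for illustration only)
      data_1 = np.random.random((1150, 50))
      data_2 = np.random.random((1150, 50))
      y_train = np.random.randint(0, 2, size=(1150, 1))  # one binary label per sample (assumption)

      # Add the missing feature dimension: (1150, 50) -> (1150, 50, 1)
      data_1 = np.expand_dims(data_1, axis=2)
      data_2 = np.expand_dims(data_2, axis=2)

      # Input(shape=...) takes the per-sample shape (timesteps, features),
      # without the batch dimension, so use data_1.shape[1:] == (50, 1)
      input_1 = Input(shape=data_1.shape[1:])
      LSTM_1 = LSTM(100)(input_1)

      input_2 = Input(shape=data_2.shape[1:])
      LSTM_2 = LSTM(100)(input_2)

      # Concatenate the two 100-dimensional LSTM outputs into a 200-dim vector
      x = Concatenate(axis=-1)([LSTM_1, LSTM_2])
      dense_layer = Dense(1, activation='sigmoid')(x)
      model = keras.models.Model(inputs=[input_1, input_2], outputs=[dense_layer])

      model.compile(loss='binary_crossentropy',
                    optimizer='adam',
                    metrics=['acc'])

      model.fit([data_1, data_2], y_train, epochs=10)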

    Hope this helps