I'm attempting to perform sentiment classification using a CNN. The error seems to be related to the input_shape parameter.
The x data consists of arrays of integers created using tokenizer.texts_to_sequences.
? x_train.shape
(4460, 20)
? x_train[0]
array([ 49, 472, 4436, 843, 756, 659, 64, 8, 1328, 87, 123,
352, 1329, 148, 2996, 1330, 67, 58, 4437, 144])
The y data consists of one-hot encoded values for classification.
y_train.shape
(4460, 2)
y_train[0]
array([1., 0.], dtype=float32)
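For reference, here is a minimal preprocessing sketch that produces data in these shapes; only texts_to_sequences is mentioned above, so the padding and one-hot steps (pad_sequences, to_categorical), the use of tf.keras, and the names train_texts / train_labels are assumptions.

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical

max_seqlen = 20
num_classes = 2

# train_texts: list of raw strings, train_labels: list of integer class ids (assumed names)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(train_texts)
x_train = pad_sequences(tokenizer.texts_to_sequences(train_texts),
                        maxlen=max_seqlen)            # -> (num_samples, 20) integer ids
y_train = to_categorical(train_labels, num_classes)   # -> (num_samples, 2) one-hot floats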
Here is the model:
model.add(layers.Conv1D(filters=256, kernel_size=3, activation='relu', input_shape=(max_seqlen,)))
model.add(layers.SpatialDropout1D(0.2))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(100, activation='relu'))
model.add(layers.Dense(num_classes, activation="softmax"))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=3, batch_size=512,
                    validation_data=(x_val, y_val), class_weight=label_weights)
An error is thrown when adding the Conv1D layer. The message is:
"Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=2"
I have no idea what I'm doing wrong. Any help is greatly appreciated.
Conv1D expects each sample to be 2D, with shape (timesteps, channels), because it convolves over a sequence of feature vectors; including the batch dimension, the layer therefore wants 3D input, which is what "expected ndim=3, found ndim=2" refers to. Your samples are only 1D (20 integers each), so the dimensions don't match up. You can either stick to other Keras layer types that accept 2D input, or reshape your data to (4460, 20, 1) by adding a channel axis of size 1, allowing you to pass a Conv1D over it. If you reshape, remember to change input_shape to (max_seqlen, 1) as well.
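A minimal sketch of the reshape approach (assuming NumPy and tf.keras; max_seqlen, num_classes, x_val, and label_weights come from your existing code):

import numpy as np
from tensorflow.keras import layers, models

# Add a channels axis so each sample is (timesteps, channels) = (20, 1).
x_train = np.expand_dims(x_train, axis=-1)   # (4460, 20) -> (4460, 20, 1)
x_val = np.expand_dims(x_val, axis=-1)

model = models.Sequential()
model.add(layers.Conv1D(filters=256, kernel_size=3, activation='relu',
                        input_shape=(max_seqlen, 1)))   # note the trailing 1
model.add(layers.SpatialDropout1D(0.2))
model.add(layers.GlobalMaxPooling1D())
model.add(layers.Dense(100, activation='relu'))
model.add(layers.Dense(num_classes, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

With the extra axis the Conv1D sees 3D input of shape (batch, 20, 1) and the ndim error goes away; everything from GlobalMaxPooling1D onward is unchanged.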