I am trying to test SimpleRNN. stft_librosa is a NumPy array of shape (257, 958). My idea is to use a sliced (10, 958) chunk as input and get a (1, 958) row as output.
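For reference, this is roughly the slicing I have in mind (just a sketch; the random array stands in for my real STFT data):

import numpy as np

stft_librosa = np.random.randn(257, 958)     # stand-in for my real STFT data

window = 10
in_slice = stft_librosa[0:window]            # (10, 958) input window
out_slice = stft_librosa[window:window + 1]  # (1, 958) target row
print(in_slice.shape, out_slice.shape)       # (10, 958) (1, 958)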
Here is my code:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

epochs = 10
batch = 24

model = Sequential()
model.add(
    SimpleRNN(1, activation=None, input_shape=(958, 1), return_sequences=True)
)
model.add(Dense(1, activation="linear"))
model.compile(loss="mean_squared_error", optimizer="sgd")
print(model.summary())

for num in range(0, epochs):
    print(num + 1, '/', epochs, ' start')
    for i, data in enumerate(stft_librosa):
        if i == 0:
            continue
        in_data = stft_librosa[i - 1]
        out_data = stft_librosa[i]
        model.fit(in_data, out_data, epochs=1, shuffle=False, batch_size=batch)
        model.reset_states()
    print(num + 1, '/', epochs, ' epoch is done!')

model.save('/data/mymodel')
I get the error below. How should I fix it?
ValueError: Data cardinality is ambiguous:
x sizes: 9
y sizes: 958
Please provide data which shares the same first dimension.
Here is the output of model.summary():
Layer (type)              Output Shape       Param #
simple_rnn (SimpleRNN)    (None, 9, 958)     1836486
dense (Dense)             (None, 9, 1)       959
Total params: 1,837,445
Trainable params: 1,837,445
Non-trainable params: 0
The input_shape=(9, 958) is wrong (this is after you edited the question). The correct one is input_shape=(958, 1), the one you had in your original question. (Please don't edit your question like that; you changed both the model, by adding the Dense layer, and the error, by changing the input shape.)
The original error, before you updated the question, is due to the incorrect shape of the inputs you are feeding into the model. The model expects 3-dimensional input of shape (batch_size, timesteps, features), but you are feeding it a single sample at a time.
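For example (a sketch, with a random stand-in for stft_librosa), a single row is 1-dimensional, while the layer wants a 3-dimensional batch:

import numpy as np

stft_librosa = np.random.randn(257, 958)   # stand-in for the real spectrogram

single_row = stft_librosa[0]                # shape (958,) -- what the loop feeds to fit
batched = single_row.reshape(1, 958, 1)     # shape (1, 958, 1) -- (batch, timesteps, features)
print(single_row.shape, batched.shape)      # (958,) (1, 958, 1)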
Here is how you can reshape your data.
x = stft_librosa[:-1, :].reshape((-1, 958, 1))
y = stft_librosa[1:, :].reshape((-1, 958, 1))
print(x.shape, y.shape)
# (256, 958, 1) (256, 958, 1)
This shifts y one index ahead and drops the last row from x, so each input row is paired with the row that comes right after it.
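As a quick sanity check of that pairing (assuming x, y, and stft_librosa from the snippet above):

import numpy as np

# each input row should be paired with the row that follows it
assert np.allclose(x[0, :, 0], stft_librosa[0])
assert np.allclose(y[0, :, 0], stft_librosa[1])
assert np.allclose(x[-1, :, 0], stft_librosa[-2])
assert np.allclose(y[-1, :, 0], stft_librosa[-1])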
Once you have that, you can call the fit method once, without the need for that loop.
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense
from tensorflow.keras.optimizers import SGD

model = Sequential([
    SimpleRNN(1, activation=keras.activations.linear, input_shape=(958, 1), return_sequences=True),
    Dense(1, activation=keras.activations.linear)
])
model.compile(optimizer=SGD(lr=0.000001), loss=keras.losses.MeanSquaredError())
model.fit(x, y, epochs=5, batch_size=24)
#Epoch 1/5
#11/11 [==============================] - 3s 266ms/step - loss: 2.8277
#Epoch 2/5
#11/11 [==============================] - 3s 266ms/step - loss: 2.0028
#Epoch 3/5
#11/11 [==============================] - 2s 224ms/step - loss: 1.8698
#Epoch 4/5
#11/11 [==============================] - 3s 229ms/step - loss: 1.7877
#Epoch 5/5
#11/11 [==============================] - 2s 226ms/step - loss: 1.7307
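Once it is trained, you can sanity-check a prediction on a single frame (continuing from the snippet above; with return_sequences=True the output keeps the (batch, 958, 1) shape):

pred = model.predict(x[:1])  # prediction of the row following stft_librosa[0]
print(pred.shape)            # (1, 958, 1)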
I suggest you revert the edit to your question, because the original code made more sense than the current one.