I am working on a keyword spotter that processes an audio input and returns its class from a list of speech commands, similar to what is shown here: https://www.tensorflow.org/tutorials/audio/simple_audio
Instead of processing only 1 second of audio as input, I would like to process multiple frames of audio, say 5 time steps with a 10 ms step, and feed them into the machine learning model.
In essence, this amounts to adding a TimeDistributed layer on top of my network.
The second thing I am trying to do is to add an LSTM layer before the final dense layer that maps the hidden features to the output classes.
My question: How can I change the code below to add a TimeDistributed layer that takes in multiple time steps, and an LSTM layer?
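For concreteness, this is roughly the input I have in mind (stacking 5 spectrogram frames into one sample is my own assumption about how the data would be prepared, not something from the tutorial):

import tensorflow as tf

# Hypothetical input preparation: stack 5 consecutive spectrogram frames
# into one sample of shape (timesteps, height, width, channels).
timesteps = 5                          # assumed number of 10 ms steps
frame_shape = (124, 129, 1)            # spectrogram shape from the tutorial
inputs = tf.keras.Input(shape=(timesteps,) + frame_shape)
print(inputs.shape)                    # (None, 5, 124, 129, 1)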
Starter code:
model = models.Sequential([
    layers.Input(shape=input_shape),
    preprocessing.Resizing(32, 32),
    norm_layer,
    layers.Conv2D(32, 3, activation='relu'),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(num_labels),
])
Model summary:
Input shape: (124, 129, 1)
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
resizing (Resizing)          (None, 32, 32, 1)         0
_________________________________________________________________
normalization (Normalization (None, 32, 32, 1)         3
_________________________________________________________________
conv2d (Conv2D)              (None, 30, 30, 32)        320
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 28, 28, 64)        18496
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 64)        0
_________________________________________________________________
dropout (Dropout)            (None, 14, 14, 64)        0
_________________________________________________________________
flatten (Flatten)            (None, 12544)              0
_________________________________________________________________
dense (Dense)                (None, 128)               1605760
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_1 (Dense)              (None, 8)                 1032
=================================================================
Total params: 1,625,611
Trainable params: 1,625,608
Non-trainable params: 3
_________________________________________________________________
Attempt 1: Adding an LSTM layer
model = models.Sequential([
    layers.Input(shape=input_shape),
    preprocessing.Resizing(32, 32),
    norm_layer,
    layers.Conv2D(32, 3, activation='relu'),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Flatten(),
    layers.LSTM(32, activation='relu', input_shape=(1, 128, 98)),
    layers.Dense(num_labels),
])
Error: ValueError: Input 0 of layer lstm_5 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 128]
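If I understand the error, Flatten/Dense output a 2D (batch, features) tensor while LSTM wants a 3D (batch, timesteps, features) tensor. A minimal sketch of the mismatch, where I add a time axis with Reshape just to illustrate (treating the 128 features as a single timestep is my assumption, not necessarily the architecture I want):

import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal([4, 128])       # same rank as the (None, 128) Dense output
# layers.LSTM(32)(x)                 # fails: expected ndim=3, found ndim=2

# Assumed workaround: insert a time axis so the LSTM sees (batch, timesteps, features)
x3d = layers.Reshape((1, 128))(x)    # (4, 1, 128)
out = layers.LSTM(32)(x3d)           # (4, 32)
print(out.shape)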
Attempt 2: Adding a TimeDistributed layer
model = models.Sequential([
    layers.Input(shape=input_shape),
    preprocessing.Resizing(32, 32),
    norm_layer,
    TimeDistributed(layers.Conv2D(32, 3, activation='relu'), input_shape=(None, 32, 32, 1)),
    TimeDistributed(layers.Conv2D(64, 3, activation='relu'), input_shape=(None, 30, 30, 1)),
    TimeDistributed(layers.MaxPooling2D()),
    TimeDistributed(layers.Dropout(0.25)),
    TimeDistributed(layers.Flatten()),
    TimeDistributed(layers.Dense(128, activation='relu')),
    TimeDistributed(layers.Dropout(0.5)),
    TimeDistributed(layers.Flatten()),
    layers.Dense(num_labels),
])
Error: ValueError: Input 0 of layer conv2d_43 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [None, 32, 1]
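If I read this correctly, TimeDistributed(Conv2D) needs a 5D input (batch, time, height, width, channels), but my input only has 4 dimensions including the batch, so the height axis gets treated as time and each Conv2D receives a 3D slice. A sketch of the input shape I think it expects (the explicit time axis of 5 is my assumption):

import tensorflow as tf
from tensorflow.keras import layers

timesteps = 5                                             # assumed
inputs = tf.keras.Input(shape=(timesteps, 32, 32, 1))     # (None, 5, 32, 32, 1) with batch
x = layers.TimeDistributed(layers.Conv2D(32, 3, activation='relu'))(inputs)
print(x.shape)                                            # (None, 5, 30, 30, 32)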
I understand there is a problem with my dimensions, but I am not sure how to proceed.
The LSTM layer expects a 3D input tensor with shape [batch, timesteps, feature]. Sample code snippet:
import tensorflow as tf
inputs = tf.random.normal([32, 10, 8])
lstm = tf.keras.layers.LSTM(4)
output = lstm(inputs)
print(output.shape)
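This prints (32, 4): with return_sequences left at its default of False, the LSTM returns only the last hidden state for each of the 32 batch elements.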
tf.keras.layers.TimeDistributed expects an input tensor of shape (batch, time, ...). Working sample code:
inputs = tf.keras.Input(shape=(10, 128, 128, 3))
conv_2d_layer = tf.keras.layers.Conv2D(64, (3, 3))
outputs = tf.keras.layers.TimeDistributed(conv_2d_layer)(inputs)
outputs.shape
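Here outputs.shape is (None, 10, 126, 126, 64); the same Conv2D is applied independently to each of the 10 timesteps.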