I have tried to create a U-Net-like neural network (autoencoder) for signal processing applications using TensorFlow 2, but I have run into some problems when training the model.
Here is how I define the network structure:
def build_unet(input_shape, n_filters_list=[16, 32]):
    inputs = Input(shape=input_shape)
    print("in", inputs)
    contraction = {}
    for f in n_filters_list:
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(inputs)
        x = Dropout(0.1)(x)
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(x)
        contraction[f'conv{f}'] = x
        x = MaxPooling1D(pool_size=4, strides=2)(x)
        print("enc", x)
        inputs = x

    c5 = Conv1D(160, 5, activation='relu', kernel_initializer='he_normal', padding='same')(inputs)
    c5 = Dropout(0.2)(c5)
    c5 = Conv1D(160, 5, activation='relu', kernel_initializer='he_normal', padding='same')(c5)
    print("c5", c5)
    inputs = c5
    print(inputs)

    for i, f in zip([0, 0], reversed(n_filters_list)):
        x = Conv1DTranspose(f, 4 + i, 2)(inputs)
        print("dec", x)
        x = concatenate([x, contraction[f'conv{f}']])
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(x)
        x = Dropout(0.2)(x)
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(x)
        inputs = x

    outputs = Conv1D(filters=1, kernel_size=3, activation="tanh", padding="same")(inputs)
    print("out", outputs)
    return Model(inputs=inputs, outputs=outputs)
It compiles as usual:
model = build_unet(input_shape=(3490,1))
model.compile(optimizer="Adam", loss='mean_squared_error')
I also check the model summary. However, it displays an unexpected result: it shows that I have only 2 layers.
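For reference, I print the summary with the standard call:

model.summary()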
And when I try to train my model as follows:
history = model.fit(training_generator,
                    validation_data=validation_generator,
                    epochs=100)
I get this error:
ValueError: in user code:

    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function  *
        return step_function(self, iterator)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step  **
        outputs = model.train_step(data)
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
        y_pred = self(x, training=True)
    File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py", line 228, in assert_input_compatibility
        raise ValueError(f'Input {input_index} of layer "{layer_name}" '

    ValueError: Exception encountered when calling layer "model_1" (type Functional).

    Input 0 of layer "conv1d_77" is incompatible with the layer: expected min_ndim=3, found ndim=2. Full shape received: (None, None)

    Call arguments received:
      • inputs=tf.Tensor(shape=(None, None), dtype=float32)
      • training=True
      • mask=None
Could anyone explain why I get this unexpected model summary and this error, and how I can fix them?
Much appreciated.
You override the inputs variable several times, so when you create the model with Model(inputs=inputs, outputs=outputs), your inputs variable no longer contains Input(shape=input_shape); by that point it holds the output tensor of the last decoder block. This is also why the summary looks wrong: Keras traces the graph from the tensor you pass as inputs to outputs, and that sub-graph contains only the final Conv1D layer, which is why you see just 2 layers.
You can basically keep the code as it is, but add an extra variable to remember the initial input layer, like initial_input = inputs = Input(shape=input_shape), and then use the initial_input variable when creating the model: Model(inputs=initial_input, outputs=outputs).
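For concreteness, here is a minimal sketch of the fixed function, with the debug prints dropped and the usual tensorflow.keras imports spelled out; apart from that it is your code unchanged:

from tensorflow.keras.layers import (Input, Conv1D, Conv1DTranspose, Dropout,
                                     MaxPooling1D, concatenate)
from tensorflow.keras.models import Model

def build_unet(input_shape, n_filters_list=[16, 32]):
    # Remember the real input layer before `inputs` is reused as a loop variable.
    initial_input = inputs = Input(shape=input_shape)

    # Contracting path: keep each level's feature map for the skip connections.
    contraction = {}
    for f in n_filters_list:
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(inputs)
        x = Dropout(0.1)(x)
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(x)
        contraction[f'conv{f}'] = x
        x = MaxPooling1D(pool_size=4, strides=2)(x)
        inputs = x

    # Bottleneck.
    c5 = Conv1D(160, 5, activation='relu', kernel_initializer='he_normal', padding='same')(inputs)
    c5 = Dropout(0.2)(c5)
    c5 = Conv1D(160, 5, activation='relu', kernel_initializer='he_normal', padding='same')(c5)
    inputs = c5

    # Expanding path: upsample, concatenate the matching skip connection, convolve.
    for i, f in zip([0, 0], reversed(n_filters_list)):
        x = Conv1DTranspose(f, 4 + i, 2)(inputs)
        x = concatenate([x, contraction[f'conv{f}']])
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(x)
        x = Dropout(0.2)(x)
        x = Conv1D(f, 5, activation='relu', kernel_initializer='he_normal', padding='same')(x)
        inputs = x

    outputs = Conv1D(filters=1, kernel_size=3, activation="tanh", padding="same")(inputs)

    # Build the model from the original input layer, not the last decoder tensor.
    return Model(inputs=initial_input, outputs=outputs)

With input_shape=(3490, 1) the skip connections line up: the sequence length goes 3490 -> 1744 -> 871 through the two pooling steps, and the two Conv1DTranspose layers bring it back to 1744 and 3490, so both concatenate calls match, and model.summary() now lists every layer instead of 2.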