I'm doing anomaly detection. I was able to do it with an autoencoder. Now I'm combining an autoencoder and an MLP to compare the results; for that, I've written the following code:
from keras.layers import Input, Dense, Dropout, concatenate
from keras.models import Model
from keras.utils import plot_model
from keras.callbacks import TensorBoard
import time

encoding_dim = 58
input_dim = xtrain.shape[1]  # the input layer needs as many neurons as there are columns: 116
############################
#Define the DAE architecture
############################
inputArray = Input(shape=(input_dim,))  # define the input tensor: shape (None, input_dim)
encoded = Dense(units=encoding_dim, activation="tanh")(inputArray)  # units: number of neurons in the layer
encoded = Dense(units=29, activation="tanh")(encoded)
encoded = Dense(units=15, activation="tanh")(encoded)
encoded = Dense(units=10, activation="tanh")(encoded)
encoded = Dense(units=3, activation="tanh")(encoded)
encoded = Dense(units=10, activation="tanh")(encoded)
decoded = Dense(units=15, activation="tanh")(encoded)
decoded = Dense(units=29, activation="tanh")(decoded)
decoded = Dense(units=encoding_dim, activation="tanh")(decoded)
decoded = Dense(units=input_dim, activation="softmax", name='decoded')(decoded)  # softmax returns a probability vector over the outputs
############################
#Define the MLP architecture
############################
output_mlp = 70
first_input = Input(shape=(input_dim,))
mlp = Dense(40)(first_input)
mlp = Dense(80, activation='relu')(mlp)  # input_dim is ignored by the functional API, so it is dropped here
dropout_mlp = Dropout(0.1)(mlp)
mlp = Dense(70, activation='relu')(dropout_mlp)
dropout_mlp = Dropout(0.1)(mlp)
mlp = Dense(30, activation='relu', name='mlp')(dropout_mlp)
#dropout_mlp = Dropout(0.1)(mlp)
#mlp = Dense(10, activation='relu')(dropout_mlp)
#dropout_mlp = Dropout(0.1)(mlp)
############################
#Define the concatenate layer
############################
merge_layer = concatenate([mlp, decoded])
############################
#Define the rest of layers
############################
third_layer = Dense(input_dim + output_mlp, activation='relu')(merge_layer)
dropout_mlp = Dropout(0.1)(third_layer)
third_layer = Dense(40, activation='relu')(dropout_mlp)
dropout_mlp = Dropout(0.1)(third_layer)
third_layer = Dense(5, activation='relu')(dropout_mlp)
third_layer = Dense(1, activation='sigmoid')(third_layer)
############################
#Compile and plot the model
############################
autoencoder = Model(inputs=[first_input, inputArray], outputs=third_layer)
autoencoder.compile(optimizer='adam',
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])
plot_model(autoencoder, to_file='demo.png', show_shapes=True)
For training the model, I have the following code:
# hyperparameters:
batchsize = 100
epoch = 10
start_time = time.time()
autoencoder.fit([xtrain, xtrain], xtrain,
                batch_size=batchsize,
                epochs=epoch,
                verbose=1,
                shuffle=True,
                validation_data=([xtest, xtest], xtest),
                callbacks=[TensorBoard(log_dir="../logs/autoencoderHoussem")])
But I get this error:
ValueError: Error when checking target: expected dense_35 to have shape (1,) but got array with shape (116,)
Can anyone help, please?
In this line (which defines the last layer in your model):
third_layer = Dense(1, activation='sigmoid')(third_layer)
You say the model should output a single value. However, here:
autoencoder.fit(x=[xtrain, xtrain], y=xtrain,
                ...,
                )
You give your model an array of 116 values (the dimensionality of xtrain) as the target. That is what raises the ValueError: Keras cannot compute the loss because the model's single predicted value and the 116-value target don't have matching shapes.
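One way to fix it, if you want both the reconstruction and a binary anomaly score, is to give the model two outputs and a matching target for each. Below is a minimal sketch, not a drop-in for your exact code: ytrain and ytest are hypothetical 0/1 anomaly-label arrays you would have to supply, and the loss choices (mse for the reconstruction, binary_crossentropy for the one-unit sigmoid head, since categorical_crossentropy doesn't fit a single sigmoid unit) are my assumptions:

# name the classification head so it can be targeted in compile()/fit();
# use this line in place of your final unnamed Dense(1)
clf = Dense(1, activation='sigmoid', name='clf')(third_layer)

model = Model(inputs=[first_input, inputArray], outputs=[decoded, clf])
model.compile(optimizer='adam',
              loss={'decoded': 'mse',                # reconstruction target: the input itself
                    'clf': 'binary_crossentropy'},   # single sigmoid unit -> binary loss
              metrics=['accuracy'])

model.fit([xtrain, xtrain],
          {'decoded': xtrain, 'clf': ytrain},        # ytrain: hypothetical 0/1 anomaly labels
          batch_size=batchsize,
          epochs=epoch,
          validation_data=([xtest, xtest],
                           {'decoded': xtest, 'clf': ytest}))

If instead you only care about the classification output, keep your single-output model, switch the loss to binary_crossentropy, and pass the labels (not xtrain) as y in fit().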