python, deep-learning, keras, pca, autoencoder

Why my autoencoder doesn't give the reduced representation?


I'm trying to create a reduced representation of my data that I will use in another model, and I proceed in the following way:

    from keras.layers import Input, Dense
    from keras.models import Model

    input = Input(shape=(70,))
    encoded = Dense(output_dim=10, input_dim=70, activation='relu')(input)
    decoded = Dense(70, activation='relu')(encoded)
    autoencoder = Model(input, decoded)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    autoencoder.fit(df_values, df_values, epochs=10, batch_size=32)
    reduced_input = autoencoder.predict(df_values)

But reduced_input still has 70 columns, the same width as the original input. The values are modified, i.e. they are not the same as in the initial input, but it is still not the reduced representation I expected (something like PCA components), even though I specified output_dim=10.

I guess there is a mistake somewhere in the way I get the reduced inputs, but I don't see where exactly. Could you help me spot it?


Solution

  • autoencoder.predict runs the whole model, so it returns the decoder's 70-dimensional reconstruction, not the 10-dimensional encoding. If you want the output of an intermediate layer, build a backend function over that layer. In your model the encoding is the output of autoencoder.layers[1] (layers[0] is the Input layer):

    from keras import backend as K

    # function from the model's input to the encoding layer's output;
    # passing False for K.learning_phase() selects inference mode
    encode_func = K.function([autoencoder.layers[0].input, K.learning_phase()],
                             [autoencoder.layers[1].output])
    reduced_input = encode_func([df_values, False])[0]   # shape: (n_samples, 10)
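  • An alternative that avoids backend functions is to wrap the trained encoding layer in its own Model and call predict on that. Below is a minimal sketch; the names input_layer and encoder are illustrative, and it assumes df_values is a NumPy array (or DataFrame .values) of shape (n_samples, 70):

    from keras.layers import Input, Dense
    from keras.models import Model

    # build and train the autoencoder as before
    input_layer = Input(shape=(70,))
    encoded = Dense(10, activation='relu')(input_layer)
    decoded = Dense(70, activation='relu')(encoded)

    autoencoder = Model(input_layer, decoded)
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    autoencoder.fit(df_values, df_values, epochs=10, batch_size=32)

    # separate model that stops at the encoding layer
    encoder = Model(input_layer, encoded)
    reduced_input = encoder.predict(df_values)   # shape: (n_samples, 10)

    Because encoder shares its layers with autoencoder, fitting the autoencoder also trains the encoder's weights, so no additional training is needed before calling encoder.predict.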