Tags: tensorflow, keras, tensorboard

How to show latent layer in tensorboard?


I have a trained autoencoder model, and I want to visualize its latent layer in TensorBoard.

How can I do that?

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, Conv2DTranspose, MaxPooling2D

    el1 = Conv2D(8, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3))
    el2 = MaxPooling2D((2, 2), padding='same')
    el3 = Conv2D(8, (3, 3), activation='relu', padding='same')
    el4 = MaxPooling2D((2, 2), padding='same')


    dl1 = Conv2DTranspose(8, (3, 3), strides=2, activation='relu', padding='same')
    dl2 = Conv2DTranspose(8, (3, 3), strides=2, activation='relu', padding='same')
    output_layer = Conv2D(3, (3, 3), activation='sigmoid', padding='same')

    autoencoder = Sequential()
    autoencoder.add(el1)
    autoencoder.add(el2)
    autoencoder.add(el3)
    autoencoder.add(el4)
    autoencoder.add(dl1)
    autoencoder.add(dl2)
    autoencoder.add(output_layer)
    autoencoder.compile(optimizer='adam', loss="binary_crossentropy")




import os
import datetime

import tensorflow as tf

logdir = os.path.join("logs/fit/", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)

autoencoder.fit(X_train, X_train, epochs=100, batch_size=64, validation_data=(X_test, X_test), verbose=1,
                    callbacks=[tensorboard_callback])

After the model is fitted, how can I add the latent layer to TensorBoard and view it after running t-SNE or PCA?


Solution

  • You can follow the guide: Visualizing Data using the Embedding Projector in TensorBoard.

    I assume that by "latent layer" you mean the "latent space", i.e. the encoded representation of the input.

    In your case, to visualize the latent space you first need to extract the encoder part of your autoencoder. This can be done with the Keras functional API:

    # After fitting the autoencoder, we create a model that represents the encoder
    encoder = tf.keras.Model(autoencoder.input, autoencoder.get_layer(el4.name).output)
    

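    As a quick sanity check on the extraction point: with 224×224×3 inputs, the two 2×2 poolings halve the spatial size twice (224 → 112 → 56), so the latent tensor should have shape (None, 56, 56, 8). A tiny sketch of just that arithmetic ('same'-padded pooling rounds up):

    ```python
    def pooled_size(size, pool=2, num_pools=2):
        """Spatial size after repeated 'same'-padded max-pooling (ceil division)."""
        for _ in range(num_pools):
            size = -(-size // pool)  # ceil(size / pool)
        return size

    print(pooled_size(224))  # 224 -> 112 -> 56
    ```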
    You can then compute the latent representation of your test set with the encoder:

    latent_test = encoder(X_test)
    

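    If you would rather run PCA on the latent space yourself, outside of TensorBoard, here is a minimal sketch using NumPy (PCA via SVD). `fake_latent` is stand-in random data shaped like this encoder's output; with real data you would pass `latent_test` converted to a NumPy array (e.g. `latent_test.numpy()`) instead:

    ```python
    import numpy as np

    def pca_project(latent, n_components=2):
        """Project flattened latent vectors onto their top principal components."""
        flat = latent.reshape(latent.shape[0], -1)   # (num_samples, total_dims)
        centered = flat - flat.mean(axis=0)          # PCA requires centered data
        # Rows of vt are the principal axes of the centered data
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:n_components].T        # (num_samples, n_components)

    # Random stand-in data shaped like the encoder output (56, 56, 8)
    fake_latent = np.random.rand(10, 56, 56, 8)
    coords = pca_project(fake_latent)
    print(coords.shape)  # (10, 2)
    ```

    The resulting 2-D coordinates can then be plotted with any plotting library.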
    Then, following the guide linked above, the latent representation can be saved in a checkpoint and visualized with the TensorBoard projector:

    # Save the embeddings we want to analyze as a tf.Variable.
    # The projector expects shape (num_samples, total_dims),
    # which is why we flatten the latent tensor.
    weights = tf.Variable(tf.reshape(latent_test, (X_test.shape[0], -1)), name="latent_test")
    # Create a checkpoint from embedding, the filename and key are the
    # name of the tensor.
    checkpoint = tf.train.Checkpoint(latent_test=weights)
    checkpoint.save(os.path.join(logdir, "embedding.ckpt"))
    
    from tensorboard.plugins import projector
    # Set up config.
    config = projector.ProjectorConfig()
    embedding = config.embeddings.add()
    # The name of the tensor will be suffixed by `/.ATTRIBUTES/VARIABLE_VALUE`.
    embedding.tensor_name = "latent_test/.ATTRIBUTES/VARIABLE_VALUE"
    projector.visualize_embeddings(logdir, config)
    

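    Optionally, if you have labels for your test samples (e.g. a `y_test` array, which is not present in the original question), the projector can display them: point `embedding.metadata_path` at a TSV file containing one label per line, in the same order as the embedding rows. A small sketch of writing such a file:

    ```python
    import os

    def write_projector_metadata(logdir, labels, filename="metadata.tsv"):
        """Write one label per line; row order must match the embedding rows."""
        with open(os.path.join(logdir, filename), "w") as f:
            f.writelines(f"{label}\n" for label in labels)
    ```

    Set `embedding.metadata_path = "metadata.tsv"` before calling `projector.visualize_embeddings(logdir, config)` so the projector picks the file up.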
    Finally, the projector can be accessed by launching TensorBoard:

    $ tensorboard --logdir /path/to/logdir
    

    Here is an image of the projector with PCA (shown with some random data):

    [Image: TensorBoard projector]