I was looking at this code: https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py#L198, where `Model.fit()` is called without an output or target tensor. At first, I thought the behavior of `Model.fit()` was to use the input as the target (which would make sense for this autoencoder implementation). But then I looked into the documentation, and that's not what it says: https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit

It implies that when `y`, the target, is `None`, `x` should be some kind of structure that contains both the input and the target. But that's clearly not what happens in this autoencoder implementation (`x` contains only the input). Could someone explain what happens in this case?
In the Keras documentation for `model.fit`, the following is stated:
- y: Numpy array of target (label) data (if the model has a single output), or list of Numpy arrays (if the model has multiple outputs). If output layers in the model are named, you can also pass a dictionary mapping output names to Numpy arrays. y can be None (default) if feeding from framework-native tensors (e.g. TensorFlow data tensors).
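To make the quoted clause concrete, here is a minimal sketch (hypothetical code, not taken from the linked example) of the "y can be None if feeding from framework-native tensors" case: when `x` is a `tf.data.Dataset` that yields `(input, target)` pairs, the targets travel inside `x` and `y` is simply omitted:

```python
# Hypothetical sketch: targets are carried inside `x` (a tf.data.Dataset
# of (input, target) pairs), so `y` is left as None in fit().
import numpy as np
import tensorflow as tf
from tensorflow import keras

inputs = keras.Input(shape=(3,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(8, 3).astype("float32")
y = np.random.rand(8, 1).astype("float32")

# The dataset bundles inputs and targets together.
ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(4)

# No `y` argument: the targets come from the dataset itself.
hist = model.fit(ds, epochs=1, verbose=0)
```

This is the situation the documentation describes; the question is why the VAE example gets away with passing neither a `y` nor such a structure.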
Now, notice that in the variational autoencoder example, the `outputs` argument of the model `vae` is a TensorFlow-native tensor, since it is given by the output of another model, `decoder` (also a TensorFlow-native tensor), whose `inputs` argument is independent of `vae`'s input.
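A related point is how the example's loss is computed without targets: the linked script attaches its loss via `add_loss`, built entirely from tensors inside the model, so `fit(x)` has nothing left to compare against a `y`. Below is a hedged, hypothetical sketch of that pattern (a toy dense autoencoder, not the actual VAE code; the layer and variable names are my own):

```python
# Hypothetical sketch of the add_loss pattern: the reconstruction loss is
# defined from the model's own tensors, so fit() needs no targets.
import numpy as np
import tensorflow as tf
from tensorflow import keras

class Reconstructor(keras.layers.Layer):
    """Toy encode/decode stub that attaches its own reconstruction loss."""
    def __init__(self):
        super().__init__()
        self.encode = keras.layers.Dense(2, activation="relu")
        self.decode = keras.layers.Dense(4)

    def call(self, x):
        recon = self.decode(self.encode(x))
        # Loss built from internal tensors -- no `y` argument required.
        self.add_loss(tf.reduce_mean(tf.square(recon - x)))
        return recon

inputs = keras.Input(shape=(4,))
outputs = Reconstructor()(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam")  # note: no `loss=` argument either

x = np.random.rand(16, 4).astype("float32")
hist = model.fit(x, epochs=1, verbose=0)  # no y: loss comes from add_loss
```

Under this pattern, Keras sums the losses registered through `add_loss` at training time, which is why the VAE example can call `fit` with the input alone.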