I found that there are a lot of similar names in keras.backend and keras.layers, for example keras.backend.concatenate and keras.layers.Concatenate. I vaguely know that one operates on tensors while the other is a layer. But when the code gets large, so many functions make it hard to tell which one returns a tensor and which returns a layer. Does anybody have a good way to deal with this?
One approach I found is to define all the placeholders in one function at the start, but a function that takes them as arguments may end up returning layers, while another function that takes those layers as arguments may return yet another variable.
You should definitely use keras.layers
if there is a layer that achieves what you want to do. That's because, when building a model, Keras layers only accept Keras Tensors (i.e. the outputs of layers) as inputs. However, the output of a method in keras.backend.*
is not a Keras Tensor (it is a backend tensor, such as a TensorFlow Tensor), so you can't pass it directly to a layer.
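For example, here is a minimal sketch (the two input branches and their shapes are just illustrative assumptions) of merging with the layer version, so the result stays a Keras Tensor and can feed further layers:

```python
from keras.layers import Input, Concatenate, Dense
from keras.models import Model

# Two hypothetical input branches, for illustration only.
inp_a = Input(shape=(16,))
inp_b = Input(shape=(16,))

# keras.layers.Concatenate returns a Keras Tensor, so later layers accept it.
merged = Concatenate(axis=-1)([inp_a, inp_b])
out = Dense(1)(merged)

model = Model(inputs=[inp_a, inp_b], outputs=out)
model.summary()
```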
However, if there is an operation that cannot be done with an existing layer, you can use keras.backend.*
methods inside a Lambda
layer to perform that custom operation/computation.
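As a rough sketch (the squaring operation here is just an arbitrary example of a backend op):

```python
from keras import backend as K
from keras.layers import Input, Lambda, Dense
from keras.models import Model

inp = Input(shape=(16,))

# Calling the backend op directly, e.g. `x = K.square(inp)`, would produce a
# backend tensor that later layers can't accept. Wrapping it in a Lambda
# layer keeps the output a Keras Tensor.
x = Lambda(lambda t: K.square(t))(inp)
out = Dense(1)(x)

model = Model(inputs=inp, outputs=out)
```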
Note: a Keras Tensor is actually the same type as the backend tensor (e.g. tf.Tensor
); however, it has been augmented with some additional Keras-specific attributes which Keras needs when building a model.
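If you want to see this difference yourself, one way (in Keras 2.x with the TensorFlow backend; `_keras_history` is an internal attribute, so this is only for inspection) is to check for the extra bookkeeping:

```python
from keras import backend as K
from keras.layers import Input, Dense

inp = Input(shape=(4,))
x = Dense(2)(inp)   # Keras Tensor: the output of a layer
y = K.relu(x)       # backend tensor: the output of a backend op

# The layer output carries Keras-specific metadata (e.g. `_keras_history`)
# that the raw backend tensor lacks.
print(hasattr(x, '_keras_history'))  # True
print(hasattr(y, '_keras_history'))  # typically False
```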