I would like to create a Keras model and add sequential numbers to an input tensor.
What I would like to do is something like this:
input_layer = Input(shape=(3, 3))
seq = tf.range(3)
seq = tf.reshape(seq, (3, 1))
concatenated = Concatenate(axis=-1)([input_layer, seq])
additional_layer = Dense(4, activation="relu")(concatenated)
...
The problem is that the input layer has shape (None, 3, 3)
while seq has shape (3, 1).
Even if I reshape it to
seq = tf.reshape(seq, (1, 3, 1))
the concatenation still gives me an error that the shapes don't match.
How do I add sequential numbers to every row that goes into the Input layer?
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model

a = np.array([
[[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]],
[[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]],
[[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]]
])
x_tf = tf.convert_to_tensor(a)
input_layer = tf.keras.layers.Input(shape=(3, 3))
seq = tf.range(3, dtype=tf.float32)
seq = tf.reshape(seq, (1, 3, 1))
concatenated = tf.keras.layers.Lambda(lambda x:tf.concat([x, seq], axis=-1))(input_layer)
model = Model(inputs=input_layer, outputs=concatenated)
print(model(x_tf))
and I get:
InvalidArgumentError: Exception encountered when calling layer 'lambda_5' (type Lambda).
{{function_node __wrapped__ConcatV2_N_2_device_/job:localhost/replica:0/task:0/device:CPU:0}} ConcatOp : Dimension 0 in both shapes must be equal: shape[0] = [3,3,3] vs. shape[1] = [1,3,1] [Op:ConcatV2] name: concat
Call arguments received by layer 'lambda_5' (type Lambda):
• inputs=tf.Tensor(shape=(3, 3, 3), dtype=float32)
• mask=None
• training=None
This works for me:
import tensorflow as tf
input_layer = tf.keras.layers.Input(shape=(3, 3))
seq = tf.range(3, dtype=tf.float32)
seq = tf.reshape(seq, (1, 3, 1))
concatenated = tf.keras.layers.Lambda(lambda x: tf.concat([x, seq], axis=-1))
# print(concatenated(tf.ones((1,3,3))))
# <tf.Tensor: shape=(1, 3, 4), dtype=float32, numpy=
# array([[[1., 1., 1., 0.],
# [1., 1., 1., 1.],
# [1., 1., 1., 2.]]], dtype=float32)>
In a few words, tf.keras.layers.Lambda
is a layer that is called during the forward pass and simply applies the function you give it; inside that function you do nothing more than the concatenation you were asking about.
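To make that concrete, here is a minimal sketch that wires the Lambda into a model; it assumes a single-sample batch, since the closed-over seq has a fixed leading dimension of 1 and tf.concat does not broadcast:
import tensorflow as tf
# fixed index column [0, 1, 2] with a leading batch dimension of 1
seq = tf.reshape(tf.range(3, dtype=tf.float32), (1, 3, 1))
input_layer = tf.keras.layers.Input(shape=(3, 3))
# the Lambda simply runs the given function on the incoming tensor
concatenated = tf.keras.layers.Lambda(lambda x: tf.concat([x, seq], axis=-1))(input_layer)
model = tf.keras.Model(inputs=input_layer, outputs=concatenated)
print(model(tf.ones((1, 3, 3))))  # shape (1, 3, 4); the last column is 0., 1., 2.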
Updated answer for varying batch size:
import numpy as np
import tensorflow as tf
a = np.array([
[[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]],
[[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]],
[[10.0,20.0,30.0], [20.0,30.0,40.0], [40.0,50.0,50.0]]
])
x_tf = tf.convert_to_tensor(a)
input_layer = tf.keras.layers.Input(shape=(3, 3))
# build one (3, 1) column of row indices per sample in the batch
seq = tf.repeat(
    tf.range(3, dtype=tf.float32)[None, ...],
    repeats=tf.shape(input_layer)[0],
    axis=0,
)[..., None]
concatenated = tf.concat([input_layer, seq], axis=-1)
model = tf.keras.Model(inputs=input_layer, outputs=concatenated)
print(model(x_tf))
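If you prefer to keep the index construction inside the layer itself, here is a sketch of one possible variant (the helper name append_row_index is mine, not from the code above) that derives the batch size and row count at call time, so it adapts to any batch:
import numpy as np
import tensorflow as tf

def append_row_index(x):
    # derive batch size and row count from the runtime shape of x
    batch = tf.shape(x)[0]
    n_rows = tf.shape(x)[1]
    # index column [[0], [1], ...] tiled across the batch
    idx = tf.reshape(tf.range(n_rows, dtype=x.dtype), (1, -1, 1))
    idx = tf.tile(idx, (batch, 1, 1))
    return tf.concat([x, idx], axis=-1)

input_layer = tf.keras.layers.Input(shape=(3, 3))
concatenated = tf.keras.layers.Lambda(append_row_index)(input_layer)
model = tf.keras.Model(inputs=input_layer, outputs=concatenated)
print(model(np.ones((2, 3, 3), dtype=np.float32)).shape)  # (2, 3, 4)
This way the positional column always matches the incoming batch, whatever its size.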