Same as the title: in tf.keras.layers.Embedding, why is it important to know the size of the dictionary as the input dimension?
In such a setting, the dimensions/shapes of the tensors are the following:

- The input of the embedding layer is of shape [batch_size, max_time_steps], such that each element of that tensor can have a value in the range 0 to vocab_size - 1.
- The embedding matrix (the lookup table) is of shape [vocab_size, embedding_size].
- The output of the embedding layer is of shape [batch_size, max_time_steps, embedding_size]. This 3D tensor is the input of a recurrent neural network.

Here's how this is implemented in TensorFlow so you can get a better idea:
import tensorflow as tf

inputs = tf.placeholder(tf.int32, shape=(batch_size, max_time_steps))  # token ids in [0, vocab_size - 1]
embeddings = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0))  # the lookup table
inputs_embedded = tf.nn.embedding_lookup(embeddings, inputs)  # gathers one embedding row per token id
Now, the output of the embedding lookup has the shape [batch_size, max_time_steps, embedding_size].
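For completeness, here is a minimal sketch of the same shape flow using tf.keras.layers.Embedding itself (the layer from the question), running under TensorFlow 2 eager execution. The concrete numbers (vocab_size = 5, embedding_size = 3, a batch of 2 sequences of length 4) are just illustrative assumptions:

import tensorflow as tf

vocab_size, embedding_size = 5, 3  # illustrative values

# input_dim must equal vocab_size: the layer allocates one row of weights per
# possible token id, so any id >= vocab_size would index past the end of the table.
embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_size)

# A batch of 2 sequences of 4 token ids, i.e. shape (batch_size, max_time_steps)
token_ids = tf.constant([[0, 1, 2, 3],
                         [4, 4, 1, 0]])
vectors = embedding(token_ids)  # shape (batch_size, max_time_steps, embedding_size)
print(vectors.shape)            # (2, 4, 3)

This is exactly why the layer has to know the size of the dictionary up front: input_dim fixes the number of rows in the embedding matrix, and every token id is used as a row index into that matrix.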