
How to embed 3d input in keras?


I am trying to make an Embedding layer in Keras.

My input is 3D with shape (batch, 8, 6), and I want an embedding for the last dimension.
So the embedding should effectively work as (batch*8, 6) -> embedding output

But I don't want to keep this reshaped batch size for the whole learning process, just for the embedding layer.

I think one possible solution is separating the input into 8 inputs and applying an embedding to each one.
But then those embedding layers are not the same as one big embedding layer.

Is there any possible solution? Thanks!
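
For reference, the splitting idea above could be sketched like this, reusing one shared Embedding layer for every slice so the lookup table stays shared (a sketch only; the vocabulary size 10 and embedding size 15 are example values, not from the question):

    from keras.layers import Input, Embedding, Lambda, Concatenate, Reshape
    from keras.models import Model
    
    ins = Input((8, 6))                 # integer indices, shape (batch, 8, 6)
    shared_emb = Embedding(10, 15)      # one lookup table, reused for every slice
    
    # slice out each of the 8 rows -> 8 tensors of shape (batch, 6)
    slices = [Lambda(lambda x, i=i: x[:, i])(ins) for i in range(8)]
    embedded = [shared_emb(s) for s in slices]        # each (batch, 6, 15)
    
    # put the slices back together into (batch, 8, 6, 15)
    out = Reshape((8, 6, 15))(Concatenate(axis=1)(embedded))
    model = Model(ins, out)

As the solution below shows, none of this splitting is actually needed.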


Solution

  • The solution is very simple: use

    input_shape = (8, 6)
    

    and pass the input through the Embedding layer; it looks up each integer index independently, whatever the rank of the input, so you will get exactly what you want.


    A complete working example:

    from keras.layers import Input, Embedding
    from keras.models import Model
    
    ins = Input((8, 6))             # integer indices, shape (batch, 8, 6)
    out = Embedding(10, 15)(ins)    # each index is mapped to a 15-dim vector
    model = Model(ins, out)
    model.summary()
    

    Here, 10 is the dictionary size (the number of words or similar tokens) and 15 is the embedding size (the dimension of the resulting vectors).

    Resulting summary:

    Model: "model_1"
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    input_1 (InputLayer)         (None, 8, 6)              0         
    _________________________________________________________________
    embedding_1 (Embedding)      (None, 8, 6, 15)          150       
    =================================================================
    Total params: 150
    Trainable params: 150
    Non-trainable params: 0
    _________________________________________________________________
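
    As a quick sanity check (a sketch, not part of the original answer; the batch size of 32 is arbitrary), feed random integer indices through the model and confirm that each index becomes a 15-dimensional vector:

    import numpy as np
    
    x = np.random.randint(0, 10, size=(32, 8, 6))   # indices must be below the dictionary size (10)
    y = model.predict(x)
    print(y.shape)   # (32, 8, 6, 15): one 15-dim embedding per index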