Tags: python, deep-learning, keras, dimensions, pytorch

Non-linear mapping to a vector of higher dimension


I am learning Keras and need help with the following. I currently have sequences of floats in lists X and Y. I need a non-linear mapping that maps each element to a vector of higher dimension, following the equation below.

pos(i) = tanh(W . concat(X[i], Y[i]))
# where W is a learnable weight matrix, concat performs the concatenation, and pos(i) is a 16x1 vector. (I'm trying to create 16-channel inputs for a CNN.)

I found that a PyTorch implementation for the above is:

import torch
from torch import nn
m = nn.Linear(2, 16)              # learnable weight matrix W (2 -> 16)
input = torch.cat([X[i], Y[i]])   # X[i], Y[i] as 1-element tensors -> shape (2,)
torch.tanh(m(input))              # 16-dimensional output vector
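
For context, here is roughly what a whole-sequence version of that would look like in PyTorch (just a sketch; I'm assuming X and Y are plain Python lists of floats, and the stacking into an (N, 2) tensor is my own addition):

import torch
from torch import nn

X = [0.1, 0.2, 0.3]   # toy sequences of floats
Y = [0.4, 0.5, 0.6]

xy = torch.tensor(list(zip(X, Y)), dtype=torch.float32)  # shape (3, 2): one row per (X[i], Y[i])
m = nn.Linear(2, 16)                                      # learnable W (plus a bias)
pos = torch.tanh(m(xy))                                   # shape (3, 16): one 16-dim vector per element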

So far I've tried the concatenation and tanh in NumPy, but that doesn't seem to be what I want here.

Can you help me implement the above using Keras?


Solution

  • Based on what you have there, this is what I would do in Keras.

    I'm going to assume that you just want to concatenate your inputs before you feed them into the model, so we'll do the concatenation in NumPy.

    Something like this:

    import numpy as np
    from keras.layers import Dense, Input
    from keras.models import Model

    X = np.random.rand(100, 1)
    Y = np.random.rand(100, 1)
    y = np.random.rand(100, 16)              # dummy targets with 16 features
    # concatenate along the feature axis in numpy
    XY = np.concatenate([X, Y], axis=1)      # shape (100, 2)


    # build the model
    inp = Input(shape=(2,))
    out = Dense(16, activation='tanh')(inp)  # tanh(W . concat(X[i], Y[i]))
    # print(out.shape)  -> (?, 16)
    model = Model(inp, out)
    model.compile(loss='mse', optimizer='adam')
    model.fit(XY, y)
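
    If the goal is just to get the 16-dimensional vectors to use as 16-channel input for a CNN, you can then call the model on your data; the reshape below is only one possible channel layout, not something your question specifies:

    # map every (X[i], Y[i]) pair to its 16-dim vector
    pos = model.predict(XY)             # shape (100, 16)
    # e.g. treat the 16 values as channels of a length-100 sequence (channels-first)
    pos_channels = pos.T[np.newaxis]    # shape (1, 16, 100)

    In practice you would usually chain this Dense layer and your CNN in a single Keras model, so that W is learned together with the CNN rather than against dummy targets.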
    
    
    ....