I'm trying to feed sentences in which each word has a word2vec representation. How can I do this in the TensorFlow seq2seq models?
Suppose the variable
enc_inp = [tf.placeholder(tf.int32, shape=(None, 10), name="inp%i" % t)
           for t in range(seq_length)]
which has dimensions [num_of_observations (batch_size) x word_vec_representation x sentence_length].
When I pass it to embedding_rnn_seq2seq
decode_outputs, decode_state = seq2seq.embedding_rnn_seq2seq(
    enc_inp, dec_inp, stacked_lstm,
    seq_length, seq_length, embedding_dim)
an error occurs:
ValueError: Linear is expecting 2D arguments: [[None, 10, 50], [None, 50]]
There is also a more fundamental problem: how can I pass a vector, rather than a scalar, as the input to the first cell of my RNN?
Right now it looks like this (for any sequence): [figure: each RNN cell receives a single scalar input per time step]
But this is what is needed: [figure: each RNN cell receives a whole word vector per time step]
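In terms of placeholder shapes, the difference I mean is roughly this (just a sketch; the 50 is an assumed word2vec dimension, not something fixed by the API):

```python
import tensorflow as tf

# What each time step gets now: one scalar token id per example.
x_t_scalar = tf.placeholder(tf.int32, shape=(None,), name="token_id")       # [batch_size]

# What I would like each time step to get: one word2vec vector per example.
x_t_vector = tf.placeholder(tf.float32, shape=(None, 50), name="word_vec")  # [batch_size, 50]
```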
The main point is that the seq2seq functions do the word embedding internally themselves. Here is a reddit question and answer on this.
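Concretely, embedding_rnn_seq2seq expects lists of 1-D int32 tensors of token ids, one tensor of shape [batch_size] per time step; the embedding matrix is created inside the function. A minimal sketch (vocab_size, embedding_dim, memory_dim and the import path are assumptions; in older TF the module is tensorflow.python.ops.seq2seq, in 1.x it moved to tf.contrib.legacy_seq2seq):

```python
import tensorflow as tf
from tensorflow.python.ops import seq2seq, rnn_cell  # tf.contrib.legacy_seq2seq / tf.contrib.rnn in TF 1.x

seq_length = 10        # assumed sentence length
vocab_size = 10000     # assumed vocabulary size
embedding_dim = 50     # assumed embedding size
memory_dim = 100       # assumed LSTM size

# One 1-D int32 tensor of token ids per time step, shape [batch_size] -- not word vectors.
enc_inp = [tf.placeholder(tf.int32, shape=(None,), name="inp%i" % t)
           for t in range(seq_length)]
dec_inp = [tf.placeholder(tf.int32, shape=(None,), name="dec%i" % t)
           for t in range(seq_length)]

stacked_lstm = rnn_cell.MultiRNNCell([rnn_cell.BasicLSTMCell(memory_dim)] * 2)

# The vocab_size x embedding_dim embedding matrix is created internally,
# so the function is given ids and does the lookup itself.
decode_outputs, decode_state = seq2seq.embedding_rnn_seq2seq(
    enc_inp, dec_inp, stacked_lstm,
    vocab_size, vocab_size, embedding_dim)
```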
Also, if somebody wants to use pretrained Word2Vec vectors, there are ways to do it; see:
So this can be used not only for word embedding.
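As a rough sketch of the pretrained route (the file name, the frozen-embedding choice, and the use of basic_rnn_seq2seq here are my own assumptions, not the only way): load the word2vec matrix into an embedding variable, do the lookup yourself, and feed the resulting 2-D [batch_size, embedding_dim] tensors to a plain, non-embedding seq2seq:

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.ops import seq2seq, rnn_cell

# Hypothetical: a [vocab_size, embedding_dim] numpy array of word2vec vectors,
# exported and saved beforehand (e.g. from a gensim model).
pretrained = np.load("word2vec_vectors.npy").astype(np.float32)
vocab_size, embedding_dim = pretrained.shape
seq_length = 10
memory_dim = 100

# Embedding variable initialized from the pretrained vectors;
# trainable=False keeps the word2vec weights frozen.
embedding = tf.Variable(pretrained, trainable=False, name="embedding")

enc_inp = [tf.placeholder(tf.int32, shape=(None,), name="inp%i" % t)
           for t in range(seq_length)]
dec_inp = [tf.placeholder(tf.int32, shape=(None,), name="dec%i" % t)
           for t in range(seq_length)]

# Look up the vectors yourself, then feed real vectors (not ids) to a plain
# seq2seq that accepts 2-D [batch_size, input_size] tensors per time step.
enc_emb = [tf.nn.embedding_lookup(embedding, ids) for ids in enc_inp]
dec_emb = [tf.nn.embedding_lookup(embedding, ids) for ids in dec_inp]

cell = rnn_cell.BasicLSTMCell(memory_dim)
outputs, state = seq2seq.basic_rnn_seq2seq(enc_emb, dec_emb, cell)
```

Because the lookup happens outside the model here, the same pattern works for feeding any precomputed vectors into the RNN, which is why it is not limited to word embeddings.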