I am new to NLP and I am trying to learn the skip-gram model from this site:
I am trying to implement skip-gram, and the problem I run into is that the code below uses the Keras Sequential API, which does not support the Merge layer (used later in the code, as shown below):
word_model.add(Embedding(vocab_size, embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1))
word_model.add(Reshape((embed_size, )))
So I am trying to convert it to the functional API:
word_model = Embedding(input_dim=vocab_size, output_dim=embed_size,
                       embeddings_initializer="glorot_uniform",
                       input_length=1)
word_model = Reshape(target_shape=(embed_size,))(word_model)
However, I am getting the error below:
Unexpectedly found an instance of type <class 'keras.layers.embeddings.Embedding'>. Expected a symbolic tensor instance.
I have tried the Reshape layer and also the backend reshape, but it is still not working.
Please suggest how to convert this or make it work.
Thanks in advance.
from keras.layers import Merge
from keras.layers.core import Dense, Reshape
from keras.layers.embeddings import Embedding
from keras.models import Sequential
# build skip-gram architecture
word_model = Sequential()
word_model.add(Embedding(vocab_size, embed_size,
                         embeddings_initializer="glorot_uniform",
                         input_length=1))
word_model.add(Reshape((embed_size, )))
context_model = Sequential()
context_model.add(Embedding(vocab_size, embed_size,
                            embeddings_initializer="glorot_uniform",
                            input_length=1))
context_model.add(Reshape((embed_size,)))
model = Sequential()
model.add(Merge([word_model, context_model], mode="dot"))
model.add(Dense(1, kernel_initializer="glorot_uniform", activation="sigmoid"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
# view model summary
print(model.summary())
# visualize model structure
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model, show_shapes=True, show_layer_names=False,
                 rankdir='TB').create(prog='dot', format='svg'))
You need an Input layer first and then pass that on to the Embedding layer. The following is an example using two inputs (one for the target word and one for the context word):
target_input = keras.layers.Input(shape=(1,))   # each sample is a single word index (input_length=1)
context_input = keras.layers.Input(shape=(1,))
target_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                       embeddings_initializer="glorot_uniform",
                       input_length=1)(target_input)
target_emb = Reshape((embed_size,))(target_emb)
context_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                        embeddings_initializer="glorot_uniform",
                        input_length=1)(context_input)
context_emb = Reshape((embed_size,))(context_emb)
# Add the remaining layers here...
model = keras.models.Model(inputs=[target_input, context_input], outputs=output)
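For completeness, the remaining layers could be filled in roughly as follows. This is only a sketch: it assumes Keras 2, where the old Merge layer with mode="dot" is replaced by the Dot layer, and it assumes vocab_size and embed_size are already defined as in your code.
from keras.layers import Input, Embedding, Reshape, Dot, Dense
from keras.models import Model

# one word index per sample for both the target and the context word
target_input = Input(shape=(1,))
context_input = Input(shape=(1,))

# separate embedding tables for target and context words
target_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                       embeddings_initializer="glorot_uniform",
                       input_length=1)(target_input)
target_emb = Reshape((embed_size,))(target_emb)

context_emb = Embedding(input_dim=vocab_size, output_dim=embed_size,
                        embeddings_initializer="glorot_uniform",
                        input_length=1)(context_input)
context_emb = Reshape((embed_size,))(context_emb)

# dot product of the two embeddings, then a sigmoid output,
# mirroring Merge(mode="dot") followed by Dense in the Sequential version
dot_product = Dot(axes=-1)([target_emb, context_emb])
output = Dense(1, kernel_initializer="glorot_uniform",
               activation="sigmoid")(dot_product)

model = Model(inputs=[target_input, context_input], outputs=output)
model.compile(loss="mean_squared_error", optimizer="rmsprop")
print(model.summary())
The model is then trained on pairs of (target word index, context word index) with a 0/1 label, exactly as with the Sequential version.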