From a Udacity notebook exercise: after the embeddings were trained, I'm trying to get all the words related to an input word, but I'm getting weird results. Is the code below correct?
final_embeddings = normalized_embeddings.eval()
word = 'history'
nearest = (-final_embeddings[dictionary[word], :]).argsort()[1:9]
for idx in range(len(nearest)):
    print(reverse_dictionary[nearest[idx]])
Sorry for the dumb question. I just realized that final_embeddings is the trained weight matrix W. The answer: given an input word, its similarity to every other word is computed by multiplying the word's embedding vector with W transposed (a matrix-vector product):
word = 'the'
word_vec = final_embeddings[dictionary[word]]
# Negate so argsort returns the highest similarities first;
# start the slice at 1 to skip the query word itself.
sim = np.dot(word_vec, -final_embeddings.T).argsort()[1:9]
for idx in sim:
    print(reverse_dictionary[idx])
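For anyone else hitting this, here is a self-contained sketch of the idea with a toy vocabulary and synthetic embeddings (in the real notebook, `final_embeddings`, `dictionary`, and `reverse_dictionary` come from training; the names and data below are made up just to make it runnable). Since the rows are L2-normalized, the dot product is cosine similarity:

```python
import numpy as np

# Toy vocabulary (assumption: in the notebook these come from the dataset).
words = ['the', 'history', 'of', 'science', 'art']
dictionary = {w: i for i, w in enumerate(words)}
reverse_dictionary = {i: w for w, i in dictionary.items()}

# Synthetic embeddings standing in for the trained matrix W.
rng = np.random.default_rng(0)
emb = rng.normal(size=(len(words), 8))
# L2-normalize each row so a dot product is cosine similarity.
final_embeddings = emb / np.linalg.norm(emb, axis=1, keepdims=True)

word = 'the'
word_vec = final_embeddings[dictionary[word]]
# Cosine similarity to every word; negate so argsort is descending,
# and skip index 0, which is the query word itself (similarity 1.0).
nearest = (-np.dot(final_embeddings, word_vec)).argsort()[1:4]
for idx in nearest:
    print(reverse_dictionary[idx])
```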
Thanks!