python, nlp, gensim, lda

Cosine Similarity and LDA topics


I want to compute the cosine similarity between LDA topics. The gensim function gensim.matutils.cossim can apparently do it, but I don't know which parameters (vectors) I should pass to this function.

Here is a snippet of my code:

import numpy as np
import lda
from sklearn.feature_extraction.text import CountVectorizer

# tweet_texts_processed: list of preprocessed tweet strings (defined earlier)
cvectorizer = CountVectorizer(min_df=4, max_features=10000, stop_words='english')
cvz = cvectorizer.fit_transform(tweet_texts_processed)

n_topics = 8
n_iter = 500
lda_model = lda.LDA(n_topics=n_topics, n_iter=n_iter)
X_topics = lda_model.fit_transform(cvz)

n_top_words = 6
topic_summaries = []

topic_word = lda_model.topic_word_  # get the topic words
vocab = cvectorizer.get_feature_names()
for i, topic_dist in enumerate(topic_word):
    topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
    topic_summaries.append(' '.join(topic_words))
    print('Topic {}: {}'.format(i, ' '.join(topic_words)))

doc_topic = lda_model.doc_topic_
lda_keys = []
for i in range(X_topics.shape[0]):
    lda_keys.append(X_topics[i].argmax())  # most probable topic for each tweet

import gensim
from gensim import corpora, models, similarities
#Cosine Similarity between LDA topics
sim = gensim.matutils.cossim(LDA_topic[1], LDA_topic[2])

Solution

  • You can use the word-topic distribution vectors. Both topic vectors need to have the same dimension, with the first element of each tuple being an int and the second a float:

    vec1 (list of (int, float))

    The first element is the word_id, which you can find in the model's id2word attribute. If you are comparing topics from two different models, you need to take the union of their dictionaries so the ids line up. Your vectors must look like:

    [(1, 0.541223), (2, 0.44123)]
    

    Then you can compare them, as shown in the sketch below.
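
    For example, here is a minimal sketch of that conversion, reusing the topic_word array from the question. Since both topics come from the same model, they already share the CountVectorizer vocabulary, so the word ids are simply the column indices and no dictionary union is needed:

    import gensim

    def to_sparse_vec(topic_dist):
        # turn a dense topic-word distribution into a gensim-style
        # sparse vector: a list of (word_id, weight) tuples
        return [(word_id, float(weight)) for word_id, weight in enumerate(topic_dist)]

    # cosine similarity between topic 1 and topic 2
    vec1 = to_sparse_vec(topic_word[1])
    vec2 = to_sparse_vec(topic_word[2])
    sim = gensim.matutils.cossim(vec1, vec2)
    print('Similarity between topics 1 and 2: {:.4f}'.format(sim))

    Note that to_sparse_vec is just an illustrative helper, not part of gensim; if your topics come from two different models, map both vocabularies into a single shared dictionary first so that the word ids agree.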