Tags: python, nlp, word-embedding, doc2vec

similarity score is way off using doc2vec embedding


I'm trying out document de-duplication on a New York Times corpus that I prepared recently. It contains data related to financial fraud.

First, I convert the article snippets to a list of TaggedDocument objects.

import pandas as pd
import spacy
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

nlp = spacy.load("en_core_web_sm")

def create_tagged_doc(doc, nlp):
    # Lemmatize and drop stop words; returns a plain list of tokens.
    toks = nlp(doc)
    lemmatized_toks = [tok.lemma_ for tok in toks if not tok.is_stop]
    return lemmatized_toks

df_fraud = pd.read_csv('...local_path...')
df_fraud_list = df_fraud['snippet'].to_list()
documents = [TaggedDocument(create_tagged_doc(doc, nlp), [i]) for i, doc in enumerate(df_fraud_list)]

A sample TaggedDocument looks as follows:

TaggedDocument(words=['Chicago', 'woman', 'fall', 'mortgage', 'payment', 
'victim', 'common', 'fraud', 'know', 'equity', 'strip', '.'], tags=[1])

Now I build the vocabulary and train the Doc2Vec model.

import multiprocessing

cores = multiprocessing.cpu_count()
model_dbow = Doc2Vec(dm=0, vector_size=100, negative=5, hs=0, min_count=2, sample=0, workers=cores)
model_dbow.build_vocab(documents)
model_dbow.train(documents,
                 total_examples=model_dbow.corpus_count,
                 epochs=model_dbow.epochs)
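
As a sanity check on the trained model (a sketch, assuming gensim 4.x, where the document vectors live in model_dbow.dv), one can re-infer a vector for a document the model was trained on and verify that its own tag shows up among the nearest neighbours; if it usually doesn't, the model is undertrained:

doc_id = 0
inferred = model_dbow.infer_vector(documents[doc_id].words)
# With a well-trained model, tag 0 should appear near the top of this list.
print(model_dbow.dv.most_similar([inferred], topn=3))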

Let's define the cosine similarity:

import numpy as np
from numpy.linalg import norm

cosine_sim = lambda x, y: np.inner(x, y) / (norm(x) * norm(y))
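
The lambda can be cross-checked against SciPy if desired (a sketch; scipy.spatial.distance.cosine returns the cosine distance, so similarity is one minus that):

from scipy.spatial.distance import cosine as cosine_dist

x, y = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 5.0])
# Both expressions compute the same cosine similarity.
assert np.isclose(cosine_sim(x, y), 1.0 - cosine_dist(x, y))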

Now, the trouble is, if I take two sentences that are nearly identical and compute their cosine similarity, the score comes out very low. E.g.

a = model_dbow.infer_vector(create_tagged_doc('That was a fradulent transaction.', nlp))
b = model_dbow.infer_vector(create_tagged_doc('That transaction was fradulant.', nlp))

print(cosine_sim(a, b)) # 0.07102317

Just to make sure, I checked with the exact same sentence repeated, and the similarity is as expected.

a = model_dbow.infer_vector(create_tagged_doc('That was a fradulent transaction.', nlp))
b = model_dbow.infer_vector(create_tagged_doc('That was a fradulent transaction.', nlp))

print(cosine_sim(a, b)) # 0.9980062

What's going wrong here?


Solution

  • Looks like it was an issue with the number of epochs. When a Doc2Vec instance is created without specifying the number of epochs, e.g. model_dbow = Doc2Vec(dm=0, vector_size=100, negative=5, hs=0, min_count=2, sample=0, workers=cores), it defaults to 5, which apparently wasn't sufficient for my corpus. I set the epochs to 50, re-trained the model, and voilà, it worked.
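
For reference, a minimal sketch of the fix (same hyperparameters as above, only the epoch count changed; 50 is simply the value that worked for this corpus, not a universal setting):

model_dbow = Doc2Vec(dm=0, vector_size=100, negative=5, hs=0,
                     min_count=2, sample=0, workers=cores, epochs=50)
model_dbow.build_vocab(documents)
model_dbow.train(documents,
                 total_examples=model_dbow.corpus_count,
                 epochs=model_dbow.epochs)  # now 50 passes instead of the default 5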