
Building vocabulary using document vector


I am not able to build the vocabulary and am getting this error:

TypeError: 'int' object is not iterable

Here is my code, which is based on this Medium article:

https://towardsdatascience.com/implementing-multi-class-text-classification-with-doc2vec-df7c3812824d

I have tried passing a pandas Series and a list to the build_vocab function.

import pandas as pd

from gensim.test.utils import common_texts
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.model_selection import train_test_split
import multiprocessing
import nltk
from nltk.corpus import stopwords

def tokenize_text(text):
    tokens = []
    for sent in nltk.sent_tokenize(text):
        for word in nltk.word_tokenize(sent):
            if len(word) < 2:
                continue
            tokens.append(word.lower())
    return tokens

df = pd.read_csv("https://raw.githubusercontent.com/RaRe-Technologies/movie-plots-by-genre/master/data/tagged_plots_movielens.csv")

tags_index = {
    "sci-fi": 1,
    "action": 2,
    "comedy": 3,
    "fantasy": 4,
    "animation": 5,
    "romance": 6,
}

df["tindex"] = df.tag.replace(tags_index)
df = df[["plot", "tindex"]]

mylist = list()
for i, q in df.iterrows():
    mylist.append(
        TaggedDocument(tokenize_text(str(q["plot"])), tags=q["tindex"])
    )

df["tdoc"] = mylist

X = df[["tdoc"]]
y = df["tindex"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

cores = multiprocessing.cpu_count()
model_doc2vec = Doc2Vec(
    dm=1,
    vector_size=300,
    negative=5,
    hs=0,
    min_count=2,
    sample=0,
    workers=cores,
)
model_doc2vec.build_vocab([x for x in X_train["tdoc"]])

I find the documentation for this method very confusing.


Solution

  • Doc2Vec needs an iterable sequence of TaggedDocument-like objects for its corpus (as is fed to build_vocab() or train()).

    When showing an error, you should also show the full stack trace that accompanied it, so that it is clear which line of code, and which surrounding call frames, are involved.

    But it's unclear whether what you've fed into the dataframe, then pulled back out via dataframe bracket-access, then passed through train_test_split(), is actually that.

    So I'd suggest assigning things to descriptively named interim variables, and verifying that they contain the right sorts of objects at each step.

    Is X_train["tdoc"][0] a proper TaggedDocument, with a words property that is a list of strings, and a tags property that is a list of tags? (Each tag is usually a string, but may be a plain int counting upward from 0.)

    Is mylist[0] a proper TaggedDocument?

    Separately: many online examples of Doc2Vec usage contain egregious errors, and the Medium article you link is no exception. Its practice of calling train() multiple times in a loop is usually unnecessary and very error-prone, and in that article it results in severe mismanagement of the learning rate alpha. For example, deducting 0.002 from the default starting alpha of 0.025 thirty times yields a negative effective alpha, which is never justified and means the model is making itself worse with every training pass. This may be a factor contributing to the awful classifier accuracy the article reports.

    I would disregard that article entirely and seek better examples elsewhere.