Tags: python, classification, gensim, text-classification, doc2vec

Doc2Vec & classification - very poor results


I have a dataset of 6000 observations; a sample of it is the following:

job_id      job_title                                           job_sector
30018141    Secondary Teaching Assistant                        Education
30006499    Legal Sales Assistant / Executive                   Sales
28661197    Private Client Practitioner                         Legal
28585608    Senior hydropower mechanical project manager        Engineering
28583146    Warehouse Stock Checker - Temp / Immediate Start    Transport & Logistics
28542478    Security Architect Contract                         IT & Telecoms

The goal is to predict the job sector of each row based on the job title.

First, I apply some preprocessing to the job_title column:

import re

from nltk.stem import WordNetLemmatizer, PorterStemmer, LancasterStemmer, SnowballStemmer

def preprocess(document):
    lemmatizer = WordNetLemmatizer()
    stemmer_1 = PorterStemmer()
    stemmer_2 = LancasterStemmer()
    stemmer_3 = SnowballStemmer(language='english')  # only this one is used below

    # Remove all the special characters
    document = re.sub(r'\W', ' ', document)

    # Remove all single characters
    document = re.sub(r'\b[a-zA-Z]\b', ' ', document)

    # Substitute multiple spaces with a single space
    document = re.sub(r' +', ' ', document)

    # Convert to lowercase
    document = document.lower()

    # Tokenisation
    document = document.split()

    # Stemming
    document = [stemmer_3.stem(word) for word in document]

    document = ' '.join(document)

    return document

import pandas as pd

df_first = pd.read_csv('../data.csv', keep_default_na=True)

df_first['job_title'] = df_first['job_title'].apply(preprocess)

Then I do the following with Gensim and Doc2Vec:

from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

X = df_first.loc[:, 'job_title'].values
y = df_first.loc[:, 'job_sector'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

model = Doc2Vec(vector_size=5, min_count=2, epochs=30)

# tag each title with its job sector
training_set = [TaggedDocument(sentence, tag) for sentence, tag in zip(X_train.tolist(), y_train.tolist())]

model.build_vocab(training_set)

model.train(training_set, total_examples=model.corpus_count, epochs=model.epochs)

# infer a vector for every title, to feed the classifier
predictors_train = []
for sentence in X_train.tolist():
    sentence = sentence.split()
    predictor = model.infer_vector(doc_words=sentence, steps=20, alpha=0.01)
    predictors_train.append(predictor.tolist())

predictors_test = []
for sentence in X_test.tolist():
    sentence = sentence.split()
    predictor = model.infer_vector(doc_words=sentence, steps=20, alpha=0.025)
    predictors_test.append(predictor.tolist())

sv_classifier = SVC(kernel='linear', class_weight='balanced', decision_function_shape='ovr', random_state=0)
sv_classifier.fit(predictors_train, y_train)

score = sv_classifier.score(predictors_test, y_test)
print('accuracy: {}%'.format(round(score*100, 1)))

However, I am getting only 22% accuracy.

This makes me quite suspicious, especially because using the TfidfVectorizer instead of Doc2Vec (with the same classifier) gives me 88% accuracy (!).
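
For reference, the TfidfVectorizer comparison is roughly the following (sketched; my exact vectorizer parameters may differ):

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()

# fit on the training titles only, then transform both splits
tfidf_train = vectorizer.fit_transform(X_train)
tfidf_test = vectorizer.transform(X_test)

sv_classifier = SVC(kernel='linear', class_weight='balanced', decision_function_shape='ovr', random_state=0)
sv_classifier.fit(tfidf_train, y_train)
print('accuracy: {}%'.format(round(sv_classifier.score(tfidf_test, y_test) * 100, 1)))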

Therefore, I guess I must be doing something wrong in how I apply Gensim's Doc2Vec.

What is it and how can I fix it?

Or is it simply that my dataset is relatively small, while more advanced methods such as word embeddings require far more data?


Solution

  • Your dataset is small by Doc2Vec standards: about 6,000 rows, with apparently only 3-5 words per document. (Also worth checking: total words, unique words, and number of unique classes – see the sketch below.) Doc2Vec works best with lots of data; most published work trains on tens-of-thousands to millions of documents, of dozens to thousands of words each.
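
    A quick sketch for checking those numbers, assuming the df_first dataframe from the question:

        titles = df_first['job_title'].astype(str)
        tokens = [word for title in titles for word in title.split()]

        print('rows:          ', len(df_first))
        print('total words:   ', len(tokens))
        print('unique words:  ', len(set(tokens)))
        print('unique classes:', df_first['job_sector'].nunique())
        print('avg words/doc: ', len(tokens) / len(df_first))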

    Also, published work tends to train on data where every document has a unique-ID. It can sometimes make sense to use known-labels as tags instead of, or in addition to, unique-IDs. But it isn't necessarily a better approach. By using known-labels as the only tags, you're effectively only training one doc-vector per label. (It's essentially similar to concatenating all rows with the same tag into one document.)
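
    For illustration, giving every document its own unique-ID tag – optionally alongside the known label – might look like this sketch (using each row's position as its ID is just one possible choice):

        from gensim.models.doc2vec import TaggedDocument

        # one unique tag per row (its position), plus the known label as an extra tag;
        # note that words must be a list of tokens, and tags a list of strings
        training_set = [
            TaggedDocument(words=title.split(), tags=[str(i), label])
            for i, (title, label) in enumerate(zip(X_train.tolist(), y_train.tolist()))
        ]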

    You're inexplicably using fewer steps in inference (20) than epochs in training (30), when in fact these are analogous values. In recent versions of gensim, inference will by default use the same number of epochs as the model was configured to use for training, and it's more common to use more epochs during inference than training. (Also, you're inexplicably using different starting alpha values for inference between classifier-training (0.01) and classifier-testing (0.025).)
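
    For example, one consistent inference setup shared by both loops might look like this sketch (values illustrative; recent gensim takes epochs= rather than the deprecated steps=):

        def vectorize(sentences, model):
            # same starting alpha for train- and test-time inference,
            # and more inference epochs (60) than training epochs (30)
            return [model.infer_vector(s.split(), epochs=60, alpha=0.025).tolist()
                    for s in sentences]

        predictors_train = vectorize(X_train.tolist(), model)
        predictors_test = vectorize(X_test.tolist(), model)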

    But the main problem is likely your choice of tiny vector_size=5 doc-vectors. Where the TfidfVectorizer summarizes each row as a vector of width equal to the unique-word count – perhaps hundreds or thousands of dimensions – your Doc2Vec model summarizes each document as just 5 values. You've essentially lobotomized Doc2Vec. Usual values here are 100-1000, though smaller sizes may be required if the dataset is tiny.
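
    As a starting point, something closer to the following (the exact size is an illustrative guess that still needs tuning against held-out accuracy):

        # e.g. 100 dimensions instead of 5; tune downward if results degrade on tiny data
        model = Doc2Vec(vector_size=100, min_count=2, epochs=30)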

    Finally, the lemmatization/stemming may not be strictly necessary and may even be destructive. Lots of Word2Vec/Doc2Vec work doesn't bother to lemmatize/stem - often because there's plentiful data, with many appearances of all word forms.

    These steps are most likely to help with smaller data: by combining rarer word forms with related common forms, they let words that would otherwise be too rare to be retained (or to get useful vectors) still contribute value.

    But I can see many ways they might hurt in your domain. Manager and Management won't have exactly the same implications in this context, but both could be stemmed to manag. Similarly, Security and Securities both become secur, and so on for other words. I'd only perform these steps if you can prove through evaluation that they're helping. (Are the words passed to the TfidfVectorizer being lemmatized/stemmed too?)
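
    A quick check with NLTK's Snowball stemmer shows those collisions directly:

        from nltk.stem import SnowballStemmer

        stemmer = SnowballStemmer(language='english')
        for word in ['manager', 'management', 'security', 'securities']:
            print(word, '->', stemmer.stem(word))
        # manager -> manag
        # management -> manag
        # security -> secur
        # securities -> secur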