Tags: python, machine-learning, scikit-learn, doc2vec

Build a learning curve for training a doc2vec embedding


I'm trying to optimize the number of epochs for training an embedding. Is there a way to generate a learning curve for this process?

I can create a learning curve for regular supervised classification, for example:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import learning_curve
from sklearn.model_selection import StratifiedShuffleSplit

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=None,
                        train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()

    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")

    plt.legend(loc="best")
    return plt

title = "Learning Curves (SGDClassifier)"

cv = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=0)

estimator = SGDClassifier()
plot_learning_curve(estimator, title, X_all.todense(), y, ylim=(0.7, 1.01), cv=cv, n_jobs=4)

And I can train an embedding, for example:

from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from nltk.tokenize import word_tokenize


# Tag each document with a unique string id (word_tokenize needs nltk's 'punkt' data)
X_tagged = [TaggedDocument(words=word_tokenize(_d.lower()), tags=[str(i)])
            for i, _d in enumerate(X)]

model = Doc2Vec(vector_size=8, alpha=0.05, min_alpha=0.00025, dm=1)

model.build_vocab(X_tagged)

model.train(X_tagged, total_examples=model.corpus_count, epochs=50)

But how do I create a learning curve while training the embedding?

I don't have enough intuition about training embeddings to figure this out.


Solution

  • Typically a learning curve plots a model's performance (as some quantitative score, such as accuracy) against varying amounts of training data.

    So, you'll need to pick a way to score your Doc2Vec models. (Maybe this will be by using the doc-vectors as inputs to another classifier, or something else.) Then you'll need to re-create the Doc2Vec model with a variety of training-set sizes, score each one, and feed the (corpus_size, score) datapoints to a plot (see the first sketch below).

    Note that gensim includes a wrapper class for dropping a Doc2Vec training step into a scikit-learn pipeline:

    https://radimrehurek.com/gensim/sklearn_api/d2vmodel.html

    So, you may be able to replace the simple estimator in your existing code with a multi-step pipeline that includes D2VTransformer as a step, and then create a learning-curve plot in a manner highly analogous to your existing code (see the second sketch below).
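
Here's a minimal sketch of that manual loop. It assumes the X_tagged list and a matching label array y from your question, and uses LogisticRegression purely as an illustrative downstream scorer (any classifier would do). It's written against the gensim 4.x API (model.dv); in gensim 3.x the attribute is model.docvecs:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from gensim.models.doc2vec import Doc2Vec

# X_tagged and y are assumed to be the TaggedDocument list and the
# matching label array from the question.
corpus_sizes, scores = [], []

for frac in np.linspace(0.1, 1.0, 5):
    n = int(len(X_tagged) * frac)
    subset = X_tagged[:n]  # assumes the corpus is already shuffled

    # Re-train a fresh Doc2Vec model on this slice of the corpus
    model = Doc2Vec(vector_size=8, epochs=50, dm=1)
    model.build_vocab(subset)
    model.train(subset, total_examples=model.corpus_count, epochs=model.epochs)

    # Score the doc-vectors by how well a downstream classifier does with them
    vecs = np.array([model.dv[str(i)] for i in range(n)])  # model.docvecs in gensim 3.x
    score = cross_val_score(LogisticRegression(max_iter=1000), vecs, y[:n], cv=3).mean()

    corpus_sizes.append(n)
    scores.append(score)

plt.plot(corpus_sizes, scores, 'o-')
plt.xlabel("Training examples")
plt.ylabel("Cross-validation score")
plt.title("Learning Curve (Doc2Vec + LogisticRegression)")
plt.show()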
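
And a sketch of the pipeline approach, reusing the plot_learning_curve helper and cv splitter from your question. Note that D2VTransformer exists only in gensim 3.x (it was removed in 4.0) and consumes lists of tokens rather than TaggedDocuments; the LogisticRegression step is again just an illustrative choice:

from gensim.sklearn_api import D2VTransformer  # gensim 3.x only
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# D2VTransformer takes tokenized texts, not TaggedDocuments
X_tokens = [doc.words for doc in X_tagged]

pipeline = Pipeline([
    ("d2v", D2VTransformer(size=8, iter=50, dm=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])

plot_learning_curve(pipeline, "Learning Curves (Doc2Vec + LogisticRegression)",
                    X_tokens, y, cv=cv, n_jobs=1)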