Tags: neural-network, word2vec, word-embedding, fasttext

In FastText skip-gram training, what happens if some sentences in the corpus have just one word?


Imagine that you have a corpus in which some lines have just one word, so there is no context around some of the words. In this situation, how does FastText provide embeddings for these single words? Note that some of these words occur only once, and there is no frequency cut-off to get rid of them.


Solution

  • There's no way to train a context_word -> target_word skip-gram pair for such words (in either 'context' or 'target' roles), so such words can't receive trained representations. Only texts with at least 2 tokens contribute anything to word2vec or FastText word-vector training.

    (One possible exception: FastText in its 'supervised classification' mode might be able to make use of, and train vectors for, such words, because then even single words can be used to predict the known-label of training texts.)

    I suspect that such corpuses will still result in the model counting the word in its initial vocabulary-discovery scan, and thus it will be allocated a vector (if it appears at least min_count times), and that vector will receive the usual small-random-vector initialization. But the word-vector will receive no further training – so when you request the vector back after training, it will be of low quality, with the only meaningful contributions coming from any char n-grams shared with other words that received real training. (The first sketch at the end of this answer illustrates this.)

    You should consider any text-breaking process that results in single-word texts as buggy for the purposes of FastText. If those single-word texts come from another meaningful context where they were once surrounded by other contextual words, you should change your text-breaking process to work in larger chunks that retain that context.

    Also note: it's rare for min_count=1 to be a good idea for word-vector models, at least when the training text is real natural-language material where word-token frequencies roughly follow Zipf's law. There will be many, many 1-occurrence (or few-occurrence) words, and one or a few example usage contexts are unlikely to represent the true breadth and subtleties of a word's real usage, so it's nearly impossible for such words to receive good vectors that generalize to other uses of those same words elsewhere.

    Training good vectors requires a variety of usage examples, and just one or a few examples will practically be "noise" compared to the tens-to-hundreds of examples of other words' usage. So keeping these rare words, instead of dropping them as a default min_count=5 (or higher in larger corpuses) would do, tends to slow training, slow convergence ("settling") of the model, and lower the quality of the other more-frequent word vectors at the end – due to the significant-but-largely-futile efforts of the algorithm to helpfully position these many rare words. (The second sketch below illustrates the effect of min_count on the surviving vocabulary.)
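
Here is a first, minimal sketch of the behavior described above. It assumes the gensim library's FastText implementation (gensim 4.x); the question doesn't name a specific toolkit, and Facebook's reference fastText behaves analogously. The toy sentences and the word "catfish" are made up for illustration. A word that only ever appears alone in a sentence is still counted into the vocabulary, but never takes part in any skip-gram (context, target) pair, so its returned vector is essentially its random initialization plus whatever char n-grams it shares with genuinely trained words:

```python
from gensim.models import FastText

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["dogs", "chase", "cats", "around", "the", "yard"],
    ["catfish"],  # single-word sentence: no context, so no skip-gram pairs
]

model = FastText(
    sentences,
    vector_size=32,
    window=3,
    min_count=1,  # keep even 1-occurrence words, as in the question
    sg=1,         # skip-gram mode
    epochs=10,
)

# 'catfish' was counted during the vocabulary-discovery scan, so it has a slot...
print("catfish" in model.wv.key_to_index)   # True

# ...but it was never used as a context or target word, so its vector is mostly
# the small random initialization, nudged only by char n-grams it shares with
# trained words such as "cat"/"cats".
print(model.wv["catfish"][:5])
```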
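
The second sketch illustrates the min_count point: with a cut-off above 1, rare words are pruned before training rather than left in as noise, and FastText can still synthesize an out-of-vocabulary vector for them from char n-grams on lookup. Again this assumes gensim's FastText, and the tiny corpus and the word "quixotic" are invented for demonstration:

```python
from gensim.models import FastText

corpus = [
    ["machine", "learning", "is", "fun"],
    ["machine", "learning", "needs", "data"],
    ["deep", "learning", "needs", "data"],
    ["quixotic"],  # appears once, and with no context at all
]

strict = FastText(corpus, vector_size=32, min_count=2, sg=1)   # prunes 1-occurrence words
lenient = FastText(corpus, vector_size=32, min_count=1, sg=1)  # keeps everything

print("quixotic" in strict.wv.key_to_index)    # False: dropped before training
print("quixotic" in lenient.wv.key_to_index)   # True: kept, but effectively untrained

# Even when pruned from the vocabulary, FastText can still return a vector for
# the word on lookup, assembled from its char n-gram buckets:
print(strict.wv["quixotic"][:5])
```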