python, scikit-learn, data-visualization, word2vec

Convert Python dictionary to Word2Vec object


I have obtained a dictionary mapping words to their vectors in Python, and I am trying to scatter-plot the n most similar words, since running t-SNE on a huge number of words takes forever. The best option seems to be to convert the dictionary to a word2vec object and work with that.


Solution

  • I had the same issue and I finally found the solution.

    So, I assume that your dictionary looks like mine:

    import numpy as np

    d = {}
    d['1'] = np.random.randn(300)
    d['2'] = np.random.randn(300)
    

    Basically, the keys are the users' ids, and each of them has a vector of shape (300,).

    So now, in order to use it as word2vec, I first need to save it to a binary file and then load it with the gensim library:

    import gensim
    import numpy as np
    from numpy import float32 as REAL
    from gensim import utils

    # gensim 3.x API: build an empty KeyedVectors container and fill it by hand
    m = gensim.models.keyedvectors.Word2VecKeyedVectors(vector_size=300)
    m.vocab = d
    m.vectors = np.array(list(d.values()))
    my_save_word2vec_format(binary=True, fname='train.bin', total_vec=len(d), vocab=m.vocab, vectors=m.vectors)
    

    Where my_save_word2vec_format function is:

    def my_save_word2vec_format(fname, vocab, vectors, binary=True, total_vec=2):
        """Store the input-hidden weight matrix in the same format used by the original
        C word2vec-tool, for compatibility.

        Parameters
        ----------
        fname : str
            The file path used to save the vectors in.
        vocab : dict
            The vocabulary of words.
        vectors : numpy.array
            The vectors to be stored.
        binary : bool, optional
            If True, the data will be saved in binary word2vec format, else it will be saved in plain text.
        total_vec : int, optional
            Explicitly specify total number of vectors
            (in case word vectors are appended with document vectors afterwards).

        """
        if not (vocab or vectors):
            raise RuntimeError("no input")
        if total_vec is None:
            total_vec = len(vocab)
        vector_size = vectors.shape[1]
        assert (len(vocab), vector_size) == vectors.shape
        with utils.smart_open(fname, 'wb') as fout:
            print(total_vec, vector_size)
            # header line: vector count and dimensionality
            fout.write(utils.to_utf8("%s %s\n" % (total_vec, vector_size)))
            # write each word followed by its vector
            for word, row in vocab.items():
                if binary:
                    row = row.astype(REAL)
                    fout.write(utils.to_utf8(word) + b" " + row.tobytes())  # tobytes() == deprecated tostring()
                else:
                    fout.write(utils.to_utf8("%s %s\n" % (word, ' '.join(repr(val) for val in row))))

    And then, to load the model back as word2vec, use:

    m2 = gensim.models.keyedvectors.Word2VecKeyedVectors.load_word2vec_format('train.bin', binary=True)
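
    As a quick sanity check (using the toy dictionary from above), you can verify that the round trip preserved the vectors. Note the tolerance: the binary format stores float32, while np.random.randn produces float64.

    import numpy as np

    # the loaded vectors should match the originals up to float32 precision
    assert np.allclose(d['1'], m2['1'], atol=1e-6)
    print(m2.vectors.shape)  # (2, 300) for the toy dictionary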
    

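    Finally, back to the original goal: with only the n nearest neighbours of a query word, t-SNE runs quickly. A minimal sketch, assuming a realistically sized dictionary (the two-entry toy dict above has too few points), that matplotlib and scikit-learn are installed, and that 'word' is a hypothetical key present in your vocabulary:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    query = 'word'  # hypothetical: replace with any key in your dictionary
    n = 10

    # restrict t-SNE to the query word and its n nearest neighbours
    neighbours = [w for w, _ in m2.most_similar(query, topn=n)]
    words = [query] + neighbours
    X = np.array([m2[w] for w in words])

    # perplexity must be below the number of points being embedded
    coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)

    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), w in zip(coords, words):
        plt.annotate(w, (x, y))
    plt.show()

    As an aside, the snippets above target gensim 3.x. On gensim 4.x, where the writable .vocab attribute is gone, a dictionary can be turned into a KeyedVectors object directly, with no save/load round trip; a sketch, assuming gensim >= 4.0:

    from gensim.models import KeyedVectors
    import numpy as np

    kv = KeyedVectors(vector_size=300)
    # add_vectors replaces the manual vocab/vectors assignment of gensim 3.x
    kv.add_vectors(list(d.keys()), np.array(list(d.values()), dtype=np.float32))
    kv.save_word2vec_format('train.bin', binary=True)  # optional: same binary format as above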