python-3.x · conda · gensim

Gensim errors after updating Python version with conda


I recently updated a conda environment from python=3.4 to python=3.6. The environment was built for a project that uses gensim, which worked perfectly on 3.4. After this update, using the library generates multiple errors, such as:

TypeError: object of type 'itertools.chain' has no len()

or

AssertionError: decomposition not initialized yet

Does anyone know why this happens, even though gensim explicitly says Python 3.5 and 3.6 are supported?
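
For reference, the first error is easy to reproduce without gensim at all (a minimal sketch with made-up values): itertools.chain objects can be iterated, but they have no length.

import itertools

texts = itertools.chain([["a", "b"]], [["c"]])
print(len(texts))  # TypeError: object of type 'itertools.chain' has no len()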

The code used:

# Create Texts
texts = src.data.raw.extract_clean_merge_titles_abstracts(papers)
src.data.raw.train_phraser(texts)
texts = src.data.raw.tokenize_stream(texts)

print("Size of corpus: ", len(texts)) # ERROR 1 HERE

# Create Dictionary
dictionary = gensim.corpora.dictionary.Dictionary(texts, prune_at=None)
dictionary.filter_extremes(no_below=3, no_above=0.1, keep_n=None)
dictionary.compactify()
print(dictionary)
dictionary.save(config.paths.PATH_DATA_GENSIM_TEMP_DICTIONARY)

# Create corpus
corpus = [dictionary.doc2bow(text) for text in texts]
#gensim.corpora.MmCorpus.serialize(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS, corpus)
corpus_index = gensim.similarities.docsim.Similarity(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_INDEX, corpus, num_features=len(dictionary))
corpus_index.save(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_INDEX)

# tf-idf
tfidf = gensim.models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
#gensim.corpora.MmCorpus.serialize(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_TFIDF, corpus_tfidf)
tfidf.save(config.paths.PATH_DATA_GENSIM_TEMP_TFIDF)
corpus_tfidf_index = gensim.similarities.docsim.Similarity(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_TFIDF_INDEX, corpus_tfidf, num_features=len(dictionary))
corpus_tfidf_index.save(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_TFIDF_INDEX)

# lsa
lsa_num_topics = 100
lsa = gensim.models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=lsa_num_topics)
corpus_lsa = lsa[corpus_tfidf] # ERROR 2 HERE
#gensim.corpora.MmCorpus.serialize(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_LSA, corpus_lsa)
lsa.save(config.paths.PATH_DATA_GENSIM_TEMP_LSA)
corpus_lsa_index = gensim.similarities.docsim.Similarity(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_LSA_INDEX, corpus_lsa, num_features=lsa_num_topics)
corpus_lsa_index.save(config.paths.PATH_DATA_GENSIM_TEMP_CORPUS_LSA_INDEX)

Here is the list of installed packages:

bkcharts                  0.2                      py36_0  
bokeh                     0.12.6                   py36_0  
boto                      2.47.0                   py36_0  
bz2file                   0.98                     py36_0  
cycler                    0.10.0                   py36_0  
dbus                      1.8.20                        1    ostrokach
decorator                 4.0.11                   py36_0  
expat                     2.1.0                         0    ostrokach
fontconfig                2.12.1                        3  
freetype                  2.5.5                         2  
gensim                    2.2.0               np113py36_0  
gettext                   0.19.5                        2    ostrokach
glib                      2.48.2                        0    ostrokach
gst-plugins-base          1.8.0                         0  
gstreamer                 1.8.0                         0  
icu                       54.1                          0    ostrokach
jinja2                    2.9.6                    py36_0  
jpeg                      9b                            0  
libffi                    3.2.1                         8    ostrokach
libgcc                    5.2.0                         0  
libgfortran               3.0.0                         1  
libiconv                  1.14                          0  
libpng                    1.6.27                        0  
libsigcpp                 2.4.1                         3    ostrokach
libxcb                    1.12                          1  
libxml2                   2.9.4                         0  
markupsafe                0.23                     py36_2  
matplotlib                2.0.2               np113py36_0  
mkl                       2017.0.3                      0  
networkx                  1.11                     py36_0  
nltk                      3.2.4                    py36_0  
numpy                     1.13.1                   py36_0  
openssl                   1.0.2l                        0  
pcre                      8.39                          1  
pip                       9.0.1                    py36_1  
pymysql                   0.7.9                    py36_0  
pyparsing                 2.1.4                    py36_0  
pyqt                      5.6.0                    py36_2  
python                    3.6.1                         2  
python-dateutil           2.6.0                    py36_0  
pytz                      2017.2                   py36_0  
pyyaml                    3.12                     py36_0  
qt                        5.6.2                         4  
readline                  6.2                           2  
requests                  2.14.2                   py36_0  
scikit-learn              0.18.2              np113py36_0  
scipy                     0.19.1              np113py36_0  
setuptools                27.2.0                   py36_0  
sip                       4.18                     py36_0  
six                       1.10.0                   py36_0  
smart_open                1.5.3                    py36_0  
sqlite                    3.13.0                        0  
system                    5.8                           2  
tk                        8.5.18                        0  
tornado                   4.5.1                    py36_0  
wheel                     0.29.0                   py36_0  
xz                        5.2.2                         1  
yaml                      0.1.6                         0  
zlib                      1.2.8                         3  

Solution

  • My bad, it came from the Phraser:

    def tokenize_stream(stream, max_num_words=3):
        tokens_stream = [gensim.utils.simple_preprocess(t, min_len=2, max_len=50) for t in stream]
        for i, tokens in enumerate(tokens_stream):
            tokens_stream[i] = [j for j in tokens if j not in stop_words]
        phrases = gensim.models.phrases.Phrases.load(config.paths.PATH_DATA_GENSIM_PHRASES)
        grams = gensim.models.phrases.Phraser(phrases)
        tokens_stream = list(grams[tokens_stream])  # HERE LIST IS IMPORTANT: materialize the lazy Phraser output
        return tokens_stream
    

    For some reason, this worked on python 3.4 without the "list(grams[...])" call; on python 3.6, "grams[tokens_stream]" returns an itertools.chain instance instead of a list, which leads to an empty corpus.
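
    A minimal sketch of the failure mode, with made-up documents: an itertools.chain can only be consumed once, so after the first full pass over it (here, the Dictionary building its vocabulary), every later pass sees nothing:

        import itertools

        texts = itertools.chain([["human", "computer"]], [["graph", "trees"]])
        first_pass = list(texts)   # first iteration consumes the chain entirely
        second_pass = list(texts)  # [] -- the doc2bow comprehension builds an empty corpus

    This would also explain the second error: an LsiModel trained on an empty corpus never initializes its decomposition, so applying it at "corpus_lsa = lsa[corpus_tfidf]" raises the "decomposition not initialized yet" assertion.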