word2vec can use either of two models (CBOW or skip-gram) to learn distributed representations of words. I was looking through the gensim source code, but I couldn't find where the choice of model is defined.
From the gensim documentation:
`sg` defines the training algorithm. By default (`sg=0`), CBOW is used. Otherwise (`sg=1`), skip-gram is employed.