In Word2Vec, can we use words instead of sentences for model training?
For example, in the code below gberg_sents is a list of sentence tokens: model = Word2Vec(sentences=gberg_sents, size=64, sg=1, window=10, min_count=5, seed=42, workers=8)
Can we train on word tokens in the same way?
No. Word2vec is trained with a language-modeling objective, i.e., it predicts which words appear in the surroundings of other words. For this, your training data needs to be actual sentences that show how the words are used in context. It is the context of the words that provides the information captured in the embeddings.
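As a minimal sketch (assuming gensim 4.x, where the size parameter from the question is called vector_size), this is the input shape Word2Vec expects: a list of tokenized sentences, i.e. a list of lists of word tokens, which is what gberg_sents already is. The toy sentences below are only illustrative.

from gensim.models import Word2Vec

# Each inner list is one sentence, already split into word tokens.
sentences = [
    ["the", "old", "man", "walked", "along", "the", "shore"],
    ["the", "sea", "was", "calm", "that", "morning"],
]

model = Word2Vec(
    sentences=sentences,
    vector_size=64,   # called size=64 in gensim < 4.0
    sg=1,             # skip-gram
    window=10,
    min_count=1,      # lowered from 5 only because this toy corpus is tiny
    seed=42,
    workers=8,
)

# Feeding individual words instead, e.g. wrapping each word as its own
# one-token "sentence", would leave the model with no context window to
# learn from, so the resulting embeddings would carry no useful information.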