Tags: python-3.x, word2vec, text-classification

Text Classification with word2vec


I am doing text classification and plan to use word2vec word embeddings. I have used the gensim module for word2vec training.

I have tried several options, but I am getting an error that word 'xyz' is not in the vocabulary. I am not able to find my mistake.

Text processing

import re
import string
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

def clean_text(text):
    # str.translate needs a translation table; this one strips punctuation
    text = text.translate(str.maketrans('', '', string.punctuation))

    # lowercase and drop English stopwords
    text = text.lower().split()
    stops = set(stopwords.words("english"))
    text = [w for w in text if w not in stops]
    text = " ".join(text)

    # keep only word characters and a few allowed symbols
    text = re.sub(r"[^\w\s]", " ", text)
    text = re.sub(r"[^A-Za-z0-9^,!.\/'+-=]", " ", text)

    # lemmatize the remaining tokens
    lemmatizer = WordNetLemmatizer()
    lemmatized_words = [lemmatizer.lemmatize(w) for w in text.split()]
    return " ".join(lemmatized_words)

data['text'] = data['text'].map(clean_text)
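For illustration, here is roughly what clean_text produces on a made-up sentence (the input string is hypothetical, not from the question's data):

sample = "The priority of these tickets is very high!"
print(clean_text(sample))
# -> "priority ticket high"  (punctuation and stopwords removed, "tickets" lemmatized)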

Please help me to solve my issue.

Defining the corpus

def build_corpus(data):
    """Creates a list of lists containing words from each sentence."""
    corpus = []
    for col in ['text']:
        # Series.iteritems() was removed in newer pandas; items() is equivalent
        for sentence in data[col].items():
            word_list = sentence[1].split(" ")
            corpus.append(word_list)
    return corpus

corpus = build_corpus(data)
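gensim's Word2Vec expects exactly this structure: an iterable of sentences, each a list of tokens. A quick inspection (the tokens shown are hypothetical):

print(corpus[0])    # e.g. ['printer', 'issue', 'urgent'] -- tokens of the first cleaned row
print(len(corpus))  # one entry per row of data['text']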

Word2vec model

from gensim.models import word2vec

# gensim 3.x API; in gensim 4.x, `size` was renamed to `vector_size`
model = word2vec.Word2Vec(corpus, size=100, window=20, min_count=20, workers=12, sg=1)

words = list(model.wv.vocab)
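Note that min_count=20 drops every word seen fewer than 20 times, so many words from the corpus will not be in model.wv.vocab; that is the usual cause of a "word 'xyz' not in vocabulary" error. Below is a minimal sketch of a membership check before lookup, using the gensim 3.x API from the snippet above (safe_vector is a hypothetical helper, not part of the original code):

def safe_vector(model, word):
    # Return the embedding only if the word survived min_count filtering
    if word in model.wv.vocab:
        return model.wv[word]
    return None

vec = safe_vector(model, 'xyz')
if vec is None:
    print("'xyz' appeared fewer than min_count times and was dropped")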

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

tokenizer = Tokenizer()
X = data.text
tokenizer.fit_on_texts(X)
sequences = tokenizer.texts_to_sequences(X)
X = pad_sequences(sequences, maxlen=10000)
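As a debugging suggestion (not part of the original code): the second dimension of X is what the Embedding layer's input_length must later match, so it is worth checking it right after padding:

print(X.shape)             # expected (num_samples, 10000) given maxlen=10000
assert X.shape[1] == 10000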

embedding_vector_size = 100

import numpy as np

vocab_size = len(words)
embedding_matrix = np.zeros((vocab_size, embedding_vector_size))
for index, word in enumerate(words):
    # every word in `words` is in the model's vocabulary, so this lookup cannot fail
    embedding_matrix[index] = model.wv[word]

Now I am using the word embeddings I created for the downstream classification task.

Classification model

labels = data['Priority']

where I have two priority classes that I want to predict.

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=42)

I am using the following network for classification:

from keras.models import Sequential
from keras.layers import Embedding, SpatialDropout1D, LSTM, Dense

# NB: max_len is not defined in this snippet; the error below indicates it was 10000
model3 = Sequential()
model3.add(Embedding(input_dim=vocab_size, output_dim=embedding_vector_size,
                     input_length=max_len, weights=[embedding_matrix]))
model3.add(SpatialDropout1D(0.7))
model3.add(LSTM(64, dropout=0.7, recurrent_dropout=0.7))
model3.add(Dense(2, activation='softmax'))
model3.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
print(model3.summary())

I am getting an error here:

'ValueError: "input_length" is 10000, but received input has shape (None, 3)'

Please help me to solve it. Thank you.


Solution

  • Not all words from the corpus are kept in the word2vec model: with min_count=20, every word that appears fewer than 20 times is dropped, so the tokenizer's vocabulary is larger than the model's. The embedding matrix must therefore be sized and indexed by the model's own word list.

    Replace:

    vocab_size = len(tokenizer.word_index) + 1
    

    With:

    vocab_size = len(words)
    

    And replace:

    for word, i in tokenizer.word_index.items():
    

    With:

    for i, word in enumerate(words):
    

    This ensures that your embedding matrix contains only words that are actually in the model. A consolidated sketch is shown below.
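
    Putting the fix together, a minimal sketch of the corrected embedding-matrix construction, reusing the variable names from the question and the gensim 3.x vocab API:

    import numpy as np

    words = list(model.wv.vocab)   # only words that survived min_count filtering
    vocab_size = len(words)

    embedding_matrix = np.zeros((vocab_size, embedding_vector_size))
    for i, word in enumerate(words):
        embedding_matrix[i] = model.wv[word]   # safe: every word is in the model

    One caveat: the integer IDs in the padded sequences come from the Keras Tokenizer, so for the embedding rows to line up with those IDs, this row order would still have to be reconciled with tokenizer.word_index.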