The following reproducible script is used to compute the accuracy of a Word2Vec classifier built with the W2VTransformer wrapper in gensim:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from gensim.sklearn_api import W2VTransformer
from gensim.utils import simple_preprocess
# Load synthetic data
data = pd.read_csv('https://pastebin.com/raw/EPCmabvN')
data = data.head(10)
# Set random seed
np.random.seed(0)
# Tokenize text
X_train = data.apply(lambda r: simple_preprocess(r['text'], min_len=2), axis=1)
# Get labels
y_train = data.label
train_input = [x[0] for x in X_train]
# Train W2V Model
model = W2VTransformer(size=10, min_count=1)
model.fit(X_train)
clf = LogisticRegression(penalty='l2', C=0.1)
clf.fit(model.transform(train_input), y_train)
text_w2v = Pipeline([
    ('features', model),
    ('classifier', clf)
])
score = text_w2v.score(train_input, y_train)
score
0.80000000000000004
The problem with this script is that it only works when train_input = [x[0] for x in X_train], which is essentially just the first word of each sentence. Once this is changed to train_input = X_train (or train_input is simply substituted by X_train), the script returns:
ValueError: cannot reshape array of size 10 into shape (10,10)
How can I solve this issue, i.e. how can the classifier work with more than one word of input?
Edit:
Apparently, unlike D2V, the W2V wrapper can't work with variable-length training input. Here is a working D2V version:
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics import accuracy_score, classification_report
from sklearn.pipeline import Pipeline
from gensim.utils import simple_preprocess, lemmatize
from gensim.sklearn_api import D2VTransformer
data = pd.read_csv('https://pastebin.com/raw/bSGWiBfs')
np.random.seed(0)
X_train = data.apply(lambda r: simple_preprocess(r['text'], min_len=2), axis=1)
y_train = data.label
model = D2VTransformer(dm=1, size=50, min_count=2, iter=10, seed=0)
model.fit(X_train)
clf = LogisticRegression(penalty='l2', C=0.1, random_state=0)
clf.fit(model.transform(X_train), y_train)
pipeline = Pipeline([
    ('vec', model),
    ('clf', clf)
])
y_pred = pipeline.predict(X_train)
score = accuracy_score(y_train, y_pred)
print(score)
This is technically not an answer, but it cannot be written in comments, so here it is. There are multiple issues here:
The LogisticRegression class (and most other scikit-learn models) works with 2-d data of shape (n_samples, n_features).
That means it needs a collection of 1-d arrays (one for each row/sample, in which the elements of the array are the feature values).
In your data, a single word will be a 1-d array (its word vector), which means that a single sentence (sample) will be a 2-d array. That in turn means that the complete data (the collection of sentences here) will be a collection of 2-d arrays. Even then, since each sentence can have a different number of words, it cannot be combined into a single 3-d array.
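A minimal numpy sketch of this (the shapes and values below are made up purely for illustration): per-word vectors of size 10 for sentences of different lengths cannot be stacked into a rectangular array, which is exactly the kind of reshape that produces the error above.
import numpy as np
# Hypothetical per-word vectors for two sentences of different lengths
sent1 = np.random.rand(4, 10)   # 4 words, one 10-dimensional vector each
sent2 = np.random.rand(7, 10)   # 7 words
ragged = np.array([sent1, sent2], dtype=object)  # 1-d object array of size 2, not a 2-d matrix
try:
    ragged.reshape(2, 10)
except ValueError as e:
    print(e)  # cannot reshape array of size 2 into shape (2,10)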
Secondly, the W2VTransformer in gensim looks like a scikit-learn compatible class, but it's not. It tries to follow the "scikit-learn API conventions" for defining the methods fit(), fit_transform() and transform(), but they are not compatible with a scikit-learn Pipeline.
You can see that the input param requirements of fit() and fit_transform() are different:
fit(): X (iterable of iterables of str) – The input corpus. X can be simply a list of lists of tokens, but for larger corpora, consider an iterable that streams the sentences directly from disk/network. See BrownCorpus, Text8Corpus or LineSentence in word2vec module for such examples.
fit_transform(): X (numpy array of shape [n_samples, n_features]) – Training set.
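A quick way to see the mismatch (a sketch assuming gensim 3.x, reusing model and X_train from the first script in the question; transform() there reshapes its output to (len(X), size), which only works when every sample is a single word):
model.fit(X_train)           # OK: an iterable of iterables of str
model.transform(X_train[0])  # OK: one token list -> per-word vectors of shape (n_words, 10)
model.transform(X_train)     # ValueError: cannot reshape array of size 10 into shape (10, 10)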
If you want to use scikit-learn, then you will need the 2-d shape. You will need to "somehow merge" the word-vectors of a single sentence to form a 1-d array for that sentence. That means you need to form a kind of sentence-vector, for example by averaging the word vectors of each sentence, as sketched below.
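A minimal sketch of such a sentence-vectoriser (a hypothetical helper, not part of gensim; it assumes gensim 3.x, where W2VTransformer exposes gensim_model and size after fitting):
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class MeanW2VVectorizer(BaseEstimator, TransformerMixin):
    """Hypothetical wrapper: one fixed-length vector per sentence, by averaging word vectors."""
    def __init__(self, w2v):
        self.w2v = w2v  # a W2VTransformer instance

    def fit(self, X, y=None):
        self.w2v.fit(X)  # X: list of lists of tokens
        return self

    def transform(self, X):
        wv = self.w2v.gensim_model.wv  # trained word vectors
        size = self.w2v.size
        return np.array([
            np.mean([wv[w] for w in sent if w in wv], axis=0)
            if any(w in wv for w in sent) else np.zeros(size)
            for sent in X
        ])
Since this produces a (n_samples, size) matrix, it can be placed directly in front of LogisticRegression in a Pipeline and fitted on the full X_train.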
Note: I noticed now that you were doing this based on D2VTransformer. That should be the correct approach here if you want to use sklearn.
The issue in that question (which is now deleted) was this line:
X_train = vectorizer.fit_transform(X_train)
Here, you overwrite your original X_train (a list of lists of words) with the already calculated word vectors, hence that error.
Or else, you can use other tools/libraries (Keras, TensorFlow) which allow sequential input of variable size. For example, LSTMs can be configured to take variable-length input, with an ending token to mark the end of a sentence (sample).
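As a rough illustration (a sketch only, not the setup described above: it uses the common padding-plus-masking variant instead of an explicit end token, and every number below is an arbitrary assumption):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.preprocessing.sequence import pad_sequences

vocab_size = 1000                           # assumed vocabulary size
encoded = [[12, 7, 45], [3, 99, 5, 61, 8]]  # sentences as word ids (illustrative)
X = pad_sequences(encoded, padding='post')  # shape (2, 5), zero-padded to equal length
y = np.array([0, 1])

lstm_model = Sequential([
    Embedding(vocab_size, 10, mask_zero=True),  # mask_zero lets the LSTM ignore the padding
    LSTM(16),
    Dense(1, activation='sigmoid'),
])
lstm_model.compile(optimizer='adam', loss='binary_crossentropy')
lstm_model.fit(X, y, epochs=1, verbose=0)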
Update:
In the solution given above, you can replace the lines:
model = D2VTransformer(dm=1, size=50, min_count=2, iter=10, seed=0)
model.fit(X_train)
clf = LogisticRegression(penalty='l2', C=0.1, random_state=0)
clf.fit(model.transform(X_train), y_train)
pipeline = Pipeline([
    ('vec', model),
    ('clf', clf)
])
y_pred = pipeline.predict(X_train)
with
pipeline = Pipeline([
    ('vec', model),
    ('clf', clf)
])
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_train)
There is no need to fit and transform separately, since pipeline.fit() will do that automatically.
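As a small follow-up sketch (reusing the cross_val_score import that is already in the D2V script but unused there, and assuming every class has at least three samples so that 3-fold CV is valid), the same pipeline can also be evaluated as a single unit:
from sklearn.model_selection import cross_val_score

scores = cross_val_score(pipeline, X_train, y_train, cv=3)  # clones and re-fits the whole pipeline per fold
print(scores.mean())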