I'm trying to classify text data into multiple classes, and I'd like to run cross-validation to compare several models while using sample weights. For a single model, I can pass the weights like this:
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.utils import class_weight

all_together = y_train.to_numpy()
unique_classes = np.unique(all_together)
# Newer scikit-learn versions require keyword arguments here
c_w = class_weight.compute_class_weight(
    class_weight='balanced', classes=unique_classes, y=all_together)
clf = MultinomialNB().fit(X_train_tfidf, y_train,
                          sample_weight=[c_w[i] for i in all_together])
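One caveat with the `c_w[i]` lookup: it only works when the labels happen to be the integers 0..n-1. For arbitrary labels (e.g. strings), map each sample's label to its position in `unique_classes` first. A minimal sketch with made-up labels chosen for illustration:

```python
import numpy as np
from sklearn.utils import class_weight

# Hypothetical string labels: c_w cannot be indexed by the label itself
y = np.array(['cat', 'dog', 'dog', 'fish', 'cat', 'dog'])
classes = np.unique(y)  # sorted: ['cat', 'dog', 'fish']

c_w = class_weight.compute_class_weight(
    class_weight='balanced', classes=classes, y=y)

# Map each label to its class's weight via its index in `classes`
label_idx = np.searchsorted(classes, y)
sample_weights = c_w[label_idx]

print(sample_weights)  # one weight per sample, rarer classes weighted higher
```

`np.searchsorted` works here because `np.unique` returns the classes sorted, which is the order `compute_class_weight` uses for `c_w`.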
However, cross_val_score() doesn't seem to accept a sample_weight argument directly. How can I pass the weights through cross-validation?
models = [
    RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
    LinearSVC(),
    MultinomialNB(),
    LogisticRegression(random_state=0),
]

all_together = y_train.to_numpy()
unique_classes = np.unique(all_together)
c_w = class_weight.compute_class_weight(
    class_weight='balanced', classes=unique_classes, y=all_together)

CV = 5
entries = []
for model in models:
    model_name = model.__class__.__name__
    f1_micros = cross_val_score(model, X_tfidf, y_train, scoring='f1_micro', cv=CV)
    for fold_idx, f1_micro in enumerate(f1_micros):
        entries.append((model_name, fold_idx, f1_micro))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'f1_micro'])
cross_val_score has a parameter called fit_params, which accepts a dictionary of keyword arguments to forward to the estimator's fit() method. In your case:

cross_val_score(model, X_tfidf, y_train, scoring='f1_micro', cv=CV,
                fit_params={'sample_weight': np.asarray([c_w[i] for i in all_together])})

Pass the weights as an array of length n_samples: scikit-learn then slices them to each fold's training indices, so every fold fits with the matching subset of weights.
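Putting it together, here is a minimal end-to-end sketch using synthetic data in place of your tf-idf matrix (the features are made non-negative since MultinomialNB requires that):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score
from sklearn.utils import class_weight

# Synthetic imbalanced stand-in for the tf-idf features
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           weights=[0.8, 0.2], random_state=0)
X = np.abs(X)  # MultinomialNB needs non-negative features

classes = np.unique(y)
c_w = class_weight.compute_class_weight(
    class_weight='balanced', classes=classes, y=y)
sample_weights = c_w[np.searchsorted(classes, y)]

# fit_params forwards sample_weight to each fold's fit(); because its
# length matches n_samples, scikit-learn slices it per fold
scores = cross_val_score(MultinomialNB(), X, y, scoring='f1_micro', cv=5,
                         fit_params={'sample_weight': sample_weights})
print(scores.mean())
```

Note that in recent scikit-learn releases fit_params is being renamed to params; the dictionary shape stays the same.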