python, scikit-learn, tree, classification, weka

How to use the ADTree classifier from Weka as the base estimator of a scikit-learn bagging model?


My intention is to recreate a large model built in Weka using scikit-learn and other libraries.

I have this base model built with pyweka:

from weka.classifiers import Classifier

base_model_1 = Classifier(classname="weka.classifiers.trees.ADTree",
                          options=["-B", "10", "-E", "-3", "-S", "1"])

base_model_1.build_classifier(train_model_1)
base_model_1

But when I try to use it as the base estimator like this:

from sklearn.ensemble import BaggingClassifier

model = BaggingClassifier(base_estimator=base_model_1, n_estimators=100, n_jobs=1, random_state=1)

and then evaluate the model like this:

from statistics import mean
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
AUC_scores = cross_val_score(model, X_data_train, y_data_train, scoring='roc_auc', cv=cv, n_jobs=-1)
F1_scores = cross_val_score(model, X_data_train, y_data_train, scoring='f1', cv=cv, n_jobs=-1)
Precision_scores = cross_val_score(model, X_data_train, y_data_train, scoring='precision', cv=cv, n_jobs=-1)
Recall_scores = cross_val_score(model, X_data_train, y_data_train, scoring='recall', cv=cv, n_jobs=-1)
Accuracy_scores = cross_val_score(model, X_data_train, y_data_train, scoring='accuracy', cv=cv, n_jobs=-1)
print("-------------------------------------------------------")
print(AUC_scores)
print("-------------------------------------------------------")
print(F1_scores)
print("-------------------------------------------------------")
print(Precision_scores)
print("-------------------------------------------------------")
print(Recall_scores)
print("-------------------------------------------------------")
print(Accuracy_scores)
print("-------------------------------------------------------")
print('Mean ROC AUC: %.3f' % mean(AUC_scores))
print('Mean F1: %.3f' % mean(F1_scores))
print('Mean Precision: %.3f' % mean(Precision_scores))
print('Mean Recall: %.3f' % mean(Recall_scores))
print('Mean Accuracy: %.3f' % mean(Accuracy_scores))

I just receive NaN:


Imbalanced class variable distribution
0    161
1     34
Name: Soft-Tissue_injury_≥4days, dtype: int64
-------------------------------------------------------
[nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan]
-------------------------------------------------------
[nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan]
-------------------------------------------------------
[nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan]
-------------------------------------------------------
[nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan]
-------------------------------------------------------
[nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
 nan nan nan nan nan nan nan nan nan nan nan nan]
-------------------------------------------------------
Mean ROC AUC: nan
Mean F1: nan
Mean Precision: nan
Mean Recall: nan
Mean Accuracy: nan

So I think I'm using the ADTree classifier incorrectly as the bagging base estimator.
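
(A diagnostic sketch using cross_val_score's error_score parameter: by default, per-fold exceptions are converted into NaN scores, so re-raising them reveals the underlying problem, here that the pyweka Classifier does not implement the scikit-learn estimator API.)

# Diagnostic: re-raise the per-fold exception instead of scoring NaN
AUC_scores = cross_val_score(model, X_data_train, y_data_train,
                             scoring='roc_auc', cv=cv, n_jobs=1,
                             error_score='raise')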

Is there any way to do this correctly?


Solution

  • I've just released version 0.0.5 of sklearn-weka-plugin, with which you can do the following:

    from statistics import mean
    
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import RepeatedStratifiedKFold
    from sklearn.model_selection import cross_val_score
    
    import sklweka.jvm as jvm
    from sklweka.classifiers import WekaEstimator
    from sklweka.dataset import load_arff
    
    jvm.start(packages=True)
    
    # adjust the path to your dataset
    # the example assumes all attributes and class to be nominal
    data_file = "/some/where/vote.arff"
    X, y, meta = load_arff(data_file, class_index="last")
    
    base_model_1 = WekaEstimator(classname="weka.classifiers.trees.ADTree",
                                 options=["-B", "10", "-E", "-3", "-S", "1"],
                                 nominal_input_vars="first-last",  # which attributes need to be treated as nominal
                                 nominal_output_var=True)          # class is nominal as well
    model = BaggingClassifier(base_estimator=base_model_1, n_estimators=100, n_jobs=1, random_state=1)
    
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    accuracy_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=None)  # single process!
    print("-------------------------------------------------------")
    print(accuracy_scores)
    print("-------------------------------------------------------")
    print('Mean Accuracy: %.3f' % mean(accuracy_scores))
    
    jvm.stop()
    

    This generates the following output:

    -------------------------------------------------------
    [0.97727273 0.95454545 0.95454545 0.95454545 0.97727273 0.90697674
     1.         0.90697674 0.95348837 0.95348837 0.97727273 0.95454545
     0.90909091 0.88636364 0.97727273 0.97674419 0.97674419 0.97674419
     0.97674419 0.97674419 0.93181818 0.97727273 0.93181818 0.90909091
     1.         1.         1.         0.90697674 0.97674419 0.95348837]
    -------------------------------------------------------
    Mean Accuracy: 0.957
    

    Please note that you might get an exception like "object has no attribute 'decision_function'" when trying to generate other metrics. This article might help with that.
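
    One possible workaround (a sketch, not taken from the linked article): build a ROC AUC scorer that explicitly uses predict_proba instead of decision_function, assuming the wrapped Weka classifier can produce class probabilities for a binary class:

    from sklearn.metrics import make_scorer, roc_auc_score

    # Score via predict_proba rather than decision_function; scikit-learn
    # >= 1.4 replaces needs_proba=True with response_method="predict_proba".
    auc_scorer = make_scorer(roc_auc_score, needs_proba=True)
    auc_scores = cross_val_score(model, X, y, scoring=auc_scorer, cv=cv, n_jobs=None)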

    Finally, a limitation of running a JVM and python-javabridge in the background is that you cannot fork processes to distribute jobs across your cores, hence n_jobs=None.
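
    Since everything has to run in a single process, one way to save wall-clock time (a sketch reusing model, X, y, cv and mean from the example above) is to compute all metrics in one cross-validation pass with cross_validate rather than calling cross_val_score once per metric:

    from sklearn.model_selection import cross_validate

    # One CV pass for several metrics; the *_macro variants avoid the
    # pos_label assumption that 'f1'/'precision'/'recall' make for
    # non-0/1 class labels such as vote.arff's.
    metrics = ['accuracy', 'precision_macro', 'recall_macro', 'f1_macro']
    results = cross_validate(model, X, y, scoring=metrics, cv=cv, n_jobs=None)
    for name in metrics:
        print('Mean %s: %.3f' % (name, mean(results['test_%s' % name])))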