Tags: python, function, machine-learning, scikit-learn, logistic-regression

How can I iterate over a list of models in Python with scikit-learn?


I built a function that displays some evaluation metrics for a single model, and now I want to apply this function to a pool of models I have estimated.

The inputs of the old function were:

OldFunction(code: str, x, X_train: np.array, X_test: np.array, X: pd.DataFrame)

Where:

code is a string used to create the column name in the dataframe
x is the model name
X_train and X_test are np.arrays from the train/test split
X is the dataframe with the whole dataset
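
For reference, a call to the old function would have looked something like this (the argument values here are hypothetical):

OldFunction('logreg', logreg_model, X_train, X_test, X)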

To estimate the metrics for a pool of models, I tried to modify my function by adding a loop and putting the models in a list.

But it doesn't work.

The problem is that I can't iterate over the list of models, so what options do I have? Do you have any ideas?

I have included the new function below.

import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, auc
from sklearn.metrics import accuracy_score, recall_score, precision_score
from sklearn.model_selection import cross_val_score

def displaymetrics(code: list, models: list, X_train: np.array, X_test: np.array, X: pd.DataFrame):
    for i in models:      
        
        y_score = models[i].fit(X_train, y_train).decision_function(X_test)
        fpr, tpr, _ = roc_curve(y_test, y_score)
        roc_auc = auc(fpr, tpr)
        
        # Traditional Scores
        
        y_pred = pd.DataFrame(model[i].predict(X_train)).reset_index(drop=True)
        Recall_Train,Precision_Train, Accuracy_Train  = recall_score(y_train, y_pred), precision_score(y_train, y_pred), accuracy_score(y_train, y_pred)
        y_pred = pd.DataFrame(model[i].predict(X_test)).reset_index(drop=True)
        Recall_Test = recall_score(y_test, y_pred)
        Precision_Test = precision_score(y_test, y_pred)
        Accuracy_Test = accuracy_score(y_test, y_pred)
        
        #Cross Validation
        cv_au = cross_val_score(models[i], X_test, y_test, cv=30, scoring='roc_auc')
        cv_f1 = cross_val_score(models[i], X_test, y_test, cv=30, scoring='f1')
        cv_pr = cross_val_score(models[i], X_test, y_test, cv=30, scoring='precision')
        cv_re = cross_val_score(models[i], X_test, y_test, cv=30, scoring='recall')
        cv_ac = cross_val_score(models[i], X_test, y_test, cv=30, scoring='accuracy')
        cv_ba = cross_val_score(models[i], X_test, y_test, cv=30, scoring='balanced_accuracy')
        cv_au_m, cv_au_std =  cv_au.mean() , cv_au.std() 
        cv_f1_m, cv_f1_std = cv_f1.mean() , cv_f1.std()
        cv_pr_m, cv_pr_std = cv_pr.mean() , cv_pr.std()
        cv_re_m, cv_re_std= cv_re.mean() , cv_re.std()
        cv_ac_m, cv_ac_std = cv_ac.mean() , cv_ac.std()
        cv_ba_m, cv_ba_std= cv_ba.mean() , cv_ba.std()
        cv_au, cv_f1, cv_pr =  (cv_au_m, cv_au_std),  (cv_f1_m, cv_f1_std), (cv_pr_m, cv_pr_std) 
        cv_re, cv_ac, cv_ba = (cv_re_m, cv_re_std), (cv_ac_m, cv_ac_std), (cv_ba_m, cv_ba_std)
        tuples = [cv_au, cv_f1, cv_pr, cv_re, cv_ac, cv_ba]
        tuplas = [0]*len(tuples)
        for i in range(len(tuples)):
            tuplas[i] = [round(x,4) for x in tuples[i]]
        results = pd.DataFrame()
        results['Metrics'] = ['roc_auc', 'Accuracy_Train', 'Precision_Train', 'Recall_Train', 'Accuracy_Test', 
                              'Precision_Test','Recall_Test', 'cv_roc-auc (mean, std)', 'cv_f1score(mean, std)', 
                              'cv_precision (mean, std)', 'cv_recall (mean, std)', 'cv_accuracy (mean, std)', 
                              'cv_bal_accuracy (mean, std)']
        results.set_index(['Metrics'], inplace=True)
        results['Model_'+code[i]] = [roc_auc, Accuracy_Train, Precision_Train, Recall_Train, Accuracy_Test, 
                            Precision_Test, Recall_Test, tuplas[0], tuplas[1], tuplas[2], tuplas[3],
                           tuplas[4], tuplas[5]]
    
    return results

The output should be a dataframe where each column represents a model and each row a metric.


Solution

  • You should probably mention whether you got an error or just incorrect output. I will assume that you got an error.

    Are you sure that you are passing models as a list when calling displaymetrics?

    E.g.

    models = [model1, model2, ...]
    displaymetrics(code, models, X_train, X_test, X)
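
    For instance, with the estimators from your tags, the list could be built like this (the estimator choices and variable names here are only illustrative):

    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC

    model1 = LogisticRegression(max_iter=1000)
    model2 = SVC()  # both expose decision_function, which your code relies on
    models = [model1, model2]
    code = ['logreg', 'svc']  # one label per model, used for the column names

    results = displaymetrics(code, models, X_train, X_test, X)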
    

    Also, there is an error in your code: you call models[i].fit(...), but i is already a model object, not an index. You should either do i.fit(...) or, better, rename the loop variable, since i conventionally refers to an index. (Use for i in range(len(models)): ... if you want to iterate over the indexes of the list.)
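
    Since you also need the index to build 'Model_' + code[i], one option worth considering is enumerate, which yields the index and the model together. A minimal sketch, assuming y_train is in scope as in your original function:

    for i, model in enumerate(models):
        y_score = model.fit(X_train, y_train).decision_function(X_test)
        column_name = 'Model_' + code[i]  # the index is still available for the label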

    Note: you shouldn't import pandas and numpy on every model iteration. I also suggest putting all the imports (including the sklearn modules) at the top of your code.

    So, I think your code should look like this:

    import numpy as np
    import pandas as pd
    from sklearn.metrics import roc_curve, auc
    from sklearn.metrics import accuracy_score, recall_score, precision_score
    from sklearn.model_selection import cross_val_score
    
    def displaymetrics(code: list, models: list, X_train: np.array, X_test: np.array, X: pd.DataFrame):
        for model in models:  # or for i in range(0, len(models)):
            y_score = model.fit(X_train, y_train).decision_function(X_test)
            # or y_score = models[i].fit(X_train, y_train).decision_function(X_test)
            fpr, tpr, _ = roc_curve(y_test, y_score)
            # etc etc
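
    For completeness, a rough sketch of how the loop could collect one column per model is below. It only covers the non-cross-validation metrics to keep it short, it assumes y_train and y_test are available in the surrounding scope (as in your original function), and it creates the results DataFrame once before the loop, because recreating it inside the loop would drop the columns of earlier models:

    # uses the imports shown above
    def displaymetrics(code: list, models: list, X_train: np.array, X_test: np.array):
        results = pd.DataFrame()
        results['Metrics'] = ['roc_auc', 'Accuracy_Train', 'Precision_Train', 'Recall_Train',
                              'Accuracy_Test', 'Precision_Test', 'Recall_Test']
        results.set_index(['Metrics'], inplace=True)

        for i, model in enumerate(models):
            # fit once, then score the test set for the ROC curve
            y_score = model.fit(X_train, y_train).decision_function(X_test)
            fpr, tpr, _ = roc_curve(y_test, y_score)
            roc_auc = auc(fpr, tpr)

            y_pred_train = model.predict(X_train)
            y_pred_test = model.predict(X_test)

            # one column per model, one value per metric row
            results['Model_' + code[i]] = [
                roc_auc,
                accuracy_score(y_train, y_pred_train),
                precision_score(y_train, y_pred_train),
                recall_score(y_train, y_pred_train),
                accuracy_score(y_test, y_pred_test),
                precision_score(y_test, y_pred_test),
                recall_score(y_test, y_pred_test),
            ]

        return results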
    

    Try editing your code to show us how you call displaymetrics and with what arguments.