Tags: python, pandas, machine-learning, decision-tree, valueerror

ValueError: Number of labels=1993 does not match number of samples=1994


Hi, I am new to machine learning and working on a fun project on crime prediction. I ran into an error earlier, which is now fixed, but unfortunately the following code block raises a new error. I am using the datasets provided in the UCI ML Repository. I have checked similar posts but haven't found a relevant solution.

import pandas as pd
import numpy as np
from sklearn import tree
from sklearn.model_selection import cross_val_score

df_d=pd.read_csv('communities-crime-full.csv')
df
df['highCrime'] = np.where(df['ViolentCrimesPerPop']>0.1, 1, 0)
Y = df['highCrime']

# print('total len is ',len(Y))
initial=pd.read_csv('communities-crime-full.csv')
initial = initial.drop('communityname', 1)
initial = initial.drop('ViolentCrimesPerPop', 1)
initial = initial.drop('fold', 1)
initial = initial.drop('state', 1)
initial = initial.drop('community', 1)
initial = initial.drop('county', 1)
skipinitialspace = True

feature_name=list(initial)
#initial=initial.convert_objects(convert_numeric=True)
initial = initial.apply(pd.to_numeric, errors='coerce')
New_data=initial.fillna(initial.mean())
# print('before...')
# print(initial)
# print('after...')
# print(New_data)  
clf = tree.DecisionTreeClassifier(max_depth=3)
# clf = tree.DecisionTreeClassifier()
clf = clf.fit(New_data, Y)
clf
fold=df['fold']
scores = cross_val_score(clf, New_data, Y,fold,'accuracy',10)
print('cross_val_accuracy is ',scores) 
print('cross_val_accuracy_avg is ',np.array(scores).mean()) 
scores = cross_val_score(clf, New_data, Y,fold,'precision',10)
print('cross_val_precision is ',scores) 
print('cross_val_precision_avg is ',np.array(scores).mean()) 
scores = cross_val_score(clf, New_data, Y,fold,'recall',10)
print('cross_val_recall is ',scores) 
print('cross_val_recall_avg is ',np.array(scores).mean()) 

The error:

ValueError                                Traceback (most recent call last)
<ipython-input-15-444381be2864> in <module>()
     25 clf = tree.DecisionTreeClassifier(max_depth=3)
     26 # clf = tree.DecisionTreeClassifier()
---> 27 clf = clf.fit(New_data, Y)
     28 clf
     29 fold=df['fold']

/root/.local/lib/python3.7/site-packages/sklearn/tree/_classes.py in fit(self, X, y, sample_weight, check_input, X_idx_sorted)
    281         if len(y) != n_samples:
    282             raise ValueError("Number of labels=%d does not match "
--> 283                              "number of samples=%d" % (len(y), n_samples))
    284         if not 0 <= self.min_weight_fraction_leaf <= 0.5:
    285             raise ValueError("min_weight_fraction_leaf must in [0, 0.5]")

ValueError: Number of labels=1993 does not match number of samples=1994

Solution

  • The error indicates that you have one fewer label than samples: `fit` received 1994 rows of data in `X` but only 1993 target values in `y`.

    However, I think that is not actually your problem. It seems you were accidentally using data you had previously loaded into RAM, which had a mismatched number of rows.
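    Before calling `fit`, a one-line sanity check on the shapes makes this failure mode obvious. A minimal sketch with synthetic stand-ins for the real DataFrames (the sizes mirror the error message; the data itself is made up):

    ```python
    import numpy as np
    import pandas as pd

    # Synthetic stand-ins: 1994 feature rows but only 1993 labels,
    # mirroring "Number of labels=1993 does not match number of samples=1994"
    X = pd.DataFrame(np.random.rand(1994, 5))
    y = pd.Series(np.random.randint(0, 2, size=1993))

    # Guard before fitting: DecisionTreeClassifier.fit requires len(X) == len(y)
    if len(X) != len(y):
        print('mismatch:', len(X), 'samples vs', len(y), 'labels')
    ```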

    In your code there is:

    df_d=pd.read_csv('communities-crime-full.csv')
    

    That should probably be:

    df=pd.read_csv('communities-crime-full.csv')
    

    The resulting code will be:

    import pandas as pd
    import numpy as np
    from sklearn import tree
    from sklearn.model_selection import cross_val_score
    
    df=pd.read_csv('communities-crime-full.csv')
    df['highCrime'] = np.where(df['ViolentCrimesPerPop']>0.1, 1, 0)
    Y = df['highCrime']
    
    # print('total len is ',len(Y))
    initial=pd.read_csv('communities-crime-full.csv')
    # drop identifier and target columns in one call
    initial = initial.drop(columns=['communityname', 'ViolentCrimesPerPop',
                                    'fold', 'state', 'community', 'county'])
    
    feature_name = list(initial)
    # coerce every column to numeric, then fill missing values with column means
    initial = initial.apply(pd.to_numeric, errors='coerce')
    New_data = initial.fillna(initial.mean())
    # print('before...')
    # print(initial)
    # print('after...')
    # print(New_data)  
    clf = tree.DecisionTreeClassifier(max_depth=3)
    # clf = tree.DecisionTreeClassifier()
    clf = clf.fit(New_data, Y)
    clf
    fold=df['fold']
    scores = cross_val_score(clf, New_data, Y, groups=fold, scoring='accuracy', cv=10)
    print('cross_val_accuracy is ', scores)
    print('cross_val_accuracy_avg is ', scores.mean())
    scores = cross_val_score(clf, New_data, Y, groups=fold, scoring='precision', cv=10)
    print('cross_val_precision is ', scores)
    print('cross_val_precision_avg is ', scores.mean())
    scores = cross_val_score(clf, New_data, Y, groups=fold, scoring='recall', cv=10)
    print('cross_val_recall is ', scores)
    print('cross_val_recall_avg is ', scores.mean())
    

    This results in:

    cross_val_accuracy is  [0.81       0.825      0.805      0.8        0.82914573 0.77386935
     0.85427136 0.83417085 0.80904523 0.8040201 ]
    cross_val_accuracy_avg is  0.8144522613065327
    cross_val_precision is  [0.90740741 0.86290323 0.84677419 0.84       0.85826772 0.85714286
     0.92105263 0.92592593 0.85950413 0.90566038]
    cross_val_precision_avg is  0.8784638467535306
    cross_val_recall is  [0.77777778 0.856      0.84       0.84       0.872      0.768
     0.84       0.8        0.832      0.768     ]
    cross_val_recall_avg is  0.8193777777777778
    

    Looks like some learning is indeed happening!
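
    One forward-compatibility note: in recent scikit-learn releases the arguments of `cross_val_score` after `y` (`groups`, `scoring`, `cv`, ...) are keyword-only, so it is safest to pass them by name. A self-contained sketch on synthetic data (the dataset here is made up, not the crime data):

    ```python
    import numpy as np
    from sklearn import tree
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((200, 5))
    y = (X[:, 0] > 0.5).astype(int)  # label depends only on the first feature

    clf = tree.DecisionTreeClassifier(max_depth=3)
    # scoring and cv passed by keyword; groups is only needed for group-aware splitters
    scores = cross_val_score(clf, X, y, scoring='accuracy', cv=10)
    print('cross_val_accuracy_avg is', scores.mean())
    ```

    Because the label is a simple threshold on one feature, a shallow tree should score well here, which makes it a quick smoke test for the cross-validation wiring.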