Tags: python, python-3.x, machine-learning, scikit-learn, svm

Support vector machine overfitting my data


I am trying to make predictions on the iris dataset and decided to use SVMs for this purpose, but the model gives me an accuracy of 1.0. Is this a case of overfitting, or is it because the model is very good? Here is my code.

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = datasets.load_iris(return_X_y=True)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
svm_model = svm.SVC(kernel='linear', C=1, gamma='auto')
svm_model.fit(X_train, y_train)
predictions = svm_model.predict(X_test)
accuracy_score(y_test, predictions)

Here, accuracy_score returns a value of 1. Please help me. I am a beginner in machine learning.


Solution

  • You can try cross validation:

    Example:

    from sklearn.model_selection import LeaveOneOut
    from sklearn import datasets
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    
    #load iris data
    iris = datasets.load_iris()
    X = iris.data
    Y = iris.target
    
    #build the model
    svm_model = SVC(kernel='linear', C=1, gamma='auto', random_state=0)
    
    #create the Cross validation object
    loo = LeaveOneOut()
    
    #calculate cross validated (leave one out) accuracy score
    scores = cross_val_score(svm_model, X, Y, cv=loo, scoring='accuracy')
    
    print(scores.mean())
    

    Result (the mean accuracy of the 150 folds since we used leave-one-out):

    0.97999999999999998
    

    Bottom line:

    Cross-validation (especially LeaveOneOut) gives a much more reliable estimate of generalization performance than a single train/test split, so it helps you detect overfitting instead of being misled by one lucky split. Here the cross-validated accuracy is about 0.98 rather than 1.0: with test_size=0.2 the test set holds only 30 samples, and iris is a small, largely linearly separable dataset, so a perfect score on a single split is not surprising and does not by itself mean the model is overfitting.
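
    If leave-one-out gets slow on larger datasets, stratified k-fold cross-validation is a common, cheaper alternative. Here is a minimal sketch of the same model evaluated with 5 folds; the fold count and shuffle settings are just illustrative, and it also prints the per-fold scores so you can see how much the accuracy varies between splits.

    from sklearn import datasets
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    #load iris data
    X, Y = datasets.load_iris(return_X_y=True)

    #same linear SVM as above
    svm_model = SVC(kernel='linear', C=1, gamma='auto', random_state=0)

    #5 stratified folds: each fold keeps the class proportions of the full dataset
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    #accuracy on each of the 5 held-out folds
    scores = cross_val_score(svm_model, X, Y, cv=skf, scoring='accuracy')

    print(scores)                        #per-fold accuracies
    print(scores.mean(), scores.std())   #average accuracy and its spread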