Tags: python, scikit-learn, naive-bayes

sklearn Naive Bayes in python


I have trained a classifier on the 'Rocks and Mines' (sonar) dataset (https://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data). When I calculate the accuracy score, it always seems to be perfectly accurate (the output is 1.0), which I find hard to believe. Am I making a mistake, or is Naive Bayes really this powerful?

import urllib.request
import pandas as pd

url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data'
data = urllib.request.urlopen(url)
# the sonar file has no header row, so read the first line as data
df = pd.read_csv(data, header=None)

# replace R and M with 1 and 0
m = len(df.iloc[:, -1])
Y = df.iloc[:, -1].values
y_val = []
for i in range(m):
    if Y[i] == 'M':
        y_val.append(1)
    else:
        y_val.append(0)
df = df.drop(df.columns[-1], axis = 1) # dropping column containing 'R', 'M'

X = df.values
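As an aside, the R/M conversion loop can be replaced by a single vectorized expression. A minimal sketch, assuming a miniature frame with the same last-column label layout:

```python
import pandas as pd

# Hypothetical miniature frame: last column holds the 'R'/'M' labels
df = pd.DataFrame({'f1': [0.1, 0.2, 0.3], 'label': ['M', 'R', 'M']})

# Vectorized equivalent of the for-loop: 'M' -> 1, everything else -> 0
y_val = (df.iloc[:, -1] == 'M').astype(int).tolist()
```

This avoids the explicit index loop and scales to any number of rows.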

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# initializing the classifier
clf = GaussianNB()
# splitting the data
train_x, test_x, train_y, test_y = train_test_split(X, y_val, test_size=0.33, random_state=42)
# training the classifier
clf.fit(train_x, train_y)
pred = clf.predict(test_x)  # making a prediction
# accuracy_score expects (y_true, y_pred)
score = accuracy_score(test_y, pred)
# printing the accuracy score
print(score)

Here X is the input and y_val is the output (I have converted 'R' and 'M' into 0's and 1's).


Solution

  • This is because of the random_state argument inside the train_test_split() function.
    When you set random_state to an integer, scikit-learn fixes the data shuffling, so the sampling is constant.
    That means every time you run the code with the same random_state, you get exactly the same train/test split — and therefore the same score. This is expected behaviour.
    Refer to the docs for further details.
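    The reproducibility can be seen directly; a small sketch on toy data (names are illustrative):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split

    # Toy data standing in for the sonar features and labels
    X = np.arange(20).reshape(10, 2)
    y = list(range(10))

    # Two calls with the same random_state produce identical splits
    a_train, a_test, _, _ = train_test_split(X, y, test_size=0.3, random_state=42)
    b_train, b_test, _, _ = train_test_split(X, y, test_size=0.3, random_state=42)
    print(np.array_equal(a_train, b_train))  # True: same seed, same split
    ```

    Omitting random_state (or passing None) reshuffles on every call, so the split — and the accuracy score — would then vary from run to run.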