
ValueError in categorising String data


I am trying to solve the Titanic problem on Kaggle (https://www.kaggle.com/c/titanic). I am trying to encode the "Sex" column categorically using the LabelEncoder and OneHotEncoder classes from the sklearn.preprocessing library. Here is my code:

# Importing data analysis libraries
import pandas as pd
import numpy as np
import random as rnd

# Importing data visualization libraries
import seaborn as sns
import matplotlib.pyplot as plt

# Importing Machine Learning Libraries
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier

# Getting the datasets
train = pd.read_csv('./input/train.csv')
test = pd.read_csv('./input/test.csv')
combine = [train, test]

# Feature visualizations
g = sns.FacetGrid(train, col='Survived')
g.map(plt.hist, 'Age', bins=20)

grid = sns.FacetGrid(train, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()

g = sns.FacetGrid(train, col='Survived')
g.map(plt.hist, 'Parch', bins=20)

g = sns.FacetGrid(train, col='Survived')
g.map(plt.hist, 'SibSp', bins=20)

g = sns.FacetGrid(train, col='Survived')
g.map(plt.hist, 'Fare', bins=20)

g = sns.FacetGrid(train, col='Survived')
g.map(plt.hist, 'Sex', bins=20)

# taking care of missing values
train.fillna(train.median(), inplace = True)

# Categorising Embarked and Sex features
# train['Embarked'] = train['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} )
# train['Sex'] = train['Sex'].map( {'male': 0, 'female': 1} )

# Data preprocessing
X_train = train.iloc[:, [0, 2, 4, 5, 6, 7, 9]].values
y_train = train.iloc[:, [1]].values
X_test  = test.iloc[:, [1, 3, 4, 5, 6, 8]].values

from sklearn.preprocessing import Imputer, LabelEncoder, OneHotEncoder, StandardScaler
labelencoder_X=LabelEncoder()
X_train[:, 0]=labelencoder_X.fit_transform(X_train[:, 0])
onehotencoder=OneHotEncoder(categorical_features=[0])
X_train=onehotencoder.fit_transform(X_train).toarray()

When I execute the last five lines, I get the following error:

Traceback (most recent call last):

  File "<ipython-input-58-770fc19a6644>", line 5, in <module>
    X_train=onehotencoder.fit_transform(X_train).toarray()

  File "C:\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 2019, in fit_transform
    self.categorical_features, copy=True)

  File "C:\Anaconda3\lib\site-packages\sklearn\preprocessing\data.py", line 1809, in _transform_selected
    X = check_array(X, accept_sparse='csc', copy=copy, dtype=FLOAT_DTYPES)

  File "C:\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 433, in check_array
    array = np.array(array, dtype=dtype, order=order, copy=copy)

ValueError: could not convert string to float: 'male'

What is my mistake? Is there an alternative technique for efficiently encoding categorical data?


Solution

  • OneHotEncoder expects numeric values — that's why it complains that it "could not convert string to float: 'male'". Note also that in your slice `train.iloc[:, [0, 2, 4, 5, 6, 7, 9]]`, position 0 is "PassengerId" and "Sex" ends up at position 2, so the LabelEncoder is being applied to the wrong column: the string values in `X_train[:, 2]` are still there when OneHotEncoder validates the whole array. Encode `X_train[:, 2]` and use `categorical_features=[2]` instead.

    You can first use LabelEncoder to encode the non-numeric values as integers and then apply OneHotEncoder,

    or use LabelBinarizer to one-hot encode a single non-numeric column in one step.
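
    A minimal sketch of both options on a toy "Sex" column (standing in for the relevant column of `X_train`; the values below are illustrative, not taken from the actual dataset):

    ```python
    import numpy as np
    from sklearn.preprocessing import LabelEncoder, OneHotEncoder, LabelBinarizer

    # Toy string column standing in for X_train[:, 2]
    sex = np.array(['male', 'female', 'female', 'male'])

    # Option 1: LabelEncoder maps strings to integers (classes are sorted
    # alphabetically, so 'female' -> 0, 'male' -> 1), then OneHotEncoder
    # expands the integer column into one indicator column per class.
    labels = LabelEncoder().fit_transform(sex)          # [1, 0, 0, 1]
    onehot = OneHotEncoder().fit_transform(labels.reshape(-1, 1)).toarray()

    # Option 2: LabelBinarizer does both steps for a single column; for a
    # binary feature it returns one 0/1 column rather than two.
    binary = LabelBinarizer().fit_transform(sex)
    ```

    For a two-class feature like "Sex", the single 0/1 column from LabelBinarizer (or the commented-out `map` approach above) is usually enough; the two-column one-hot form matters for features with three or more categories, such as "Embarked".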