I'm working with a fairly simple dataset that has missing values in both categorical and numeric features. Because of this, I'm trying to use sklearn.impute.KNNImputer to get the most accurate imputation I can. However, when I run the following code:
imputer = KNNImputer(n_neighbors=120)
imputer.fit_transform(x_train)
I get the error: ValueError: could not convert string to float: 'Private'
That makes sense, it obviously can't handle categorical data. But when I try to run OneHotEncoder with:
encoder = OneHotEncoder(drop="first")
encoder.fit_transform(x_train[categorical_features])
It throws the error: ValueError: Input contains NaN
I'd prefer to use KNNImputer even on the categorical data, as I feel I'd lose some accuracy if I just used a ColumnTransformer and imputed the numeric and categorical data separately. Is there any way to get OneHotEncoder to ignore these missing values? If not, is a ColumnTransformer or a simpler imputer a better way of tackling this problem?
Thanks in advance
There are open issues/PRs to handle missing values in OneHotEncoder, but it's not clear yet what the options would be. In the interim, here's a manual approach:

1. Fill the categorical NaNs using SimpleImputer with the constant string "missing", so that missingness becomes its own category.
2. One-hot encode with OneHotEncoder.
3. Use get_feature_names to identify the columns corresponding to each original feature, and in particular the "missing" indicator for each.
4. For each row where a feature's "missing" indicator is 1, replace that feature's one-hot columns with np.nan; then delete the missing-indicator column.
5. Run KNNImputer.
6. Convert the imputed one-hot columns back to 0/1. (If you round after the KNNImputer you could get more than one 1 in a row. You could argmax instead to get back exactly one 1.)