python, machine-learning, scikit-learn, pipeline, data-processing

How to combine text features and categorical features in Python?


I'm trying to build a pipeline that transforms the text features and encodes the categorical features, then combines them so they can be fed into a classifier. I currently have the following class to select the data:

from sklearn.base import BaseEstimator, TransformerMixin

class DataFrameSelector(BaseEstimator, TransformerMixin):
    """Pick out the given column(s) from a pandas DataFrame."""
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        print(X[self.attribute_names].head())  # debug: peek at the selected data
        return X[self.attribute_names]
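
For context, the selector just hands back the named column(s) of the incoming DataFrame; a quick sketch of how it behaves on a made-up two-column frame (the column names here are hypothetical):

import pandas as pd

toy = pd.DataFrame({
    "Text": ["great product", "arrived broken"],
    "Category": ["toys", "electronics"],
})

# a list of names returns a 2-D DataFrame, a single name returns a 1-D Series
print(DataFrameSelector(["Category"]).fit_transform(toy))  # two rows, one column
print(DataFrameSelector("Text").fit_transform(toy))        # Series of raw strings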

Then, using that selector, I have the following FeatureUnion of two Pipelines:

from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.preprocessing import OneHotEncoder

preprocessing = FeatureUnion([
    ("text_pipeline", Pipeline([
        ("select_text", DataFrameSelector(text_features)),
        ("count_vect", CountVectorizer()),
        ("word_count_to_vector", TfidfTransformer()),
    ])),
    ("cat_pipeline", Pipeline([
        ("select_cat", DataFrameSelector(cat_features)),
        ("cat_encoder", OneHotEncoder(sparse=False)),
    ])),
])
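
For reference, FeatureUnion horizontally stacks whatever the two pipelines return, so both branches have to produce one row per input sample. CountVectorizer expects a 1-D sequence of strings, while OneHotEncoder expects a 2-D block of shape (n_samples, n_features); a minimal sketch with a made-up two-row frame (integer category codes, to keep it version-agnostic):

import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OneHotEncoder

toy = pd.DataFrame({"Text": ["great product", "arrived broken"],
                    "Category": [0, 1]})

# CountVectorizer: 1-D sequence of strings -> one row per document
print(CountVectorizer().fit_transform(toy["Text"]).shape)      # (2, 4)

# OneHotEncoder: 2-D block of shape (n_samples, n_features)
print(OneHotEncoder().fit_transform(toy[["Category"]]).shape)  # (2, 2)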

When executing full_pipeline.fit_transform(X_train), I get the following error:

ValueError                                Traceback (most recent call last)
<ipython-input-69-6927adc0ed62> in <module>()
     22 ])
     23 
---> 24 full_pipeline.fit_transform(X_train)

/anaconda3/lib/python3.6/site-packages/sklearn/pipeline.py in fit_transform(self, X, y, **fit_params)
    298         Xt, fit_params = self._fit(X, y, **fit_params)
    299         if hasattr(last_step, 'fit_transform'):
--> 300             return last_step.fit_transform(Xt, y, **fit_params)
    301         elif last_step is None:
    302             return Xt

/anaconda3/lib/python3.6/site-packages/sklearn/pipeline.py in fit_transform(self, X, y, **fit_params)
    798         self._update_transformer_list(transformers)
    799         if any(sparse.issparse(f) for f in Xs):
--> 800             Xs = sparse.hstack(Xs).tocsr()
    801         else:
    802             Xs = np.hstack(Xs)

/anaconda3/lib/python3.6/site-packages/scipy/sparse/construct.py in hstack(blocks, format, dtype)
    462 
    463     """
--> 464     return bmat([blocks], format=format, dtype=dtype)
    465 
    466 

/anaconda3/lib/python3.6/site-packages/scipy/sparse/construct.py in bmat(blocks, format, dtype)
    583                                                     exp=brow_lengths[i],
    584                                                     got=A.shape[0]))
--> 585                     raise ValueError(msg)
    586 
    587                 if bcol_lengths[j] == 0:

ValueError: blocks[0,:] has incompatible row dimensions. Got blocks[0,1].shape[0] == 1, expected 19634.

I can't figure out what I'm doing wrong. Any help is appreciated.


Solution

  • So I got it working by using hstack from scipy.sparse to concatenate the two sparse matrices. See the code below:

    from scipy.sparse import hstack
    from sklearn.preprocessing import OneHotEncoder

    # TF-IDF features for the text column
    with_prod_tfidf = text_pipeline.fit_transform(with_prod['Text'])

    # as per https://stackoverflow.com/questions/19710602/concatenate-sparse-matrices-in-python-using-scipy-numpy
    # stack the one-hot encoded categorical columns next to the TF-IDF matrix
    with_prod_all = hstack([with_prod_tfidf, OneHotEncoder().fit_transform(with_prod[cat_features])])
    print(with_prod_all.shape)
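
  • On scikit-learn 0.20 or newer, the same combination can also live inside a single estimator via ColumnTransformer, which does the column selection and the stacking itself. A minimal sketch, assuming the same with_prod frame, its 'Text' column, and the cat_features list used above:

    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.preprocessing import OneHotEncoder

    preprocessing = ColumnTransformer([
        # a single column name (a string) hands TfidfVectorizer the 1-D text column
        ("text", TfidfVectorizer(), "Text"),
        # a list of column names hands OneHotEncoder a 2-D block of categoricals
        ("cat", OneHotEncoder(handle_unknown="ignore"), cat_features),
    ])

    with_prod_all = preprocessing.fit_transform(with_prod)
    print(with_prod_all.shape)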