I have a CSV of Twitter profile data containing: name, description, followers count, following count, and bot (the class I want to predict).
I have successfully trained a classification model using just the CountVectorizer values (xtrain) and bot (ytrain), but I have not been able to combine this feature with my other features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from scipy import sparse

vectorizer = CountVectorizer()
CountVecTest = vectorizer.fit_transform(training_data.description.values.astype('U'))
CountVecTest = CountVecTest.toarray()
arr = sparse.coo_matrix(CountVecTest)
training_data["NewCol"] = arr.toarray().tolist()

rf = RandomForestClassifier(criterion='entropy', min_samples_leaf=10, min_samples_split=20)
rf = rf.fit(training_data[["followers_count","friends_count","NewCol","bot"]], training_data.bot)
ERROR:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-54-7d67a6586592> in <module>()
1 rf = RandomForestClassifier(criterion='entropy', min_samples_leaf=10, min_samples_split=20)
----> 2 rf = rf.fit(training_data[["followers_count","friends_count","NewCol","bot"]], training_data.bot)
D:\0_MyFiles\0_Libraries\Documents\Anaconda3\lib\site-packages\sklearn\ensemble\forest.py in fit(self, X, y, sample_weight)
245 """
246 # Validate or convert input data
--> 247 X = check_array(X, accept_sparse="csc", dtype=DTYPE)
248 y = check_array(y, accept_sparse='csc', ensure_2d=False, dtype=None)
249 if sample_weight is not None:
D:\0_MyFiles\0_Libraries\Documents\Anaconda3\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator)
431 force_all_finite)
432 else:
--> 433 array = np.array(array, dtype=dtype, order=order, copy=copy)
434
435 if ensure_2d:
ValueError: setting an array element with a sequence.
I did some debugging:
print(type(training_data.NewCol))
print(type(training_data.NewCol[0]))
>>> <class 'pandas.core.series.Series'>
>>> <class 'numpy.ndarray'>
Any help would be appreciated.
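The error itself comes from the fact that every cell of NewCol holds an entire array, so the frame you pass to fit becomes an object array of sequences that check_array cannot coerce into a 2-D float matrix. A minimal reproduction of that situation (with made-up numbers, just to illustrate):

import numpy as np
import pandas as pd

toy = pd.DataFrame({"followers_count": [10, 20],
                    "NewCol": [np.array([0, 1, 1]), np.array([1, 0, 1])]})
# toy.values is an object array whose cells contain sequences,
# so converting it to a float matrix fails
np.asarray(toy.values, dtype=np.float32)  # ValueError: setting an array element with a sequence.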
I would do this the other way around and add your features to your vectorization. Here is what I mean with a toy example:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
from scipy.sparse import hstack, csr_matrix
Suppose now you have your features in a dataframe called df and your labels in y_train:
df = pd.DataFrame({"a":[1,2],"b":[2,3],"c":['we love cars', 'we love cakes']})
y_train = np.array([0,1])
You want to perform a text vectorization on column c and add the features a and b to your vectorization.
vectorizer = CountVectorizer()
CountVecTest = vectorizer.fit_transform(df.c)
CountVecTest.toarray()
This will return:
array([[0, 1, 1, 1],
[1, 0, 1, 1]], dtype=int64)
But CountVecTest is now a scipy sparse matrix. So what you need to do is add your features to this matrix, like this:
X_train = hstack([CountVecTest, csr_matrix(df[['a','b']])])
X_train.toarray()
This will return, as expected:
array([[0, 1, 1, 1, 1, 2],
[1, 0, 1, 1, 2, 3]], dtype=int64)
Then you can train your random forest:
rf = RandomForestClassifier(criterion='entropy', min_samples_leaf=10, min_samples_split=20)
rf.fit(X_train, y_train)
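One thing worth keeping in mind (not shown above): at prediction time the test descriptions must go through the same fitted vectorizer with transform (not fit_transform), and the numeric columns have to be stacked in the same order. A sketch, using a hypothetical df_test with the same columns as df:

df_test = pd.DataFrame({"a": [3], "b": [4], "c": ["we love bikes"]})
# reuse the already-fitted vectorizer, then stack the numeric columns in the same order
X_test = hstack([vectorizer.transform(df_test.c), csr_matrix(df_test[["a", "b"]])])
rf.predict(X_test)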
NB: In the code snippet you provided, you passed the label info (the "bot" column) to the training features, which you should obviously not do.
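Mapped back to your own dataframe, the assembly would look roughly like this (column names taken from your snippet; bot is used only as the target, not as a feature):

CountVecTest = vectorizer.fit_transform(training_data.description.values.astype('U'))
# keep the text features sparse and stack the numeric columns next to them
X_train = hstack([CountVecTest, csr_matrix(training_data[["followers_count", "friends_count"]])])

rf = RandomForestClassifier(criterion='entropy', min_samples_leaf=10, min_samples_split=20)
rf.fit(X_train, training_data.bot)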