
Using a saved sklearn model on new data


I used scikit-learn's DictVectorizer to build the feature vectors:

import numpy as np
import joblib
from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Perceptron

X = dataset.drop('Tag', axis=1)
y = dataset['Tag']
v = DictVectorizer(sparse=False)
X = v.fit_transform(X.to_dict('records'))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
classes = np.unique(y).tolist()
per = Perceptron(verbose=10, n_jobs=-1, max_iter=5)
per.partial_fit(X_train, y_train, classes)
joblib.dump(per, 'saved_model.pkl')

and saved the trained model to a file. I then load the model in another file to predict on new data:

import joblib
from sklearn.feature_extraction import DictVectorizer

new_X = df
v = DictVectorizer(sparse=False)
new_X = v.fit_transform(new_X.to_dict('records'))

# Load model
per_load = joblib.load('saved_model.pkl')
per_load.predict(new_X)

When I execute this code to predict on the new data, I get a ValueError:

ValueError: X has 43 features per sample; expecting 983

How do I save the model so that I can use it on new data?


Solution

  • You need to save the fitted vectorizer as a pickle object as well, and apply transform rather than fit_transform when predicting, because your vectorizer has already learned the vocabulary during training and that same vocabulary has to be used to transform unseen data. In your prediction script you create a brand-new DictVectorizer and fit it on the new data, so it learns only 43 feature columns instead of the 983 the model was trained on, which is exactly what the error message reports.

    # use joblib to persist the fitted vectorizer alongside the model
    import joblib

    joblib.dump(v, 'vectorizer.pkl')

    # loading the pickled vectorizer in the prediction script
    v = joblib.load('vectorizer.pkl')

    # don't use fit_transform on new data; use transform only, so the training vocabulary is reused
    per_load.predict(v.transform(df.to_dict('records')))
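
  • If you prefer to keep everything in one file, a common alternative is to wrap the DictVectorizer and the Perceptron in a sklearn Pipeline and pickle that single object. A minimal sketch, assuming the same `dataset`, `df`, and 'Tag' column as in the question (the pipeline is fitted with a plain fit call instead of partial_fit):

    import joblib
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import Perceptron

    # training script: the pipeline learns the vocabulary and the weights together
    X = dataset.drop('Tag', axis=1)
    y = dataset['Tag']

    pipe = Pipeline([
        ('vectorizer', DictVectorizer(sparse=False)),
        ('classifier', Perceptron(max_iter=5)),
    ])
    pipe.fit(X.to_dict('records'), y)
    joblib.dump(pipe, 'pipeline.pkl')   # one file holds vectorizer + model

    # prediction script: no refitting needed, the vocabulary travels with the model
    pipe_load = joblib.load('pipeline.pkl')
    predictions = pipe_load.predict(df.to_dict('records'))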