I created a scikit-learn model similar to the one below, but now I want it to produce two outputs, and I don't know how to pass them while training. I tried doing it the way Keras does, passing [y, z] as a list, but it's not working in scikit-learn. Has anyone tried this before?
import numpy as np
from sklearn import linear_model
X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
Y = np.array([1, 1, 2, 2])
Z = np.array([1, 1, 2, 2])
clf = linear_model.SGDClassifier(max_iter=1000)
clf.fit(X, [Y, Z])
Output:
ValueError: bad input shape (2, 4)
First of all, your target [Y, Z] is not what you think it is:
[Y, Z]
# [array([1, 1, 2, 2]), array([1, 1, 2, 2])]
Arguably, what you want should have four rows, like your X, i.e.
W = np.array([[1, 1], [1, 1], [2, 2], [2, 2]])
W
# result:
array([[1, 1],
[1, 1],
[2, 2],
[2, 2]])
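If you already have the targets as the separate arrays Y and Z, you don't need to type W out by hand; stacking them column-wise with np.column_stack gives the same (n_samples, n_outputs) array:

```python
import numpy as np

Y = np.array([1, 1, 2, 2])
Z = np.array([1, 1, 2, 2])

# Stack the two 1-D targets side by side into one (4, 2) array
W = np.column_stack([Y, Z])
print(W.shape)  # (4, 2)
```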
But even with this change, you will again get a similar error:
clf.fit(X, W)
[...]
ValueError: bad input shape (4, 2)
because, as clearly mentioned in the SGDClassifier documentation, your dependent variable y should have a single column:

fit(X, y, coef_init=None, intercept_init=None, sample_weight=None)

    y : numpy array, shape (n_samples,)
        Target values
What you are actually looking for is scikit-learn's MultiOutputClassifier for multioutput classification:
from sklearn.multioutput import MultiOutputClassifier
sgd = linear_model.SGDClassifier(max_iter=1000)
multi_target_sgd = MultiOutputClassifier(sgd, n_jobs=-1)
multi_target_sgd.fit(X, W)
The fit now works OK, giving the following output:
MultiOutputClassifier(estimator=SGDClassifier(alpha=0.0001, average=False, class_weight=None, epsilon=0.1,
eta0=0.0, fit_intercept=True, l1_ratio=0.15,
learning_rate='optimal', loss='hinge', max_iter=1000, n_iter=None,
n_jobs=1, penalty='l2', power_t=0.5, random_state=None,
shuffle=True, tol=None, verbose=0, warm_start=False),
n_jobs=-1)
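Once fitted, predict returns one column per target. A quick sketch (the exact predicted labels depend on the SGD random state, so only the shape is guaranteed here):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.multioutput import MultiOutputClassifier

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
W = np.array([[1, 1], [1, 1], [2, 2], [2, 2]])

multi_target_sgd = MultiOutputClassifier(SGDClassifier(max_iter=1000, tol=1e-3),
                                         n_jobs=-1)
multi_target_sgd.fit(X, W)

# One prediction column per target, matching the shape of W
pred = multi_target_sgd.predict(X)
print(pred.shape)  # (4, 2)
```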
Just keep in mind that the subject classifier does not do anything more sophisticated than fitting one classifier per single target output; from the docs again:
Multi target classification
This strategy consists of fitting one classifier per target. This is a simple strategy for extending classifiers that do not natively support multi-target classification.
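In other words, up to parallelization, MultiOutputClassifier behaves like the following loop; this is an illustrative sketch, not the library's actual implementation:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import SGDClassifier

X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
W = np.array([[1, 1], [1, 1], [2, 2], [2, 2]])

base = SGDClassifier(max_iter=1000, tol=1e-3, random_state=0)

# Fit one independent clone of the base estimator per target column
estimators = [clone(base).fit(X, W[:, i]) for i in range(W.shape[1])]

# Gather predictions column-wise, mirroring MultiOutputClassifier.predict
pred = np.column_stack([est.predict(X) for est in estimators])
print(pred.shape)  # (4, 2)
```

Because the per-target classifiers are fitted independently, this approach cannot exploit correlations between the targets.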