I do this:
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import classification_report, accuracy_score

# Fit a linear classifier trained with stochastic gradient descent.
sgclass = SGDClassifier(random_state=10)
sgclass.fit(X_train, y_train)

# Predict on the held-out set and report precision/recall and accuracy.
pred = sgclass.predict(X_test)
print(classification_report(y_test, pred))
print(accuracy_score(y_test, pred))
These are useful reports on the recall and precision of the model.
However, how do I obtain the most influential independent variables in predicting the dependent variable? I started with about 12 candidates and want to see their rank order of influence in the model.
As the documentation states, you can use the coef_ attribute to get the feature weights. The greater the absolute value of a feature's coefficient, the greater its importance.
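For example, here is a minimal sketch of ranking the features by absolute coefficient. It assumes X_train is a pandas DataFrame whose columns are your 12 candidate variables; otherwise substitute your own list of feature names.

import numpy as np

# Column names of the candidate predictors (assumes a DataFrame).
feature_names = np.array(X_train.columns)

# coef_ has shape (1, n_features) for binary problems and
# (n_classes, n_features) for multiclass, so average the absolute
# values across rows to get one importance score per feature.
importance = np.abs(sgclass.coef_).mean(axis=0)

# Print features from most to least influential.
for name, weight in sorted(zip(feature_names, importance), key=lambda t: t[1], reverse=True):
    print(name, weight)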
You can see this in action in scikit-learn's feature-selection class SelectFromModel: it selects the best features from any estimator that exposes a feature_importances_ or coef_ attribute.
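A minimal sketch of that approach, reusing X_train/y_train from your code; the "mean" threshold is just one possible cutoff:

from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import SGDClassifier

# Wrap the classifier in SelectFromModel; features whose |coef_| falls
# below the threshold (here the mean absolute coefficient) are dropped.
selector = SelectFromModel(SGDClassifier(random_state=10), threshold="mean")
selector.fit(X_train, y_train)

# Boolean mask of the retained features and the reduced matrices.
print(selector.get_support())
X_train_reduced = selector.transform(X_train)
X_test_reduced = selector.transform(X_test)

Note that SelectFromModel only keeps or drops features relative to a threshold; for the full rank ordering, sort the absolute coefficients as shown above.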