I have this code with two feature extractors. How do I train the classifier on both features together?
from textblob import TextBlob, Word, Blobber
from textblob.classifiers import NaiveBayesClassifier
from textblob.taggers import NLTKTagger
import re
import nltk

def get_word_before_you_feature(mystring):
    # Feature: the word immediately before 'you'.
    keyword = 'you'
    before_keyword, keyword, after_keyword = mystring.partition(keyword)
    before_keyword = before_keyword.rsplit(None, 1)[-1]
    return {'word_before_you': before_keyword}

def get_word_after_you_feature(mystring):
    # Feature: the word immediately after 'you'.
    keyword = 'you'
    before_keyword, keyword, after_keyword = mystring.partition(keyword)
    after_keyword = after_keyword.split(None, 1)[0]
    return {'word_after_you': after_keyword}
classifier = nltk.NaiveBayesClassifier.train(train)
lang_detector = NaiveBayesClassifier(train, feature_extractor=get_word_after_you_feature)
lang_detector = NaiveBayesClassifier(train, feature_extractor=get_word_before_you_feature)
print(lang_detector.accuracy(test))
print(lang_detector.show_informative_features(5))
This is the output I get:

word_before_you = 'do'            refere : generi =    2.2 : 1.0
word_before_you = 'when'          generi : refere =    1.1 : 1.0
It only seems to use the last feature. How do I get the classifier to train on both features instead of just one?
You are defining lang_detector twice, and the second definition simply overrides the first. Define a single feature extractor function that returns a dictionary of features, with each feature name as a key. In your case, you would define get_word_features(mystring) and have it return a dictionary like this:
return {
    'word_after_you': after_keyword,
    'word_before_you': before_keyword
}
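For concreteness, here is a minimal sketch of such a combined extractor. It reuses the partition logic from your two functions; the empty-string fallbacks for sentences where 'you' is missing or at an edge are an addition of mine, not something your original code did:

def get_word_features(mystring):
    # Split the text around the first occurrence of 'you'.
    keyword = 'you'
    before_keyword, keyword, after_keyword = mystring.partition(keyword)
    # Last word before 'you' and first word after it; fall back to ''
    # when there is nothing on that side (assumed behaviour).
    before_words = before_keyword.rsplit(None, 1)
    after_words = after_keyword.split(None, 1)
    return {
        'word_before_you': before_words[-1] if before_words else '',
        'word_after_you': after_words[0] if after_words else ''
    }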
The rest is as you've been doing it: pass the feature detector function to the classifier's constructor, and examine the results.
lang_detector = NaiveBayesClassifier(train, feature_extractor=get_word_features)
lang_detector.show_informative_features(5)
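If it helps, a small end-to-end run could look like the following. The train and test lists are made-up placeholders for your labelled sentences (your real labels appear to be the ones truncated to "refere"/"generi" in your output), and get_word_features is the combined extractor sketched above:

from textblob.classifiers import NaiveBayesClassifier

# Made-up example data -- substitute your own labelled train/test sets.
train = [
    ('what do you want to eat?', 'label_a'),
    ('when you arrive, call me', 'label_b'),
    ('can you help me with this?', 'label_a'),
    ('when you finish, send it over', 'label_b'),
]
test = [
    ('what do you think?', 'label_a'),
    ('when you leave, lock the door', 'label_b'),
]

# get_word_features is the single extractor that returns both features.
lang_detector = NaiveBayesClassifier(train, feature_extractor=get_word_features)
print(lang_detector.accuracy(test))
lang_detector.show_informative_features(5)

With one extractor returning both keys, every training example contributes both word_before_you and word_after_you to the classifier, so both show up in the informative-features listing.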