Tags: machine-learning, language-agnostic, nlp

How do I form a feature vector for a classifier targeted at Named Entity Recognition?


I have a set of tags (different from the conventional Name, Place, Object, etc.). In my case they are domain-specific, and I call them Entity, Action, and Incident. I want to use these as a seed for extracting more named entities.

I came across the paper "Efficient Support Vector Classifiers for Named Entity Recognition" by Isozaki et al. While I like the idea of using Support Vector Machines for named-entity recognition, I am stuck on how to encode the feature vector. Here is what the paper says:

For instance, the words in “President George Herbert Bush said Clinton is . . . ” are classified as follows: “President” = OTHER, “George” = PERSON-BEGIN, “Herbert” = PERSON-MIDDLE, “Bush” = PERSON-END, “said” = OTHER, “Clinton” = PERSON-SINGLE, “is” = OTHER. In this way, the first word of a person’s name is labeled as PERSON-BEGIN. The last word is labeled as PERSON-END. Other words in the name are PERSON-MIDDLE. If a person’s name is expressed by a single word, it is labeled as PERSON-SINGLE. If a word does not belong to any named entities, it is labeled as OTHER. Since IREX defines eight NE classes, words are classified into 33 categories.

Each sample is represented by 15 features because each word has three features (part-of-speech tag, character type, and the word itself), and two preceding words and two succeeding words are also used for context dependence. Although infrequent features are usually removed to prevent overfitting, we use all features because SVMs are robust. Each sample is represented by a long binary vector, i.e., a sequence of 0 (false) and 1 (true). For instance, “Bush” in the above example is represented by a vector x = x[1] ... x[D] described below. Only 15 elements are 1.

x[1] = 0 // Current word is not ‘Alice’ 
x[2] = 1 // Current word is ‘Bush’ 
x[3] = 0 // Current word is not ‘Charlie’

x[15029] = 1 // Current POS is a proper noun 
x[15030] = 0 // Current POS is not a verb

x[39181] = 0 // Previous word is not ‘Henry’ 
x[39182] = 1 // Previous word is ‘Herbert’

I don't really understand how the binary vector here is being constructed. I know I am missing a subtle point, but can someone help me understand this?
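
To make sure I follow the labeling scheme (8 NE classes × 4 position tags + OTHER = 33 categories), here is a quick sketch of how I would expand annotated spans into per-word labels; the (start, end, class) span format is my own assumption, not from the paper:

def label_words(words, entities):
  # entities: (start, end, ne_class) spans with an exclusive end
  # (a hypothetical annotation format, not the paper's).
  labels = ["OTHER"] * len(words)
  for start, end, ne_class in entities:
    if end - start == 1:
      labels[start] = ne_class + "-SINGLE"
    else:
      labels[start] = ne_class + "-BEGIN"
      labels[end - 1] = ne_class + "-END"
      for i in range(start + 1, end - 1):
        labels[i] = ne_class + "-MIDDLE"
  return labels

words = "President George Herbert Bush said Clinton is".split()
print(label_words(words, [(1, 4, "PERSON"), (5, 6, "PERSON")]))
# ['OTHER', 'PERSON-BEGIN', 'PERSON-MIDDLE', 'PERSON-END',
#  'OTHER', 'PERSON-SINGLE', 'OTHER']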


Solution

  • There is a bag-of-words lexicon-building step that they omit.

    Basically, you have to build a map from (non-rare) words in the training set to indices. Say you have 20k unique words in your training set; you'll then have a mapping from every word to an index in [0, 20000).
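
    A minimal sketch of that lexicon-building step (min_count is a hypothetical rare-word cutoff; the paper keeps all features, which is the default here):

    from collections import Counter

    def build_index_mapping(items, min_count=1):
      # Map each sufficiently frequent item (word, POS tag, character type, ...)
      # to a unique integer index.
      counts = Counter(items)
      kept = sorted(item for item, c in counts.items() if c >= min_count)
      return {item: i for i, item in enumerate(kept)}

    words = "President George Herbert Bush said Clinton is".split()
    word_index_mapping = build_index_mapping(words)
    print(word_index_mapping)  # {'Bush': 0, 'Clinton': 1, 'George': 2, ...}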

    Then the feature vector is basically a concatenation of a few very sparse one-hot vectors: a 1 at the index of the particular word and 19,999 0s elsewhere, then a 1 for the particular POS tag and ~50 0s for the non-active POS tags, and so on. This is generally called a one-hot encoding: http://en.wikipedia.org/wiki/One-hot

    def encode_word_feature(word, POStag, char_type,
                            word_index_mapping, POS_index_mapping, char_type_index_mapping):
      # A sparsely encoded vector would make much more sense than a dense list,
      # but a dense list is clearer for illustration.
      ret = [0] * (len(word_index_mapping) + len(POS_index_mapping) + len(char_type_index_mapping))
      so_far = 0
      ret[so_far + word_index_mapping[word]] = 1            # one-hot block for the word
      so_far += len(word_index_mapping)
      ret[so_far + POS_index_mapping[POStag]] = 1           # one-hot block for the POS tag
      so_far += len(POS_index_mapping)
      ret[so_far + char_type_index_mapping[char_type]] = 1  # one-hot block for the character type
      return ret
    
    def encode_context(context):
      # Concatenate one per-word encoding for each position in the five-word
      # window: two preceding words, the current word, and two succeeding words.
      return (encode_word_feature(context.two_words_ago, context.two_pos_ago, context.two_char_types_ago,
                  word_index_mapping, POS_index_mapping, char_type_index_mapping)
            + encode_word_feature(context.one_word_ago, context.one_pos_ago, context.one_char_types_ago,
                  word_index_mapping, POS_index_mapping, char_type_index_mapping)
              # ... same pattern for the current word and the two succeeding words
            )
    

    So your feature vector is roughly 100k-dimensional (five window positions times ~20k word features, plus a little extra for the POS and character-type blocks), and it is almost entirely 0s, with exactly 15 1s in positions picked according to your feature-to-index mappings.
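
    As a quick sanity check on those sizes (the counts below are illustrative, not the paper's exact ones):

    # One block per window position: word + POS + character-type features.
    n_words, n_pos, n_char_types = 20000, 50, 5
    per_position = n_words + n_pos + n_char_types
    D = 5 * per_position  # five-word window
    print(D)  # 100275 -- "about size 100k"
    # Each encoded sample has exactly 15 ones: 3 active features x 5 positions.

    In practice you would hand the SVM a sparse representation of these vectors (e.g. scipy.sparse) rather than dense Python lists.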