Tags: python, model, named-entity-recognition, rasa-nlu

MITIE NER model


I have been exploring the use of pretrained MITIE models for named-entity extraction. Is there any way I can look at their actual NER model rather than using a pretrained model? Is the model available as open source?


Solution

  • Setting things up:

    For starters, download the MITIE English language model, which packages word features learned from a huge text dump into a file called total_word_feature_extractor.dat.

    After that, download or clone the MITIE-master project from its official GitHub repository.

    If you are running Windows, download CMake.

    If you are running an x64-based Windows OS, install the Visual Studio 2015 Community edition for its C++ compiler.

    After downloading the above, extract everything into a single folder.

    The project structure will look something like this.

    Open the Developer Command Prompt for VS 2015 from Start > All Apps > Visual Studio and navigate to the tools folder; you will see five sub-folders inside.


    The next step is to build the ner_conll, ner_stream, train_freebase_relation_detector and wordrep tools, using the following CMake commands in the Visual Studio Developer Command Prompt.


    For ner_conll:

    cd "C:\Users\xyz\Documents\MITIE-master\tools\ner_conll"
    

    i) mkdir build ii) cd build iii) cmake -G "Visual Studio 14 2015 Win64" .. iv) cmake --build . --config Release --target install

    For ner_stream:

    cd "C:\Users\xyz\Documents\MITIE-master\tools\ner_stream"
    

    i) mkdir build ii) cd build iii) cmake -G "Visual Studio 14 2015 Win64" .. iv) cmake --build . --config Release --target install

    For train_freebase_relation_detector:

    cd "C:\Users\xyz\Documents\MITIE-master\tools\train_freebase_relation_detector"
    

    i) mkdir build ii) cd build iii) cmake -G "Visual Studio 14 2015 Win64" .. iv) cmake --build . --config Release --target install

    For wordrep:

    cd "C:\Users\xyz\Documents\MITIE-master\tools\wordrep"
    

    i) mkdir build ii) cd build iii) cmake -G "Visual Studio 14 2015 Win64" .. iv) cmake --build . --config Release --target install

    After you build them you will see some 150-160 warnings; don't worry, they can be ignored.

    Now, navigate to "C:\Users\xyz\Documents\MITIE-master\examples\cpp\train_ner".

    Create a JSON file "data.json" (for example with Visual Studio Code) to annotate the text manually, something like this:

    {
      "AnnotatedTextList": [
        {
          "text": "I want to travel from New Delhi to Bangalore tomorrow.",
          "entities": [
            {
              "type": "FromCity",
              "startPos": 5,
              "length": 2
            },
            {
              "type": "ToCity",
              "startPos": 8,
              "length": 1
            },
            {
              "type": "TimeOfTravel",
              "startPos": 9,
              "length": 1
            }
          ]
        }
      ]
    }
    

    You can add more utterances and annotate them; the more training data you have, the better the prediction accuracy.

    This annotated JSON could also be generated through a front end built with jQuery or Angular, but for brevity I have created it by hand.
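    Note that startPos and length count whitespace-separated tokens, not characters: the training code below splits each utterance on spaces and hands these values straight to MITIE. Here is a quick standard-library sketch (not part of MITIE, just a hypothetical helper) for printing token indices before annotating:

    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>
    
    // Print each space-separated token with its index so the startPos/length
    // values in data.json can be checked by eye before training.
    int main()
    {
        std::string utterance = "I want to travel from New Delhi to Bangalore tomorrow.";
        std::istringstream stream(utterance);
        std::string token;
        std::vector<std::string> tokens;
        while (std::getline(stream, token, ' '))
            if (!token.empty())
                tokens.push_back(token);
    
        for (std::size_t i = 0; i < tokens.size(); ++i)
            std::cout << i << ": " << tokens[i] << '\n';
        // 5: New, 6: Delhi -> FromCity has startPos 5, length 2
        // 8: Bangalore     -> ToCity has startPos 8, length 1
        // 9: tomorrow.     -> TimeOfTravel has startPos 9, length 1
        return 0;
    }

    If the printed indices do not match your annotations, MITIE will learn the wrong spans, so it is worth checking a few utterances this way.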

    Next, we need to parse our annotated JSON file and pass its contents to ner_training_instance's add_entity method.

    C++ doesn't support reflection for deserializing JSON, so you can use the RapidJSON parser instead. Download the package from its GitHub page and place it under "C:\Users\xyz\Documents\MITIE-master\mitielib\include\mitie".

    Now we customize the train_ner_example.cpp file so that it parses our annotated custom-entity JSON and passes the result to MITIE for training.

    #include "mitie\rapidjson\document.h"
    #include "mitie\ner_trainer.h"
    
    #include <iostream>
    #include <vector>
    #include <list>
    #include <tuple>
    #include <string>
    #include <map>
    #include <sstream>
    #include <fstream>
    #include <stdexcept>
    
    using namespace mitie;
    using namespace dlib;
    using namespace std;
    using namespace rapidjson;
    
    string ReadJSONFile(string FilePath)
    {
        ifstream file(FilePath);
        string test;
        cout << "path: " << FilePath;
        try
        {
            std::stringstream buffer;
            buffer << file.rdbuf();
            test = buffer.str();
            cout << test;
            return test;
        }
        catch (exception &e)
        {
            // std::exception has no string constructor in standard C++; rethrow as runtime_error.
            throw std::runtime_error(e.what());
        }
    }
    
    // Helper function to tokenize a string on one or more delimiters such as ,.;:- or whitespace.
    std::vector<string> SplitStringIntoMultipleParameters(string input, string delimiter)
    {
        std::stringstream stringStream(input);
        std::string line;
    
        std::vector<string> TokenizedStringVector;
    
        while (std::getline(stringStream, line))
        {
            size_t prev = 0, pos;
            while ((pos = line.find_first_of(delimiter, prev)) != string::npos)
            {
                if (pos > prev)
                    TokenizedStringVector.push_back(line.substr(prev, pos - prev));
                prev = pos + 1;
            }
            if (prev < line.length())
                TokenizedStringVector.push_back(line.substr(prev, string::npos));
        }
        return TokenizedStringVector;
    }
    
    //Parse the JSON and store into appropriate C++ containers to process it.
    std::map<string, list<tuple<string, int, int>>> FindUtteranceTuple(string stringifiedJSONFromFile)
    {
        Document document;
        cout << "stringifiedjson : " << stringifiedJSONFromFile;
        document.Parse(stringifiedJSONFromFile.c_str());
    
        const Value& a = document["AnnotatedTextList"];
        assert(a.IsArray());
    
        std::map<string, list<tuple<string, int, int>>> annotatedUtterancesMap;
    
        for (int outerIndex = 0; outerIndex < a.Size(); outerIndex++)
        {
            assert(a[outerIndex].IsObject());
            assert(a[outerIndex]["entities"].IsArray());
            const Value &entitiesArray = a[outerIndex]["entities"];
    
            list<tuple<string, int, int>> entitiesTuple;
    
            for (int innerIndex = 0; innerIndex < entitiesArray.Size(); innerIndex++)
            {
                entitiesTuple.push_back(make_tuple(entitiesArray[innerIndex]["type"].GetString(), entitiesArray[innerIndex]["startPos"].GetInt(), entitiesArray[innerIndex]["length"].GetInt()));
            }
    
            annotatedUtterancesMap.insert(pair<string, list<tuple<string, int, int>>>(a[outerIndex]["text"].GetString(), entitiesTuple));
        }
    
        return annotatedUtterancesMap;
    }
    
    int main(int argc, char **argv)
    {
    
        try {
    
            if (argc != 3)
            {
                cout << "You must give the path to the MITIE English total_word_feature_extractor.dat file." << endl;
                cout << "So run this program with a command like: " << endl;
                cout << "./train_ner_example ../../../MITIE-models/english/total_word_feature_extractor.dat" << endl;
                return 1;
            }
    
            else
            {
                string filePath = argv[2];
                string stringifiedJSONFromFile = ReadJSONFile(filePath);
    
                map<string, list<tuple<string, int, int>>> annotatedUtterancesMap = FindUtteranceTuple(stringifiedJSONFromFile);
    
    
                std::vector<string> tokenizedUtterances;
                ner_trainer trainer(argv[1]);
    
                // Range-based for loops replace MSVC's non-standard "for each" syntax.
                for (auto &item : annotatedUtterancesMap)
                {
                    tokenizedUtterances = SplitStringIntoMultipleParameters(item.first, " ");
                    mitie::ner_training_instance currentInstance(tokenizedUtterances);
                    for (auto &entity : item.second)
                    {
                        // add_entity(start token index, number of tokens, entity label)
                        currentInstance.add_entity(get<1>(entity), get<2>(entity), get<0>(entity).c_str());
                    }
                    trainer.add(currentInstance);
                }
    
    
                trainer.set_num_threads(4);
    
                named_entity_extractor ner = trainer.train();
    
                serialize("new_ner_model.dat") << "mitie::named_entity_extractor" << ner;
    
                const std::vector<std::string> tagstr = ner.get_tag_name_strings();
                cout << "The tagger supports " << tagstr.size() << " tags:" << endl;
                for (unsigned int i = 0; i < tagstr.size(); ++i)
                    cout << "\t" << tagstr[i] << endl;
                return 0;
            }
        }
    
        catch (exception &e)
        {
            cerr << "Failed because: " << e.what();
        }
    }
    

    The tokenized utterance (a vector of strings) is passed to the ner_training_instance constructor, while add_entity accepts three parameters: the start token index of the entity within the sentence, the number of tokens the entity spans, and the custom entity type name.
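    To make the parameter order concrete, here is a minimal sketch that trains on the single utterance from data.json with hard-coded values; it mirrors the calls made in the loop above, and the file names are placeholders you would adjust for your setup.

    #include "mitie\ner_trainer.h"
    
    #include <string>
    #include <vector>
    
    using namespace mitie;
    using namespace dlib;
    using namespace std;
    
    int main()
    {
        // The utterance split on spaces, exactly as the training code does.
        vector<string> tokens = {"I", "want", "to", "travel", "from",
                                 "New", "Delhi", "to", "Bangalore", "tomorrow."};
    
        ner_training_instance sample(tokens);
        // add_entity(start token index, number of tokens, entity label)
        sample.add_entity(5, 2, "FromCity");     // "New Delhi"
        sample.add_entity(8, 1, "ToCity");       // "Bangalore"
        sample.add_entity(9, 1, "TimeOfTravel"); // "tomorrow."
    
        // Path to the pretrained feature extractor; adjust it for your machine.
        ner_trainer trainer("total_word_feature_extractor.dat");
        trainer.add(sample);
        trainer.set_num_threads(4);
    
        named_entity_extractor ner = trainer.train();
        serialize("single_utterance_model.dat") << "mitie::named_entity_extractor" << ner;
        return 0;
    }

    In practice you would add many training instances before calling train(); one utterance is only enough to illustrate the API.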

    Now we build train_ner_example.cpp using the following commands in the Visual Studio Developer Command Prompt.

    cd "C:\Users\xyz\Documents\MITIE-master\examples\cpp\train_ner"
    mkdir build
    cd build
    cmake -G "Visual Studio 14 2015 Win64" ..
    cmake --build . --config Release --target install
    cd Release
    train_ner_example "C:\\Users\\xyz\\Documents\\MITIE-master\\MITIE-models\\english\\total_word_feature_extractor.dat" "C:\\Users\\xyz\\Documents\\MITIE-master\\examples\\cpp\\train_ner\\data.json"

    On successful execution, we get a new_ner_model.dat file, which is the serialized, trained model built from our utterances.

    Now, that .dat file can be passed to RASA or used standalone.
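    For standalone use, the model can be deserialized with the same class-name string it was saved with and run over a newly tokenized utterance. The sketch below follows the pattern of MITIE's bundled examples/cpp/ner/ner_example.cpp; the predict() call and the half-open chunk ranges are assumptions taken from that example, so verify them against the named_entity_extractor.h header shipped with your MITIE version.

    #include "mitie\named_entity_extractor.h"
    
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>
    
    using namespace mitie;
    using namespace std;
    
    int main()
    {
        // Load the trained model; the class-name string written during
        // serialization is read back first.
        string classname;
        named_entity_extractor ner;
        dlib::deserialize("new_ner_model.dat") >> classname >> ner;
    
        const vector<string> tags = ner.get_tag_name_strings();
    
        // Tokenize a new utterance the same way the training data was tokenized.
        vector<string> tokens = {"I", "want", "to", "travel", "from",
                                 "Mumbai", "to", "Chennai", "tomorrow."};
    
        // Each detected chunk is a half-open [first, second) range of token
        // indices plus an index into the tag list and a confidence score.
        vector<pair<unsigned long, unsigned long>> chunks;
        vector<unsigned long> chunk_tags;
        vector<double> chunk_scores;
        ner.predict(tokens, chunks, chunk_tags, chunk_scores);
    
        for (size_t i = 0; i < chunks.size(); ++i)
        {
            cout << tags[chunk_tags[i]] << ": ";
            for (unsigned long t = chunks[i].first; t < chunks[i].second; ++t)
                cout << tokens[t] << " ";
            cout << "(score " << chunk_scores[i] << ")" << endl;
        }
        return 0;
    }

    Whether the model actually tags "Mumbai" and "Chennai" depends on how much training data it has seen; with only a handful of annotated utterances the output may well be empty.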

    For passing it to RASA:

    Make the config.json file as follows:

    {
        "project": "demo",
        "path": "C:\\Users\\xyz\\Desktop\\RASA\\models",
        "response_log": "C:\\Users\\xyz\\Desktop\\RASA\\logs",
        "pipeline": ["nlp_mitie", "tokenizer_mitie", "ner_mitie", "ner_synonyms", "intent_entity_featurizer_regex", "intent_classifier_mitie"], 
        "data": "C:\\Users\\xyz\\Desktop\\RASA\\data\\examples\\rasa.json",
        "mitie_file" : "C:\\Users\\xyz\\Documents\\MITIE-master\\examples\\cpp\\train_ner\\Release\\new_ner_model.dat",
        "fixed_model_name": "demo",
        "cors_origins": ["*"],
        "aws_endpoint_url": null,
        "token": null,
        "num_threads": 2,
        "port": 5000
    }