I have a requirement to tokenize the words in a sentence based on a specific word list.
wordlist = ["nlp - nltk", "CIFA R12 - INV"]
Example input: This is sample text for nlp - nltk CIFA R12 - INV
When using word_tokenize(Example-input), I need nlp - nltk as one token and CIFA R12 - INV as another token. Is that possible, rather than getting nlp, -, nltk, CIFA, etc. as separate tokens?
For those who come here in the future:
After some reading, I found that the nltk.tokenize.mwe module is the way to achieve the requirement above.
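A minimal sketch of how this could look with MWETokenizer from that module, using the word list and sentence from the question; the separator=' ' setting is my own choice to keep the merged tokens in their original spelling (the default separator is '_'):

from nltk import word_tokenize
from nltk.tokenize import MWETokenizer

# Each multi-word expression is given as the sequence of tokens that
# word_tokenize produces for it.
tokenizer = MWETokenizer(
    [("nlp", "-", "nltk"), ("CIFA", "R12", "-", "INV")],
    separator=" ",  # rejoin the matched pieces with spaces instead of "_"
)

text = "This is sample text for nlp - nltk CIFA R12 - INV"
# word_tokenize needs the 'punkt' data (nltk.download('punkt')) to be installed.
print(tokenizer.tokenize(word_tokenize(text)))
# ['This', 'is', 'sample', 'text', 'for', 'nlp - nltk', 'CIFA R12 - INV']

MWETokenizer works on an already-tokenized list, so the multi-word expressions have to be registered exactly as word_tokenize splits them (here "nlp - nltk" becomes the three tokens "nlp", "-", "nltk").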