I have a review text like:
"The tutu's was for my neice... She LOVED IT!!! It fit well and will fit her for some time with the elastic waist.... great quality and very inexpensive! I would buy her another easily."
which I send to the CoreNLP server:
import unicodedata

properties = {
    "tokenize.whitespace": "true",
    "annotators": "tokenize, ssplit, pos, lemma, ner, parse",
    "outputFormat": "json"
}

if not isinstance(paragraph, str):
    paragraph = unicodedata.normalize('NFKD', paragraph).encode('ascii', 'ignore')

result = self.nlp.annotate(paragraph, properties=properties)
This gives me the following result:
{
u'sentences':[
{
u'parse':u'SENTENCE_SKIPPED_OR_UNPARSABLE',
u'index':0,
u'tokens':[
{
u'index':1,
u'word':u'The',
u'lemma':u'the',
u'pos':u'DT',
u'characterOffsetEnd':3,
u'characterOffsetBegin':0,
u'originalText':u'The'
},
{
u'index':2,
u'word':u"tutu's",
u'lemma':u"tutu'",
u'pos':u'NNS',
u'characterOffsetEnd':10,
u'characterOffsetBegin':4,
u'originalText':u"tutu's"
},
// ...
{
u'index':34,
u'word':u'easily.',
u'lemma':u'easily.',
u'pos':u'NN',
u'characterOffsetEnd':187,
u'characterOffsetBegin':180,
u'originalText':u'easily.'
}
]
}
]
}
I noticed that the sentences are not being split. Any idea what the problem could be?
If I use the web interface at http://localhost:9000, the sentences are split correctly.
I don't know why, but the problem turned out to come from tokenize.whitespace. I just commented it out:
properties = {
    # "tokenize.whitespace": "true",
    "annotators": "tokenize, ssplit, pos, lemma, ner, parse",
    "outputFormat": "json"
}