I just found an inconsistency: the parser in CoreNLP and the standalone Stanford Parser produce different parse results for the same input.
For example, given the sentence "Microsoft released Windows 10.".
The parser in the CoreNLP demo (http://nlp.stanford.edu:8080/corenlp/process) gives the following result:
However, the standalone Stanford Parser demo (http://nlp.stanford.edu:8080/parser/index.jsp) gives the following result:
I also tried running the code on my own machines. Both parsers used the same model, trained on the same date (englishPCFG.ser.gz, 2015-01-29), but the results they produce are still different. I tried several other sentences, and it looks like the standalone parser gives better results.
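For reference, here is a minimal sketch of how the standalone parser can be driven with that model (assuming the standard CoreNLP jars and models jar are on the classpath; class and variable names are just illustrative):

```java
import java.io.StringReader;
import java.util.List;

import edu.stanford.nlp.ling.CoreLabel;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.process.CoreLabelTokenFactory;
import edu.stanford.nlp.process.PTBTokenizer;
import edu.stanford.nlp.process.Tokenizer;
import edu.stanford.nlp.trees.Tree;

public class StandaloneParserSketch {

  public static void main(String[] args) {
    // Load the same englishPCFG model that the CoreNLP parse annotator uses by default.
    LexicalizedParser lp = LexicalizedParser.loadModel(
        "edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz");

    // The standalone parser receives raw tokens (no POS tags)
    // and assigns tags itself while parsing.
    Tokenizer<CoreLabel> tokenizer = PTBTokenizer.factory(new CoreLabelTokenFactory(), "")
        .getTokenizer(new StringReader("Microsoft released Windows 10."));
    List<CoreLabel> tokens = tokenizer.tokenize();

    Tree tree = lp.apply(tokens);
    tree.pennPrint();
  }
}
```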
Does anyone have an idea why this happens?
The parser output can differ depending on whether you run it on a part-of-speech tagged sentence or not: the CoreNLP pipeline tags the sentence before parsing and the parser uses those tags, while the standalone parser assigns tags itself during parsing.
See the Parser FAQ for more information.
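You can reproduce the difference within CoreNLP itself. Here is a minimal sketch (assuming a standard CoreNLP release; the class name PipelineComparison is just for illustration): with "pos" in the annotator list the parser is handed the tagger's tags, whereas dropping "pos" makes the PCFG parser assign tags itself, which is closer to what the standalone parser demo does.

```java
import java.util.Properties;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.trees.TreeCoreAnnotations;
import edu.stanford.nlp.util.CoreMap;

public class PipelineComparison {

  public static void main(String[] args) {
    String text = "Microsoft released Windows 10.";

    // With "pos" before "parse", the parser receives the tagger's tags
    // (this mirrors the default CoreNLP pipeline used by the web demo).
    System.out.println(parse(text, "tokenize,ssplit,pos,parse").pennString());

    // Without "pos", the PCFG parser does its own tagging while parsing,
    // which is closer to the standalone parser's behavior.
    System.out.println(parse(text, "tokenize,ssplit,parse").pennString());
  }

  private static Tree parse(String text, String annotators) {
    Properties props = new Properties();
    props.setProperty("annotators", annotators);
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    Annotation doc = new Annotation(text);
    pipeline.annotate(doc);
    // Take the first (and only) sentence and return its constituency parse tree.
    CoreMap sentence = doc.get(CoreAnnotations.SentencesAnnotation.class).get(0);
    return sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
  }
}
```

If the two printed trees differ, the discrepancy you are seeing comes from the pre-tagging step rather than from the parsing model itself.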