I want to use the one described here, part of Stanford CoreNLP, as it looks promising, but I can't understand how it works. I downloaded the entire CoreNLP distribution, but the .jar file mentioned in the README document, i.e. chinese_map_utils.jar, is nowhere to be found. Are they expecting me to build that .jar myself out of the component code they have listed there? That seems a bit absurd.
Essentially, what I'm after is a system for breaking down Chinese characters into their component strokes or radicals (I know that not all the parts are called radicals, spare me the pedantry). So if you know of an alternative solution that's actionable, I'd be happy to hear about it.
There's no need to use this chinese_map_utils.jar; if you have CoreNLP on your classpath, that should be sufficient.
It looks like the class RadicalMap may be of interest to you. Execution instructions are included in the class's source code (see the main method).
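For example, something like the sketch below should print the radical of each character in a string. I'm going from the RadicalMap source here (package edu.stanford.nlp.trees.international.pennchinese, static getRadical(char) method); double-check the package path and method name against the CoreNLP version you downloaded, as they may differ:

```java
import edu.stanford.nlp.trees.international.pennchinese.RadicalMap;

public class RadicalDemo {
    public static void main(String[] args) {
        // getRadical maps a single Chinese character to its radical;
        // characters without an entry are returned unchanged in some versions.
        String word = "你好";
        for (char c : word.toCharArray()) {
            System.out.println(c + " -> " + RadicalMap.getRadical(c));
        }
    }
}
```

Compile and run it with the CoreNLP jar on the classpath, e.g. `java -cp stanford-corenlp.jar:. RadicalDemo` (adjust the jar name to match your download). Note that this gives you the traditional dictionary radical per character, not a full stroke-by-stroke decomposition; for strokes you'd need a different resource.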