I am trying to write an application using the Microsoft in-process speech recognition engine. My application sometimes uses a dictation grammar and sometimes SRGS grammars. Obviously, I do not have any problems when I use SRGS.
Even though I use one of the best available microphones (a Sennheiser ME3 with an Andrea USB sound card), the recognition results are far from acceptable. My application operates in a specific domain, so some words and phrases are much more likely to be spoken by a user of the system. My question is: is there any way to use the dictation grammar and, at the same time, specify important words from the application's domain? In other words, I would like to partially modify the language model of the speech recognizer, but only for a list of words and phrases supplied by the developer.
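For context, here is a minimal sketch of my current setup, assuming the managed System.Speech.Recognition wrapper around the in-process engine; the domain phrases are just placeholders. As far as I can tell, loading a keyword grammar alongside the dictation grammar only makes them compete as separate grammars and does not bias the dictation language model itself:

```csharp
using System;
using System.Speech.Recognition;

class Program
{
    static void Main()
    {
        // In-process recognizer (not the shared Windows Speech Recognition instance).
        using (var recognizer = new SpeechRecognitionEngine())
        {
            recognizer.SetInputToDefaultAudioDevice();

            // General-purpose dictation grammar.
            var dictation = new DictationGrammar { Name = "dictation" };
            recognizer.LoadGrammar(dictation);

            // Separate keyword grammar for domain phrases (placeholder phrases).
            var domainPhrases = new Choices("angioplasty", "stent placement", "contrast agent");
            var keywords = new Grammar(new GrammarBuilder(domainPhrases)) { Name = "keywords" };
            recognizer.LoadGrammar(keywords);

            // Report which grammar produced each result.
            recognizer.SpeechRecognized += (s, e) =>
                Console.WriteLine("{0}: {1}", e.Result.Grammar.Name, e.Result.Text);

            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine();
        }
    }
}
```

What I am looking for is a way to make the dictation side itself prefer those words and phrases, rather than just having a second grammar run in parallel with it.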
There are a couple of options.