I'm trying to find out whether the Python library Dragonfly uses the context and grammar you give it to improve recognition. My thinking is that if the speech recognition engine itself knows the grammar of what can be said, recognition should be greatly improved; but if Dragonfly is merely checking whether arbitrary dictation returned by the recognizer happens to match the grammar, I'd expect no improvement.
Also, since Dragonfly supports both Dragon and Windows Speech Recognition, it'd be helpful to know if the answer differs depending on the engine.
The practical answer is yes. Technically Dragonfly just passes the grammar to the speech recognition engine (either Dragon or WSR), but the engines do in fact use the grammar to improve recognition. The other answers saying no are only observing that Dragonfly itself doesn't do any of the work, but that's of no practical consequence because the engines do it instead.

I've been using this for a while now, and as long as you don't make your grammars huge it works quite well. Grammar recognition is much better than arbitrary dictation: I have over 800 commands recognized reliably, and using SeriesMappingRule from the aenea project I can even say several of them in sequence in a single utterance. A minimal sketch of what such a grammar module looks like is below.
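For concreteness, here is a rough sketch of a Dragonfly command module (the phrases and key bindings are just illustrative, not from my actual setup). When `grammar.load()` runs, Dragonfly registers the rule with whichever engine is active (Dragon or WSR), and from that point the engine itself constrains recognition to these phrases:

```python
# Minimal sketch of a Dragonfly grammar module; assumes dragonfly is
# installed and the module is loaded by the usual Dragon/WSR loader.
from dragonfly import Grammar, MappingRule, Key, Text

class ExampleCommands(MappingRule):
    # Each spoken phrase on the left maps to an action on the right.
    mapping = {
        "save file": Key("c-s"),     # press Ctrl+S
        "new tab":   Key("c-t"),     # press Ctrl+T
        "say hello": Text("hello"),  # type the literal text "hello"
    }

# The Grammar object is what actually gets registered with the engine,
# so the engine knows the full set of phrases it should listen for.
grammar = Grammar("example commands")
grammar.add_rule(ExampleCommands())
grammar.load()

def unload():
    # Called by the loader when the module is unloaded or reloaded.
    global grammar
    if grammar:
        grammar.unload()
    grammar = None
```

Because the engine only has to choose among a small, known set of phrases rather than the whole language, accuracy stays high even as you accumulate many commands, provided each individual grammar stays reasonably small.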