In my scenario, buttons are created at runtime and are meant to be triggered by voice commands. For this reason I am trying to find out how to add voice commands at runtime, but I can't find any approach.
What I tried:
I have extended the interface IMixedRealitySpeechSystem with two methods, RefreshRecognition and AddSpeechCommand:
/// <summary>
/// Refresh recognition after adding new commands
/// </summary>
void RefreshRecognition();
/// <summary>
/// Add command to already existing commands[]
/// </summary>
/// <param name="command"></param>
void AddSpeechCommand(SpeechCommands command);
I have implemented these in the class WindowsSpeechInputProvider : MixedRealitySpeechSystem.
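For illustration, a minimal sketch of what that implementation could look like, assuming MRTK 2.x internals (the Commands property and the keywordRecognizer field exist in WindowsSpeechInputProvider; runtimeCommands, the rebuild logic, and the handler name are my assumptions):

// Sketch only; requires System.Collections.Generic, System.Linq and
// UnityEngine.Windows.Speech. Runtime-added commands are kept in a separate
// list and merged with the profile-defined Commands on every rebuild.
private readonly List<SpeechCommands> runtimeCommands = new List<SpeechCommands>();

public void AddSpeechCommand(SpeechCommands command)
{
    runtimeCommands.Add(command);
}

public void RefreshRecognition()
{
    // Tear down the current recognizer and rebuild it with the merged keyword list.
    keywordRecognizer?.Stop();
    keywordRecognizer?.Dispose();

    string[] keywords = Commands.Select(c => c.Keyword)
        .Concat(runtimeCommands.Select(c => c.Keyword))
        .ToArray();
    keywordRecognizer = new KeywordRecognizer(keywords);
    keywordRecognizer.OnPhraseRecognized += KeywordRecognizer_OnPhraseRecognized;
    keywordRecognizer.Start();
}

But there are two problems.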
First: I can't get a reference to the WindowsSpeechInputProvider. I thought I could get it like this:
private IMixedRealitySpeechSystem _speechSystem;

private IMixedRealitySpeechSystem SpeechSystem
{
    get
    {
        if (_speechSystem is null)
        {
            MixedRealityServiceRegistry.TryGetService(out _speechSystem);
        }
        return _speechSystem;
    }
}

public void SomeMethod()
{
    SpeechCommands command = new SpeechCommands("TestCommand", default, default, null);
    SpeechSystem.AddSpeechCommand(command);
    SpeechSystem.RefreshRecognition();
}
But the problem is that MixedRealityServiceRegistry does not contain an instance of that provider; to be precise, WindowsSpeechInputProvider is not even a service, it is an input system data provider.
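For reference, in MRTK 2.x data providers are retrieved through the input system rather than the service registry; a minimal sketch, assuming MRTK 2.x (namespaces Microsoft.MixedReality.Toolkit, Microsoft.MixedReality.Toolkit.Input and Microsoft.MixedReality.Toolkit.Windows.Input):

// The speech provider is registered on the input system, so cast the input
// system to IMixedRealityDataProviderAccess and ask it for the provider.
var access = CoreServices.InputSystem as IMixedRealityDataProviderAccess;
WindowsSpeechInputProvider speechProvider =
    access?.GetDataProvider<WindowsSpeechInputProvider>();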
Second: Even if this worked, it would not be a good approach, because it means modifying the MRTK itself, and an upgrade to a new version would overwrite these changes.
My Question:
So how can I access the speech system and add commands at runtime?
There's an open feature request on GitHub to allow adding speech commands dynamically: Add keywords dynamically to MRTK speech commands #6369. It is not currently possible.
That thread has some suggestions for alternative ways to approach the overall scenario. In summary, it is recommended that you use a GrammarRecognizer with an SRGS XML file to define your speech recognition rules. Voice input in Unity and Holograms 212 have examples showing how to use it.
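For illustration, a minimal sketch of that approach; the file name ButtonCommands.xml, the class name and the phrases are placeholders. First an SRGS grammar file, placed under Assets/StreamingAssets:

<?xml version="1.0" encoding="utf-8"?>
<grammar version="1.0" xml:lang="en-US" root="buttonCommands"
         xmlns="http://www.w3.org/2001/06/grammar">
  <rule id="buttonCommands" scope="public">
    <one-of>
      <item>press button one</item>
      <item>press button two</item>
    </one-of>
  </rule>
</grammar>

Then a component that loads it with Unity's GrammarRecognizer (UnityEngine.Windows.Speech). Note that this runs alongside MRTK's keyword recognition, so you have to route recognized phrases to your runtime-created buttons yourself:

using System.IO;
using UnityEngine;
using UnityEngine.Windows.Speech;

public class GrammarCommandListener : MonoBehaviour
{
    private GrammarRecognizer grammarRecognizer;

    private void Start()
    {
        // Load the SRGS rules from StreamingAssets and start listening.
        string grammarPath = Path.Combine(Application.streamingAssetsPath, "ButtonCommands.xml");
        grammarRecognizer = new GrammarRecognizer(grammarPath);
        grammarRecognizer.OnPhraseRecognized += OnPhraseRecognized;
        grammarRecognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Map args.text to the matching runtime-created button here.
        Debug.Log($"Recognized '{args.text}' with confidence {args.confidence}");
    }

    private void OnDestroy()
    {
        if (grammarRecognizer != null)
        {
            if (grammarRecognizer.IsRunning)
            {
                grammarRecognizer.Stop();
            }
            grammarRecognizer.Dispose();
        }
    }
}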