Tags: android, kotlin, speech-to-text, dagger-hilt, google-speech-api

Access Application() from a HiltViewModel @Inject constructor


I am trying to use Android's speech-to-text API via this code:

@HiltViewModel
class SettingsViewModel @Inject constructor(
    private val settingsRepository: SettingsRepository
) : ViewModel(), RecognitionListener
{
    data class SpeechState(
        val spokenText: String = "",
        val error: String = ""
    )

    private val _settings = MutableStateFlow(value = Settings())
    val settings: StateFlow<Settings> = _settings.asStateFlow()
    private val speechState = MutableStateFlow(value = SpeechState())

    private val speechRecognizer: SpeechRecognizer = createSpeechRecognizer(application.applicationContext).apply {
        setRecognitionListener(this@SettingsViewModel)
    }


    private fun updateResults(speechBundle: Bundle?) {
        val userSaid = speechBundle?.getStringArrayList(RESULTS_RECOGNITION)
        speechState.value = speechState.value.copy(spokenText = userSaid?.get(0) ?: "")
        reactToSpeech(speechState.value.spokenText)
    }

    override fun onEndOfSpeech() = speechRecognizer.stopListening()
    override fun onResults(results: Bundle?) = updateResults(speechBundle = results)
    override fun onPartialResults(results: Bundle?) = updateResults(speechBundle = results)


    override fun onError(errorCode: Int) {}

    private val recognizerIntent: Intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, application.packageName)
        putExtra(RecognizerIntent.EXTRA_PROMPT, "Talk")
        //putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true)
    }


    fun startListening(){
        speechRecognizer.startListening(recognizerIntent)
    }

    private fun reactToSpeech(speech: String){
        when(speech){
            "run" -> Log.w("App", "Running!")
            "stop" -> Log.w("App", "Stopped!")
            else -> {}
        }
    }

    override fun onReadyForSpeech(p0: Bundle?) {}
    override fun onBeginningOfSpeech() {}
    override fun onRmsChanged(p0: Float) {}
    override fun onBufferReceived(p0: ByteArray?) {}
    override fun onEvent(p0: Int, p1: Bundle?) {}
}

I don't know how to access the Application (or a Context) from this ViewModel so that I can use Google's speech service API. If someone knows how this can be done, please let me know; I've spent hours searching.


Solution

  • You can extend AndroidViewModel instead of ViewModel
    https://developer.android.com/reference/androidx/lifecycle/AndroidViewModel

    AndroidViewModel has access to the Application context.

    Or you can inject the application context into your ViewModel without extending AndroidViewModel, as shown here:
    https://stackoverflow.com/a/63122193/2877453
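For the first option, here is a minimal sketch of the question's ViewModel reworked to extend AndroidViewModel. With Hilt, the Application can be injected straight into the constructor; the SettingsRepository binding and the RecognitionListener overrides are assumed to be the same as in the question and are omitted here:

```kotlin
import android.app.Application
import android.speech.RecognitionListener
import android.speech.SpeechRecognizer
import androidx.lifecycle.AndroidViewModel
import dagger.hilt.android.lifecycle.HiltViewModel
import javax.inject.Inject

@HiltViewModel
class SettingsViewModel @Inject constructor(
    application: Application,                       // supplied by Hilt automatically
    private val settingsRepository: SettingsRepository
) : AndroidViewModel(application), RecognitionListener {

    // getApplication() is available because we extend AndroidViewModel
    private val speechRecognizer: SpeechRecognizer =
        SpeechRecognizer.createSpeechRecognizer(getApplication<Application>().applicationContext)
            .apply { setRecognitionListener(this@SettingsViewModel) }

    // ... RecognitionListener overrides and the rest of the class as in the question ...
}
```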
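For the second option, a sketch that keeps ViewModel as the base class and injects the application Context using Hilt's @ApplicationContext qualifier (from dagger.hilt.android.qualifiers). The application context is safe to hold in a ViewModel because it outlives any single Activity:

```kotlin
import android.content.Context
import android.speech.RecognitionListener
import android.speech.SpeechRecognizer
import androidx.lifecycle.ViewModel
import dagger.hilt.android.lifecycle.HiltViewModel
import dagger.hilt.android.qualifiers.ApplicationContext
import javax.inject.Inject

@HiltViewModel
class SettingsViewModel @Inject constructor(
    @ApplicationContext private val context: Context,  // application context, not an Activity
    private val settingsRepository: SettingsRepository
) : ViewModel(), RecognitionListener {

    private val speechRecognizer: SpeechRecognizer =
        SpeechRecognizer.createSpeechRecognizer(context).apply {
            setRecognitionListener(this@SettingsViewModel)
        }

    // ... RecognitionListener overrides and the rest of the class as in the question ...
}
```

This variant avoids the AndroidViewModel dependency entirely, which keeps the class slightly easier to unit-test since only a Context needs to be faked.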