I went through the tutorial and other questions but found no documentation on how to point the service at a pre-recorded audio file and send it for transcription. I came across this code in the tutorial:
curl -X POST -u <username>:<password> \
  --header "Content-Type: audio/flac" \
  --header "Transfer-Encoding: chunked" \
  --data-binary @<path>0001.flac \
  "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true"
Can I do something similar with the Android SDK, which currently implements the WebSocket interface?
I came across this project by mihui on GitHub, which makes a few slight modifications to watson-developer-cloud:master. In addition to the AudioCaptureThread, mihui's project has a FileCaptureThread that reads bytes from the audio file and writes them to the websocket. This served my purpose of transcribing audio files. Please check this thread for details.
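The core idea behind a FileCaptureThread is simple: instead of pulling buffers from the microphone, read the audio file in fixed-size chunks and write each chunk to the open websocket as a binary frame. Below is a minimal, self-contained sketch of that read-and-forward loop. Note the `ChunkSink` interface is a hypothetical stand-in for the SDK's websocket send call, and the chunk size of 4096 is an arbitrary assumption, not something specified by the Watson SDK:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FileChunker {

    /** Hypothetical stand-in for the websocket's binary send method. */
    interface ChunkSink {
        void send(byte[] chunk) throws IOException;
    }

    /**
     * Reads the stream in fixed-size chunks and forwards each chunk to the
     * sink, mirroring what a FileCaptureThread does with an audio file.
     */
    static void stream(InputStream in, int chunkSize, ChunkSink sink)
            throws IOException {
        byte[] buf = new byte[chunkSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            // Copy so the final partial chunk only carries the bytes read.
            sink.send(Arrays.copyOf(buf, n));
        }
    }

    public static void main(String[] args) throws IOException {
        // Fake 10,000-byte payload standing in for a FLAC file on disk;
        // in the real thread this would be a FileInputStream.
        byte[] fakeAudio = new byte[10_000];
        List<byte[]> sent = new ArrayList<>();
        stream(new ByteArrayInputStream(fakeAudio), 4096, sent::add);
        System.out.println(sent.size()); // 4096 + 4096 + 1808 = 3 chunks
    }
}
```

In the real thread you would wrap this loop in a `Runnable`, open a `FileInputStream` on the chosen path, and have `ChunkSink.send` delegate to the websocket client, then signal end-of-stream when the file is exhausted so the service finalizes the transcript.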