I'm seeing the word transcriptions, either in the browser or in the console, but I'm not seeing messages such as {'state': 'listening'}. More importantly, I'm not seeing results such as {"results": [{"alternatives": [{"transcript": "name the mayflower "}],"final": true}],"result_index": 0}.
I read the RecognizeStream documentation and tried this code:
stream.on('message', function(message) {
  console.log(message);
});
but that doesn't work. I also tried setting object_mode to both true and false, but the output was the same.
Here's the full code that I'm using:
document.querySelector('#button').onclick = function () {
  var stream = WatsonSpeech.SpeechToText.recognizeMicrophone({
    token: token,
    model: 'en-US_BroadbandModel',
    keywords: ["Colorado"],
    keywords_threshold: 0.50,
    word_confidence: true,
    // outputElement: '#output' // send text to browser instead of console
    object_mode: false
  });
  stream.setEncoding('utf8'); // get text instead of Buffers for 'data' events
  stream.on('data', function(data) { // send text to console instead of browser
    console.log(data);
  });
  stream.on('error', function(err) {
    console.log(err);
  });
  document.querySelector('#stop').onclick = function() {
    stream.stop();
  };
};
The recognizeMicrophone() method is a helper that chains together a number of streams. The message event is fired on one of the streams in the middle, but you can get access to that stream at stream.recognizeStream; a reference to it is always attached to the last stream in the chain in order to support cases like this. So, in your code, it should look something like this:
stream.recognizeStream.on('message', function(frame, data) {
  console.log('message', frame, data);
});
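With that listener attached, the state frames such as {'state': 'listening'} that you were looking for should show up in the console as they arrive.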
However, that event is mainly there for debugging. The results JSON should be emitted on the data event if you set objectMode: true and don't call stream.setEncoding('utf8').
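Putting that together, here's a minimal sketch of your original handler with that change applied (same token, model, and keyword options as your code; the only differences are the objectMode flag and the removed setEncoding call):

document.querySelector('#button').onclick = function () {
  var stream = WatsonSpeech.SpeechToText.recognizeMicrophone({
    token: token,
    model: 'en-US_BroadbandModel',
    keywords: ["Colorado"],
    keywords_threshold: 0.50,
    word_confidence: true,
    objectMode: true // emit parsed result objects on 'data' events
  });
  // note: no stream.setEncoding('utf8') here - that would switch back to plain text
  stream.on('data', function(data) {
    console.log(data); // e.g. {"results": [...], "result_index": 0}
  });
  stream.on('error', function(err) {
    console.log(err);
  });
  document.querySelector('#stop').onclick = function() {
    stream.stop();
  };
};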
(This is somewhat different from the Watson Node.js SDK, if you're familiar with its behavior. There are plans to unify the two, but never enough time...)