The Amazon Lex FAQs mention that the parsed intent and slots can be sent back to the client, so that the business logic can live in the client. However, I am unable to find anything clear about this in the Lex documentation.
My use case: send text/voice data to Amazon Lex; Lex parses the intent and slots, then returns the JSON with the intent, slot, and context data to the client that made the request, rather than sending it to Lambda or another backend API endpoint.
Can anyone point out the right way/configuration to do this?
If I'm understanding you correctly, you want your client to receive the Lex response and handle it in the client itself rather than in Lambda or a backend API. If so, first make sure the intent's fulfillment is set to "Return parameters to client" in the Lex console, so that Lex hands the parsed result back to the caller instead of invoking a Lambda function. Then you can try the following implementation of Lex-Audio.
// This will handle the event when the mic button is clicked on your UI.
scope.audioClick = function () {
    // Cognito credentials for the Lex runtime service
    AWS.config.credentials = new AWS.CognitoIdentityCredentials(
        { IdentityPoolId: Settings.AWSIdentityPool },
        { region: Settings.AWSRegion }
    );
    AWS.config.region = Settings.AWSRegion;

    config = {
        lexConfig: { botName: Settings.BotName }
    };

    conversation = new LexAudio.conversation(config, function (state) {
        // State-change callback: update the UI placeholder as the
        // conversation moves between Passive, Listening, Sending, etc.
        scope.$apply(function () {
            if (state === "Passive") {
                scope.placeholder = Settings.PlaceholderWithMic;
            } else {
                scope.placeholder = state + "...";
            }
        });
    }, chatbotSuccess, function (error) {
        // Error callback: surface the error message in the UI
        audTextContent = error;
    }, function (timeDomain, bufferLength) {
        // Audio-data callback: unused here, but available for visualizations
    });
    conversation.advanceConversation();
};
The success function, which is called after Lex responds, looks like this:
chatbotSuccess = function (data) {
    // The Lex runtime response carries the parsed result directly
    var intent = data.intentName;
    var slots = data.slots;
    // Do what you need with this data
};
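With fulfillment set to "Return parameters to client", you can check dialogState to know when Lex has collected every required slot and is handing the result over to you. A minimal sketch of what that handling might look like (handleOrder is a hypothetical function standing in for your business logic):

chatbotSuccess = function (data) {
    if (data.dialogState === "ReadyForFulfillment") {
        // All required slots are filled; run your business logic client-side.
        handleOrder(data.intentName, data.slots);
    } else if (data.dialogState === "ElicitSlot" || data.dialogState === "ConfirmIntent") {
        // Lex still needs input from the user; show its prompt.
        scope.placeholder = data.message;
    } else if (data.dialogState === "Failed") {
        scope.placeholder = "Sorry, something went wrong.";
    }
};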
Hopefully that gives you some idea of what you need to do. For the Lex-Audio reference, there's a great post about it on the AWS Machine Learning Blog: https://aws.amazon.com/blogs/machine-learning/capturing-voice-input-in-a-browser/
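As a side note, if you only need text input, you can skip Lex-Audio entirely and call the Lex runtime directly with the AWS SDK for JavaScript; the response already contains the parsed intent and slots. A minimal sketch, assuming the same Settings object plus a Settings.BotAlias value that I've made up for illustration:

var lexruntime = new AWS.LexRuntime({ region: Settings.AWSRegion });
lexruntime.postText({
    botName: Settings.BotName,
    botAlias: Settings.BotAlias,  // assumed to be defined alongside BotName
    userId: "some-unique-user-id",
    inputText: "I'd like to book a hotel"
}, function (err, data) {
    if (err) { console.log(err); return; }
    // data.intentName, data.slots, and data.sessionAttributes come back
    // to the client directly, with no Lambda in between.
    console.log(data.intentName, data.slots, data.dialogState);
});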