My first idea for completing a LUIS call was to set the TurnContext, but most of its properties are read-only. I also wouldn't know how to create, by hand, the exact context that would be created from a user's input (mainly the text they typed) to pass through to the LuisRecognizer.
My second idea, from the waterfall step that calls LuisHelper(stepContext.Context), was to set this manually as well. You can't, because it too is read-only, and so is stepContext.Result...
So my question becomes: is there a way to feed the LuisRecognizer a phrase of text that can be appended to the user's answer?
Example... Suppose I ask the user what color car they are looking for. I know my intent is CarColor, so if the user says... Blue, then I want to append to that statement so it becomes "Customer would like a car color of Blue"... so that I can extract the entity, which would be Blue, and know that I am referring to the CarColor intent. Just to be clear on why I want to do this.
What would be a way that I can take a user's response, append text to it, and then send that as the phrase to the LuisRecognizer call?
Here is some code for reference:
private async Task<DialogTurnResult> ActStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    stepContext.Values["tester"] = "Travel to Chicago";
    stepContext.Result = "christian"; // this is what I tried - doesn't work because Result is read-only

    // Call LUIS and gather any potential booking details. (Note the TurnContext has the response to the prompt.)
    var bookingDetails = stepContext.Result != null
        ? await LuisHelper.ExecuteLuisQuery(Configuration, Logger, stepContext.Context, cancellationToken)
        : new BookingDetails();

    // In this sample we only have a single Intent we are concerned with. However, typically a scenario
    // will have multiple different Intents each corresponding to starting a different child Dialog.

    // Run the BookingDialog giving it whatever details we have from the LUIS call, it will fill out the remainder.
    return await stepContext.BeginDialogAsync(nameof(BookingDialog), bookingDetails, cancellationToken);
}
How to

You should be able to achieve this by setting the Text property of the Activity, which is under the Context property, like so:

stepContext.Context.Activity.Text = "The phrase that you want to pass through here";

Do this assignment BEFORE you call LuisHelper.ExecuteLuisQuery, otherwise your updated Text value won't be sent through.
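Applied to the ActStepAsync step from the question, the change could look like this. This is only a sketch: the template sentence is the example from the question, and you would adapt it to whatever intent you are targeting.

```csharp
private async Task<DialogTurnResult> ActStepAsync(WaterfallStepContext stepContext, CancellationToken cancellationToken)
{
    // The user's raw reply to the previous prompt, e.g. "Blue".
    var userReply = stepContext.Context.Activity.Text;

    // Rewrite the activity's Text BEFORE calling the LUIS helper, so the
    // recognizer sees the full templated phrase instead of the bare reply.
    stepContext.Context.Activity.Text = $"Customer would like a car color of {userReply}";

    var bookingDetails = await LuisHelper.ExecuteLuisQuery(Configuration, Logger, stepContext.Context, cancellationToken);

    return await stepContext.BeginDialogAsync(nameof(BookingDialog), bookingDetails, cancellationToken);
}
```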
Why this should work

LuisHelper.ExecuteLuisQuery(Configuration, Logger, stepContext.Context, cancellationToken) passes through stepContext.Context, and behind the scenes this context is passed into the RecognizeAsync call inside the ExecuteLuisQuery method. Furthermore, the recognizer variable is of type LuisRecognizer; the source code for this class is available here. The line that you are interested in is the one that shows the Text property of the turnContext being used as the utterance that is passed through.
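For reference, the ExecuteLuisQuery method in the CoreBot sample's LuisHelper looks roughly like this. This is a simplified sketch from the sample, so details (intent names, entity extraction) may differ in your version:

```csharp
public static async Task<BookingDetails> ExecuteLuisQuery(IConfiguration configuration, ILogger logger, ITurnContext turnContext, CancellationToken cancellationToken)
{
    var bookingDetails = new BookingDetails();
    try
    {
        // Build the LUIS recognizer from configuration values.
        var luisApplication = new LuisApplication(
            configuration["LuisAppId"],
            configuration["LuisAPIKey"],
            "https://" + configuration["LuisAPIHostName"]);
        var recognizer = new LuisRecognizer(luisApplication);

        // This is where the turnContext (and therefore its Activity.Text)
        // is handed to LUIS.
        var recognizerResult = await recognizer.RecognizeAsync(turnContext, cancellationToken);

        var (intent, score) = recognizerResult.GetTopScoringIntent();
        // ... populate bookingDetails from the recognized intent/entities here ...
    }
    catch (Exception e)
    {
        logger.LogWarning($"LUIS Exception: {e.Message} Check your LUIS configuration.");
    }

    return bookingDetails;
}
```

Since ExecuteLuisQuery simply forwards the turnContext you gave it, whatever you put in stepContext.Context.Activity.Text beforehand is exactly what LUIS scores.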
Source code explanation/Extra info
For future reference (in case the code or links change), a simplified version of the source code is:
public virtual async Task<RecognizerResult> RecognizeAsync(ITurnContext turnContext, CancellationToken cancellationToken)
=> await RecognizeInternalAsync(turnContext, null, null, null, cancellationToken).ConfigureAwait(false);
where RecognizeInternalAsync looks like:
private async Task<RecognizerResult> RecognizeInternalAsync(ITurnContext turnContext, LuisPredictionOptions predictionOptions, Dictionary<string, string> telemetryProperties, Dictionary<string, double> telemetryMetrics, CancellationToken cancellationToken)
{
    var luisPredictionOptions = predictionOptions == null ? _options : MergeDefaultOptionsWithProvidedOptions(_options, predictionOptions);

    BotAssert.ContextNotNull(turnContext);

    if (turnContext.Activity.Type != ActivityTypes.Message)
    {
        return null;
    }

    // !! THIS IS THE IMPORTANT LINE !!
    var utterance = turnContext.Activity?.AsMessageActivity()?.Text;

    RecognizerResult recognizerResult;
    LuisResult luisResult = null;

    if (string.IsNullOrWhiteSpace(utterance))
    {
        recognizerResult = new RecognizerResult
        {
            Text = utterance,
            Intents = new Dictionary<string, IntentScore>() { { string.Empty, new IntentScore() { Score = 1.0 } } },
            Entities = new JObject(),
        };
    }
    else
    {
        luisResult = await _runtime.Prediction.ResolveAsync(
            _application.ApplicationId,
            utterance,
            timezoneOffset: luisPredictionOptions.TimezoneOffset,
            verbose: luisPredictionOptions.IncludeAllIntents,
            staging: luisPredictionOptions.Staging,
            spellCheck: luisPredictionOptions.SpellCheck,
            bingSpellCheckSubscriptionKey: luisPredictionOptions.BingSpellCheckSubscriptionKey,
            log: luisPredictionOptions.Log ?? true,
            cancellationToken: cancellationToken).ConfigureAwait(false);

        recognizerResult = new RecognizerResult
        {
            Text = utterance,
            AlteredText = luisResult.AlteredQuery,
            Intents = LuisUtil.GetIntents(luisResult),
            Entities = LuisUtil.ExtractEntitiesAndMetadata(luisResult.Entities, luisResult.CompositeEntities, luisPredictionOptions.IncludeInstanceData ?? true),
        };

        LuisUtil.AddProperties(luisResult, recognizerResult);

        if (_includeApiResults)
        {
            recognizerResult.Properties.Add("luisResult", luisResult);
        }
    }

    // Log telemetry code
    await OnRecognizerResultAsync(recognizerResult, turnContext, telemetryProperties, telemetryMetrics, cancellationToken).ConfigureAwait(false);

    var traceInfo = JObject.FromObject(
        new
        {
            recognizerResult,
            luisModel = new
            {
                ModelID = _application.ApplicationId,
            },
            luisOptions = luisPredictionOptions,
            luisResult,
        });

    await turnContext.TraceActivityAsync("LuisRecognizer", traceInfo, LuisTraceType, LuisTraceLabel, cancellationToken).ConfigureAwait(false);

    return recognizerResult;
}