It seems as though the latest NEST client doesn't support the token_limit parameter for the phrase suggester. How can I extend NEST to add it without having to write a raw query?
The latest NEST client is missing token_limit for the phrase suggester; I'll open a PR to add it.
In the meantime, you could support it and still use the fluent lambda API by deriving from PhraseSuggesterDescriptor<T> and exposing the token limit on your own interface:
private static void Main()
{
    var client = new ElasticClient();

    var searchResponse = client.Search<Question>(s => s
        .Size(0)
        .Suggest(su => su
            .Phrase("suggest_phrase", p => new MyPhraseSuggesterDescriptor<Question>()
                // call TokenLimit() first; see the note below
                .TokenLimit(5)
                .Field(f => f.Title)
                .Highlight(h => h
                    .PreTag("<em>")
                    .PostTag("</em>")
                )
                .Text("dotnot entrity framework")
            )
        )
    );
}
public interface IMyPhraseSuggester : IPhraseSuggester
{
    [PropertyName("token_limit")]
    int? TokenLimit { get; set; }
}

public class MyPhraseSuggesterDescriptor<T> : PhraseSuggesterDescriptor<T>, IMyPhraseSuggester where T : class
{
    // explicit interface implementation keeps the property off the public fluent surface
    int? IMyPhraseSuggester.TokenLimit { get; set; }

    public MyPhraseSuggesterDescriptor<T> TokenLimit(int tokenLimit)
    {
        ((IMyPhraseSuggester)this).TokenLimit = tokenLimit;
        return this;
    }
}
which serializes to
{
  "size": 0,
  "suggest": {
    "suggest_phrase": {
      "text": "dotnot entrity framework",
      "phrase": {
        "field": "title",
        "highlight": {
          "pre_tag": "<em>",
          "post_tag": "</em>"
        },
        "token_limit": 5
      }
    }
  }
}
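(The Question document type used in the example isn't shown above; I'm assuming any minimal POCO with a string Title property, for instance:)

// minimal document type assumed by the example; only Title is needed for the suggester field
public class Question
{
    public string Title { get; set; }
}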
NOTE that the TokenLimit() call is made before any other calls, because the other methods return PhraseSuggesterDescriptor<T> and not MyPhraseSuggesterDescriptor<T>. You could redefine all of the IPhraseSuggester methods on MyPhraseSuggesterDescriptor<T> to avoid this API quirk, but that's probably more effort than it's worth for the moment :)
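If you did want to go that route, the shape of the workaround would be something like the sketch below: hide each base fluent method with a new one that returns the derived descriptor. Only Field() is shown here (assuming the expression-based overload used in the example above); Highlight(), Text(), and the rest would follow the same pattern.

using System;
using System.Linq.Expressions;
using Nest;

public class MyPhraseSuggesterDescriptor<T> : PhraseSuggesterDescriptor<T>, IMyPhraseSuggester where T : class
{
    int? IMyPhraseSuggester.TokenLimit { get; set; }

    public MyPhraseSuggesterDescriptor<T> TokenLimit(int tokenLimit)
    {
        ((IMyPhraseSuggester)this).TokenLimit = tokenLimit;
        return this;
    }

    // hide the base Field() so the chain keeps returning the derived descriptor,
    // which lets TokenLimit() appear anywhere in the chain
    public new MyPhraseSuggesterDescriptor<T> Field(Expression<Func<T, object>> path)
    {
        base.Field(path);
        return this;
    }
}

With that in place, .Field(f => f.Title).TokenLimit(5) would compile in either order.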