Tags: elasticsearch, full-text-search, n-gram

Elasticsearch - searching wildcard using n-gram


I have a requirement where the user types in a few characters and expects results similar to a SQL LIKE query. I'm using n-grams because I've seen many people recommend avoiding wildcard searches. However, the returned data is sometimes extremely irrelevant: it contains the searched characters, but jumbled up. I enabled score tracking, but it doesn't help. Does anyone have a suggestion? Thanks.
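
For reference, the wildcard approach I'm avoiding would look something like this (just a sketch against the keyword subfield, using the rollover alias from my settings; the leading wildcard is what makes it expensive):

GET /audit-log-alias-test/_search
{
    "query": {
        "wildcard": {
            "resourceCode.keyword": {
                "value": "*0004*"
            }
        }
    }
}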

Updates

Below are the index settings:

"settings": {
    "index": {
        "lifecycle": {
            "name": "audit_log_policy",
            "rollover_alias": "audit-log-alias-test"
        },
        "analysis": {
            "analyzer": {
                "abi_analyzer": {
                    "tokenizer": "n_gram_tokenizer"
                }
            },
            "tokenizer": {
                "n_gram_tokenizer": {
                    "token_chars": [
                        "letter",
                        "digit"
                    ],
                    "min_gram": "3",
                    "type": "ngram",
                    "max_gram": "10"
                }
            }
        },
        "number_of_shards": "1",
        "number_of_replicas": "1",
        "max_ngram_diff": "10",
        "max_result_window": "100000"
    }
}
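
With min_gram 3 and max_gram 10, this tokenizer emits every 3- to 10-character run of letters and digits. You can check what it produces with the _analyze API (a sketch; the concrete index name is taken from the sample hit further down):

POST /user-interaction-audit-log-test-000001/_analyze
{
    "analyzer": "abi_analyzer",
    "text": "0004"
}

This returns the tokens 000, 0004 and 004.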

And here's how the fields are mapped:

"resourceCode": {
    "type": "text",
    "fields": {
        "ngram": {
            "analyzer": "abi_analyzer",
            "type": "text"
        },
        "keyword": {
            "ignore_above": 256,
            "type": "keyword"
        }
    }
},
"logDetail": {
    "type": "text",
    "fields": {
        "ngram": {
            "analyzer": "abi_analyzer",
            "type": "text"
        },
        "keyword": {
            "ignore_above": 8191,
            "type": "keyword"
        }
    }
}
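
To see what actually gets indexed for a given subfield, _analyze also accepts a field name and picks up the analyzer from the mapping (again a sketch with the assumed index name):

POST /user-interaction-audit-log-test-000001/_analyze
{
    "field": "resourceCode.ngram",
    "text": "Htest11211"
}

This returns every 3- to 10-character substring; note the tokens keep their original case, since the analyzer has no lowercase filter.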

And here's how I build the search clause (client-side JavaScript):

query_string: {
    fields: ["logDetail.ngram", "resourceCode.ngram"],
    query: data.searchInput.toLowerCase(),
}
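
That JavaScript fragment expands to a clause like the following (a sketch; the sample request below uses an equivalent multi_match instead):

{
    "query_string": {
        "fields": ["logDetail.ngram", "resourceCode.ngram"],
        "query": "0004"
    }
}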

Samples

Here's the sample query:

{
    "query": {
        "bool": {
            "must": [
                {
                    "terms": {
                        "organizationIds": [
                            ...
                        ]
                    }
                },
                {
                    "range": {
                        "createdAt": {
                            "gte": "2020-08-11T17:00:00.000Z",
                            "lte": "2020-08-31T16:59:59.999Z"
                        }
                    }
                },
                {
                    "multi_match": {
                        "fields": [
                            "logDetail.ngram",
                            "resourceCode.ngram"
                        ],
                        "query": "0004"
                    }
                }
            ]
        }
    },
    "sort": [
        {
            "createdAt": "desc"
        }
    ],
    "track_scores": true,
    "size": 20,
    "from": 0
}

And here's an example of an irrelevant hit:

{
    "_index": "user-interaction-audit-log-test-000001",
    "_type": "_doc",
    "_id": "ae325b4a6b45442cbf8a44d595e9a747",
    "_score": 3.4112902,
    "_source": {
        "logOperation": "UPDATE",
        "resource": "CUSTOMER",
        "resourceCode": "Htest11211",
        "logDetail": "<div>Updated Mobile Number from <var isolate><b>+84966123451000<\/b><\/var> to <var isolate><b>+849<\/b><\/var><\/div>",
        "organizationIds": [
            "5e72ea0e4019f01fad0d91c9",
        ],
        "createdAt": "2020-08-20T08:13:36.026Z",
        "username": "test_user",
        "module": "PARTNER",
        "component": "WEB_APP"
    },
    "sort": [
        1597911216026
    ]
}
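
A useful way to see why this document matched at all is the _explain API (a sketch using the _id from the hit above and the multi_match clause from the query):

GET /user-interaction-audit-log-test-000001/_explain/ae325b4a6b45442cbf8a44d595e9a747
{
    "query": {
        "multi_match": {
            "fields": [
                "logDetail.ngram",
                "resourceCode.ngram"
            ],
            "query": "0004"
        }
    }
}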

Solution

The issue is that you haven't specified any search analyzer. So your search input also gets analyzed by the abi_analyzer, and 0004 gets tokenized into 000, 0004 and 004. The first token, 000, matches one of the indexed tokens of the logDetail.ngram field (the phone number +84966123451000 produces it), which is why the document above comes back.

What you need to do is specify the standard analyzer as the search_analyzer for both fields in your mapping, so that your search input is not n-grammed but simply matched against the tokens that were indexed:

    "resourceCode": {
        "type": "text",
        "fields": {
            "ngram": {
                "analyzer": "abi_analyzer",
                "search_analyzer": "standard",           <--- here
                "type": "text"
            },
            "keyword": {
                "ignore_above": 256,
                "type": "keyword"
            }
        }
    },
    "logDetail": {
        "type": "text",
        "fields": {
            "ngram": {
                "analyzer": "abi_analyzer",
                "search_analyzer": "standard",           <--- here
                "type": "text"
            },
            "keyword": {
                "ignore_above": 8191,
                "type": "keyword"
            }
        }
    }
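
If I'm not mistaken, search_analyzer is one of the few mapping parameters that can be changed on an existing field, but only while the index is closed. A sketch of applying the change in place (index name taken from the sample hit; the same change would be repeated for resourceCode):

    POST /user-interaction-audit-log-test-000001/_close

    PUT /user-interaction-audit-log-test-000001/_mapping
    {
        "properties": {
            "logDetail": {
                "type": "text",
                "fields": {
                    "ngram": {
                        "type": "text",
                        "analyzer": "abi_analyzer",
                        "search_analyzer": "standard"
                    }
                }
            }
        }
    }

    POST /user-interaction-audit-log-test-000001/_open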
    

If you don't want to change your mapping, you can also specify the search analyzer at query time:

    {
        "multi_match": {
            "fields": [
                "logDetail.ngram",
                "resourceCode.ngram"
            ],
            "analyzer": "standard",          <--- here
            "query": "0004"
        }
    }
    

UPDATE: since the search input is lowercased (data.searchInput.toLowerCase()), the indexed tokens need to be lowercased as well, otherwise values like Htest11211 can never match. Add a lowercase filter to the analyzer:

    "analyzer": {
        "abi_analyzer": {
            "tokenizer": "n_gram_tokenizer",
            "filter": ["lowercase"]
        }
    },
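
Note that unlike the search_analyzer change, changing the index-time analyzer only affects documents indexed from then on, so existing data has to be reindexed, e.g. with the _reindex API (destination index name is hypothetical):

    POST /_reindex
    {
        "source": {
            "index": "user-interaction-audit-log-test-000001"
        },
        "dest": {
            "index": "user-interaction-audit-log-test-000002"
        }
    }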