
How to create ngrams in only forward direction in Elasticsearch?


Is it possible to create n-grams like this:

homework -> ho, hom, home, homew, homewo, homewor, homework

i.e. only in the forward direction? Currently it is generating all possible n-grams.


Solution

  • The edge_ngram tokenizer first breaks text down into words whenever it encounters one of a list of specified characters, then it emits N-grams of each word where the start of the N-gram is anchored to the beginning of the word.

    Refer to the official Elasticsearch documentation on the edge_ngram tokenizer for a detailed explanation of edge n-grams.

    Index settings:

    {
      "settings": {
        "analysis": {
          "analyzer": {
            "my_analyzer": {
              "tokenizer": "my_tokenizer"
            }
          },
          "tokenizer": {
            "my_tokenizer": {
              "type": "edge_ngram",
              "min_gram": 2,
              "max_gram": 10,
              "token_chars": [
                "letter",
                "digit"
              ]
            }
          }
        }
      }
    }
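
    To make the analyzer take effect at index time, it still has to be attached to a field in the index mapping. Below is a minimal sketch of a full index-creation request; the index name my-index and the field title are illustrative, the lowercase filter is an addition (not in the original settings) so that matching is case-insensitive, and "search_analyzer": "standard" follows the documentation's recommendation to apply edge n-grams only at index time:

    PUT /my-index
    {
      "settings": {
        "analysis": {
          "analyzer": {
            "my_analyzer": {
              "type": "custom",
              "tokenizer": "my_tokenizer",
              "filter": [
                "lowercase"
              ]
            }
          },
          "tokenizer": {
            "my_tokenizer": {
              "type": "edge_ngram",
              "min_gram": 2,
              "max_gram": 10,
              "token_chars": [
                "letter",
                "digit"
              ]
            }
          }
        }
      },
      "mappings": {
        "properties": {
          "title": {
            "type": "text",
            "analyzer": "my_analyzer",
            "search_analyzer": "standard"
          }
        }
      }
    }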
    

    Analyze API

    {
      "analyzer": "my_analyzer",
      "text": "Homework"
    }


    The following tokens will be generated:

    {
      "tokens": [
        {
          "token": "Ho",
          "start_offset": 0,
          "end_offset": 2,
          "type": "word",
          "position": 0
        },
        {
          "token": "Hom",
          "start_offset": 0,
          "end_offset": 3,
          "type": "word",
          "position": 1
        },
        {
          "token": "Home",
          "start_offset": 0,
          "end_offset": 4,
          "type": "word",
          "position": 2
        },
        {
          "token": "Homew",
          "start_offset": 0,
          "end_offset": 5,
          "type": "word",
          "position": 3
        },
        {
          "token": "Homewo",
          "start_offset": 0,
          "end_offset": 6,
          "type": "word",
          "position": 4
        },
        {
          "token": "Homewor",
          "start_offset": 0,
          "end_offset": 7,
          "type": "word",
          "position": 5
        },
        {
          "token": "Homework",
          "start_offset": 0,
          "end_offset": 8,
          "type": "word",
          "position": 6
        }
      ]
    }
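
    Note that the tokens above keep their capital H because this analyzer has no lowercase filter; with the lowercase filter from the index-creation sketch earlier, they would be emitted as ho, hom, home, and so on.

    Once the analyzer is attached to a field, a plain match query gives prefix-style matching, because every forward prefix was indexed as its own token. Here is a minimal sketch against the hypothetical my-index/title field from above; since the field's search_analyzer is standard, the query text itself is not split into n-grams:

    GET /my-index/_search
    {
      "query": {
        "match": {
          "title": "homew"
        }
      }
    }

    This matches documents whose title contains a word starting with "homew" (e.g. "homework"), since "homew" was indexed as one of the forward-only edge n-grams.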