I've built an autocomplete service with Elasticsearch 2.x, and one of my auto-completable values is "3M". I've configured the fuzziness to AUTO, and my mapping is just the default:
"mapping": {
"type": "completion",
"analyzer": "simple",
"payloads": false,
"preserve_separators": true,
"preserve_position_increments": true,
"max_input_length": 50
}
Based on this documentation, the analyzer should be simple, and fuzziness AUTO allows at most 2 spelling errors.
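For reference, this is roughly the suggest request I'm sending via the Elasticsearch 2.x _suggest API (the index and field names here are placeholders):

POST /products/_suggest
{
  "name_suggest": {
    "text": "1000000M",
    "completion": {
      "field": "name_suggest",
      "fuzzy": {
        "fuzziness": "AUTO"
      }
    }
  }
}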
Here's the problem: whenever I type "1000000M", it still auto-completes to "3M", even though "1000000" and "3" are far beyond the limit of 2 spelling errors.
Does Elasticsearch know that 1000000 and 3 are both numbers and assume I'm looking for {a number}M?
I would like the numbers to count as actual string spelling errors; this behavior is not what I want.
Even when I set fuzziness to ZERO, it still corrects 1000000M to 3M.
The solution was fairly simple. All I had to do was change my analyzer from simple to keyword. The underlying cause is that the simple analyzer splits on non-letter characters and discards digits, so both "3M" and "1000000M" are reduced to the single token "m"; they match with zero edits, which is why even fuzziness ZERO still "corrected" the input. One thing to keep in mind, though, is that the keyword analyzer doesn't lowercase its input, which means your auto-completions will be case sensitive unless you .toLowerCase() it all yourself.
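For reference, the fixed mapping only changes the analyzer line:

"mapping": {
  "type": "completion",
  "analyzer": "keyword",
  "payloads": false,
  "preserve_separators": true,
  "preserve_position_increments": true,
  "max_input_length": 50
}

If you'd rather not lowercase on the client side, a custom analyzer that pairs the keyword tokenizer with the lowercase token filter should keep the same matching behavior while staying case insensitive. This is just a sketch; the index, type, analyzer, and field names are mine:

PUT /products
{
  "settings": {
    "analysis": {
      "analyzer": {
        "keyword_lowercase": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "product": {
      "properties": {
        "name_suggest": {
          "type": "completion",
          "analyzer": "keyword_lowercase"
        }
      }
    }
  }
}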