We are running out of heap memory and also facing instability issues in our ELK stack; below is the configuration from the screenshot.
-Version 6.2.4
-Number of nodes: 5
-Data nodes: 3
-Indices: 6138
-Documents: 3,840,550,046
-Primary shards: 14,934
-Replica Shards: 14,934
-Disk Available: 25.98% (1TB/5TB)
-JVM Heap: 62.045% (46GB/74GB)
I know that I have to reduce the number of shards. We have been holding data since Jan 2019, although the 2019 indices are in a closed state.
I need help understanding how I can do the following:
1- re-indexing to reduce the number of shards of the old indices
2- downloading old indices and keeping them in an archive, to re-use later if and when required
3- we currently have daily index rotation; how can we change it to weekly/monthly indices, and how will that help?
Looking forward to some guidance, as ELK is new to me and I am stuck on this.
Thanks, Abhishek
1- re-indexing to reduce the number of shards of the old indices
A reindex is rather expensive; if you have more than one primary shard per index (on 6.x the default was 5), I'd start with a _shrink, which is much cheaper.
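As a rough sketch of the shrink flow (the index and node names here are placeholders, and the source shard count must be a multiple of the target):

```
# 1. Block writes and move a copy of every shard onto one node
PUT /logs-2019.01.01/_settings
{
  "settings": {
    "index.blocks.write": true,
    "index.routing.allocation.require._name": "shrink-node-1"
  }
}

# 2. Shrink 5 primaries down to 1
POST /logs-2019.01.01/_shrink/logs-2019.01.01-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
```

Once the shrunken index is green, you can delete the original and optionally alias the new one under the old name.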
2- downloading old indices and keeping them in an archive, to re-use later if and when required
Sounds like snapshot and restore, but that will be a slow and tedious approach.
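A minimal snapshot-and-restore sketch, assuming a shared filesystem repository (the repository name, path, and index pattern are examples; the path must be listed under `path.repo` on every node):

```
# Register a filesystem repository
PUT /_snapshot/archive
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/archive" }
}

# Snapshot the 2019 indices
PUT /_snapshot/archive/snapshot-2019?wait_for_completion=false
{
  "indices": "logstash-2019.*",
  "ignore_unavailable": true
}

# Later, restore a specific index when it is needed again
POST /_snapshot/archive/snapshot-2019/_restore
{
  "indices": "logstash-2019.01.01"
}
```

After the snapshot completes successfully, the live indices can be deleted to free heap and disk.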
3- we currently have daily index rotation; how can we change it to weekly/monthly indices, and how will that help?
The better approach would be a rollover, which offers more flexibility and also allows you to create evenly sized indices/shards. Our default for the Beats in 7.x is 50GB for a single-shard index.
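A rollover sketch (alias and index names are examples): you write to an alias, and roll over based on size or age instead of the calendar, so shards stay evenly sized.

```
# Bootstrap the first write index behind an alias
PUT /logs-000001
{
  "aliases": { "logs-write": {} }
}

# Call this periodically (e.g. from cron); a new index is created
# as soon as any condition is met
POST /logs-write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_size": "50gb"
  }
}
```

Your applications keep writing to `logs-write` and never need to know the concrete index name.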
More generally, your Elasticsearch version is very old. There are a lot of performance and stability improvements in the current 7.10 release, as well as features like Index Lifecycle Management (ILM), which would be the right solution for your kind of problem.
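To give an idea of what ILM automates once you are on 7.x (policy name and thresholds here are just examples): rollover in the hot phase, then deletion after a retention period, with no cron jobs or manual cleanup.

```
PUT /_ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

The policy is then attached to an index template, and Elasticsearch handles rollover and deletion for every matching index.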
Some additional notes: