What happens when we try to ingest more documents into a 'Lucene' index past its maximum limit of 2,147,483,519 documents?
I have read that performance starts to degrade as we approach 2 billion documents, but does 'Lucene' simply stop accepting new documents once that limit is reached?
Also, how does 'Elasticsearch' handle the same scenario for one of its shards when its document limit is reached?
Every Elasticsearch shard is, under the hood, a Lucene index, so this limit applies to each Elasticsearch shard as well, and based on this Lucene issue it looks like Lucene simply stops indexing further documents once the limit is hit, rejecting them with an exception rather than silently dropping them.
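
To make that concrete, here is a minimal sketch of guarding writes against that hard ceiling, assuming a recent Lucene dependency (8.x/9.x, where `IndexWriter.getDocStats()` and `ByteBuffersDirectory` are available); the class and field names are only illustrative:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class MaxDocsGuard {
    public static void main(String[] args) throws Exception {
        Directory dir = new ByteBuffersDirectory();
        IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));

        // IndexWriter.MAX_DOCS is the hard per-index ceiling (2,147,483,519).
        System.out.println("Hard limit per Lucene index: " + IndexWriter.MAX_DOCS);

        Document doc = new Document();
        doc.add(new StringField("id", "1", Field.Store.YES));

        // Guard before indexing: once maxDoc reaches MAX_DOCS, addDocument
        // fails with an exception instead of silently dropping the document.
        if (writer.getDocStats().maxDoc < IndexWriter.MAX_DOCS) {
            writer.addDocument(doc);
        } else {
            // At this point the only options are deleting/merging away documents
            // or writing to a different index (i.e. another shard in Elasticsearch).
            System.err.println("Index is full; route this document elsewhere.");
        }

        writer.close();
        dir.close();
    }
}
```

This is also why Elasticsearch indices that are expected to grow that large are usually created with more primary shards, or rolled over into new indices, so that no single shard ever gets close to the Lucene ceiling.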
Performance degradation depends on several factors: the size of the documents, the JVM heap allocated to the Elasticsearch process (~32 GB is the practical upper limit, so the JVM can keep using compressed object pointers), the available file system cache (which Lucene relies on heavily), the number of CPUs, network bandwidth, and so on.
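
If you want to keep an eye on how close individual shards are getting to the limit, one way (sketched here assuming a cluster reachable on localhost:9200 without authentication; adjust the URL and security settings for your setup) is to poll the `_cat/shards` API, which reports the per-shard document count:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ShardDocWatcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical local cluster endpoint; change host/port/auth as needed.
        String url = "http://localhost:9200/_cat/shards?h=index,shard,prirep,docs&format=json";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Each entry reports the Lucene-level doc count for one shard copy;
        // anything creeping toward ~2.1 billion means the index needs more
        // primary shards or a rollover before writes start failing.
        System.out.println(response.body());
    }
}
```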