I'm inserting data into InfluxDB using batch points via the Java API (which uses the HTTP API under the hood). After some time, this exception is raised:
java.lang.RuntimeException: {"error":"partial write: max-values-per-tag limit exceeded (100010/100000):
According to the InfluxDB docs, this parameter prevents high-cardinality data from being written to InfluxDB before the problem can be fixed.
I can set it to 0 to make the exception go away, but I don't clearly understand what "high cardinality data" is. What's wrong with inserting high-cardinality data into InfluxDB? I'm going to insert millions of unique values and I need them indexed. Do I need to review my data design?
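For reference, this is roughly how the limit can be raised or disabled in the `[data]` section of `influxdb.conf` (a sketch; check your InfluxDB version's documentation for the exact defaults):

```toml
[data]
  # Maximum number of distinct values allowed per tag key.
  # Default is 100000; 0 disables the limit entirely.
  max-values-per-tag = 0
```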
InfluxDB keeps an in-memory index for tags: the more distinct tag values you have (i.e. the higher the cardinality of the data), the more memory InfluxDB requires.
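The usual fix is to store unbounded unique values (IDs, UUIDs, session tokens) as fields rather than tags, since fields are stored but not indexed. A minimal sketch of the difference using InfluxDB line protocol strings (the measurement name `requests` and the `requestId`/`duration` keys are hypothetical):

```java
// Hypothetical example: the same point serialized two ways in InfluxDB
// line protocol. Tag values are indexed in memory and count toward
// max-values-per-tag; field values are not indexed.
public class LineProtocolExample {

    // Unique request IDs as a TAG: every new ID grows the in-memory
    // index and eventually trips the max-values-per-tag limit.
    static String asTag(String requestId, double duration, long tsNanos) {
        return String.format("requests,requestId=%s duration=%s %d",
                requestId, duration, tsNanos);
    }

    // Unique request IDs as a FIELD: stored but not indexed, so series
    // cardinality stays bounded by the remaining (low-cardinality) tags.
    // Note string field values are double-quoted in line protocol.
    static String asField(String requestId, double duration, long tsNanos) {
        return String.format("requests requestId=\"%s\",duration=%s %d",
                requestId, duration, tsNanos);
    }

    public static void main(String[] args) {
        long ts = 1500000000000000000L;
        System.out.println(asTag("req-42", 0.13, ts));
        System.out.println(asField("req-42", 0.13, ts));
    }
}
```

The trade-off: you can no longer `GROUP BY` or filter efficiently on the value, but millions of unique values no longer exhaust memory. Keep tags for attributes with a small, bounded set of values (host, region, status) and put everything unbounded into fields.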