I am setting up Filebeat on my Linux server. After the setup completed, the indices, index templates, and the index pattern were created. Documents are also coming in from Filebeat to Elasticsearch, but when I try to view the logs in the Discover section of Kibana, I get the error below.
search_phase_execution_exception
all shards failed
Error
at Fetch._callee3$ (https://demo.business.com/logs/36136/bundles/core/core.entry.js:6:59535)
at l (https://demo.business.com/logs/36136/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:380:982071)
at Generator._invoke (https://demo.business.com/logs/36136/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:380:981824)
at forEach.e.<computed> [as next] (https://demo.business.com/logs/36136/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:380:982428)
at fetch_asyncGeneratorStep (https://demo.business.com/logs/36136/bundles/core/core.entry.js:6:52652)
at _next (https://demo.business.com/logs/36136/bundles/core/core.entry.js:6:52968)
All the shards are green as well.
Memory and disk space on the Elasticsearch pods are also fine.
Note: Discover works for all the other indices; only the newly created one fails.
I have also tried deleting and recreating the index, but it still doesn't work. (Shard status was checked with the queries shown below.)
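For reference, this is roughly how shard health was confirmed in Dev Tools; the filebeat-* pattern is just a placeholder for the actual index name:

GET _cluster/health?filter_path=status,active_shards,unassigned_shards
GET _cat/shards/filebeat-*?v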
Thanks @llermally.
I found the solution to this problem.
Go to the top right -> Inspect, run the Elasticsearch query in Dev Tools, or check the response of the failed request in the browser's Network tab to get a more accurate error.
For me, the actual error was:
Trying to retrieve too many docvalue_fields. Must be less than or equal to: [200] but was [208].
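If you want to confirm the limit that is actually in effect on your index before changing anything, a quick check in Dev Tools looks like this (the filebeat-* pattern is just an example, adjust it to your index):

GET filebeat-*/_settings?include_defaults=true&filter_path=**.max_docvalue_fields_search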
I resolved this issue by adding the index.max_docvalue_fields_search setting in the /etc/filebeat/filebeat.yml file:
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  index.max_docvalue_fields_search: 300
  #index.codec: best_compression
  #_source.enabled: false
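One caveat: setup.template.settings only affects indices created after the updated template has been loaded (e.g. after restarting Filebeat or rerunning filebeat setup, depending on your version). For an index that already exists, the same setting can be applied directly in Dev Tools, since index.max_docvalue_fields_search is a dynamic index setting; the filebeat-* pattern below is just an example:

PUT filebeat-*/_settings
{
  "index": {
    "max_docvalue_fields_search": 300
  }
}

After that, refreshing Discover should load the documents without the shard failure.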