Here is what my docs look like.
{
"Summary": "The One Way You're Putting Pressure on Your Partner Without Realizing It=20",
"Industry" : "Lifestyle and Fitness",
"Name": "Kali Coleman",
"Email" : "query-bixh@helpareporter.net",
"Media Outlet": "Best Life Online"
},
{
"Summary": "The One Way You're Putting Pressure on",
"Industry" : "High Tech",
"Name": "John Smith",
"Email" : "query-tech@helpareporter.net",
"Media Outlet": "Anonymous"
}
I want to count the documents for each distinct value of the "Industry" field. Here is what I want as output.
{
"key": "Lifestyle and Fitness",
"count": 1200
},
{
"key": "High Tech",
"count": 590
}
I found a similar post, ElasticSearch count multiple fields grouped by, except that I do not need to filter. I tried it in my Kibana console and got the following error.
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [Industry] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
}
]
Please let me know if anyone knows the solution to this.
Thanks
You can use a terms aggregation just like in the example, and you can do so without a filter. The error means your Industry field is mapped as text, which cannot be aggregated on directly, so you first need a keyword version of the field.
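If the index was created with Elasticsearch's default dynamic mapping, a keyword sub-field named Industry.keyword already exists and nothing needs to change. Otherwise, an explicit mapping would look something like this (a sketch only; index_name is the same placeholder as in the query below, ignore_above: 256 is just the usual default, and on versions older than 7 the mapping needs a type name):
PUT index_name
{
  "mappings": {
    "properties": {
      "Industry": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword", "ignore_above": 256 }
        }
      }
    }
  }
}
With the keyword field in place, you can run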
GET index_name/_search
{
"size": 0,
"aggs": {
"by_industry": {
"terms": {
"field": "Industry.keyword"
}
}
}
}
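With size set to 0, the response contains only the aggregation, and its shape maps directly onto the output you want. One difference: Elasticsearch names the per-bucket count doc_count, not count. A sketch of the relevant part of the response, with other response fields omitted and the numbers taken from your desired output:
{
  ...
  "aggregations": {
    "by_industry": {
      "buckets": [
        { "key": "Lifestyle and Fitness", "doc_count": 1200 },
        { "key": "High Tech", "doc_count": 590 }
      ]
    }
  }
}
Also note that a terms aggregation returns only the top 10 buckets by default; if you have more industries than that, add a "size" parameter inside the terms block.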