I have some logs with the following format (I changed the IPs from public to private, but you get the idea):
192.168.0.1 [20/Nov/2019:16:09:28 +0000] GET /some_path HTTP/1.1 200 2 2
192.168.0.2 [20/Nov/2019:16:09:28 +0000] GET /some_path HTTP/1.1 200 2 2
I then parse these logs using the following grok pattern, together with a geoip filter:
filter {
  grok {
    match => { "message" => "%{IPORHOST:clientip} \[%{HTTPDATE:timestamp}\] %{WORD:method} %{URIPATHPARAM:request} %{DATA:httpversion} %{NUMBER:response} %{NUMBER:duration}" }
  }
  geoip {
    source => "clientip"
  }
}
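For the first sample line above, the pattern extracts fields like the following, and the geoip filter then adds a geoip object (including geoip.location) derived from clientip:

clientip:    192.168.0.1
timestamp:   20/Nov/2019:16:09:28 +0000
method:      GET
request:     /some_path
httpversion: HTTP/1.1
response:    200
duration:    2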
On my output section, I have the following code:
else if "host.name" in [host][name]{ #if statement with the hostname
elasticsearch {
hosts => "localhost:9200"
manage_template => false
index => "mms18-%{+YYYY.MM.dd}"
user => "admin-user"
password => "admin-password"
}
}
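The %{+YYYY.MM.dd} sprintf reference in the index option is resolved from each event's @timestamp, so events are routed to daily indexes such as mms18-2019.11.20.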
The problem is that when I go to Kibana, geoip.location is mapped as an object, so I cannot use it on a map dashboard. Since the index name changes daily, I cannot manually put the correct geoip mapping, since I would have to do it every day.
One solution I thought of that partially solves the problem is removing the date from the index name in the Logstash output, so it uses a constant index of "mms18", and then running this from the Kibana Dev Tools console:
PUT mms18
{
  "mappings": {
    "properties": {
      "geoip": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}
However, this is not ideal, since I want to keep the option of listing all the indexes with their respective dates and then choosing which to delete and which to keep. Is there any way to achieve the correct mapping while also preserving the daily indexes?
Any help would be appreciated.
Use an index template (with an index_patterns value like "mms18-*", matching your daily index names) that maps geoip.location as a geo_point. The template is applied automatically every time a new daily index is created, so each mms18-YYYY.MM.dd index gets the correct mapping. This is also why the mapping is wrong in the first place: the default template that normally maps geoip.location only covers logstash-* indexes, and with manage_template => false Logstash does not install any template at all.
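A minimal sketch, assuming Elasticsearch 7.x and the legacy _template API (the template name mms18 is arbitrary; on 7.8+ you could use the newer composable _index_template API instead, and on 6.x the properties would need to be nested under a mapping type):

PUT _template/mms18
{
  "index_patterns": ["mms18-*"],
  "mappings": {
    "properties": {
      "geoip": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}

Note that a template only affects indexes created after it is installed, so the next day's index will pick it up, but existing indexes keep their object mapping until they are reindexed.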