I have been trying to fix the issue below, without any success (Logstash 2.1, Elasticsearch 2.1, Kibana 4.3.1).
This is my logstash.conf file:
input {
  file {
    path => ["/var/log/network.log"]
    start_position => "beginning"
    type => "syslog"
    tags => [ "netsyslog" ]
  }
} #end input block
########################################
filter {
  if [type] == "syslog" {
    # Split the syslog part and Cisco tag out of the message
    grok {
      match => ["message", "%{CISCO_TAGGED_SYSLOG} %{GREEDYDATA:cisco_message}"]
    }
    # Parse the syslog severity and facility
    #syslog_pri { }
    # Parse the date from the "timestamp" field to the "@timestamp" field
    # 2015-05-01T00:00:00+02:00 is ISO8601
    grok {
      match => ["message", "%{TIMESTAMP_ISO8601:timestamp}"]
    }
    date {
      #2015-05-01T00:00:00+03:00
      match => ["timestamp",
        "yyyy-MM-dd'T'HH:mm:ssZ"
        # "yyyy MM dd HH:mm:ss",
      ]
      #timezone => "Asia/Kuwait"
    }
    # Clean up redundant fields if parsing was successful
    if "_grokparsefailure" not in [tags] {
      mutate {
        rename => ["cisco_message", "message"]
        remove_field => ["timestamp"]
      }
    }
    # Extract fields from each of the detailed message types
    grok {
      match => [
        "message", "%{CISCOFW106001}",
        "message", "%{CISCOFW106006_106007_106010}",
        "message", "%{CISCOFW106014}",
        "message", "%{CISCOFW106015}",
        "message", "%{CISCOFW106021}",
        "message", "%{CISCOFW106023}",
        "message", "%{CISCOFW106100}",
        "message", "%{CISCOFW110002}",
        "message", "%{CISCOFW302010}",
        "message", "%{CISCOFW302013_302014_302015_302016}",
        "message", "%{CISCOFW302020_302021}",
        "message", "%{CISCOFW305011}",
        "message", "%{CISCOFW313001_313004_313008}",
        "message", "%{CISCOFW313005}",
        "message", "%{CISCOFW402117}",
        "message", "%{CISCOFW402119}",
        "message", "%{CISCOFW419001}",
        "message", "%{CISCOFW419002}",
        "message", "%{CISCOFW500004}",
        "message", "%{CISCOFW602303_602304}",
        "message", "%{CISCOFW710001_710002_710003_710005_710006}",
        "message", "%{CISCOFW713172}",
        "message", "%{CISCOFW733100}"
      ]
    }
  }
  if [dst_ip] and [dst_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
    geoip {
      source => "dst_ip"
      database => "/opt/logstash/vendor/GeoLiteCity.dat" ### Change me to location of GeoLiteCity.dat file
      target => "dst_geoip"
    }
  }
  if [src_ip] and [src_ip] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
    geoip {
      source => "src_ip"
      database => "/opt/logstash/vendor/GeoLiteCity.dat" ### Change me to location of GeoLiteCity.dat file
      target => "src_geoip"
    }
  }
  mutate {
    convert => [ "[src_geoip][coordinates]", "float" ]
  }
}
########################################
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost"
    template => "/opt/logstash/elasticsearch-template.json"
    template_overwrite => true
  }
} #end output block
When I tail the Logstash log I can see it is parsing the events. However, when I run curl 'localhost:9200/_cat/indices?v', only the .kibana index is there, and loading the Kibana interface says "Unable to fetch mapping. Do you have indices matching the pattern?"
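For reference, the checks I am running look roughly like this (this assumes Elasticsearch is listening on the default port 9200 and the default logstash-* index naming; the _count call is just an extra sanity check):

# List all indices - only .kibana shows up
curl 'localhost:9200/_cat/indices?v'

# Count documents in any logstash-* index (should return 0 or an error if no such index exists)
curl 'localhost:9200/logstash-*/_count?pretty'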
Any help would be appreciated.
Thanks in advance.
The initial debugging recommendation is to check your Logstash and Elasticsearch logs. If you have a mapping conflict, Elasticsearch will log it, which will help you narrow the problem down.
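A minimal sketch of that check, assuming a package-based install with the default log locations and Elasticsearch on localhost:9200 (the config and log paths below are assumptions; adjust them to your setup):

# Check the config for syntax errors first (assumed config path)
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/logstash.conf

# Watch both logs for errors while events come in (default package log locations)
tail -f /var/log/logstash/logstash.log
tail -f /var/log/elasticsearch/elasticsearch.log

# If a logstash-* index does exist, inspect its mapping for conflicts
curl 'localhost:9200/logstash-*/_mapping?pretty'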