I started using Logstash to manage syslog. To test it, I am sending simple messages from a remote machine and trying to parse them with Logstash. This is the only Logstash configuration, used via the command line:
input {
  syslog {
    type => "syslog"
    port => 5514
  }
}

filter {
  grok {
    match => { "message" => "hello %{WORD:who}" }
  }
}

output {
  stdout { }
  elasticsearch {
    host => "elk.example.com"
    cluster => "security"
    protocol => "http"
  }
}
I do receive the logs and they are parsed correctly (a who field is generated). At the same time, tags contains _grokparsefailure.
The test log I am sending is hello rambo3. I see it arrive as:

2015-01-29T09:27:48.344+0000 10.242.136.232 <13>1 2015-01-29T10:27:48.113612+01:00 AA1.example.com testlog.txt - - - hello rambo3
The grok debugger also agrees that the pattern matches.
Why is _grokparsefailure added to the tags?
Interestingly enough, the same data sent via a plain tcp input is parsed correctly by the same filter (_grokparsefailure is not in the tags).
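For reference, the plain tcp variant swaps only the input block, along these lines (a sketch; the port is assumed to match the syslog test, and the filter stays identical):

input {
  tcp {
    port => 5514
    type => "syslog"
  }
}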
The _grokparsefailure is not added by your own grok filter. When you use the syslog input, the messages must follow the RFC 3164 format, as mentioned in the documentation.
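To illustrate the difference between the two formats for the same event (the RFC 3164 line below is a hand-written example, not output from the question):

RFC 5424, what is actually being sent (note the version 1 right after the <13> priority):

<13>1 2015-01-29T10:27:48.113612+01:00 AA1.example.com testlog.txt - - - hello rambo3

RFC 3164, what the syslog input expects:

<13>Jan 29 10:27:48 AA1.example.com testlog: hello rambo3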
Generally, the syslog input parses each log and adds corresponding fields, such as the log severity, so there is a grok action built into the plugin itself. However, the log you send from the remote server is in RFC 5424 format, so that internal grok can't parse it and adds the _grokparsefailure tag.
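If you want to keep sending RFC 5424 messages, one common workaround is to receive them on a plain tcp input and do the syslog parsing yourself with the stock SYSLOG5424LINE grok pattern. A minimal sketch (the port and the hello %{WORD:who} pattern come from the question; the rest is an assumption, not your exact setup):

input {
  tcp {
    port => 5514
    type => "syslog"
  }
}

filter {
  # Parse the RFC 5424 envelope; SYSLOG5424LINE ships with Logstash
  # and yields fields such as syslog5424_host and syslog5424_msg.
  grok {
    match => { "message" => "%{SYSLOG5424LINE}" }
  }
  # Apply the original pattern to the embedded message body.
  grok {
    match => { "syslog5424_msg" => "hello %{WORD:who}" }
  }
}

The first grok replaces what the syslog input would otherwise do internally, and the second is the filter from the question, pointed at the extracted message body instead of the whole line.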