1) This is my logstash.conf file
input {
  beats {
    type => beats
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "\[(?<logtime>([0-9]|[\-\+\.\:\ ])*)\] \[(?<level>([a-z-A-Z])*)\] \[(?<msg>(.)+)\] (?<exception>(.)+)" }
  }
  mutate {
    add_field => [ "logtime", "level", "msg", "exception" ]
    remove_field => [ "beat", "offset", "source", "prospector", "host", "tags" ]
  }
}
output {
  if [type] == "beats" {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{+YYYY.MM.dd}-container.api"
      document_type => "%{[@metadata][type]}"
      user => "elastic"
      password => "secret"
    }
  }
}
2) I tested my grok pattern with the Grok Debugger, and it matches as expected.
3) This is what logstash writes to elasticsearch
{
  "_index": "2019.01.28-container.api",
  "_type": "doc",
  "_id": "pZctlWgBojxJzDZGWqZz",
  "_score": 1,
  "_source": {
    "type": "beats",
    "level": "Debug",
    "@timestamp": "2019-01-28T15:56:41.295Z",
    "msg": [
      "Hosting starting",
      "exception"
    ],
    "@version": "1",
    "logtime": [
      "2019-01-28 15:23:12.911 +03:00",
      "level"
    ],
    "message": "[2019-01-28 15:23:12.911 +03:00] [Debug] [Hosting starting] exception 2",
    "exception": "exception 2",
    "input": {
      "type": "log"
    }
  }
}
4) What I want to see is
{
  "_index": "2019.01.28-container.api",
  "_type": "doc",
  "_id": "pZctlWgBojxJzDZGWqZz",
  "_score": 1,
  "_source": {
    "type": "beats",
    "level": "Debug",
    "@timestamp": "2019-01-28T15:56:41.295Z",
    "msg": "Hosting starting",
    "logtime": "2019-01-28 15:23:12.911 +03:00",
    "message": "2019-01-28 15:23:12.911 +03:00 Debug Hosting starting [exception 2]",
    "exception": "exception 2"
  }
}
The issue is with

mutate {
  add_field => [ "logtime", "level", "msg", "exception" ]
}

The fields you are adding were already created by the grok filter, so adding them again is not only useless, it is harmful. `add_field` expects key/value pairs, so when it receives a flat array Logstash pairs the entries up: `"logtime" => "level"` and `"msg" => "exception"`. Adding a value to a field that already exists turns that field into an array containing both the old and the new value. That is exactly what you see in the output: `logtime` becomes `["2019-01-28 15:23:12.911 +03:00", "level"]` and `msg` becomes `["Hosting starting", "exception"]`. Remove the `add_field` option from the mutate filter and the fields will keep the single values grok extracted.
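With the stray `add_field` removed, the filter section would look like this (same grok pattern and `remove_field` list as in your config, just without the redundant option):

```
filter {
  grok {
    match => { "message" => "\[(?<logtime>([0-9]|[\-\+\.\:\ ])*)\] \[(?<level>([a-z-A-Z])*)\] \[(?<msg>(.)+)\] (?<exception>(.)+)" }
  }
  mutate {
    # grok has already created logtime, level, msg and exception;
    # only drop the Beats metadata fields you don't want indexed
    remove_field => [ "beat", "offset", "source", "prospector", "host", "tags" ]
  }
}
```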