I am using Logstash for the first time and can't figure out how to derive the index from a parsed field without persisting that field. This is my configuration file:
input {
  http {
    port => 31311
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => [ "headers", "message" ]
  }
  grok {
    match => [ "name", "^(?<metric-type>\w+)\..*" ]
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{metric-type}-%{+YYYY.MM.dd}"
  }
}
Example JSON sent to the http input plugin:
{
  "name": "counter.custom",
  "value": 321,
  "from": "2017-11-30T10:43:17.213Z",
  "to": "2017-11-30T10:44:00.001Z"
}
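As a quick sanity check outside Logstash, the regex used in the grok filter above can be exercised directly in Python (illustrative only; Python named groups cannot contain a hyphen, so `metric_type` stands in for `metric-type`):

```python
import re

# Same pattern as the grok filter: capture the word before the first dot
pattern = re.compile(r"^(?P<metric_type>\w+)\..*")

m = pattern.match("counter.custom")
print(m.group("metric_type"))  # -> counter
```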
This record is saved in the counter-2017.11.30 index as expected. However, I don't want the field metric-type to be saved; I only need it to determine the index. Any suggestions, please?
I used grok to put my metric-type into a regular field, since a grok regex named capture (`(?<...>)`) does not support the [@metadata][metric-type] field-reference syntax. I then used a mutate filter to copy that field into @metadata and remove the temporary field.
input {
  http {
    port => 31311
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => [ "headers", "message" ]
  }
  grok {
    match => [ "name", "^(?<metric-type>\w+)\..*" ]
  }
  mutate {
    add_field => { "[@metadata][metric-type]" => "%{metric-type}" }
    remove_field => [ "metric-type" ]
  }
}
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "%{[@metadata][metric-type]}-%{+YYYY.MM.dd}"
  }
}
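The index setting `%{[@metadata][metric-type]}-%{+YYYY.MM.dd}` is a sprintf reference: it combines the metadata field with the date portion of the event's @timestamp. A rough Python sketch of that substitution (the event dict here is illustrative, not Logstash internals):

```python
from datetime import datetime, timezone

# Hypothetical event shape: @metadata rides along with the event
# but is never written to the output document
event = {
    "@metadata": {"metric-type": "counter"},
    "@timestamp": datetime(2017, 11, 30, 10, 43, 17, tzinfo=timezone.utc),
    "name": "counter.custom",
    "value": 321,
}

# Equivalent of "%{[@metadata][metric-type]}-%{+YYYY.MM.dd}"
index = "{}-{}".format(
    event["@metadata"]["metric-type"],
    event["@timestamp"].strftime("%Y.%m.%d"),
)
print(index)  # -> counter-2017.11.30
```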
-- EDIT --
As suggested by @Phonolog in the discussion, there is a simpler and much better solution. By using a named grok pattern (%{WORD:...}) instead of a regex capture group, I was able to save the captured group directly into @metadata.
input {
  http {
    port => 31311
  }
}
filter {
  json {
    source => "message"
  }
  mutate {
    remove_field => [ "headers", "message" ]
  }
  grok {
    match => [ "name", "%{WORD:[@metadata][metric-type]}\." ]
  }
}
output {
  elasticsearch {
    hosts => [ "http://localhost:9200" ]
    index => "%{[@metadata][metric-type]}-%{+YYYY.MM.dd}"
  }
}
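This works because anything under @metadata is available to filters and to output sprintf references, but is dropped from the document that reaches Elasticsearch. A minimal sketch of that behaviour (plain dict handling for illustration, not Logstash code):

```python
event = {
    "@metadata": {"metric-type": "counter"},
    "name": "counter.custom",
    "value": 321,
}

# @metadata is used for routing but excluded from the indexed body
body = {k: v for k, v in event.items() if k != "@metadata"}
print(sorted(body))  # -> ['name', 'value']
```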