I have logs that I am consuming with Fluentd and sending to Elasticsearch. I would like to create a new field if a string is found.
Sample log:
{
"@timestamp": "2021-01-29T08:05:38.613Z",
"@version": "1",
"message": "Started Application in 110.374 seconds (JVM running for 113.187)",
"level": "INFO"
}
I would like to create a new field STARTIME whose value, in this case, would be 113.187.
What I have tried is using the record_transformer filter with Ruby's split to extract the value, but it seems that when it matches, it removes the string I want from the log entry.
<filter **>
@type record_transformer
enable_ruby true
<record>
STARTIME ${record["message"].split("JVM running").last.split(")")}
</record>
</filter>
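For reference, this is what that expression evaluates to when run as plain Ruby outside Fluentd, using the sample message above (a minimal sketch): the trailing split(")") returns an array rather than the bare number.
message = "Started Application in 110.374 seconds (JVM running for 113.187)"
# splitting on "JVM running" keeps the leading " for " and the closing paren
message.split("JVM running").last
# => " for 113.187)"
# the second split returns an Array, not a String
message.split("JVM running").last.split(")")
# => [" for 113.187"]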
How can I create this new field with the desired value?
I have now used the suggested option below:
<filter **>
@type record_transformer
enable_ruby true
<record>
STARTIME ${record["message"].split("JVM running for ").last.split(")")[0]}
</record>
</filter>
Which got me closer. What's happening now is that the field STARTIME is created, and when the log entry matches it has the value 113.187, which is correct. However, every other line that does not match this pattern ends up with its entire message copied into the new field.
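That is expected behaviour of split: when the delimiter is not found, split returns the whole string as a one-element array, so [0] is just the full message. A minimal Ruby sketch with a hypothetical non-matching line:
message = "Stopping service"  # hypothetical line without the JVM pattern
message.split("JVM running for ").last.split(")")[0]
# => "Stopping service" (the whole message, since the delimiter never matched)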
You can try something like this:
<record>
STARTIME ${ s = record['message'][/JVM running for \d+\.\d+/]; s ? s.split(' ')[-1] : nil }
</record>
STARTIME will have the valid value, and null otherwise.
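A quick way to check this behaviour outside Fluentd is to wrap the same expression in plain Ruby (a sketch using the sample message above plus a hypothetical non-matching one):
extract = ->(record) {
  s = record['message'][/JVM running for \d+\.\d+/]
  s ? s.split(' ')[-1] : nil
}
extract.call('message' => 'Started Application in 110.374 seconds (JVM running for 113.187)')
# => "113.187"
extract.call('message' => 'Stopping service')  # hypothetical non-matching line
# => nil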