Hoping to get some help here. My Fluentd setup is configured to ship logs to two outputs, and each output expects a different log structure.
Up to now, the configuration was to scan the log twice, add a different tag each time, and based on the tag configure the relevant parsing and output.
For example:
myapp.log -> tag app_splunk -> filters of type x, y, z -> match and output to Splunk
myapp.log -> tag app_s3 -> different set of filters -> output to S3
I'm trying to find a proper way to handle the log once and achieve the same results without double tagging. I tried using relabel with @label and providing a new set of filters per label, but the log had already been processed by the first collection of filters, so the new filters didn't work properly.
Any idea how I can achieve that?
<match **>
  @type copy
  <store>
    @type relabel
    @label @app_splunk
  </store>
  <store>
    @type relabel
    @label @app_s3
  </store>
</match>
<label @app_splunk>
  <filter **>
    @type grep
    <regexp>
      key log_type # <- not sure what you're filtering on, replace with your own.
      pattern splunk
    </regexp>
  </filter>
  <match **>
    @type splunk
    ...
  </match>
</label>
<label @app_s3>
  <filter **>
    @type grep
    <regexp>
      key log_type
      pattern s3
    </regexp>
  </filter>
  <match **>
    @type s3
    ...
  </match>
</label>
@type copy creates independent copies of the log stream, and you can make as many copies as you need. It also lets you produce overlapping substreams inside each label: label1 can keep DEBUG and higher log levels while label2 takes only INFO and higher. Because the streams are independent, both destinations receive INFO and higher, and label1 additionally receives DEBUG.
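As a rough sketch of that level-splitting idea (assuming each record carries a level field with values like DEBUG, INFO, WARN, ERROR — the key name and patterns here are placeholders, adjust them to your own records):

<label @app_splunk>
  <filter **>
    @type grep
    <regexp>
      key level
      # keep DEBUG and everything above it
      pattern /^(DEBUG|INFO|WARN|ERROR)$/
    </regexp>
  </filter>
  <match **>
    @type splunk
    ...
  </match>
</label>

<label @app_s3>
  <filter **>
    @type grep
    <regexp>
      key level
      # keep only INFO and above
      pattern /^(INFO|WARN|ERROR)$/
    </regexp>
  </filter>
  <match **>
    @type s3
    ...
  </match>
</label>

Each label filters its own copy of the stream, so dropping DEBUG in @app_s3 has no effect on what @app_splunk receives.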