I have this fluentd filter:
<filter **>
  @type parser
  @log_level trace
  format json
  key_name log
  hash_value_field fields
</filter>
I'm writing some JSON to stdout and everything works as expected. But when I also write some plain, non-JSON text such as Debugger listening on ws://0.0.0.0:9229/459316ca-5ec5-43e4-ae5d-d4651eca2c9e to stdout (or stderr), I get this error:
fluent/log.rb:342:warn: dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not match with data 'Debugger listening on ws://0.0.0.0:9229/459316ca-5ec5-43e4-ae5d-d4651eca2c9e'"
Is there a way to parse and forward both using fluentd without getting an error? Would it even be possible to wrap the plain text in a JSON string like { message: "Debugger listening on ws://0.0.0.0:9229/459316ca-5ec5-43e4-ae5d-d4651eca2c9e" }?
Update based on the answer from @Imran:
This is my docker-compose.yml:
version: "2"
services:
  fluentd:
    build: ../fluentd
    command: /bin/sh -c "/fluentd/config.sh && fluentd -c /fluentd/etc/fluent.conf -v"
    ports:
      - "24224:24224"
    environment:
      - AWS_REGION
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
  service:
    build:
      context: ../service
      args:
        - NPM_TOKEN
    command: node --inspect=0.0.0.0 index.js
    ports:
      - "3000:80"
    volumes:
      - ../service/:/app
    logging:
      driver: fluentd
      options:
        fluentd-address: localhost:24224
        tag: 'docker.{{.ImageName}}.{{.Name}}.{{.ID}}'
This is my updated fluent.conf:
<source>
  @type forward
  port 24224
</source>

# JSON-Parse
<filter docker.**>
  @type parser
  @log_level trace
  format json
  key_name log
  hash_value_field fields
</filter>

<label @ERROR>
  <match docker.**>
    @type stdout
  </match>
</label>

<match docker.**>
  @type stdout
  @include cw.conf
</match>
This is my cw.conf:
@type cloudwatch_logs
log_group_name dev-logs
log_stream_name dev
auto_create_stream true
The logs created from writing JSON to stdout are pushed correctly to CloudWatch, but the @ERROR entries are not pushed to CloudWatch. They are, however, now logged correctly to stdout:
2019-08-22 19:25:53.000000000 +0000 docker.integration_service.integration_service_1.2db3cc97a71a: {"container_name":"/integration_service_1","source":"stderr","log":"Debugger listening on ws://0.0.0.0:9229/94a655a4-1bbb-493e-abcc-f2637c39583d","container_id":"2db3cc97a71aa27c957fa13e29ac4c1c9f8a616c8c2989dcf72ea8f9b666d513"}
How can I push them to CloudWatch now as well?
I think it's possible. By default, all unmatched records are emitted to the @ERROR label. This happens because the emit_invalid_record_to_error flag is set to true. Invalid cases are: the key does not exist, the format does not match, or an unexpected error occurs. You can rescue these unexpected-format logs in the @ERROR label. If you want to ignore these errors instead, set the flag to false.
More documentation here: https://docs.fluentd.org/filter/parser#emit_invalid_record_to_error
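For example, if you wanted fluentd to stop raising these errors altogether, a minimal sketch would be the following (reserve_data is optional; it keeps the original key/value pairs so lines that fail to parse still pass through instead of being dropped):
<filter docker.**>
  @type parser
  @log_level trace
  format json
  key_name log
  hash_value_field fields
  # don't route parse failures to @ERROR
  emit_invalid_record_to_error false
  # keep the original record so unparsable lines are not silently lost
  reserve_data true
</filter>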
In your case, however, you want to capture the records whose format did not match. One example way is shown below.
<filter myTag>
  @type parser
  @log_level trace
  key_name log
  hash_value_field fields
</filter>
<label @ERROR>
  <match myTag>
    @type stdout
  </match>
</label>
The match above, inside the label, emits the data to STDOUT as JSON in the format you want:
{ message: "Debugger listening on ws://0.0.0.0:9229/459316ca-5ec5-43e4-ae5d-d4651eca2c9e" }
Just try it and let me know.
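To also get those @ERROR records into CloudWatch, one option could be to reuse your cw.conf inside the label through the built-in copy output. This is only a sketch (it assumes @include resolves inside a <store> the same way it does in your top-level match):
<label @ERROR>
  <match docker.**>
    @type copy
    <store>
      @type stdout
    </store>
    <store>
      # pull in the cloudwatch_logs settings you already keep in cw.conf
      @include cw.conf
    </store>
  </match>
</label>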
Important Note - @ERROR captures a lot of internal fluentd errors and warnings, so in order to capture only the format-mismatch errors, I specifically used filter myTag and match myTag, which makes sure my filter and match process only my tag's records and errors. I see that you are using filter **, which filters all records, so I would say the best practice is to provide the correct tag for the match, filter, etc.
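On your other question (wrapping the plain text as { message: "..." } directly): that should also be doable with a fallback parser instead of routing through @ERROR. This is an untested sketch and assumes the separate fluent-plugin-multi-format-parser gem is installed; its none fallback wraps the raw line under a message key:
<filter docker.**>
  @type parser
  key_name log
  hash_value_field fields
  <parse>
    @type multi_format
    # try to parse the line as JSON first ...
    <pattern>
      format json
    </pattern>
    # ... and otherwise wrap the raw line as {"message": "..."}
    <pattern>
      format none
      message_key message
    </pattern>
  </parse>
</filter>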