Tags: amazon-s3, kubernetes, amazon-kinesis, fluentd

How to add multiple outputs in fluentd-kubernetes-daemonset in Kubernetes


I'm using the fluentd-kubernetes-daemonset Docker image, and sending logs to Elasticsearch with Fluentd works perfectly using the following snippet:

  containers:
    - name: fluentd
      image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
      env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "my-aws-es-endpoint"
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "443"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "https"
        - name: FLUENT_ELASTICSEARCH_USER
          value: null
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: null

The problem is that, for DR/HA, we now want to save logs to S3 as well. My question: is there any way to add multiple outputs (such as S3, Kinesis, and so on) to fluentd-kubernetes-daemonset in Kubernetes?


Solution

  • It depends on how you are deploying Fluentd to the cluster. Do you use a templating engine like Helm or Skaffold?

    If so, these usually provide a ConfigMap / configuration option to customize the deployment and supply your own outputs. For example, the stable/fluentd Helm chart lets you define outputs here:

    https://github.com/helm/charts/blob/master/stable/fluentd/values.yaml#L97

    This should let you define multiple output streams so the Fluentd data is sent to several destinations.
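
    For instance, a values override along these lines should work (a minimal sketch, assuming the chart's configMaps key accepts named Fluentd config files as the linked values.yaml suggests; the Elasticsearch endpoint, bucket name, and region are placeholders):

      configMaps:
        output.conf: |
          <match **>
            # "copy" duplicates every event to all <store> blocks below
            @type copy
            <store>
              @type elasticsearch
              host my-aws-es-endpoint    # placeholder
              port 443
              scheme https
              logstash_format true
            </store>
            <store>
              @type s3                   # requires fluent-plugin-s3 in the image
              s3_bucket my-log-bucket    # placeholder
              s3_region us-east-1        # placeholder
              path logs/
              # credentials via IAM role, or aws_key_id / aws_sec_key
              <buffer time>
                @type file
                path /var/log/fluentd-s3-buffer
                timekey 3600             # cut a new S3 object every hour
              </buffer>
            </store>
          </match>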

    I notice that the specific Docker image you provided has some ERB-templated config in Ruby. That config specifically lets you mount a volume at conf.d/ inside the Fluentd config folder, since every conf.d/*.conf file is included: https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/templates/conf/fluent.conf.erb#L9

    The folder is likely /fluentd/etc (so the mount point would be /fluentd/etc/conf.d), but I'd recommend running the image locally and checking for yourself; an example mount is sketched below.
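
    A sketch of what that mount could look like in your DaemonSet spec (the mountPath is my assumption from above, and fluentd-extra-conf is a hypothetical ConfigMap name, defined in the next snippet):

      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          volumeMounts:
            - name: extra-config
              mountPath: /fluentd/etc/conf.d   # assumed path; verify in the running image
      volumes:
        - name: extra-config
          configMap:
            name: fluentd-extra-conf           # hypothetical name; see the ConfigMap below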

    As long as your config files end in .conf, you should be able to add anything you want. One caveat: conf.d/*.conf appears to be included before the image's built-in Elasticsearch <match **>, so a <match **> you add there will take precedence — hence the @type copy with both stores in the sketch below.
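
    For example, a hypothetical ConfigMap carrying the extra output (again a sketch; note the -elasticsearch image variant may not ship fluent-plugin-s3, so check the image or build one that includes it):

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: fluentd-extra-conf
      data:
        s3.conf: |
          <match **>
            @type copy
            <store>
              @type elasticsearch
              host my-aws-es-endpoint   # placeholder
              port 443
              scheme https
            </store>
            <store>
              @type s3                  # needs fluent-plugin-s3 installed
              s3_bucket my-log-bucket   # placeholder
              s3_region us-east-1       # placeholder
              path logs/
              # credentials via IAM role, or aws_key_id / aws_sec_key
            </store>
          </match>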