Tags: jenkins, logging, error-logging, datadog

How to aggregate logs from several Jenkins Jobs/Pipelines in one place?


Our project is responsible for migrating data from one system to another. We are going to run transformation, validation and migration scripts using Jenkins.

It's unclear to me how to aggregate logs from several Jobs or Pipelines in Jenkins. How can this be done?

We'll rely heavily on logs to identify any issues found during validation, etc.

In terms of our planned setup, we'll have AWS EC2 instances, and we can use Datadog (our company already uses it). Can we use Datadog for this purpose?


Solution

  • You can reference this doc to find the default logging path for Jenkins depending on your OS. (For Linux, it's /var/log/jenkins/jenkins.log unless you've configured it to be something else.)

    Then, as long as your Datadog agent is v6+, you can use it to tail your jenkins.log file by following this doc.

    Specifically, you'd add this line to your datadog.yaml (typically /etc/datadog-agent/datadog.yaml on Linux):

    logs_enabled: true
    

    and add this content to a conf.yaml file nested in your conf.d/ directory (typically /etc/datadog-agent/conf.d/ on Linux), such as conf.d/jenkins.d/conf.yaml:

    logs:
      - type: file
        path: /var/log/jenkins/jenkins.log
        service: jenkins
        source: jenkins
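
    After saving both files, restart the Datadog agent so the new configuration takes effect (for example, sudo systemctl restart datadog-agent on a systemd-based Linux host).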
    

    Then the agent will tail your log file as it's written to and forward it to your Datadog account, so you can query, graph, and monitor your log data there.
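
    Keep in mind that jenkins.log is the Jenkins controller's system log, not the console output of your individual Jobs/Pipelines. If you also want those per-build console logs aggregated, the agent's path field accepts wildcards, so you could tail the build log files as well. A rough sketch, assuming the default JENKINS_HOME of /var/lib/jenkins (adjust the glob to your installation):

    logs:
        # Assumption: default JENKINS_HOME (/var/lib/jenkins) and the standard
        # jobs/<name>/builds/<number>/log layout; adjust to your setup
      - type: file
        path: /var/lib/jenkins/jobs/*/builds/*/log
        service: jenkins
        source: jenkins

    Jobs inside folders and multibranch Pipelines keep their builds under nested jobs/ directories, so those may need additional entries or a broader glob.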

    Once you have the logs coming in, you may want to write a processing pipeline to parse out the critical attributes, but that would be material for a new question. :)