I have configured an Elastic ECK Beat with autodiscover enabled for all pod logs, but I also need to collect logs from a specific log file inside a container: /var/log/traefik/access.log. I've tried both the module and the log input configuration, but nothing works.
The access.log file exists in the pods and contains data, yet the Filebeat index shows no documents with that log.file.path.
Here is the Beat YAML:
---
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: filebeat
  namespace: elastic
spec:
  type: filebeat
  version: 8.3.1
  elasticsearchRef:
    name: elasticsearch
  kibanaRef:
    name: kibana
  config:
    filebeat:
      autodiscover:
        providers:
          - type: kubernetes
            node: ${NODE_NAME}
            hints:
              enabled: true
              default_config:
                type: container
                paths:
                  - /var/log/containers/*${data.kubernetes.container.id}.log
            templates:
              - condition.contains:
                  kubernetes.pod.name: traefik
                config:
                  - module: traefik
                    access:
                      enabled: true
                      var.paths: [ "/var/log/traefik/*access.log*" ]
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
  daemonSet:
    podTemplate:
      spec:
        serviceAccountName: filebeat
        automountServiceAccountToken: true
        terminationGracePeriodSeconds: 30
        dnsPolicy: ClusterFirstWithHostNet
        hostNetwork: true # Allows to provide richer host metadata
        containers:
          - name: filebeat
            securityContext:
              runAsUser: 0
              # If using Red Hat OpenShift uncomment this:
              #privileged: true
            volumeMounts:
              - name: varlogcontainers
                mountPath: /var/log/containers
              - name: varlogpods
                mountPath: /var/log/pods
              - name: varlibdockercontainers
                mountPath: /var/lib/docker/containers
              - name: varlog
                mountPath: /var/log
            env:
              - name: NODE_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: spec.nodeName
        volumes:
          - name: varlogcontainers
            hostPath:
              path: /var/log/containers
          - name: varlogpods
            hostPath:
              path: /var/log/pods
          - name: varlibdockercontainers
            hostPath:
              path: /var/lib/docker/containers
          - name: varlog
            hostPath:
              path: /var/log
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  namespace: elastic
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources:
      - namespaces
      - pods
      - nodes
    verbs:
      - get
      - watch
      - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elastic
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
  namespace: elastic
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: elastic
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
Here is the module being loaded, as shown in the Filebeat logs:
...
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.337Z","log.logger":"esclientleg","log.origin":{"file.name":"eslegclient/connection.go","file.line":291},"message":"Attempting to connect to Elasticsearch version 8.3.1","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.352Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.353Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"fa247382-c065-40ca-974e-4b69f14c3134","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.355Z","log.logger":"modules","log.origin":{"file.name":"fileset/modules.go","file.line":108},"message":"Enabled modules/filesets: traefik (access)","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.355Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/traefik/*access.log*]","service.name":"filebeat","input_id":"6883d753-f149-4a68-9499-fe039e0de899","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.437Z","log.origin":{"file.name":"input/input.go","file.line":134},"message":"input ticker stopped","service.name":"filebeat","ecs.version":"1.6.0"}
{"log.level":"info","@timestamp":"2022-08-18T19:58:55.439Z","log.logger":"input","log.origin":{"file.name":"log/input.go","file.line":172},"message":"Configured paths: [/var/log/containers/*9a1680222e867802388f649f0a296e076193242962b28eb7e0e575bf68826d85.log]","service.name":"filebeat","input_id":"3c1fffae-0213-4889-b0e7-5dda489eeb51","ecs.version":"1.6.0"}
...
Docker logging is based on the stdout/stderr output of a container. If you only write to a log file inside the container, it will never be picked up by Docker logging and therefore cannot be processed by your Filebeat setup: /var/log/traefik/access.log only exists inside the Traefik container's filesystem, not on the host paths that your Filebeat DaemonSet mounts, so the module input finds nothing to read.
Instead, ensure that all logs generated by your containers are sent to stdout. In your example that means starting the Traefik pod with its access log pointed at stdout (e.g. --accesslogsfile=/dev/stdout, or the equivalent access-log option for your Traefik version), so the access logs are emitted to stdout instead of the log file.
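As a rough sketch of what that could look like on the Traefik side, assuming Traefik v2 deployed as a plain Deployment (the image tag, labels, and the co.elastic.logs annotations below are only illustrative, not taken from your setup). In Traefik v2 the access log is enabled with --accesslog and is written to stdout when no file path is configured:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
      annotations:
        # Optional hints for Filebeat autodiscover: parse this container's
        # stdout stream with the traefik module's access fileset.
        co.elastic.logs/module: traefik
        co.elastic.logs/fileset: access
    spec:
      containers:
        - name: traefik
          image: traefik:v2.8   # example tag
          args:
            # ... your existing entrypoint/provider flags ...
            - --accesslog=true  # with no --accesslog.filepath set, access logs go to stdout

With the access logs on stdout, the default_config container input from your autodiscover provider already ships them from /var/log/containers/, and the optional co.elastic.logs annotations above are one way to let the hints provider apply the traefik module to that stream, so the templates block with var.paths would no longer be needed.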