Tags: grafana, grafana-tempo, grafana-alloy, grafana-beyla

Cannot see any traces from Alloy in Grafana


I am trying to use Grafana Alloy with Grafana Beyla enabled, hoping it can send traces to Grafana Tempo. With this setup, Alloy succeeds in sending logs to Loki. However, I cannot see any traces in Grafana, and there is no Service Graph either.

Helm Charts:

Grafana

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-hm-grafana
  namespace: production-hm-argo-cd
  labels:
    app.kubernetes.io/name: hm-grafana
spec:
  project: production-hm
  sources:
    - repoURL: https://grafana.github.io/helm-charts
      # https://artifacthub.io/packages/helm/grafana/grafana
      targetRevision: 8.8.5
      chart: grafana
      helm:
        releaseName: hm-grafana
        values: |
          # https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
          ---
          sidecar:
            dashboards:
              enabled: true
              searchNamespace: ALL
          datasources:
            datasources.yaml:
              apiVersion: 1
              datasources:
                - name: hm-prometheus
                  type: prometheus
                  isDefault: true
                  url: http://hm-prometheus-kube-pr-prometheus.production-hm-prometheus:9090
                  access: proxy
                - name: hm-loki
                  type: loki
                  isDefault: false
                  url: http://hm-loki-gateway.production-hm-loki:80
                  access: proxy
                - name: hm-tempo
                  type: tempo
                  isDefault: false
                  url: http://hm-tempo-query-frontend.production-hm-tempo:3100
                  access: proxy
                  # https://grafana.com/docs/grafana/next/datasources/tempo/configure-tempo-data-source/#example-file
                  jsonData:
                    tracesToLogsV2:
                      datasourceUid: 'hm-loki'
                      spanStartTimeShift: '-1h'
                      spanEndTimeShift: '1h'
                      tags: ['job', 'instance', 'pod', 'namespace']
                      filterByTraceID: false
                      filterBySpanID: false
                      customQuery: true
                      query: 'method="$${__span.tags.method}"'
                    tracesToMetrics:
                      datasourceUid: 'hm-prometheus'
                      spanStartTimeShift: '-1h'
                      spanEndTimeShift: '1h'
                      tags: [{ key: 'service.name', value: 'service' }, { key: 'job' }]
                      queries:
                        - name: 'Sample query'
                          query: 'sum(rate(traces_spanmetrics_latency_bucket{$$__tags}[5m]))'
                    serviceMap:
                      datasourceUid: 'hm-prometheus'
                    nodeGraph:
                      enabled: true
                    search:
                      hide: false
                    traceQuery:
                      timeShiftEnabled: true
                      spanStartTimeShift: '-1h'
                      spanEndTimeShift: '1h'
                    spanBar:
                      type: 'Tag'
                      tag: 'http.path'
                    streamingEnabled:
                      search: true
    - repoURL: [email protected]:hongbo-miao/hongbomiao.com.git
      targetRevision: main
      path: kubernetes/argo-cd/applications/production-hm/grafana/kubernetes-manifests
  destination:
    namespace: production-hm-grafana
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
    automated:
      prune: true
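
As a side note (probably not the root cause here): the tracesToLogsV2.datasourceUid and serviceMap.datasourceUid fields refer to data source UIDs, not names. Since the provisioned data sources above do not set an explicit uid, it may be safer to pin one so that the references are guaranteed to resolve, for example (a sketch; the uid values are my own choice):

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: hm-loki
        # Hypothetical explicit UID so that datasourceUid: 'hm-loki' resolves reliably
        uid: hm-loki
        type: loki
        url: http://hm-loki-gateway.production-hm-loki:80
        access: proxy
      - name: hm-prometheus
        # Hypothetical explicit UID for the serviceMap / tracesToMetrics references
        uid: hm-prometheus
        type: prometheus
        isDefault: true
        url: http://hm-prometheus-kube-pr-prometheus.production-hm-prometheus:9090
        access: proxy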

Tempo

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-hm-tempo
  namespace: production-hm-argo-cd
  labels:
    app.kubernetes.io/name: hm-tempo
spec:
  project: production-hm
  source:
    repoURL: https://grafana.github.io/helm-charts
    # https://artifacthub.io/packages/helm/grafana/tempo-distributed
    targetRevision: 1.31.0
    chart: tempo-distributed
    helm:
      releaseName: hm-tempo
      values: |
        # https://github.com/grafana/helm-charts/blob/main/charts/tempo-distributed/values.yaml
        # https://grafana.com/docs/tempo/latest/setup/operator/object-storage/
        ---
        tempo:
          structuredConfig:
            # https://grafana.com/docs/tempo/latest/traceql/#stream-query-results
            stream_over_http_enabled: true
        gateway:
          enabled: false
        serviceAccount:
          create: true
          name: hm-tempo
          annotations:
            eks.amazonaws.com/role-arn: arn:aws:iam::272394222652:role/TempoRole-hm-tempo
        storage:
          admin:
            backend: s3
            s3:
              endpoint: s3.amazonaws.com
              region: us-west-2
              bucket: production-hm-tempo-admin-bucket
          trace:
            backend: s3
            s3:
              endpoint: s3.amazonaws.com
              region: us-west-2
              bucket: production-hm-tempo-trace-bucket
        traces:
          otlp:
            http:
              enabled: true
            grpc:
              enabled: true
        metricsGenerator:
          enabled: true
          config:
            processor:
              # https://grafana.com/docs/tempo/latest/operations/traceql-metrics/
              local_blocks:
                filter_server_spans: false
            storage:
              remote_write:
                - url: http://hm-prometheus-kube-pr-prometheus.production-hm-prometheus:9090/api/v1/write
        global_overrides:
          metrics_generator_processors:
            - local-blocks
            - service-graphs
            - span-metrics
  destination:
    namespace: production-hm-tempo
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
    automated:
      prune: true

Alloy

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-hm-alloy
  namespace: production-hm-argo-cd
  labels:
    app.kubernetes.io/name: hm-alloy
spec:
  project: production-hm
  source:
    repoURL: https://grafana.github.io/helm-charts
    # https://artifacthub.io/packages/helm/grafana/alloy
    targetRevision: 0.11.0
    chart: alloy
    helm:
      releaseName: hm-alloy
      values: |
        # https://github.com/grafana/alloy/blob/main/operations/helm/charts/alloy/values.yaml
        ---
        alloy:
          # For "beyla.ebpf", see https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/
          stabilityLevel: public-preview
          extraEnv:
            - name: LOKI_URL
              value: http://hm-loki-gateway.production-hm-loki:80/loki/api/v1/push
            - name: TEMPO_ENDPOINT
              value: hm-tempo-distributor.production-hm-tempo:4317
          configMap:
            content: |-
              // https://grafana.com/docs/alloy/latest/configure/kubernetes/
              // https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/
              logging {
                level = "info"
                format = "logfmt"
              }

              // Loki related config
              // ...

              // https://grafana.com/docs/tempo/latest/configuration/grafana-alloy/automatic-logging/
              // https://grafana.com/docs/tempo/latest/configuration/grafana-alloy/service-graphs/
              // https://grafana.com/docs/tempo/latest/configuration/grafana-alloy/span-metrics/
              // https://grafana.com/blog/2024/05/21/how-to-use-grafana-beyla-in-grafana-alloy-for-ebpf-based-auto-instrumentation/
              beyla.ebpf "default" {
                attributes {
                  kubernetes {
                    enable = "true"
                  }
                }
                discovery {
                  services {
                    exe_path = "http"
                    open_ports = "80"
                  }
                }
                output {
                  traces = [otelcol.processor.batch.default.input]
                }
              }

              otelcol.processor.batch "default" {
                output {
                  metrics = [otelcol.exporter.otlp.hm_tempo.input]
                  logs    = [otelcol.exporter.otlp.hm_tempo.input]
                  traces  = [otelcol.exporter.otlp.hm_tempo.input]
                }
              }

              otelcol.exporter.otlp "hm_tempo" {
                client {
                  endpoint = env("TEMPO_ENDPOINT")
                  tls {
                    insecure = true
                    insecure_skip_verify = true
                  }
                }
              }
  destination:
    namespace: production-hm-alloy
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
    automated:
      prune: true
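
Note that in this first config the Beyla discovery block only selects processes whose executable path matches "http" and that listen on port 80; if no workload on the node satisfies both criteria, Beyla has nothing to instrument and therefore produces no traces. A broader selector would look roughly like this (a sketch only; it matches anything listening on a range of common HTTP ports):

beyla.ebpf "default" {
  attributes {
    kubernetes {
      enable = "true"
    }
  }
  discovery {
    services {
      // Hypothetical broader selector: match any process listening on these ports,
      // regardless of its executable path.
      open_ports = "80,443,8000-8999"
    }
  }
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}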

Alloy's graph

All components are healthy.

Logs

Log from one of the Alloy pods

It is long, so I put it here: https://gist.github.com/hongbo-miao/23bf9d16435098267184f090d5f45044

I saw this line inside it:

2025/01/30 08:59:35 ERROR Unable to load eBPF watcher for process events component=discover.ProcessWatcher interval=5s error="loading and assigning BPF objects: field BeylaKprobeSysBind: program beyla_kprobe_sys_bind: map watch_events: map create: operation not permitted (MEMLOCK may be too low, consider rlimit.RemoveMemlock)"

However, I am not sure how to resolve it. I am using Amazon EKS.

tempo-distributor pod log

level=warn ts=2025-01-30T06:36:10.194320099Z caller=main.go:133 msg="-- CONFIGURATION WARNINGS --"
level=warn ts=2025-01-30T06:36:10.19437038Z caller=main.go:139 msg="Inline, unscoped overrides are deprecated. Please use the new overrides config format."
level=info ts=2025-01-30T06:36:10.197225405Z caller=main.go:121 msg="Starting Tempo" version="(version=v2.7.0, branch=HEAD, revision=b0da6b481)"
level=info ts=2025-01-30T06:36:10.197899194Z caller=server.go:248 msg="server listening on addresses" http=[::]:3100 grpc=[::]:9095
ts=2025-01-30T06:36:10Z level=info msg="OTel Shim Logger Initialized" component=tempo
level=info ts=2025-01-30T06:36:10.413004915Z caller=memberlist_client.go:446 msg="Using memberlist cluster label and node name" cluster_label=hm-tempo.production-hm-tempo node=hm-tempo-distributor-6f579f694c-665x8-51b758e5
level=info ts=2025-01-30T06:36:10.414030557Z caller=module_service.go:82 msg=starting module=internal-server
level=info ts=2025-01-30T06:36:10.414226259Z caller=module_service.go:82 msg=starting module=server
level=info ts=2025-01-30T06:36:10.414342161Z caller=module_service.go:82 msg=starting module=memberlist-kv
level=info ts=2025-01-30T06:36:10.414361461Z caller=module_service.go:82 msg=starting module=overrides
level=info ts=2025-01-30T06:36:10.414402831Z caller=module_service.go:82 msg=starting module=ring
level=info ts=2025-01-30T06:36:10.414436302Z caller=module_service.go:82 msg=starting module=metrics-generator-ring
level=info ts=2025-01-30T06:36:10.414457312Z caller=module_service.go:82 msg=starting module=usage-report
level=warn ts=2025-01-30T06:36:10.414641774Z caller=runtime_config_overrides.go:97 msg="Overrides config type mismatch" err="per-tenant overrides config type does not match static overrides config type" config_type=new static_config_type=legacy
level=error ts=2025-01-30T06:36:10.498151365Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:10.498190246Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=info ts=2025-01-30T06:36:10.498203496Z caller=memberlist_client.go:563 msg="memberlist fast-join starting" nodes_found=0 to_join=0
level=warn ts=2025-01-30T06:36:10.498217166Z caller=memberlist_client.go:583 msg="memberlist fast-join finished" joined_nodes=0 elapsed_time=83.858295ms
level=info ts=2025-01-30T06:36:10.498238626Z caller=memberlist_client.go:595 phase=startup msg="joining memberlist cluster" join_members=dns+hm-tempo-gossip-ring:7946
level=info ts=2025-01-30T06:36:10.498317557Z caller=ring.go:316 msg="ring doesn't exist in KV store yet"
level=info ts=2025-01-30T06:36:10.498360498Z caller=ring.go:316 msg="ring doesn't exist in KV store yet"
level=info ts=2025-01-30T06:36:10.498495459Z caller=module_service.go:82 msg=starting module=distributor
ts=2025-01-30T06:36:10Z level=warn msg="Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks." component=tempo documentation=https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
ts=2025-01-30T06:36:10Z level=info msg="Starting GRPC server" component=tempo endpoint=0.0.0.0:4317
ts=2025-01-30T06:36:10Z level=warn msg="Using the 0.0.0.0 address exposes this server to every network interface, which may facilitate Denial of Service attacks." component=tempo documentation=https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/security-best-practices.md#safeguards-against-denial-of-service-attacks
ts=2025-01-30T06:36:10Z level=info msg="Starting HTTP server" component=tempo endpoint=0.0.0.0:4318
level=info ts=2025-01-30T06:36:10.498914304Z caller=app.go:208 msg="Tempo started"
level=error ts=2025-01-30T06:36:10.509656008Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:10.509682688Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=warn ts=2025-01-30T06:36:10.509699598Z caller=memberlist_client.go:629 phase=startup msg="joining memberlist cluster" attempts=1 max_attempts=10 err="found no nodes to join"
level=error ts=2025-01-30T06:36:11.536326569Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:11.536359541Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=warn ts=2025-01-30T06:36:11.53637399Z caller=memberlist_client.go:629 phase=startup msg="joining memberlist cluster" attempts=2 max_attempts=10 err="found no nodes to join"
level=error ts=2025-01-30T06:36:15.080790386Z caller=resolver.go:87 msg="failed to lookup IP addresses" host=hm-tempo-gossip-ring err="lookup hm-tempo-gossip-ring on 10.215.0.10:53: no such host"
level=warn ts=2025-01-30T06:36:15.080823057Z caller=resolver.go:134 msg="IP address lookup yielded no results. No host found or no addresses found" host=hm-tempo-gossip-ring
level=warn ts=2025-01-30T06:36:15.080835127Z caller=memberlist_client.go:629 phase=startup msg="joining memberlist cluster" attempts=3 max_attempts=10 err="found no nodes to join"
level=info ts=2025-01-30T06:36:19.988200041Z caller=memberlist_client.go:602 phase=startup msg="joining memberlist cluster succeeded" reached_nodes=7 elapsed_time=9.489949855s
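
The "Inline, unscoped overrides are deprecated" warning above most likely refers to the global_overrides block in the Tempo values. It is only a deprecation warning, not the cause of the missing traces, but the newer scoped format would look roughly like this (a sketch, not verified against this exact chart version):

global_overrides:
  defaults:
    metrics_generator:
      processors:
        - local-blocks
        - service-graphs
        - span-metrics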

In S3, only the tempo_cluster_seed.json file has been written, which means Tempo can write to S3 successfully. However, there is no other trace data.

Any suggestions would be appreciated, thank you!

Update 2/15/2025

Regarding

loading and assigning BPF objects: field BeylaKprobeSysBind: program beyla_kprobe_sys_bind: map watch_events: map create: operation not permitted (MEMLOCK may be too low, consider rlimit.RemoveMemlock)

In the Alloy pod, I have verified that ulimit -l already returns unlimited. In my case, the actual cause is that I was missing this section, based on https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/:

        alloy:
          stabilityLevel: public-preview
          securityContext:
            appArmorProfile:
              type: Unconfined
            capabilities:
              add:
                - SYS_ADMIN
                - SYS_PTRACE

After adding the parts above, there are no more errors related to Beyla.

I did some more research: since no external application sends traces to Alloy, there are no traces from that path. However, I expect Grafana Beyla itself to generate traces. So I removed the unrelated config and focused on debugging this part:

---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production-hm-alloy
  namespace: production-hm-argo-cd
  labels:
    app.kubernetes.io/name: hm-alloy
spec:
  project: production-hm
  source:
    repoURL: https://grafana.github.io/helm-charts
    # https://artifacthub.io/packages/helm/grafana/alloy
    targetRevision: 0.11.0
    chart: alloy
    helm:
      releaseName: hm-alloy
      values: |
        # https://github.com/grafana/alloy/blob/main/operations/helm/charts/alloy/values.yaml
        ---
        alloy:
          # https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/
          stabilityLevel: public-preview
          securityContext:
            appArmorProfile:
              type: Unconfined
            capabilities:
              add:
                - SYS_ADMIN
                - SYS_PTRACE
          extraEnv:
            - name: LOKI_URL
              value: http://hm-loki-gateway.production-hm-loki:80/loki/api/v1/push
            - name: TEMPO_ENDPOINT
              value: hm-tempo-distributor.production-hm-tempo.svc:4317
            - name: MIMIR_URL
              value: http://hm-mimir-distributor-headless.production-hm-mimir.svc:8080/api/v1/push
          configMap:
            content: |-
              // https://grafana.com/docs/alloy/latest/configure/kubernetes/
              // https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/
              logging {
                level = "info"
                format = "logfmt"
              }

              // https://grafana.com/docs/alloy/latest/reference/config-blocks/livedebugging/
              livedebugging {
                enabled = true
              }
              
              // HM Beyla
              // https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/
              beyla.ebpf "hm_beyla" {
                debug = true
                open_port = 8080
                attributes {
                  kubernetes {
                    enable = "true"
                  }
                }
                discovery {
                  services {}
                }
                metrics {
                  features = [
                    "application",
                    "application_service_graph",
                    "application_span",
                    "network",
                  ]
                  instrumentations = ["*"]
                  network {
                    enable = true
                  }
                }
                output {
                  traces = [otelcol.processor.batch.hm_beyla.input]
                }
              }

              // HM Beyla - Trace
              otelcol.processor.batch "hm_beyla" {
                output {
                  traces = [otelcol.exporter.otlp.hm_tempo_for_hm_beyla.input]
                }
              }
              // https://grafana.com/docs/alloy/latest/reference/components/otelcol/otelcol.auth.headers/
              otelcol.auth.headers "hm_tempo_for_hm_beyla" {
                header {
                  key   = "X-Scope-OrgID"
                  value = "hm"
                }
              }
              otelcol.exporter.otlp "hm_tempo_for_hm_beyla" {
                client {
                  endpoint = env("TEMPO_ENDPOINT")
                  compression = "zstd"
                  auth = otelcol.auth.headers.hm_tempo_for_hm_beyla.handler
                  tls {
                    insecure = true
                    insecure_skip_verify = true
                  }
                }
              }

              // HM Beyla - Metrics
              // https://grafana.com/docs/alloy/latest/reference/components/beyla/beyla.ebpf/
              prometheus.scrape "hm_beyla" {
                targets = beyla.ebpf.hm_beyla.targets
                honor_labels = true
                forward_to = [prometheus.remote_write.hm_mimir.receiver]
              }

              prometheus.remote_write "hm_mimir" {
                endpoint {
                  url = env("MIMIR_URL")
                  headers = {
                    "X-Scope-OrgID" = "hm",
                  }
                }
              }

  destination:
    namespace: production-hm-alloy
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - ServerSideApply=true
    automated:
      prune: true

Among these components, otelcol.processor.batch supports live debugging. However, there are no incoming traces, so it seems it does not receive any data from Grafana Beyla.

On the other hand, the Alloy pod does have some logs showing connections between Grafana Alloy and Grafana Tempo:

network_flow: transport=6 beyla.ip=172.31.179.149 iface= iface_direction=1 src.address=172.31.179.149 dst.address=10.215.206.123 src.name=hm-alloy-dvv7d dst.name=hm-tempo-distributor src.port=45582 dst.port=4317 k8s.src.namespace=production-hm-alloy k8s.src.name=hm-alloy-dvv7d k8s.src.type=Pod k8s.src.owner.name=hm-alloy k8s.src.owner.type=DaemonSet k8s.dst.name=hm-tempo-distributor k8s.dst.type=Service k8s.dst.owner.type=Service k8s.src.node.ip=172.31.178.125 k8s.src.node.name=ip-172-31-178-125.us-west-2.compute.internal k8s.dst.namespace=production-hm-tempo k8s.dst.owner.name=hm-tempo-distributor
network_flow: transport=6 beyla.ip=172.31.179.149 iface= iface_direction=1 src.address=172.31.179.149 dst.address=10.215.206.123 src.name=hm-alloy-dvv7d dst.name=hm-tempo-distributor src.port=45574 dst.port=4317 k8s.src.name=hm-alloy-dvv7d k8s.src.type=Pod k8s.src.owner.name=hm-alloy k8s.dst.namespace=production-hm-tempo k8s.dst.type=Service k8s.dst.owner.name=hm-tempo-distributor k8s.dst.owner.type=Service k8s.src.namespace=production-hm-alloy k8s.src.owner.type=DaemonSet k8s.src.node.ip=172.31.178.125 k8s.src.node.name=ip-172-31-178-125.us-west-2.compute.internal k8s.dst.name=hm-tempo-distributor
network_flow: transport=6 beyla.ip=172.31.179.149 iface= iface_direction=0 src.address=10.215.206.123 dst.address=172.31.179.149 src.name=hm-tempo-distributor dst.name=hm-alloy-dvv7d src.port=4317 dst.port=45590 k8s.dst.namespace=production-hm-alloy k8s.dst.type=Pod k8s.src.type=Service k8s.src.owner.name=hm-tempo-distributor k8s.src.owner.type=Service k8s.dst.name=hm-alloy-dvv7d k8s.dst.owner.name=hm-alloy k8s.dst.owner.type=DaemonSet k8s.dst.node.ip=172.31.178.125 k8s.dst.node.name=ip-172-31-178-125.us-west-2.compute.internal k8s.src.namespace=production-hm-tempo k8s.src.name=hm-tempo-distributor
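
To check whether Beyla emits any spans at all, one option is to attach a debug exporter next to the batch processor so that anything Beyla produces is printed to the Alloy log (a sketch; the component name is my own, and it assumes otelcol.exporter.debug is available at the configured stability level):

otelcol.exporter.debug "hm_beyla_debug" {
  // Print full span details to the Alloy log.
  verbosity = "detailed"
}

beyla.ebpf "hm_beyla" {
  // ... same configuration as above ...
  output {
    // Fan out Beyla's traces to both the real pipeline and the debug sink.
    traces = [
      otelcol.processor.batch.hm_beyla.input,
      otelcol.exporter.debug.hm_beyla_debug.input,
    ]
  }
}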

Solution

  • It turns out some permissions are missing. The official Alloy documentation does not mention them. Based on the Beyla example config I found (https://github.com/grafana/beyla/blob/main/examples/k8s/unprivileged.yaml), after updating securityContext in Alloy, I can see traces now.

              securityContext:
                # https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/
                appArmorProfile:
                  type: Unconfined
                # https://github.com/grafana/beyla/blob/main/examples/k8s/unprivileged.yaml
                runAsUser: 0
                capabilities:
                  drop:
                    - ALL
                  add:
                    - BPF
                    - CHECKPOINT_RESTORE
                    - DAC_READ_SEARCH
                    - NET_RAW
                    - PERFMON
                    - SYS_ADMIN
                    - SYS_PTRACE
    

    Full config:

    ---
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: production-hm-alloy
      namespace: production-hm-argo-cd
      labels:
        app.kubernetes.io/name: hm-alloy
    spec:
      project: production-hm
      source:
        repoURL: https://grafana.github.io/helm-charts
        # https://artifacthub.io/packages/helm/grafana/alloy
        targetRevision: 0.11.0
        chart: alloy
        helm:
          releaseName: hm-alloy
          values: |
            # https://github.com/grafana/alloy/blob/main/operations/helm/charts/alloy/values.yaml
            ---
            controller:
              # https://github.com/grafana/beyla/blob/main/examples/k8s/unprivileged.yaml
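              # hostPID shares the host's PID namespace with the Alloy pod so that Beyla can discover and attach to processes running on the node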
              hostPID: true
            alloy:
              stabilityLevel: public-preview
              securityContext:
                # https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/
                appArmorProfile:
                  type: Unconfined
                # https://github.com/grafana/beyla/blob/main/examples/k8s/unprivileged.yaml
                runAsUser: 0
                capabilities:
                  drop:
                    - ALL
                  add:
                    - BPF
                    - CHECKPOINT_RESTORE
                    - DAC_READ_SEARCH
                    - NET_RAW
                    - PERFMON
                    - SYS_ADMIN
                    - SYS_PTRACE
              extraEnv:
                - name: LOKI_URL
                  value: http://hm-loki-gateway.production-hm-loki:80/loki/api/v1/push
                - name: TEMPO_ENDPOINT
                  value: hm-tempo-distributor.production-hm-tempo.svc:4317
                - name: MIMIR_URL
                  value: http://hm-mimir-distributor-headless.production-hm-mimir.svc:8080/api/v1/push
              configMap:
                content: |-
                  // https://grafana.com/docs/alloy/latest/configure/kubernetes/
                  // https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/
                  logging {
                    level = "info"
                    format = "logfmt"
                  }
    
                  // https://grafana.com/docs/alloy/latest/reference/config-blocks/livedebugging/
                  livedebugging {
                    enabled = true
                  }
    
                  // Loki related config
                  // ...
    
                  // hm Beyla
                  // https://grafana.com/docs/grafana-cloud/send-data/alloy/reference/components/beyla/beyla.ebpf/
                  beyla.ebpf "hm_beyla" {
                    debug = true
                    open_port = "80,443,8000-8999"
                    attributes {
                      kubernetes {
                        enable = "true"
                      }
                    }
                    discovery {
                      services {
                        kubernetes {
                          namespace = "."
                        }
                      }
                    }
                    routes {
                      unmatched = "heuristic"
                    }
                    metrics {
                      features = [
                        "application",
                        "application_process",
                        "application_service_graph",
                        "application_span",
                        "network",
                      ]
                      instrumentations = ["*"]
                      network {
                        enable = true
                      }
                    }
                    output {
                      traces = [otelcol.processor.batch.hm_beyla.input]
                    }
                  }
    
                  // hm Beyla - Trace
                  otelcol.processor.batch "hm_beyla" {
                    output {
                      traces = [otelcol.exporter.otlp.hm_tempo_for_hm_beyla.input]
                    }
                  }
                  // https://grafana.com/docs/alloy/latest/reference/components/otelcol/otelcol.auth.headers/
                  otelcol.auth.headers "hm_tempo_for_hm_beyla" {
                    header {
                      key   = "X-Scope-OrgID"
                      value = "hm"
                    }
                  }
                  otelcol.exporter.otlp "hm_tempo_for_hm_beyla" {
                    client {
                      endpoint = env("TEMPO_ENDPOINT")
                      compression = "zstd"
                      auth = otelcol.auth.headers.hm_tempo_for_hm_beyla.handler
                      tls {
                        insecure = true
                        insecure_skip_verify = true
                      }
                    }
                  }
    
                  // hm Beyla - Metrics
                  // https://grafana.com/docs/alloy/latest/reference/components/beyla/beyla.ebpf/
                  prometheus.scrape "hm_beyla" {
                    targets = beyla.ebpf.hm_beyla.targets
                    honor_labels = true
                    forward_to = [prometheus.remote_write.hm_mimir.receiver]
                  }
    
                  prometheus.remote_write "hm_mimir" {
                    endpoint {
                      url = env("MIMIR_URL")
                      headers = {
                        "X-Scope-OrgID" = "hm",
                      }
                    }
                  }
      destination:
        namespace: production-hm-alloy
        server: https://kubernetes.default.svc
      syncPolicy:
        syncOptions:
          - ServerSideApply=true
        automated:
          prune: true
    

    Currently, I have to enable debug mode to make traces work:

                  beyla.ebpf "hm_beyla" {
                    debug = true
    

    I have opened an issue ticket here; if there is any update, I will update this answer.