Tags: terraform, yaml, kubernetes-helm, datadog

Copying local directory via Terraform into Kubernetes Cluster


I am trying to copy some files from my local Terraform directory into my Datadog resources, at a preexisting configuration path.

When I try the below in my datadog-values.yaml, none of my configuration files are copied to the location. I also cannot see any logs, even in debug mode, telling me whether it failed or whether the path was incorrect.

See datadog helm-charts

  # agents.volumes -- Specify additional volumes to mount in the dd-agent container
  volumes: 
    - hostPath:
        path: ./configs
      name: openmetrics_config

  # agents.volumeMounts -- Specify additional volumes to mount in all containers of the agent pod
  volumeMounts: 
    - name: openmetrics_config
      mountPath: /etc/datadog-agent/conf.d/openmetrics.d
      readOnly: true

What I've tried

I can manually copy the configuration files into the directory with a shell script like the one below. But of course, if the Datadog pod names change on a restart, I have to update the script manually.

kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-sdbh5:/etc/datadog-agent/conf.d/openmetrics.d

kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-t4pgg:/etc/datadog-agent/conf.d/openmetrics.d

kubectl -n datadog -c trace-agent cp ./configs/bookie_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/broker_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/proxy_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d
kubectl -n datadog -c trace-agent cp ./configs/zookeeper_conf.yaml datadog-z8knp:/etc/datadog-agent/conf.d/openmetrics.d

kubectl rollout restart deployment datadog-cluster-agent -n datadog

Solution

  • The volumes you use here don't work the way you wish. The ./configs directory is not your local directory: hostPath refers to the filesystem of the Kubernetes node the pod runs on, and Kubernetes has no idea about your local machine.

    But fear not. There are a few ways of doing this, and it all depends on your needs. They are:

    1. Terraformed config
    2. Terraformed mount
    3. Terraformed copy config action

    Terraformed config

    To have a config file terraformed means:

    • to have the config updated in k8s whenever the file changes - we want Terraform to track those changes
    • to have the config uploaded before the service that uses it starts (these are configuration files after all; I assume they configure something)
    • DISCLAIMER - the service won't restart after a config change (that's achievable, but it's another topic)

    To achieve this, create a ConfigMap for every config:

    resource "kubernetes_config_map" "config" {
    
      metadata {
        name = "some_name"
        namespace = "some_namespace"
      }
      data = {
        "config.conf" = file(var.path_to_config)
      }
    }
    
    

    and then use it in your volumeMounts. I assume that you're working with the helm provider, so this should probably be:

    # Note: the helm provider's set {} values must be strings, so complex
    # values need indexed keys (or yamlencode via values, shown below).
    set {
      name  = "agents.volumeMounts[0].name"
      value = kubernetes_config_map.config.metadata[0].name
    }
    set {
      name  = "agents.volumeMounts[0].mountPath"
      value = "/where/to/mount"
    }
    

    In the example above I used a single config and a single volume mount for simplicity (note that the chart still needs a matching agents.volumes entry of type configMap for the mount to attach to). For multiple configs, for_each should be enough, as in the sketch below.
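
    For completeness, here is a minimal sketch of what that could look like. It assumes a local map variable (var.configs) from config name to file path and a helm_release for the Datadog chart; these names are illustrative, not from the original setup. It passes the complex structure through values with yamlencode, which is easier than indexed set keys:

    variable "configs" {
      # config name => path to a local config file (illustrative)
      type = map(string)
    }

    resource "kubernetes_config_map" "config" {
      for_each = var.configs

      metadata {
        name      = each.key
        namespace = "datadog"
      }

      data = {
        "conf.yaml" = file(each.value)
      }
    }

    resource "helm_release" "datadog" {
      name       = "datadog"
      repository = "https://helm.datadoghq.com"
      chart      = "datadog"
      namespace  = "datadog"

      # One volume + mount pair per config, wired to the ConfigMaps above.
      values = [
        yamlencode({
          agents = {
            volumes = [
              for name, _ in var.configs : {
                name      = name
                configMap = { name = kubernetes_config_map.config[name].metadata[0].name }
              }
            ]
            volumeMounts = [
              for name, _ in var.configs : {
                name      = name
                mountPath = "/etc/datadog-agent/conf.d/${name}.d"
                readOnly  = true
              }
            ]
          }
        })
      ]
    }

    Because file() reads the content at plan time, any change to a local config file shows up as a diff on the ConfigMap, and Terraform applies it.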

    Terraformed mount

    Another variant is that you don't want Terraform to track the configurations. In that case, what you want to do is:

    1. Create a single storage location (it can be mounted storage from your kube provider, or a dynamically created volume in Terraform - choose your poison)
    2. Mount this storage as a Kubernetes volume (kubernetes_persistent_volume_v1 in Terraform) - see the sketch after this list
    3. Add set {...} blocks like in the previous section.
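
    As a minimal sketch of step 2 (the names, capacity, and the pre-provisioned GCE disk are all illustrative assumptions, not from the original answer):

    resource "kubernetes_persistent_volume_v1" "configs" {
      metadata {
        name = "datadog-configs"
      }

      spec {
        capacity = {
          storage = "1Gi"
        }
        access_modes = ["ReadOnlyMany"]

        persistent_volume_source {
          gce_persistent_disk {
            pd_name = "datadog-configs" # pre-provisioned disk, assumed to exist
            fs_type = "ext4"
          }
        }
      }
    }

    resource "kubernetes_persistent_volume_claim_v1" "configs" {
      metadata {
        name      = "datadog-configs"
        namespace = "datadog"
      }

      spec {
        access_modes = ["ReadOnlyMany"]
        volume_name  = kubernetes_persistent_volume_v1.configs.metadata[0].name

        resources {
          requests = {
            storage = "1Gi"
          }
        }
      }
    }

    You would then reference the claim from agents.volumes (as a persistentVolumeClaim source) and mount it exactly as before.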

    Terraformed copy config action

    The last one, and my least favorite option, is to call a copy action from Terraform. It's a last resort... Provisioners.

    Even the Terraform docs say it's bad, yet it has one advantage: it's super easy to use. You can simply call your shell command here - it could be scp, rsync, or even (but please don't do it) kubectl cp.

    To not encourage this solution any further, I'll just leave the docs of null_resource here; it works with provisioner "remote-exec" (or "local-exec"), as sketched below.
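
    Purely to show the shape (none of the drawbacks go away), a minimal sketch using null_resource with "local-exec"; the app=datadog label selector is an assumption, and the paths mirror the commands from the question:

    resource "null_resource" "copy_configs" {
      # Re-run whenever any local config file changes.
      triggers = {
        configs_hash = sha1(join("", [for f in fileset("${path.module}/configs", "*.yaml") : filesha1("${path.module}/configs/${f}")]))
      }

      provisioner "local-exec" {
        command = <<-EOT
          for pod in $(kubectl -n datadog get pods -l app=datadog -o name); do
            for f in ./configs/*.yaml; do
              kubectl -n datadog -c trace-agent cp "$f" "$${pod#pod/}:/etc/datadog-agent/conf.d/openmetrics.d"
            done
          done
        EOT
      }
    }

    This at least discovers the pods by label instead of hardcoding names, but it still has to be re-run whenever pods are rescheduled - which is exactly why the ConfigMap approach above is preferable.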