Tags: kubernetes, terraform, terraform-provider-kubernetes

Terraform throws "resource name may not be empty" error


I'm getting the following error message that I don't understand:

Error: resource name may not be empty

  on main.tf line 48, in data "kubernetes_service" "spark_master_service":
  48: data "kubernetes_service" "spark_master_service" {

Related data source:

data "kubernetes_service" "spark_master_service" {
    metadata {
        labels = {
            "app.kubernetes.io/component" = "master"
            "app.kubernetes.io/instance" = "spark"
            "app.kubernetes.io/name" = "spark"
        }
        namespace = var.namespace
    }
}

My data source has a name, so I can't quite figure out what Terraform is telling me.


Solution

  • This confusing error comes from the fact that you can't use your metadata labels as an input to the data source; you can only provide the namespace and name of the service to look it up.
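
    For example, here's a minimal sketch of a working lookup. The service name ("spark-master") is an assumption on my part; substitute whatever name your chart actually gave the master Service:

    data "kubernetes_service" "spark_master_service" {
        metadata {
            # Hypothetical name; substitute the real Service name.
            name      = "spark-master"
            namespace = var.namespace
        }
    }

    You can then reference the attributes the data source exports, such as data.kubernetes_service.spark_master_service.spec.0.cluster_ip, elsewhere in your configuration.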

    If you look at the source for the data source, you can see that it only uses the namespace and name fields, so with no name in your config the lookup is attempted with an empty name:

    func dataSourceKubernetesServiceRead(d *schema.ResourceData, meta interface{}) error {
        // Only the name and namespace from the metadata block are read;
        // the labels are never consulted for the lookup.
        om := meta_v1.ObjectMeta{
            Namespace: d.Get("metadata.0.namespace").(string),
            Name:      d.Get("metadata.0.name").(string),
        }
        d.SetId(buildId(om))

        return resourceKubernetesServiceRead(d, meta)
    }
    

    The docs do show that these are the only two arguments that should be used:

    Arguments

    • name - (Optional) Name of the service, must be unique. Cannot be updated. For more info see Kubernetes reference
    • namespace - (Optional) Namespace defines the space within which name of the service must be unique.

    Unfortunately, the docs mark both the namespace and the name as optional because the provider reuses shared schema parts across much of the Kubernetes provider, so Terraform can't validate that the name field is set even though the underlying implementation requires it.

    I've not looked too deeply into the Kubernetes provider, but to me this seems like a bug: if a resource or data source has a different schema in practice from other resources, then its declared schema should match the implementation. That might be tricky to do with so much of the Kubernetes schema being shared, but without it you lose a lot of the niceties of Terraform, which should be pretty strongly typed and able to give you better error messages when you are doing something wrong.
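
    As an illustration, here's a hedged sketch of what a stricter metadata schema could look like, marking name as required so the failure happens at plan time rather than at read time. This is not the provider's actual code, just the shape such a fix might take, and it assumes the helper/schema package from the terraform-plugin-sdk:

    // Sketch only: assumes github.com/hashicorp/terraform-plugin-sdk/helper/schema.
    func serviceMetadataSchema() *schema.Schema {
        return &schema.Schema{
            Type:     schema.TypeList,
            Required: true,
            MaxItems: 1,
            Elem: &schema.Resource{
                Schema: map[string]*schema.Schema{
                    "name": {
                        Type:     schema.TypeString,
                        Required: true, // reject a missing name at plan time
                    },
                    "namespace": {
                        Type:     schema.TypeString,
                        Optional: true,
                        Default:  "default",
                    },
                },
            },
        }
    }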