
terraform kubernetes provider depending on azurerm_kubernetes_cluster resource


I have a terraform configuration where a provider depends on a resource.

More precisely, an Azure Kubernetes instance is created (using the azurerm provider) and then connected to for further configuration (using the kubernetes provider):

...

// 1. use azure provider to create a kubernetes cluster
resource "azurerm_kubernetes_cluster" "k8s_cluster" { ... }

// 2. configure kubernetes provider to work with the cluster
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.cluster_ca_certificate)
}

In the past, this worked fine when creating or modifying the Kubernetes cluster: the kubernetes provider was apparently configured only after the resource had been created. It failed, however, whenever a change required re-creating the cluster, because the necessary connection details were not available at that point.

However, with the release of azurerm v3.106.0, it stopped working.

What is the proper way to configure the kubernetes provider for a cluster that may be created or recreated in the same Terraform run? Or is there another way to obtain the kube_config, instead of reading azurerm_kubernetes_cluster.k8s_cluster.kube_config?

(GitHub Copilot had the great idea of adding a depends_on to the provider configuration; unfortunately, this is not supported by Terraform. SO 69996346 explains why, and proposes using -target to restrict the Terraform run to the information that is already available; however, I observe the problem even when the cluster exists and the data should be available.)
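For illustration, this is the (invalid) pattern Copilot proposed. Terraform rejects depends_on inside provider blocks because providers must be configurable during planning, before the dependency graph of resources is applied:

```hcl
# NOT valid Terraform -- shown only to illustrate the limitation.
# depends_on is a resource/module meta-argument, not a provider argument,
# so validation fails with an error like "Reserved argument name in provider block".
provider "kubernetes" {
  host       = azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.host
  # ...
  depends_on = [azurerm_kubernetes_cluster.k8s_cluster]
}
```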


Solution

  • Terraform Kubernetes provider depending on azurerm_kubernetes_cluster resource

Recreating or modifying the cluster while the kubernetes provider reads its credentials directly from the resource is not supported in recent versions of the Terraform provider.

Thanks to Matthew Schuchard for the valuable insight of using -target to achieve this.

Initially, when I tried to recreate the cluster, I hit a blocker: the kubernetes provider did not have the necessary credentials to access the cluster.

To overcome this, I added a data source after the cluster resource and gave it a depends_on, so that the configuration is guaranteed to be in sync with the resource in the portal before the kubernetes provider connects to the cluster.

    configuration:

    resource "azurerm_kubernetes_cluster" "k8s_cluster" {
      name                = "vinay-aks"
      location            = azurerm_resource_group.rg.location
      resource_group_name = azurerm_resource_group.rg.name
      dns_prefix          = "dns-vinay-aks"
    
      default_node_pool {
        name       = "default"
        node_count = 1
        vm_size    = "Standard_DS2_v2"
      }
    
      identity {
        type = "SystemAssigned"
      }
    
      network_profile {
        network_plugin    = "azure"
        dns_service_ip    = "10.0.0.10"
        service_cidr      = "10.0.0.0/16"
      }
    
      tags = {
        Environment = "Production"
      }
    }
    
    data "azurerm_kubernetes_cluster" "k8s_cluster" {
      name                = azurerm_kubernetes_cluster.k8s_cluster.name
      resource_group_name = azurerm_kubernetes_cluster.k8s_cluster.resource_group_name
      depends_on          = [azurerm_kubernetes_cluster.k8s_cluster]
    }
    
    provider "kubernetes" {
      host                   = data.azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.host
      client_certificate     = base64decode(data.azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.client_certificate)
      client_key             = base64decode(data.azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.client_key)
      cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.k8s_cluster.kube_config.0.cluster_ca_certificate)
    }
    
    resource "kubernetes_namespace" "example" {
      metadata {
        name = "vinay-namespace"
      }
    }
    

    Deployment:

    [screenshots: terraform apply output and the deployed AKS cluster]

    Now I made a change to the configuration that forces the cluster to be recreated, by changing the DNS prefix:

    dns_prefix = "dns-vinay-aks"

    and ran the targeted Terraform command as below:

    terraform apply -target="azurerm_kubernetes_cluster.k8s_cluster"
    
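The -target run is only the first phase; a normal apply afterwards rolls out the remaining resources against the recreated cluster. A sketch of the full workflow, using the resource names from the configuration above:

```shell
# Phase 1: apply only the AKS cluster, so its fresh kube_config
# is recorded in state before the kubernetes provider reads it.
terraform apply -target="azurerm_kubernetes_cluster.k8s_cluster"

# Phase 2: an untargeted apply then creates/updates the remaining
# resources (e.g. kubernetes_namespace.example) with a provider
# configured from the new cluster credentials.
terraform apply
```

Note that Terraform itself warns after a targeted apply that the plan was incomplete and that a full apply should follow.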

    [screenshots: targeted apply recreating the cluster, followed by a successful full apply]

    Refer:

    https://developer.hashicorp.com/terraform/tutorials/state/resource-targeting