We are running Terraform to create a GKE cluster, and we use the configuration below to write a local kubeconfig file once the cluster has been created.
module "gke_auth" {
source = "terraform-google-modules/kubernetes-engine/google//modules/auth"
depends_on = [module.gke]
project_id = var.project_id
location = module.gke.location
cluster_name = module.gke.name
}
resource "local_file" "kubeconfig" {
content = module.gke_auth.kubeconfig_raw
filename = "kubeconfig"
}
After that, we would like to continue by deploying Istio and other workloads on the cluster; to connect to it, the providers reference the kubeconfig file as shown below.
provider "helm" {
kubernetes {
config_path = "kubeconfig"
}
}
provider "kubernetes" {
config_path = "kubeconfig"
}
But as soon as we run the apply command, the warning below is shown.
Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"],
on main.tf line 42, in provider "kubernetes":
42: provider "kubernetes" {
'config_path' refers to an invalid path: "kubeconfig": stat kubeconfig: no such file or directory
This is because the file does not exist initially; it is only created once the cluster is up. The underlying problem is that Terraform evaluates provider configurations at plan time, before any resources (including the local_file) have been created, and the configuration is not refreshed within the same run. So even though the kubeconfig file does get created during the apply, the run throws the error below and exits.
Error: Post "http://localhost/api/v1/namespaces": dial tcp 127.0.0.1:80: connect: connection refused
Invalid attribute in provider configuration
with provider["registry.terraform.io/hashicorp/kubernetes"],
on main.tf line 42, in provider "kubernetes":
42: provider "kubernetes" {
'config_path' refers to an invalid path: ".kubeconfig": stat .kubeconfig: no such file or directory
How can we make this work?
We resolved this issue with the setup below.
For the first module, we added an output block.
module "gke_auth" {
source = "terraform-google-modules/kubernetes-engine/google//modules/auth"
depends_on = [module.gke]
project_id = var.project_id
location = module.gke.location
cluster_name = module.gke.name
}
resource "local_file" "kubeconfig" {
content = module.gke_auth.kubeconfig_raw
filename = "kubeconfig"
}
output "kubeconfig_file" {
value = "${path.cwd}/kubeconfig"
}
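A small variant (just a sketch, not something the setup above requires) ties the output to the local_file resource itself, so the output value depends on the file actually having been written:

output "kubeconfig_file" {
  # Referencing the resource attribute makes this output depend on the file
  # being created; abspath() resolves the relative filename against the
  # current working directory.
  value = abspath(local_file.kubeconfig.filename)
}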
For the second module, we made the following changes:
data "terraform_remote_state" "kubeconfig_file" {
backend = "local"
config = {
path = "${path.module}/../dirA/terraform.tfstate"
}
}
provider "helm" {
kubernetes {
config_path = "${data.terraform_remote_state.kubeconfig_file.outputs.kubeconfig_file}"
}
}
provider "kubernetes" {
config_path = "${data.terraform_remote_state.kubeconfig_file.outputs.kubeconfig_file}"
}
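With this two-directory layout, apply the first stack (dirA) before the second: by the time the second stack's providers are configured, the kubeconfig_file output and the file itself already exist on disk, so the chicken-and-egg problem never arises within a single run.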
Note: in the same way, we can access outputs from a module or stack that lives in a different directory.
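As an aside, here is a sketch of an alternative that avoids the intermediate file entirely: the auth module also exposes host, token, and cluster_ca_certificate outputs whose documented use is to be passed straight into the providers, so within the stack that creates the cluster (assuming the gke_auth module from above) the providers can be configured from those values in memory:

provider "kubernetes" {
  # Credentials come from module outputs, so there is no file to stat at plan time.
  host                   = module.gke_auth.host
  token                  = module.gke_auth.token
  cluster_ca_certificate = module.gke_auth.cluster_ca_certificate
}

provider "helm" {
  kubernetes {
    host                   = module.gke_auth.host
    token                  = module.gke_auth.token
    cluster_ca_certificate = module.gke_auth.cluster_ca_certificate
  }
}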