I have the following situation: I want to have a Terraform configuration in which I create an EKS cluster and then apply a manifest to it via the kubectl_manifest resource. So in essence, I have the following configuration:
# ... all the IAM stuff needed for the cluster
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.50"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }
}
provider "aws" { }
provider "kubectl" {
config_path = "~/.kube/config"
config_context = aws_eks_cluster.this.arn # tried this also, with no luck
}
resource "aws_eks_cluster" "my-cluster" {
... the cluster configuration
}
... node group yada yada..
resource "null_resource" "update_kubeconfig" {
provisioner "local-exec" {
command = "aws eks update-kubeconfig --name my-cluster --region us-east-1"
}
}
resource "kubectl_manifest" "some_manifest" {
yaml_body = <some_valid_yaml>
depends_on = [null_resource.update_kubeconfig]
}
So my hope is that after null_resource.update_kubeconfig runs and updates ~/.kube/config (which it does; I checked), kubectl_manifest.some_manifest will pick up on it and use the newly updated configuration. But it doesn't.
I don't currently have the error message at hand, but essentially what happens is: Terraform tries to communicate with a previously created cluster (I have old, now-deleted clusters in the kubeconfig), and then it throws a "can't resolve DNS name" error for that old cluster.
So it seems that the kubeconfig file is loaded somewhere at the beginning of the run and isn't refreshed when the kubectl_manifest resource is created.
What is the right way to handle this?!
Since your Terraform code itself creates the cluster, you can reference the cluster's attributes directly in the provider configuration instead of relying on the kubeconfig file:
data "aws_eks_cluster_auth" "main" {
name = aws_eks_cluster.my-cluster.name
}
provider "kubernetes" {
host = aws_eks_cluster.my-cluster.endpoint
token = data.aws_eks_cluster_auth.main.token
cluster_ca_certificate = base64decode(aws_eks_cluster.my-cluster.certificate_authority.0.data)
}