Tags: amazon-web-services, kubernetes, kubernetes-ingress, amazon-eks, amazon-elb

Deploying AWS Load Balancer Controller on EKS with Terraform


I'm trying to deploy the aws-load-balancer-controller on Kubernetes (EKS).

I have the following Terraform code:

resource "kubernetes_deployment" "ingress" {
  metadata {
    name      = "alb-ingress-controller"
    namespace = "kube-system"
    labels = {
      "app.kubernetes.io/name"       = "alb-ingress-controller"
      "app.kubernetes.io/version"    = "v2.2.3"
      "app.kubernetes.io/managed-by" = "terraform"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        "app.kubernetes.io/name" = "alb-ingress-controller"
      }
    }

    strategy {
      type = "Recreate"
    }

    template {
      metadata {
        labels = {
          "app.kubernetes.io/name"    = "alb-ingress-controller"
          "app.kubernetes.io/version" = "v2.2.3"
        }
      }

      spec {
        dns_policy                       = "ClusterFirst"
        restart_policy                   = "Always"
        service_account_name             = kubernetes_service_account.ingress.metadata[0].name
        termination_grace_period_seconds = 60

        container {
          name              = "alb-ingress-controller"
          image             = "docker.io/amazon/aws-alb-ingress-controller:v2.2.3"
          image_pull_policy = "Always"

          args = [
            "--ingress-class=alb",
            "--cluster-name=${local.k8s[var.env].esk_cluster_name}",
            "--aws-vpc-id=${local.k8s[var.env].cluster_vpc}",
            "--aws-region=${local.k8s[var.env].region}"
          ]
          volume_mount {
            mount_path = "/var/run/secrets/kubernetes.io/serviceaccount"
            name       = kubernetes_service_account.ingress.default_secret_name
            read_only  = true
          }
        }
        volume {
          name = kubernetes_service_account.ingress.default_secret_name

          secret {
            secret_name = kubernetes_service_account.ingress.default_secret_name
          }
        }
      }
    }
  }

  depends_on = [kubernetes_cluster_role_binding.ingress]
}

resource "kubernetes_ingress" "app" {
  metadata {
    name      = "owncloud-lb"
    namespace = "fargate-node"
    annotations = {
      "kubernetes.io/ingress.class"           = "alb"
      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
      "alb.ingress.kubernetes.io/target-type" = "ip"
    }
    labels = {
      "app" = "owncloud"
    }
  }

  spec {
    backend {
      service_name = "owncloud-service"
      service_port = 80
    }
    rule {
      http {
        path {
          path = "/"
          backend {
            service_name = "owncloud-service"
            service_port = 80
          }
        }
      }
    }
  }
  depends_on = [kubernetes_service.app]
}

This works up to version 1.9 as required. As soon as I upgrade to version 2.2.3, the pod fails to start and its logs show the following error:

{"level":"error","ts":1629207071.4385357,"logger":"setup","msg":"unable to create controller","controller":"TargetGroupBinding","error":"no matches for kind \"TargetGroupBinding\" in version \"elbv2.k8s.aws/v1beta1\""}

I have read the upgrade documentation and amended the IAM policy as it states, but it also mentions:

updating the TargetGroupBinding CRDs

And that is where I am not sure how to do it using Terraform.

If I try to deploy on a new cluster (i.e. not an upgrade from 1.9), I get the same error.


Solution

  • With your Terraform code, you apply a Deployment and an Ingress resource, but you must also add the CustomResourceDefinition for the TargetGroupBinding custom resource.

    This is described under "Add Controller to Cluster" in the Load Balancer Controller installation documentation, with examples provided for both Helm and Kubernetes YAML (see the Helm-based sketch after this list).

    Terraform has beta support for applying CRDs through the kubernetes_manifest resource, including an example of deploying a CustomResourceDefinition; minimal sketches of both approaches follow.
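
    For example, here is a minimal sketch of the kubernetes_manifest approach. It assumes you have saved the TargetGroupBinding CRD from the controller's v2.2.3 release manifests as a local file (the file and resource names here are illustrative), that the file contains a single YAML document, and that you are using version 2.x of the hashicorp/kubernetes provider with a cluster reachable at plan time:

    # Apply the TargetGroupBinding CRD before the controller Deployment.
    # The CRD YAML itself should be taken from the official release manifests;
    # the local file name here is an assumption.
    resource "kubernetes_manifest" "targetgroupbinding_crd" {
      manifest = yamldecode(file("${path.module}/targetgroupbinding-crd.yaml"))
    }

    You would then add kubernetes_manifest.targetgroupbinding_crd to the Deployment's depends_on so the CRD exists before the controller starts, and you can verify it afterwards with kubectl get crd targetgroupbindings.elbv2.k8s.aws.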
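
    Alternatively, here is a sketch that replaces the hand-rolled Deployment with the official Helm chart, which ships and installs the CRDs for you. It requires the hashicorp/helm provider to be configured against the cluster; the values shown reuse the locals and service account from the question and are assumptions about your setup:

    # Install the controller (and its CRDs) via the official eks-charts chart,
    # instead of the kubernetes_deployment resource above.
    resource "helm_release" "aws_load_balancer_controller" {
      name       = "aws-load-balancer-controller"
      repository = "https://aws.github.io/eks-charts"
      chart      = "aws-load-balancer-controller"
      namespace  = "kube-system"

      set {
        name  = "clusterName"
        value = local.k8s[var.env].esk_cluster_name
      }

      set {
        # Reuse the existing service account rather than letting the chart create one.
        name  = "serviceAccount.create"
        value = "false"
      }

      set {
        name  = "serviceAccount.name"
        value = kubernetes_service_account.ingress.metadata[0].name
      }
    }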