Tags: amazon-web-services, terraform, terraform-provider-aws, amazon-eks

What is the difference between eks_managed_node_group_defaults and eks_managed_node_groups?


I am using the AWS EKS Terraform module to create an Amazon EKS cluster. Here is the example code from the documentation:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "my-cluster"
  cluster_version = "1.29"

  cluster_endpoint_public_access  = true

  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
  }

  vpc_id                   = "vpc-1234556abcdef"
  subnet_ids               = ["subnet-abcde012", "subnet-bcde012a", "subnet-fghi345a"]
  control_plane_subnet_ids = ["subnet-xyzde987", "subnet-slkjf456", "subnet-qeiru789"]

  # EKS Managed Node Group(s)
  eks_managed_node_group_defaults = {
    instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
  }

  eks_managed_node_groups = {
    example = {
      min_size     = 1
      max_size     = 10
      desired_size = 1

      instance_types = ["t3.large"]
      capacity_type  = "SPOT"
    }
  }

  # Cluster access entry
  # To add the current caller identity as an administrator
  enable_cluster_creator_admin_permissions = true

  access_entries = {
    # One access entry with a policy associated
    example = {
      kubernetes_groups = []
      principal_arn     = "arn:aws:iam::123456789012:role/something"

      policy_associations = {
        example = {
          policy_arn = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"
          access_scope = {
            namespaces = ["default"]
            type       = "namespace"
          }
        }
      }
    }
  }

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}

I found that if I deploy the above code, the resulting node group uses t3.large only.

Also, I found that if I remove this section

  eks_managed_node_group_defaults = {
    instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
  }

and run terraform apply again, it prints

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

which means eks_managed_node_group_defaults is not actually being used in this case.

My question is: what is the difference between eks_managed_node_group_defaults and eks_managed_node_groups? And when do we need eks_managed_node_group_defaults? Thanks!


Solution

  • In eks_managed_node_group_defaults you define the properties that are common to all of the managed node groups declared in eks_managed_node_groups, so you don't need to repeat them in each group's definition. https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/node_groups.tf shows that these defaults serve as the first level of fallback for property values; the try() calls there also provide hardcoded defaults as a final fallback.
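    The resolution order inside node_groups.tf roughly follows this pattern (a simplified sketch of the module's try() fallbacks, not a verbatim excerpt):

    ```hcl
    # Fallback order for each node group property:
    # 1. value set on the group itself in eks_managed_node_groups
    # 2. value from eks_managed_node_group_defaults
    # 3. the module's hardcoded default
    instance_types = try(
      each.value.instance_types,                          # per-group value wins
      var.eks_managed_node_group_defaults.instance_types, # shared default
      null                                                # hardcoded fallback
    )
    ```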

    As these are only default input values that get merged into each group's configuration, they don't appear in the Terraform state on their own. In your example the group already sets instance_types = ["t3.large"], which overrides the default list; that is why only t3.large is used, and why removing the defaults block produces no changes.
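    The defaults therefore only take effect for groups that don't set the property themselves. For illustration, a hypothetical second group (named inherits_defaults here, an assumption not in your config) that omits instance_types would pick up the shared list:

    ```hcl
    eks_managed_node_group_defaults = {
      instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
    }

    eks_managed_node_groups = {
      example = {
        # explicitly set, so it overrides the defaults -> t3.large is used
        instance_types = ["t3.large"]
        min_size       = 1
        max_size       = 10
        desired_size   = 1
      }
      inherits_defaults = {
        # hypothetical group with no instance_types of its own:
        # the m6i/m5 list from eks_managed_node_group_defaults applies here
        min_size     = 1
        max_size     = 3
        desired_size = 1
      }
    }
    ```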