Tags: amazon-web-services, terraform, tags, amazon-eks

How do I use Terraform to edit the "Name" tag in EC2 for an EKS managed node?


tl;dr: When using Terraform to spin up nodes in an EKS cluster in AWS, you can't change the "Name" tag that shows up in EC2.

Longer version: When using Terraform in AWS, you can create a node group, but you can't set the "Name" tag on its instances. This means that when you look in EC2, the Name column is blank and you have to open the Tags pane to figure out which instance is which.

The problem is that EKS can't directly change that "Name" tag. There is a tag configuration block, but it doesn't work here. For instance, this looks like it should work:

resource "aws_eks_node_group" "my_node" {
...
tag {
  key   = "Name"
  value = "my_eks_node_group"
  propagate_at_launch = true
}

But it doesn't. I can set the tag through the GUI, but that's less than helpful. Does anyone know of a way to set the "Name" tag through Terraform?


Solution

  • This has been a long-standing issue that has apparently lost some of the momentum it once had. No matter; there are a couple of solutions.

    Option 1

    The best solution we have is the aws_autoscaling_group_tag resource, which adds tags to NEW nodes as they spin up. For example, here is my EKS node group, which lives in a Terraform module, along with the aws_autoscaling_group_tag that sets the "Name" tag for that node group:

    resource "aws_eks_node_group" "nodes_group" {
      cluster_name    = aws_eks_cluster.eks_cluster.name
      node_role_arn   = aws_iam_role.eks_assume_role.arn
      subnet_ids      = var.subnet_ids
      ###########
      # Optional
      ami_type        = "AL2_x86_64"
      disk_size       = 60
      instance_types  = ["m6i.xlarge"]
      node_group_name = "worker"
      version         = var.kubernetes_version
    
      scaling_config {
        desired_size = 2
        max_size     = 4
        min_size     = 1
      }
    
      update_config {
        max_unavailable = 2
      }
    
      # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
      # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
      depends_on = [
        aws_iam_role_policy_attachment.EKS-AmazonEKSWorkerNodePolicy,
        aws_iam_role_policy_attachment.EKS-AmazonEKS_CNI_Policy,
        aws_iam_role_policy_attachment.EKS-AmazonEC2ContainerRegistryReadOnly,
      ]
    }
    
    # EKS can't directly set the "Name" tag, so we use the aws_autoscaling_group_tag resource.
    resource "aws_autoscaling_group_tag" "nodes_group" {
      for_each = toset(
        [for asg in flatten(
          [for resources in aws_eks_node_group.nodes_group.resources : resources.autoscaling_groups]
        ) : asg.name]
      )
    
      autoscaling_group_name = each.value
    
      tag {
        key                 = "Name"
        value               = "eks_node_group"
        propagate_at_launch = true
      }
    }
    

    This sets the Name tag to eks_node_group on each of the node group's autoscaling groups, and propagate_at_launch copies it onto new instances.

    Note that this only works for NEW nodes. If you have existing nodes, you'll have to either cycle them out or add the tag manually. Thanks to andre-lk for posting this answer in a GitHub issue: Github issue thread
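
    If you want to double-check which autoscaling groups the node group created (and therefore which ones were tagged), an output along these lines can help. This is just a convenience sketch reusing the resource names from above:

    output "node_group_asg_names" {
      value = flatten(
        [for resources in aws_eks_node_group.nodes_group.resources : resources.autoscaling_groups[*].name]
      )
    }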

    Option 2

    Use launch templates. You can set the "Name" tag on instances through a launch template, as sketched below. There's a tutorial on that here: Tutorial on launch templates
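
    A minimal sketch of that approach might look like the following. The template name here is made up, and keep in mind that settings like instance_types and disk_size should live in either the node group or the template, not both:

    resource "aws_launch_template" "workers" {
      name_prefix = "eks-worker-"

      # tag_specifications is what sets the EC2 "Name" tag on launched instances.
      tag_specifications {
        resource_type = "instance"

        tags = {
          Name = "eks_node_group"
        }
      }
    }

    resource "aws_eks_node_group" "nodes_group" {
      # ... same arguments as in Option 1 ...

      launch_template {
        id      = aws_launch_template.workers.id
        version = aws_launch_template.workers.latest_version
      }
    }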

    Option 3

    Use a Lambda. You can trigger a Lambda function after each instance comes up and tag your nodes that way; a sketch of the wiring is below.
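
    To sketch just the wiring (the function code and its IAM role are omitted, and the function name tag_eks_nodes is made up), you could have EventBridge invoke the Lambda whenever an instance enters the running state:

    resource "aws_cloudwatch_event_rule" "instance_running" {
      name = "tag-new-eks-nodes"

      event_pattern = jsonencode({
        source        = ["aws.ec2"]
        "detail-type" = ["EC2 Instance State-change Notification"]
        detail        = { state = ["running"] }
      })
    }

    resource "aws_cloudwatch_event_target" "tagger" {
      rule = aws_cloudwatch_event_rule.instance_running.name
      arn  = aws_lambda_function.tag_eks_nodes.arn
    }

    # Allow EventBridge to invoke the function.
    resource "aws_lambda_permission" "allow_events" {
      statement_id  = "AllowEventBridgeInvoke"
      action        = "lambda:InvokeFunction"
      function_name = aws_lambda_function.tag_eks_nodes.function_name
      principal     = "events.amazonaws.com"
      source_arn    = aws_cloudwatch_event_rule.instance_running.arn
    }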

    Option 4

    If you don't have a lot of nodes, you could tag them manually through the GUI. But it isn't the best idea, since manual tags won't survive node replacement.

    Wrapping up

    There may be other options out there, but I think setting the aws_autoscaling_group_tag is the cleanest. It just means that you'll have to cycle out your nodes once for the tag to show up.
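
    If it helps, one way to do that cycling is an instance refresh on the autoscaling group, e.g. aws autoscaling start-instance-refresh --auto-scaling-group-name <asg-name>, which replaces the instances in the group gradually.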

    If anyone else has better ideas, please, please post them below as a comment or another answer.