I use the terraform-aws-eks module to provision an EKS cluster. Two security groups are created after "terraform apply": the "Cluster security group" and the "Additional security groups". I launched an EC2 instance and would like to access the EKS cluster from it. To do that, I need to add the EC2 instance's security group to the "Additional security groups". My code is as follows. Two questions.
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "18.30.1"
cluster_name = var.cluster_name
cluster_version = var.cluster_version
create_kms_key = true
kms_key_description = "KMS Secrets encryption for EKS Cluster"
kms_key_enable_default_policy = true
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
cluster_enabled_log_types = var.cluster_enabled_log_types
manage_aws_auth_configmap = var.manage_aws_auth_configmap
aws_auth_roles = var.aws_auth_roles
aws_auth_users = var.aws_auth_users
aws_auth_accounts = var.aws_auth_accounts
#Required for Karpenter role below
enable_irsa = true
create_cloudwatch_log_group = var.create_cloudwatch_log_group
cloudwatch_log_group_retention_in_days = var.cloudwatch_log_group_retention_in_days
node_security_group_additional_rules = {
ingress_nodes_karpenter_port = {
description = "Cluster API to Node group for Karpenter webhook"
protocol = "tcp"
from_port = 8443
to_port = 8443
type = "ingress"
source_cluster_security_group = true
}
}
# Extend cluster security group rules
cluster_security_group_additional_rules = {
ingress_ec2_tcp = {
description = "Access EKS from EC2 instance."
protocol = "tcp"
from_port = 443
to_port = 443
type = "ingress"
security_groups = [var.ec2_sg_id]
source_cluster_security_group = true
}
}
node_security_group_tags = {
# NOTE - if creating multiple security groups with this module, only tag the
# security group that Karpenter should utilize with the following tag
# (i.e. - at most, only one security group should have this tag in your account)
"karpenter.sh/discovery/${var.cluster_name}" = var.cluster_name
}
# Need two nodes to get Karpenter up and running.
# This ensures core services such as VPC CNI, CoreDNS, etc. are up and running
# so that Karpenter can be deployed and start managing compute capacity as required
eks_managed_node_groups = {
"${var.cluster_name}" = {
capacity_type = "ON_DEMAND"
instance_types = ["m5.large"]
# Not required nor used - avoid tagging two security groups with same tag as well
create_security_group = false
# Ensure enough capacity to run 2 Karpenter pods
min_size = 2
max_size = 3
desired_size = 2
iam_role_additional_policies = [
"arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore", # Required by Karpenter
"arn:${local.partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy",
"arn:${local.partition}:iam::aws:policy/AmazonEKS_CNI_Policy",
"arn:${local.partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly", #for access to ECR images
"arn:${local.partition}:iam::aws:policy/CloudWatchAgentServerPolicy"
]
labels = {
AL2Nodes = "monitor"
}
tags = {
# This will tag the launch template created for use by Karpenter
"karpenter.sh/discovery/${var.cluster_name}" = var.cluster_name
}
}
}
}
I believe you have two errors in your definition of the cluster_security_group_additional_rules field.
I'll repeat your definition here just to make it easier for future readers:
cluster_security_group_additional_rules = {
  ingress_ec2_tcp = {
    description                   = "Access EKS from EC2 instance."
    protocol                      = "tcp"
    from_port                     = 443
    to_port                       = 443
    type                          = "ingress"
    security_groups               = [var.ec2_sg_id]
    source_cluster_security_group = true
  }
}
The first error is that you shouldn't be setting source_cluster_security_group = true. That isn't even a valid option for cluster_security_group_additional_rules, only for node_security_group_additional_rules; you can confirm this in the module's source code. But regardless, you shouldn't set any of these source_* flags to true here, because all they mean is "use the cluster SG (or the node SG) as the source for this rule". In your case the source is an external security group: the EC2 instance's SG.
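For contrast, this is the kind of rule where the flag is meaningful — a node_security_group_additional_rules entry (like your own Karpenter webhook rule) where the cluster SG is the intended source. A minimal sketch:
# Valid use of the flag: a node SG rule whose source is the cluster SG.
node_security_group_additional_rules = {
  ingress_cluster_webhook = {
    description                   = "Cluster API to node group"
    protocol                      = "tcp"
    from_port                     = 8443
    to_port                       = 8443
    type                          = "ingress"
    source_cluster_security_group = true # resolves to the cluster SG, nothing else
  }
}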
The second error is that the security_groups field should actually be source_security_group_id, and it shouldn't be a list. This borrows the naming convention from the aws_security_group_rule resource. You can confirm that in the module source code.
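For reference, the module materializes each entry in that map as an aws_security_group_rule resource, which is where the field name comes from. The rule you want is roughly equivalent to this standalone resource (a sketch; module.eks.cluster_security_group_id is the module's output for the cluster SG):
resource "aws_security_group_rule" "ingress_ec2_tcp" {
  description = "Access EKS from EC2 instance."
  type        = "ingress"
  protocol    = "tcp"
  from_port   = 443
  to_port     = 443

  # The SG the rule is attached to: the cluster security group
  security_group_id = module.eks.cluster_security_group_id

  # The SG allowed as the traffic source: your EC2 instance's SG
  source_security_group_id = var.ec2_sg_id
}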
So I would say the correct configuration for you is:
cluster_security_group_additional_rules = {
  ingress_ec2_tcp = {
    description              = "Access EKS from EC2 instance."
    protocol                 = "tcp"
    from_port                = 443
    to_port                  = 443
    type                     = "ingress"
    source_security_group_id = var.ec2_sg_id
  }
}
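Once applied, the EC2 instance's security group is allowed in on port 443 of the cluster security group, which fronts the EKS API endpoint. Since you have cluster_endpoint_private_access = true, requests from the instance inside the VPC will reach the private endpoint that this security group protects.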